Any advice for a vision kickstart?

OK, so my team is trying to get vision onto our robot, and we're betting on a Raspberry Pi 3. The one big problem is that I have very little programming experience.
If you can share any advice or a kickstart guide, we'd really appreciate it.
Greetings and thanks.


To do what, specifically? If you want to track notes, this thread has some relevant work.

For AprilTags, this thread has a bit of info, though it relies on PhotonVision.

The biggest hurdle you're going to run into, IMHO, is getting info from your coprocessor onto the RIO. The current approach I'm looking at is GitHub - robotpy/pynetworktables: Pure python implementation of the FRC NetworkTables protocol


The thing about vision isn't how difficult it is to get working; it's how difficult it is to integrate into the robot.

Since you have less programming experience, I'd look at running PhotonVision or buying a Gloworm or Limelight. All of these solutions make it fairly easy to use NetworkTables, in tandem with an Ethernet switch, to talk to your robot.

The information you get off NetworkTables depends on the settings you use, but at a minimum you can generally get an angle and a distance to the target. That lets you set your shooter angle/speed and automatically turn the robot to line up with the target. There are plenty of ways to do this, so pick what works best for you (the simplest is to interpolate your values: if you are 9-10 feet away, use one speed and angle; 8-9 feet, a different one; and so on).
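The interpolation idea above can be sketched as a small lookup table keyed by distance. The distances, angles, and RPMs below are made-up placeholders; you'd measure your own values on a practice field.

```python
# Sketch: linear interpolation between hand-tuned shooter setpoints.
# All numbers are hypothetical; tune them empirically for your shooter.
import bisect

# distance (feet) -> (hood angle in degrees, flywheel RPM)
LOOKUP = [
    (8.0,  (30.0, 2800.0)),
    (9.0,  (33.0, 3000.0)),
    (10.0, (36.0, 3250.0)),
    (12.0, (40.0, 3600.0)),
]


def shooter_setpoint(distance_ft):
    """Interpolate angle and RPM between the two nearest table entries,
    clamping to the ends of the table outside its range."""
    dists = [d for d, _ in LOOKUP]
    if distance_ft <= dists[0]:
        return LOOKUP[0][1]
    if distance_ft >= dists[-1]:
        return LOOKUP[-1][1]
    i = bisect.bisect_right(dists, distance_ft)
    (d0, (a0, r0)), (d1, (a1, r1)) = LOOKUP[i - 1], LOOKUP[i]
    t = (distance_ft - d0) / (d1 - d0)
    return (a0 + t * (a1 - a0), r0 + t * (r1 - r0))
```

The even simpler bucketed version ("9-10 feet, use this setpoint") is just this table without the interpolation step; start there and add interpolation once your buckets feel too coarse.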

