2014 Vision Code

Hello there everyone!

Our team is trying to use the camera to track the vision targets so we can calculate distance and determine whether or not the goal is “hot” in autonomous.
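For the distance part, the usual approach is the pinhole camera model: if you know the real width of the target and how wide it appears in pixels, distance falls out of the focal length. Here's a rough Python sketch of the idea (not LabVIEW, and the camera numbers and the hot-goal aspect-ratio threshold below are made-up illustration values, not anything from the example code):

```python
import math

# Assumed values for illustration only: a 640x480 camera with a
# 47-degree horizontal field of view, and the 2014 horizontal ("hot")
# target, which is roughly 23.5 in wide by 4 in tall.
IMAGE_WIDTH_PX = 640
HORIZ_FOV_DEG = 47.0
TARGET_WIDTH_IN = 23.5

def focal_length_px(image_width_px=IMAGE_WIDTH_PX, fov_deg=HORIZ_FOV_DEG):
    """Approximate focal length in pixels from the horizontal FOV."""
    return image_width_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def distance_inches(target_width_px, real_width_in=TARGET_WIDTH_IN):
    """Pinhole-camera estimate: distance = real_width * focal / pixel_width."""
    return real_width_in * focal_length_px() / target_width_px

def looks_hot(bbox_width_px, bbox_height_px, min_ratio=3.0):
    """The hot goal is marked by the wide horizontal target, so a detected
    rectangle much wider than it is tall suggests the goal is hot."""
    return bbox_width_px / float(bbox_height_px) >= min_ratio
```

The nice property of this model is that a target twice as far away appears half as wide, so you can sanity-check your calibration by measuring the target at two known distances.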

Can anyone help us out? This is our first year using vision, and we’re lost.

We’ve tried copy/pasting the example code, but it brings up errors about missing VIs and such. We tried using the vision assistant, but as I’m sure you know, it’s pretty daunting and looks pretty complicated.

I’ve looked at MANY tutorials, but couldn’t find one that helps us in general.

Any help would be greatly appreciated :smiley:

Did you look at “Tutorial 8 - Integrating Vision into Robot Code”?

Wow! Why didn’t we see those? Thanks! We’ll look through that!

Thanks!

This section of the FIRST/WPI documentation also covers some of the theory behind the example, including topics like calibration: http://wpilib.screenstepslive.com/s/3120/m/8731

We looked through that, but like you said, it covers the theory, not the step-by-step guide we want/need since this is our first time working with vision.

Thanks for the link though :slight_smile:

How are you planning on doing vision: on the cRIO, sending it to the driver station, or using an on-board computer?

I would highly suggest doing the processing on either the driver station or a coprocessor. Our team has *already* run into issues with letting the cRIO do the thinking…

Sounds like we’re going to do it on the Driver Station. My next question is (after going through the steps in the tutorial), how do we get that data (the variables) back to the cRIO code?

Looking at it more…I think processing on the cRIO would be easier. We’re only using vision for autonomous; after that we plan to disable it.

Whichever way you decide to go, it is helpful to share data between DS and robot. The tutorial touches on this, and I’ve participated in a few other CD threads that go deeper. If those don’t answer your questions, please ask.

Greg McKaskle

Careful with that. When we tried that in the past, we found that it lagged the cRIO so much that we started getting watchdog errors - the cRIO couldn’t communicate fast enough.

As for sending data from the DS back to the robot, you can usually just use NetworkTables to send the data. NI’s code and RoboRealm both have easy NetworkTables integration built-in. (Not sure about OpenCV, haven’t tried it.)

Thank you all for the help! We’ll be doing some major testing to find the best route to go with and I’ll come back with some questions if needed.

Thanks again!

That all depends on the rate and complexity of the processing. You could probably get away with processing one image per match this year, which shouldn’t cause any problems at all running on the cRIO.

Depending on your auton routine of choice, I suppose. We’re going to try for a more complex autonomous routine this year, so we’ll need as much flexibility in the vision as we can get.

The LV example code for finding a game ball takes two passes. While it would run on the cRIO, the robot and potentially the ball are both moving, so I would think it would be hard to accomplish this without a DS or additional processor. The field targets seem pretty easy to do on the cRIO or with any of the other approaches. By the way, “easy vision” is still vision – it’s all relative.

Greg McKaskle