I am semi-new to LabVIEW (1~2 years…) and my team wants to use a camera and vision tracking to help align the robot automatically to the high goal on this year's tower. The problem is I have absolutely no clue how to program such a task. I do not want a program written for me, but I need some serious assistance, as we are at competition tomorrow… I have been searching and trying for several days, ever since my team came up with this idea, but to no avail.
Thanks,
Brandan
FRC Team 2518
First, I want to stress that this will not get done in one night. Carefully explain to your team that such a task is a large one that requires plenty of writing, testing, re-writing, etc.
Now to the actual meat:
There are 3 main parts to vision processing/tracking.
- Finding the vision target and extracting data from it. This can be done using several different vision programs, such as RoboRealm or GRIP. Our team uses NI Vision Assistant.
- Processing the data from this image and turning it into actual gyro headings/distances. This requires a bit of trigonometry, and I suggest taking to a whiteboard or piece of paper to figure it out. Once you draw it out, it becomes a relatively simple math problem (there's a sketch of the math in the notes below).
- Using these new-found “real-world” values (as opposed to camera values) to adjust the robot to face the right way and be the right distance away. This will require a PID loop (also sketched below). One thing to stress: don't use the camera as the feedback sensor; use the gyro/encoders for this.
Notes:
Step 1 will require a lot of testing to get the program to pick out the target just right. You will then probably need to find at least the COG (centroid) of the target, but probably also the bounding box. Again, it is easier to use a separate program to develop this code.
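Since you're on LabVIEW, treat this as pseudocode, but just to give a feel for what step 1 amounts to, here is a rough OpenCV sketch in Java (roughly the kind of code GRIP generates). The HSV bounds are made-up placeholders you'd have to tune for your camera and lighting:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class TargetFinder {
    // Hypothetical HSV bounds for a green-lit retroreflective target --
    // these always need tuning for your camera and lighting.
    private static final Scalar HSV_LOW  = new Scalar(50, 100, 100);
    private static final Scalar HSV_HIGH = new Scalar(90, 255, 255);

    /** Returns the bounding box of the largest bright-green blob, or null. */
    public static Rect findTarget(Mat frame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

        // Keep only the pixels inside the HSV range (the "threshold" step).
        Mat mask = new Mat();
        Core.inRange(hsv, HSV_LOW, HSV_HIGH, mask);

        // Trace the outline of every blob that survived the threshold.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Assume the biggest blob is the target.
        MatOfPoint biggest = null;
        double biggestArea = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area > biggestArea) {
                biggestArea = area;
                biggest = c;
            }
        }
        if (biggest == null) {
            return null; // no target in this frame
        }

        // COG (centroid) from image moments; bounding box from the contour.
        Moments m = Imgproc.moments(biggest);
        Point cog = new Point(m.m10 / m.m00, m.m01 / m.m00);
        System.out.println("COG: " + cog);
        return Imgproc.boundingRect(biggest);
    }
}
```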
Step 2 is pretty self-explanatory.
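Here's the trigonometry sketch promised above, again in Java as stand-in pseudocode. All the camera constants (FOV, heights, tilt) are made-up example numbers; you have to measure your own. It's the standard pinhole-camera math: pixel offset from center gives an angle, and the known height difference between camera and target gives distance.

```java
public class VisionMath {
    // Camera constants -- example numbers only; measure your own setup.
    static final double IMAGE_WIDTH_PX  = 320.0;
    static final double HORIZONTAL_FOV  = Math.toRadians(60.0);
    static final double CAMERA_HEIGHT_M = 0.5;  // lens height off the carpet
    static final double TARGET_HEIGHT_M = 2.2;  // height of the target's center
    static final double CAMERA_PITCH    = Math.toRadians(25.0); // upward tilt

    // Pinhole model: focal length in pixels, derived from width and FOV.
    static final double FOCAL_PX =
            (IMAGE_WIDTH_PX / 2.0) / Math.tan(HORIZONTAL_FOV / 2.0);

    /** Heading offset (radians) to the target, from its pixel x coordinate. */
    static double yawToTarget(double targetPixelX) {
        return Math.atan((targetPixelX - IMAGE_WIDTH_PX / 2.0) / FOCAL_PX);
    }

    /** Ground distance (meters) to the target, from its pixel y coordinate. */
    static double distanceToTarget(double targetPixelY, double imageHeightPx,
                                   double verticalFovRad) {
        double focalYPx = (imageHeightPx / 2.0) / Math.tan(verticalFovRad / 2.0);
        // Angle above the camera's centerline, then add the camera's tilt.
        double pitch = Math.atan((imageHeightPx / 2.0 - targetPixelY) / focalYPx);
        return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(CAMERA_PITCH + pitch);
    }
}
```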
Step 3 will be quick to set up, especially if you’re using CAN Talons with their built-in PID.
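If you're not using the Talons' onboard PID, a roboRIO-side loop for step 3 looks something like the sketch below (Java again, as pseudocode). The key idea is the one stressed earlier: read the camera once to set a gyro setpoint, then close the loop on the gyro, not the camera. The gyro class and the gain are assumptions; tune kP on the real robot.

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;

public class AlignToTarget {
    private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();
    private double setpointDeg;

    // Proportional gain -- a made-up starting value; tune on the robot.
    private static final double kP = 0.03;

    /** Call ONCE when the driver presses the align button. */
    public void startAlign(double visionYawDeg) {
        // Fold the camera's reading into a gyro setpoint, then stop
        // trusting the camera -- the gyro is the feedback sensor.
        setpointDeg = gyro.getAngle() + visionYawDeg;
    }

    /** Call every loop; returns a turn command for arcade drive. */
    public double turnOutput() {
        double errorDeg = setpointDeg - gyro.getAngle();
        return kP * errorDeg; // plain P control; add I/D if it oscillates or stalls
    }
}
```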
Okay, thanks for the reply. I will inform them. I was thinking the same thing about the tracking; I figure the camera would need to be calibrated for the arena we are in, with its specific lighting. It is something to look into at a later time…
Thanks
The best idea would be to set this as an off-season goal. This will allow you to figure out vision prior to next year's build season by implementing it on this year's robot. This way, you'll be ready to use vision next year.
Why not? It’s already written for you. Look at the vision processing example in LabVIEW, and use the instructions in the tutorial for integrating it into your project.
The idea here is that they should learn to do this themselves, so it is easier to implement later. Either way, it’s not going to be done in one night.
Why learn how to do it themselves now when they can get it working with the example? One of the best things about coding is that if you find a piece of open source software, you can use it.
And to be honest, the vision processing example in LabVIEW is pretty good. All you have to do is integrate it into a custom dashboard (for processing on the driver station computer) or into the robot project (for processing on the roboRIO).
It took me about an hour to get vision tracking working on our robot with the example. With a little guidance, they can do it too.
Here is a custom dashboard that I added the vision processing into. You can mess with the variables on the Basic and Custom tabs to calibrate your camera to see the target. To get values back to your robot for use in aligning, you can pull “visionCenterX”, “visionCenterY”, and “visionRange” from NetworkTables. visionCenterX tells you how far from the center the target is on the X axis, visionCenterY tells you the same about Y, and visionRange tells you how far the camera is from the target. To start the vision tracking code, send a “True” value over NetworkTables with the RefNum of “track”.
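In LabVIEW you'd do these reads/writes with the NetworkTables VIs, but for anyone following along in Java, pulling those same keys looks roughly like this. I'm assuming the dashboard publishes under the "SmartDashboard" table; check the actual path with OutlineViewer if the reads come back empty.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class DashboardVision {
    // "SmartDashboard" is an assumption -- verify the table path in OutlineViewer.
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("SmartDashboard");

    /** Tell the dashboard to start running its vision tracking code. */
    public void enableTracking() {
        table.getEntry("track").setBoolean(true);
    }

    public double centerX() {
        // -1.0 is a fallback default for when the key hasn't been published yet
        return table.getEntry("visionCenterX").getDouble(-1.0);
    }

    public double centerY() {
        return table.getEntry("visionCenterY").getDouble(-1.0);
    }

    public double range() {
        return table.getEntry("visionRange").getDouble(-1.0);
    }
}
```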
I accidentally pressed the clean code button, so it is all over the place. Don't worry about looking at the block diagram.
Hope you can use this for the rest of the competition tomorrow. If not, you can use it for off-season events or even poke around in the code to see how it works. If you have any questions, feel free to pm me.
https://drive.google.com/file/d/0B3ZYrnJZUMjxeXljdlJLcnNzaWs/view?usp=sharing