Vision Tracking Tutorial

Hello fellow programmers! Once again the rookie Team 4085 is pleading for help, since the vision tracking explanations out there are either vague or too confusing :frowning: So I'm asking for help programming, in autonomous, a vision tracking camera that will move the tank drive wheels on our robot and shoot once aimed. We use LabVIEW, and sorry for asking so many questions, gotta start somewhere >.<

Have you read this: https://decibel.ni.com/content/docs/DOC-20173

It goes through the logical steps of finding and tracking the targets using NI's Vision Assistant software (which generates LabVIEW code).

I don't often use LabVIEW, but I am pretty sure there is a sample program that tracks targets.

The directory C:\Program Files\National Instruments\LabVIEW 2011\examples\FRC\Vision\Rectangular Target Processing should bring you to the Rectangular Target Processing VIs and project. During the season we took this and adjusted it so we could find targets.

The processing takes the image and converts it to an X/Y coordinate grid, where (0, 0) is the center of the image. When the program finds targets, it reports the x and y coordinates of the center of each target.
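In other words, something like this tiny sketch (Python for readability; the 320x240 image size and the y flip are just assumptions for illustration, not necessarily what the example VI does):

```python
# Sketch: shift raw pixel coordinates so (0, 0) is the image center.

def to_centered(px, py, width=320, height=240):
    cx = px - width / 2.0        # negative = left of center
    cy = (height / 2.0) - py     # pixel y grows downward, so flip it
    return cx, cy

print(to_centered(160, 120))  # center of a 320x240 image -> (0.0, 0.0)
```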

To find the highest target, we compared the targets' y coordinates and chose the one with the highest y value. Then, to change the orientation of the bot, we took the x coordinate from the target with the highest y coordinate, made it the process variable for our PID loop, and set 0 as the setpoint. We also used a gyro on the robot and compared it with the process variable to see which direction the bot needed to turn (a negative x value would mean turn right, a positive one left) until the output of the PID was at 0.
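Since LabVIEW is graphical and hard to paste here, below is a rough Python sketch of that same pick-the-highest-target-and-steer idea. The target list, the gain, and the P-only controller are all placeholder assumptions, not our actual code:

```python
# Sketch of "pick the highest target, steer its x toward 0".
# Assumes each target is an (x, y) pair in centered image coordinates.

def pick_highest_target(targets):
    """Return the target with the largest y (highest in the image)."""
    return max(targets, key=lambda t: t[1])

class SimplePID:
    """Minimal P-only controller standing in for the real PID loop."""
    def __init__(self, kp, setpoint=0.0):
        self.kp = kp
        self.setpoint = setpoint

    def update(self, process_variable):
        error = self.setpoint - process_variable
        return self.kp * error

targets = [(-0.4, 0.1), (0.2, 0.7), (0.5, -0.3)]  # made-up example data
highest = pick_highest_target(targets)            # -> (0.2, 0.7)
pid = SimplePID(kp=0.8)                           # gain is a placeholder
turn_output = pid.update(highest[0])              # drives x toward the 0 setpoint
print(turn_output)                                # nonzero until centered
```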

Or at least I think that’s how we did it. It’s been a while. I’ll be sure to look over this again in the near future so I might be able to help you out more.

Oh, and all of this didn’t require the use of the NI Vision Assistant, since most of the code you need is already in the Rectangular Target Processing example.

Thanks :), I have a question related to the vision tracking: how do I get the information coming from the processed images into a tank drive value in autonomous? Please and thank you, we have an off-season competition tomorrow :stuck_out_tongue:

Ahah! So that's what they want you to figure out. Look at what data is available to you. Isn't there a nice array of targets found? How about simply indexing the first array element? Now what does each element of the array hold? Well, there's an X and a Y coordinate. When you're centered on the target, x should read 0.

Maybe that’s enough to get you going.
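If it helps, here's a rough Python sketch of turning that x reading into left/right tank drive values; the base speed, gain, and clamping are made-up placeholders to tune, not known-good numbers:

```python
# Sketch: mix a centered x error into left/right tank drive outputs.
# Assumes x is normalized so 0 means the target is dead ahead.

def clamp(value, low=-1.0, high=1.0):
    return max(low, min(high, value))

def tank_drive_from_x(x, base_speed=0.3, turn_gain=0.5):
    turn = turn_gain * x              # steer toward x = 0
    left = clamp(base_speed + turn)
    right = clamp(base_speed - turn)
    return left, right

print(tank_drive_from_x(0.4))  # off-center: (0.5, 0.1), robot turns
print(tank_drive_from_x(0.0))  # centered: (0.3, 0.3), robot drives straight
```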

Here are some links that should help:

Labview - incorporating rectangular target processing.vi

Vision Tracking Help:

Depending on how you processed your image, you might just get the x,y pixel coordinates. If that's the case and you want a number in the range -1 <= x <= 1, use this formula:

drivingValue = (pixelValue - (maxImageDimension / 2)) / (maxImageDimension / 2)
where maxImageDimension is the size of that image axis.

That way, you get a value between -1 and 1 that can easily be plugged into an algorithm or speed controller (ex. on a 640x480 image, the corner pixel (640, 480) turns into (1, -1) once you flip the y axis so up is positive).
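As a quick sketch of that formula in code (the names are just the ones from the formula above):

```python
# Pixel coordinate -> value in [-1, 1], per the formula above.

def to_driving_value(pixel_value, max_image_dimension):
    half = max_image_dimension / 2.0
    return (pixel_value - half) / half

# On a 640x480 image:
print(to_driving_value(640, 640))  # right edge of x axis -> 1.0
print(to_driving_value(0, 640))    # left edge of x axis  -> -1.0
print(to_driving_value(240, 480))  # vertical center      -> 0.0
```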

Hope this helps. Good luck tomorrow! (I wish my team was doing something off-season.)