Aiming with Vision

I am having trouble aiming with my vision system in LabVIEW. I acquire the target, calculate the bounding box, then calculate the offset, and feed the offset to a PID loop. However, as soon as I activate targeting, the turn causes the vision system to lose the target and the robot spins in circles. I have a few ideas I want to try, but does anyone know a better way to do this?


[Attachment: teleop.png]

Your input value ranges from -160 to 160, which means your tuning gains will need to be pretty small to produce an output between -0.5 and 0.5.
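To make the scaling concrete, here is a minimal sketch in plain Python rather than LabVIEW; the 320-pixel image width (so the offset runs -160 to 160) matches the range above but is otherwise an assumption:

```python
# Normalize the pixel offset to [-1, 1] before the PID, so the gains
# can live in a sane range. Assumes a 320-pixel-wide image.
IMAGE_HALF_WIDTH_PX = 160.0

def normalized_offset(target_center_x, image_center_x=160.0):
    """Return the target offset as a fraction of the half-image width."""
    return (target_center_x - image_center_x) / IMAGE_HALF_WIDTH_PX

# With the input in [-1, 1], a P gain around 0.4 already yields motor
# commands in the -0.5..0.5 range, instead of a gain around 0.003.
```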

Also, cameras are slow sensors in the best of cases, and my suspicion is that yours is updating the set point only every 100 to 150 ms. At a 20 ms loop rate, that is five or more TeleOp iterations running at 50% throttle between images. A moving camera blurs and distorts the image as well.

My guess is that your target starts off-center, so the robot jumps to a half-throttle spin and the next image doesn’t contain a target at all. If your code doesn’t see a target, it should zero the offset so the PID settles to zero output; but not knowing the coefficients, the controller could still have history and not immediately output zero.
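Here is a hedged sketch of that "no target" case in plain Python; the class below is illustrative, not the internals of the LabVIEW PID VI. The point is that the controller’s history has to be cleared, not just fed a zero error:

```python
# Illustrative PID with explicit history, so the "no target" branch can
# clear it. Not the LabVIEW PID VI's internals.
class SimplePID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def reset(self):
        """Clear accumulated history (integral and derivative memory)."""
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def aim_output(pid, target_visible, offset, dt):
    if not target_visible:
        pid.reset()      # drop history; don't just feed error = 0
        return 0.0
    return pid.update(offset, dt)
```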

If you are ever tuning a PID that controls the robot orientation, here are some suggestions.

  1. Put the robot on blocks so it can’t hurt anyone.
  2. Wire the error term, set point, and output value to a chart (put all three into a cluster and wire that to the chart terminal) so that you can see what happens over time.
  3. Put the target in the center of the camera and test that the code works as you expect.
  4. Put the target just a bit off center and see what the motor values are. 50% may be too high.
  5. Have someone walk back and forth and see that the robot motors make sense.

By the way, for this, I’d set I and D to zero initially and get P in the ballpark.
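In text form, suggestion 2 plus that P-only start might look like the sketch below (the plant response is a made-up stand-in for a robot on blocks, and all the numbers are placeholders):

```python
# Log set point, error, and output every loop so they can be charted,
# the text-code analogue of bundling all three into a cluster wired to
# a chart. I and D stay at zero while getting P in the ballpark.
import csv

KP = 0.4
setpoint, heading = 0.0, 0.5          # hypothetical normalized offsets

with open("aim_log.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["setpoint", "error", "output"])
    for _ in range(50):
        error = setpoint - heading
        output = KP * error           # P-only while tuning
        heading += 0.2 * output       # stand-in plant response
        log.writerow([setpoint, error, output])
```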

Finally, after doing this, I think you’ll find that you cannot really close the loop using a camera. I’d suggest that you use the offset from the camera to compute an amount to turn the robot, then use a gyro to close the loop. Then measure with the camera again and repeat if necessary.
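A hedged sketch of that camera-then-gyro scheme, again in plain Python: one frame sets the heading target and the gyro closes the loop. The 47-degree horizontal field of view is an assumption (substitute your camera’s), and read_gyro/drive are stand-ins for the robot’s gyro read and turn command:

```python
CAMERA_FOV_DEG = 47.0      # assumed horizontal field of view
IMAGE_WIDTH_PX = 320.0

def offset_to_degrees(offset_px):
    """Convert a pixel offset from image center into a heading change."""
    return offset_px * (CAMERA_FOV_DEG / IMAGE_WIDTH_PX)

def turn_to_target(offset_px, read_gyro, drive, kp=0.02, tol_deg=1.0):
    """Compute the setpoint once from the camera, then servo on the gyro."""
    setpoint = read_gyro() + offset_to_degrees(offset_px)
    error = setpoint - read_gyro()
    while abs(error) > tol_deg:
        drive(kp * error)             # turn command from gyro error only
        error = setpoint - read_gyro()
    drive(0.0)                        # stop, then re-check with the camera

# Tiny simulation showing convergence for a 40 px offset (~5.9 degrees):
angle = [0.0]
turn_to_target(40.0, lambda: angle[0],
               lambda power: angle.__setitem__(0, angle[0] + 5.0 * power))
print(round(angle[0], 2))             # ends within 1 degree of 5.875
```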

Greg McKaskle

The best way we’ve found is to add a gyro to the robot and base the PID on the gyro. The camera gives us the delta to the new angle, but we use it only to set the new heading.

Thank you, this is great advice. I hadn’t thought of graphing the output. We were discussing closing the loop with a gyro; now I think we will definitely do that. I’m not sure if it was included in the snippets, but I was trying to set the offset to zero when we lost the target, and the robot kept spinning anyway, so that must have been a bug.

Also, when I set I to 0, the loop doesn’t output anything. Why is that?

When I is set to zero, the code has a special case for P and PD. When you say it doesn’t output anything, do you mean it doesn’t complete, always returns zero, or something else?

Greg McKaskle

Hello! I’m a programmer for FRC Team 4388 Ridgebotics. We are looking to use vision processing this year to score goals in the tower. Initially, we decided to use the NI Vision Assistant VI in our vision processing to detect the target, but we are having issues with it.

I came across this thread and saw that you used a Vision VI in your vision processing code. I want to see if this may be a better tool for us in successfully using vision, but we can’t seem to find the VI anywhere in the library. Could you please direct us to where it is located? Thanks in advance!

Hi,
The vision VI is a custom VI that I modified from the output from the Vision assistant. I would highly recommend using the vision assistant. My team perfected our algorithm in Roborealm, the took a bunch of sample images and ported the roborealm algorithm into the vision assistant. Then, I built the VI in the vision system and made some changes to the output terminals, and made a custom picture for it.

I’ll do some more testing when I get back to the shop, but AFAIR, it outputs 0.

So dubiousSwain, you used both RoboRealm and the Vision Assistant? What ‘algorithm’ did you perfect in RoboRealm?

We have two programmers on our team, so we decided to split up and research different approaches. I worked on blob filtering in RoboRealm, while my teammate Billy worked on pattern matching in the Vision Assistant, à la 900. I made much quicker progress on the blob filter, and it gave good results, so we ported the pipeline into the Vision Assistant and had it build the VI. We decided that opening a socket to RoboRealm would not be as reliable as running a VI robot-side or using NetworkTables dashboard-side.

tl;dr, we are not using both RoboRealm and the Vision Assistant; we tested both and then decided on the VA.

EDIT: By algorithm I mean Pipeline.
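For anyone following along, a blob-filter pipeline like the one described maps onto a handful of standard steps. Here is a hedged OpenCV sketch in Python; the threshold and filter numbers are placeholders, not the team’s tuned values:

```python
# Blob-filter pipeline: color threshold -> contours -> filter by
# size/shape -> bounding box -> pixel offset from image center.
import cv2
import numpy as np

def find_target_offset(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Placeholder HSV range for retroreflective tape under green LEDs.
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) < 100:            # reject small noise blobs
            continue
        if not (1.0 < w / float(h) < 3.0):      # 2016 U target is wide
            continue
        if best is None or w * h > best[2] * best[3]:
            best = (x, y, w, h)                 # keep the largest candidate
    if best is None:
        return None                             # no target this frame
    x, y, w, h = best
    return (x + w / 2.0) - bgr_image.shape[1] / 2.0
```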

Oh. Okay. We only have one programmer, me, so the quicker I can settle on something the better. I guess I’ll stick with Vision Assistant. Thanks!

Benjamin

The key for the Vision Assistant is to take as many test images as you can, in all kinds of lighting. You can do this by pausing the live stream at 10.42.06.20 and right-clicking “Save as”. Make sure it’s paused, or the browser will keep acquiring images and never open the save dialog.
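If pausing and saving gets tedious, the grabs can also be scripted. The sketch below assumes the camera at that address is an Axis unit serving the standard VAPIX still-image endpoint; adjust the URL for your camera:

```python
# Grab a batch of sample images from the camera for Vision Assistant
# testing. The endpoint is Axis's VAPIX still-image URL (an assumption
# about the camera model).
import time
import urllib.request

CAMERA_URL = "http://10.42.06.20/axis-cgi/jpg/image.cgi"

for i in range(20):
    with urllib.request.urlopen(CAMERA_URL, timeout=2) as resp:
        with open(f"sample_{i:02d}.jpg", "wb") as f:
            f.write(resp.read())
    time.sleep(1.0)   # change lighting/angle between shots for variety
```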

Please PM me if you have any trouble; it can be tricky to figure out.

Okay. Thanks! Do I use all the images to test it, or somehow factor them all in as images to look for in Vision Assistant? Sorry for my inexperience, I’ve never used it before, and I haven’t yet had the chance to play around with it.

There is an option in the File menu to import images. Just shift-select all the images you saved and it will import them all. Then use the green arrows above the pipeline, or the grey arrows to the left, to move through them.

Okay. Thanks. I’ll be trying that soon.

I’m having trouble connecting my vision detection code to the Vision Assistant VI. Do you mind posting a picture of yours?

Check out the 2012 vision white paper and the RoboRealm tutorial. That’s pretty much where I got my algorithm from.

Did you guys know about the example which comes with the FRC code? It’s located in “C:\Program Files (x86)\National Instruments\LabVIEW 2015\Examples\FRC\roboRIO\FRC\Vision\2016 Vision Example”. There are even sample images.

Is that somewhere under the Vision folder in the FRC examples in LabVIEW? I looked through there and other folders in those examples and couldn’t find this specific example.

http://bfy.tw/3pIk