Vision Tracking Help

I’m struggling a little with my vision tracking software this year (first year trying to use vision). I’ve read several of the whitepapers on vision and understand how it works fairly well. My issue is with incorporating the vision into my code. Circled in red in the picture is the Rectangular Target-2013 VI, which is being used in the Periodic Tasks VI. I initialize the global variable “Vision Active” as False in the Begin VI and change it to True to track targets several times in autonomous. However, my code does not really seem to be doing anything. Are there any glaring errors with it? I’m also open to dashboard processing; everyone keeps saying you simply have to copy the code to the dashboard, but I’m not entirely sure what exactly I must copy for it to work or what I need to add. Any help would be greatly appreciated.





What you posted has several While loops visible. None of them ever terminate. The big Case containing the Flat Sequence requires input from two of those loops. Since the loops never produce a value, the Case will never be able to run.

I suspect what you really want is to put the Case inside the loop where you’re reading the analog input named POT, and to move the stuff from the Joystick-reading loop into that same loop as well.

So this should fix my issue then?





I can’t say for certain that it will do what you want, but at least it will do something now. :-)

You should add a short wait (5-10 ms) inside the loop with the PID function, so it doesn’t monopolize the cRIO’s CPU. You can’t control the motor any more often than every 5 ms anyway.
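The code under discussion is LabVIEW, so here is just a Python sketch of the pattern Alan describes (the names `run_control_loop` and `update_motor` are made up for illustration): a periodic loop that does its work, then waits briefly so it doesn’t starve the CPU.

```python
import time

def run_control_loop(update_motor, iterations=20, period_s=0.005):
    # update_motor stands in for the PID + motor-output code
    # inside the LabVIEW While loop.
    for _ in range(iterations):
        update_motor()
        # The short wait (5 ms here) is the key point: without it,
        # the loop spins as fast as it can and pegs the CPU.
        time.sleep(period_s)
```

In LabVIEW the equivalent is simply dropping a Wait (ms) primitive inside the While loop.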

Thanks Alan. Do you think that vision processing on the dashboard would be a better alternative? If so, any tips for getting started? I also think I would like to be able to choose a target out of the high and two middle goals, which means I will have to use Target Info[1] and Target Info[2] as well as Target Info[0]. Any tips for how to select a target?

I have another issue. I ran the code on the old robot, and when I pressed button one, it switched to True momentarily, but the cRIO usage spiked to 100% and the probes all read Not Executed (see picture). It looks as if I may have to go to dashboard processing. Can someone help me with vision processing on the dashboard?





How big are the images and how many frames are you trying to process per second? It really doesn’t take more than one good image.

You can of course move it to the laptop. The example referenced by the white paper introduces things starting on the laptop. As for frame rate, I’ve yet to see a robot shoot autonomously while driving parallel to the target at high speed, so I don’t really think that high frame rate is a necessity.

Greg McKaskle

Greg, the images are 320 x 240 and I really only want to process one image. Shouldn’t this code only be processing one image per trigger pull? It is now getting the images and giving a distance, but once I pull the trigger it keeps giving me new images and never makes it to the second frame of the flat sequence structure. Also, the probe on Button 1 will say “TRUE” momentarily when I pull the trigger before all of the probes that I’m running switch to “Not Executed.” Also worth noting, the cRIO usage on the DS spikes to 100% when attempting to run vision. Anything you can think of so that I only get one image when I pull the trigger?





What’s in that subVI at the top left of the first frame? It’s the only thing I can see that might fail to complete.

Your loop in the second frame has no delays and is going to use up as many CPU cycles as it can, so that would explain the 100% usage – but only if the sequence advanced to that frame.

Alan, the SubVI in the first frame is the Rectangular Target-2013 VI found in the example code. And for whatever reason, it is taking multiple images and never advancing to the second frame.

Take a look at the bottom right corner of the Rectangular Target-2013.vi block diagram. It’s a neverending While loop. It is designed to continuously process images and place the results on its front panel for display purposes.

It’s also designed to run on the PC, not on the cRIO. You probably want the Vision Processing.vi instead. It too is a neverending While loop, but it only reads images and processes them when the Enable Vision global variable is True. I don’t think it would be too difficult to make it disable itself after a single iteration – just set the variable False inside the case that runs when it’s True (preferably after the Target Info global has been set). Call Vision Processing from Robot Main where the default project has it, and communicate with it using the globals.

Thanks for the help Alan. I’ll give this a try this afternoon and see if it works for me.

Alan, I switched to doing the vision processing on the dashboard. In order to get just one image, I set it up so that when the loop iteration is greater than 1, it sends True to the loop condition (stop if true); when the iteration is less than 1, it sends False. Unless I am mistaken, this should analyze only one image every time the loop runs, correct? I am also using SmartDashboard variables to send the distance from Target Info[0] to the cRIO when I press button 1 on the joystick. I think I need an SD variable to send this to the dashboard as well, but it didn’t appear to be working. For whatever reason, the cRIO CPU usage spikes when I press that button.

You’re really trying hard to do things in a way that the provided code isn’t set up to work. That loop is intended to start when the program begins and continue to run until the program is shut down. If you stop it, what’s going to start it up again? I urge you to modify the example code as little as possible.

Since you’ve abandoned the easily-controlled cRIO code, the small modification I would suggest is to surround the actual vision processing functions with a Case block that 1) runs when a global boolean is set True, and 2) sets that boolean False when it runs.

Well, I was able to get it running on the dashboard, but I really only need the distance occasionally. Would my PID code still work if the distance is constantly changing? How would you go about implementing the code if it were for your robot?