I have developed a successful vision script in Vision Assistant and am now trying to get the cRIO to do the image processing. I generated the VI from Vision Assistant and pasted it into the Vision Processing VI in the project file.
I have pasted the IMAQ blocks in one by one, wired the Img Out to an Image viewer, ran Robot Main.vi, and observed the resulting image of the block on the front panel. It worked this way until I pasted in the Area Particle Filter block.
When I run Robot Main.vi with this block in place, as soon as I click 'Enable Vision', the DS loses connection with the cRIO and the Robot Code and Communication lights on the dashboard turn red (the program on the cRIO crashes, I'm guessing).
Looking at my attached Vision VI screenshot, can you spot any glaring errors? Your comments are greatly appreciated!!
never ever do camera tracking on the cRIO, it is way too resource intensive. Instead, do it on the driver station.
never ever …
All CPUs are limited. If you get a bigger camera or a faster camera, you will overwhelm even a Xeon monster with the data it can produce. The cRIO is perfectly capable of processing images for this year's challenge, provided you use it appropriately.
A cRIO shouldn't crash doing image processing, and I'll hook mine up to see if I can reproduce it. You may want to verify that you have this year's cRIO image installed, and if you don't mind a few more experiments, can you change the two Boolean trues to false to see if one of those is responsible?
We experienced tons of lag with cRIO vision processing. I've successfully implemented the vision tracking code on the dashboard, and we get much cleaner readings and smoother tracking.
My point is not that cRIO is right and dashboard PC is wrong or vice-versa. The two are actually quite similar in how they work.
The key is to identify the strong and weak aspects of an approach. If you combine this with the ability to list alternate approaches, you have powerful tools that will allow you to innovate and discover new and novel designs that are effective at solving a problem.
So, to play this game a bit: what would "tons of lag" mean? Do you have an idea what is causing the lag?
I was wondering, how would you camera track on the DS? Any info will help, because our team is currently getting a lot of frustration from the CPU being overloaded.
The CPU usage is related to the pixels processed. If the image dimensions are halved, the pixel count drops by 4x. If the frame rate drops, the pixel count drops linearly. So, if you want to process it on the cRIO, process fewer pixels in a given period of time.
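To make that arithmetic concrete, here is a small sketch (the resolutions and frame rates below are example numbers, not from the original post):

```python
def pixels_per_second(width, height, fps):
    """Pixels the processor must handle each second."""
    return width * height * fps

# Full-size image at full frame rate.
full = pixels_per_second(640, 480, 30)   # 9,216,000 pixels/s

# Halving both image dimensions cuts the load by 4x.
half_size = pixels_per_second(320, 240, 30)

# Halving the frame rate cuts the load linearly (by 2x).
half_rate = pixels_per_second(640, 480, 15)

print(full // half_size)  # 4
print(full // half_rate)  # 2
```

Either knob (or both) reduces the per-second pixel budget the cRIO has to process.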
If you want to process it on the DS, the initial LV example project has VIs for both the cRIO and the host computer. The example has a loop for getting the camera image, processing it, and displaying it. Locate the similar code in the dashboard that just gets the image and displays it, and merge in the processing code, or the portion of it that you need.
To communicate results back to the robot, I’d suggest using SmartDashboard. Write the values in the dashboard vision loop and read them on the robot where you need them.
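In text form, the pattern is simply: the dashboard vision loop writes named values after each frame, and the robot code reads them back. The actual implementation is LabVIEW SmartDashboard VIs; the dict below is only a stand-in for the key-value table, and the key names are made up for illustration:

```python
# Toy stand-in for the SmartDashboard key-value table.
# On a real robot this is the SmartDashboard/NetworkTables layer;
# the dict only illustrates write-on-dashboard / read-on-robot.
table = {}

def dashboard_vision_loop(target_x, target_found):
    # Dashboard side: publish results after processing a frame.
    table["target_x"] = target_x          # hypothetical key names
    table["target_found"] = target_found

def robot_loop():
    # Robot side: read the latest published values, with defaults
    # in case the dashboard hasn't written anything yet.
    x = table.get("target_x", 0.0)
    found = table.get("target_found", False)
    return x, found

dashboard_vision_loop(12.5, True)
print(robot_loop())  # (12.5, True)
```

The important design point is that both sides agree on the key names, and the robot side tolerates missing values until the dashboard loop starts publishing.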