
View Full Version : Decoupling the image acquisition from the Image Processing


Tom Line
08-03-2010, 12:01
Is there any reason not to decouple the image acquisition from the image processing?

I'm really not interested in seeing the tracking information on my dashboard - I'm much more interested in getting the dashboard images to our drivers as quickly as possible.

So my thought was that I could save the image to a global variable when it comes from the camera and feed that global to the dashboard.

On the side, have a second global (call it the image processing global). Save a camera image into that image-processing global, process the image information, then allow a new image to be saved from the camera into the image-processing global to do the processing on the new image.

I believe that will get me maximum dashboard framerate while still allowing the image processing to run as fast as it can. Are there any issues with that?
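The "two globals" idea above amounts to a latest-frame buffer: the camera always overwrites with the newest image, and each consumer (dashboard sender, image processor) copies whatever is current whenever it is ready, so neither blocks the other. A minimal desktop C++ sketch of that pattern (the `Frame` type and class names are illustrative, not the actual FRC image types):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical stand-in for a camera frame; the real FRC image type differs.
using Frame = std::vector<uint8_t>;

// Shared "global" between the camera loop, the dashboard sender, and the
// image-processing loop. The camera always overwrites with the newest frame;
// readers copy whatever is current, so a slow processor never stalls the
// dashboard and vice versa.
class LatestFrame {
public:
    void Publish(const Frame& f) {            // called by the camera loop
        std::lock_guard<std::mutex> lk(m_);
        frame_ = f;
        ++seq_;
    }
    // Returns a copy of the newest frame plus its sequence number, so the
    // processing loop can tell whether it has already seen this frame.
    uint64_t Snapshot(Frame* out) const {
        std::lock_guard<std::mutex> lk(m_);
        *out = frame_;
        return seq_;
    }
private:
    mutable std::mutex m_;
    Frame frame_;
    uint64_t seq_ = 0;
};
```

Because each reader takes its own copy, frames that arrive while the processor is busy are simply skipped rather than queued, which is exactly the decoupling described above.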

slavik262
08-03-2010, 12:22
Are you suggesting creating your own PCVideoServer class? Otherwise, PCVideoServer does this already. It automatically sends the images to the Classmate (or whatever connects to it) without doing any image analysis.

Tom Line
08-03-2010, 13:27
If you dig into the LabVIEW code, you'll see things work in this order (and shame on me for not posting this in the LabVIEW forum).

Get Image.vi gets the image. That feeds the image into Find Circular Target.vi directly. That .vi feeds the processed target information AND the image into the target info global, which is later fed into the dashboard high priority data.

That means, since Get Image.vi is wired directly to Find Circular Target.vi, which is wired directly to the global, the images for the dashboard are not updated to the newest camera image - the dashboard only gets the last image processed by Find Circular Target.vi - which caps the dashboard at roughly 5-15 fps.

This is necessary ONLY if you want the tracking data shown on the dashboard to conform to the picture shown on the dashboard. If you have no reason to show the tracking data on the dashboard, I'm proposing that:

Feed the newest image into a global. Feed that global directly into the dashboard high priority data. Feed that global separately into Find Circular Target.vi.

This should decouple the image processing from the image shown on the dashboard. That will allow the fastest possible updates of dashboard images, and you'll still have the processed data to use for your robot tracking. The processed data will simply not match what you SEE on the dashboard, which doesn't really matter.

Basically I'm proposing sending the camera image straight through to the dashboard, and processing images on the side. Am I not looking at the Labview code correctly, or is there something underlying it that I don't understand?

slavik262
08-03-2010, 13:52
Sorry - I thought you were working in C++.

Greg McKaskle
08-03-2010, 17:13
The vision library is not an easy one to look at, but I'll give some more insight into what is going on.

The Open starts up two loops that run independently. One talks directly to the camera's HTTP server and sends out notifications to anyone interested in the images. Get.vi is one of the things that waits for notification of a new image, and then it follows the path you mention. The second loop, which implements the PCVideo server for LV, waits for the same notification and sends the images directly to the PC via TCP.

You probably don't want to copy the image strings around in globals, and simply sending the image back in the high priority user data won't work since you'll be limited to about 1K of data in a given packet, and the image is typically larger than that.

If you want to do video to the dashboard without processing on the PC, you can just turn off the Video button on the Robot Main and make the current value the default, or set the global in Begin or elsewhere in the code.

Greg McKaskle

Ziaholic
08-03-2010, 21:26
Thanks Greg. I really enjoyed reading that. It answered an unasked question I've had for the past month or so ... regarding the way the data from the camera was moved around.

To the OP ... if you want to get rid of the circles and lines on the dashboard, then poke around in the Dashboard.VI and remove some of the stuff in there.

Tom Line
08-03-2010, 22:28

Thanks - we've already done that. What I don't get are some of the teams saying they're getting 30 frames per second on their driver station. We're getting 5 with processing turned on (320x240), 10 with processing at 160x120, and 15-18 with processing off at 160x120.

I don't know how they're managing that extra frame rate and it will be (in my eyes) a huge advantage to be able to aim with the camera in real-time for long shots.

Ziaholic
09-03-2010, 09:26
I've read the 30 fps threads and to be honest, I'm a skeptic. I'll have to see it to believe it. In my opinion, 4-5 fps is perfectly adequate to be able to manually (or autonomously) line up a kick-shot.

I'm of the school of thought that more/faster is not necessarily always equal to better. ... and sometimes it'll come back and bite ya'.

It's one thing to "tweak" the FPS to the max when you're alone on a tether or a private wireless LAN, but it can be an entirely different animal when you're hooked up to an FMS with 5 other 'bots.

I admire their drive to push things to the limit, but IMO, it's too risky to use it during a competition when 4-5 fps can still get the job done.

Wei
09-03-2010, 17:52
30 FPS at the highest image size is possible. This is what we had to do to get those results. However, we did not run at 30 FPS at competition because it wasn't needed.

Changes to the C++ platform:
1 - Alter the Template Robot code
1.1 - (BIGGIE) Remove continuously-running processing functions
1.2 - Run all code in periodic functions (TeleOp and Autonomous modes)
1.3 - (BIGGIE) Use semaphores and a Timer Task to control periodic function execution, instead of continuously reading the time/clock to determine when the periodic functions should execute.

2 - Optimize the image processing functions to avoid processing false positives.

3 - Configure the camera and image processing functions/tasks to a lower priority
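Change 1.3 above - blocking until the next period instead of spinning on the clock - is the big CPU win. A minimal desktop C++ sketch of the idea (on the cRIO this would be a semaphore released by a Timer Task; `std::this_thread::sleep_until` is an illustrative analogue, not the actual VxWorks API, and `RunPeriodic` is a hypothetical helper):

```cpp
#include <chrono>
#include <thread>

// Runs periodicFn at a fixed rate. Rather than reading the clock in a tight
// loop to decide when the next iteration is due, the thread sleeps until the
// deadline, leaving the CPU free for image processing in between.
template <typename Fn>
void RunPeriodic(std::chrono::milliseconds period, int iterations, Fn periodicFn) {
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        next += period;                        // fixed-rate schedule, no drift
        std::this_thread::sleep_until(next);   // blocks; no busy-wait polling
        periodicFn();                          // e.g. your TeleopPeriodic body
    }
}
```

A busy-wait version of the same loop would burn an entire core checking the clock, which is exactly the processing power the list above says was reclaimed for vision.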

What the changes accomplish:
1.1 and 1.3 increase the processing power available for image processing (about a 50% gain).
1.1 and 1.3, combined with 3, decrease task-switching cost (about a 1-5% processing gain; a few more percentage points are possible if you are willing to play with the system clock).
2 - Depending on how much optimizing is done, image processing cost can be decreased by at least 50%.