Re: Decoupling the image acquisition from the Image Processing
If you dig into the LabVIEW code, you'll see things work in this order (and shame on me for not posting this in LabVIEW):
Get Image.vi gets the image. That feeds the image into Find Circular Target.vi directly. That .vi feeds the processed target information AND the image into the target info global, which is later fed into the dashboard high priority data.
Since Get Image.vi is wired directly into Find Circular Target.vi, which is wired directly into the global, the dashboard is never given the newest camera image - it only gets the last image that Find Circular Target.vi finished processing. That caps the dashboard at roughly 5-15 fps.
This is necessary ONLY if you want the tracking data shown on the dashboard to match the picture shown on the dashboard. If you have no reason to show the tracking data on the dashboard, I'm proposing this instead:
Feed the newest image into a global. Feed that global directly into the dashboard high priority data. Feed that global separately into Find Circular Target.vi.
This should decouple the image processing from the image shown on the dashboard. That allows the fastest possible updates of dashboard images, and you'll still have the processed data to use for your robot tracking. The processed data simply won't match what you SEE on the dashboard, which doesn't really matter.
Basically I'm proposing sending the camera image straight through to the dashboard, and processing images on the side. Am I not reading the LabVIEW code correctly, or is there something underlying it that I don't understand?
Last edited by Tom Line : 08-03-2010 at 13:31.