Quote:
Originally Posted by Greg McKaskle
The WPI libraries in LV include vision code. The code uses parallel loops -- which result in parallel threads -- in case the user has set the camera faster than they can process. This is effective for keeping the lag to a minimum.
You may also want to lower the camera frame rate, though. First, there is some overhead involved in processing the frames, though hopefully you aren't decompressing the images or doing any of the other expensive things. Second, the two-thread approach will result in much less certainty about when the image was acquired. If you are trying to estimate time, it is likely better to run the camera at a slower but known rate that doesn't buffer images.
The other approach, requesting an image only when you want one, was used in older versions of WPILib. It worked/works far better on the 206 model than on the newer Axis cameras. There is indeed quite a bit of setup overhead, and the MJPEG approach is almost required to get above 10 fps.
Greg McKaskle
We are using a BeagleBone Black, which means we have to run OpenCV. If it could run LabVIEW we would use it. For some reason, limiting the FPS was not working.
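For anyone trying the same thing, this is roughly what limiting the FPS through OpenCV looks like. This is only a sketch (the camera index and target rate are placeholders), and many camera drivers silently ignore the request, which may be why it wasn't working for us:

Code:
import cv2

# Open the first camera (index 0 is a placeholder; adjust for your device).
cap = cv2.VideoCapture(0)

# Request 15 fps from the driver. Many drivers ignore this request, so read
# the property back to see what rate the camera actually agreed to.
cap.set(cv2.CAP_PROP_FPS, 15)
print("Actual FPS:", cap.get(cv2.CAP_PROP_FPS))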
I ran the threading overnight last night, and it went 12 hours without missing any frames or starting to lag. The way I had it set up was to grab the image in the fast-looping thread and store it; then, in the slow thread, I would take that stored frame as fast as the tracking could loop, which was about every 60 ms. Since 30 fps means a new frame arrives every 33 ms, the image is at most 2 frames old, which is perfectly fine.
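A minimal sketch of that pattern in Python with OpenCV, for anyone who wants to try it (the names are mine, and the time.sleep stands in for the actual tracking work, assumed to take roughly 60 ms per iteration):

Code:
import threading
import time
import cv2

class FrameGrabber:
    """Fast loop: continuously grab frames so only the newest one is kept."""

    def __init__(self, source=0):
        self.cap = cv2.VideoCapture(source)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._grab_loop, daemon=True).start()

    def _grab_loop(self):
        # Read as fast as the camera delivers; overwrite the stored frame
        # each time so the slow thread always sees the latest image.
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        # Hand back a copy so the grab loop can keep overwriting safely.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

grabber = FrameGrabber(0)
while True:
    frame = grabber.latest()
    if frame is None:
        time.sleep(0.01)  # camera not ready yet; avoid busy-spinning
        continue
    # Slow loop: the tracking code goes here; the sleep stands in for
    # approximately 60 ms of processing per frame.
    time.sleep(0.06)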
The reason we are on a BeagleBone instead of the dashboard is that I have heard stories of FTAs requiring dashboards to be shut off, and we don't want to be stuck if that happens. So we want the vision processing to run onboard.
__________________
All statements made are my own and do not reflect the views of any of my affiliated teams.
Teams 1510 and 2898 - Student 2010-2012
Team 4488 - Mentor 2013-2016
Co-developer of RobotDotNet, a .NET port of WPILib.