Quote:
Originally Posted by Goldenchest
My team is thinking of using a Pandaboard to assist in vision tracking for our robot. However, I'm not quite sure how to configure OpenCV to work on the robot.
Are the OpenCV directories supposed to be included in the WindRiver Project build properties (and if so, how?), or should the code remain exclusively on the PandaBoard? In other words, should the OpenCV code be part of the main robot code, or should it be completely separate?
This past 2013 season, we (Team 456) used a Pandaboard running a stripped-down version of Ubuntu with OpenCV as our vision coprocessor (AKA the Panda Tracking System), processing video from a USB camera. The code processed the images, tracked the targets, and provided angular target coordinates (offset of the target center from the image center, in degrees) to the shooter code on the cRIO. The primary reasons we did this were to reduce network load (coordinates instead of images) and to reduce the computational load on the cRIO.
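The angle calculation itself is just a linear mapping from pixel offset to degrees based on the camera's horizontal field of view. A minimal sketch (the FOV and image width below are illustrative values, not our actual calibration):

[code]
#include <opencv2/core/core.hpp>

// Illustrative values -- substitute your own camera's specs.
static const double HORIZ_FOV_DEG = 47.0;  // assumed horizontal field of view
static const int    IMAGE_WIDTH   = 320;   // assumed capture width in pixels

// Degrees from image center to target center (positive = target right of center).
double pixelsToDegrees(double targetCenterX)
{
    double offsetPx = targetCenterX - IMAGE_WIDTH / 2.0;
    return offsetPx * (HORIZ_FOV_DEG / IMAGE_WIDTH);
}
[/code]

A more exact mapping would use atan() against the focal length, but for a narrow field of view the linear approximation is within the noise of the tracking itself.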
Initially we tried a Raspberry Pi, but its processor speed and USB video drivers were not beefy enough to reach our 15 fps processing goal. With the Pandaboard and some code optimization (the RGB-to-HSV conversion), we finally got around 20 fps. The ultimate frame rate is also limited by the integration time of the camera sensor, which depends on illumination conditions.
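The per-frame hot path was the standard OpenCV color-segmentation pattern; roughly like this (the HSV bounds here are placeholders, not our tuned numbers):

[code]
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Isolate the retroreflective target in one captured frame.
cv::Mat segmentTarget(const cv::Mat &frameBGR)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, CV_BGR2HSV);   // the conversion we spent time optimizing
    cv::inRange(hsv,
                cv::Scalar(50, 100, 100),      // lower H,S,V bound (placeholder)
                cv::Scalar(90, 255, 255),      // upper H,S,V bound (placeholder)
                mask);
    return mask;                               // binary image: target pixels = 255
}
[/code]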
For vision input, we chose the Logitech C110 (Walmart, ~$15.00). This older USB webcam had better OS support (older hardware is sometimes better supported in open source), and the low cost made spares attractive. Having the webcam connected directly to the Pandaboard also let us skip pushing frames across the network.
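With the camera local to the Pandaboard, frame capture goes through OpenCV's V4L2 backend; a minimal sketch (the device index and resolution are assumptions):

[code]
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // /dev/video0; adjust for your setup
    if (!cap.isOpened()) return 1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH,  320);  // low resolution helps keep frame rate up
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... segment the target and compute its angular coordinates here ...
    }
    return 0;
}
[/code]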
Other libraries we used:
1) mongoose: a lightweight, multithreaded, embeddable web server used to handle HTTP GET requests from the cRIO. In theory, one CPU core on the Pandaboard handled the HTTP traffic while the other did the vision processing (a minimal handler sketch appears after this list). Found at:
http://code.google.com/p/mongoose/
2) iniParser: an INI configuration-file parser written in C (see the usage sketch after this list). Found at:
http://ndevilla.free.fr/iniparser/
3) YAVTA (Yet Another V4L2 Test Application): a stand-alone V4L2 utility that allowed us to control the exposure (integration) time of the webcam. It is very important to disable the camera's auto-exposure to get consistent target-tracking results (the V4L2 sketch at the end of this list shows the equivalent calls).
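For mongoose, the old code.google.com releases used an event-callback API; the serving side looked roughly like the sketch below. The API has changed across versions, and target_deg here is a hypothetical stand-in for whatever value your vision thread publishes:

[code]
#include "mongoose.h"

// Called from mongoose's worker threads; MG_NEW_REQUEST fires once per HTTP request.
static void *callback(enum mg_event event, struct mg_connection *conn)
{
    if (event == MG_NEW_REQUEST) {
        double target_deg = 0.0;   // hypothetical: read the latest vision result here
        mg_printf(conn,
                  "HTTP/1.1 200 OK\r\n"
                  "Content-Type: text/plain\r\n\r\n"
                  "%.2f", target_deg);
        return (void *)"";         // non-NULL tells mongoose the request was handled
    }
    return NULL;                   // let mongoose handle all other events
}

int main()
{
    const char *options[] = {"listening_ports", "8080", NULL};
    struct mg_context *ctx = mg_start(&callback, NULL, options);
    // ... run the vision loop on this thread while mongoose serves requests ...
    mg_stop(ctx);
    return 0;
}
[/code]

The cRIO side then just issues an HTTP GET against that port and parses the number out of the response body.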
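iniParser usage is only a few calls; a sketch (the file name and key names are made-up examples):

[code]
#include "iniparser.h"

int main()
{
    dictionary *ini = iniparser_load("panda.ini");   // hypothetical config file
    if (ini == NULL) return 1;

    // Keys are addressed as "section:key"; the last argument is the default.
    int    port   = iniparser_getint(ini,    "net:port",       8080);
    double hueMin = iniparser_getdouble(ini, "vision:hue_min", 50.0);

    // ... hand the values to the capture/tracking code ...

    iniparser_freedict(ini);
    return 0;
}
[/code]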
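On the exposure point: yavta is a command-line front end over the standard V4L2 controls, and the same thing can be done directly from code. A sketch of locking the exposure (the control IDs are standard V4L2, but whether a given camera honors them, and what units it uses, depends on its driver):

[code]
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Lock the webcam to a fixed manual exposure so color thresholds stay
// stable under arena lighting, instead of drifting with auto-exposure.
int lockExposure(const char *dev, int exposure)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) return -1;

    struct v4l2_control ctrl;

    ctrl.id    = V4L2_CID_EXPOSURE_AUTO;       // first, turn auto-exposure off
    ctrl.value = V4L2_EXPOSURE_MANUAL;
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);

    ctrl.id    = V4L2_CID_EXPOSURE_ABSOLUTE;   // then set a fixed integration time
    ctrl.value = exposure;                     // units are driver-defined
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);

    return fd;                                 // caller keeps or closes the handle
}
[/code]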
We are working on a whitepaper describing our 2013 system (combined vision tracking and two-axis shooter) and will post it here when complete.
In the attached photo, the webcam is in the center of the illuminated LED rings, with the AXIS IP camera located next to it. The AXIS IP camera was used for driver vision and as backup targeting in case of a Panda Tracking System failure.