My team is thinking of using a PandaBoard to assist in vision tracking for our robot. However, I’m not quite sure how to configure OpenCV to work on the robot.
Are the OpenCV directories supposed to be included in the WindRiver Project build properties (and if so, how?), or should the code remain exclusively on the PandaBoard? In other words, should the OpenCV code be part of the main robot code, or should it be completely separate?
OpenCV needs to run on the PandaBoard.
The result of the vision processing, most likely the target information, is then forwarded to the cRIO.
If you need to see how it has been done by others, the search function will yield lots of results; try searching for “Raspberry”. There are also a couple of white papers that might help.
Spectrum 3847 has a white paper that helped me tremendously. They wrote it for a Raspberry Pi, but it should still give you the help you need.
I was able to use it to get a fully functional tracking system using a PCDuino.
If you would like to see the code I came up with, just PM me and I’ll get it to you.
It is written in Python, but that should not be an issue.
OpenCV is pretty much the same install on all ARM devices running Linux. If you can do it on a Raspberry Pi, you can usually do it on a BeagleBoard, PandaBoard, ODROID, Gumstix, or other embedded system running Linux. That’s about all I know.
1706 used the ODROID X2 board. The environment was Qt, and we sent a UDP message to the cRIO. I only sent two variables: the distance, and the rotation needed to line up with the center of the 3-point target. And just to clear things up, the ODROID X2 is much better than the PandaBoard and Raspberry Pi, though of course that’s just my opinion.
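Sending those two variables over UDP is only a few lines. A hedged Python sketch follows (1706's actual sender was written in Qt/C++, and the address, port, and wire format here are made-up placeholders, not their protocol):

```python
import socket
import struct

# Placeholder address and port; a cRIO typically sits at 10.TE.AM.2.
CRIO_ADDR = ("10.17.6.2", 1130)

def pack_target(distance, rotation):
    # Two big-endian doubles: distance to the target, and the rotation
    # needed to line up with the target center.
    return struct.pack(">dd", distance, rotation)

def send_target(sock, distance, rotation, addr=CRIO_ADDR):
    # UDP is fire-and-forget: stale frames are simply overwritten by
    # newer ones, which is fine for a tracking loop.
    sock.sendto(pack_target(distance, rotation), addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Demo against loopback so it runs without a robot on the network.
    send_target(sock, 3.2, -4.5, ("127.0.0.1", 1130))
```

The receiving side just unpacks the same fixed-size struct, so the cRIO code never has to parse anything variable-length.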
It takes a lot of research to find them, but there are actually quite a few cheap quad-core systems out there, such as the UDOO. ODROID will supposedly be getting the next-gen 8-core smartphone processors in the next year or so. I don’t mean to be off-topic, I just remembered that fun fact. Here are the standard ARM Linux OpenCV instructions, which are a little confusing if you don’t use Linux, but the people in the OpenCV forum are really nice and know what they’re doing. Here is something I found on the PandaBoard specifically. Cheers!
First off, what OS are you using? The reason for the PandaBoard is vision, so the vision program should run strictly on that board. As for an environment, I’d suggest Qt. It provides an easy means of sending data to the cRIO via a UDP message, which is easy to program. To install OpenCV, use this guide: http://opencv.willowgarage.com/wiki/InstallGuide. Depending on what camera you are using, you may need to hack that as well; the Kinect is the only one I’m aware of that needs hacking. If you are using Ubuntu, this is how to get the Kinect working: http://openkinect.org/wiki/Getting_Started.
As for finding a better OS: Ubuntu is essentially the king of OSes for programming. It has fast compile times, the terminal is very easy to learn, and it doesn’t bug you with a bunch of features like Windows and Mac do.
This past 2013 season, we (Team 456) used the PandaBoard, running a stripped-down version of Ubuntu with OpenCV, to process video from a USB camera as our vision coprocessor (AKA the Panda Tracking System). The code processed the images, tracked targets, and provided angular target coordinates (target center offset from image center, in degrees) to the shooter code on the cRIO. The primary reasons we did this were to reduce network load (target coordinates instead of images) and to reduce computational load on the cRIO.
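Turning a pixel offset into "degrees from image center" needs only the camera's horizontal field of view. A pinhole-model sketch in Python (the 60-degree FOV and 320-pixel width below are placeholder numbers, not the C110's actual spec):

```python
import math

def focal_length_px(image_width_px, hfov_deg):
    """Effective focal length in pixels, derived from the horizontal FOV."""
    return (image_width_px / 2.0) / math.tan(math.radians(hfov_deg / 2.0))

def offset_degrees(target_x_px, image_width_px, hfov_deg):
    """Signed angle of the target from image center (positive = right)."""
    f = focal_length_px(image_width_px, hfov_deg)
    return math.degrees(math.atan2(target_x_px - image_width_px / 2.0, f))

if __name__ == "__main__":
    # Placeholder 60-degree horizontal FOV on a 320-pixel-wide frame:
    # a target at the right edge comes out at half the FOV, i.e. ~30 degrees.
    print(offset_degrees(320, 320, 60.0))
```

Using `atan2` rather than a linear degrees-per-pixel scale keeps the angle accurate away from the image center, where the linear approximation drifts.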
Initially we tried the Raspberry Pi, but its processor speed and USB video drivers were not beefy enough to reach our 15 fps processing goal. With the PandaBoard and some code optimization (the RGB/HSV conversion), we finally got around 20 fps. The ultimate frame rate is also determined by the integration time of the camera sensor, which depends on illumination conditions.
For vision input we chose the Logitech C110 (Walmart, ~$15.00). This older USB webcam had better OS support (older hardware is sometimes better supported in open source), and the low cost was attractive for replacement options. Having the webcam directly connected to the PandaBoard also let us skip pushing frames across the network.
YAVTA (Yet Another V4L2 Test Application): a stand-alone V4L2 utility that allowed us to control the exposure (integration) time of the webcam. It is very important to disable the camera’s auto-exposure to get consistent target tracking results.
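yavta's exact invocation depends on the build, so as an illustration here is the same idea with the more widely packaged `v4l2-ctl` tool, driven from Python. The control names are the standard UVC/V4L2 ones (`exposure_auto=1` selects manual mode); the exposure value and device path are placeholders, and supported ranges vary by camera and kernel version.

```python
import subprocess

def manual_exposure_cmd(device="/dev/video0", exposure=50):
    # Standard UVC/V4L2 controls: exposure_auto=1 switches to manual
    # mode, exposure_absolute pins the integration time (camera units).
    return ["v4l2-ctl", "-d", device,
            "--set-ctrl", "exposure_auto=1",
            "--set-ctrl", "exposure_absolute=%d" % exposure]

def set_manual_exposure(device="/dev/video0", exposure=50):
    # Returns the tool's exit status (0 on success).
    return subprocess.call(manual_exposure_cmd(device, exposure))
```

Running this once at boot, before the tracking loop starts, keeps the camera from re-exposing itself when the bright LED-ring reflection enters the frame.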
We are working on a whitepaper describing our 2013 system of combined vision tracking and two-axis shooter and will provide it here when complete.
In the attached photo, the webcam is in the center of the illuminated LED rings and the AXIS IP camera is located next to it. The AXIS IP camera was used for driver vision and for backup targeting in case of a Panda Tracking System failure.