Configuring OpenCV with PandaBoard

My team is thinking of using a PandaBoard to assist in vision tracking for our robot. However, I’m not quite sure how to configure OpenCV to work on the robot.

Are the OpenCV directories supposed to be included in the WindRiver Project build properties (and if so, how?), or should the code remain exclusively on the PandaBoard? In other words, should the OpenCV code be part of the main robot code, or should it be completely separate?

OpenCV needs to run on the PandaBoard.
The result of the vision processing, most likely the target information, is then forwarded to the cRIO.
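
Roughly, the loop on the board looks like this. This is a minimal Python sketch, where the cRIO address, port, message format, and the find_target() stub are all placeholder assumptions:

```python
# Minimal sketch of the coprocessor split: OpenCV does the heavy lifting
# on the PandaBoard, and only the small result is sent to the cRIO.
# The cRIO address/port, message format, and find_target() stub are
# placeholder assumptions.
import socket

import cv2

CRIO_ADDR = ("10.0.0.2", 1180)  # hypothetical cRIO IP and port

def find_target(frame):
    """Placeholder: threshold the frame and return (x, y) of the target, or None."""
    return None

cap = cv2.VideoCapture(0)  # first USB camera on the board
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    target = find_target(frame)
    if target is not None:
        # Send a tiny text datagram instead of shipping whole images.
        sock.sendto(("%d,%d" % target).encode(), CRIO_ADDR)
```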

If you need to see how it has been done by others, the search function will yield lots of results. Try searching for “Raspberry”. Also, there are a couple of white papers that might help.

Spectrum 3847 has this white paper that helped me tremendously. They wrote this for a Raspberry Pi, but it should be able to give you the help you need.
I was able to use it to get a fully functional tracking system using a PCDuino.
If you would like to see the code I came up with, just PM me and I’ll get it to you.
It is written in Python, but that should not be an issue.

OpenCV is pretty much the same install on all ARM devices running Linux. If you can do it on a Raspberry Pi, you can usually do it on a BeagleBoard, PandaBoard, ODROID, Gumstix, or other embedded system running Linux. That’s about all I know :slight_smile:

1706 used the ODroid X2 board. The environment was QT, and we sent a UDP message to the cRIO. I only sent two variables: the distance, and the rotation needed to line up with the center of the 3-point target. And just to clear things up, the ODroid X2 is much better than the PandaBoard and Raspberry Pi. Of course, that is very opinionated :wink:
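
For the curious, packing those two values into a fixed-size datagram can be as simple as the sketch below. This is Python rather than our actual QT code, and the address, port, and byte layout are assumptions the cRIO side would have to match:

```python
# Hypothetical two-value UDP datagram: distance and rotation packed as
# big-endian doubles, so the cRIO can parse a fixed 16-byte payload.
import socket
import struct

CRIO_ADDR = ("10.17.6.2", 1180)  # assumed cRIO address (10.TE.AM.2) and port

def send_target(sock, distance, rotation):
    sock.sendto(struct.pack(">dd", distance, rotation), CRIO_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target(sock, 3.2, -4.5)  # example: 3.2 units away, 4.5 degrees left
```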

It takes a lot of research to find them, but there are actually quite a few cheap quad-core systems out there, such as the UDOO. ODROID will supposedly be getting the next-gen 8-core smartphone processors in the next year or so. I don’t mean to be off-topic, I just remembered that fun fact. Here are the standard ARM Linux OpenCV instructions, which are a little confusing if you don’t use Linux, but the people in the OpenCV forum are really nice and know what they’re doing :stuck_out_tongue: Here is something I found on the PandaBoard specifically. Cheers!

I should probably mention that I already possess a PandaBoard, so I am not planning to use any other board. I’m not asking WHAT type of board I should use but HOW I should use it.

But thanks for the replies so far, it’s been very helpful! :smiley:

First off, what OS are you using? The reason for the PandaBoard is vision, so why shouldn’t the vision program be strictly on that board? As for an environment, I’d suggest QT. It provides an easy means of sending data to the cRIO via a UDP message that is easy to program. To install OpenCV, use this guide: http://opencv.willowgarage.com/wiki/InstallGuide. Depending on what camera you are using, you may need to hack that as well. The Kinect is the only one that I am aware needs hacking. If you are using Ubuntu, then this is how to get the Kinect to work: http://openkinect.org/wiki/Getting_Started.
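
Once it’s installed, a quick sanity check like this will tell you whether OpenCV and the camera are talking. Python here, and camera index 0 is just an assumption (the first V4L2 device):

```python
# Quick post-install check: does OpenCV import, and does the camera open?
import cv2

print(cv2.__version__)

cap = cv2.VideoCapture(0)  # assumes the camera is the first V4L2 device
ok, frame = cap.read()
print("camera opened:", cap.isOpened(), "- frame grabbed:", ok)
```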

I currently have Ubuntu 12.04 installed on the PandaBoard, but I’ve been having some trouble installing the OMAP 4 addons (is this even necessary?), so it could change if I find a better OS.

I’m planning to use the IP Axis camera that came in the kit, but I also have a typical webcam to use as a backup if necessary. I wouldn’t need to hack either of those, would I?

And I guess that part about the code being on the PandaBoard was pretty stupid :stuck_out_tongue: I’m new to using co-processors.

It’s alright, but does that make sense?

I know for a fact that a webcam you go and buy at the store does not require any workarounds to use it; I’m not entirely sure about a webcam built into a computer, however.

I have personally never used the Axis IP camera, but due to the sheer volume of teams that do use it, I assume you don’t have to hack it, and if you do, it will be something simple.

I don’t believe you need the OMAP 4 addons to run OpenCV; if you do, however: http://omappedia.org/wiki/PandaBoard_Ubuntu_PPA

As for finding a better OS… Ubuntu is essentially the king of all OSes for programming. It has fast compile times, the terminal is very easy to learn, and it doesn’t bug you with a bunch of features like Windows and Mac do.

This 2013 season, we (Team 456) used the PandaBoard, running a stripped-down version of Ubuntu with OpenCV, as the vision coprocessor (AKA the Panda Tracking System) to process video from a USB camera. The code processed the images, tracked targets, and provided angular target coordinates (target center from image center, in degrees) to the shooter code on the cRIO. The primary reasons we did this were to reduce network load (images vs. target coordinates) and to reduce computational load on the cRIO.
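
The angular coordinate itself is just the pixel offset from the image center scaled by the camera’s field of view. The sketch below isn’t our actual code, and the resolution and FOV numbers are placeholder assumptions:

```python
# Sketch of converting a target's pixel x-position into degrees off the
# image center. IMAGE_WIDTH and HORIZONTAL_FOV_DEG are assumptions, not
# the camera's real specs.
import math

IMAGE_WIDTH = 640          # assumed capture width in pixels
HORIZONTAL_FOV_DEG = 60.0  # assumed horizontal field of view

def pixel_to_degrees(target_x):
    """Angle of the target left/right of image center, in degrees."""
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2.0))
    return math.degrees(math.atan((target_x - IMAGE_WIDTH / 2.0) / focal_px))

print(pixel_to_degrees(480))  # target right of center -> positive angle
```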

Initially we tried to use the Raspberry Pi, but the processor speed and USB video drivers were not beefy enough to reach our 15 fps processing goal. With the PandaBoard and some code optimization (RGB/HSV conversion), we finally got around 20 fps. The ultimate frame rate is also determined by the integration time of the camera sensor, which depends on illumination conditions.
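
For reference, the RGB-to-HSV step is usually a cvtColor followed by an inRange threshold, something like the sketch below; the green HSV bounds are placeholders, not our tuned values:

```python
# Sketch of the RGB->HSV conversion and threshold that tends to dominate
# per-frame cost. The HSV bounds are placeholders, not tuned values.
import cv2
import numpy as np

def threshold_frame(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([50, 100, 100])   # placeholder lower HSV bound (greenish)
    upper = np.array([90, 255, 255])   # placeholder upper HSV bound
    return cv2.inRange(hsv, lower, upper)  # binary mask of candidate pixels
```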

For vision input, we chose the Logitech C110 (Walmart, ~$15.00). This older USB webcam had better OS support (older hardware is sometimes better supported in open source), and its low cost made replacements attractive. Also, having the webcam directly connected to the PandaBoard let us skip pushing frames across the network.

Other libraries we used:

  1. mongoose: A lightweight, multithreaded, embeddable web server to handle HTTP GET requests from the cRIO. In theory, one CPU core on the PandaBoard handled the HTTP GET traffic while the other did the vision processing. Found on the Google Code Archive.

  2. iniParser: An INI configuration-file parser written in C. Found at: http://ndevilla.free.fr/iniparser/

  3. YAVTA (Yet Another V4L2 Test Application): A stand-alone V4L2 utility that allowed us to control the exposure (integration) time of the webcam. It is very important to disable the camera’s auto-exposure settings to get consistent target-tracking results; a sketch of the same exposure-locking idea follows this list.
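
We drove YAVTA from the command line, but as a rough illustration of the same exposure-locking idea, OpenCV also exposes exposure controls as capture properties. The property values are driver- and backend-dependent, so treat this as a sketch to verify on your own camera, not our exact setup:

```python
# Rough illustration of locking exposure via OpenCV capture properties
# instead of YAVTA. Values are driver/backend dependent (0.25 often means
# "manual mode" on V4L2 builds), so verify on your own camera.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)  # request manual exposure mode
cap.set(cv2.CAP_PROP_EXPOSURE, 0.01)       # fixed exposure; units vary by driver
```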

We are working on a white paper describing our 2013 system of combined vision tracking and a two-axis shooter, and will provide it here when complete.

In the attached photo, the webcam is in the center of the illuminated LED rings, and the AXIS IP camera is located next to it. The AXIS IP camera was used for driver vision and for backup targeting in case of a Panda Tracking System failure.