I just posted some C++ code I wrote to read from USB cameras and display the images to a monitor. Note that there is no "vision processing" going on here, so this isn't intended as a final solution to any particular problem. It's more of a test to see how fast I can capture and display images. It might be a nice starting place for someone who wants to do vision processing on a coprocessor, and in fact, team 696 may use it for that purpose in 2015, depending on the game.
This code has been run on Odroid U2 and U3 boards running Ubuntu, reading from either one or two Playstation Eye USB cameras and displaying the images on a monitor attached to the Odroid via HDMI. I'm using two threads per camera: one for reading images from the camera and one for sending them to the display. The images are buffered in a queue in shared memory, with pointers passed between threads to avoid unnecessary copying.
Here are the frame rates I've seen on Odroid U2:
1 camera at 640 x 480 pixels: 43 fps
2 cameras at 640 x 480 pixels: 15 fps
1 camera at 320 x 240 pixels: 125 fps
1 camera at 640 x 480 and 1 at 320 x 240: 30 fps
These are uncompressed images, and I expect we are limited by the speed of the USB connection.
The code is available here:
https://github.com/team696-public/2014_odroid_camera
Dependencies: Video4Linux, OpenCV.
The capture code accesses the cameras using the Video4Linux driver interfaces. OpenCV is used only to display the images.