Last year we went down the path of vision on a BeagleBone Black as a co-processor, using the MS LifeCam 3000, but ended up with a horrible 2s lag at about 1-2 fps. Over the weekend before champs, I learned a ton, recompiled a lot of packages, and bought a Logitech C920. That got us to about 20 fps with a 100-200 ms lag.
Fast forward to this year: we used an RPi 3 with the Logitech C920, with a stack that let us run OpenCV code as a filter for mjpg-streamer. This let us hook into the streamer, process an image, draw on it, and send it back to the dashboard very easily.
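For anyone curious what one of these filters looks like, here's a minimal sketch (not our exact code). It assumes the Python filter hook from the mjpg-streamer OpenCV input plugin, where you expose an init_filter() that returns a callable taking and returning a frame; the HSV bounds here are placeholders:

```python
import cv2
import numpy as np

# Placeholder HSV bounds; ours actually come down from the RIO
# over the socket described later in this post.
HSV_LOW = np.array([60, 100, 100], dtype=np.uint8)
HSV_HIGH = np.array([90, 255, 255], dtype=np.uint8)

def process(img):
    """Threshold the frame in HSV and outline the biggest blob."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    # findContours returns 2 or 3 values depending on OpenCV version;
    # [-2] grabs the contour list either way.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(target)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(img, (x + w // 2, y + h // 2), 4, (0, 0, 255), -1)
    return img  # the annotated frame goes straight back into the MJPEG stream

def init_filter():
    # Hook the plugin loads; it returns the callable applied to each frame.
    return process
```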
The code and tool setup we have running has been stood up and run on a BeagleBone Black, RPi 2 & 3, and a Jetson TX1, so we can easily pick and choose the platform we want. This year, the Pi 3 got us a few extra frames per second over the BBB. We didn't use the TX1 for a few reasons, mostly space, plus we didn't have a spare.
To get values back to the RIO, we simply use a socket to send data as strings over TCP to a socket handler thread in our robot code. When the client connects, we send the HSV values down to the coprocessor for it to use, so it is all configured through the robot software (and our config file). We spin the client socket off onto another thread so it doesn't hold up image capture and processing. All in, we were running at about 15-20 fps with roughly a 500 ms delay, which we want to eliminate once we figure out where it is introduced.
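The wire format is nothing fancy. Roughly, the coprocessor side looks like the sketch below; the host, port, and comma-separated message layout are made up for illustration, but the shape (RIO sends HSV on connect, the sender runs on its own thread) is what we do:

```python
import socket
import threading

RIO_HOST = "10.xx.yy.2"  # hypothetical; your roboRIO's address
RIO_PORT = 5800          # hypothetical port

def start_vision_link(results_queue):
    """Connect to the RIO's socket handler thread, read the HSV config it
    sends on connect, then stream target values back as plain strings.
    results_queue is a queue.Queue of (x, y) tuples from the filter."""
    sock = socket.create_connection((RIO_HOST, RIO_PORT))
    stream = sock.makefile("rw")

    # First line from the RIO: the HSV bounds, e.g. "60,100,100,90,255,255"
    # (format assumed for illustration).
    hsv = [int(v) for v in stream.readline().split(",")]

    def sender():
        while True:
            x, y = results_queue.get()        # filled in by the filter code
            stream.write("{},{}\n".format(x, y))
            stream.flush()

    # Keep the socket traffic off the capture/processing thread.
    t = threading.Thread(target=sender)
    t.daemon = True
    t.start()
    return hsv[:3], hsv[3:]
```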
As drive coach, the benefit of having the code as a filter was being able to draw key points on the image, plus an indicator for when the coprocessor could not connect to the RIO (really useful to know before a match starts, as opposed to the RIO just thinking it is connected!). And we get all of that without having to write images out separately to stream back, or settle for the raw camera stream.
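The indicator itself is just a draw call or two inside the filter; something like this (a hypothetical helper, and the text and colors are whatever stands out on your dashboard):

```python
import cv2

def draw_link_status(img, connected):
    # Big red warning so we can spot a dead RIO link before the match starts.
    if not connected:
        cv2.putText(img, "NO RIO LINK", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    return img
```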
Our instructions for setting this up are already written; I'll likely clean them up and post them on our blog in the next week or so.