I was impressed by the specs of the Raspberry Pi 3, so I decided to write vision tracking code for this platform using the Raspberry Pi camera as the input device. I wrote super-optimized custom blob detection code in YUV color space. It turns out the Pi 3 is overkill: the code runs fine on a Raspberry Pi Zero at about 25% CPU utilization. Processing is done in real time on 640x480 images at 30 frames per second, and the video is streamed via mjpg-streamer with a lag of less than 250 ms.
This works because:
- image capture, YUV conversion, JPEG compression, and image down-sampling are all done using the Pi’s GPU, and
- as I said, the blob detection code is super-optimized: to the extent possible it touches each pixel only once, uses small-memory-footprint data structures to represent partial results, and accesses memory in sequential order to improve cache hits (see the sketch below).
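To give a feel for the single-pass approach, here is a minimal sketch in C of run-of-the-mill blob detection over a planar YUV420 frame. This is not the actual plugin code; the names (yuv_blob_detect, Blob), the threshold values, and the assumption of a planar buffer layout are all illustrative. It scans rows sequentially, classifies each pixel once against U/V thresholds, and merges touching pixels into blobs using only two per-row label buffers plus a small blob table.

```c
/* Illustrative single-pass blob detection on a planar YUV420 frame.
 * Not the actual plugin code: names, thresholds, and buffer layout
 * are assumptions made for this sketch. */
#include <stdint.h>
#include <stdlib.h>

#define MAX_BLOBS 256

typedef struct {
    int min_x, min_y, max_x, max_y;  /* bounding box */
    int pixels;                      /* area in pixels */
    int parent;                      /* union-find link for merging */
} Blob;

static int find_root(Blob *b, int i)
{
    while (b[i].parent != i)
        i = b[i].parent = b[b[i].parent].parent;  /* path halving */
    return i;
}

static void merge(Blob *b, int a, int c)
{
    a = find_root(b, a);
    c = find_root(b, c);
    if (a == c) return;
    b[c].parent = a;                 /* fold blob c into blob a */
    if (b[c].min_x < b[a].min_x) b[a].min_x = b[c].min_x;
    if (b[c].min_y < b[a].min_y) b[a].min_y = b[c].min_y;
    if (b[c].max_x > b[a].max_x) b[a].max_x = b[c].max_x;
    if (b[c].max_y > b[a].max_y) b[a].max_y = b[c].max_y;
    b[a].pixels += b[c].pixels;
}

/* y: w*h luma plane; u, v: (w/2)*(h/2) chroma planes.
 * Returns the number of blob slots used; entries whose parent is not
 * their own index were merged into another blob and should be skipped. */
int yuv_blob_detect(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                    int w, int h, Blob *blobs)
{
    int nblobs = 0;
    int *prev = malloc(sizeof(int) * w);  /* blob label per column, previous row */
    int *cur  = malloc(sizeof(int) * w);  /* blob label per column, current row */
    for (int x = 0; x < w; x++) prev[x] = -1;

    for (int row = 0; row < h; row++) {
        const uint8_t *yrow = y + row * w;
        const uint8_t *urow = u + (row / 2) * (w / 2);
        const uint8_t *vrow = v + (row / 2) * (w / 2);

        for (int col = 0; col < w; col++) {
            /* placeholder threshold: bright pixels with low chroma */
            int hit = yrow[col] > 80 &&
                      urow[col / 2] < 110 && vrow[col / 2] < 110;
            if (!hit) { cur[col] = -1; continue; }

            int left = (col > 0) ? cur[col - 1] : -1;
            int up   = prev[col];
            int id;

            if (left < 0 && up < 0) {
                if (nblobs >= MAX_BLOBS) { cur[col] = -1; continue; }
                id = nblobs++;                           /* start a new blob */
                blobs[id] = (Blob){ col, row, col, row, 0, id };
            } else if (left >= 0 && up >= 0) {
                merge(blobs, left, up);                  /* pixel joins both */
                id = find_root(blobs, left);
            } else {
                id = find_root(blobs, left >= 0 ? left : up);
            }

            if (col < blobs[id].min_x) blobs[id].min_x = col;
            if (col > blobs[id].max_x) blobs[id].max_x = col;
            blobs[id].max_y = row;
            blobs[id].pixels++;
            cur[col] = id;
        }
        int *tmp = prev; prev = cur; cur = tmp;          /* reuse row buffers */
    }
    free(prev);
    free(cur);
    return nblobs;
}
```

Each luma pixel is read exactly once as the rows are walked front to back, and the only working state is the blob table and two integer row buffers, which is what keeps the cache footprint small.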
Source code is available; see the links below. The code includes:
- an mjpg-streamer plugin in C that does image capture, blob detection, and video streaming,
- a GUI in Python that displays the video and detected bounding boxes, and allows inspection and tweaking of camera and blob detection parameters, and
- a Java test program that demonstrates how to grab the detected blobs on the roboRIO.
The GUI is available at https://github.com/team696-public/dash696
The rest of the code is available at https://github.com/team696-public/mjpg-streamer
A paper describing the system is available at https://github.com/team696-public/dash696/blob/master/raspi_blobs.pdf
Enjoy.