Re: Raspberry Pi + Camera Module for Vision Processing
Last year we (Team 116 Epsilon Delta) used two USB cameras connected to a Raspberry Pi (definitely stretching the power limits).
One camera streamed video using mjpg_streamer. That barely taxed the CPU (less than 14%) and could easily stream 30 fps at 720p, since all of the MJPEG encoding was already being done inside the USB camera, but we dialed it down to 416x240 @ 10 fps, which was sufficient for the human driver and kept the camera's network bandwidth under 2 Mbps.
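For what it's worth, the bandwidth figure is easy to sanity-check; here's a quick back-of-the-envelope in Python, where the ~20 KB compressed frame size is my assumption rather than a measured value:

```python
# Rough MJPEG bandwidth estimate; the compressed frame size is an assumption, not measured.
frames_per_sec = 10
bytes_per_frame = 20_000   # assumed size of one 416x240 MJPEG frame

bits_per_sec = frames_per_sec * bytes_per_frame * 8
print(f"~{bits_per_sec / 1e6:.1f} Mbps")   # ~1.6 Mbps, in line with the <2 Mbps we saw
```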
The second camera fed OpenCV code, which processed only a few frames per second at best. The dedicated Pi camera should deliver significantly higher data rates, but the biggest constraint will still be the OpenCV code itself. That said, this year's task of finding the center of a single object in at most two ball colors may well run faster than last year's multiple-goal detection, though we haven't tried it yet.
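To make concrete what I mean by finding the center of a single object, here is a minimal sketch of an HSV-threshold-and-moments approach; the camera index and HSV ranges are placeholders, not calibrated values:

```python
import cv2
import numpy as np

# Placeholder HSV ranges for the two ball colors; these would need real calibration.
RED_LO,  RED_HI  = np.array([0, 120, 70]),   np.array([10, 255, 255])
BLUE_LO, BLUE_HI = np.array([100, 120, 70]), np.array([130, 255, 255])

def ball_center(frame, lo, hi):
    """Return the (x, y) centroid of pixels in the given HSV range, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    m = cv2.moments(mask)
    if m["m00"] == 0:              # nothing matched the color range
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)          # placeholder index for the vision camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    center = ball_center(frame, RED_LO, RED_HI) or ball_center(frame, BLUE_LO, BLUE_HI)
    if center is not None:
        print("ball at", center)
```

The moments call gives the centroid of the whole mask, so this assumes one ball dominates the frame; anything fancier (contours, size filtering) costs more CPU, which is exactly the constraint above.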
The only real disadvantage I see to the Pi camera is the short ribbon cable, which might make it harder to position the camera where it really needs to be on the robot.
Recognition for autonomous mode should be easy. If 160x120 frames can be processed at 30 fps, then assisted catching should be possible too; that definitely wasn't achievable with our USB-attached cameras on the Pi.
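Whether 160x120 @ 30 fps is realistic is easy to measure; something like this rough benchmark (the camera index, frame count, and threshold stand-in are all arbitrary) would tell you:

```python
import time
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)

n = 100                            # arbitrary number of sample frames
start = time.time()
for _ in range(n):
    ok, frame = cap.read()
    if not ok:
        break
    # Stand-in for the real pipeline: a convert + threshold, the cheap half of detection.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
print(f"{n / (time.time() - start):.1f} fps")
```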
I'm wondering whether anyone has tried OpenCV processing on a BeagleBone Black to see how it compares to the Raspberry Pi. Either way, I now think I'm going to have to get a couple of the dedicated cameras.
-Spencer