In the past, when my team (172 NorthernForce) has tried to do vision processing, we've not had much luck. 2-3 years ago we tried doing it on the cRIO, which had horrendous framerates, so last year we added a Raspberry Pi with a USB camera. This worked better, but was still pretty slow. This year we have some of the Raspberry Pi camera modules, and so far this looks pretty promising. Interfacing with the camera is a real pain, however, so I thought I'd share what I'm working on, in case anyone else is interested in using the camera module.
I've built a class called PiCam for interfacing with the camera. All you have to do is construct a PiCam instance, giving it a requested image size and a callback that processes the frames. The callback takes a single parameter: an OpenCV matrix/image containing the current frame to be processed.
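To make that concrete, here is a rough usage sketch. The header name, exact constructor signature, and the run() method are assumptions on my part (check the repository below for the real API); the sketch just illustrates the pattern of passing a size and a frame callback to PiCam:

```cpp
// Hypothetical usage of the PiCam class described above.
// Header name, constructor arguments, and run() are assumptions.
#include <opencv2/opencv.hpp>
#include "PiCam.h"  // assumed header from the 2014-Vision repo

// Frame callback: receives the current frame as an OpenCV matrix.
static void processFrame(cv::Mat &frame)
{
    // Example processing: threshold the image to find bright targets.
    cv::Mat gray, mask;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 200, 255, cv::THRESH_BINARY);
    // ...find contours, compute target location, send results to the robot, etc.
}

int main()
{
    // Request 320x240 frames and hand the camera our callback.
    PiCam camera(320, 240, processFrame);

    // Assumed blocking call that grabs frames and invokes the callback
    // for each one; the actual method name may differ.
    camera.run();
    return 0;
}
```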
The code is on GitHub at http://github.com/NorthernForce/2014-Vision, and there is Doxygen documentation hosted at http://northernforce.github.io/2014-Vision, which I will try to keep up to date as well.
Please keep in mind that this is in a pretty early state and there are many unfinished things. For instance, I plan on adding more features for controlling the camera parameters (especially turning off that pesky auto-exposure!).