We do our vision processing on a Raspberry Pi using OpenCV, and just send the "targeting information" across to the roboRIO. The Pi-side code is on our GitHub repository, at Vision2016.py. Our (command-based Java) RIO-side code is also there under src/, as the CatapultPositioner subsystem and various AutoAim commands. I can forward any questions to our programming team.
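
The post doesn't say how the targeting information gets across; a common approach in FRC is NetworkTables, with the RIO as the server. A minimal Pi-side sketch, assuming pynetworktables and a hypothetical "vision" table with made-up angle/distance keys (not the actual keys from the repo):

```python
# Sketch of a Pi-side sender, assuming pynetworktables is installed and the
# roboRIO is the NetworkTables server. Table name and keys are hypothetical.
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")  # XXXX = team number
table = NetworkTables.getTable("vision")

def send_target(angle_deg, distance_in):
    """Publish one frame's targeting information for the RIO to read."""
    table.putNumber("angle", angle_deg)       # horizontal offset to target
    table.putNumber("distance", distance_in)  # estimated range to target
    table.putBoolean("hasTarget", True)
```

The RIO-side subsystem/commands would then read those keys each loop and drive the positioner off them.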
Edit: we used SEPARATE cameras for driving and for vision, because the exposure/speed/resolution settings for the two functions are usually quite different.
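
To illustrate why the settings differ, here's a hedged sketch of configuring two cameras with OpenCV on the Pi. The specific property values are hypothetical and driver-dependent (exposure ranges in particular vary a lot between cameras and V4L2 drivers):

```python
# Sketch: two cameras with different settings (hypothetical values; which
# properties are supported, and their ranges, depend on the camera driver).
import cv2

# Vision camera: low resolution and low, fixed exposure so the
# retroreflective target stands out against a dark background.
vision_cam = cv2.VideoCapture(0)
vision_cam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
vision_cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
vision_cam.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)  # manual mode on many V4L2 drivers
vision_cam.set(cv2.CAP_PROP_EXPOSURE, 0.01)       # deliberately dark image

# Driver camera: auto exposure and higher resolution for a watchable stream.
driver_cam = cv2.VideoCapture(1)
driver_cam.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
driver_cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
driver_cam.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75)  # auto mode on many V4L2 drivers
```

Trying to serve both jobs from one camera means either the driver gets a nearly black feed or the vision code fights washed-out frames, which is why the separate-camera setup is worth the extra hardware.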