Our best success has been with OpenCV running on a Raspberry Pi, then just sending target coordinates to the RIO (a rough sketch of that pipeline is below the list). This had several distinct advantages over our 2012 implementation of vision processing on the cRIO in Java (and trying to do vision processing on the driver station was a disaster for us because of network congestion):
- Much less network traffic
- A programmer could take a test platform home with no other impact on team progress; this was particularly helpful the first two years, when we could only afford one control system.
- No lag introduced into the driving experience (perhaps a side effect of #1)
- Very loose coupling of vision processing with robot control; we reused the 2013 vision processing code (and hardware) in 2014 with no changes.
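
For anyone curious what the Pi side of this looks like, here's a minimal sketch in Python with OpenCV: threshold for a lit retroreflective target, take the largest contour, and push just the centroid to the RIO over UDP. The HSV range, camera index, RIO address, and port are placeholders, not what our team actually ran, and you could just as easily swap the UDP send for NetworkTables.

```python
# Minimal sketch of a Pi-side vision loop: threshold, find the target,
# send only its coordinates to the RIO. Address, port, and HSV values
# below are hypothetical placeholders.
import json
import socket

import cv2
import numpy as np

RIO_ADDRESS = ("10.0.0.2", 5800)   # placeholder roboRIO IP/port (10.TE.AM.2 in FRC convention)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

camera = cv2.VideoCapture(0)       # first USB camera on the Pi

while True:
    ok, frame = camera.read()
    if not ok:
        continue

    # Threshold for the bright green of an LED-ring-lit retroreflective target.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))

    # [-2] keeps this working across OpenCV versions that return 2 or 3 values.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        continue

    # Treat the largest contour as the target and compute its center.
    target = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(target)
    cx, cy = x + w // 2, y + h // 2

    # Only the coordinates cross the network -- a few bytes per frame.
    sock.sendto(json.dumps({"x": cx, "y": cy}).encode(), RIO_ADDRESS)
```

Since each packet is only a few bytes per frame, the vision loop barely registers against the rest of the robot's traffic, which is where item #1 above comes from.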