Thanks a lot everyone for your comments. Here are my notes from the thread.
General notes
Many teams use vision, but it isn't required for success. For a given task there is often a simpler control strategy than vision.
Teams using vision shouldn't expect much troubleshooting support from the CSA (Control System Advisor) at an event.
Processing
There are two main strategies for performing vision processing.
1. Event -- perform a specific task
a. Aim -- e.g., robot is driven into position, and an aiming command is given.
b. Autonomous scoring -- moving robot into known position on field
c. Ramp -- robot automatically drives over 2012 ramp
2. Continuous -- the vision subsystem runs continuously, identifying one or more objects and feeding image telemetry as an input to the robot program
a. Drive to known position
b. Create HUD style display (image with overlay) to show driver
c. Indicate when robot is in scoring position to driver
Note that most of these use vision telemetry as input to an autonomous process.
You have three choices on where vision processing runs, each of which has benefits and drawbacks.
Driver station
+ have full power of PC
+ NI libraries available
+ fairly easy to interface
- Communications limits (bandwidth and latency) between the robot and driver station prevent certain algorithms from working. This can be a big limitation
+ easy to display telemetry to drive team
+ can use DS software to move telemetry to robot
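To get a feel for the communications limit, here is a back-of-the-envelope estimate of how many camera frames fit in the robot/driver-station link. The 7 Mbit/s cap and the ~15 KB JPEG size are assumptions for illustration -- check the current game manual for the real cap and measure your own camera's output.

```python
# Rough frames-per-second budget for streaming camera images over the
# field link. LINK_MBITS and FRAME_BYTES are assumed values, not
# official numbers.
LINK_MBITS = 7.0        # assumed FMS bandwidth cap
FRAME_BYTES = 15_000    # assumed size of one 320x240 JPEG frame

link_bytes_per_sec = LINK_MBITS * 1_000_000 / 8
max_fps = link_bytes_per_sec / FRAME_BYTES
print(f"~{max_fps:.0f} frames/s before saturating the link")
```

Halving the resolution or increasing JPEG compression buys proportionally more headroom, which is why DS-side processing of full-rate, full-resolution video is hard to pull off.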
cRIO
+ NI libraries available
+ simplest interfacing between vision program and robot programs
- Running vision on separate thread/process makes programming more complicated.
- Easier to crash robot program (e.g., memory management issues)
- Limited CPU power. Current WPILib with cRIO at 100% CPU exhibits unpredictable behavior
+ Easier to move images to DS than coprocessor option
- IP camera support only -- no USB camera support
roboRIO
+ NI libraries available
+ potential for openCV to work, but some questions about whether NI Linux Real-Time has necessary libraries
+ simplest interfacing between vision program and robot programs
- Running vision on separate thread/process makes programming more complicated.
- Easier to crash robot program (e.g., memory management issues)
+ USB support allows direct interfacing to cameras
+ Much more CPU power than cRIO
CPU 4-10x faster
Has NEON SIMD instruction support, which openCV appears to take advantage of. Unclear whether NI Vision does.
External single-board computer (SBC or coprocessor)
+ Many choices of hardware available, some more powerful than roboRIO.
Popular examples include Arduinos, Raspberry Pi, PCDuino, GHI Fez Raptor.
Nvidia Jetson TK1 looks like a monster board -- 2GB of RAM, 192 GPU cores, Tegra K1. OpenCV 2.4 doesn't appear to support the GPU, though.
An SBC with a video output is easier to troubleshoot than one without.
+ Some hardware supports hardware graphics speedup (vector instructions, GPU)
+ Many SBCs have USB support, allowing direct camera interfacing
- No NI library support
- Requires the ability to do UDP packet processing to exchange data with the robot
- Display of image on DS is more difficult
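The UDP requirement is less scary than it sounds. Below is a minimal sketch of coprocessor-to-robot telemetry; the JSON payload format and field names are invented for this example (in a real setup you'd send to the roboRIO's address on a port your robot program listens on), and UDP suits this job because only the latest vision result matters.

```python
import json
import socket

# Robot side: open a UDP socket to receive vision results.
# (Binding to port 0 just grabs a free port for this demo.)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
robot_addr = rx.getsockname()

# Coprocessor side: fire-and-forget one packet per processed frame.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(found, azimuth_deg, distance_in):
    """Pack one vision result and send it to the robot. No handshake:
    if a packet is lost, the next frame's result replaces it anyway."""
    msg = json.dumps({"found": found,
                      "azimuth": azimuth_deg,
                      "distance": distance_in}).encode()
    tx.sendto(msg, robot_addr)

send_target(True, -3.5, 96.0)
data, _ = rx.recvfrom(1024)
result = json.loads(data)
print(result)
```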
Software
NI Vision is generally considered easier to set up.
If you want the option of using a single board computer (vision coprocessor), you probably want to code in C++ or Java, as code can run in any of the three locations.
Running a web server on your coprocessor can make things easier.
http://code.google.com/p/mongoose/ is one option.
http://ndevilla.free.fr/iniparser/ is one of many free configuration-file parsers written in C.
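The web-server idea is a few lines in most languages; mongoose fills this role for C programs. Here is a sketch of the same thing with Python's standard library: a tiny status page on the coprocessor for troubleshooting. The status text is a stand-in for live data; serving the latest processed frame would work the same way with an image/jpeg content type.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATUS = b"target=found azimuth=-3.5\n"   # stand-in for live vision data

class Status(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request just gets the current status text.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(STATUS)
    def log_message(self, *args):          # keep the console quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), Status)  # port 0 = pick a free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Any browser or HTTP client on the robot network can now check on the
# coprocessor; here we fetch the page from the same process as a demo.
reply = urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/").read()
print(reply.decode())
srv.shutdown()
```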
Camera
Camera calibration is an essential part of the process. Ensure that the camera you select can be calibrated and that its settings persist through reboots/power cycles.
Mounting location is also essential.
Make sure your software library can acquire images from your camera. UVC is the standard for USB cameras. UVC 1.5 supports H.264 video, which can be faster to process in certain ways if your vision pipeline supports it.
Some question whether USB can sustain frame rates above 30 Hz.
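One payoff of knowing your camera's calibration is simple range estimation from a single image via the pinhole model. The field-of-view, resolution, and target numbers below are hypothetical; calibrate against your own camera and target.

```python
import math

# Estimate distance to a target from its apparent width in pixels.
# All constants here are made-up example values.
IMAGE_W_PX = 320     # horizontal processing resolution
FOV_DEG = 47.0       # assumed horizontal field of view of the camera
TARGET_W_IN = 20.0   # real-world target width, inches

def distance_in(target_px):
    """Distance along the camera axis, in inches.

    The full view spans 2*d*tan(FOV/2) inches at range d, and the
    target occupies target_px/IMAGE_W_PX of that span, so:
        TARGET_W_IN / (2*d*tan(FOV/2)) = target_px / IMAGE_W_PX
    Solving for d gives the expression below."""
    return (TARGET_W_IN * IMAGE_W_PX) / (
        2 * target_px * math.tan(math.radians(FOV_DEG / 2)))

print(round(distance_in(64), 1))   # target 64 px wide in the image
```

The same geometry, run vertically with the target's known mounting height, gives distance even when part of the target's width is occluded.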
Cameras
The Axis cameras (from the KOP) are a good choice for people just starting out. There is good built-in WPILib support, and they retain their settings through reboots.
Kinect works too; its depth map can be very useful, but open-source driver support seems rough.
Other interesting cameras: Asus Xtion, Playstation Eye
Future: Pixy
LED ring lights (typically green; don't use white) are considered essential.
Vision programming tactics
Need to be able to modify parameters at runtime:
Driver station dashboard parameter setting
Config file on robot filesystem
A config file is more flexible because you can define named presets, selected via the DS dashboard, each combining several parameter settings
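The named-presets idea can be as simple as one INI file on the robot's filesystem with a section per lighting condition, loaded with any config parser (iniparser in C, configparser in Python). The section and key names below are invented for this sketch.

```python
import configparser

# One section per preset; the DS dashboard would just send the
# section name to select it. Keys and values are example data.
INI_TEXT = """
[practice_field]
hue_min = 60
hue_max = 90
exposure = 10

[dim_venue]
hue_min = 55
hue_max = 95
exposure = 25
"""

cfg = configparser.ConfigParser()
cfg.read_string(INI_TEXT)   # on the robot, cfg.read("vision.ini") instead

def load_preset(name):
    """Return one named preset as a dict of ints."""
    return {k: int(v) for k, v in cfg[name].items()}

print(load_preset("dim_venue"))
```

Editing the file and re-reading it changes several parameters at once, without redeploying code.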
openCV is very popular. NI Vision is also viable. No commenter supported RoboRealm; one felt it was too simple (but is that bad?!) and another was held back by concerns about licensing
It's debatable whether an FRC-specific library on top of a vision library adds any value.
Lower the resolution if you need a higher frame rate
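The resolution/frame-rate trade is bigger than it looks, because per-pixel vision work scales roughly with the pixel count:

```python
# Halving each dimension quarters the number of pixels to process
# (and to stream), so a pipeline that manages ~7 fps at 640x480 can
# plausibly reach ~30 fps at 320x240, lighting and I/O permitting.
hi = (640, 480)
lo = (320, 240)
speedup = (hi[0] * hi[1]) / (lo[0] * lo[1])
print(f"{speedup:.0f}x fewer pixels to process")
```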
Develop a calibration procedure to use at competitions: move the robot around the competition field and take a set of pictures through the camera, then use them back in the pit for calibration.
Some venues are really bad:
https://www.dropbox.com/s/j8ju2ttvx7...et..png?dl=0
Resources
Team 2073 Vision Code from 2014:
http://www.chiefdelphi.com/forums/sh...d.php?t=128682
pcDuino 3:
http://www.pcduino.com/pcduino-v3/
roboRIO OS whitepaper:
http://www.ni.com/white-paper/14627/en/
Team 987 Kinect Vision whitepaper from 2012:
http://www.chiefdelphi.com/media/papers/2698
openCV camera calibration:
http://docs.opencv.org/doc/tutorials...libration.html
Team 3847 Whitepaper on Raspberry Pi:
http://www.chiefdelphi.com/media/papers/2709
Team 341 sample vision program from 2012:
http://www.chiefdelphi.com/media/papers/2676