Attached is a zip file containing everything you need to run FIRST Team 341’s laptop-based vision system. We are very proud of this software, as it propelled us to 4 Regional/District Championships and the Curie Division Finals. Every shot our robot attempted in Rebound Rumble was completely controlled by this code.
To get it up and running on your computer (which can be Windows, Linux, or Mac OS X), you should first install the WPI SmartDashboard using the installer, which you can find here: http://firstforge.wpi.edu/sf/frs/do/listReleases/projects.smartdashboard/frs.installer
Note that for Linux or OS X builds (or if you just want to use more recent libraries than those included in the SmartDashboard installer on Windows), you will need to do the installation by hand. The libraries you will need are OpenCV (http://opencv.willowgarage.com/wiki/), JavaCV (http://code.google.com/p/javacv/), and FFmpeg (http://ffmpeg.org/). Make sure that the OpenCV and JavaCV versions you install are compatible: they must be built against the same OpenCV version number and the same architecture (x86 or x86-64). You then need to make sure the paths to the OpenCV libraries are in your system path.
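A quick way to confirm that everything is wired up is to run a tiny JavaCV program before launching the widget. This is just a sketch (the class name is made up, and the package names assume the JavaCV releases of that era); it will die with an UnsatisfiedLinkError if the versions or architectures are mismatched or the native libraries are not on your path:

    import static com.googlecode.javacv.cpp.opencv_core.*;

    public class LoadCheck {
        public static void main(String[] args) {
            // Creating an image forces the native OpenCV library to load,
            // so a bad version/architecture pairing fails right here
            // instead of later inside the widget.
            IplImage img = IplImage.create(640, 480, IPL_DEPTH_8U, 3);
            System.out.println("OpenCV loaded OK: "
                    + img.width() + "x" + img.height());
        }
    }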
To use the widget with the Axis camera, you should configure your camera to allow anonymous viewing and make sure it is connected directly to your wireless bridge (not the cRIO). The camera's IP address must also be set by right-clicking on the SmartDashboard widget. For best results, you will also want to ensure that your camera has a very fast exposure time (we used 1/120s). The FIRST control system documentation and vision whitepaper explain how to do this.
You can also run the widget in a “stand alone” fashion, as the class has a Java main method that you can use to load images from disk and display the processed results. Sample images are included.
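If you want to prototype in the same style, a stand-alone harness looks roughly like the sketch below. This is not our widget's actual entry point (the class and file names here are placeholders); it just shows the load-from-disk-and-display pattern using JavaCV:

    import com.googlecode.javacv.CanvasFrame;
    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_highgui.*;

    public class StandaloneDemo {
        public static void main(String[] args) {
            // Substitute the path to one of the sample images from the zip.
            String path = args.length > 0 ? args[0] : "sample.jpg";
            IplImage raw = cvLoadImage(path);
            if (raw == null) {
                System.err.println("Could not load " + path);
                return;
            }
            // The real widget would run its full processing pipeline here;
            // this sketch just displays the raw image in a window.
            CanvasFrame frame = new CanvasFrame("Vision Test");
            frame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
            frame.showImage(raw);
        }
    }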
Our vision system uses OpenCV library functions to successively filter the raw camera image and find the vision targets. The highest target detected is then used to compute range and bearing information. Because the camera is statically mounted, we can use inverse trigonometry to find the distance to the target from its height in the image. Bearing is computed by adding the horizontal offset of the target in the camera frame to the heading reported by the robot. This lets the robot "aim" with a closed-loop controller driving the gyro to the computed bearing. If your robot has a turret, you could instead send the turret angle back to the SmartDashboard and get the same results. The computed range is used to find an optimal shooter RPM, which also gets sent back to the robot. We interpolate between points in our "range-to-RPM" lookup table in order to provide smooth outputs to our shooter wheel speed control loop.
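The geometry and table lookup boil down to a few lines of math. The sketch below is illustrative only: every constant (camera height, pitch, fields of view, the table entries) is a made-up example, not a value from our code, and the pixel-to-angle mapping is a simple linear approximation of the more exact arctangent of pixel offset over focal length. Measure these for your own camera and robot:

    public class TargetMath {
        // Illustrative constants -- measure these on your own setup.
        static final double CAMERA_HEIGHT_IN = 33.0;   // lens height above floor
        static final double TARGET_HEIGHT_IN = 98.0;   // height of target center
        static final double CAMERA_PITCH_DEG = 21.0;   // upward camera tilt
        static final double VERTICAL_FOV_DEG = 36.0;   // depends on camera/lens
        static final double HORIZONTAL_FOV_DEG = 47.0;
        static final int IMAGE_WIDTH = 640, IMAGE_HEIGHT = 480;

        // Range from the target's vertical position in the image: the pixel
        // offset from image center maps to an elevation angle, and
        // tan(pitch + angle) relates the height difference to distance.
        static double rangeInches(double targetCenterY) {
            double offsetPx = (IMAGE_HEIGHT / 2.0) - targetCenterY; // up = +
            double angleDeg = CAMERA_PITCH_DEG
                    + offsetPx * VERTICAL_FOV_DEG / IMAGE_HEIGHT;
            return (TARGET_HEIGHT_IN - CAMERA_HEIGHT_IN)
                    / Math.tan(Math.toRadians(angleDeg));
        }

        // Bearing: horizontal pixel offset mapped to an angle, added to the
        // gyro heading so a closed-loop controller can turn the robot to it.
        static double bearingDegrees(double targetCenterX, double headingDeg) {
            double offsetPx = targetCenterX - (IMAGE_WIDTH / 2.0);
            return headingDeg + offsetPx * HORIZONTAL_FOV_DEG / IMAGE_WIDTH;
        }

        // Linear interpolation in a sorted {range, RPM} table gives a smooth
        // setpoint for the shooter wheel speed loop. Example points only.
        static final double[][] RANGE_TO_RPM = {
            {120, 2600}, {150, 2750}, {180, 2950}, {210, 3200}
        };

        static double shooterRpm(double rangeIn) {
            double[][] t = RANGE_TO_RPM;
            if (rangeIn <= t[0][0]) return t[0][1];
            for (int i = 1; i < t.length; i++) {
                if (rangeIn <= t[i][0]) {
                    double f = (rangeIn - t[i - 1][0]) / (t[i][0] - t[i - 1][0]);
                    return t[i - 1][1] + f * (t[i][1] - t[i - 1][1]);
                }
            }
            return t[t.length - 1][1]; // clamp beyond the last table entry
        }
    }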
Thanks to Brad Miller, WPI, Kevin O’Connor, and the FRC Control System team for making OpenCV-based vision processing a reality in 2012!