The physical premise of vision in most FRC games is detecting light that you send out with an LED ring, which bounces off retroreflective tape back to your camera. Retroreflective tape is a material that reflects incoming light back in the direction it came from, instead of at an equal and opposite angle (like you'd expect from a mirror). That means no matter where you are, if you shine light at it, you get light back.
Quote:
Originally Posted by The Ginger
What are the best cameras
Anything with sufficient resolution and adjustable exposure is fine. Adjustable exposure matters because you need to turn it down so the camera sensor isn't flooded with light from the retroreflective tape.
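For example, with OpenCV's VideoCapture you can usually drop the exposure manually. The property values below are assumptions (they're driver-dependent), but the calls themselves are real OpenCV APIs:

Code:
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is a placeholder

# On many V4L2 drivers, 0.25 selects manual exposure mode (driver-dependent)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
# A low (often negative) value darkens everything except the lit tape
cap.set(cv2.CAP_PROP_EXPOSURE, -8)

ok, frame = cap.read()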
Quote:
Originally Posted by The Ginger
do you write code to recognize a specific pattern of pixels (which would blow my mind), or to pick up a specific voltage value that the camera uses as a quantification of the net light picked up by the camera's receiver
Those are the same thing.

In the camera sensor, incoming light generates a signal (voltage). The array of signals is turned into an array of RGB colors (that is, an image). The premise of computer vision is detecting and tracking patterns in an image.
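To make that concrete, here's a minimal sketch (using OpenCV and NumPy as an assumed stack; the file name is hypothetical) showing that an image really is just an array of numbers:

Code:
import cv2

frame = cv2.imread("frame.png")  # hypothetical file name
print(frame.shape)               # (height, width, 3): one color triple per pixel
print(frame[240, 320])           # color values at row 240, column 320
                                 # (note OpenCV orders channels BGR, not RGB)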
In the case of FRC, the retroreflective tape in the image will be much brighter and a different color (yes, you need to carefully choose your LED ring color, and it depends on the game: red and blue are bad choices given the tower LED strips in Stronghold, while green is a good choice), so it's possible to detect with HSV filtering. HSV is a color space based on hue (the color itself), saturation (grayscale to full color), and value (all black to full brightness). By filtering on specific ranges of hue, saturation, and value, you can pick out the pixels of interest, which should be the ones from the retroreflective tape. Then you need to filter out noise, since camera images aren't perfect; this is usually done by keeping the largest continuous blob of filtered pixels.
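Here's a minimal sketch of that pipeline in Python with OpenCV (assuming OpenCV 4; the threshold numbers are placeholders you'd tune for your own LED ring and camera):

Code:
import cv2
import numpy as np

frame = cv2.imread("frame.png")                     # hypothetical input frame

# Convert from BGR (OpenCV's default ordering) to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only pixels in a green-ish hue band with high saturation and value.
# These bounds are placeholders; tune them on real images from your camera.
lower = np.array([50, 100, 100])
upper = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Reject noise by keeping only the largest continuous blob (contour)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    target = max(contours, key=cv2.contourArea)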
Now you have a shape that corresponds to the target. You can use it to accomplish whatever you need, e.g. a closed-loop turn until the center of the shape lines up with the center of the image frame (lining the robot up with the target).
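As a sketch, that closed-loop turn reduces to driving the turn output from the horizontal pixel error, e.g. a simple proportional controller (the gain and deadband here are made-up numbers):

Code:
import cv2

# Continuing from the `target` contour above (hypothetical names)
def turn_command(target, frame_width, kP=0.004, deadband_px=5):
    """Proportional turn command from the target's horizontal offset."""
    M = cv2.moments(target)
    if M["m00"] == 0:
        return 0.0                    # degenerate contour, no centroid
    cx = M["m10"] / M["m00"]          # x coordinate of the blob's centroid
    error = cx - frame_width / 2.0    # pixels off from image center
    if abs(error) < deadband_px:
        return 0.0                    # close enough: stop turning
    return kP * error                 # motor output proportional to error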
In Stronghold we took it one step further and calculated both distance and yaw angle (using a lot of statistics and NI Vision), so the robot could quickly line up using the onboard gyroscope (a much more efficient closed-loop turn, because the gyro has faster feedback) and adjust the shooter for the distance needed to shoot. This preseason we're taking it another step further by working on calculating the full 3D position and rotation relative to the target using OpenCV (where it's actually a whole lot easier than in NI Vision). Hopefully the vision component in Steamworks won't be as useless as in 2015.
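For the 3D piece, the standard OpenCV route is cv2.solvePnP: given the target's known corner coordinates in real-world units, the matching corner pixels in the image, and the camera's calibration, it returns the target's rotation and translation relative to the camera. A hedged sketch (all of the numbers below are placeholders):

Code:
import cv2
import numpy as np

# Known corners of the vision target in its own frame, in meters (placeholder
# values; use the real tape dimensions from the game manual)
object_points = np.array([
    [-0.25, -0.15, 0.0],
    [ 0.25, -0.15, 0.0],
    [ 0.25,  0.15, 0.0],
    [-0.25,  0.15, 0.0],
], dtype=np.float64)

# Matching corner pixels detected in the image (e.g. from cv2.approxPolyDP
# on the target contour), in the same order as object_points
image_points = np.array([
    [310.0, 260.0],
    [420.0, 258.0],
    [422.0, 330.0],
    [312.0, 332.0],
], dtype=np.float64)

# Camera intrinsics from a one-time calibration (placeholder numbers)
camera_matrix = np.array([
    [700.0,   0.0, 320.0],
    [  0.0, 700.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# tvec is the target's position relative to the camera; rvec is its rotation
# (as a Rodrigues vector, convertible with cv2.Rodrigues)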
