The vision system tracks the yellow totes on the field. To move to the next tote, the robot control system drives forward and waits for a vision result. The result gives the rotation, in degrees, from the robot's perspective to the center of the tote, which the control system then uses to steer. This forms a closed-loop control system: the robot keeps adjusting until the vision system reports that it is centered (more or less).
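In rough terms, the loop amounts to a proportional controller on the vision angle. This is only a sketch: the class, method names, gain, and tolerance below are illustrative assumptions, not our actual robot code.

```java
// Minimal sketch of the closed-loop centering described above.
// ToteCentering, turnCommand, and the constants are hypothetical.
public class ToteCentering {
    static final double kP = 0.02;          // proportional gain (assumed tuning)
    static final double toleranceDeg = 2.0; // "centered, more or less"

    // angleToToteDeg: rotation to the tote's center reported by vision (degrees)
    static double turnCommand(double angleToToteDeg) {
        if (Math.abs(angleToToteDeg) < toleranceDeg) {
            return 0.0; // close enough: stop turning
        }
        return kP * angleToToteDeg; // turn proportionally to the error
    }

    public static void main(String[] args) {
        // Simulated loop: the robot rotates until the vision error is small.
        double angle = 15.0; // initial offset reported by vision (degrees)
        while (Math.abs(angle) >= toleranceDeg) {
            double cmd = turnCommand(angle);
            angle -= cmd * 10.0; // crude stand-in for the robot actually turning
            System.out.printf("error=%.2f deg, turn=%.3f%n", angle, cmd);
        }
    }
}
```

On the real robot, the turn command would feed the drivetrain rather than a simulated error, with the vision result updating the error each cycle.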
The innovation this year is that we applied depth-map tracking, using a Kinect-like camera, to find objects on the field. We have been researching methods of extracting information from a depth image for a few years. @faust1706 and I took sample depth-image data from last year's game to test feasibility and wrote programs to detect objects.
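As a rough illustration of the idea (the band-threshold approach and names below are assumptions, not necessarily the method we used), one simple way to pull an object out of a depth image is to keep only the pixels within an expected depth band and take their centroid:

```java
// Hypothetical sketch: find the centroid of pixels inside a depth band.
public class DepthBlob {
    // depthMm[y][x]: depth image in millimeters (0 = no reading)
    static int[] centroidInBand(int[][] depthMm, int nearMm, int farMm) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < depthMm.length; y++) {
            for (int x = 0; x < depthMm[y].length; x++) {
                int d = depthMm[y][x];
                if (d >= nearMm && d <= farMm) { // pixel inside the band
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null; // nothing in range
        return new int[] { (int) (sumX / count), (int) (sumY / count) };
    }

    public static void main(String[] args) {
        int[][] depth = new int[240][320];
        // Fake a tote-sized blob about 1.2 m away
        for (int y = 100; y < 140; y++)
            for (int x = 150; x < 210; x++)
                depth[y][x] = 1200;
        int[] c = centroidInBand(depth, 1000, 1500);
        System.out.println("centroid: x=" + c[0] + ", y=" + c[1]);
    }
}
```

The horizontal offset of such a centroid from the image center can be converted into the rotation angle the control loop consumes, given the camera's field of view.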
Discussion about the vision system is here:
http://www.chiefdelphi.com/forums/sh...d.php?t=133978