
Question about control system at St Louis Regional


Bernini
30-03-2015, 10:49
During the autonomous period of quarterfinal 8, Team 1706 attempted a three-tote autonomous routine. They won the Innovation in Control award for their vision system, so I assume that is how they did it. Here is the video: https://www.youtube.com/watch?v=WJCtXqVdJSY

I am wondering if anyone has any information about the vision system / control system that allowed them to auto-correct for the second tote, which was severely out of position.

NWChen
30-03-2015, 10:55
Try PMing faust1706 (http://www.chiefdelphi.com/forums/member.php?u=66070).

Ozuru
30-03-2015, 11:21
If you read through his thread history, you'll see his vision tracking code dumps.

cmastudios
30-03-2015, 15:08
The vision solution tracks the yellow totes on the field. To move toward the next tote, the robot control system drives forward and waits for a vision solution. The solution provides the rotation toward the center of the tote, in degrees from the robot's perspective, which is then used by the control system. This forms a closed-loop control system in which the robot keeps adjusting until the vision solution reports that it is perfectly centered (more or less).
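A minimal sketch of that closed loop (this is an assumed proportional-control approach, not 1706's actual code; the gain, tolerance, and helper names are all hypothetical): the vision system reports the tote's bearing in degrees, and the controller turns the robot until the bearing is near zero.

```python
KP = 0.02            # proportional gain (assumed value; tune on a real robot)
TOLERANCE_DEG = 1.0  # "perfectly centered (more or less)"
MAX_TURN = 0.5       # clamp the motor command to a safe range

def turn_command(bearing_deg):
    """Map the vision bearing (degrees, + = tote to the right) to turn power."""
    cmd = KP * bearing_deg
    return max(-MAX_TURN, min(MAX_TURN, cmd))

def drive_to_centered(get_bearing, apply_turn, max_steps=200):
    """Read vision, turn, repeat until centered. Returns the final bearing."""
    bearing = get_bearing()
    for _ in range(max_steps):
        if abs(bearing) <= TOLERANCE_DEG:
            break
        apply_turn(turn_command(bearing))
        bearing = get_bearing()
    return bearing
```

On a real robot, `get_bearing` would come from the vision pipeline and `apply_turn` would command the drivetrain; here they are just stand-in callbacks to show the loop structure.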

The innovation of this year's system is that we applied depth-map tracking, using a Kinect-like camera, to find objects on the field. We have been researching methods of extracting information from a depth image for a few years. @faust1706 and I took sample depth-image data from last year's game to test feasibility and wrote programs to detect objects.
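To illustrate the general idea of finding objects in a depth image (this is a toy sketch of one common approach, not the team's actual pipeline; the function name, depth range, and grid format are all assumptions): pixels whose depth falls inside a range of interest are treated as foreground, and a 4-connected flood fill groups them into candidate objects whose centroids can then feed the aiming loop.

```python
from collections import deque

def find_objects(depth, near_mm, far_mm, min_pixels=3):
    """Return the centroid (row, col) of each blob whose depth values fall
    within [near_mm, far_mm]; blobs smaller than min_pixels are noise."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not (near_mm <= depth[r][c] <= far_mm):
                continue
            # Flood-fill one connected component of in-range pixels.
            queue, pixels = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and near_mm <= depth[ny][nx] <= far_mm):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(pixels) >= min_pixels:
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids
```

A real implementation would work on full-resolution depth frames (e.g. via OpenCV or similar), but the segmentation idea is the same: depth thresholding plus connected components is far more robust to field lighting than color thresholding alone.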

Discussion about the vision is here: http://www.chiefdelphi.com/forums/showthread.php?t=133978

Bernini
30-03-2015, 16:01
Thank you so much! I'll ask questions if I have any on that thread.