#1
Question about control system at St Louis Regional
During the autonomous period of quarterfinal 8, 1706 attempted a 3-tote autonomous. They won the Innovation in Control Award for their vision, so I assume that is how they did it. Here is the video: https://www.youtube.com/watch?v=WJCtXqVdJSY
I am wondering if anyone has any information about the vision system / control system that allowed them to automatically correct for the second tote, which had been knocked significantly out of position.
#3
Re: Question about control system at St Louis Regional
If you read through his thread history, you'll find his vision-tracking code dumps.
#4
Re: Question about control system at St Louis Regional
The vision solution tracks the yellow totes on the field. To move toward the next tote, the robot control system drives forward and waits for a vision solution. The solution provides the rotation, in degrees, from the robot's perspective to the center of the tote, which the control system then uses. This forms a closed-loop control system in which the robot keeps adjusting until the vision solution reports that it is perfectly centered (more or less).
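The post above doesn't include the team's actual code, so here is a minimal sketch of that closed loop, assuming a simple proportional correction on the vision angle. The function names (`get_angle_deg`, `set_drive`) and the gain/tolerance values are hypothetical placeholders for whatever the real robot code provides:

```python
# Hedged sketch of closed-loop centering on a vision angle (not 1706's code).
# get_angle_deg: callable returning signed degrees to the tote center,
#                or None when no vision solution is available yet.
# set_drive:     callable taking (left_power, right_power) for a tank drive.
def steer_toward_tote(get_angle_deg, set_drive, tolerance_deg=2.0,
                      kp=0.02, forward_power=0.4, max_iters=100):
    for _ in range(max_iters):
        angle = get_angle_deg()
        if angle is None:
            # No solution yet: keep driving straight and wait for one.
            set_drive(forward_power, forward_power)
            continue
        if abs(angle) <= tolerance_deg:
            return True  # centered (more or less), per the post above
        # Proportional correction: turn toward the tote while moving forward.
        turn = kp * angle
        set_drive(forward_power + turn, forward_power - turn)
    return False  # never converged within the iteration budget
```

A real implementation would run this on the robot's periodic loop rather than a `for` loop, but the control structure is the same: read the angle, compare against a tolerance, apply a turn proportional to the error.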
The innovation in this year's system is that we applied depth-map tracking, using a Kinect-like camera, to find objects on the field. We have spent a few years researching various methods of extracting information from a depth image. @faust1706 and I took sample depth-image data from last year's game to test feasibility and wrote programs to detect objects. Discussion about the vision is here: http://www.chiefdelphi.com/forums/sh...d.php?t=133978
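To illustrate the depth-map idea at its simplest: one common starting point (not necessarily what 1706 did) is to threshold the depth image to a band of distances where a tote could be and take the centroid of the qualifying pixels. The function below is a hypothetical, pure-Python sketch of that step, with the depth image represented as a list of rows:

```python
# Hedged sketch: centroid of pixels within a depth band (illustrative only).
# depth: 2-D list of depth readings (e.g. meters), row-major.
# near, far: inclusive depth range where the target object is expected.
def find_object_center(depth, near, far):
    count = row_sum = col_sum = 0
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if near <= d <= far:
                count += 1
                row_sum += r
                col_sum += c
    if count == 0:
        return None  # no pixels in the band: no object detected
    return (row_sum / count, col_sum / count)  # (row, col) centroid
```

The column of the centroid, compared against the image center, gives the rotation error a control loop like the one in the post above could consume; production code would use a vision library's connected-component or contour tools instead of a bare centroid.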
#5
Re: Question about control system at St Louis Regional
Thank you so much! I'll ask questions if I have any on that thread.