Quote:
Originally Posted by theprgramerdude
I believe the challenge was a worth goal; it surely can be done given a small team of experience programmers with the drive to make something this awesome. However, this game doesn't really suit a full autonomous. Can someone here tell me exactly how complex the code would be to sense out all the tubes flying around the field, and differentiate them from the field, the arena, and other robots? Last year would've been a great game to try it with (not as many objects to deal with, and a more limited space at any given time). This year would just be hell if you have anything less than 10 or 15 sensors, including all the code required to operate them all effectively.
Actually I believe this year would be easier than last year for several reasons:
The pegs are easier to see than the targets.
The pegs convey more information than the goals.
The tubes are easier to see than balls.
Fully autonomous would help if you had a full alliance involved in the cause. I think an autonomous capper could be faster than a human-driven capper: if your alliance partners were dedicated to delivering tubes and your robot's only job was to cap them, I think you could get the average down to about 10 seconds per find-and-cap by staying in your end zone.
One thing I've noticed a lot is alliance partners bumping into one another. Putting an autonomous robot on one rack and having one runner would free up quite a bit of the field.
Again, you wouldn't want to try to track the tubes flying around in the air, but tubes on the ground are certainly detectable, as they are large and have distinctive shapes. OpenCV contains shape-matching functions that could help there. You would drive around scanning for a tube lying on the field.