11-01-2006, 16:06
dm0ney (Deepak Mishra), Team 217 (The ThunderChickens), Alumni
Re: 2006 Autonomous Disappointment

Quote:
Originally Posted by cbolin
Hi,
Coding your bot to drive and shoot in 10 seconds is really challenging...especially if you want to shoot at 100% (10 of 10 balls). This is far more difficult than raising a tetra and sitting it on the center goal like we did a few times last year in autonomous.

Consider this aspect...Software assist during the 120 second drive time. Example:
1. Laptop computer with graphic display connected to Dashboard serial port.
2. Camera x,y data being used to move cross-hairs on laptop.
3. The gunner (a human) uses the crosshairs on the laptop display to move turret azimuth and elevation into close proximity, fires 1 or 2 tracer rounds to walk onto the target, and then feeds the balls into the target. One team is allowing for loading of over 25 balls.
4. Drive team can be in opposing corner from the robot shooting diagonally while the bot is moving.

Squeezing the most useful information from the sensors with good code is much more challenging this year. Adding X,Y sensors plus a yaw-rate sensor to dead-reckon is cool.

So, think of a 130 second autonomous mode with driver assist for 120 seconds. :-)

Regards,
ChuckB
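The X,Y-plus-yaw-rate dead reckoning ChuckB mentions could be sketched roughly like this. The wheel geometry, encoder resolution, and loop period below are illustrative assumptions, not values from his post:

```python
# Hypothetical dead-reckoning sketch: integrate a yaw-rate gyro reading and
# wheel encoder counts into a field-relative (x, y, heading) estimate.
import math

WHEEL_CIRCUMFERENCE_IN = 18.85   # assumed 6-inch wheel
COUNTS_PER_REV = 128             # assumed encoder resolution
DT = 0.0262                      # assumed control-loop period, ~26.2 ms

class DeadReckoner:
    def __init__(self):
        self.x = 0.0        # inches
        self.y = 0.0        # inches
        self.heading = 0.0  # radians, 0 = facing downfield

    def update(self, encoder_counts, yaw_rate_rad_s):
        """Advance the pose estimate by one control-loop tick."""
        distance = encoder_counts / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_IN
        self.heading += yaw_rate_rad_s * DT
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)
```

Driving straight (zero yaw rate) just accumulates encoder distance along the current heading; any turn rotates the heading first so the distance lands in the right direction. Gyro drift makes this degrade over time, which is why it pairs well with the human-assist crosshairs rather than replacing them.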
ChuckB's driver-assist idea is exactly the kind of thing I envisioned: the dashboard program can be utilized even further to bring more autonomous functions into the human mode.

Last year, we had buttons for predetermined arm positions such as 'short goal w. one tetra', 'center goal w. one tetra', and 'fully stowed'.
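A minimal sketch of those preset buttons: map each dashboard button to a stored mechanism setpoint, so one press replaces manual positioning. The names and setpoint values here are made up for illustration, not our actual 2005 numbers:

```python
# Sketch of preset-button arm control: each dashboard button selects a
# predetermined arm setpoint (e.g. a potentiometer target).
ARM_PRESETS = {
    "short_goal_one_tetra": 212,   # hypothetical pot target
    "center_goal_one_tetra": 378,
    "fully_stowed": 40,
}

def arm_target_for(button, current_target):
    """Return the new arm setpoint; keep the old one if no preset pressed."""
    return ARM_PRESETS.get(button, current_target)
```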

This year, I can see using the camera, or a combination of sensors, to drive a point-and-shoot dashboard interface: line up the crosshairs and fire.
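One way the point-and-shoot loop could work: the error between the camera's reported target centroid and the crosshair center drives the turret's azimuth and elevation proportionally. The image resolution and gains below are assumptions for the sketch:

```python
# Sketch of crosshair-driven turret aiming: nudge azimuth/elevation in
# proportion to the pixel error between target centroid and crosshairs.
IMG_W, IMG_H = 176, 144        # assumed camera resolution
KP_AZ, KP_EL = 0.05, 0.05      # hypothetical proportional gains

def turret_correction(target_x, target_y,
                      cross_x=IMG_W // 2, cross_y=IMG_H // 2):
    """Return (azimuth_step, elevation_step) servo adjustments in degrees."""
    d_az = KP_AZ * (target_x - cross_x)
    d_el = KP_EL * (cross_y - target_y)   # image y grows downward
    return d_az, d_el
```

When the target sits dead on the crosshairs the correction is zero and the gunner fires; anywhere else, the turret steps toward it each loop.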

An alternative might be to skip camera targeting entirely and instead use predetermined field positions and different shooting modes to line up the crosshairs.
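The preset-position alternative could look something like this: store a few known shooting spots and precompute the aim for each. The coordinates and goal location here are illustrative, not real 2006 field measurements:

```python
# Sketch of preset-field-position aiming: look up a stored shooting spot
# and compute turret azimuth and range to a fixed goal from plane geometry.
import math

GOAL = (0.0, 144.0)   # hypothetical goal position, inches

SHOOTING_SPOTS = {
    "left_corner": (-100.0, 0.0),
    "center_line": (0.0, 0.0),
    "right_corner": (100.0, 0.0),
}

def aim_for(spot):
    """Return (azimuth_deg, range_in) from a stored spot to the goal."""
    sx, sy = SHOOTING_SPOTS[spot]
    dx, dy = GOAL[0] - sx, GOAL[1] - sy
    return math.degrees(math.atan2(dx, dy)), math.hypot(dx, dy)
```

The range value would then pick the shooting mode (flywheel speed or elevation), so the drivers only have to hit the spot and press the matching button.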

I think this year's autonomous mode leaves quite a bit to the imagination, and it also forces programmers to start thinking about more code in the human mode.
__________________
Alumni, Team #217, The ThunderChickens
Student, Class of 2009, California Institute of Technology