#8   10-07-2014, 07:07
Aren Siekmeier
on walkabout
FRC #2175 (The Fighting Calculators)
Team Role: Mentor
 
Join Date: Apr 2008
Rookie Year: 2008
Location: South Korea
Posts: 735
Re: Hypothetical auto programming method

Quote:
Originally Posted by AustinSchuh View Post
While this sounds great in theory, robots don't do the same thing twice without software support.

For example, different batteries have different internal resistances and hold different charges, resulting in different amounts of power being supplied to the wheels. This results in them responding differently.

When you drive a robot, as the tread and carpet wear, it responds differently when turning, and will coast different amounts when power is cut.

Different starting positions will run the robot over different carpet (the floor isn't flat), or the robot will rock a different way at critical points in the path, resulting in the wheels catching on the carpet differently, resulting in different motion.

It is really hard to control every last variable so that the robot responds the exact same way each time. This is typically solved by implementing feedback loops to correct for deviations in the response. You could implement all your driver controls so that they provide inputs to feedback loops. That would make replaying the driver commands during auto mode work like you imagined, assuming that the error left over from your feedback loops is low enough to achieve the goal.
And with control loops good enough to correct for that variety of disturbances, the recorded inputs probably end up looking a lot like "follow this path" and "do this at this time" instead of direct control over actuators. Which is exactly what existing top-notch autonomous modes already do: schedule goal states for existing control loops.
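To make that concrete, here's a minimal sketch (all names hypothetical, not any team's actual code) of an autonomous routine written as scheduled goal states driving a simple proportional feedback loop, with a toy plant standing in for the drivetrain:

```python
# Hypothetical sketch: autonomous as "do this at this time" goal states
# fed to a feedback loop, rather than replayed joystick values.

def p_control(setpoint, measured, kp=0.5):
    """Proportional feedback: effort proportional to error."""
    return kp * (setpoint - measured)

# Schedule of (time_s, goal_position) pairs -- the whole auto routine.
SCHEDULE = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]

def goal_at(t):
    """Most recently scheduled goal state at time t."""
    goal = SCHEDULE[0][1]
    for when, g in SCHEDULE:
        if t >= when:
            goal = g
    return goal

def run_auto(dt=0.02, duration=3.0):
    position = 0.0
    t = 0.0
    while t < duration:
        effort = p_control(goal_at(t), position)
        position += effort * dt * 10.0  # toy plant: effort moves the robot
        t += dt
    return position
```

Because the loop keeps pulling toward the scheduled goal, battery sag or carpet wear shows up as a disturbance the controller corrects, rather than an error that accumulates the way it would with open-loop playback.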

Edit: To elaborate a little more. If we know ahead of time what the bot will do in autonomous (up to reading a switch/sensor/packet to select paths or hot goals), programming the precise goal states we want will give better results than recovering those goal states from recorded joystick inputs. The human element belongs where the system needs to adapt, and it trades away precision in the results.

I believe the recording used to train industrial robots captures only waypoints along the motion. Those waypoints must be provided by a human who knows the process; the control system for the articulation fills in the gaps between them.
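That teach-pendant idea can be sketched in a few lines (hypothetical names, linear interpolation chosen for simplicity; real articulated-arm controllers blend smoother profiles): a human supplies sparse (time, position) waypoints, and the controller generates the continuous motion between them.

```python
# Hypothetical sketch: sparse human-taught waypoints, with the
# controller filling in the motion by linear interpolation.

def interpolate(waypoints, t):
    """Position at time t along (time, position) waypoints."""
    if t <= waypoints[0][0]:
        return waypoints[0][1]
    if t >= waypoints[-1][0]:
        return waypoints[-1][1]
    for (t0, p0), (t1, p1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return p0 + frac * (p1 - p0)

# Three taught waypoints define the whole path; every intermediate
# setpoint is generated, not recorded.
taught = [(0.0, 0.0), (1.0, 3.0), (2.0, 3.0)]
```

The human only decides *where* the motion must pass through; the dense stream of setpoints the feedback loop actually tracks comes from the interpolator.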

Last edited by Aren Siekmeier : 10-07-2014 at 07:15.