Quote:
Originally Posted by compwiztobe
And with control loops good enough to reproduce that variety of inputs, the inputs probably end up looking a lot like "follow this path" and "do this at this time" instead of direct control over actuators. Which is exactly what existing top-notch autonomous modes already do: schedule goal states for existing control loops.
Edit: To elaborate a little more. If we know what the bot will do in autonomous ahead of time (up to reading a switch/sensor/packet to select paths or hot goals), programming the precise goal states we want will give better results than supplying these goal states by recording joystick inputs. The human piece of the puzzle always comes in when the system needs to adapt, and sacrifices precision in the results.
I believe the recording used in training industrial robots only captures waypoints for the motion. These need to be provided by a human who knows the process. The control system for the articulation fills in the gaps in the motion.
Could you expand a little more on what you mean? Do you mean recording the joystick inputs and then having the computer just drive to certain points from that recording? Something like the sketch below?
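
Here's a rough plain-Java sketch of what I think you're describing: a timed list of goal states fed to the same control loop the robot already runs, instead of replaying raw joystick values. None of this is real WPILib code; the class names, gains, and setpoints are made up just to illustrate the idea.

import java.util.List;

public class GoalStateAuto {

    // A goal state: where the mechanism should be, and when to start driving there.
    record Goal(double startTimeSec, double setpoint) {}

    // The same simple proportional controller the robot would already use in teleop.
    static class PController {
        final double kP;
        PController(double kP) { this.kP = kP; }
        double calculate(double setpoint, double measurement) {
            return kP * (setpoint - measurement); // output proportional to error
        }
    }

    public static void main(String[] args) {
        // The autonomous "program" is just scheduled goal states, not recorded joystick values.
        List<Goal> plan = List.of(
                new Goal(0.0, 0.0),   // hold the starting position
                new Goal(1.0, 2.0),   // after 1 s, drive to 2.0 units
                new Goal(3.0, 5.0));  // after 3 s, drive to 5.0 units

        PController loop = new PController(4.0); // made-up gain for this simulation
        double position = 0.0;                   // simulated mechanism position
        final double dt = 0.02;                  // 50 Hz control period

        for (int step = 0; step * dt < 5.0; step++) {
            double t = step * dt;

            // Pick the most recent goal whose start time has passed (plan is time-ordered).
            double setpoint = plan.get(0).setpoint();
            for (Goal g : plan) {
                if (t >= g.startTimeSec()) setpoint = g.setpoint();
            }

            // The existing control loop fills in the motion between goal states.
            double output = loop.calculate(setpoint, position);
            position += output * dt; // crude plant model: velocity proportional to output

            if (step % 50 == 0) {    // print once per simulated second
                System.out.printf("t=%.1fs  setpoint=%.1f  position=%.2f%n", t, setpoint, position);
            }
        }
    }
}

Is that roughly the idea?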