Learning Autonomous Mode?

So an interesting idea was proposed to me last night, and I wanted some input on how feasible it is. Is it possible to put a robot in autonomous mode and have it learn the correct path to take, then remember it for future reference? Almost all industrial autonomous robots have this ability, through either software or hardware. For example, put encoders on all of the wheel outputs to count revolutions, so that once the robot has gone the right distance it remembers how it got there. Maybe a combination of sonar and encoders could help avoid unnecessary collisions in autonomous mode. I program in LabVIEW right now, and our autonomous mode is mostly a state machine structured program. Just looking for some input.

John Fogarty
Programmer M’Aiken Magic 1102
Programmer/Captain GForce 3864

This is definitely possible. It may not be exactly the same each time, but it will be close. As far as LabVIEW goes I’m not sure, but with Java or C++ you could log what your driver does and then replay it.

Also, due to the inaccuracies of wheels, I would recommend logging the gyro too. The combination of joystick inputs and a gyro should be pretty solid.
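To make that concrete, here’s a minimal Java sketch of a logger you’d call once per control loop; the CSV layout and the idea of passing in joystick and gyro readings each cycle are just my illustration, not any team’s actual format:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Logs one CSV row per control-loop iteration so a run can be replayed later.
public class DriveLogger {
    private final PrintWriter out;

    public DriveLogger(String path) throws IOException {
        out = new PrintWriter(new FileWriter(path));
        out.println("timeMs,joyX,joyY,gyroDeg"); // CSV header
    }

    // Call once per loop (e.g., every 20 ms) with the current inputs.
    public void log(long timeMs, double joyX, double joyY, double gyroDeg) {
        out.printf("%d,%.3f,%.3f,%.2f%n", timeMs, joyX, joyY, gyroDeg);
    }

    public void close() {
        out.close();
    }
}
```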

This is the problem with a sequentially taught algorithm, though: once the robot gets off track, if it is simply following a sequence of memorized steps, it most likely won’t be able to find its way back to that sequential path.

That being said, here is a clip from our 2006 robot: http://video.google.com/videoplay?docid=-5902573997861101882#
We used a sequential (dead reckoning) approach to get to the goal, and then a state machine once we were there (finding, aiming, and shooting).

What this allowed us to do was change the angle we pointed the robot for the initial approach, and that’s how we were able to dodge the robot that was going to block us. However, if we were bumped during the sequential phase or changed the angle too much, we were done for (which is why we lost the finals:
http://video.google.com/videoplay?docid=-5902573997861101882#docid=-4391866434275131148).

What you are proposing works well for accomplishing goals like this year’s, where you did not have to be overly accurate and there was not much defense. It would also have been good if you wanted several approach paths in '06: you could quickly record them and rerun them. However, if accuracy is an issue, or there is a ton of defense ('09), a state machine may be better.

A sequential set of state machines would have worked well for this year: sequential for driving, and a state machine for ball detection and kicking.
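For illustration, here’s a rough Java sketch of what that ball-handling state machine could look like; the hasBall/aimedAtGoal/kickerReady inputs are hypothetical sensor reads, not from any real robot code:

```java
// Simple three-state machine: look for a ball, line up, then fire the kicker.
public class KickerStateMachine {
    enum State { SEEKING, AIMING, KICKING }

    private State state = State.SEEKING;

    // Call once per loop; returns the motor command for the kicker.
    public double update(boolean hasBall, boolean aimedAtGoal, boolean kickerReady) {
        switch (state) {
            case SEEKING:
                if (hasBall) state = State.AIMING;
                return 0.0;
            case AIMING:
                if (aimedAtGoal && kickerReady) state = State.KICKING;
                return 0.0;
            case KICKING:
                state = State.SEEKING; // fire once, then look for the next ball
                return 1.0;            // full power to the kicker this cycle
        }
        return 0.0;
    }
}
```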

So I would say sequential is good for joystick logs, encoders, dead reckoning, accelerometers, and gyros.

If you are using any other sensors they should be in a state machine.

I made something like this for LabVIEW last year during the summer, except instead of using encoders or other sensors, I just used joystick values recorded at a regular interval. I was going to use encoders, but the robot I was working with was not equipped with encoders on the drivetrain (nor anywhere else).

I had it working quite well, though it required modifying the code to change which data you want to record and, later when replaying, what to do with it.

I was working on a version this year to record everything plus a few other things, but I’m having a bit of trouble with the file I/O and clusters, and I haven’t worked on it since the beginning of the season.

You can see a video of it here:

It’s not entirely the best system, but it worked OK just relying on joystick data with no sensors. It would probably work better with sensors.

If you want to know anything else, ask.

-Tanner

Tanner! Are you saying you made a program that basically records what you do while driving, and then the robot goes back and does it on its own? Like a record/replay sort of thing? THAT sounds awesome! May I get a copy of the code to look at?

The biggest problem with recording the driver input is that as the battery voltage changes, so does the speed of the motors. If you record the speed from the encoders and control to that, and trigger things like kicker motions off encoder distances, then assuming no wheel slip, you will be much more accurate than simply recording driver input over time.
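A sketch of that closed-loop idea in Java, assuming encoder rates in ticks per second; the gain is a placeholder you’d tune on the actual robot:

```java
// Drives the motor so the measured encoder rate tracks the recorded rate,
// instead of replaying raw driver input. Because the loop works on error,
// a sagging battery just means the output climbs a little higher.
public class SpeedReplay {
    private final double gain = 0.05; // correction per tick/sec of error (tune this)
    private double motorOutput = 0.0;

    // recordedRate and measuredRate are both in encoder ticks per second.
    public double update(double recordedRate, double measuredRate) {
        double error = recordedRate - measuredRate;
        motorOutput += gain * error; // accumulate so the output compensates for load
        return Math.max(-1.0, Math.min(1.0, motorOutput));
    }
}
```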

Yeah, that would be the idea: drive it while recording encoder data, then take the encoder data to build the autonomous routine. Our Palmetto Regional kicker was a motorized paddle wheel, so we could have used it there. But yeah, if only I had a clue how to begin trying this.

From the video, it looks like the issue occurred during the rotation. The encoders would have the same issue, as they would see different slip each time. If I were to try it, I would hook up a gyro and two joysticks: one for going straight, one for rotation (just so you don’t do both at the same time). Driving straight could be done by the joystick, and then any time you rotate, the code could reference the gyro values. By doing it this way, the robot could probably accomplish the task faster, as you could speed up the rotation period.

Also, by using the gyro, you could make sure that straight was actually straight.

The encoders may be a good idea for the straight portion, though, as long as there is not too much slip. For going straight, if you have a reference point, a rangefinder may work well too, allowing the robot to speed up the process faster than a driver could.
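Here’s roughly what referencing the gyro during a rotation could look like, as a simple proportional turn loop in Java; the gain and tolerance are illustrative numbers only:

```java
// Turns the robot toward a target heading using the gyro as feedback.
public class GyroTurn {
    private final double gain = 0.02;        // degrees of error -> motor output (tune)
    private final double toleranceDeg = 2.0; // "close enough" band

    // Returns a rotation command in [-1, 1]; zero once on target.
    public double update(double targetDeg, double gyroDeg) {
        double error = targetDeg - gyroDeg;
        if (Math.abs(error) < toleranceDeg) return 0.0;
        double cmd = gain * error;
        return Math.max(-1.0, Math.min(1.0, cmd));
    }
}
```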

I believe that wheel slip is the most variable thing in autonomous, so I would tend to lean away from anything dealing directly with the wheels (i.e., encoders, joystick values).

Yeah, that’s pretty much what it does. In the LabVIEW VIs I actually have a giant “record/replay” button. :P

The code is actually on a desktop computer at the school, so it’ll be a few days as I need to bring that computer home.

The robot did run into a trashcan, though, which is a problem caused by trying to align the robot with no markings on the floor. The slick wheels probably aren’t helping the situation either.

As I said in an earlier post, the robot wasn’t outfitted with sensors, so I didn’t use them. If I had them on the robot, I would certainly have chosen those over the joystick values. If I have time this summer, I might play with this year’s robot, which has a few sensors on the drivetrain I could use (plus no more of those slick wheels, yay).

I’m not sure how you would get around the wheels slipping without some sort of positioning system.

It works well as long as you can get the robot aligned in the right position every time. With my team’s robot, we probably could have used this system this year and last year; both had really simple autonomous modes.

Cheers
-Tanner

When driving straight or in arcs, the wheel slip should be negligible. Turning sharply is different, but there the gyro is preferred over the encoders anyway.

Story time: back at Kettering (week one), I had no feedback on speed during autonomous. There was no compensation for drift; the robot just drove at X speed for Y distance and kicked. It was not reliable. After the competition, I tuned speed feedback on each side. It then drove straight and had no issues with failing to move when the battery was low. Use sensors. They make life easier.
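A sketch of that per-side speed feedback, assuming one drivetrain encoder per side reporting ticks per second; the gain is a placeholder to tune:

```java
// Each side closes its own speed loop, so drift between the sides and low
// battery voltage are both corrected automatically.
public class StraightDrive {
    private final double gain = 0.0005; // correction per tick/sec of error (tune)
    private double left = 0.0, right = 0.0;

    // targetRate: desired speed for both sides; returns {leftOut, rightOut}.
    public double[] update(double targetRate, double leftRate, double rightRate) {
        left  += gain * (targetRate - leftRate);
        right += gain * (targetRate - rightRate);
        left  = Math.max(-1.0, Math.min(1.0, left));
        right = Math.max(-1.0, Math.min(1.0, right));
        return new double[] { left, right };
    }
}
```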

We implemented this type of system this year. I’m trying to find time to write a white paper about it soon.

We implemented a recording system for our robot this year, written in both C++ and Java. The main reason was to let us review after a match what happened out on the field, so we captured all user and sensor data and stored it in a text file. After implementing the recorder, we verified that it had minimal impact on the run-time behavior of the robot. Each record contains a timestamp and is written only when the user inputs change. There’s plenty of flash disk space, so this was never a problem, but a match would produce thousands of records.
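A minimal Java sketch of that change-triggered scheme; the CSV layout and the particular fields sampled here are my own illustration, not team 303’s actual format:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Writes a timestamped record only when the user inputs differ from the
// previous record, keeping the file small while capturing every change.
public class MatchRecorder {
    private final PrintWriter out;
    private String lastInputs = null;

    public MatchRecorder(String path) throws IOException {
        out = new PrintWriter(new FileWriter(path));
    }

    // Sample once per loop; sensor data (here, the gyro) rides along with each record.
    public void sample(long timeMs, double joyX, double joyY, double gyroDeg) {
        String inputs = String.format("%.3f,%.3f", joyX, joyY);
        if (!inputs.equals(lastInputs)) { // skip records where inputs are unchanged
            out.printf("%d,%s,%.2f%n", timeMs, inputs, gyroDeg);
            lastInputs = inputs;
        }
    }

    public void close() { out.close(); }
}
```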

With the recorder in place, we then proceeded to write a replay function. The file can be read and the data used to replay the user’s recorded actions. For the first version of the code, we simply ignored the sensor data. This, as has been pointed out, means the robot doesn’t do quite the same thing every time. Without sensors, you are using time as the means of controlling behavior: 3 seconds forward, 2 seconds back, etc. Since the mechanical system has quite a bit of variance, this isn’t particularly accurate. It can be improved by using the captured sensor data. Say you have a gyro and you record the gyro data. Then during replay you can compare the current gyro reading to the recorded one, and if they aren’t the same (within some epsilon), you adjust the movement of the robot to compensate for the drift.
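One possible shape for that gyro-corrected replay step; the epsilon and correction gain here are placeholder values to tune:

```java
// Replays the recorded drive commands, nudging the turn command whenever
// the live gyro reading drifts from the recorded one.
public class CorrectedReplay {
    private static final double EPSILON_DEG = 1.0; // acceptable heading error
    private static final double GAIN = 0.03;       // correction per degree of drift

    // Returns {forward, turn} to send to the drive.
    public double[] step(double recForward, double recTurn,
                         double recGyroDeg, double liveGyroDeg) {
        double turn = recTurn;
        double drift = recGyroDeg - liveGyroDeg;
        if (Math.abs(drift) > EPSILON_DEG) {
            turn += GAIN * drift; // steer back toward the recorded heading
        }
        return new double[] { recForward, turn };
    }
}
```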

Finally, we had our driver do what we wanted for our autonomous mode, pulled the file off the cRIO, cleaned it up a bit, and then saved it back on the cRIO as autonomousXXX.txt. All our autonomous code then did was read a switch that indicated which autonomous file to run, open that file, and replay it.

At the end of the day, there are a lot of uses for the recorded data, above and beyond just finding out what the robot thought it was doing out on the field.
Some that are on our to-do list:

  1. Macro actions, saved as files and replayed.
  2. Creating a simulation so we could replay the files and see what happened during the match.

—Michael J Coss
Lead programming mentor for Team 303

I’m wondering if you would be willing to contribute this to the Bobotics ADK, or open-source it so I could? This is one tool I’d like to provide to all teams.

Sure. It needs cleaning up, and as I said, right now it just replays joystick data and ignores the sensor data. I really wanted to make it a bit more flexible with regard to the data capture (what gets captured, and when) and how the replay works. It’s going to be my summer project.

—Michael J Coss