Hypothetical auto programming method

I am not a programmer, so I was wondering if the idea I had would even be plausible or worthwhile. In many video games the CPU will record your key presses and the various other actions you make with a d-pad. For example, the FIFA soccer app can replay what you just did in the game and show what motions you made with your finger on the screen. Forza 3 can also, when you are using a racing wheel and pedal set, replay what you did on screen and move the steering wheel according to what you did.
This leads me to the idea:
Have a driver drive the autonomous mode on the practice field. While he is doing this, record the key presses and d-pad movements. Then the programming team could write a program that just sends the 10 seconds of commands the driver made to the robot during autonomous. It would be a quick way to make a deadly autonomous.
I think this could lead to so many more variations for autonomous if it actually worked.

Is it actually possible for this to be done? I am not a programmer, so I would assume this would take a lot of programming to actually make it work. Being a driver myself and having to set up for autonomous, it would be great to be able to just spend a minute on the practice field and fix the autonomous between matches if it doesn't work.

That is actually a brilliant idea and it is possible.

You’d have to devise a method for recording what buttons (and joystick positions) are pressed and for how long, then find a way to read that in order during autonomous.

This could be as simple as saving everything to a logfile (like an Excel file) where the first column is time, the second is whether A was pressed (1 or 0), the third is whether B was pressed, and the fourth is the Y position of a joystick.
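Not anyone's actual code, but here's roughly what that logging idea could look like in Java. The column layout and the sample rate are just placeholders; you'd wire in whatever your joystick API gives you each loop.

import java.io.FileWriter;
import java.io.IOException;
import java.util.Locale;

// Minimal sketch of the logfile idea: one row per sample with a timestamp,
// two button states (1 or 0), and one joystick axis.
public class DriverLogger {
    private final FileWriter out;

    public DriverLogger(String path) throws IOException {
        out = new FileWriter(path);
        out.write("time,buttonA,buttonB,joystickY\n"); // header row
    }

    // Call once per control loop iteration (e.g. every 20 ms).
    public void logSample(double timeSeconds, boolean aPressed,
                          boolean bPressed, double joystickY) throws IOException {
        out.write(String.format(Locale.US, "%.3f,%d,%d,%.3f%n",
                timeSeconds, aPressed ? 1 : 0, bPressed ? 1 : 0, joystickY));
    }

    public void close() throws IOException {
        out.close();
    }
}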

This is a brilliant idea.

While this sounds great in theory, robots don’t do the same thing twice without software support.

For example, different batteries have different internal resistances and hold different charges, resulting in different amounts of power being supplied to the wheels. This results in them responding differently.

When you drive a robot, as the tread and carpet wear, it responds differently when turning, and will coast different amounts when power is cut.

Different starting positions will run the robot over different carpet (the floor isn't flat), or the robot will rock a different way at critical points in the path, so the wheels catch the carpet differently and the motion changes.

It is really hard to control every last variable so that the robot responds the exact same way each time. This is typically solved by implementing feedback loops to correct for deviations in the response. You could implement all your driver controls so that they provide inputs to feedback loops. That would make replaying the driver commands during auto mode work like you imagined, assuming that the error left over from your feedback loops is low enough to achieve the goal.
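To make the "driver controls feed a feedback loop" idea concrete, here is a minimal sketch in Java of a stick commanding a wheel velocity instead of a raw motor power. The max speed, gain, and encoder plumbing are assumptions for illustration, not any particular team's code.

// The driver's stick sets a velocity setpoint; an encoder measurement
// closes the loop so the same stick input produces the same motion
// regardless of battery or surface differences (within the loop's ability).
public class VelocityFollower {
    private static final double MAX_SPEED = 3.0; // meters per second, assumed
    private static final double KP = 0.5;        // proportional gain, needs tuning

    // joystick in [-1, 1]; measuredSpeed from an encoder, in m/s.
    // Returns a motor output in [-1, 1].
    public double calculate(double joystick, double measuredSpeed) {
        double targetSpeed = joystick * MAX_SPEED;    // driver commands a velocity
        double error = targetSpeed - measuredSpeed;   // closed-loop correction
        double feedforward = targetSpeed / MAX_SPEED; // open-loop guess
        double output = feedforward + KP * error;
        return Math.max(-1.0, Math.min(1.0, output));
    }
}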

Incidentally, this technique is used to set up industrial robots. Generally, those do the same thing over and over… but I suspect that it wouldn’t be too difficult to program an FRC robot to pull different “training logs” to do different things.

However, there is one minor issue.

Practice fields at FRC events are NOT representative of actual field distances. They are practice elements (that match up in terms of size to actual elements) that are set up in some semblance of the actual thing, on a patch of carpet of whatever size is available in whatever space is available. But there is no guarantee of same distance, same reaction when hit by a ball, etc.

I suggest, in view of that, that the hardware side make some effort to pass inspection in time to take a lot of trips through the filler line onto the competition field, and that one button be designated as the "train" button, triggering a different savelog every time it is pressed and held.

I think this is a great idea. It’s so good, it’s been done. There have been a couple of additional attempts to use this method in the past as well. I searched threads for ‘autonomous record playback’.

This sounds very similar to what 1717 did in the book "The New Cool" and in real life. They would record their autonomous with the robot on the practice field and replay it. The user above has graciously shared the program's link (thanks markmcgary for saving me some time).

2363 did this in Ultimate Ascent for the tower hang. They used a program that, when the button was pressed, would take the signals from the driver and save them. Then, when the game was about to end, the driver would hit a button and the entire hanging process was automated. As long as the robot was aligned with the pyramid, it would hang perfectly. Although it was during the teleop period, I think the same type of thing could work with basic maneuvering.

And with control loops good enough to reproduce that variety of inputs, the inputs probably end up looking a lot like “follow this path” and “do this at this time” instead of direct control over actuators. Which is exactly what existing top-notch autonomous modes already do: schedule goal states for existing control loops.

Edit: To elaborate a little more. If we know what the bot will do in autonomous ahead of time (up to reading a switch/sensor/packet to select paths or hot goals), programming the precise goal states we want will give better results than supplying these goal states by recording joystick inputs. The human piece of the puzzle always comes in when the system needs to adapt, and sacrifices precision in the results.

I believe the recording used in training industrial robots only captures waypoints for the motion. These need to be provided by a human who knows the process. The control system for the articulation fills in the gaps in the motion.

Could you expand some more on what you mean? You mean record the joystick and have the computer just drive to certain points in it?

Recording joystick inputs involves a human supplying output states at all points along the motion. The resulting motion can be erratic, and may not even end up in the right spot, or follow the right path at all.

The closed loop controllers that exist in FRC already take a setpoint in the form of an encoder position or velocity and transform them in a way to get outputs that converge quickly on the setpoint. Since we know in autonomous where we want to be or how fast to go, we can give the controller these setpoints exactly instead of bothering with the fuzzy joystick interface. Some controllers are even good enough that we can give them multiple setpoints in quick succession in order to follow a desired path. But this path can also be computed to satisfy certain criteria for smoothness, rate, etc., rather than relying on a human to “draw” it very approximately with a joystick.
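A rough sketch in Java of what "giving the controller the setpoints exactly" might look like: a list of encoder-position targets handed to a simple proportional loop one at a time. The gain, tolerance, and drive interface are illustrative assumptions, not any specific FRC library.

import java.util.List;

// Steps through a hand-written list of position setpoints instead of
// replaying fuzzy joystick values.
public class SetpointSequence {
    private final List<Double> setpoints; // encoder positions, e.g. in meters
    private int index = 0;
    private static final double KP = 0.02;
    private static final double TOLERANCE = 0.05;

    public SetpointSequence(List<Double> setpoints) {
        this.setpoints = setpoints;
    }

    // Call every control cycle with the current encoder reading; returns the
    // motor output and advances to the next setpoint once close enough.
    public double update(double measuredPosition) {
        if (index >= setpoints.size()) {
            return 0.0; // sequence finished
        }
        double error = setpoints.get(index) - measuredPosition;
        if (Math.abs(error) < TOLERANCE) {
            index++;
        }
        return Math.max(-1.0, Math.min(1.0, KP * error));
    }
}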

The way we have programmed our robot's autonomous has been very similar to that for years now (at least since 2009).

Our autonomous mode is a simple script (similar to ABS) that in essence spoofs any button command, like


Forward 1.0 10.0
Wait 2.000
Shoot

would drive forward at 100% for 10 feet, wait 2 seconds, then fire.
It allows us to quickly change autonomous mode while testing and such, because we are only altering a .txt file that doesn’t have to compile.
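For anyone curious what interpreting a script like that involves, here is a hedged sketch in Java. The three commands and the driveForward/waitSeconds/shoot handlers are stand-ins for whatever your robot code exposes, not the poster's actual implementation.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Reads a plain .txt script line by line and dispatches each command.
public class AutoScript {
    public void run(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] tokens = line.trim().split("\\s+");
                if (tokens[0].isEmpty()) {
                    continue; // skip blank lines
                }
                switch (tokens[0]) {
                    case "Forward": // power in [0, 1], distance in feet
                        driveForward(Double.parseDouble(tokens[1]),
                                     Double.parseDouble(tokens[2]));
                        break;
                    case "Wait":
                        waitSeconds(Double.parseDouble(tokens[1]));
                        break;
                    case "Shoot":
                        shoot();
                        break;
                    default:
                        System.err.println("Unknown command: " + tokens[0]);
                }
            }
        }
    }

    // Placeholders: wire these to your drive, timer, and shooter code.
    private void driveForward(double power, double feet) { /* ... */ }
    private void waitSeconds(double seconds) { /* ... */ }
    private void shoot() { /* ... */ }
}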

All we are missing is some bit of software (on the DS or cRIO) that would provide a record function and spit out a .txt

For the actual movement during auto, you should not be recording the joystick movement. If you want it to be accurate, you should put encoders on the wheels and record how far each side travels. This would allow you to compensate for different voltages and surfaces, because you are not sending the motors a specific power for a specific time; instead, you are telling them to move a specific distance.
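As a rough illustration of driving to recorded distances rather than replaying powers, here is a small Java sketch. The gain and the left/right encoder plumbing are assumptions.

// Drives each side of the drivetrain toward a recorded target distance.
public class DistancePlayback {
    private static final double KP = 0.8; // output per meter of error, needs tuning

    // Call every cycle with the recorded targets and current encoder readings;
    // returns {leftOutput, rightOutput} in [-1, 1].
    public double[] follow(double targetLeft, double targetRight,
                           double measuredLeft, double measuredRight) {
        double left = clamp(KP * (targetLeft - measuredLeft));
        double right = clamp(KP * (targetRight - measuredRight));
        return new double[] { left, right };
    }

    private double clamp(double value) {
        return Math.max(-1.0, Math.min(1.0, value));
    }
}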

Figured I should chime in, as one of the writers of that code.

It worked by writing the joystick axes and buttons to a file on the cRIO in 10 ms increments (it doesn't actually write every 10 ms; it stores the values in an array until the recording finishes and then performs the write). The values are then played back one by one at the same 10 ms intervals.
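The original was LabVIEW, but the buffer-then-write scheme described above might look something like this in Java. The frame contents and file format are simplified placeholders.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Keeps samples in memory during recording, writes them all at the end, and
// hands them back one per tick for playback at the same fixed interval.
public class InputRecorder {
    private final List<double[]> samples = new ArrayList<>();

    // Called every 10 ms while recording; axes/buttons packed as doubles.
    public void record(double[] frame) {
        samples.add(frame.clone());
    }

    // Write the whole buffer once recording is finished.
    public void save(String path) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
            for (double[] frame : samples) {
                StringBuilder row = new StringBuilder();
                for (int i = 0; i < frame.length; i++) {
                    if (i > 0) row.append(',');
                    row.append(frame[i]);
                }
                out.println(row);
            }
        }
    }

    // Playback: return the frame to apply on the i-th 10 ms tick.
    public double[] frameAt(int tick) {
        return tick < samples.size() ? samples.get(tick) : null;
    }
}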

We never used it in competition, nor did we expect this version to be good enough for competition. There are too many unknown factors (battery, mechanical wear, etc.) for open loop control to meet our expectations. But it did perform fairly well: our robot could consistently drive ~10 feet and end up within a foot of the destination. This is “good enough” for some applications.

Some teams only use open loop control in auto. Sensors are often expensive or tricky to implement, and a 90-100% consistent auto just isn’t a big enough priority for these teams to use them. For these teams, a playback auto should be just as good as their normal autonomous, and may be easier to implement. It also may give them more options than they would have had previously.

The further step we wanted to make was to implement closed-loop playback. The setpoint would be inferred from the sensor values during recording (maybe the gains for the control loop could also be inferred from the joystick values). This would hopefully give the reliability of a hand-tuned closed-loop autonomous, but allow the quick implementation of new auto modes.
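For the closed-loop playback idea, a minimal sketch (in Java rather than LabVIEW, and with an assumed proportional gain) might record the encoder value each tick and feed it back later as the setpoint:

import java.util.ArrayList;
import java.util.List;

// Recording pass stores the sensor trajectory; playback pass uses each
// stored value as the setpoint for a simple proportional loop, instead of
// replaying the raw joystick values.
public class ClosedLoopPlayback {
    private final List<Double> recordedPositions = new ArrayList<>();
    private static final double KP = 0.05; // needs tuning

    // Recording pass: call each tick with the live encoder value.
    public void record(double encoderPosition) {
        recordedPositions.add(encoderPosition);
    }

    // Playback pass: the recorded value for this tick becomes the setpoint.
    public double playback(int tick, double encoderPosition) {
        if (tick >= recordedPositions.size()) {
            return 0.0;
        }
        double setpoint = recordedPositions.get(tick);
        return Math.max(-1.0, Math.min(1.0, KP * (setpoint - encoderPosition)));
    }
}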

Unfortunately, our documentation and instructions for the existing code are quite poor. This is a large impediment to its use, as the "target audience" is teams that struggle to implement autonomous at all, or who only do the bare minimum. If we do further work, I would want to rewrite the code using the Motor Get Values VI instead of requiring the user to manually wire the values into a VI. This would hopefully allow the creation of two standalone VIs: one that gets placed in timed tasks and just runs and records on demand, and the other that gets placed in auto. Maybe with this rewrite and better documentation it would be more accessible to teams.

TL;DR We targeted this towards teams that otherwise couldn't do auto, and we need to improve documentation/ease of use to make it helpful.

I know my old team did that way back in 2003, 2 years before I started. Their reason to record joystick movements with dead reckoning instead of using sensors with logic was the very limited amount of memory the controller had, as well as the more limited options / experience with sensors. If I recall correctly, they printed time stamps with controller inputs, then manually coded it into an array.

Now, you can get quite a few digital sensors (I2C) for very cheap, and compared to controllers of old, you have pretty much unlimited memory and processing power. The amount of control you have is staggering.

Robots generally have a hard time replicating the exact same things in such a manner as there are too many variables. What would be interesting however is to record sensor feedback on the drive train while the driver performs the auto-mode, and use that for autonomous.

Sorry if I’m butting in on the conversation here…

I figured I’d share something that we did in 2012 that is similar to this. One of our college-aged mentors, Ryan Nazaretian, made something like this so that we could record autonomous. Rather than recording keystrokes or joystick movements, he used encoders to get the number of ticks on the drivetrain and the shooter and play them back in autonomous mode. I’m not sure how he recorded things like pneumatics, but I will link our code here so that you guys may take a look. I believe that it is included in this build of the code. I was a freshman the year that he made this, so I can’t really provide any detail on how it worked. Here is the link: https://www.dropbox.com/s/b0q8whtwy1q4y2a/2012%20Robot%20Code%20-%20%204-7-2012.zip

Keep in mind, this is in LabVIEW. Sorry if you use C++ or Java!