Record Robot Actions

My coach asked me on the way home from Championship to find out if there was a way to record robot actions and then play them back. My searching so far has proved fruitless and so I decided to post on CD.

Any help would be greatly appreciated! :slight_smile:

In theory, you only need to find a way to record axis values and buttons, and then use them to operate the robot.

I can’t think of an efficient way to do it right now, but you could try creating an array for each button that holds the times (in ms) at which that button changed its value. Using the same method for the axes could take a lot of memory, so let others comment with better ways :stuck_out_tongue:
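
Something like this rough Java sketch (ButtonSource is just a made-up placeholder for however you read a button, e.g. getRawButton) is what I have in mind:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the per-button toggle-time idea, in Java only because it is
// easier to post as text than LabVIEW. ButtonSource is a made-up interface
// standing in for however you read a button on your controller.
public class ButtonToggleRecorder {
    public interface ButtonSource {
        boolean get(int button);   // buttons are 1-indexed on FRC joysticks
    }

    private final ButtonSource source;
    private final boolean[] lastState;
    private final List<List<Long>> toggleTimesMs = new ArrayList<>();
    private final long startMs = System.currentTimeMillis();

    public ButtonToggleRecorder(ButtonSource source, int buttonCount) {
        this.source = source;
        this.lastState = new boolean[buttonCount];
        for (int i = 0; i < buttonCount; i++) {
            toggleTimesMs.add(new ArrayList<Long>());
        }
    }

    // Call periodically (every few ms) while recording; only the moments a
    // button changes value get stored, not every sample.
    public void sample() {
        long now = System.currentTimeMillis() - startMs;
        for (int i = 0; i < lastState.length; i++) {
            boolean state = source.get(i + 1);
            if (state != lastState[i]) {
                toggleTimesMs.get(i).add(now);
                lastState[i] = state;
            }
        }
    }

    // Times (ms since recording started) at which the given button toggled.
    public List<Long> getToggleTimes(int button) {
        return toggleTimesMs.get(button - 1);
    }
}
```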

We did do this a while ago, I believe preseason for 2012. It was cool to see, but since there are so many variables that can change how much an arm moves or how far you drive, it wasn’t as useful as we thought. If you wanted to do this it would be better to record encoder information and make the robot play that back, but we didn’t do that.
Ours was used to attempt a recorded auto, which didn’t work like planned.

I know a team has done it and gotten it working well, so it is possible. Recording the gyro, encoders, and other sensor values seems like what you would need to do.

I think you should first of all see how it handles the head-against-the-wall method: recording every state of every button every 5 ms, and later playing it back. It’s very likely it would run out of memory very fast, but this is the basic version.
From here there are many tricks that can make it more reliable for longer recordings: saving each button separately and only when it toggles, instead of every 5 ms; or saving the joystick positions less often (every 20 ms?) and using the VI that interpolates between points during playback (I forgot what it was called). Those are just two powerful examples, and they will likely get the job done.
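
For the second trick, the playback side can linearly interpolate between the stored samples, so a ~20 ms recording rate still gives smooth output. A rough Java sketch of just that piece (not real team code; the two arrays are whatever you recorded):

```java
// Sketch of playing back an axis that was sampled infrequently, by linearly
// interpolating between the stored points. Assumes sampleTimesSec is sorted
// and the two arrays are the same length.
public class AxisPlayback {
    private final double[] sampleTimesSec;  // when each sample was taken
    private final double[] sampleValues;    // the axis value at that time

    public AxisPlayback(double[] sampleTimesSec, double[] sampleValues) {
        this.sampleTimesSec = sampleTimesSec;
        this.sampleValues = sampleValues;
    }

    // Return the interpolated axis value at an arbitrary playback time.
    public double valueAt(double tSec) {
        if (tSec <= sampleTimesSec[0]) {
            return sampleValues[0];
        }
        int last = sampleTimesSec.length - 1;
        if (tSec >= sampleTimesSec[last]) {
            return sampleValues[last];
        }
        int i = 1;
        while (sampleTimesSec[i] < tSec) {
            i++;                               // find the segment containing tSec
        }
        double t0 = sampleTimesSec[i - 1], t1 = sampleTimesSec[i];
        double v0 = sampleValues[i - 1],  v1 = sampleValues[i];
        double frac = (tSec - t0) / (t1 - t0);
        return v0 + frac * (v1 - v0);          // linear interpolation
    }
}
```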

How you do it might be dependent on exactly what you hope to accomplish with it. Are you trying to duplicate your robot’s movements in a previous match? Are you trying to capture certain movements in order to program an autonomous mode?

There are really three types of information that can be captured: driver input, motor/pneumatic output, and sensor input. I’ll deal with each one separately.

Driver input: This is the easiest to think through: capture the axis values and button pushes as timestamped values, then play those back as the input instead of the driver actually driving the robot. This is also possibly the least informative type of recording you can do - it’s highly dependent on the conditions on the field (starting position, where other robots push you, how your robot might slip or tilt when driving over a Frisbee).
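
A bare-bones sketch of that capture-and-replay idea (Java purely for illustration; all of the names are made up): store one timestamped frame of axes and buttons per loop, and during playback look up the frame for the current time and hand it to the same code that normally reads the driver.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: one frame of driver input per loop iteration.
public class DriverInputLog {
    public static class Frame {
        public final double timeSec;
        public final double[] axes;      // copy of the axis values this iteration
        public final boolean[] buttons;  // copy of the button states this iteration

        public Frame(double timeSec, double[] axes, boolean[] buttons) {
            this.timeSec = timeSec;
            this.axes = axes.clone();
            this.buttons = buttons.clone();
        }
    }

    private final List<Frame> frames = new ArrayList<>();

    // Recording: call once per teleop loop with the live joystick values.
    public void record(double timeSec, double[] axes, boolean[] buttons) {
        frames.add(new Frame(timeSec, axes, buttons));
    }

    // Playback: return the most recent frame at or before the given time,
    // which the robot code can treat as if it came from the driver.
    // Assumes at least one frame was recorded.
    public Frame frameAt(double timeSec) {
        Frame current = frames.get(0);
        for (Frame f : frames) {
            if (f.timeSec > timeSec) {
                break;
            }
            current = f;
        }
        return current;
    }
}
```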

Motor/pneumatic output: Whereas with Driver Input you’re feeding the input through the code to determine the output, here you capture the output directly. It’s a little less intuitive than dealing with Driver Input, as during playback you essentially have to ignore what the code says and provide the output directly.

Sensor input: This is perhaps the most precise, but is more difficult to get right - you’re relying on sensors connected to the motors and moving parts to tell you when those items run and how far they go. This lets you precisely mimic motions.

We did this for our climbing routine this year. We had the potentiometer values from the associated axis being spit out to the console, and then we manually moved the robot through its motions. This allowed us to capture exactly how we wanted it to move, and based on that data create waypoints (critical positions the mechanism needed to hit in its motion) for the motors to move through in our PID loop, essentially recreating the manual motions we made. (go to position A, then B, then C… the exact motion from A to B may not have been identical to the manual motion, but the final positions were)

This same process could be used with an autonomous mode pretty easily. Start the robot, then manually move it through what you want it to do (roll it forward to pick up disks, or turn and follow the center line, for example), capturing output from the encoders on the console. Then you know that to get to a specific waypoint in the process, you need to hit X number of encoder counts on each side of the drive train.
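
In code, stepping through those recorded waypoints might look roughly like the sketch below (Java; DriveSide is a hypothetical stand-in for however you give one side of the drivetrain a closed-loop position target and read its error, not any particular library API):

```java
// Hypothetical sketch of stepping through recorded encoder waypoints.
public class WaypointFollower {
    public interface DriveSide {
        void setTargetCounts(double counts);   // e.g. feeds a PID setpoint
        double getErrorCounts();               // distance from the current target
    }

    private final DriveSide left, right;
    private final double[] leftTargets, rightTargets;  // recorded waypoints
    private final double toleranceCounts;
    private int index = 0;

    public WaypointFollower(DriveSide left, DriveSide right,
                            double[] leftTargets, double[] rightTargets,
                            double toleranceCounts) {
        this.left = left;
        this.right = right;
        this.leftTargets = leftTargets;
        this.rightTargets = rightTargets;
        this.toleranceCounts = toleranceCounts;
    }

    // Call periodically during autonomous. Returns true once every waypoint
    // has been reached.
    public boolean run() {
        if (index >= leftTargets.length) {
            return true;                                // done
        }
        left.setTargetCounts(leftTargets[index]);
        right.setTargetCounts(rightTargets[index]);
        boolean atWaypoint = Math.abs(left.getErrorCounts()) < toleranceCounts
                          && Math.abs(right.getErrorCounts()) < toleranceCounts;
        if (atWaypoint) {
            index++;                                    // move on to the next recorded position
        }
        return false;
    }
}
```

The exact motion between waypoints won’t match the manual run, but the positions you hit will.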

For capturing and playing back an entire match… I would question the use. Another robot bumping into you will completely throw off the data you gathered, and make it nearly impossible to reproduce.

I believe that we are trying to program an autonomous for next year this way…but who knows? Maybe it would be useful in endgame next season!

By the way, thank you! Your post was very useful!

So the general consensus is that we should record encoder values?

I recommend you try both ways: start with recording sensors, and if it works well, stick with that. You can also record button and axis values to compare.

This was how we programmed our auton this year. We would record the joystick values and button presses by the operators and then play them back. Where you recorded them played heavily into their accuracy on the field. If you are interested in more info, PM me and I can get you connected up with our programmer.

We set up recording on our drive last year and actually used it at the competitions for our hanging since it was so sensitive. What we did was record the values going into the motors instead of the joystick values. So for the drive all we had to do was save the values for the left and right sides and then play those values back again. We actually are working on getting this set up for the rest of the robot this off season also.
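
Stripped down to the basic idea (this isn’t our actual code, just the shape of it), the drive recording looks something like this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "record the motor outputs, not the joysticks" approach.
public class DriveRecorder {
    private final List<double[]> samples = new ArrayList<>(); // {left, right} per loop
    private int playbackIndex = 0;

    // While recording: call once per loop with the values you are about to
    // send to the drive motors.
    public void record(double left, double right) {
        samples.add(new double[] {left, right});
    }

    // While playing back: call once per loop at the same rate you recorded,
    // and send the returned pair straight to the drive motors.
    // Returns {0, 0} once the recording runs out.
    public double[] next() {
        if (playbackIndex >= samples.size()) {
            return new double[] {0.0, 0.0};
        }
        return samples.get(playbackIndex++);
    }

    public void rewind() {
        playbackIndex = 0;
    }
}
```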

So I grasp the concept of recording, but how would someone save those values and play them back?

[Any example code? …in LabVIEW]

Unfortunately, we did ours in Java using the command-based code design. So what we did was start a command using the drive subsystem, and each run through of the command we appended the values to the end of a file. Then to play it back we started a new command that read the top line off of that file and sent the values to the subsystem. Using that structure, all of the timing was handled for us. We are doing a complete code rewrite currently to clean up the implementation; I can get you samples of our code at some point if you’d like.
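
In the meantime, here’s a very rough sketch of that record/playback pair with the Command plumbing left out (the file format and method names are made up; in the real thing each command’s execute() would call these once per loop, and the scheduler’s fixed loop rate handles the timing):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

// Sketch only: one "left,right" line is appended per loop while recording,
// and one line is read back per loop during playback.
public class DriveTape {
    private BufferedWriter writer;
    private BufferedReader reader;

    public void startRecording(String path) throws IOException {
        writer = new BufferedWriter(new FileWriter(path));
    }

    // Called once per loop by the recording command with the drive values.
    public void recordFrame(double left, double right) throws IOException {
        writer.write(left + "," + right);
        writer.newLine();
    }

    public void stopRecording() throws IOException {
        writer.close();
    }

    public void startPlayback(String path) throws IOException {
        reader = new BufferedReader(new FileReader(path));
    }

    // Called once per loop by the playback command; returns {left, right},
    // or null when the file ends (i.e. when the command should finish).
    public double[] playFrame() throws IOException {
        String line = reader.readLine();
        if (line == null) {
            return null;
        }
        String[] parts = line.split(",");
        return new double[] {Double.parseDouble(parts[0]),
                             Double.parseDouble(parts[1])};
    }

    public void stopPlayback() throws IOException {
        reader.close();
    }
}
```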

Personally text-based code makes WAY more sense to me…I don’t know a ton about LabVIEW but it’s what our robot is coded in:/…

Is there a specific reason that you guys used LabVIEW? We switched to Java last year and have absolutely loved the Command Based system. We had used LabVIEW before that; the main reasons for switching were its use in school and that both the other mentor and I were better versed in it.

You can see the recording used in this video to hang on the lower bar. Our driver lines up under the bar and pushes a button, and the robot drives backwards and then forward. This was recorded on the practice field at the competition when we realized that our hanging mechanism was very sensitive to the speed at which you hit the bar. After the recording was done it was an extremely reliable mechanism.

Because there is theoretically more support for LabVIEW and because it’s supposed to be “easier” since it’s visual…I fail to see the easy part:/ But I don’t think we can switch to anything else because I know more [even though it’s just a little] LabVIEW than Java or C++ or something like that…

Now would be the time to switch if you wanted to try a text based language. A great off season project if you wanted to learn one of the other languages is to reprogram the robot in another language. If you would like, once we finish the code rewrite I can send you all of our robot code so you can see what the code for a robot would look like.

The first attached image shows one way to record all data from one joystick every 20ms until the Done button is pressed. It then optionally saves it to a file. This code would fit well into periodic tasks.

The second image shows how to read all of the data from file and play it back at the same rate.

You probably don’t need all six axes of the joystick, and you may want to record more than one joystick. If that is the case, modify the data that goes into the cluster.

If you want to record the motor values and solenoid/relay values instead, that is an easy alternative and can be output from the Drive Robot function, accumulated, and logged.

If you decide to store sensor values and add control loops to the playback, that is indeed harder, and I’d recommend instead that you put that code into Drive Robot and store the set points.

If you are worried about memory usage, you most likely don’t need more than 50 elements per second for no more than 15 seconds, or 750 elements in the array. The full joystick data is 50 bytes per element. Even if you store lots more values, for 1KByte per record, you still haven’t used a megabyte, so you shouldn’t have any memory issues. If you want to optimize for memory, you can move the file I/O inside the loop to avoid the largish array, scale the values from doubles to smaller values, use timestamps to compress the data, or do all sorts of other fun things.

As for switching to a different language, or practicing the language your team used this year, the off season is a great time to do it. I don’t know of anything in command based programming that would make this any easier. Personally, I’d encourage you to become familiar with several of the languages as they each have unique features and values.

Greg McKaskle

I agree completely, Greg. No one language is better than another, and having knowledge of several different languages is a great tool.

Thank you so much for the pictures; those should help a lot!

I will probably look into C++ in the off season…but this and vision processing in LabVIEW will come first… [Our team is also working on making an encyclopedia for everything in FRC so that’ll take some time too…]

I’ll bring this up to our coach and we’ll try it out:)

Be sure to fix my misspelling in the path.

Greg McKaskle