RoboSim: A LibGDX robot code simulator

Disclaimer: Although other teams are able to use this, I don’t believe there is enough documentation yet for me to encourage others to use it.

For the past three months, I’ve been working on a LibGDX simulation of ALL of our robot code. This lets us simulate our drive train, and it will let us simulate our autonomous without a robot and without the driver station!

Note: The recording software I used didn’t record my mouse

robo-sim github and robot2019-sim github

Don’t read the rest of this if you aren’t a programmer (fair warning)

RoboSim works by abstracting just about every possible thing that WPI does even further. This means that the core module for robot2019 doesn’t import ANY WPI code (this might change in the future; I haven’t decided yet). RoboSim has zero global state; the entire project is driven by simple dependency injection, making it easy to abstract the code and alter behavior. There are a wpi submodule and a gdx submodule that both utilize the core module.
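To make the dependency-injection idea concrete, here’s a minimal sketch of the pattern (the interface and class names are hypothetical, not the actual RoboSim API): the core module depends only on an interface, and the wpi or gdx module injects its own implementation.

```java
// Hypothetical sketch of the dependency-injection style described above.
// The core module only sees this interface.
interface DriveOutput {
    void setSpeeds(double left, double right);
}

// A wpi-side implementation would wrap real motor controllers;
// this simulated one just records the last command.
class SimulatedDriveOutput implements DriveOutput {
    double lastLeft, lastRight;

    @Override
    public void setSpeeds(double left, double right) {
        lastLeft = left;
        lastRight = right;
    }
}

// Core-module code: no global state, the output is injected.
class DriveSubsystem {
    private final DriveOutput output;

    DriveSubsystem(DriveOutput output) {
        this.output = output;
    }

    void arcadeDrive(double forward, double turn) {
        output.setSpeeds(forward + turn, forward - turn);
    }
}
```

Because nothing is a singleton, swapping real hardware for the simulator is just a matter of constructing the subsystem with a different `DriveOutput`.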

The program supports “Practice” and “Real” modes: Practice to quickly test code, and Real to fully simulate a match, just like a real driver station. You can easily switch between them by pressing exit. While creating this, I hit a few bugs when testing with Shuffleboard code because of its global state, so I decided to abstract that too with abstract-dashboard! I had a valid reason for that: when re-initializing a new program, Shuffleboard would throw errors when I added something with the same name as the last time it was initialized (this is why I hate global state that you cannot reset).
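The dashboard abstraction could look something like this rough sketch (names are illustrative, not the actual abstract-dashboard API): the point is that re-initializing the program just means constructing a fresh instance, so stale keys can never collide.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical dashboard interface the robot code would depend on,
// instead of calling Shuffleboard's static methods directly.
interface Dashboard {
    void putNumber(String key, double value);
    double getNumber(String key, double fallback);
}

// In-memory implementation with no global state: throwing it away and
// making a new one fully "resets" the dashboard.
class MapDashboard implements Dashboard {
    private final Map<String, Double> values = new HashMap<>();

    @Override
    public void putNumber(String key, double value) {
        values.put(key, value);
    }

    @Override
    public double getNumber(String key, double fallback) {
        return values.getOrDefault(key, fallback);
    }
}
```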

You can see in the video that the small dashes represent our Swerve Drive wheels, which is accurate to how our swerve modules move. Because LibGDX has Box2D support, I was able to easily set the velocities of each wheel and fully simulate the Swerve Drive without trying to figure out the math to make the constraints work on the wheels and make the robot move accurately. If we decide to go with tank in 2020, we’ll be able to change the simulation to support tank drive as well.
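For one wheel, the velocity you’d hand to a Box2D body comes straight from rigid-body kinematics: wheel velocity = robot velocity + angular velocity cross the wheel’s offset. Here’s a hedged, self-contained sketch of that math (class and parameter names are hypothetical):

```java
// Velocity of one swerve wheel, derived from the robot's desired motion.
// This is the kind of vector you could pass to a Box2D body's
// setLinearVelocity for that wheel.
class WheelVelocity {
    final double vx, vy; // meters/second, field-relative

    WheelVelocity(double vx, double vy) {
        this.vx = vx;
        this.vy = vy;
    }

    // (wheelX, wheelY) is the module's offset from the robot center in
    // meters; omega is the robot's angular velocity in radians/second.
    static WheelVelocity of(double robotVx, double robotVy, double omega,
                            double wheelX, double wheelY) {
        // v_wheel = v_robot + omega x r  (2D cross product)
        return new WheelVelocity(robotVx - omega * wheelY,
                                 robotVy + omega * wheelX);
    }
}
```

With the physics engine enforcing the constraints, you just set each wheel’s velocity and the robot body moves accordingly.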

However, there are some things I decided not to simulate: the lift, picking up cargo, placing hatch panels, and the PID loops for these things. (Basically everything that makes driving the robot for real fun.)

Before kickoff or during the season, I want to create another part of the program that receives the absolute position of our robot and shows the robot on the field.

Feel free to ask questions or call me crazy for abstracting the entirety of WPILib.

Looks awesome! I’ve been developing something similar, only I’ve been focusing on generating paths for pure pursuit rather than full-on simulation. Can you change variables such as the maximum speed of the drive base? And can you read the position of the drive base from your autonomous program?

Yep: Link to line on GitHub
I have everything in meters. That variable is in meters/second.

Yes. Right now I have it set up to calculate the position of the robot by using the swerve drive encoders: Teams that calculated absolute position in the past (using encoders, gyros and vision)
If I wanted to, I could get the exact position of the robot entity from the LibGDX code, but I wanted to simulate as much as possible. The swerve drive implementation for the simulation actually calculates the drive encoder values on the swerve modules. So basically I have it calculate the encoder position, then the robot code “uncalculates” the encoder position back to an absolute position.
I do a lot of calculating and uncalculating to simulate as much as possible.
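The “uncalculating” step might look roughly like this sketch: integrate each module’s encoder delta along its current wheel heading and average across modules. The class name and encoder scale here are hypothetical, not taken from the actual robot code.

```java
// Hedged sketch of turning drive-encoder deltas back into an absolute
// position. METERS_PER_TICK is an assumed encoder scale.
class OdometryTracker {
    static final double METERS_PER_TICK = 0.0005;
    double x, y; // absolute position in meters

    // distanceDeltaTicks[i]  = encoder change for module i since last update
    // moduleAngleRadians[i]  = that module's wheel heading (field-relative)
    void update(double[] distanceDeltaTicks, double[] moduleAngleRadians) {
        double dx = 0, dy = 0;
        for (int i = 0; i < distanceDeltaTicks.length; i++) {
            double meters = distanceDeltaTicks[i] * METERS_PER_TICK;
            dx += Math.cos(moduleAngleRadians[i]) * meters;
            dy += Math.sin(moduleAngleRadians[i]) * meters;
        }
        // Average the per-module displacement to estimate chassis motion.
        x += dx / distanceDeltaTicks.length;
        y += dy / distanceDeltaTicks.length;
    }
}
```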

Right now the autonomous doesn’t use the absolute position, but it still uses the encoders of the drive wheels. However, I’m in the middle of refactoring our 2019 autonomous and the new autonomous will use absolute positioning. Most of the changes I’m making to our 2019 code are just to practice.

Cool, one more question. How do you reconcile the speed of the robot with the time it takes for the graphics to update? Does the physics engine take care of this for you? What I’m getting at is the conversion between pixels of movement per loop count, and inches of movement per second, can be tricky to pin down.

I made my simulator from scratch (i.e., I implemented the physics by hand), so I’ve had to deal with a lot of little problems that I assume a physics engine helps to solve. I’m considering moving over to a physics engine if I rewrite it in the future.

So because this is a 2D simulation, LibGDX is very good at keeping a constant 60 FPS. However, even if it drops below that, any time I need to do anything “per second”, I multiply by delta, which is the amount of time in seconds since the last update.
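The multiply-by-delta pattern boils down to this tiny sketch (names and values are illustrative): per-second quantities are scaled by the elapsed frame time, so motion stays correct regardless of frame rate.

```java
// Frame-rate-independent movement: position advances by speed * delta,
// where delta is the seconds elapsed since the last frame.
class DeltaMover {
    double position;        // meters
    double speed = 2.0;     // meters per second

    void update(float delta) {
        position += speed * delta;
    }
}
```

Whether the loop runs at 60 FPS or stutters down to 30, one second of updates moves the object the same total distance.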

As for the pixels thing, LibGDX also handles that. I don’t have to deal with pixels anywhere in my code. I just set up a “Stage”, then add a Viewport with a Camera to it, and it scales everything correctly. I use two stages: one for stuff on the field and the robot, and one for the UI. (When messing with UI it’s usually a good idea to keep your units in pixels so text scales correctly.)

LibGDX is great with these kinds of things. There are tutorials out there for setting up more advanced or even simpler Stages. Also, technically the contentStage stage isn’t being used, but its viewport is. I could get rid of contentStage, but my original plan was to have actually decent-looking graphics for certain things; I got lazy and went with a simple Box2DDebugRenderer.

That simulator is one of the things that inspired me to make this :grinning:
