Simulating a ROS Robot with Unity

This is a Unity simulator that replaces the physical robot, letting the exact same ROS robot code run as if the real robot were there.

The simulator models the robot's subsystem control (intake, shooter, hood, flywheel, and drive train) as well as vision tracking: Unity sends a camera stream over to ROS for vision processing. This is meant to speed up coding, testing, and running the robot code without access to the physical robot. I see this being especially useful while the robot is still being built during build season.
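Since the vision code just sees ROS topics, it can't tell whether the frames come from Unity or a real camera. Below is a minimal sketch of the ROS side, assuming the stream arrives as a standard sensor_msgs/Image; the topic name is made up for illustration, so check the repo for the actual one.

    # Minimal sketch of a ROS node consuming the simulated camera stream.
    # Assumes the stream is a sensor_msgs/Image; ros-sharp can also send
    # CompressedImage, in which case the callback would decode JPEG bytes.
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def on_image(msg):
        # Convert the ROS Image message into an OpenCV BGR array for processing
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        rospy.loginfo("received %dx%d frame", msg.width, msg.height)

    rospy.init_node("vision_processing")
    # "/unity_camera/image_raw" is a hypothetical topic name for this example
    rospy.Subscriber("/unity_camera/image_raw", Image, on_image, queue_size=1)
    rospy.spin()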

Below is a short video showing the Unity simulator in action!

https://www.youtube.com/watch?v=A66TlzYuKSo

Below is a diagram showing the complete ROS system running alongside both Unity and the physical robot. The RoboRio and outside sensors are the parts replaced by the simulator, while the code on the co-processor remains exactly the same.

This was done using Unity, the ros-sharp library, and ROS. The GitHub repositories for the Unity simulator and the ROS robot control code are below.

Unity Simulator GitHub: https://github.com/LeonidasVarveropoulos/UnitySimulator-ROS (a Unity project that works as a simulator for the ROS FRC robot code hosted in the robot-frc repo)

ROS Control Code and Documentation: https://github.com/LeonidasVarveropoulos/robot-frc (a ROS catkin workspace for an FRC robot)

- Leonidas Varveropoulos

22 Likes

This is pretty neat. I’ve used Gazebo in the past for robot simulations; any reason you didn’t start with that?

1 Like

Yeah, so Gazebo was one of the first things I tried when looking for a simulator, but I found that it was really slow and it just didn’t work in this case.

Okay, this is really awesome. Did you manually convert the robot CAD to URDF, or was this process automated somehow? And it looks like you just have a diff drive implementation instead of a full physics engine, which probably saves a lot of resources compared to Gazebo. I’ll probably poke through it more when I have some time.

1 Like

Yeah, I actually used a really nice feature of the ros-sharp library that converts a robot URDF from ROS into the Unity project. It sets up the CAD attached to the URDF and each individual movable joint. The library also handles the communication between ROS and Unity using the various ROS msg types.
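Since ros-sharp talks to ROS through rosbridge (websockets), Unity just shows up as a normal set of topics on the ROS side. Here is a rough sketch of what commanding the simulated drivetrain could look like from ROS, assuming the simulator listens for a conventional geometry_msgs/Twist on /cmd_vel (which may not match the repo's actual topic names):

    # Sketch of the ROS side of the ROS <-> Unity link. ros-sharp bridges
    # Unity to ROS via rosbridge_suite, so Unity subscriptions look like
    # ordinary topics here. The /cmd_vel topic name is an assumption.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("drive_command_demo")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    rate = rospy.Rate(20)  # publish drive commands at 20 Hz
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.5   # forward speed, m/s
        cmd.angular.z = 0.2  # yaw rate, rad/s
        pub.publish(cmd)
        rate.sleep()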

This is super cool! I have a bunch of questions:

  • What did your team use for a co-processor?

  • For the D435 Depth Camera, you said that it isn’t much use this year. Is there a previous game situation where you would’ve liked to use it?

  • Why did you skip out on the Kalman filter; did you just run out of time or was it not useful?

  • Are you happy with diff_drive or are you going to spend more time trying to get move_base to work?

  • Have you played with on-the-fly path generation for semi-auto teleop control (e.g. press a button to follow a path to pick up a power cell)?

  • How do you handle power cells entering/exiting the robot?

  • What was the most difficult part to figure out?

  • If you had to do it again, what would you do the same and what would you do differently?

  • Something that I’ve been thinking a lot about is how to create a simulation when no CAD exists yet. That way when the game is announced, programmers can get started testing code almost immediately without waiting for mechanical. Do you have any thoughts on how to simulate that?

5 Likes

I’ll add a few thoughts here. The alternative to the diff drive package used here is getting ros_control (http://wiki.ros.org/ros_control) to run. move_base is at a higher level: it uses a global and a local planner combined with map and sensor data for path planning of the robot base. The output of move_base goes into a diff drive (or swerve drive, in our case) controller, which converts the commanded motion of the robot base into commands for each wheel.

There is some overlap - the local planner in move_base should account for the non-holonomic constraints in the path if necessary, for example.

But the two aren’t really interchangeable. Think of move_base as a framework for dynamic path planning with obstacle avoidance, and the drive controllers as what converts overall robot motion into commands to individual wheels.
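To make the distinction concrete, the core of a diff drive controller is just the inverse kinematics from commanded body motion to wheel speeds. A sketch with made-up robot dimensions, not any particular package's implementation:

    # What a differential-drive controller does at its core: turn a commanded
    # body velocity (v, omega), e.g. from move_base's cmd_vel output, into
    # per-wheel angular velocities. Dimensions below are example numbers.
    TRACK_WIDTH = 0.6      # meters between the left and right wheels
    WHEEL_RADIUS = 0.0762  # meters (3-inch wheels)

    def body_to_wheel_velocities(v, omega):
        """v: forward speed (m/s); omega: yaw rate (rad/s).
        Returns (left, right) wheel angular velocities in rad/s."""
        v_left = v - omega * TRACK_WIDTH / 2.0
        v_right = v + omega * TRACK_WIDTH / 2.0
        return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

    # e.g. 1 m/s forward while turning at 0.5 rad/s
    left, right = body_to_wheel_velocities(1.0, 0.5)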

3 Likes

We started off last season using a LattePanda; while it did work, we found it was a bit too expensive and had a few problems. Recently we found a much cheaper option called the Up Board; while we have not yet tested it on the full robot, it seems to perform pretty well.

Yeah, I don’t really know how effective it would be, but it would have been interesting to try to detect and align to the 2019 rocket goals using the point cloud or depth image from the camera. We found that the depth data really works best at closer distances.

It was kind of a combination of both. Originally we wanted to use the Kalman filter to combine encoder odometry with the T265 camera odometry, but when testing the camera we found that it was fairly accurate on its own. We didn’t really have the time to experiment with other sensor sources using the Kalman filter, so we stuck with the camera. In the future, given the time, we will probably end up using the Kalman filter.
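For anyone curious, the core idea of that fusion step can be shown in a toy one-dimensional sketch (a real setup would more likely use something like the robot_localization package; the numbers below are illustrative, not measured):

    # Toy 1-D Kalman-style fusion of two noisy estimates of the same state,
    # e.g. encoder odometry and T265 odometry. Variances are hypothetical.
    def fuse(x_enc, var_enc, x_cam, var_cam):
        """Variance-weighted average: the lower-variance source dominates."""
        k = var_enc / (var_enc + var_cam)  # Kalman-style gain
        x = x_enc + k * (x_cam - x_enc)
        var = (1.0 - k) * var_enc          # fused variance is always smaller
        return x, var

    x, var = fuse(x_enc=2.00, var_enc=0.04, x_cam=2.10, var_cam=0.01)
    # x = 2.08, var = 0.008: the estimate leans toward the more accurate T265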

Given the amount of time we spent on the actual robot using the diff_drive package, it seems to be working fine, but it would definitely be interesting to try the move_base node again, if not on the physical robot then with the simulator.

At the beginning of the season we hoped to do on-the-fly path generation to get to the ideal location to shoot from, but due to the lack of time it was nowhere near completion.

We didn’t really have a way to count power cells entering or exiting the robot due to the large hopper design. We mainly used a camera mounted inside the robot so the driver could see the number of power cells. We did have an automated loader that fed the first ball into the throat of the shooter to free up space in the hopper, but that was handled by the RoboRio with LabVIEW. As for exiting, we basically ran the hopper to push the power cells into the shooter once the turret, hood, and flywheel were all in the correct position and the command was given by either the manipulator or the ROS code.

I’d have to say the issues we were having with move_base, and the quick switch we made to the diff_drive package, were definitely difficult given the amount of time we had to test on the robot. It was also challenging to figure out what role ROS would have in our robot code and how it would interact with the RoboRio, as this was our first year working with it.

One of the main things I would have kept the same is the use of the T265 camera; it seems to be really accurate, and we haven’t seen any problems just yet. We probably would have used the Up Board instead of the LattePanda, but I don’t think the Up Board had been released yet. I’d also either change the depth camera we used for vision processing or experiment more with using Limelights. I would also have liked to get more people involved with ROS, which we will definitely try to do in the upcoming years.

Yeah, a part of ROS that’s great for simulation is the robot URDF, which basically just describes the different links, or parts, of the robot, such as the intake, turret, or wheels.

Before creating the Unity simulator, my URDF/tf tree looked like the picture above, with no robot CAD attached to the URDF. The CAD model that I added to the URDF for visualization purposes is not really necessary to simulate the robot. Collisions between the different parts in the Unity simulator are not handled by the complex CAD shapes but rather by a series of simplified boxes and spheres. In a situation where no CAD is available, you can always use a cube or another simple shape for visualization, defined directly in the URDF with no CAD necessary.
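As a sketch of how little is needed, here is a small script that generates a complete, loadable URDF whose only geometry is a box (the robot name and dimensions are placeholders):

    # Generate a minimal URDF with no CAD at all: a single box link serves as
    # both the visual and the collision geometry. Placeholder dimensions.
    URDF_TEMPLATE = """<?xml version="1.0"?>
    <robot name="placeholder_bot">
      <link name="base_link">
        <visual>
          <geometry><box size="{L} {W} {H}"/></geometry>
        </visual>
        <collision>
          <geometry><box size="{L} {W} {H}"/></geometry>
        </collision>
      </link>
    </robot>
    """

    with open("placeholder_bot.urdf", "w") as f:
        f.write(URDF_TEMPLATE.format(L=0.8, W=0.7, H=0.3))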

2 Likes

I think you might be overselling this a bit. URDFs/SDFs are not meant to be written by hand, even though the unfortunate truth is that many of them are. Yes, you can use generic boxes, spheres, and cylinders, but their utility quickly runs out, and even making simple shapes is not exactly a fun programming task, since you’re writing XML by hand and placing things in 3D space.

There are loads of tutorials for those interested: http://wiki.ros.org/urdf/Tutorials/Building%20a%20Visual%20Robot%20Model%20with%20URDF%20from%20Scratch

2 Likes

Sure, at some point you cross the line from making URDFs by hand to using xacro to auto-generate URDF files from function calls. But I think you are ignoring the strength of throwing together a robot with primitive shapes.

Austin, for example, wanted a way to simulate robots without building a full CAD model of the robot.

1 Like

Yep… totally ignoring it:

Pay attention to the dates.

I actually completely agreed with you, but then I looked into which diff_drive package they were actually using.

This is much closer to a local planner in move_base than to a normal diff drive controller in ROS.

I was thinking of
http://wiki.ros.org/diff_drive_controller
or
http://wiki.ros.org/differential_drive
both of which take in desired robot velocity and output velocities to send to wheels.

1 Like

I think what was meant as a minor correction was read a bit more personally than I intended. I just meant there is a time and a place for robots built with primitive shapes, especially in the pre-CAD phase of the competition. I would also argue that in Gazebo, to keep the simulation running smoothly, it sometimes makes a lot of sense to use primitive shapes for your collision geometry where possible.

Here is an example robot built in Gazebo with xacro (an XML meta-language for defining URDFs):

    <xacro:wheel prefix="left" suffix="front"  reflect= "1" reflectX="1"/>
    <xacro:wheel prefix="left" suffix="back"   reflect="-1" reflectX="1"/>
    <xacro:wheel prefix="right" suffix="front" reflect= "1" reflectX= "-1"/>
    <xacro:wheel prefix="right" suffix="back"  reflect="-1" reflectX= "-1"/>

[image of the example robot]

2 Likes

Sorry, long days for me lately… might be touchy. I get the idea (trust me), but for me, I have trouble seeing XML, and even xacro to a lesser extent, as something that is useful for humans to be editing, particularly for objects in 3D space. There are easier ways to generate 3D models… though at the moment they can be challenging to get into these simulators. I do think there are opportunities to improve this entire workflow, though, and I’m really optimistic about the future.

2 Likes

I’d assert that the time for low-fidelity models in simulation extends far beyond that phase, too.

Collision checking for low-polygon models can be faster (non-mesh colliders are faster still), which could be used to speed up your testing later down the road.

Though I agree with Marshall’s sentiment that hand-writing XML sucks, and I’d take it one step further and say XML is the worst, except for all the other things we’ve tried.

- someone who has had to write VRML
1 Like

Do you have any suggestions for workflows to go from a mesh, or even native CAD, to URDF? So far the best thing I’ve seen is exporting meshes from CAD (e.g. one for the main body, one for the turret, etc.) and writing the URDF by hand or with xacro to combine these meshes. There’s not really a way around defining joint positions by hand, as far as I know.

This is what we’ve been playing with and contributing back to:

Warning: it’s far from perfect, and it requires some patience with the more advanced features of OnShape to get working. The maintainer is highly responsive to issues, and the community around it is growing.

3 Likes

This is what I’ve been using, along with pybullet. It works OK, but the performance on my low-end MacBook is sometimes less than great in GUI mode. Just for running simulations as tests, though, it should work pretty well.

1 Like

This is great stuff! Using your project as inspiration, I’m working on an implementation for our team that ties into robot code built with Java and WPILib. I’ve gotten a simple simulation going and would like to improve the field environment we drive around in. On your GitHub, you mention that the CAD files aren’t included due to their size. Is there any way you could share them? If that’s not practical, can you tell me the process you used to create them? I am assuming these are STL or OBJ files that can be imported into Unity, but let me know if this is incorrect.

Thanks in advance!

That sounds like a great project! I believe I just downloaded the field model from the FRC page as an Inventor file, then exported it as an OBJ file. There is probably a better way to do this, as the field imported into Unity did not include any colors, and I had to add those manually.