We started off last season using a LattePanda. While it did work, we found it was a bit too expensive and had a few problems. Recently we found a much cheaper option called the Up Board; while we have not yet tested it on the full robot, it seems to perform pretty well.
Yeah, I don’t really know how effective it would be, but it would have been interesting to try to detect and align to the 2019 rocket goals using the point cloud or depth image from the camera. We found that the depth data really works best at closer distances.
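Just to illustrate the idea (this isn't something we actually ran), here's a rough sketch of estimating range and lateral offset to a target from a depth image; the field of view, image size, and the pixel box around the target are all assumed values, not measurements from our camera.

```python
import numpy as np

# Hypothetical sketch: given a depth image (in meters) and a pixel box around
# a detected target, estimate range and the yaw angle needed to align to it.
HFOV_DEG = 86.0       # assumed horizontal field of view
IMAGE_WIDTH = 848     # assumed depth image width in pixels

def range_and_offset(depth_image, box):
    """box = (x_min, y_min, x_max, y_max) in pixels around the target."""
    x_min, y_min, x_max, y_max = box
    roi = depth_image[y_min:y_max, x_min:x_max]
    valid = roi[roi > 0.0]               # drop pixels with no depth return
    distance = float(np.median(valid))   # median is robust to noisy edges
    center_x = (x_min + x_max) / 2.0
    # Convert the pixel offset from image center into an approximate yaw angle.
    offset_deg = (center_x - IMAGE_WIDTH / 2.0) * (HFOV_DEG / IMAGE_WIDTH)
    return distance, offset_deg
```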
It was kind of a combination of both. Originally we wanted to use the Kalman filter to combine encoder odometry with the T265 camera odometry, but when testing the camera we found it was accurate enough on its own. We didn’t really have the time to experiment with other sensor sources using the Kalman filter, so we stuck with the camera. In the future, once we have the time, we will probably end up using the Kalman filter.
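For anyone curious, the usual way to do that kind of fusion in ROS is the robot_localization package's EKF node. We never actually tuned one of these, so this is only a sketch of what the config might look like, and the topic names (/frc/wheel_odom, /camera/odom/sample) are assumptions for illustration.

```yaml
# Hypothetical ekf_localization_node config (robot_localization package).
# Topic names and which fields get fused are assumptions, not our real setup.
frequency: 50.0
two_d_mode: true
odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /frc/wheel_odom              # encoder odometry from the RoboRIO
odom0_config: [false, false, false,   # x, y, z
               false, false, false,   # roll, pitch, yaw
               true,  true,  false,   # vx, vy, vz
               false, false, true,    # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az

odom1: /camera/odom/sample          # T265 tracking camera odometry
odom1_config: [true,  true,  false,
               false, false, true,
               true,  true,  false,
               false, false, true,
               false, false, false]
```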
In the time we've spent on the actual robot, the diff_drive package seems to be working fine, but it would definitely be interesting to try the move_base node again, if not on the physical robot then in the simulator.
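If we do revisit move_base, sending it goals from code is pretty simple. Here's a minimal sketch of an action client, assuming a ROS1 setup with a map frame and move_base already running; the goal pose is just a placeholder.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Minimal sketch of sending a single goal to move_base (ROS1).
# The frame name and goal pose are placeholders, not values from our robot.
rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # meters forward in the map frame
goal.target_pose.pose.orientation.w = 1.0  # keep the current heading

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('move_base finished with state %d', client.get_state())
```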
At the beginning of the season we hoped to do on-the-fly path generation to get to the ideal shooting location, but due to lack of time it was nowhere near completion.
We didn’t really have a way to count power cells entering or exiting the robot due to the large hopper design. We mainly used a camera mounted inside the robot so the driver could see the number of power cells. We did have an automated loader that fed the first ball into the throat of the shooter to free up space in the hopper, but that was handled by the RoboRIO with LabVIEW. As for exiting, we basically ran the hopper to push the power cells into the shooter once the turret, hood, and flywheel were all in the correct position and the fire command was given by either the manipulator or the ROS code.
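In pseudocode form that logic was basically a readiness check. Our real version was split between LabVIEW on the RoboRIO and ROS, so everything below — the function name, signals, and tolerances — is made up for illustration.

```python
# Hypothetical sketch of the "run the hopper once everything is ready" check.
# All names and tolerances are illustrative, not our actual LabVIEW/ROS code.
def hopper_should_run(turret_error_deg, hood_error_deg,
                      flywheel_rpm, flywheel_target_rpm,
                      fire_commanded):
    turret_ready = abs(turret_error_deg) < 1.0                     # assumed tolerance
    hood_ready = abs(hood_error_deg) < 0.5                         # assumed tolerance
    flywheel_ready = abs(flywheel_rpm - flywheel_target_rpm) < 50.0  # assumed tolerance
    return fire_commanded and turret_ready and hood_ready and flywheel_ready

if __name__ == '__main__':
    # Everything on target and the fire command given -> run the hopper.
    print(hopper_should_run(0.3, 0.1, 4975.0, 5000.0, True))   # True
    # Turret still far off target -> hold the power cells.
    print(hopper_should_run(5.0, 0.1, 4975.0, 5000.0, True))   # False
```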
I’d have to say the issues we were having with move_base, and the quick switch we made to the diff_drive package, were definitely difficult given the limited time we had to test on the robot. It was also challenging to figure out what role ROS would have in our robot code and how it would interact with the RoboRIO, as this was our first year working with it.
One of the main things I would have kept the same is the use of the T265 camera; it seems to be really accurate and we haven’t seen any problems just yet. We probably would have used the Up Board instead of the LattePanda, but I don’t think the Up Board had been released yet. I’d also either change the depth camera we used for vision processing or experiment more with using Limelights. I would also have liked to get more people involved with ROS, which we will definitely try to do in the upcoming years.
Yeah, the part of ROS that’s great for simulation is the robot URDF, which basically just describes the different links, or parts, of the robot, such as the intake, turret, or wheels.
Before creating the Unity simulator, my URDF/TF tree looked like the picture above, with no robot CAD attached to the URDF. The CAD model that I later added to the URDF is purely for visualization and is not really necessary to simulate the robot. Collisions for the different parts in the Unity simulator are not handled by the complex CAD shapes but rather by a series of simplified boxes and spheres. In a situation where CAD isn’t available, you can always just use a cube or another simple shape for visualization, defined directly in the URDF with no CAD necessary.
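As an example of what that looks like, a link can be nothing more than a simple box or cylinder. This is a rough sketch of a minimal no-CAD URDF; the link names, joint, and dimensions are made up, not taken from our robot.

```xml
<?xml version="1.0"?>
<!-- Hypothetical minimal URDF with no CAD meshes: simple primitives stand in
     for both the visual and collision geometry. Names and sizes are made up. -->
<robot name="example_bot">
  <link name="base_link">
    <visual>
      <geometry><box size="0.8 0.7 0.15"/></geometry>
    </visual>
    <collision>
      <geometry><box size="0.8 0.7 0.15"/></geometry>
    </collision>
  </link>

  <link name="turret">
    <visual>
      <geometry><cylinder radius="0.15" length="0.1"/></geometry>
    </visual>
    <collision>
      <geometry><cylinder radius="0.15" length="0.1"/></geometry>
    </collision>
  </link>

  <joint name="turret_joint" type="continuous">
    <parent link="base_link"/>
    <child link="turret"/>
    <origin xyz="0 0 0.2"/>
    <axis xyz="0 0 1"/>
  </joint>
</robot>
```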