On the programming side, we’ve also been putting in time this offseason to improve our skill set for the upcoming season. We’ve been focusing on a few things.
Computer Vision & Localization
Our overall goal for the offseason was to create a vision system that is accurate at any relevant location on the field. For example, in 2023 it didn’t need to be particularly accurate in the middle of the field, but near the grid it needed high accuracy in order to generate a path to score on the grid.
We did a lot of research into the options for coprocessors. Following Anand’s Coprocessor roundup, and a few tests of our own with our Limelight 2+, Raspberry Pi 3/4, and Orange Pi 5, we learned a few useful facts:
- The Limelight, though easy to set up, falls short on accuracy. The maximum performance we could muster was 30 fps at a resolution of 320x240. That works for some close-range vision, but it couldn’t accurately detect targets at longer ranges.
- The Raspberry Pis gave some more promising results. The measurements were more stable but were still limited by low framerate and high latency.
- The Orange Pis work a lot better. We can run them at 800x600 resolution at 20 fps with about 50 ms of latency. We bought two, and plan to buy two more for the season.
For a camera, we went with the Arducam OV2311. The global shutter is critical for avoiding the image distortion a rolling shutter produces while the robot is in motion. These specific cameras had lower latency than the other cameras we looked at and came highly recommended by team 6328.
Using this new hardware, we set up each Orange Pi with one camera and PhotonVision. We mounted the cameras on the robot facing in opposite directions and also mounted the Limelight as an additional vision system. Each coprocessor detects AprilTags independently, and the resulting pose information is sent over NetworkTables to the roboRIO, where we fuse it with odometry using a WPILib swerve drive pose estimator. Here is a video of our pose estimation (of course with the duck bot in AdvantageScope):
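In code, the fusion step boils down to something like the sketch below. It assumes a helper elsewhere that reads each camera’s estimated pose and capture timestamp off NetworkTables; the class and method names here are illustrative rather than our exact code.

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;

public class PoseEstimation {
    private final SwerveDrivePoseEstimator poseEstimator;

    public PoseEstimation(SwerveDriveKinematics kinematics,
                          Rotation2d gyroAngle,
                          SwerveModulePosition[] modulePositions) {
        // Start at the origin; the first vision measurement pulls the estimate to the right place.
        poseEstimator = new SwerveDrivePoseEstimator(
                kinematics, gyroAngle, modulePositions, new Pose2d());
    }

    /** Called every loop with fresh odometry, plus any vision result that arrived since the last loop. */
    public Pose2d update(Rotation2d gyroAngle,
                         SwerveModulePosition[] modulePositions,
                         Pose2d visionPose,
                         double visionTimestampSeconds) {
        poseEstimator.update(gyroAngle, modulePositions);
        if (visionPose != null) {
            // The timestamp comes from the coprocessor, so camera latency is compensated for.
            poseEstimator.addVisionMeasurement(visionPose, visionTimestampSeconds);
        }
        return poseEstimator.getEstimatedPosition();
    }
}
```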
Robot Simulation
In past years, we have used a few simulations to help test and debug our code. In 2022, we attempted to simulate the shooter, climber, and vision systems, but they all ended up being too far removed from the real systems to be useful. The simulated mechanisms were fairly arbitrary, difficult to integrate into our code structure, and didn’t end up improving the final robot.
Inspired by team 6328, we decided to implement an IO code structure using AdvantageKit. Instead of our code simply being separated into subsystems and commands:
The code is now structured such that each subsystem refers to an IO layer to set its outputs:
The IO layer contains only the hardware API calls, essentially acting as a wrapper around the robot hardware. Each subsystem class holds an IO object, which represents either the real or the simulated hardware depending on how the code is running. For example, to have the same code control either a Falcon- or NEO-based swerve, we implement one IO class for the TalonFX and one for the SPARK MAX, and pass the appropriate one to the same swerve subsystem code. The subsystem sends commands to this IO layer indifferent to the “realness” of the robot, and the IO class directs the commands to the proper hardware, whether it is running in real life or in simulation.
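As a rough illustration (not our exact classes), an IO interface for a swerve module might look like the sketch below; each hardware option gets its own implementation, and the subsystem is handed whichever one matches the robot it’s running on.

```java
// The subsystem only ever talks to this interface; TalonFX, SPARK MAX, and simulation
// classes each implement it with their own hardware (or physics) calls.
public interface SwerveModuleIO {
    /** Inputs the subsystem reads every loop; AdvantageKit logs and replays these fields. */
    class SwerveModuleIOInputs {
        public double drivePositionRad = 0.0;
        public double driveVelocityRadPerSec = 0.0;
        public double turnPositionRad = 0.0;
    }

    /** Refresh the inputs from the real or simulated hardware. */
    default void updateInputs(SwerveModuleIOInputs inputs) {}

    /** Apply an output voltage to the drive motor. */
    default void setDriveVoltage(double volts) {}

    /** Apply an output voltage to the turning motor. */
    default void setTurnVoltage(double volts) {}
}

// Hypothetical implementations, one per hardware option:
//   class SwerveModuleIOTalonFX implements SwerveModuleIO { ... Phoenix API calls ... }
//   class SwerveModuleIOSparkMax implements SwerveModuleIO { ... REVLib API calls ... }
//   class SwerveModuleIOSim implements SwerveModuleIO { ... WPILib physics simulation ... }
//
// The same subsystem code is then constructed with whichever IO matches the robot, e.g.:
//   new SwerveModule(RobotBase.isReal() ? new SwerveModuleIOTalonFX(...) : new SwerveModuleIOSim());
```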
This method allows us to find different kinds of bugs even before receiving a physical robot, such as:
- Unit conversion and kinematics miscalculations outside of the IO classes.
- Logic errors in subsystems, commands, and command groups.
- Logging incorrect units or simply the wrong fields.
Additionally, the simulation allows us to visualize the robot in both 2D and 3D. In theory, this means we can see when the code doesn’t work without any physical hardware to test it on. For example, when creating paths for our 2023 arm to move between setpoints, the visualization lets us see whether the path planner succeeded in creating a path the real robot can follow. Here is an example:
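For the 2D side, WPILib’s Mechanism2d makes this kind of visualization fairly simple; the sketch below uses made-up arm dimensions and joint names, and the resulting mechanism can be viewed in AdvantageScope.

```java
import edu.wpi.first.math.util.Units;
import edu.wpi.first.wpilibj.smartdashboard.Mechanism2d;
import edu.wpi.first.wpilibj.smartdashboard.MechanismLigament2d;
import edu.wpi.first.wpilibj.smartdashboard.MechanismRoot2d;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class ArmVisualizer {
    // A 2 m x 2 m canvas with a two-joint arm; all dimensions are placeholders.
    private final Mechanism2d mechanism = new Mechanism2d(2.0, 2.0);
    private final MechanismLigament2d shoulder;
    private final MechanismLigament2d elbow;

    public ArmVisualizer() {
        MechanismRoot2d root = mechanism.getRoot("armBase", 1.0, 0.3);
        shoulder = root.append(new MechanismLigament2d("shoulder", 0.8, 90.0));
        elbow = shoulder.append(new MechanismLigament2d("elbow", 0.6, 0.0));
        SmartDashboard.putData("ArmMechanism", mechanism);
    }

    /** Called every loop with the measured (or simulated) joint angles. */
    public void update(double shoulderAngleRad, double elbowAngleRad) {
        shoulder.setAngle(Units.radiansToDegrees(shoulderAngleRad));
        elbow.setAngle(Units.radiansToDegrees(elbowAngleRad));
    }
}
```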
With this setup, however, there are still a few things that are difficult to handle in simulation: tuning PID gains, real hardware problems, and the smart motor controller APIs. To simulate the smart motor controllers, we need a simulator class that imitates the API and unit system of the real motors and runs the controller’s internal PID loop. For this, we developed TalonFXSim and SparkMaxSim classes. The PID simulation isn’t perfect, of course, but it has helped us find a starting point for empirical tuning of constants. In the future, we plan to add more motors and use a more accurate model.
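The sketch below shows the rough idea behind such a wrapper (it isn’t our actual TalonFXSim or SparkMaxSim class): a WPILib physics model driven by an internal PID loop that stands in for the controller’s onboard position control.

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.system.plant.DCMotor;
import edu.wpi.first.wpilibj.simulation.DCMotorSim;

public class SimulatedSmartMotor {
    // Gearing and moment of inertia are placeholders; real values come from the mechanism.
    private final DCMotorSim motorSim =
            new DCMotorSim(DCMotor.getFalcon500(1), 1.0, 0.001);
    // Stands in for the controller's onboard PID; gains are only a starting point for real tuning.
    private final PIDController positionController = new PIDController(1.0, 0.0, 0.0);

    /** Mirrors the "set position setpoint" call of the real smart motor controller. */
    public void setPositionSetpoint(double positionRad) {
        positionController.setSetpoint(positionRad);
    }

    /** Advance the simulation by one loop period, running the internal control loop. */
    public void update(double dtSeconds) {
        double volts = positionController.calculate(motorSim.getAngularPositionRad());
        volts = Math.max(-12.0, Math.min(12.0, volts)); // clamp to battery voltage
        motorSim.setInputVoltage(volts);
        motorSim.update(dtSeconds);
    }

    public double getPositionRad() {
        return motorSim.getAngularPositionRad();
    }

    public double getVelocityRadPerSec() {
        return motorSim.getAngularVelocityRadPerSec();
    }
}
```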
If you want to learn more, I recommend looking into team 6328’s code, and the AdvantageKit framework, where this idea originated.
We are also working on a simulation that incorporates vision. Ideally, this will help us find the optimal place to put our cameras on the robot during the season. We’d like to do some kind of statistical analysis, so recommendations for such tools are appreciated.
Autonomous
We’ve had problems with our autonomous for years now. In 2022, our auto paths seemed slow and inconsistent. This improved somewhat in 2023, but not significantly. In order to solve this problem, we took a hard look at exactly what our autonomous routine was doing wrong, and how other teams got their auto routines to work correctly.
First, we realized that the most important criterion for a successful autonomous routine is consistency. It doesn’t matter if you can score 20 points in auto if you only do so a third of the time. Second, the accuracy of the practice field is crucial. In 2023, the grid we built was crude and inaccurate, and the field measurements were not very good, which significantly hurt our ability to tune our autonomous routines.
To solve the first problem, there are a few things we plan to implement. After talking with a few teams, we realized how important vision-based localization is to their auto routines; a few teams even said they struggle to run their autos at all without vision. This is great news for us, because we had already started working on localization in the offseason.
When it comes to improving the accuracy of the field, the only thing we can do is change how much importance we assign it. Having an accurate field is only a matter of mindset, time and resources spent, and the tools we use to measure it. This year we will get a dedicated practice space in a large tent outside of our workshop for the first time, so that will help us keep an accurate field for testing autonomous routines.
The greatest problem with the consistency of our auto is the development process, which up until now we didn’t really have. We have always focused mainly on the teleop period, which is much simpler programmatically, and we would tune and perfect the systems only to the point that the drivers could use them. That is not enough. The solution is to plan ahead: define requirements for each system before tuning, and keep tuning until those requirements are met. For example, in 2023 our arm needed an accuracy of about 2 cm to consistently place game pieces on the grid, and the autonomous required the arm’s acceleration to stay below a certain value, because otherwise it would throw off the drivetrain odometry. Creating these guidelines in advance for each system would let us optimize the robot for auto as well as for teleop. We hope that using simulation as described above will also let us start fixing bugs earlier in the season, so we can spend more time tuning and perfecting the systems once the programming team gets the robot.
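As a small illustration, a requirement like the arm acceleration limit can be written down directly as a shared motion profile constraint; the numbers below are placeholders, not our real values.

```java
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public final class ArmRequirements {
    /** Position tolerance needed to reliably place game pieces on the grid (placeholder value). */
    public static final double POSITION_TOLERANCE_METERS = 0.02;

    /** Velocity (rad/s) and acceleration (rad/s^2) limits chosen so arm motion
     *  doesn't disturb the drivetrain odometry during auto (placeholder values). */
    public static final TrapezoidProfile.Constraints AUTO_CONSTRAINTS =
            new TrapezoidProfile.Constraints(2.0, 4.0);

    private ArmRequirements() {}
}
```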
Additionally, this year we decided to split our robot code from our common libraries. Our swerve code, vision code, and all of our utilities now live in a separate repository, published using JitPack. This lets us use the same code across all of our projects and branches, reducing Git inconsistency, and it has meant learning quite a bit about Gradle and publishing code. Currently, we have a problem where our code is published together with our libraries, creating two copies of those libraries in consuming projects. If anyone has an idea on how to publish the code without bundling the libraries, your help would be appreciated.
Finally, we’ve spent a lot of time this offseason getting accustomed to new libraries, specifically AdvantageKit, Phoenix Pro, REVLib, and PathPlanner. In the past, we haven’t taken full advantage of what these libraries have to offer, and we want to change that for the upcoming season.