Our team developed a swerve programming project with the goal of creating an educational tool for teams that are just starting to work with swerve and are using the Command Robot template.
Additional goals:
0. Make code that can be easily understood and followed by students on a programming team. We believe this should be a primarily student-driven project, so the absolute majority of the code is designed by students and created under mentor supervision. Our example is heavy on documentation. We tried to explain what individual lines of code do, why they are there, etc.
1. Keep CAN utilization low. With real-time telemetry from all 8 encoders on SmartDashboard we do not exceed 50% CAN utilization. This is partially achieved by requiring all encoders to be physically connected to the corresponding motor controllers, so the hardware PID routines put nothing on CAN.
2. Keep CPU utilization low. When the extra debug telemetry is disabled (real-time printing of swerve states), CPU utilization on the Rio1 is around 20%. We also update odometry only when running trajectories (though the method that updates/prints it is available for troubleshooting and other purposes).
3. Keep the code compatible with many IMU and motor types. All motors and IMUs are exposed via interfaces, which allows one to change the hardware with relatively small code modifications (a sketch of this idea follows the list). Our current code implements the CTRE TalonSRX (the test chassis uses 775s for the turn motors and mini-CIMs for the drive motors), and we plan to add a NEO implementation as well. A NavX implementation has been added but not tested yet.
4. Keep the code consistent with the WPILib programming suggestions for Command Robot. We really tried to adhere to the rules, though we probably need more changes. In any case, we have pretty much no logic code in Robot.java, and most of it is in RobotContainer.
5. Perform both teleop and trajectory navigation (the latter using PathPlanner).
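To make the interface idea above concrete, here is a minimal sketch. The names TurnMotor and TalonSRXTurnMotor are hypothetical, not the actual classes in our repo, and the gains and ticks-per-degree constant are placeholders. It also illustrates the CAN point: the quadrature encoder plugs into the Talon's data port, so the position PID runs on the controller itself and adds no CAN traffic.

// TurnMotor.java - hypothetical interface; the real interfaces in the repo may differ.
public interface TurnMotor {
    void setTargetAngleDegrees(double degrees); // closed-loop target for the turn motor
    double getAngleDegrees();                   // current angle reported by the encoder
}

// TalonSRXTurnMotor.java - example Phoenix 5 implementation with hardware PID.
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.FeedbackDevice;
import com.ctre.phoenix.motorcontrol.can.WPI_TalonSRX;

public class TalonSRXTurnMotor implements TurnMotor {
    private static final double TICKS_PER_DEGREE = 4096.0 / 360.0; // placeholder for a 4096 CPR encoder

    private final WPI_TalonSRX talon;

    public TalonSRXTurnMotor(int canId) {
        talon = new WPI_TalonSRX(canId);
        // The encoder is wired to the Talon's data port; select it as the feedback device.
        talon.configSelectedFeedbackSensor(FeedbackDevice.QuadEncoder, 0, 30);
        // Placeholder gains; tune for the actual hardware.
        talon.config_kP(0, 1.0, 30);
        talon.config_kI(0, 0.0, 30);
        talon.config_kD(0, 10.0, 30);
    }

    @Override
    public void setTargetAngleDegrees(double degrees) {
        // The position closed loop runs on the Talon, not on the RIO.
        talon.set(ControlMode.Position, degrees * TICKS_PER_DEGREE);
    }

    @Override
    public double getAngleDegrees() {
        return talon.getSelectedSensorPosition(0) / TICKS_PER_DEGREE;
    }
}

Swapping in NEOs or a different IMU then only means writing another implementation of the corresponding interface.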
The link to our repo is
Let me know what you think. Any critique is welcome. It’s likely the code in the repo will change somewhat in response to the suggestions provided.
Acknowledgement:
The code was developed using two main projects as examples:
Team 125 (Nutron) - 2023 Season code
Team 3039 (Wildcat Robotics) - 2023 Quicksilver code
Thanks, I really need this with the detailed explanations.
Also, most code on GitHub has a README file to briefly introduce itself. Maybe you could write one.
I love this approach and commend you on it! If you want to expand it, I would highly recommend reading YAGSL to understand some more of the intricacies and improve your project.
Very good point!
We actually looked at YAGSL briefly and decided against the library approach because our code is based on optimization assumptions that make it a bit less “generic” (we decided to expose the actual desired functionality via interfaces instead).
We did not, however, look into it enough to see if there are actual features there that we want to “shamelessly borrow with grateful acknowledgement”. That will probably be our next step in tomorrow’s session.
One item we were thinking of incorporating is more “graphical” telemetry, such as the nice swerve module pictures it draws. We produce most of our telemetry via real-time prints, because for trajectory troubleshooting it was important for us to get real-time numbers as we go through the trajectory. But YAGSL-style telemetry would be a very nice addition.
I would really caution you against this: printing to the console (particularly at high volume) incurs a pretty hefty performance cost (and can even cause network latency), as does dealing with strings in general.
@Ryan_Blue - thank you for the feedback! This is a multi-subject comment, so:
The reason we did not use NetworkTables is that we needed to see ALL of the values produced by the telemetry, not just the current one, and the number of these values can be reasonably large (thousands).
The main application of that is trajectory debugging. While analyzing the chassis holonomic PID, we essentially tried to see, based on the telemetry, how close we were to the expected trajectory values over time, and adjusted the PID accordingly.
If you know an easy way for NetworkTables to store an ever-growing set of values that can be looked at later and loaded into a spreadsheet, let me know; I am all for it.
The alternative solution, which we do plan to implement, is to write the values asynchronously via a stream to a file on a USB stick plugged into the RIO (or the SD card for a Rio2). We already have code for this from last year, but have not incorporated it into this example project yet.
Re: performance degradation and packet loss - we’re definitely aware of that, and have a big warning that some of the telemetry should be used only for trajectory debugging.
We indeed saw high CPU usage and intermittent packet loss when such telemetry was enabled in teleop testing. Without it turned on we’re consistently at 20-30% CPU with no packet loss. But it helped us calibrate the PID and also discover a discrepancy in the encoder-units-to-SI-units conversion for the drive (linear-motion) motors.
One line of code will start logging all NetworkTables values to a log that can be downloaded and analyzed later:
DataLogManager.start();
For analysis, the DataLog tool (installed with WPILib) will let you export values to a CSV, or you can open the logs directly with AdvantageScope.
Thank you, yes, that would be one way, but the documentation says:
DataLogManager provides a convenience function (DataLogManager.log() ) for logging of text messages to the messages entry in the data log. The message is also printed to standard output, so this can be a replacement for System.out.println() .
So, am I reading it wrong, or does it still print to STDOUT, which would result in the same CPU usage issue anyway? It also does not mention whether the writes are asynchronous, which would reduce the CPU usage.
Do you know if there is a “standard” way to remove the STDOUT print and only collect the appropriate messages to file(s) instead? That’s what the code I mentioned above does (the one we plan to put in a project). Our code also implements serialization, though that’s probably less important if the messages contain timestamps.
It only outputs to the console if you use the DataLogManager.log(String) function. That is for outputting text to both the console and the log file. If you read further down the page, it details how to log non-text data (which is not output to the console).
Again: if you call DataLogManager.start(), it will automatically start logging all NetworkTables values to the log file, with no further action required.
Another benefit of using DataLog/NetworkTables is that it enables using software like 6328’s AdvantageScope and WPILib’s Glass, which make reviewing previous and real-time data easy.
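To make those two points concrete, here is a minimal sketch (the entry name and the getFrontLeftTurnAngle() helper are hypothetical) of starting the log manager and appending a numeric value through a typed log entry, which goes to the on-RIO log file only and never prints to the console:

import edu.wpi.first.util.datalog.DataLog;
import edu.wpi.first.util.datalog.DoubleLogEntry;
import edu.wpi.first.wpilibj.DataLogManager;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
    private DoubleLogEntry turnAngleLog;

    @Override
    public void robotInit() {
        DataLogManager.start();                 // also mirrors NetworkTables values into the log
        DataLog log = DataLogManager.getLog();
        turnAngleLog = new DoubleLogEntry(log, "/swerve/frontLeft/turnAngle"); // hypothetical entry name
    }

    @Override
    public void robotPeriodic() {
        // Numeric entry: written to the log file with a timestamp, not to stdout.
        turnAngleLog.append(getFrontLeftTurnAngle());
    }

    private double getFrontLeftTurnAngle() {
        return 0.0; // placeholder for the real encoder read
    }
}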
Thank you. Yes, we will definitely try it! I am all for using standard functionality.
It seems the writes are not serialized, but that is not necessarily important with timestamps being part of the data.
We used Glass in other projects, but only for real-time visualization. So, excellent points. Will implement that this coming week!
The 2024SwerveNEO branch (GitHub - FRC999/2024Swerve at 2024SwerveNEO) now contains the code for the swerve chassis with NEO motors for both drive and turn, using duty cycle encoders for the turn motors. It also uses a NavX2 for the IMU (if you need an example of how to program it correctly for swerve).
We figured out the proper calibration parameters for the hardware PID with such encoders, so teleop runs pretty smoothly. As usual, since this demo is designed as a learning tool, everything is documented. The NEO-based chassis was temporarily given to Team 999 by Team 5142 for testing and experimentation, and we believe this helps both teams make the demo better and keeps the students more engaged.
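Since programming the IMU for swerve comes up a lot, here is a minimal sketch of the interface idea applied to the gyro. The names GyroIO and NavX2Gyro are hypothetical (not our actual classes), and it assumes the NavX2 is on the MXP port, using the vendor AHRS class:

// GyroIO.java - hypothetical interface; swapping IMUs only requires another implementation of this.
import edu.wpi.first.math.geometry.Rotation2d;

public interface GyroIO {
    Rotation2d getYaw(); // robot heading as a Rotation2d, as WPILib kinematics/odometry expect
    void reset();        // zero the heading
}

// NavX2Gyro.java - example implementation for a NavX2 on the MXP port.
import com.kauailabs.navx.frc.AHRS;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.wpilibj.SPI;

public class NavX2Gyro implements GyroIO {
    private final AHRS navx = new AHRS(SPI.Port.kMXP);

    @Override
    public Rotation2d getYaw() {
        // getRotation2d() returns a continuous, CCW-positive heading.
        return navx.getRotation2d();
    }

    @Override
    public void reset() {
        navx.reset();
    }
}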
The master branch of the same code got some important updates as well:
We implemented “Subway Surfer” controls (XBOX controller, with the right joystick to go forward/back, the left joystick to strafe left/right, and the L2/R2 triggers controlling rotation); a control-mapping sketch follows this list.
We found such controls to be more ergonomic.
Control switcher - allows one to switch between the two-joystick and XBOX controllers on the fly, using a switch on our buttonbox.
DataLogManager for important debugging. Based on the previous suggestions, we implemented logging via DataLogManager, which greatly reduces packet loss if you need to debug the code with prints.
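As a rough illustration of the “Subway Surfer” mapping above (the axis signs, deadband, controller port, and the trigger-to-rotation convention are assumptions, not necessarily what our code does):

import edu.wpi.first.math.MathUtil;
import edu.wpi.first.wpilibj.XboxController;

public class SubwaySurferInput {
    private final XboxController controller = new XboxController(0); // assumed driver port 0

    // Forward/back on the right stick (push forward = positive).
    public double getForward() {
        return MathUtil.applyDeadband(-controller.getRightY(), 0.1);
    }

    // Strafe on the left stick (left = positive, matching the WPILib convention).
    public double getStrafe() {
        return MathUtil.applyDeadband(-controller.getLeftX(), 0.1);
    }

    // Rotation from the triggers: one trigger turns one way, the other turns the opposite way.
    public double getRotation() {
        return controller.getLeftTriggerAxis() - controller.getRightTriggerAxis();
    }
}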
Finally, the VisionTesting branch is the one that is getting worked on weekly. This is where we test the LimeLight with the Google Coral. We have already implemented angular tracking of the 2022/23 cones (see a cone, turn the robot so its front faces the cone). We are currently working on the linear measurement (determine how far away the cone is when you see it) and will (hopefully shortly) implement dynamic trajectory generation for PathPlanner (see a cone, drive to the cone while facing it, and stop when the cone is a certain distance from the robot). A rough sketch of the angular-tracking idea is below.
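This is not our actual vision code, just a sketch of the idea: read the horizontal offset published by the LimeLight (the “limelight” table with “tx” and “tv” entries is the LimeLight default) and feed a proportional turn rate to the drivetrain. The gain is a placeholder.

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class ConeTracker {
    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    // Horizontal offset (degrees) from the crosshair to the detected target.
    public double getTargetAngleDegrees() {
        return limelight.getEntry("tx").getDouble(0.0);
    }

    // tv is 1.0 when the LimeLight sees a valid target.
    public boolean hasTarget() {
        return limelight.getEntry("tv").getDouble(0.0) >= 1.0;
    }

    // Simple proportional turn command toward the cone; kP is a placeholder gain.
    public double getTurnRate() {
        double kP = 0.02;
        return hasTarget() ? -kP * getTargetAngleDegrees() : 0.0;
    }
}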
We hope our demo will help other teams develop their code and, more importantly, understand the inner details of a working swerve.