FRC 6328 Mechanical Advantage 2024 Build Thread

Software Update: Week 6

VTS (Vehicle Trajectory Service) :police_car: :whale: :earth_africa:

We previously mentioned that we were moving away from Choreo’s GUI for planning auto paths and instead developing our own service to do this using TrajoptLib. Now that the service is mostly complete and we have tested autos with it, we can give a basic walkthrough of how it works.

  1. trajectory_native contains the trajectory generation logic using TrajoptLib’s C++ methods. It uses the same parameters as Choreo would, such as vehicle constraints and waypoints.

  2. We then build this as a service using Earthly. We chose to use Earthly as a build tool because it simplifies the build process. Earthly builds run as Docker containers, so we only have to worry about building dependencies on Linux. This helps to make the service portable, regardless of architecture. Earthly builds are also cached, reducing build times for subsequent VTS builds.

    • Similar to a Dockerfile, Earthly uses an Earthfile to build the Docker image (see Earthfile)
  3. On the Java side, we open a gRPC channel to the VTS container and generate requests to the service using the paths we define alongside a hash (see GenerateTrajectories and DriveTrajectories). Each response from the service is written to a .pathblob file that we then follow using TrajectoryController.

  4. GenerateTrajectories is called from a separate Gradle task that runs during builds. Using the generated hash, we can determine whether to generate the trajectory or skip the task (a simplified sketch of this flow follows the list).

  5. Now we can create autonomous routines by sequencing the generated trajectories with commands (see AutoBuilder and AutoCommands).
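
For concreteness, here is a minimal sketch of the Java-side flow from steps 3 and 4. The stub name VehicleTrajectoryServiceGrpc, the port, and the request shape are assumptions for illustration; the real definitions live in our repository.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class TrajectoryClientSketch {
  /** Hashes a path definition so unchanged paths can skip regeneration. */
  public static String hashPath(String pathDefinition) throws Exception {
    byte[] digest =
        MessageDigest.getInstance("SHA-256")
            .digest(pathDefinition.getBytes(StandardCharsets.UTF_8));
    return HexFormat.of().formatHex(digest);
  }

  public static void main(String[] args) throws Exception {
    // Open a channel to the VTS container (address and port are assumptions).
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("localhost", 56328).usePlaintext().build();
    // A generated blocking stub would be created here, e.g.
    // VehicleTrajectoryServiceGrpc.newBlockingStub(channel), the request built
    // from our path definitions plus the hash, and each response written to a
    // .pathblob file for TrajectoryController to follow.
    channel.shutdown();
  }
}
```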

This is a bleeding-edge development, and as such we are currently unable to provide support for other teams to use the VTS this year. If you have inquiries, feel free to ask on this thread.

Auto Strategy :robot:

With so many possible autos to run this year, we have focused on developing autos with alliance compatibility in mind. This means making sure that other teams on our alliance have space to run their autos. We’ve been experimenting with a five-note auto that prioritizes the centerline notes and spends as little time as possible near the starting line. Here is an early version of this auto on our dev bot:

Currently, this auto takes well over 15 seconds, largely due to the weaker drivetrain on our dev bot, which uses NEOs with L2 MK4i gearing. Our competition robot will use Kraken X60s for both steer and drive with L3 gearing.

Arm :hammer:

This year, all of our main assemblies (climber, shooter, etc.) are mounted on a pivoting arm which is powered by two Kraken X60s.

Characterization 🫨

Something new this year for us is using current control for the arm, drive, and flywheels. Since controlling current is fundamentally different from controlling voltage, the characterization routines for the arm are also different.

  1. For characterizing the arm, we first keep the arm in an upright position and then apply an increasing amount of current until the arm moves, giving us our kS value.

  2. Then, we hold the arm flat at 0 degrees and apply current until it moves again; that current value is kG + kS.

    • Although we are not using a kA value on our devbot, kA would be the ratio that converts the torque we want to apply to the arm into the torque we command from the motor. Since current controls torque directly, kA is relatively easy to calculate given our arm MOI (as measured in CAD) and motor specs.
  3. These are the only values needed for the arm feedforward: there is no need for a kV term, since the torque we apply does not depend on our current velocity. Another thing to note is that the gains for our closed-loop control are also in different units (amps per rotation of error, etc.), making our kP and kD values much higher than we usually expect, at 4000 and 120. A minimal sketch of the resulting feedforward follows this list.
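
Below is a minimal sketch of a current-based arm feedforward built from the characterized gains described above; the unit conventions are placeholders, not our tuned implementation.

```java
/** Sketch of an arm feedforward that outputs current (amps) instead of volts. */
public class ArmCurrentFeedforward {
  private final double kS; // amps to overcome static friction
  private final double kG; // amps to hold the arm level (at 0 degrees)
  private final double kA; // amps per rad/s^2 of desired acceleration

  public ArmCurrentFeedforward(double kS, double kG, double kA) {
    this.kS = kS;
    this.kG = kG;
    this.kA = kA;
  }

  /**
   * @param positionRad arm angle in radians, 0 = horizontal
   * @param velocityRadPerSec profiled velocity (only its sign is used)
   * @param accelRadPerSecSq profiled acceleration
   * @return feedforward output in amps; there is no kV term because the
   *     torque (current) we apply does not depend on velocity
   */
  public double calculate(double positionRad, double velocityRadPerSec, double accelRadPerSecSq) {
    return kS * Math.signum(velocityRadPerSec)
        + kG * Math.cos(positionRad)
        + kA * accelRadPerSecSq;
  }
}
```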

Motion (:dollar:) Profiling

As soon as the arm was close to being finished, the programming team quickly got to work writing the code to support it. Our first iteration of the arm code utilized CTRE’s Motion Magic®, and we were able to get it running pretty well within the day with just closed-loop control. However, this meant that we were compromising on logging, replayability, and sim integration, because the profiling algorithm was not available to our code. This is why we instead chose to run a TrapezoidProfile for the arm on the roboRIO. Although we lose the 1000Hz profile running on the motor controller, we would much rather have the profile fully known to the robot code.
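
Here is a minimal sketch of running WPILib’s TrapezoidProfile on the roboRIO each loop; the constraint values are placeholders, and the resulting setpoint would be combined with the feedforward above and the onboard current-based position controller.

```java
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public class ArmProfileSketch {
  // Placeholder constraints: max velocity (rad/s) and max acceleration (rad/s^2)
  private final TrapezoidProfile profile =
      new TrapezoidProfile(new TrapezoidProfile.Constraints(4.0, 10.0));
  private TrapezoidProfile.State setpoint = new TrapezoidProfile.State();

  /** Called every 20ms loop; advances the profile toward the goal. */
  public TrapezoidProfile.State step(double goalPositionRad) {
    setpoint =
        profile.calculate(0.02, setpoint, new TrapezoidProfile.State(goalPositionRad, 0.0));
    // The setpoint position and velocity are then sent to the motor's onboard
    // closed-loop controller along with the feedforward current.
    return setpoint;
  }
}
```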

We are pretty satisfied with the capabilities of the Kraken in our current setup for controlling the arm, and we are excited about playing around with the Kraken for our robot!

Pose Estimation :cowboy_hat_face::compass:

In our first software update, we wrote about our changes to the pose estimation system.

Since then, we have found two issues that required changes:

  1. The first problem we observed was that the new method would output a different pose estimate if the vision frames were applied out of order, unlike last year’s implementation. This happens because the new algorithm keeps track of a single estimatedPose member that is modified every time a vision update is applied, whereas older solutions recalculated from one “base” pose each time an estimated pose was requested. The solution is to sort the vision updates as they arrive in the vision subsystem; we do that here (a minimal sketch appears at the end of this section). This does not technically fix everything: a vision update could still arrive with a timestamp earlier than the last update applied in the previous robot cycle, putting updates out of order across cycles. Though this is theoretically possible, in testing we found that it did not happen at all, and even if it did occur, the effects would not be noticeable.

  2. The second issue we observed was that the pose estimator caused the current estimate to orbit around the vision pose while applying a vision update. This problem was due to using Twist2d objects to determine and apply the difference between the vision pose and the estimated pose. Since the Pose2d.exp() function maps the pose along a constant-curvature arc defined by the Twist2d object, the estimated pose orbits around the actual vision pose. Here you can see how the pose estimator reacted to a vision pose on the opposite side of the field from the estimated pose:

Using twists :japanese_ogre: (bad)

To stop this from happening, we changed the algorithm to use transforms to apply vision updates to the estimated pose. The vision update should be applied as simply the difference between two poses, rather than as a curve that maps one onto the other. Using this method, the estimate now moves in a straight line toward the vision pose instead of orbiting around it. See here in RobotState.java for our fix; a simplified sketch follows the clip below.

Using transforms :innocent: (good)
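
This sketch shows the idea of the transform-based update; the gain here stands in for the Kalman-style weighting in our real implementation, which this simplifies considerably.

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;

public class VisionUpdateSketch {
  /** Moves the estimate a fraction of the way toward the vision pose along a straight line. */
  public static Pose2d applyVisionUpdate(Pose2d estimate, Pose2d visionPose, double gain) {
    // The difference between the two poses as a transform, not a twist
    Transform2d correction = new Transform2d(estimate, visionPose);
    // Scale the translation and rotation components by the filter gain
    return estimate.plus(
        new Transform2d(
            correction.getTranslation().times(gain), correction.getRotation().times(gain)));
  }
}
```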

For a better understanding of what poses and twists are, see section 10.2, “Pose Exponential,” of Controls Engineering in the FIRST Robotics Competition.
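
Returning to the first issue above, here is a minimal sketch of sorting buffered vision updates by capture timestamp before applying them; TimestampedVisionUpdate is an illustrative record, not our exact class.

```java
import java.util.Comparator;
import java.util.List;

public class VisionOrderingSketch {
  /** Illustrative stand-in for a vision observation with its capture time. */
  public record TimestampedVisionUpdate(double timestampSecs /* pose, std devs, ... */) {}

  /** Sorts updates oldest-first so the estimator always sees them in order. */
  public static void sortUpdates(List<TimestampedVisionUpdate> updates) {
    updates.sort(Comparator.comparingDouble(TimestampedVisionUpdate::timestampSecs));
  }
}
```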

Structure :brick:

In the previous software update, we briefly mentioned how we plan to structure the code this year, and as the build season comes to a close, we have settled on an appropriate pattern for our robot code. We have separated the robot into four parts: drive, superstructure, rollers, and flywheels. The drive is quite easy to define as a separate system, but the rest of the subsystem classes required some thought. We initially thought that the superstructure would include every subsystem on the robot other than the drive, but that quickly became too complicated, as many of the internal subsystems inside the superstructure had to run independently of each other. Instead, we split it into flywheels, superstructure, and rollers. The flywheels are just the left and right flywheels, the superstructure covers the rest of the robot’s moving mechanisms, and the rollers are every roller that maneuvers the note through the robot.

The separation of the rollers also introduced the GenericRollerSystem class, which has helped us keep up with the quick iteration of the devbot in code. As more and more of the competition robot comes together, the code for the superstructure and rollers becomes more and more concrete. A simplified sketch of the GenericRollerSystem idea follows.
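
The names below are illustrative rather than our exact API, but they show the pattern: a shared base class parameterized on an enum of goals, so each new roller subsystem only has to define its goals and hardware IO.

```java
/** Sketch of a generic roller subsystem shared by every roller on the robot. */
public abstract class GenericRollerSystem<G extends Enum<G> & GenericRollerSystem.VoltageGoal> {
  /** Each goal supplies the voltage to run its roller at. */
  public interface VoltageGoal {
    double getVolts();
  }

  private final String name;

  protected GenericRollerSystem(String name) {
    this.name = name;
  }

  public String getName() {
    return name;
  }

  /** Implemented by each concrete roller subsystem to report its current goal. */
  public abstract G getGoal();

  /** Called periodically; applies the goal voltage to the hardware. */
  public void periodic() {
    runVolts(getGoal().getVolts());
  }

  /** Hardware IO implemented per roller (real or simulated). */
  protected abstract void runVolts(double volts);
}
```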

Below is a diagram that provides a basic overview of the subsystem structure:

The blue boxes in the top row are the major subsystems, and the yellow boxes below represent the parts of each subsystem.

Mechanical post coming soon! :grimacing:

@SuryaT, Manthan, @nharnwal
