FRC 6328 Mechanical Advantage 2024 Build Thread

Software Update: Compbot Bringup

We’re loading in to our first event later today, so what better time to discuss all of the software bringup the team has been working on!

Auto Paths

For Granite State, we are running with two primary autonomous routines. These paths have been designed with alliance partner compatibility in mind, focusing particularly on centerline notes.

4 Note Far Side

This auto shoots the preloaded note, then grabs and scores three notes from the centerline. This video shows the auto running on the competition robot.

We particularly like this path as it enables maximum compatibility for alliance partners scoring spike notes and the close centerline notes. We also expect the far centerline notes to be less contested than those close to the speaker, making this a safe option in almost any match.

Code: Trajectories & Commands

5 Note Close Side

This auto shoots the preloaded note, grabs and shoots one spike note, then grabs and shoots three notes from the centerline. Again, we like that this path allows our alliance partners to score spike and/or far side centerline notes while we stay out of the way.

Code: Trajectories & Commands

The Details

As we previously discussed, all of these auto paths are generated using TrajoptLib. We’ve enjoyed experimenting with this new technique, and it has allowed us to more easily push the limits of this drivetrain.

One of our favorite maneuvers generated with the help of TrajoptLib is the intaking path for the second centerline note in our 4 note far side routine. We like to call this “the swivel.” The path allows the robot to grab the note with the intake pointed in the correct direction while losing as little speed as possible:

"The Swivel"

All of our routines are also designed to run with deterministic timing, meaning that the execution of every step of the auto takes a well-defined length of time. This is beneficial because it allows us to completely split the drive sequence and superstructure/roller sequence, using the timing of the trajectories and shots to keep everything in sync. We can then easily run complex sequencing like feeding a note to the shooter before the end of a trajectory, such that the drive is never delayed but shoots at the exact moment it is stationary. Here’s a slow-motion video of one of the shots where you can see this strategy in action:
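To make the split concrete, here is a minimal sketch of how one timing-synced auto step could be expressed in WPILib's command-based framework. The command factories (followTrajectory, aimSuperstructure, feedToShooter) and the lead time are placeholders for illustration, not our actual auto code:

```java
import static edu.wpi.first.wpilibj2.command.Commands.*;

import edu.wpi.first.wpilibj2.command.Command;

public class AutoSequencing {
  /** Builds one auto step where the feed begins just before the trajectory finishes. */
  public static Command shootOnArrival(
      Command followTrajectory,
      double trajectorySeconds,
      Command aimSuperstructure,
      Command feedToShooter,
      double feedLeadSeconds) {
    // The drive sequence and the superstructure/roller sequence never reference each
    // other; they stay in sync purely through the known duration of the trajectory.
    Command superstructureSequence =
        parallel(
            aimSuperstructure,
            sequence(waitSeconds(trajectorySeconds - feedLeadSeconds), feedToShooter));
    return parallel(followTrajectory, superstructureSequence);
  }
}
```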

Slow-Motion Shot

While measuring the length of auto routines, we always keep in mind that auto is not 15 seconds long:

Our original analysis was from 2022, but we observed the same grace period in 2023 (and utilized it to our advantage :innocent:). We target a maximum of 15.3 seconds for our auto routines, which we can essentially guarantee will fit within the real auto period when connected to the FMS.

Auto Align Controls

In addition to automatic aiming for speaker shots, there are several places where we are using full automatic alignment to assist the driver (we have some experience in this area :smile:).

First, the robot will automatically align when scoring in the amp. After the driver starts the alignment, the arm is automatically raised when the robot is within 5 feet and 120 degrees of the target. The robot is primarily utilizing AprilTags on the speaker and stage during this maneuver, as well as the amp AprilTag when approaching from mid/long range. See the last section of this post for more details on our camera placement. The code for the amp auto align can be found here.
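As a rough illustration, the arm-raise trigger reduces to a distance and heading check on the pose estimate. The thresholds come from the numbers above, but interpreting "120 degrees" as an allowed heading error relative to the target pose, along with the class and method names, are our own assumptions:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.util.Units;

public class AmpAlignHelpers {
  private static final double RAISE_DISTANCE_METERS = Units.feetToMeters(5.0);
  private static final double RAISE_HEADING_TOLERANCE_DEG = 120.0;

  /** Returns true when the arm should be raised during amp auto-align. */
  public static boolean shouldRaiseArm(Pose2d robot, Pose2d ampTarget) {
    double distance = robot.getTranslation().getDistance(ampTarget.getTranslation());
    double headingErrorDeg =
        Math.abs(robot.getRotation().minus(ampTarget.getRotation()).getDegrees());
    return distance < RAISE_DISTANCE_METERS && headingErrorDeg < RAISE_HEADING_TOLERANCE_DEG;
  }
}
```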

The second use for auto align is lining up for the climb, which requires the robot to position itself between the chain and the base of the stage. This feature aligns the robot to the nearest chain, primarily utilizing AprilTags on the amp, speaker, and source.

The code for the auto alignment controller can be found here. The core algorithm is the same as our 2023 code: it drives in a straight line to the target pose using a trapezoid profile on distance, with a feedforward that allows the profile to take over smoothly when the alignment starts at high velocity.
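Here is a condensed sketch of that style of controller built on WPILib's TrapezoidProfile: the profile runs on the remaining distance to the target, and it is seeded with the robot's current velocity toward the target so it can take over smoothly at speed. The constraint values and the omission of the heading controller are simplifications, not a copy of the linked code:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public class DriveToPoseSketch {
  private final TrapezoidProfile profile =
      new TrapezoidProfile(new TrapezoidProfile.Constraints(3.0, 3.0)); // m/s, m/s^2
  private TrapezoidProfile.State state; // (distance remaining, velocity along that distance)

  /** Call once when the alignment starts, seeding the profile with the current velocity. */
  public void reset(Pose2d robot, Pose2d target, Translation2d fieldVelocity) {
    Translation2d toTarget = target.getTranslation().minus(robot.getTranslation());
    // Component of the current field-relative velocity pointing at the target.
    double velocityTowardTarget =
        fieldVelocity.rotateBy(toTarget.getAngle().unaryMinus()).getX();
    state = new TrapezoidProfile.State(toTarget.getNorm(), -velocityTowardTarget);
  }

  /** Call every loop; returns field-relative speeds along the straight line to the target. */
  public ChassisSpeeds calculate(Pose2d robot, Pose2d target, double dtSeconds) {
    Rotation2d direction =
        target.getTranslation().minus(robot.getTranslation()).getAngle();
    state = profile.calculate(dtSeconds, state, new TrapezoidProfile.State(0.0, 0.0));
    double speed = -state.velocity; // Profile velocity is negative while closing the distance.
    return new ChassisSpeeds(speed * direction.getCos(), speed * direction.getSin(), 0.0);
  }
}
```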

Of course, we always want to have fallback modes when implementing complex features like automatic alignment. We have a number of physical override switches on our operator consoles, one of which disables all auto align features and allows the driver to line up manually.

Wheel Radius Characterization

Measuring the precise wheel radius of the robot is critical for producing accurate odometry measurements. The process that we’ve always used is to push the robot along the ground by hand, measure the distance using a tape measure, and compute the wheel radius based on the rotations of the wheels. Unfortunately, this is time-consuming and error-prone. Even though wheel radius can change dramatically while wearing in new tread, we usually didn’t recharacterize often enough simply because it was too difficult.

This year, we developed a new technique for wheel radius characterization on swerve that uses the gyro instead. The robot slowly spins in a circle and uses the gyro rotation plus the known wheel base dimensions to calculate the wheel radius:

wheel radius (meters) = gyro delta (radians) * drive base radius (meters) / wheel position delta (radians)

This process is completely automated and highly precise, so we’ve been using it much more regularly to keep track of wheel radius changes. The code for this characterization routine can be found here: WheelRadiusCharacterization
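The linked command handles driving the slow spin and accumulating the gyro and wheel deltas; the final calculation itself boils down to a few lines, sketched here:

```java
public class WheelRadiusMath {
  /**
   * @param gyroDeltaRad total accumulated gyro rotation during the spin (radians)
   * @param wheelDeltasRad drive position change of each module during the spin (radians)
   * @param driveBaseRadiusMeters distance from the robot center to each module (meters)
   */
  public static double wheelRadiusMeters(
      double gyroDeltaRad, double[] wheelDeltasRad, double driveBaseRadiusMeters) {
    // Average the magnitude of the wheel rotation across all modules.
    double averageWheelDelta = 0.0;
    for (double delta : wheelDeltasRad) {
      averageWheelDelta += Math.abs(delta) / wheelDeltasRad.length;
    }
    // Each module travels an arc length of (gyro delta * drive base radius) meters,
    // which must equal (wheel delta * wheel radius).
    return gyroDeltaRad * driveBaseRadiusMeters / averageWheelDelta;
  }
}
```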

Automatic Flywheel Spinup

During driver practice, we observed that we were consistently losing time waiting for the flywheels to spin up before shooting. The spin-up time is ~1s, which is usually longer than the time required to aim the arm and drive into position.

Our solution is to use the robot’s localization data to automatically pre-accelerate the flywheels to 75% of full speed. All of the following must be true for the flywheels to pre-accelerate:

  • A note is being held in the indexer
  • The robot is within 25 feet of the speaker
  • The robot is not actively climbing

Our goal is to minimize unnecessary drain on the battery when the flywheels are unneeded, while ensuring that they are ready when we begin aiming. This is the region where the flywheels are pre-accelerated when holding a note:
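A sketch of the decision logic might look like the following; the method and parameter names are illustrative rather than our actual subsystem code:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.util.Units;

public class FlywheelPrespin {
  private static final double PRESPIN_RADIUS_METERS = Units.feetToMeters(25.0);
  private static final double PRESPIN_FRACTION = 0.75;

  /** Returns the fraction of full flywheel speed to request while idling. */
  public static double idleSetpoint(
      boolean noteInIndexer, boolean climbing, Pose2d robotPose, Translation2d speakerPosition) {
    boolean shouldPrespin =
        noteInIndexer
            && !climbing
            && robotPose.getTranslation().getDistance(speakerPosition) < PRESPIN_RADIUS_METERS;
    return shouldPrespin ? PRESPIN_FRACTION : 0.0;
  }
}
```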

Here’s what the cycles look like after implementing this feature:

Cameras & AprilTags

As we discussed previously, we are continuing to run four cameras for AprilTag localization using Northstar. In 2023, the cameras were identical and pointed in four directions to enable consistent localization when scoring on both sides of the robot. For 2024, we are using a more specialized arrangement of cameras.

Placement

On the front of the robot, we have three cameras. There are two symmetric 75° FOV cameras on the left and right to maximize our field of view, plus a single 45° FOV camera in the center. This “zoom” camera allows us to more accurately localize at long range while pointed at the goal, which improves the reliability of distance measurements for shooting.

Below you can see the three front cameras on the robot along with the fields of view. Notice that the camera frustums overlap such that the tags will always be in view even if the robot is against the subwoofer.

Another benefit of having a wide FOV on the front of the robot is that we have a solid view of multiple AprilTags from two sides of the stage, which makes automatic alignment for climbing more consistent:

In addition to the three cameras on the front, we have a single 90° FOV camera on the back. This enables more consistent localization while moving around the middle of the field. The FOV seems to be near the limit of what the fisheye camera model is able to accurately represent and undistort, but we’ve been very pleased with the accuracy and range thus far.

Filtering

We’ve made a couple of changes to the filtering for our pose estimation. First, we adjust the vision standard deviations based on which camera captured each frame. The zoom camera is trusted more than the side cameras, and the wide angle camera is trusted less. This approximately matches the noise we see from each camera based on the different FOVs.
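As a sketch, this weighting can be as simple as scaling a base standard deviation per camera before the measurement reaches the pose estimator; the scale factors below are placeholders rather than our real tuning:

```java
import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;

public class VisionStdDevs {
  // Index order: [left 75°, right 75°, center 45° "zoom", rear 90° wide angle].
  // Smaller standard deviations mean the estimator trusts that camera more.
  private static final double[] CAMERA_TRUST_SCALES = {1.0, 1.0, 0.5, 2.0};

  public static Matrix<N3, N1> stdDevsForCamera(
      int cameraIndex, double baseXYStdDevMeters, double baseThetaStdDevRad) {
    double scale = CAMERA_TRUST_SCALES[cameraIndex];
    return VecBuilder.fill(
        baseXYStdDevMeters * scale, baseXYStdDevMeters * scale, baseThetaStdDevRad * scale);
  }
}
```

The resulting vector is the kind of value that would be passed alongside each vision measurement, e.g. as the standard deviations argument of addVisionMeasurement in a WPILib-style pose estimator or the equivalent in a custom estimator.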

The second change relates to disambiguation of single tags. You can read more about the problem of tag ambiguity here. In 2023, we rejected data from any tags where the reprojection error from one pose solution was more than 15% of the error of the other solution. This was a very aggressive approach to filtering which ensured that our pose estimates for cone scoring were accurate.

This year, there are many more tags that are isolated on the field and are likely to be viewed individually (such as those on the stage and amp). Also, the required tolerances for pose estimation are significantly more lax compared to the <1 inch tolerance for cone scoring in 2023. Therefore we’ve adopted a new disambiguation strategy:

  • Reject any solution whose reprojection error is more than 40% of the alternative solution's error.
  • Choose the better solution based on which is closer to our current rotation estimate (the gyro data is very stable and makes for a reliable “ground truth” reference).
  • Don’t update rotation estimates for single tag estimates, to ensure inaccurate disambiguations don’t skew future results.

We’ve seen very positive results with this strategy, allowing us to quickly localize off of single-tag detections. None of this disambiguation is necessary when viewing multiple tags (like on the speaker or source), so those detections are even more reliable.
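For illustration, here is a rough sketch of one reading of that disambiguation logic; the types, threshold interpretation, and structure are our own assumptions rather than a copy of the actual implementation:

```java
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation2d;

public class TagDisambiguation {
  private static final double AMBIGUITY_THRESHOLD = 0.4;

  /** Picks a robot pose from the two PnP solutions produced by a single tag. */
  public static Pose3d disambiguate(
      Pose3d solutionA, double errorA,
      Pose3d solutionB, double errorB,
      Rotation2d currentRotationEstimate) {
    double betterError = Math.min(errorA, errorB);
    double worseError = Math.max(errorA, errorB);
    if (betterError / worseError < AMBIGUITY_THRESHOLD) {
      // The reprojection errors clearly separate the solutions; take the better one.
      return errorA < errorB ? solutionA : solutionB;
    }
    // Otherwise, fall back to the gyro-based rotation estimate to pick a solution.
    double rotationErrorA =
        Math.abs(solutionA.getRotation().toRotation2d().minus(currentRotationEstimate).getRadians());
    double rotationErrorB =
        Math.abs(solutionB.getRotation().toRotation2d().minus(currentRotationEstimate).getRadians());
    return rotationErrorA < rotationErrorB ? solutionA : solutionB;
  }
}
```

Whichever solution is chosen, the rotation component of a single-tag estimate is still ignored (per the last point above) so that an incorrect pick can't skew the gyro-based rotation estimate.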

Final Thoughts

It’s been a long couple of weeks, but we are very excited to see this robot in action at Granite State. Make sure to check out our teaser video too!

-Jonah

GIF: The robot on a field that includes all of the AprilTags
