FRC 6328 Mechanical Advantage 2024 Build Thread

Why are you using TorqueCurrent instead of PositionTorqueCurrent? I might have missed it if you had been using the latter.

1 Like

You can absolutely run pose estimation with a single camera, the downside is that you won’t be able to consistently track the robot position while driving around the field (only when looking at the speaker, for example). If your main focus is auto aiming at the speaker and measuring distance this shouldn’t be an issue, though you may wish to tune the pose estimator such that vision estimates are more trusted (since your odometry measurements may be very far off as you approach the speaker).

Some of the main benefits of a multi-camera system are knowing your pose before you’re pointed at the speaker, and correcting the pose continuously during auto paths. In general, I would suggest that multi-camera pose estimation is best as an offseason project before implementing it in season.

Yes, the cameras require different calibrations to maximize accuracy. In 2023, this difference was significant enough that we could not have scored cones automatically if we shared calibration data. We once mixed up the four cameras and were able to re-identify all of them just based on the accuracy of the calibrations when applied to each camera.

The accuracy we require from pose estimation this year is significantly less than in 2023, but we still plan to calibrate all of the cameras individually (after setting focus, which also affects calibration).

We use TorqueCurrentFOC for characterizing kS (as described in our post), and PositionTorqueCurrentFOC or VelocityTorqueCurrentFOC for all of the other control modes (position on the arm and swerve turning, velocity on the flywheels and swerve drive).
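
For anyone curious what this looks like in code, here’s a minimal sketch (placeholder device IDs and setpoints, not our actual configuration) of applying these Phoenix 6 control requests to a TalonFX:

```java
// Minimal sketch of Phoenix 6 torque-current control requests; IDs and setpoints are placeholders.
import com.ctre.phoenix6.controls.PositionTorqueCurrentFOC;
import com.ctre.phoenix6.controls.TorqueCurrentFOC;
import com.ctre.phoenix6.controls.VelocityTorqueCurrentFOC;
import com.ctre.phoenix6.hardware.TalonFX;

public class TorqueControlSketch {
  private final TalonFX motor = new TalonFX(0); // placeholder CAN ID

  // Phoenix 6 encourages creating control requests once and mutating them each loop.
  private final TorqueCurrentFOC characterization = new TorqueCurrentFOC(0.0);
  private final VelocityTorqueCurrentFOC velocity = new VelocityTorqueCurrentFOC(0.0);
  private final PositionTorqueCurrentFOC position = new PositionTorqueCurrentFOC(0.0);

  /** Apply a raw torque current (amps), e.g. while ramping output to measure kS. */
  public void runCharacterization(double amps) {
    motor.setControl(characterization.withOutput(amps));
  }

  /** Closed-loop velocity in rotations per second (flywheels, swerve drive). */
  public void runVelocity(double rotationsPerSec) {
    motor.setControl(velocity.withVelocity(rotationsPerSec));
  }

  /** Closed-loop position in rotations (arm, swerve turning). */
  public void runPosition(double rotations) {
    motor.setControl(position.withPosition(rotations));
  }
}
```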

4 Likes

What do you think are the merits of skipping out on 3D tracking in this game and just using the position of the center of the tag, robot heading, and trig to figure out your pose on the field? I remember 4414 mentioning in their reveal thread last year that they got the best results with that model, and I think Orbit uses it too. How would you go about implementing this if you did this instead of 3D tracking? Is it still necessary to calibrate cameras? Any best practices to minimize jitter?

2 Likes

Was there a specific reason/benefit to moving to current-based control?

1 Like

Full 3D solving (e.g. solvePnP) and simple trig (a la retroreflective tape) are both valid methods of calculating the robot pose based on target positions. The benefit of simple trig is that it may produce measurements that are more stable than a 3D solve, since it is less sensitive to noise in the corner detections of the tags. However, the downside of this approach is that an inaccurate camera position or angle (off by even a small fraction of a degree) will consistently offset every measurement. This is a significant problem for judging distance for shooting, where you might be significantly off from your true position and be unable to tell because the measurements are consistent with each other.

3D tracking (once calibrated) is much less subject to these types of offsets, and noise can be handled with properly tuned filters; this is especially true for our use case with multiple high-resolution and high-framerate cameras. Another major benefit is that data from multiple tags in a single frame can be combined to form a very accurate estimate even from far distances, while simple trig needs to deal with each tag separately (increasing noise). 3D targeting also allows for the gyro to be corrected during the match, though this is admittedly not a major consideration given the accuracy of modern FRC gyros.

There’s nothing wrong with the simple trig approach to localization, but we would much rather calibrate the cameras and tune appropriate filters in order to ensure that we can benefit from multiple tags and have distance measurements that aren’t consistently offset.

WPILib used to have a utility class to help with these calculations, but unfortunately the relevant methods were removed. The method that did this from our 2022 code is here as a reference.

The most critical part of making a pipeline like this work is getting the camera position, height, and angle exactly right. We actually used a fudge factor in 2022 where we adjusted the camera angle for the calculations by ~0.1° in order to get the distances accurate enough for shooting.
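
For reference, the core of the trig approach is just the standard pitch-to-distance calculation. Here’s a bare-bones sketch (the camera constants are placeholders, and this is not pulled from our 2022 code), which also shows why the mounting angle needs to be so precise:

```java
// Sketch of the classic camera-pitch-to-distance calculation; all constants are placeholders.
public class TrigDistanceSketch {
  private static final double CAMERA_HEIGHT_METERS = 0.75; // placeholder
  private static final double TARGET_HEIGHT_METERS = 1.45; // placeholder (e.g. tag center height)
  private static final double CAMERA_PITCH_RADIANS = Math.toRadians(30.0); // placeholder mount angle

  /**
   * @param targetPitchRadians vertical angle to the target as reported by the camera
   * @return horizontal floor distance from the camera to the target, in meters
   */
  public static double distanceToTarget(double targetPitchRadians) {
    // A small error in CAMERA_PITCH_RADIANS shifts every distance estimate by a consistent amount.
    return (TARGET_HEIGHT_METERS - CAMERA_HEIGHT_METERS)
        / Math.tan(CAMERA_PITCH_RADIANS + targetPitchRadians);
  }
}
```

Combining this distance with the robot heading and the camera’s yaw to the target gives a field-relative position estimate.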

We wanted to take advantage of FOC on the Talons, and torque-based control is the native way that FOC works. Torque-based control has a number of other benefits described here, including the fact that a velocity feedforward (kv) is generally unnecessary when tuning. As we described, the only characterization we run is for static friction (we don’t use ka and haven’t found a mechanism where it is needed). While CTRE provides a hybrid method that allows for voltage and duty-cycle based FOC, we would rather use FOC in its native form instead of relying on a barely-documented algorithm that ultimately makes the behavior harder to tune.

This is our first year using FOC and torque-based control, so the other reason for using it is to gain experience with it and understand where it has more or less benefit for our use cases.

13 Likes

Pre-GSD Update

Hope everyone is having a great season. We’ve been pushing very hard lately, keeping our heads down and putting in some hours to try to get Comp to a stable state before GSD. Here are some high-level updates; as always, please feel free to ask any questions and check out our CAD here.

Comp update

Finally coming together. After our last update, we spent a lot of time assembling, disassembling, and reassembling just about every subsystem on the robot. Our internal motto this year is “Everything is Important,” which is being taken very literally (much to some of the students’ dismay, I imagine). We’re striving for great attention to detail on every subsystem: mechanical, electrical, and software. Around this time last week we were able to get most of the systems together and pass them off to software for bringup.

Autos
A large priority for our auto development this year was sorting out which autos would be competitive but also compatible with a wide array of other teams. With that in mind, we decided to write some autos focused on the far-side notes. While a bit more aggressive to accomplish, this approach lets us stay out of the way of our alliance partners and gives us a high level of confidence that we’ll grab at least a few notes before someone else gets to them. Here’s an example of one of the autos we’ll have in our inventory for this weekend.

Systems Automation
This year we know that the ends of the cycles will make or break our ability to excel against the competition, so we wanted to put a couple of simple automation features into the system to simplify things for the driver and deliver more consistent, extremely repeatable results. Two of these are Auto Amp Align and Auto Climb Align. In these modes, the driver can hold a single button on the controller, and the robot will navigate to the target position and put itself in a state that lets the driver score once they’re confident the position is correct. Here are some videos of each of those.

Auto Climb Align

Auto Amp Align

Trap
A big priority for us was getting our trap mechanism brought up for GSD. Here’s a short video showing the automation of the system working. We still need to tweak it a little, but the performance is promising.

Moving forward
If you’re interested in seeing our daily progress, please head over to our YouTube channel and subscribe. We post there essentially daily, as it’s a lot easier for us to keep up with updates. We don’t have a great system for doing OA posts with the students yet; it’s something we’ll work on this summer to improve for the future.

As always, happy to answer any questions,
Dave

39 Likes

Amazing work as usual. I am looking forward to seeing you guys at GSDE!

1 Like

What 3D printed material did you use for your roller ends? What keeps the set screw in place? Robot looks absolutely amazing as always, can’t wait to see it compete!

1 Like

https://a.co/d/hSllDql

This is the material, and we print it on our Bambus. The set screws are held by heat-set inserts. @Maximillian will do a writeup on the roller at some point; we’ve been loving them.

2 Likes

Thanks for the response! We’ve been having some issues with our ABS prints, this seems like a good option!

2 Likes

Really looking forward to asking your team a variety of questions again at events. Incredibly smooth movements in both auto and tele.

Very nice work, your team should be super proud!

1 Like

Some were also printed using DURAMIC 3D PLA Plus because it was easier to print. Eventually we’ll probably move them all to the CF Dave linked.

2 Likes

Here’s another fun video of us doing a little driver practice last night.

A bunch of other random videos.

Comp vs Dev Walkaround

Comp vs Dev Sprint Test

Indexer Double-Sided Belt Demo

15 Likes

I noticed you guys made some tweaks to your vision system… when the best vision people out there are making changes, you have to listen! In particular, one of your ambiguity checks went from 0.15 to 0.4 from last year. Any reason for this? Are you seeing higher ambiguities?

We barely used AprilTags last year, but I know this tag family (36h11) profiles as harder to detect but with fewer false positives. Was it related to the tag family switch?

I also noticed each camera gets a built-in standard deviation! Is the reasoning behind this to trust your front-side camera the most?

2 Likes

@davepowers thank you for the side-by-side comparison video between the two bots. Not sure if this is something you’d be willing to write up, but I would be most curious to hear about 6328’s experience this year working with a Dev and Comp bot.

  1. At what point did you shift from Dev to Comp manufacturing? Were you working off an internal Gantt chart of sorts, or did it just “feel” like the right time?

  2. Were most issues worked out on the Dev bot before the “final” design of the Comp bot? If new issues were found on the Comp bot, were they also fixed on the Dev bot?

  3. I can spot some differences in the video above between the two bots… so I would presume that you all didn’t get it all right on the Dev bot the first time.

  4. What was your team meeting/build schedule like to produce both robots? How much time was spent at home / online (Onshape/Slack) vs. meeting in person? I would imagine that to make the most of valuable in-shop time, a decent chunk of time was spent at home online.

Thanks for producing all this content! Sad we won’t be able to see y’all until DCMP!

7 Likes

Software Update: Compbot Bringup

We’re loading in to our first event later today, so what better time to discuss all of the software bringup the team has been working on!

Auto Paths

For Granite State, we are running with two primary autonomous routines. These paths have been designed with alliance partner compatibility in mind, focusing particularly on centerline notes.

4 Note Far Side

This auto shoots the preloaded note, then grabs and scores three notes from the center line. This video shows the auto running on the competition robot.

We particularly like this path as it enables maximum compatibility for alliance partners scoring spike notes and the close centerline notes. We also expect the far centerline notes to be less contested than those close to the speaker, making this a safe option in almost any match.

Code: Trajectories & Commands

5 Note Close Side

This auto shoots the preloaded note, grabs and shoots one spike note, then grabs and shoots three notes from the center line. Again, we like that this path allows our alliance partners to score spike and/or far side center line notes while we stay out of the way.

Code: Trajectories & Commands

The Details

As we previously discussed, all of these auto paths are generated using TrajoptLib. We’ve enjoyed experimenting with this new technique, and it has allowed us to more easily push the limits of this drivetrain.

One of our favorite maneuvers generated with the help of TrajoptLib is the intaking path for the second centerline note in our 4 note far side routine. We like to call this “the swivel.” The path allows the robot to grab the note with the intake pointed in the correct direction while losing as little speed as possible:

"The Swivel"

All of our routines are also designed to run with deterministic timing, meaning that the execution of every step of the auto takes a well-defined length of time. This is beneficial because it allows us to completely split the drive sequence and superstructure/roller sequence, using the timing of the trajectories and shots to keep everything in sync. We can then easily run complex sequencing like feeding a note to the shooter before the end of a trajectory, such that the drive is never delayed but shoots at the exact moment it is stationary. Here’s a slow-motion video of one of the shots where you can see this strategy in action:

Slow-Motion Shot
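
As a rough illustration of what splitting the sequences looks like (a sketch with hypothetical commands and measured durations, not our actual auto code):

```java
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Commands;

public class DeterministicAutoSketch {
  /**
   * Because the trajectory takes a well-defined length of time, the roller sequence can be
   * scheduled purely off the clock. driveTrajectory and feedNoteToShooter are hypothetical
   * commands supplied by the caller; the durations are measured ahead of time.
   */
  public static Command shootOnArrival(
      Command driveTrajectory,
      Command feedNoteToShooter,
      double trajectoryDurationSecs,
      double feedDurationSecs) {
    return Commands.parallel(
        driveTrajectory,
        Commands.sequence(
            // Start feeding just before the trajectory ends, so the note reaches the
            // flywheels at the exact moment the drive becomes stationary.
            Commands.waitSeconds(trajectoryDurationSecs - feedDurationSecs),
            feedNoteToShooter));
  }
}
```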

While measuring the length of auto routines, we always keep in mind that auto is not 15 seconds long:

Our original analysis was from 2022, but we observed the same grace period in 2023 (and utilized it to our advantage :innocent:). We always target 15.3 seconds as a maximum time for the auto routines, which we can essentially guarantee to be feasible when connected to the FMS.

Auto Align Controls

In addition to automatic aiming for speaker shots, there are several places where we are using full automatic alignment to assist the driver (we have some experience in this area :smile:).

First, the robot will automatically align when scoring in the amp. After the driver starts the alignment, the arm is automatically raised when the robot is within 5 feet and 120 degrees of the target. The robot is primarily utilizing AprilTags on the speaker and stage during this maneuver, as well as the amp AprilTag when approaching from mid/long range. See the last section of this post for more details on our camera placement. The code for the amp auto align can be found here.

The second use for auto align is lining up for the climb, which requires the robot to position itself between the chain and the base of the stage. This feature aligns the robot to the nearest chain, primarily utilizing AprilTags on the amp, speaker, and source.

The code for the auto alignment controller can be found here. The core algorithm is the same as our 2023 code. It drives in a straight line to the target pose using a trapezoid profile on distance, utilizing a feedforward to smoothly take over when started at a high velocity.
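
As a simplified sketch of that idea (placeholder gains and constraints, and omitting the heading controller and the initial-velocity handling described above):

```java
import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public class DriveToPoseSketch {
  // Placeholder gains and constraints (max velocity m/s, max acceleration m/s^2).
  private final ProfiledPIDController distanceController =
      new ProfiledPIDController(3.0, 0.0, 0.0, new TrapezoidProfile.Constraints(3.0, 2.0));

  /** Returns field-relative translational speeds that drive in a straight line toward the goal. */
  public ChassisSpeeds calculate(Pose2d currentPose, Pose2d goalPose) {
    Translation2d toGoal = goalPose.getTranslation().minus(currentPose.getTranslation());
    double distance = toGoal.getNorm();
    if (distance < 1e-6) {
      return new ChassisSpeeds();
    }

    // Run the trapezoid profile on the scalar distance to the goal. The controller's error is
    // (goal - measurement) = -distance, so negate to get a speed *toward* the goal.
    double speedTowardGoal = -distanceController.calculate(distance, 0.0);

    Translation2d direction = toGoal.times(1.0 / distance); // unit vector toward the goal
    return new ChassisSpeeds(
        speedTowardGoal * direction.getX(), speedTowardGoal * direction.getY(), 0.0);
  }
}
```

A real implementation would also reset the profile to the robot’s current distance and velocity when the alignment starts.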

Of course, we always want to have fallback modes when implementing complex features like automatic alignment. We have a number of physical override switches on our operator consoles, one of which disables all auto align features and allows the driver to line up manually.

Wheel Radius Characterization

Measuring the precise wheel radius of the robot is critical for producing accurate odometry measurements. The process that we’ve always used is to push the robot along the ground by hand, measure the distance using a tape measure, and compute the wheel radius based on the rotations of the wheels. Unfortunately, this is time-consuming and error-prone. Even though wheel radius can change dramatically while wearing in new tread, we usually didn’t recharacterize often enough simply because it was too difficult.

This year, we developed a new technique for wheel radius characterization on swerve that uses the gyro instead. The robot slowly spins in a circle and uses the gyro rotation plus the known wheel base dimensions to calculate the wheel radius:

wheel radius (meters) = gyro delta (radians) * drive base radius (meters) / wheel position delta (radians)

This process is completely automated and highly precise, so we’ve been using it much more regularly to keep track of wheel radius changes. The code for this characterization routine can be found here: WheelRadiusCharacterization
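
As a quick illustration, the math from the formula above boils down to a single expression (simplified relative to the linked command, which also has to command the spin and accumulate the measurements):

```java
// Simplified sketch of the gyro-based wheel radius calculation.
public class WheelRadiusMath {
  /**
   * @param gyroDeltaRad      accumulated gyro yaw since the start of the spin (radians)
   * @param driveBaseRadiusM  distance from the robot center to each module (meters)
   * @param avgWheelDeltaRad  average accumulated drive wheel rotation across the modules (radians)
   * @return estimated wheel radius in meters
   */
  public static double wheelRadius(
      double gyroDeltaRad, double driveBaseRadiusM, double avgWheelDeltaRad) {
    // Arc length traveled by each module: gyroDelta * driveBaseRadius.
    // The same arc expressed through the wheel: wheelDelta * wheelRadius.
    return (gyroDeltaRad * driveBaseRadiusM) / avgWheelDeltaRad;
  }
}
```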

Automatic Flywheel Spinup

During driver practice, we observed that we were consistently losing time waiting for the flywheels to spin up before shooting. The spin-up time is ~1s, which is usually longer than the time required to aim the arm and drive.

Our solution is to use the robot’s localization data to automatically pre-accelerate the flywheels to 75% of full speed under certain conditions. All of the following must be true to automatically pre-accelerate the flywheels:

  • A note is being held in the indexer
  • The robot is within 25 feet of the speaker
  • The robot is not actively climbing
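
In code this amounts to a simple condition check. Here’s a rough sketch (the helper inputs and speaker pose are illustrative, not our actual implementation):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.util.Units;

public class PrespinSketch {
  private static final double PRESPIN_RADIUS_METERS = Units.feetToMeters(25.0);

  /** Hypothetical check mirroring the conditions listed above. */
  public static boolean shouldPrespin(
      Pose2d robotPose, Pose2d speakerPose, boolean hasNote, boolean isClimbing) {
    double distanceToSpeaker =
        robotPose.getTranslation().getDistance(speakerPose.getTranslation());
    return hasNote && !isClimbing && distanceToSpeaker < PRESPIN_RADIUS_METERS;
  }
}
```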

Our goal is to minimize unnecessary drain on the battery when the flywheels are unneeded, while ensuring that they are ready when we begin aiming. This is the region where the flywheels are pre-accelerated when holding a note:

Here’s what the cycles look like after implementing this feature:

Cameras & AprilTags

As we discussed previously, we are continuing to run four cameras for AprilTag localization using Northstar. In 2023, the cameras were identical and pointed in four directions to enable consistent localization when scoring on both sides of the robot. For 2024, we are using a more varied arrangement of cameras.

Placement

On the front of the robot, we have three cameras. There are two symmetric 75° FOV cameras on the left and right to maximize our field of view, plus a single 45° FOV camera in the center. This “zoom” camera allows us to more accurately localize at long range while pointed at the goal, which improves the reliability of distance measurements for shooting.

Below you can see the three front cameras on the robot along with the fields of view. Notice that the camera frustums overlap such that the tags will always be in view even if the robot is against the subwoofer.

Another benefit of having a wide FOV on the front of the robot is that we have a solid view of multiple AprilTags from two sides of the stage, which makes automatic alignment for climbing more consistent:

In addition to the three cameras on the front, we have a single 90° FOV camera on the back. This enables more consistent localization while moving around the middle of the field. The FOV seems to be near the limit of what the fisheye camera model is able to accurately represent and undistort, but we’ve been very pleased with the accuracy and range thus far.

Filtering

We’ve made a couple of changes to the filtering for our pose estimation. First, we adjust the vision standard deviations based on which camera captured each frame. The zoom camera is trusted more than the side cameras, and the wide angle camera is trusted less. This approximately matches the noise we see from each camera based on the different FOVs.
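
Mechanically, this just means scaling the standard deviations passed to the pose estimator based on which camera produced the frame. A rough sketch, illustrated with WPILib’s SwerveDrivePoseEstimator and placeholder scaling factors (our actual estimator and numbers differ):

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

public class VisionStdDevSketch {
  // Placeholder baseline standard deviations (meters, meters, radians).
  private static final double BASE_XY_STD_DEV = 0.1;
  private static final double BASE_THETA_STD_DEV = 0.5;

  // Illustrative per-camera trust factors: lower = more trusted (e.g. zoom < sides < wide angle).
  private static final double[] CAMERA_STD_DEV_FACTORS = {0.5, 1.0, 1.0, 2.0};

  /** Adds a vision measurement with standard deviations scaled for the source camera. */
  public static void addMeasurement(
      SwerveDrivePoseEstimator estimator, Pose2d visionPose, double timestampSecs, int cameraIndex) {
    double factor = CAMERA_STD_DEV_FACTORS[cameraIndex];
    estimator.addVisionMeasurement(
        visionPose,
        timestampSecs,
        VecBuilder.fill(
            BASE_XY_STD_DEV * factor, BASE_XY_STD_DEV * factor, BASE_THETA_STD_DEV * factor));
  }
}
```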

The second change relates to disambiguation of single tags. You can read more about the problem of tag ambiguity here. In 2023, we rejected data from any tags where the reprojection error of one pose solution was more than 15% of the error of the other solution. This was a very aggressive approach to filtering which ensured that our pose estimates for cone scoring were accurate.

This year, there are many more tags that are isolated on the field and are likely to be viewed individually (such as those on the stage and amp). Also, the required tolerances for pose estimation are significantly more lax compared to the <1 inch tolerance for cone scoring in 2023. Therefore we’ve adopted a new disambiguation strategy:

  • Reject all solutions with reprojection error >40% of the alternative solution.
  • Choose the better solution based on which is closer to our current rotation estimate (the gyro data is very stable and makes for a reliable “ground truth” reference).
  • Don’t update rotation estimates for single tag estimates, to ensure inaccurate disambiguations don’t skew future results.

We’ve seen very positive results with this strategy, allowing us to quickly localize off of single-tag detections. None of this disambiguation is necessary when viewing multiple tags (like on the speaker or source), so those detections are even more reliable.
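
As a simplified sketch of one way this style of disambiguation can be implemented (not a line-for-line copy of our code; poseA/poseB and their reprojection errors come from the single-tag solver):

```java
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation2d;
import java.util.Optional;

public class TagDisambiguationSketch {
  private static final double AMBIGUITY_THRESHOLD = 0.4; // the 40% figure mentioned above

  /** Chooses between the two pose solutions from a single-tag solve. */
  public static Optional<Pose3d> disambiguate(
      Pose3d poseA, double errorA, Pose3d poseB, double errorB, Rotation2d currentRotation) {
    // Accept a solution outright if its reprojection error is far better than the alternative's.
    if (errorA < errorB * AMBIGUITY_THRESHOLD) {
      return Optional.of(poseA);
    }
    if (errorB < errorA * AMBIGUITY_THRESHOLD) {
      return Optional.of(poseB);
    }

    // Otherwise fall back to whichever solution agrees more closely with the gyro-based rotation.
    double diffA = Math.abs(poseA.toPose2d().getRotation().minus(currentRotation).getRadians());
    double diffB = Math.abs(poseB.toPose2d().getRotation().minus(currentRotation).getRadians());
    return Optional.of(diffA <= diffB ? poseA : poseB);
  }
}
```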

Final Thoughts

It’s been a long couple of weeks, but we are very excited to see this robot in action at Granite State. Make sure to check out our teaser video too!

-Jonah

GIF
(The robot on a field that includes all of the AprilTags)

54 Likes

Are you worried about the 4 Note Far Side getting in the way of teams doing the close 3 that start by the podium? I think you’ll see some teams start there to free up the easy center spot for the 3rd robot and/or keep the path between the 2nd and 3rd note clear in case they want to go to the center line.

1 Like

What happens if you hit another robot in auto? :sunglasses:

It’s obvious how hard 6328 works and I’m happy we’ve been fortunate enough to see it up close.

6 Likes

How many coprocessors are you using to process all of these camera feeds? As of now, we’re running a single 70° FOV OV9281 with an Orange Pi 5 @ 720p and it’s just able to hit 30 FPS on PhotonVision, so I’m curious to see how Northstar compares performance-wise.

1 Like

We’re running with two Orange Pi 5s (two cameras per Pi). All of the pipelines are consistently running at 20-30 FPS with 1600x1200 resolution.

3 Likes