FRC 6328 Mechanical Advantage 2022 Build Thread

Day 5-6: Launcher Prototypes, Scouting App, & Drivetrain CAD

Scouting App

The scouting team started work on the scouting app for this year. One of the ideas they came up with was recording the exact location a team shot from by tapping on the field. This requires having an accurate field modeled in the app UI. The scouting team will do a much larger deep dive into this soon.

We figured out a way to streamline the process of placing each field element in CanvasManager [our name for the scouting app UI framework]. First, we uploaded a drawing of the field into GIMP and resized it to the resolution of the app, so that hovering the mouse over the image shows the exact x and y coordinates we need. After accounting for some offsets and other calculations, we transfer these coordinates into GeoGebra (a website similar to Desmos). We can then rotate the shape we created in GeoGebra to find the other necessary points. Although there was some initial grunt work, this approach should save time overall, giving us more room to work on debugging the app.
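As a rough illustration of the rotation step: since the field has rotational symmetry, rotating a digitized point 180° about the field center yields the matching point on the opposite side. The sketch below uses made-up canvas dimensions and points, not our actual app values.

```java
// Illustrative sketch only: the canvas size and points are hypothetical.
// Rotating a point 180 degrees about the field center finds the mirrored
// field element on a rotationally symmetric field.
class FieldPointMirror {
    // Rotate (x, y) by thetaRadians about the pivot (cx, cy).
    static double[] rotateAbout(double x, double y, double cx, double cy, double thetaRadians) {
        double cos = Math.cos(thetaRadians), sin = Math.sin(thetaRadians);
        double dx = x - cx, dy = y - cy;
        return new double[] {cx + dx * cos - dy * sin, cy + dx * sin + dy * cos};
    }

    public static void main(String[] args) {
        // Hypothetical app canvas of 648x324 units, so center is (324, 162).
        double[] mirrored = rotateAbout(100.0, 50.0, 324.0, 162.0, Math.PI);
        System.out.printf("Mirrored point: (%.1f, %.1f)%n", mirrored[0], mirrored[1]);
    }
}
```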



The software team continued to bring up the baseline code for controlling the drivetrain and running motion profiles, in addition to working to finish up the prototype motor test board. There will be a much larger software update posted soon, so this section is kept short for now.


Linear Catapult

The linear catapult was modified to hold two balls side by side. With this added weight the prototype struggled with range in its current form. I unfortunately don’t have any pictures/videos of this right now, so stay tuned for those over the weekend. For the next catapult revision we think we will be pivoting to a more standard rotating catapult design, but staying pneumatic.

Hooded Flywheel

This is a traditional hooded flywheel prototype with various built in adjustments. The CAD team worked on designing it and then we cut it out on our router.

Attempt 1

Maybe a little fast… turning down the speed for attempt 2

Second try!

This was at the very end of the meeting, so we only shot these two balls. Much more testing to come…

Prototype Specs

  • Adjustable compression by changing hood holes and thickness (currently 0.8" of compression)
  • 6" flywheel (currently 2x colsons)
  • 2x NEOs either 1:1 or 1.33:1 (currently 1 NEO 1:1)
  • Additional top flywheel attachment powered by another NEO either 1:1 or 1.33:1, similar to the GreyT Shooter V2 (not shown or tested yet)
  • 2" Kicker wheel powered by an UltraPlanetary (not used yet)
  • Mounting holes for a “ball tower” to store and feed the balls

Next Steps

  • Prototype and integrate ball tunnel into the shooter to feed balls
  • Test how rapidly two balls can be fed
  • Test the range where we can still make shots with a fixed hood angle
  • Experiment with
    • Wheel type
    • Compression
    • Hood material, potentially foam
    • Release angle
    • Upper flywheel
    • Flywheel speed(s)


The CAD team also worked on our drivetrain CAD, which is shown below. There are still some final details to finish, but it is almost there. Currently it is 29" wide by 30" long and has an 11:66 gear ratio.

We’re off today, so we should have a week 1 summary post up on Sunday with our progress so far and from tomorrow’s meeting.



gif game is 100%


Software Update #1: Setting the Foundation

Our robot code work this week was focused on setting up a solid foundation for our 2022 code base. Below are a few key takeaways; while none of this is particularly revolutionary, we hope that it might prove useful.

AdvantageKit, Logging, and Multi-Robot Support

This is our first full project to make use of AdvantageKit and our logging framework. This means that hardware interaction in each subsystem is separated into an “IO layer” (see more details here) such that data can be replayed in a simulator. An extra bonus of this system is that it’s easy to swap out the hardware implementation. Currently, the drive subsystem supports SparkMAXs, TalonSRXs, or the WPILib drive sim depending on the current robot. We can test our code on older robots just by changing a constant, which has proved useful as we develop code long before any competition robot takes shape. Each subsystem and command defines constants for each robot, meaning that we don’t rely on all of them to behave identically. We’ll continue to share our progress with logging as the robot project grows beyond just a single subsystem.
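As a rough illustration of the IO-layer idea described above (the class and method names here are invented for the sketch, not our actual code), the subsystem talks only to an interface, and each robot or the simulator supplies its own implementation:

```java
// Minimal sketch of the "IO layer" pattern: hardware access lives behind an
// interface so implementations can be swapped per robot and inputs can be
// logged/replayed. Names are illustrative, not the real classes.
class IoLayerSketch {
    interface DriveIO {
        void setVoltage(double leftVolts, double rightVolts);
        double getLeftPositionMeters();
    }

    // A real implementation would wrap SparkMAXs or TalonSRXs; this trivial
    // simulation stands in so the sketch is runnable.
    static class DriveIOSim implements DriveIO {
        private double leftPosition = 0.0;
        public void setVoltage(double leftVolts, double rightVolts) {
            leftPosition += leftVolts * 0.001; // fake physics for illustration
        }
        public double getLeftPositionMeters() { return leftPosition; }
    }

    static class Drive {
        private final DriveIO io;
        Drive(DriveIO io) { this.io = io; } // implementation chosen per robot
        void drive(double volts) { io.setVoltage(volts, volts); }
        double leftMeters() { return io.getLeftPositionMeters(); }
    }

    public static void main(String[] args) {
        Drive drive = new Drive(new DriveIOSim());
        drive.drive(6.0);
        System.out.println(drive.leftMeters());
    }
}
```

Swapping `DriveIOSim` for a hardware implementation is the only change needed to run the same subsystem code on a different robot.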

Operator Interface

During the offseason, we built a new operator board (see below) focused on using Xbox controllers. It contains wells for the driver and operator controllers, plus a series of physical override switches. The panel under the laptop is magnetically attached, and lifts up for access to the cables. This makes it much easier to carry devices to the field, and means that we don’t need to rely on dashboard buttons for critical overrides.

Last year, our scheme for finding and using joysticks was extremely flexible (see this post). However, we felt that the capability just wasn’t worth the complexity. Rather than supporting 7+ control schemes, we’ve reduced to just two. During competitions, the driver and operator use separate Xbox controllers. For testing or demos, all of the controls can also be mapped to a single Xbox controller. Here’s how the code is set up internally:

  • All of the driver and operator controls are defined in this interface (it’s very minimal right now). Based on the number of Xbox controllers, either the single controller or dual controller implementations are instantiated.

  • This class handles the override switches on the operator board. It reads the value of each switch when the joystick is attached, but will default to every switch being off if we’re running without the board.

  • The OISelector class scans the connected joysticks to determine which versions of the OI classes to instantiate. When the robot is disabled, the code continuously scans for joystick changes and recreates the OI objects when necessary.

This command is responsible for converting joystick positions to drive speeds. The joystick mode is selectable on the dashboard, and we currently support 3 modes (again, simplifying from 10 modes last year).

  • Tank: Left stick controls left speed, right stick controls right speed. While not a very intuitive system, this is still useful when testing the drive train.
  • Split Arcade: Left stick controls forward speed, right stick controls the speed differential. This is a reliable and easy to use option for testing, demos, etc.
  • Curvature: This isn’t quite a traditional curvature drive, but a mix between full curvature and split arcade. It’s a scheme we used during the at-home challenges last year to overcome the key limitation of curvature drive: not functioning while stationary. While a “quick turn” button is an effective solution, we felt that it was too unintuitive. Instead, we slowly transition between split arcade and curvature drive up to 15% speed (0% is split arcade, 15% is curvature, and 7.5% averages the outputs from both modes). This provides the benefits of curvature drive at high speed while maintaining the ease of use from split arcade.
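The blending in that last mode can be sketched in a few lines. This is a simplified illustration of the mixing logic as described above, not our actual drive command:

```java
// Simplified sketch of the split-arcade/curvature blend: pure split arcade
// at 0 speed (so turning in place works), pure curvature at >= 15% speed,
// and a linear mix in between. Not the actual command code.
class HybridCurvatureDrive {
    static double[] wheelSpeeds(double throttle, double turn) {
        // Split arcade: turn directly sets the speed differential.
        double splitLeft = throttle + turn, splitRight = throttle - turn;
        // Curvature: the differential scales with speed, so the stick
        // commands a turning radius instead of a turning rate.
        double curvLeft = throttle + Math.abs(throttle) * turn;
        double curvRight = throttle - Math.abs(throttle) * turn;
        // Blend factor: 0 at standstill (split arcade), 1 at >= 15% speed.
        double t = Math.min(Math.abs(throttle) / 0.15, 1.0);
        double left = splitLeft * (1 - t) + curvLeft * t;
        double right = splitRight * (1 - t) + curvRight * t;
        // Normalize so neither side exceeds full output.
        double max = Math.max(1.0, Math.max(Math.abs(left), Math.abs(right)));
        return new double[] {left / max, right / max};
    }

    public static void main(String[] args) {
        // At zero throttle the robot can still spin, unlike pure curvature.
        System.out.println(java.util.Arrays.toString(wheelSpeeds(0.0, 0.5)));
    }
}
```

At 7.5% throttle the blend factor is 0.5, which matches the "averages the outputs from both modes" point above.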

Odometry, Motion Profiling, and Field Constants

Our current odometry system is fairly basic right now, but will evolve as we continue to explore vision options. Odometry is handled by the drive subsystem, with getters and setters for pose. One key change from last year is that everything uses meters instead of inches. This has already made our lives much easier when it comes to setting up motion profiling.

As with other areas, we took this as an opportunity to simplify older code. Our ~800 line motion profiling command from last year is now much simpler and easier to use (see here). We focused on the most essential features while removing the less useful ones (like circular paths, which were only useful for the at-home challenges). This new command is also much more maintainable when we need to make fixes and improvements.

Before long, we’ll need to start putting together profiles for auto routines. Unfortunately, this year’s field layout is a bit of a mess when it comes to defining the positions of game elements (did the edges of the tarmac really need to be nominally tilted by 1.5°?). To save many headaches later in the season, we wrote this class with lots of useful constants. It defines four “reference points” (see the diagram below) along the tarmac. The cargo positions are defined using translations from those references. The same principle of starting at a reference and translating could be used to define robot starting positions or waypoints on a profile to collect cargo. The class also includes constants for the hub vision target.
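In sketch form, the reference-point idea looks like this (the coordinates and offsets below are placeholders, not the real 2022 field dimensions):

```java
// Sketch of defining field constants as translations from measured
// reference points, so one anchor point generates several positions.
// All numeric values here are placeholders, not real field dimensions.
class FieldConstantsSketch {
    static class Translation2d {
        final double x, y;
        Translation2d(double x, double y) { this.x = x; this.y = y; }
        Translation2d plus(Translation2d other) {
            return new Translation2d(x + other.x, y + other.y);
        }
    }

    // Hypothetical reference point along a tarmac edge (meters, field frame).
    static final Translation2d referenceA = new Translation2d(7.0, 3.0);

    // A cargo position defined relative to the reference point rather than
    // the field origin (offsets are placeholders).
    static final Translation2d cargoA = referenceA.plus(new Translation2d(0.5, -0.25));

    public static void main(String[] args) {
        System.out.printf("Cargo A: (%.2f, %.2f)%n", cargoA.x, cargoA.y);
    }
}
```

If a reference point turns out to be measured slightly wrong, correcting it automatically fixes every constant derived from it.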


SysId is an incredibly useful tool, but the process of setting up a new project is quite tedious (selecting motors, encoders, conversion factors, etc.) Instead, we wrote a command that communicates with SysId but makes use of the code we’ve already set up to control each subsystem. See this example of using the command. Each subsystem just needs a method to set the voltage output (with no other processing except voltage compensation), and a method to return encoder data like position and velocity.


Neither of these utilities is new this year, but they’re always worth mentioning again:

  • We use a custom Shuffleboard plugin called NetworkAlerts for displaying persistent alerts. The robot side class can be used to alert the drivers of potential problems.

  • For constants (like PID gains) that require tuning, we use our TunableNumber class. During normal operation, each acts as a constant. When the robot is in “tuning mode” (enabled via a global constant), all of the TunableNumbers are published to NT and can be changed on the fly.
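A minimal sketch of the TunableNumber pattern is below. This is not our actual class (which publishes to NetworkTables); a plain field stands in for the dashboard value so the sketch is self-contained:

```java
// Sketch of a TunableNumber-style helper: a fixed default in normal
// operation, a live-updatable value in tuning mode. The real class reads
// the live value from NetworkTables; a plain field stands in here.
class TunableNumberSketch {
    static boolean tuningMode = false; // would be a global constant

    private final double defaultValue;
    private double dashboardValue; // would come from NetworkTables

    TunableNumberSketch(double defaultValue) {
        this.defaultValue = defaultValue;
        this.dashboardValue = defaultValue;
    }

    void setDashboardValue(double value) { dashboardValue = value; }

    // Acts as a constant unless tuning mode is enabled.
    double get() { return tuningMode ? dashboardValue : defaultValue; }

    public static void main(String[] args) {
        TunableNumberSketch kP = new TunableNumberSketch(0.05);
        kP.setDashboardValue(0.08);
        System.out.println(kP.get()); // tuning mode off: still the default
        tuningMode = true;
        System.out.println(kP.get()); // tuning mode on: the live value
    }
}
```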

Our next major software project is to explore vision options with the hub target (and maybe cargo too). We’ll be sure to post an update with our findings. In the meantime, we’re happy to answer any questions.



Scouting and Strategy Week 1 Update

This past week, the Scouting and Strategy Team worked to further analyze the game as well as begin work on our scouting app for this year. Below you can find a summary of what we have accomplished, what we have found useful, and what we haven’t.


Throughout the week, we looked at ways to gain further insight into the game as well as possible adjustments to our priority list. Early on, we looked at the Monte Carlo simulation made by Team 4926, which can be found here. When using it, we found that we were unable to extrapolate meaningful information from it when the distribution of performance was flat, as opposed to on a curve. We are looking to modify this in the near future, both so we can project how the game will play out with it, and also so we can generate mock data for our analysis systems with it.
We have also been looking at ways to predict the amount of cargo on the field as a match progresses with robots of certain capabilities. We attempted this with a spreadsheet algorithm in Excel, but found it couldn’t take in enough data to give useful results. We think this sort of analysis has potential, and we are looking at Python solutions.

We have also discussed and listed below our adjustments to our initial priority list. If you click on any of the changes, you can see our reasoning.

Being able to go under the low bar: Should Have -> Must Have
  • As more and more Open Alliance and Ri3D information comes out regarding the cramped nature of the Hangar, we believe it will be very important to be able to approach the hangar rungs from the middle of the field as opposed to having 2-3 robots enter from the side, turn 90 degrees and awkwardly slide into place.

  • We expect cargo to make its way over into the hangars and it will be greatly beneficial to be able to enter from either side of the hangar.

  • We expect defense to be strong against robots who are taller than the low rung who enter a hangar and get trapped in an artificial chokepoint with a defender between the hangar trusses.

  • It’s only a few inches of height sacrifice.

Active Cargo Settling: Should Have -> Could Have
  • From our own early prototyping as well as other Open Alliance teams, we have found intakes that settle balls that are bouncing more than a couple inches off the ground to be challenging to say the least. We feel currently it may be better to focus on having an excellent ball-on-ground intake.

  • We don’t expect bouncing cargo to be as much of a plaguing issue as we did on kickoff weekend. Pre-Champs, we expect to be able to thrive without this capability. We may assess pursuing this later in the season depending on how early events look.

Score cargo in lower hub from tarmac: Rephrased to “Score cargo from one robot’s length away from fender”
  • This was more or less the intended understanding from kickoff, just a clarification.

  • Goal is to be able to “score over defender,” similar to 254 in 2014.

  • Further back on the tarmac seems unnecessary and unreliable/unrealistic.

Score cargo from one robot’s length away from fender: Must Have -> Should Have

Having more than one position of scoring low may hinder other aspects of the design and we don’t believe it is absolutely necessary to have a good low scoring robot.

High Bar: Could Have -> Should Have
  • From early prototypes from other Open Alliance teams, we expect this challenge to be slightly easier than some of us originally foresaw.

  • We expect a large portion of teams to have a mid climb, and a high climb plus a mid climb earns the bonus RP. In order to retain control over our destiny as much as possible in the rankings, we would like to not require 3 robots to climb for the rank point.

Umbrella: Won’t Have -> Could Have

Umbrella refers to a mechanism that can selectively prevent cargo from entering an open hopper. If the goal is to allow bouncing Cargo to fall into the robot, we would potentially want to be able to prevent undesired Cargo from doing the same.

Operation Double Trouble: Unlisted -> Could Have


Wearing a Mask: Must Have -> Must Have


5 Cargo Auto: Unlisted -> Unlisted


These have likely settled into place for the most part as we are quickly approaching the main archetype discussions of the robot. We will have a post prior to our Week 1 event to share our strategies going into it.


We have been working hard to get our scouting app off the ground quickly as we share a large amount of student resources with the programming subteam. To start off, we compiled a list of data points that we want to be able to collect both in match data as well as pit scouting data. This can be found below.

Match Data
  • Alliance color
  • Start position (tap for location)
  • Taxi
  • Ratings (intake, driver, defense, avoid defense)
  • Shooting position (tap for location)
  • Upper success/failure
  • Lower success/failure
  • Climb level (attempted, success)
  • Climb time (for each level, collected in background)
  • Penalties (#)
Pit Data
  • Team #
  • Picture
  • Climb level
  • Height
  • Dimensions
  • Multiple drive teams?
  • Shoot high/low/both?
  • Auto mode
  • Start preference
  • Shoot from fender/tarmac/launchpad?
  • Drivetrain type
  • Holding capacity

One of the more unique aspects of our data collection is that we are recording the “precise” location each robot scored from, as well as its accuracy from that position. Scouts do this by tapping the field image on their tablet at the spot where they believe the robot scored from. A popup then appears near where they pressed, prompting them to select how many cargo were scored and missed at each hub height. In the background, we are collecting timestamp data for each scoring action, as well as climb timings. We are unsure how we will use this data or how accurate it will be, but we are operating on the principle of collecting all the data we have access to that doesn’t impede the collection of other data, then selecting what we want to use later down the line.

We have spent more time than any of us would like figuring out the most effective ways to display the field in the app.

Getting the dimensions of field elements and scaling them into our app was annoying.

We want this degree of precision so when we collect scoring location x,y data, we can overlay it onto an image of a field and have it be accurate-ish.

Pit scouting portion of the app in the browser and on the Kindle.

App Field Status as of Saturday. You can probably see the slight imperfections in the Hub deflectors and tarmac lines. Trust me, it bothers us as much as it bothers you.

We have started looking at some ways we can use the substantial amount of data we are collecting. My three personal favorites are below:

Density/Heat Map of scoring locations/accuracy by team/event/driver station

Since we have access to every team’s scoring location as well as their accuracy from that position and their driver station, we can model this to our heart’s content. Below is an example of roughly how we imagine this could look.

Accuracy of event or team over distance from the goal

This is certainly less useful for low goal scorers, but as more robots try to score from range, we can model their accuracy over distance, learn where they are most effective, and take appropriate in-match actions.

New York Times Spiral Graph

If you want to take a look at what we have going on code-wise, our repo can be found here. The 2022 specific code can be found in the Ayush branch, found here.

As always, any questions, comments, criticism, or suggestions are highly appreciated.




Software Update #2: “One (vision) ring to rule them all”

Last year, we started our vision journey by auto-aiming just with the horizontal angle from our Limelight. However, we quickly realized the utility of calculating the robot’s position based on a vision target instead. By integrating that data with regular wheel odometry, we could auto-aim before the target was in sight, calculate the distance for the shot, and ensure our auto routines ended up at the correct locations (regardless of where the robot was placed on the field).

Our main objective over the past week was to create a similar system for the 2022 vision target around the hub. This meant both calculating the position of the robot relative to the target and smoothly integrating that position information with regular odometry.

While both the Limelight and PhotonVision now support target grouping, we wanted to fully utilize the data available to us by tracking each piece of tape around the ring individually (more on the benefits of that later). Currently, only PhotonVision supports tracking multiple targets simultaneously; for our testing, we installed PhotonVision on the Limelight for our 2020 robot.

The Pipeline

This video shows our full vision/odometry pipeline in action. Each step is explained in detail below.

  1. PhotonVision runs the HSV thresholding and finds contours for each piece of tape. Using the same camera mount as last year, we found that the target remains visible from ~4ft in front of the ring to the back of the field (we’ll adjust the exact mount for the competition robot of course). In most of that range, 4 pieces of tape are visible. We’ve never seen >5 tracked consistently, and less optimal spots will produce just 2-3. Currently, PhotonVision is running at 960x720; the pieces of tape can be very small on the edges, so every pixel helps. The robot code reads the corners of each contour, which are seen on the left of the video.
  2. Using the coordinates in the image along with the camera angle and target heights, the code calculates a top-down translation from the camera to each corner. This requires separating the top and bottom corners and calculating each set with the appropriate heights. These translations are plotted in the middle section of the video.
  3. Based on the known radius of the vision target, the code fits a circle to the calculated points. This is where we see the key benefit of plotting 12+ points rather than just 2 (as we did last year). When the robot is stationary, the position of the circle stays within a range of just 0.2-0.5 inches frame-to-frame. Last year, we could easily see a range of >3 inches. While the translations to each individual corner are still noisy, the circle fit is able to average all of that out and stay in almost exactly the same location. It’s also able to continue solving even when just two pieces of tape are visible on the side of the frame; so long as the corners fall somewhere along the circumference of the vision ring, the circle fit will be reasonably accurate.
  4. Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose. This “pure vision” pose is visible as a translucent robot to the right of the video. Based on measurements of the real robot, this pose is usually within ~2 inches of the correct position. For our purposes, this is more than enough precision.
  5. Finally, the vision pose needs to be combined with regular wheel odometry. We started by utilizing the DifferentialDrivePoseEstimator class, which makes use of a Kalman filter. However, we found that adding a vision measurement usually took ~10ms, which was impractical during a 20ms loop cycle. Instead, we put together a simpler system; each frame, the current pose and vision pose are combined with a weighted average (~4% vision). This means that after one second of vision data, the pose is composed of 85% vision. It also uses the current angular velocity to adjust this gain — the data tends to be less reliable when the robot is moving. This system smoothly brings the current pose closer to the vision pose, making it practical for use with motion profiling. The final combined pose is shown as the solid robot to the right of the video.

Where applicable, these steps also handle latency compensation. New frames are detected using an NT listener, then timestamped (using the arrival time, latency provided by PhotonVision, and a constant offset to account for network delay). This timestamp is used to retrieve the correct gyro angle and run the averaging filter. Note that the visualizations shown here don’t take this latency into account, so the vision data appears to lag behind the final pose.

More Visualizations

This video shows the real robot location next to the calculated vision pose. The pose shown on the laptop is based on the camera and the gyro, but ignores the wheel encoders. This video was recorded with an older version of the vision algorithm, so the final result is extra noisy.

This was a test of using vision data during a motion profile - it stops quickly to intentionally offset the odometry. The first run was performed with the vision LEDs lit, meaning the robot can correct its pose and return to the starting location. The second run was performed with the LEDs off, meaning the robot couldn’t correct its pose and so it returned to the incorrect location.

This graph shows the x and y positions from pure vision along with the final integrated pose.

  • Vision X = purple
  • Vision Y = blue
  • Final X = yellow
  • Final Y = orange

The final pose moves smoothly but stays in sync with the vision pose. The averaging algorithm is also able to reject noisy data when the robot moves quickly (e.g. 26-28s).
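As a quick sanity check of the weighted-average numbers from step 5: blending 4% vision per 20 ms frame compounds to most of the pose within a second.

```java
// Sanity check of the weighted-average fusion: a 4% gain applied every
// 20 ms frame means the vision pose dominates after about one second.
class VisionFusionSketch {
    // One frame of the filter: nudge the current pose toward the vision pose.
    static double fuse(double currentPose, double visionPose, double gain) {
        return currentPose * (1.0 - gain) + visionPose * gain;
    }

    public static void main(String[] args) {
        double pose = 0.0;       // wheel odometry says 0 m
        double visionPose = 1.0; // vision says 1 m
        for (int frame = 0; frame < 50; frame++) { // 50 frames = 1 s at 20 ms
            pose = fuse(pose, visionPose, 0.04);
        }
        // 1 - 0.96^50 is roughly 0.87, in line with the ~85% figure above.
        System.out.printf("Vision fraction after 1 s: %.2f%n", pose);
    }
}
```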


This project has been a perfect opportunity to use our new logging framework.

For example, at one point during the week, I found that the code was only using one corner of each piece of tape and solving it four times (twice using the height of the bottom of the tape and twice using the top). I had grabbed a log file to record a video while running code without uncommitted changes. On a whim, I switched to the original commit and replayed the code in a simulator with some breakpoints during the vision calculation. I noticed that the calculated translations looked odd, and was able to track down the issue. After fixing it locally, I could replay the same log and see that the new odometry positions were shifted by a few inches throughout the test.

Logging was also a useful resource when putting together the pipeline visualization in this post. We had a log file from a previous test, but it was a) based on an older version of the algorithm and b) didn’t log all of the necessary data (like the individual corner translations). However, we could replay the log with up-to-date code and record all of the extra data we needed. Advantage Scope then converted the data to a CSV file that Python could read and use to generate the video.

Code Links

  • NT listener to read PhotonVision data (here)
  • Calculation of camera-to-target translation (here)
  • Circle fitting class (here)
  • Method to combine vision and wheel odometry (here)

As always, we’re happy to answer any questions.


where’s our gif


Jonah, 6328 has some terrifyingly impressive programming, and I love reading about this stuff. I’ve been doing programming in FRC for 10 years now and you have way better ideas (and execution) than I do. (And definitely better ideas than @came20 )

It was great meeting you at Chezy. Keep it up, dude.


My sincerest apologies.



Thank you to @Tyler_Olds and FUN for hosting us on the Open Alliance show yesterday. The students did a great job talking about where we are in our build season and what goals are next.


Since I see you have been prototyping catapults, I thought I would mention the white paper from Team 230 back in 2016 about electric catapults. We used the information provided to create a prototype with a ~22" arm driven by a 20:1 NEO that launches a ball nicely and makes for a relatively light mechanism. Also, shout out to the Gaelhawks for sharing the excellent work they did.

Link to white paper:

edit: I had to add a gif of the prototype to fit in here


Does the arm hit the stop on the upright or do you use the motor to stop at a certain encoder tick?

This uses the soft limit feature and brake mode to stop at a specific encoder position. Repeatability was reasonably good but would likely be improved by smarter software setup or physical stops.


How are you getting Pose from the target? I understand how you can get distance, but aren’t there multiple places on the field where the target looks identical?


Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose

The gyro is necessary here to get a field pose from vision. Think of polar coordinates with the gyro as theta and the calculated distance as r, with the origin as the center of the hub in field coordinates.
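In sketch form, that polar-coordinate picture looks like this (the hub position and measurements below are placeholders):

```java
// Sketch of recovering a field pose from a single symmetric target: with
// the hub center as the origin, the gyro supplies theta and the vision
// distance supplies r. Hub coordinates here are placeholders.
class PoseFromTargetSketch {
    // angleToHubRadians: field-relative direction from robot to hub,
    // derived from the gyro heading plus the camera's angle to the target.
    static double[] robotPose(double hubX, double hubY, double distanceToHub, double angleToHubRadians) {
        return new double[] {
            hubX - distanceToHub * Math.cos(angleToHubRadians),
            hubY - distanceToHub * Math.sin(angleToHubRadians)
        };
    }

    public static void main(String[] args) {
        // Hub at (8, 4); robot 3 m away, facing the hub along +x.
        double[] pose = robotPose(8.0, 4.0, 3.0, 0.0);
        System.out.printf("Robot at (%.1f, %.1f)%n", pose[0], pose[1]);
    }
}
```

This also makes the limitation in the question concrete: any gyro drift rotates the whole polar frame, shifting the computed pose around the hub.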

Ahh, I don’t trust the gyro angle enough. I was hoping to use this to account for the drift in the gyro.

Yea, that’s the problem this year with the vision target. Your “global” vision measurements are affected by your local measurement drift. I’m hoping the new Pigeon 2/NavX only drifts 1 or 2 degrees during a match.


Week 3 Business Team Update #1

Business Plans

Our FRC and FLL business plans were last updated for the 2020 season and were 49 and 30 pages long respectively. (Sincere apologies to the event judges.) I don’t know if the new 3-page limit was put in place for 2021 (we never got around to updating in the chaos of last year) or if it’s new this year, but that’s quite a change for us! The students working on this transition decided to use the headings given by FIRST as the outline and adapt the existing content into the new page limit, which worked out great. They are finalizing the last of the appendices this week and then we’ll post them and ask for feedback (we know exactly what these business plans are trying to say, but do you? Let’s find out…)

Of course, the FLL business plan isn’t actually necessary but we do run FLL as a self-sustaining program so we also reworked that one into the same 3-page format. Regardless of anything else, it’s a good exercise for examining our goals, growing edges, and financial plan for reinvigorating the FLL program after it languished a bit in 2020 thanks to COVID.

Sponsor Outreach Campaign

In December, we held a quick student-led training session for new FRC students on how to write an email to potential sponsors to make the ask. Each participant was then assigned a short list of potential sponsors and tasked with reaching out to them. As a result of this campaign, we have four new sponsors so far with positive replies from several others that will hopefully turn into actual sponsors. Though they only have about another week to make it onto the shirts…

Chairman’s Submission Progress

Our Chairman’s team is facing a new challenge this season. We have three very experienced presenters who have been working on Chairman’s submissions for years. But all three are now seniors with exciting futures that don’t include continuing to work on Chairman’s submissions for 6328. :rofl:

So earlier this fall, our Business Student Lead put together a Shadow Chairman’s team, made up of students of various ages who are attending all of our awards meetings and learning the process with the intention of taking over next year. Including students of varying ages was a deliberate decision to ensure that the entire Chairman’s team won’t graduate at the same time in the future, making the year-to-year transition between students and the sustainability of the Chairman’s team a priority.

As for our Chairman’s submission, the Executive Summary questions are about 95% complete (need to look up some specific data points/numbers, and rework wording for one question). The essay is outlined and students are working this week to see what content from previous years can be used as a foundation before adding in new content and adapting to reflect the most recent year. We also reworked our Chairman’s Documentation Form for this year to hopefully make it easier to update and easier to read. Next step is to fill out the presentation outline, but first we’re waiting to hear (hopefully, as we’ve been led to believe, at the town hall meetings next week) whether judging for New England will be in-person or virtual since that affects how the CA presentation gets put together.

In accordance with the Open Alliance concept, we plan on sharing our CA submission once the drafts are complete. We’re also working with a friend-team to pull together a wider pilot program for CA presentation practice, so stay tuned on that front!


This is awesome! I have a few questions (sorry for the amount, trying to understand the code before I try and implement it)

  1. Why do you use a NetworkTables listener for the Photon data as opposed to just getting the latest result in the periodic method?
  2. Why do you make the timestamp and corner arrays volatile?
  3. What is the target grace timer/“idleOn” for the leds doing?
  4. When calculating the translation from the camera to each point, why do you not use PhotonVision’s utility method estimateCameraToTargetTranslation? Is that because you only have the pixel values of the corners?
  5. Why do you only sort the top two corners specifically and not the bottom corners?
  6. In the sortCorners method, what do minPosRads and minNegRads do?
  7. Why do you use VisionPoint and not just Translation2d for the corner positions?
  8. What are the FOV constants vpw and vph?
  9. On line 229 of the vision subsystem, what’s happening to convert to robot frame?
  10. What does the precision argument do in the circle fitting method?

This is meant to help with latency compensation. Since NT listeners run in a separate thread, we can record the timestamp when the data arrived even during the middle of a cycle. That timestamp (combined with the pipeline latency provided by PhotonVision) is sent as the captureTimestamp. In the end, we also ended up adding a static offset in the Vision subsystem to estimate network latency, so the extra precision of the NT listener may be overkill.

These variables are used to transfer data from the NT listener thread to the main thread. They’re marked volatile as a best practice for variables accessed from multiple threads, since that guarantees the main thread sees the latest writes.
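A minimal sketch of that handoff pattern (field and method names are illustrative):

```java
public class VisionHandoff {
    // Written on the NT listener thread, read on the main robot loop.
    // volatile guarantees the reader sees the most recent write; note
    // that the two fields are still published independently.
    private volatile double captureTimestamp = 0.0;
    private volatile double[] cornerX = new double[0];

    // Called from the NT listener thread
    public void onNtUpdate(double timestamp, double[] corners) {
        cornerX = corners;
        captureTimestamp = timestamp;
    }

    // Called from the main robot loop
    public double getTimestamp() { return captureTimestamp; }
    public double[] getCorners() { return cornerX; }
}
```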

These are for controlling the idle behavior of the LEDs. To keep the odometry from drifting, we want to track the target whenever it’s visible (even if the driver isn’t actively targeting). We also want to avoid blinding everyone around the field unnecessarily. This is the compromise we came up with; as long as the target is visible, the LEDs will stay lit. They turn off once the target is lost for 0.5s (this ensures that there wasn’t just a single odd frame). When no target is visible, they blink for 0.5s every 3s in case the target comes back into view. The LEDs are also forced on a) in autonomous, where accurate odometry is extra critical and b) when a command like auto-aim calls setForceLeds.
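The decision logic can be sketched as a simple function of time (timings taken from the description above; the names and structure are illustrative):

```java
// Sketch of the LED idle behavior described above.
public class LedIdleLogic {
    static final double TARGET_GRACE_SECS = 0.5; // keep LEDs on after target lost
    static final double BLINK_PERIOD_SECS = 3.0; // blink cycle when target lost
    static final double BLINK_LENGTH_SECS = 0.5; // on-time within each cycle

    static boolean ledsOn(boolean forceLeds, boolean autonomous,
                          double secsSinceLastTarget, double time) {
        // Forced on in autonomous or by a command like auto-aim
        if (forceLeds || autonomous) {
            return true;
        }
        // Target seen recently: stay lit (ignores a single odd frame)
        if (secsSinceLastTarget < TARGET_GRACE_SECS) {
            return true;
        }
        // Target lost: blink briefly every few seconds to reacquire
        return (time % BLINK_PERIOD_SECS) < BLINK_LENGTH_SECS;
    }
}
```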

The pixel values can be converted to angles fairly easily (this is already part of our solveCameraToTargetTranslation method), so that’s not an issue. Mostly, we just preferred to reuse the same code from previous years. In particular, this would have given us some extra flexibility if we had decided to go with another vision solution and not rely on PhotonLib (at this point, I don’t see that happening for us). I haven’t looked at PhotonLib in detail, but estimateCameraToTargetTranslation may well be another effective way to do the calculation if you’re looking to simplify a bit.

The purpose of separating the top and bottom corners is to calculate each set with the appropriate target height. We started by just sorting the y coordinates, but that doesn’t work when the tape is too far askew. Instead, the sorting method starts at the average of the four corners and measures the rotation from straight up to each point. minPosRads records the rotation of the point closest to straight up in the positive (counter-clockwise) direction, and minNegRads is the same in the negative direction. Those two points become the upper-left and upper-right corners. There’s no actual need to distinguish left from right for the subsequent calculations; it’s just a side effect of how they’re sorted. This is also why the lower corners aren’t ordered: they’re simply the two corners that weren’t already identified.
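A rough sketch of that sorting approach (illustrative, not the actual implementation; note that pixel y grows downward, so “straight up” is the negative y direction, and the sketch assumes a roughly rectangular set of corners):

```java
public class CornerSorter {
    // Takes four [x, y] pixel corners and returns them with the upper-left
    // and upper-right corners first; the two lower corners follow unordered.
    static double[][] sortCorners(double[][] corners) {
        // Average (centroid) of the four corners
        double cx = 0.0, cy = 0.0;
        for (double[] c : corners) {
            cx += c[0] / 4.0;
            cy += c[1] / 4.0;
        }

        double minPosRads = Math.PI; // smallest CCW rotation from straight up
        double minNegRads = Math.PI; // smallest CW rotation from straight up
        int topLeft = -1, topRight = -1;
        for (int i = 0; i < 4; i++) {
            // Rotation from straight up (-y in pixel coords) to this corner
            double angle = Math.atan2(corners[i][0] - cx, -(corners[i][1] - cy));
            if (angle >= 0 && angle < minPosRads) {
                minPosRads = angle;
                topRight = i;
            }
            if (angle < 0 && -angle < minNegRads) {
                minNegRads = -angle;
                topLeft = i;
            }
        }

        double[][] sorted = new double[4][];
        sorted[0] = corners[topLeft];
        sorted[1] = corners[topRight];
        int idx = 2;
        for (int i = 0; i < 4; i++) {
            if (i != topLeft && i != topRight) {
                sorted[idx++] = corners[i]; // lower corners, unordered
            }
        }
        return sorted;
    }
}
```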

This is simply for clarity. We use VisionPoint for pixel coordinates and Translation2d for other position data. The coordinate systems are very different; the pixel coordinates start in the upper left and “x” and “y” don’t correspond to field coordinates (the “y” pixel is more like “z” in field coordinates). Using a separate class makes it harder to accidentally mix these up, but there’s no functional difference.
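The idea in miniature (illustrative, not the actual class):

```java
// A dedicated type for pixel-space coordinates, so they can't silently
// be passed where a field-space Translation2d is expected.
public class VisionPoint {
    public final double x; // pixels from the left edge of the image
    public final double y; // pixels from the top edge (grows downward)

    public VisionPoint(double x, double y) {
        this.x = x;
        this.y = y;
    }
}
```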

These represent the width and height of the camera’s viewport, derived from the horizontal and vertical FOVs: for every meter you go out from the camera, how much more width/height becomes visible? Essentially, they just encode the FOV of the Limelight camera.
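For example, the constants can be derived from the FOV like this (the FOV numbers below are the Limelight 2’s published specs; substitute your own camera’s values):

```java
// Illustrative derivation of vpw/vph from the camera's FOV.
public class Viewport {
    static final double HORIZ_FOV_DEGREES = 59.6; // Limelight 2 horizontal FOV
    static final double VERT_FOV_DEGREES = 49.7;  // Limelight 2 vertical FOV

    // Visible size one meter out from the camera: 2 * tan(fov / 2)
    static double viewportSize(double fovDegrees) {
        return 2.0 * Math.tan(Math.toRadians(fovDegrees) / 2.0);
    }

    static double vpw() { return viewportSize(HORIZ_FOV_DEGREES); } // ~1.15
    static double vph() { return viewportSize(VERT_FOV_DEGREES); }  // ~0.93
}
```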

All of the measurements in the Vision subsystem are really in the robot frame of reference, since they’re based on an onboard camera. The comment on line 228 just points out that “nY” and “nZ” are measured relative to the robot’s position rather than absolute field coordinates. Nothing is converted between frames of reference here; lines 229 and 230 just rescale the pixel coordinates to a range of -1 to 1.
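A sketch of that rescaling (the resolution and sign conventions here are illustrative):

```java
public class PixelScaling {
    static final int WIDTH = 960, HEIGHT = 720; // assumed camera resolution

    // Normalized coordinates: 0 at the image center, +/-1 at the edges.
    // The signs are flipped because pixel x grows rightward and pixel y
    // grows downward, while the normalized axes point the other way.
    static double nY(double pixelX) {
        return -(pixelX - WIDTH / 2.0) / (WIDTH / 2.0);
    }

    static double nZ(double pixelY) {
        return -(pixelY - HEIGHT / 2.0) / (HEIGHT / 2.0);
    }
}
```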

The circle fitting class finds a solution iteratively. Starting from an initial guess, it scans in four directions looking for a center with a lower total residual. This repeats until the current guess has the lowest residual of all the options, at which point the scan distance is cut in half. Each iteration therefore scans in smaller steps, narrowing in on an optimized center point. The precision argument is the scan distance at which this process stops, i.e. the maximum acceptable error; a smaller precision value means more iterations. We use 1 cm, which can be solved in fewer than 10 iterations and seems to be more than enough given the noise in the translations to each corner.
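A sketch of that kind of iteration (illustrative, not the actual class; this version evaluates all four scan directions each pass, keeps the best one if it beats the current guess, and halves the scan distance once none improves):

```java
public class CircleFitter {
    // Total residual: sum over points of |distance-to-center - radius|
    static double residual(double[][] points, double cx, double cy, double radius) {
        double total = 0.0;
        for (double[] p : points) {
            total += Math.abs(Math.hypot(p[0] - cx, p[1] - cy) - radius);
        }
        return total;
    }

    // Returns the fitted center {x, y} for a circle of known radius
    static double[] fit(double[][] points, double radius,
                        double guessX, double guessY, double precision) {
        double step = radius; // initial scan distance
        double best = residual(points, guessX, guessY, radius);
        while (step > precision) {
            double bestX = guessX, bestY = guessY, bestRes = best;
            double[][] moves = {{step, 0}, {-step, 0}, {0, step}, {0, -step}};
            for (double[] m : moves) {
                double r = residual(points, guessX + m[0], guessY + m[1], radius);
                if (r < bestRes) {
                    bestRes = r;
                    bestX = guessX + m[0];
                    bestY = guessY + m[1];
                }
            }
            if (bestRes < best) {
                guessX = bestX; // a scan direction improved: move there
                guessY = bestY;
                best = bestRes;
            } else {
                step /= 2.0;    // current guess is best: halve the scan distance
            }
        }
        return new double[] {guessX, guessY};
    }
}
```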