FRC 6328 Mechanical Advantage 2022 Build Thread

Software Update #2: “One (vision) ring to rule them all”

Last year, we started our vision journey by auto-aiming just with the horizontal angle from our Limelight. However, we quickly realized the utility of calculating the robot’s position based on a vision target instead. By integrating that data with regular wheel odometry, we could auto-aim before the target was in sight, calculate the distance for the shot, and ensure our auto routines ended up at the correct locations (regardless of where the robot was placed on the field).

Our main objective over the past week was to create a similar system for the 2022 vision target around the hub. This meant both calculating the position of the robot relative to the target and smoothly integrating that position information with regular odometry.

While both the Limelight and PhotonVision now support target grouping, we wanted to fully utilize the data available to us by tracking each piece of tape around the ring individually (more on the benefits of that later). Currently, only PhotonVision supports tracking multiple targets simultaneously; for our testing, we installed PhotonVision on the Limelight from our 2020 robot.

The Pipeline

This video shows our full vision/odometry pipeline in action. Each step is explained in detail below.

  1. PhotonVision runs the HSV thresholding and finds contours for each piece of tape. Using the same camera mount as last year, we found that the target remains visible from ~4ft in front of the ring to the back of the field (we’ll adjust the exact mount for the competition robot of course). In most of that range, 4 pieces of tape are visible. We’ve never seen >5 tracked consistently, and less optimal spots will produce just 2-3. Currently, PhotonVision is running at 960x720; the pieces of tape can be very small on the edges, so every pixel helps. The robot code reads the corners of each contour, which are seen on the left of the video.
  2. Using the coordinates in the image along with the camera angle and target heights, the code calculates a top-down translation from the camera to each corner. This requires separating the top and bottom corners and calculating each set with the appropriate heights. These translations are plotted in the middle section of the video.
  3. Based on the known radius of the vision target, the code fits a circle to the calculated points. This is where we see the key benefit of plotting 12+ points rather than just 2 (as we did last year). When the robot is stationary, the position of the circle stays within a range of just 0.2-0.5 inches frame-to-frame. Last year, we could easily see a range of >3 inches. While the translations to each individual corner are still noisy, the circle fit is able to average all of that out and stay in almost exactly the same location. It’s also able to continue solving even when just two pieces of tape are visible on the side of the frame; so long as the corners fall somewhere along the circumference of the vision ring, the circle fit will be reasonably accurate.
  4. Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose. This “pure vision” pose is visible as a translucent robot to the right of the video. Based on measurements of the real robot, this pose is usually within ~2 inches of the correct position. For our purposes, this is more than enough precision.
  5. Finally, the vision pose needs to be combined with regular wheel odometry. We started by utilizing the DifferentialDrivePoseEstimator class, which makes use of a Kalman filter. However, we found that adding a vision measurement usually took ~10ms, which was impractical during a 20ms loop cycle. Instead, we put together a simpler system; each frame, the current pose and vision pose are combined with a weighted average (~4% vision). This means that after one second of vision data, the pose is composed of more than 85% vision data. It also uses the current angular velocity to adjust this gain — the data tends to be less reliable when the robot is moving. This system smoothly brings the current pose closer to the vision pose, making it practical for use with motion profiling. The final combined pose is shown as the solid robot to the right of the video. (A minimal sketch of this averaging step is shown just below the list.)
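
For anyone looking to replicate step 5, here’s a minimal sketch of the weighted-average step under some assumptions: the base gain, the angular velocity cutoff, and the class/method names are placeholders, and latency compensation is omitted. Since the heading already comes from the gyro, only the translation is blended here.

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class VisionOdometryFuser {
  // Nominal weight given to the vision pose each loop cycle (~4% vision).
  private static final double kBaseVisionGain = 0.04;
  // Above this angular velocity (rad/s), vision data is fully distrusted (placeholder value).
  private static final double kMaxAngularVelocity = 4.0;

  /**
   * Nudges the current odometry pose toward the latest vision pose with a weighted average.
   * The gain is reduced as the robot spins faster, since vision data is less reliable then.
   */
  public Pose2d addVisionMeasurement(Pose2d currentPose, Pose2d visionPose,
      double angularVelocityRadPerSec) {
    // Scale the gain down linearly with angular velocity.
    double gain = kBaseVisionGain
        * Math.max(0.0, 1.0 - Math.abs(angularVelocityRadPerSec) / kMaxAngularVelocity);

    // Weighted average of the two translations.
    Translation2d fusedTranslation = currentPose.getTranslation().times(1.0 - gain)
        .plus(visionPose.getTranslation().times(gain));

    // Keep the gyro-based rotation; vision only corrects translation here.
    return new Pose2d(fusedTranslation, currentPose.getRotation());
  }
}
```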

Where applicable, these steps also handle latency compensation. New frames are detected using an NT listener, then timestamped (using the arrival time, latency provided by PhotonVision, and a constant offset to account for network delay). This timestamp is used to retrieve the correct gyro angle and run the averaging filter. Note that the visualizations shown here don’t take this latency into account, so the vision data appears to lag behind the final pose.
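
As a simplified illustration of the gyro side of that latency compensation, a short history of readings can be kept and queried at the frame’s capture timestamp. This is a sketch with our own names, not the linked code; the capture timestamp itself is roughly the arrival time minus the pipeline latency and a constant network offset:

```java
import java.util.Map;
import java.util.TreeMap;
import edu.wpi.first.math.geometry.Rotation2d;

/** Keeps a short history of gyro readings so a vision frame can be matched to the
 *  rotation the robot had when the frame was actually captured. */
public class GyroHistory {
  private static final double kHistorySeconds = 1.0;
  private final TreeMap<Double, Rotation2d> history = new TreeMap<>();

  /** Call every loop cycle with the current FPGA timestamp and gyro rotation. */
  public void addReading(double timestampSeconds, Rotation2d rotation) {
    history.put(timestampSeconds, rotation);
    // Drop readings older than the history window.
    while (!history.isEmpty() && history.firstKey() < timestampSeconds - kHistorySeconds) {
      history.pollFirstEntry();
    }
  }

  /** Returns the rotation closest in time to the (latency-compensated) capture timestamp. */
  public Rotation2d getRotation(double captureTimestampSeconds) {
    Map.Entry<Double, Rotation2d> floor = history.floorEntry(captureTimestampSeconds);
    Map.Entry<Double, Rotation2d> ceiling = history.ceilingEntry(captureTimestampSeconds);
    if (floor == null) return ceiling != null ? ceiling.getValue() : new Rotation2d();
    if (ceiling == null) return floor.getValue();
    // Pick whichever sample is closer to the requested time.
    return (captureTimestampSeconds - floor.getKey() < ceiling.getKey() - captureTimestampSeconds)
        ? floor.getValue() : ceiling.getValue();
  }
}
```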

More Visualizations

This video shows the real robot location next to the calculated vision pose. The pose shown on the laptop is based on the camera and the gyro, but ignores the wheel encoders. This video was recorded with an older version of the vision algorithm, so the final result is extra noisy.

This was a test of using vision data during a motion profile - it stops quickly to intentionally offset the odometry. The first run was performed with the vision LEDs lit, meaning the robot can correct its pose and return to the starting location. The second run was performed with the LEDs off, meaning the robot couldn’t correct its pose and so it returned to the incorrect location.

This graph shows the x and y positions from pure vision along with the final integrated pose.

  • Vision X = purple
  • Vision Y = blue
  • Final X = yellow
  • Final Y = orange

The final pose moves smoothly but stays in sync with the vision pose. The averaging algorithm is also able to reject noisy data when the robot moves quickly (e.g. 26-28s).

Logging

This project has been a perfect opportunity to use our new logging framework.

For example, at one point during the week, I found that the code was only using one corner of each piece of tape and solving it four times (twice using the height of the bottom of the tape and twice using the top). I had grabbed a log file earlier to record a video while running code with no uncommitted changes. On a whim, I checked out the original commit and replayed the code in a simulator with some breakpoints during the vision calculation. I noticed that the calculated translations looked odd and was able to track down the issue. After fixing it locally, I could replay the same log and see that the new odometry positions were shifted by a few inches throughout the test.

Logging was also a useful resource when putting together the pipeline visualization in this post. We had a log file from a previous test, but it a) was based on an older version of the algorithm and b) didn’t log all of the necessary data (like the individual corner translations). However, we could replay the log with up-to-date code and record all of the extra data we needed. Advantage Scope then converted the data to a CSV file that Python could read and use to generate the video.

Code Links

  • NT listener to read PhotonVision data (here)
  • Calculation of camera-to-target translation (here)
  • Circle fitting class (here)
  • Method to combine vision and wheel odometry (here)

As always, we’re happy to answer any questions.

39 Likes

where’s our gif

9 Likes

Jonah, 6328 has some terrifyingly impressive programming, and I love reading about this stuff. I’ve been doing programming in FRC for 10 years now and you have way better ideas (and execution) than I do. (And definitely better ideas than @came20 )

It was great meeting you at Chezy. Keep it up, dude.

19 Likes

My sincerest apologies.

15 Likes

Thank you to @Tyler_Olds and FUN for hosting us on the Open Alliance show yesterday. The students did a great job talking about where we are in our build season and what goals are next.

4 Likes

Since I see you have been prototyping catapults, I thought I would mention the white paper from team 230 back in 2016 about electric catapults. We used the information provided to create a prototype with a ~22" arm driven by a 20:1 NEO that launches a ball nicely and makes for a relatively light mechanism. Also, shout out to the Gaelhawks for sharing the excellent work they did.

Link to white paper: https://www.chiefdelphi.com/uploads/default/original/3X/c/4/c4fd84c75983c0cd32572ca0f4ea36f3d4f962e5.pdf

edit: I had to add a gif of the prototype to fit in here

18 Likes

Does the arm hit the stop on the upright or do you use the motor to stop at a certain encoder tick?

This uses the soft limit feature and brake mode to stop at a specific encoder position. Repeatability was reasonably good but would likely be improved by smarter software setup or physical stops.
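
For anyone looking to do the same, a minimal sketch with REVLib might look like this (the CAN ID and limit value are placeholders, not our actual configuration):

```java
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMax.IdleMode;
import com.revrobotics.CANSparkMax.SoftLimitDirection;
import com.revrobotics.CANSparkMaxLowLevel.MotorType;

public class CatapultArm {
  private final CANSparkMax motor = new CANSparkMax(10, MotorType.kBrushless); // CAN ID is a placeholder

  public CatapultArm() {
    // Brake mode holds the arm once the controller stops driving it.
    motor.setIdleMode(IdleMode.kBrake);
    // Forward soft limit stops the arm at a specific encoder position (rotations, placeholder value).
    motor.setSoftLimit(SoftLimitDirection.kForward, 15.0f);
    motor.enableSoftLimit(SoftLimitDirection.kForward, true);
  }
}
```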

3 Likes

How are you getting Pose from the target? I understand how you can get distance, but aren’t there multiple places on the field where the target looks identical?

5 Likes

Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose

The gyro is necessary here to get a field pose from vision. Think of polar coordinates with the gyro as theta and the calculated distance as r, with the origin as the center of the hub in field coordinates.
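
As a hedged sketch of that polar-coordinate idea (the hub coordinates are placeholders, and the camera-to-robot-center offset is ignored here for simplicity):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class FieldPoseFromTarget {
  // Center of the hub in field coordinates (meters); placeholder values.
  private static final Translation2d kHubCenter = new Translation2d(8.23, 4.11);

  /**
   * Converts a camera-to-hub distance (r) and gyro heading (theta) into a field pose.
   * The vector from the robot to the hub points along the field-relative camera direction,
   * so the robot sits at the hub center minus that vector.
   */
  public static Pose2d calculate(double distanceToHubMeters, Rotation2d gyroRotation,
      Rotation2d cameraYawToTarget) {
    // Direction from robot to hub in field coordinates.
    Rotation2d fieldRelativeAngle = gyroRotation.plus(cameraYawToTarget);
    Translation2d robotToHub = new Translation2d(distanceToHubMeters, fieldRelativeAngle);
    return new Pose2d(kHubCenter.minus(robotToHub), gyroRotation);
  }
}
```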

1 Like

Ahh, I don’t trust the gyro angle enough. I was hoping to use this to account for the drift in the gyro.

Yea, that’s the problem this year with the vision target. Your “global” vision measurements are affected by your local measurement drift. I’m hoping the new pigeon 2/navx only drift 1 or 2 degrees during a match.

2 Likes

Week 3 Business Team Update #1

Business Plans

Our FRC and FLL business plans were last updated for the 2020 season and were 49 and 30 pages long, respectively. (Sincere apologies to the event judges.) I don’t know if the new 3-page limit was put in place for 2021 (we never got around to updating in the chaos of last year) or if it’s new this year, but that’s quite a change for us! The students working on this transition decided to use the headings given by FIRST as the outline and adapt the existing content into the new page limit, which worked out great. They are finalizing the last of the appendices this week, and then we’ll post them and ask for feedback (we know exactly what these business plans are trying to say, but do you? Let’s find out…)

Of course, the FLL business plan isn’t actually necessary but we do run FLL as a self-sustaining program so we also reworked that one into the same 3-page format. Regardless of anything else, it’s a good exercise for examining our goals, growing edges, and financial plan for reinvigorating the FLL program after it languished a bit in 2020 thanks to COVID.

Sponsor Outreach Campaign

In December, we held a quick student-led training session for new FRC students on how to write an email to potential sponsors to make the ask. Each participant was then assigned a short list of potential sponsors and tasked with reaching out to them. As a result of this campaign, we have four new sponsors so far with positive replies from several others that will hopefully turn into actual sponsors. Though they only have about another week to make it onto the shirts…

Chairman’s Submission Progress

Our Chairman’s team is facing a new challenge this season. We have three very experienced presenters who have been working on Chairman’s submissions for years. But all three are now seniors with exciting futures that don’t include continuing to work on Chairman’s submissions for 6328. :rofl:

So earlier this fall, our Business Student Lead put together a Shadow Chairman’s team made up of students of various ages who are attending all of our awards meetings and learning the process with the intention of taking over next year. They made the deliberate decision to include students of varying ages to ensure that the entire Chairman’s team isn’t going to graduate all at the same time in the future, making the transition between students year-to-year and the sustainability of the Chairman’s team a priority.

As for our Chairman’s submission, the Executive Summary questions are about 95% complete (need to look up some specific data points/numbers, and rework wording for one question). The essay is outlined and students are working this week to see what content from previous years can be used as a foundation before adding in new content and adapting to reflect the most recent year. We also reworked our Chairman’s Documentation Form for this year to hopefully make it easier to update and easier to read. Next step is to fill out the presentation outline, but first we’re waiting to hear (hopefully, as we’ve been led to believe, at the town hall meetings next week) whether judging for New England will be in-person or virtual since that affects how the CA presentation gets put together.

In accordance with the Open Alliance concept, we plan on sharing our CA submission once the drafts are complete. We’re also working with a friend-team to pull together a wider pilot program for CA presentation practice, so stay tuned on that front!

12 Likes

This is awesome! I have a few questions (sorry for the amount, trying to understand the code before I try and implement it)

  1. Why do you use a NetworkTables listener for the Photon data as opposed to just getting the latest result in the periodic method?
  2. Why do you make the timestamp and corner arrays volatile?
  3. What is the target grace timer/“idleOn” for the leds doing?
  4. When calculating the translation from the camera to each point, why do you not use PhotonVision’s utility method estimateCameraToTargetTranslation? Is that because you only have the pixel values of the corners?
  5. Why do you only sort the top two corners specifically and not the bottom corners?
  6. In the sortCorners method, what do minPosRads and minNegRads do?
  7. Why do you use VisionPoint and not just Translation2d for the corner positions?
  8. What are the FOV constants vpw and vph?
  9. On line 229 of the vision subsystem, what’s happening to convert to robot frame?
  10. What does the precision argument do in the circle fitting method?

5 Likes

This is meant to help with latency compensation. Since NT listeners run in a separate thread, we can record the timestamp when the data arrived even during the middle of a cycle. That timestamp (combined with the pipeline latency provided by PhotonVision) is sent as the captureTimestamp. In the end, we also ended up adding a static offset in the Vision subsystem to estimate network latency, so the extra precision of the NT listener may be overkill.
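
For anyone curious, a minimal sketch of that listener might look like the following. The NetworkTables entry names and the constant offset are assumptions, not the actual PhotonVision layout or our exact code:

```java
import edu.wpi.first.networktables.EntryListenerFlags;
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

public class VisionListener {
  private static final double kNetworkLatencySeconds = 0.01; // constant offset, placeholder

  private volatile double captureTimestamp = 0.0;

  public VisionListener() {
    // Entry names are assumptions; check the actual PhotonVision NT layout.
    NetworkTableEntry latencyEntry = NetworkTableInstance.getDefault()
        .getTable("photonvision").getSubTable("camera").getEntry("latencyMillis");

    // The listener runs on an NT thread, so the arrival time is recorded even if the
    // main robot loop is mid-cycle.
    latencyEntry.addListener(event -> {
      double latencySeconds = event.value.getDouble() / 1000.0;
      captureTimestamp = Timer.getFPGATimestamp() - latencySeconds - kNetworkLatencySeconds;
    }, EntryListenerFlags.kNew | EntryListenerFlags.kUpdate);
  }

  public double getCaptureTimestamp() {
    return captureTimestamp;
  }
}
```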

These variables are used to transfer data from the NT listener thread to the main thread. They’re volatile just as a best practice for variables being accessed by multiple threads.

These are for controlling the idle behavior of the LEDs. To keep the odometry from drifting, we want to track the target whenever it’s visible (even if the driver isn’t actively targeting). We also want to avoid blinding everyone around the field unnecessarily. This is the compromise we came up with; as long as the target is visible, the LEDs will stay lit. They turn off once the target is lost for 0.5s (this ensures that there wasn’t just a single odd frame). When no target is visible, they blink for 0.5s every 3s in case the target comes back into view. The LEDs are also forced on a) in autonomous, where accurate odometry is extra critical and b) when a command like auto-aim calls setForceLeds.
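
As a rough sketch of that LED logic (timing constants taken from the description above; the names are ours, not the actual subsystem):

```java
import edu.wpi.first.wpilibj.Timer;

public class LedLogic {
  private static final double kTargetGraceSeconds = 0.5;  // keep LEDs on after the target is lost
  private static final double kBlinkPeriodSeconds = 3.0;  // blink once every 3 seconds while idle
  private static final double kBlinkLengthSeconds = 0.5;  // each idle blink lasts 0.5 seconds

  private final Timer targetGraceTimer = new Timer();

  public LedLogic() {
    targetGraceTimer.start();
  }

  /** Returns whether the LEDs should be lit, given the current target state and overrides. */
  public boolean shouldLightLeds(boolean targetVisible, boolean forceLeds, boolean autonomous) {
    if (targetVisible) {
      targetGraceTimer.reset(); // restart the 0.5s grace window
    }
    boolean idleOn =
        targetGraceTimer.get() < kTargetGraceSeconds // target seen recently
            || Timer.getFPGATimestamp() % kBlinkPeriodSeconds < kBlinkLengthSeconds; // periodic blink
    return forceLeds || autonomous || idleOn;
  }
}
```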

The pixel values can be converted to angles fairly easily (this is already part of our solveCameraToTargetTranslation method), so that’s not an issue. Mostly, we just preferred to reuse the same code from previous years. In particular, this gives us some extra flexibility if we had decided to go with another vision solution and not rely on PhotonLib (at this point, I don’t see that happening for us). I haven’t looked in detail at PhotonLib, but it may be that estimateCameraToTargetTranslation would be another effective way to do the calculation if you’re looking to simplify a bit.

The purpose of separating the top and bottom corners is to calculate each set with the appropriate heights. We started just by sorting the y coordinates, but that doesn’t work when the tape is too far askew. Instead, it starts at the average of the four corners and measures the rotation from straight up to each point. minPosRads records the rotation of the closest point to straight up in the positive (counter-clockwise) direction and minNegRads is the same in the negative direction. These points become the upper-left and upper-right corners. There’s no actual need to distinguish left and right for the subsequent calculations, it’s just a side effect of how they’re sorted. This is why the lower corners aren’t ordered — they’re just the two corners that haven’t already been identified.
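
Here’s a hedged sketch of that sorting idea (our own simplified names; as noted above, which index ends up “left” vs. “right” doesn’t really matter):

```java
import java.util.List;

public class CornerSorter {
  /** Simple pixel-coordinate pair (origin at the upper left of the image). */
  public static class VisionPoint {
    public final double x, y;
    public VisionPoint(double x, double y) { this.x = x; this.y = y; }
  }

  /**
   * Returns the indices of the two upper corners: the points closest to "straight up"
   * from the centroid in the positive and negative rotation directions.
   */
  public static int[] findUpperCorners(List<VisionPoint> corners) {
    // Centroid of the four corners.
    double avgX = 0.0, avgY = 0.0;
    for (VisionPoint corner : corners) { avgX += corner.x; avgY += corner.y; }
    avgX /= corners.size();
    avgY /= corners.size();

    double minPosRads = Math.PI, minNegRads = Math.PI;
    int topLeftIndex = -1, topRightIndex = -1;
    for (int i = 0; i < corners.size(); i++) {
      VisionPoint corner = corners.get(i);
      // Rotation from "straight up" (pixel y increases downward, hence the flipped term).
      double angle = Math.atan2(corner.x - avgX, avgY - corner.y);
      if (angle >= 0 && angle < minPosRads) {
        minPosRads = angle;
        topLeftIndex = i;
      } else if (angle < 0 && Math.abs(angle) < minNegRads) {
        minNegRads = Math.abs(angle);
        topRightIndex = i;
      }
    }
    // The remaining two indices are the (unordered) lower corners.
    return new int[] {topLeftIndex, topRightIndex};
  }
}
```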

This is simply for clarity. We use VisionPoint for pixel coordinates and Translation2d for other position data. The coordinate systems are very different; the pixel coordinates start in the upper left and “x” and “y” don’t correspond to field coordinates (the “y” pixel is more like “z” in field coordinates). Using a separate class makes it harder to accidentally mix these up, but there’s no functional difference.

These represent the height and width of the camera’s viewport based on the vertical and horizontal FOVs; for every meter you go out from the camera, how much more width/height becomes visible? Essentially, they just specify the FOV of the Limelight camera.
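
For reference, here’s a minimal sketch of how those constants can be computed from the camera FOV (the FOV values below are the commonly published Limelight 2 numbers and should be treated as assumptions for your camera):

```java
public class CameraConstants {
  // Published Limelight 2 FOV values (degrees); treat as assumptions for your camera.
  private static final double kHorizontalFovDegrees = 59.6;
  private static final double kVerticalFovDegrees = 49.7;

  // Viewport width/height at a distance of one meter from the camera.
  public static final double vpw = 2.0 * Math.tan(Math.toRadians(kHorizontalFovDegrees / 2.0));
  public static final double vph = 2.0 * Math.tan(Math.toRadians(kVerticalFovDegrees / 2.0));
}
```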

All of the measurements in the Vision subsystem are really robot frame of reference since they’re based on an onboard camera. The comment on line 228 is just to point out that “nY” and “nZ” are measured relative to the robot’s position and aren’t referring to absolute field coordinates. Nothing is being converted between frames of reference here — lines 229 and 230 just rescale pixel coordinates to a range of -1 to 1.
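
As a hedged illustration of that rescaling (using the 960x720 resolution mentioned earlier; the class and sign conventions are ours, not the linked code):

```java
public class PixelToViewport {
  private static final double kResolutionX = 960.0;
  private static final double kResolutionY = 720.0;

  /**
   * Rescales a pixel coordinate to the range -1..1, with 0 at the image center.
   * "nY" is left/right and "nZ" is up/down relative to the camera.
   */
  public static double[] normalize(double xPixels, double yPixels) {
    double nY = -(xPixels - kResolutionX / 2.0) / (kResolutionX / 2.0);
    double nZ = -(yPixels - kResolutionY / 2.0) / (kResolutionY / 2.0);
    return new double[] {nY, nZ};
  }

  /** Converts normalized coordinates to a point on the viewport one meter from the camera. */
  public static double[] toViewportPoint(double nY, double nZ, double vpw, double vph) {
    double y = vpw / 2.0 * nY; // meters left/right at 1m out
    double z = vph / 2.0 * nZ; // meters up/down at 1m out
    return new double[] {y, z};
  }
}
```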

The circle fitting class uses iterations to find a solution. Based on an initial guess, it scans in four directions to find a center with a lower total residual. This repeats until the current guess has the lowest residual of all of the options, then the scan distance is cut in half. Each iteration scans in smaller steps to narrow down to an optimized center point. The precision argument is the scan distance where this process should stop — the maximum acceptable error. Requiring more precision (a smaller value) increases the number of iterations. We use 1cm, which can be solved in <10 iterations and seems to be more than enough given the noise in the translations to each corner.
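
Here’s a rough sketch of that iterative search following the description above (the names and initial step size are placeholders; see the linked class for the real implementation):

```java
import java.util.List;
import edu.wpi.first.math.geometry.Translation2d;

public class CircleFitter {
  /**
   * Fits a circle of known radius to a set of points, returning the center.
   * Starting from an initial guess, it repeatedly steps toward lower total residual,
   * halving the step size until it drops below the requested precision.
   */
  public static Translation2d fit(double radius, List<Translation2d> points,
      Translation2d initialGuess, double precision) {
    Translation2d center = initialGuess;
    double stepSize = 1.0; // meters; placeholder starting scan distance
    double currentResidual = residual(radius, points, center);

    while (stepSize > precision) {
      boolean improved = false;
      Translation2d[] candidates = {
        center.plus(new Translation2d(stepSize, 0.0)),
        center.plus(new Translation2d(-stepSize, 0.0)),
        center.plus(new Translation2d(0.0, stepSize)),
        center.plus(new Translation2d(0.0, -stepSize))
      };
      for (Translation2d candidate : candidates) {
        double candidateResidual = residual(radius, points, candidate);
        if (candidateResidual < currentResidual) {
          center = candidate;
          currentResidual = candidateResidual;
          improved = true;
        }
      }
      // When no neighbor improves the fit, narrow the search.
      if (!improved) {
        stepSize /= 2.0;
      }
    }
    return center;
  }

  /** Sum of squared differences between each point's distance to the center and the radius. */
  private static double residual(double radius, List<Translation2d> points, Translation2d center) {
    double residual = 0.0;
    for (Translation2d point : points) {
      double diff = point.getDistance(center) - radius;
      residual += diff * diff;
    }
    return residual;
  }
}
```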

11 Likes

Thank you so much! A couple more questions:

I’m still a little confused about this. After these values are recorded, they’re not used as far as I can tell? Is their function just to make sure the points aren’t more than 90 degrees from straight up? Also, when it’s comparing to Math.PI, wouldn’t it be more accurate to call it maxPosRads?

Why do you use forceLeds in addition to the override state? Wouldn’t it be simpler just to supply a “force” state to the led mode supplier?

3 Likes

The topLeftIndex and topRightIndex variables record the indices of the corners, so minPosRads and minNegRads just record the actual rotation. When iterating through the corners, a point with a positive rotation less than minPosRads overrides both minPosRads and topLeftIndex. A point with a negative rotation less than minNegRads (the absolute value of the rotation is used here) overrides both minNegRads and topRightIndex. This is what’s happening here.

minPosRads and minNegRads are both initialized to Math.PI because that’s the maximum value we would ever expect to see on each side. They could just as easily be initialized to Double.MAX_VALUE.

The LED mode supplier is specifically used for the override switch on our operator board. The supplier is set here. The “off” option is meant to be used in the pits to avoid blinding ourselves, and “on” allows us to tune the pipeline when the robot is disabled. Most of the time, the switch is just set to “auto” (meaning the code should automatically enable the LEDs depending on the robot state and target visibility). setForceLeds is a way for other parts of the code to request for the LEDs to be enabled in “auto,” but the override switch always takes precedence.

FYI, after some internal discussion we switched the NT listener to use a synchronized block rather than volatile. The up-to-date version of the class is available here.

5 Likes

1/29: FUN, Catapult, Drivetrain progress

FIRST Updates Now Interview

We had a great time on the FUN Open Alliance show a few nights ago. We were able to cover quite a lot of information during the 20 minute interview including prototype, software/odometry/logging, and scouting updates. You can find the livestream VOD here, as well as an excellently shot and edited video that our Media team lead made for the show here.

Pneumatic Catapult

As mentioned in posts above, we briefly investigated the practicality of a pneumatically-actuated catapult similar to those of 148 and 1986 in 2016. The main benefit of a catapult for us is its ability to consistently hit shots from a specific location (or two) with very little variability, something we think flywheels can struggle with (not just this year, but historically). It’d be nice to not have to worry about recovery times, ball inflation (looking at you, incorrect gauge), and the magic numbers that compression & wheel speed bring to a flywheel.

After testing it, however, the downsides/issues that we encountered with this type of catapult were not trivial. Namely, we found that even after a few optimizations and tricks (using higher flow rate solenoids, tanks downstream of the regulator, spring loading), it wasn’t quite fast enough to comfortably score two cargo at once. While we could have spent a few more days potentially improving the prototype, we knew that there would be more issues to address (most notably, loading/serializing into the catapult may be a challenge) and we don’t quite have the bandwidth for that. I hope to see some awesome double catapults out there this year :eyes:

Field Elements

Due to the limited height within our build space, the manufacturing team created a mobile goal which we can take outside. Unfortunately, it was a little bit too mobile in the wrong ways…

Prototype Bot Chassis

We’ve made good progress on manufacturing and wiring our prototype robot’s chassis. The electrical team has been thinking about the best way to arrange all of the electronics, which will translate to the competition robot when we get to it.

We didn’t have the tread on yet during initial testing, so it was a bit drifty. After adding the center wheels, we unfortunately weren’t able to go sideways anymore.

We also printed these gearbox covers; they don’t really serve a huge purpose other than perhaps containing some grease sprayage.

After fixing some minor details and adding battery mounting parts, we recently started manufacturing the competition robot chassis.

CAD/Design

Intake

In an effort to achieve as much compliance and robustness as possible, we’re taking a page (or two… or three) out of 1678’s book.

Some noteworthy features:

  • The fourth “bar” of the 4-bar will just be surgical tubing, which will be tied somewhere to the superstructure.
  • When the intake is actuated outside of the robot, the cylinders will be in their retracted state. This will hopefully prevent the cylinder rod from bending when we inevitably encounter an aggressive side-loading scenario.
  • The intake motor will be located within the robot frame instead of on the actual intake by having the pivot point also be where the belt reduction is. This should help make the intake nice and light, hopefully further increasing its capability to take a lot of hits at a competition and still survive.

What we still have to add:

  • Geometry such that the intake will fold up nicely when retracted back into frame (this will likely have more to do with the tower / whatever is behind the intake)

What we might change between this prototype and the final version:

  • The link lengths. With our current constraints (most prominently: the pneumatic cylinder facing “backwards” and the pivot having to be on the middle roller), the intake is geometrically forced to sit pretty high up when it’s retracted back into frame, which means that the links are pretty large. We’re thinking that we might want to change this later on, which means either tweaking the cylinder or moving the pivot point on the roller plate (or using a more traditional four bar rather than surgical tubing).

We’d like to seriously thank Citrus for publicly releasing their CAD year after year - it’s one of the resources that we find ourselves coming back to again and again for reference and it’s extremely helpful. Specifically, here is their public 2020 CAD that we’ve taken a lot of inspiration from.

Flywheel / Tower

This is the next thing on our list to finalize. We’ve begun laying out some of the basic geometry and are looking to get at least the prototype robot version assembled by the middle of this coming week.

Below is a testing video that we collected a couple meetings ago of our tower & flywheel prototype. It’s been a bit of a struggle to gather meaningful information on shooter prototypes like this because the tallest ceiling in our shop is around 9 ft, resulting in a fairly small window of error that we have to fit the cargo into in order to “score”. This has necessitated testing in the back room with all the water heaters.

We also experimented with a top roller, which seemed to greatly reduce the backspin and help with bounce out. The main question we are trying to answer now is whether we can get away with a single hood angle for both the high and low goal by simply varying flywheel speed.

This is fine

We’re certainly behind where we hoped to be; having two years off has exposed some rough edges. We’re working to make some changes to get back on track and be sure we’re ready for week 1. The main actions are to take a bit of a step back and make sure everyone is on the same page so we have as much contribution as possible. Hopefully we can share some positive takeaways shortly.

-Max

31 Likes

Would you mind giving a bit more information on the intake? If you have a view of it stowed within the frame perimeter that would be great. I am just looking to get a more complete picture of what you plan to do with the surgical tubing!

2 Likes

Great questions! Apologies for not including more information on that - hopefully this helps.

The intake is still in its prototype phase, so we don’t have all of the details ironed out yet, but we’re hoping to get there in the next few days. We’ve been testing it some and are looking to post more about it in our next blog post(s), but here’s some additional information to hopefully clarify a few things.

The intake will be actuated outward and, since the roller plate (green plate) is able to pivot about the link that is moving (red plate), the rollers will also make their way outside of the frame.

What’s important here, and potentially not very intuitive, is that we need a way to constrain the movement of that roller plate (green) such that the rollers form into the general geometry that we desire (AKA: no dead zones and ample compression throughout the ball’s journey from the floor → over the bumper → into the robot frame).

This is the function of the surgical tubing; as the intake is actuated outside of frame from its “collapsed” position, the surgical tubing becomes taut and forces the rollers into roughly the desired location.

On the other hand, when we need to retract the intake back into frame, we can’t rely on the surgical tubing to do much since it becomes slack and the roller plate (green) is able to just freely flop around. So in order to get the intake back into a stowed/collapsed position, we need some helping geometry to guide the roller plate (green) along its actuation path and into its collapsed location.

We’re not too sure what exactly this helping geometry is going to look like just yet, but it may look something like using our tower’s 1x1 tube with some standoffs or plates to get underneath the sloped side of the roller plate (green) to get it to kinda nicely ramp up and into its collapsed state.

1678 guiding geometry example

I believe that this section of this plate is what helped guide 1678’s intake up and into its stowed position, though I may very well be mistaken.

We’re also still thinking about whether or not this style of actuation is what we want to go with. As development progresses, we’ll be sure to keep the thread updated on what route we end up going with.

Hopefully this video of us performing a very useful test helps visualize the function of the surgical tubing and whatnot :sweat_smile:.

32 Likes