FRC 6328 Mechanical Advantage 2022 Build Thread

Day 4: Linear Catapult Prototype

Small update today on a first shooter prototype, a linear pneumatic catapult.

It is made up of a linear-motion 3D printed HYPE block with 8 bearings in it and a pair of 3/4" bore, 10" stroke cylinders, each powered by its own high-flow solenoid from Automation Direct. There are also some other optimizations, like having two air tanks downstream of the regulator.

The cradle that holds the ball needs a lot of work, as it currently rotates and flexes with every shot, but the prototype is still impressively consistent even so. The eventual goal is for this to hold both balls at a time, side by side.

-Max

30 Likes

We used brushes from the 2019 feeder station in our 2021 at home robot and they worked great. 10/10 would recommend.

2 Likes

This would be really interesting to see. The main argument I've seen against catapults this year is that you can hold two cargo at once. A double-shot catapult would, in theory at least, be even better than a flywheel shooter in that regard. Good luck!

1 Like

Day 5-6: Launcher Prototypes, Scouting App, & Drivetrain CAD

Scouting App

The scouting team started work on the scouting app for this year. One of the ideas they came up with was recording the exact location a team shot from by tapping on the field. This requires having an accurate field modeled in the app UI. The scouting team will do a much larger deep dive into this soon.

We figured out a way to streamline the process of figuring out where each field element should go on CanvasManager [our name for the scouting app UI framework]. To do this, we first uploaded a drawing of the field into GIMP and resized it to the resolution of the app, so that hovering the mouse over the image gives us the exact x and y coordinates of that point. After accounting for some offsets and other calculations, we transfer these coordinates into a website called GeoGebra (similar to Desmos). We can then rotate the shapes we created in GeoGebra to find the other necessary points. Although there was some initial grunt work, this should streamline the rest of the layout work, giving us more time to work on debugging the app.
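As a rough illustration of the coordinate math involved (this is not our app code; the class, scale factors, and offsets below are all hypothetical), scaling a point from the field drawing into canvas pixels and then rotating it about the field center looks something like this:

```java
// Illustrative sketch only: scale a point from the source field drawing into
// app canvas pixels, then rotate it about the field center to generate the
// matching points for the other tarmac corners. All values are placeholders.
public class FieldPointHelper {
  static final double CANVAS_WIDTH = 1024.0;  // assumed app canvas width (px)
  static final double IMAGE_WIDTH = 3300.0;   // assumed field drawing width (px)
  static final double SCALE = CANVAS_WIDTH / IMAGE_WIDTH;

  /** Scales a point measured on the source drawing into canvas pixels, with offsets. */
  static double[] toCanvas(double imageX, double imageY, double offsetX, double offsetY) {
    return new double[] {imageX * SCALE + offsetX, imageY * SCALE + offsetY};
  }

  /** Rotates a canvas point about the field center by the given angle (radians). */
  static double[] rotateAboutCenter(double[] point, double centerX, double centerY, double angle) {
    double dx = point[0] - centerX;
    double dy = point[1] - centerY;
    return new double[] {
      centerX + dx * Math.cos(angle) - dy * Math.sin(angle),
      centerY + dx * Math.sin(angle) + dy * Math.cos(angle)
    };
  }
}
```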

-Manthan

Software

The software team continued to bring up the baseline code for controlling the drivetrain and running motion profiles, in addition to working to finish up the prototype motor test board. There will be a much larger software update posted soon so this section is kept short for now.

Prototyping

Linear Catapult

The linear catapult was modified to hold two balls side by side. With this added weight the prototype struggled with range in its current form. I unfortunately don’t have any pictures/videos of this right now, so stay tuned for those over the weekend. For the next catapult revision we think we will be pivoting to a more standard rotating catapult design, but staying pneumatic.

Hooded Flywheel

This is a traditional hooded flywheel prototype with various built in adjustments. The CAD team worked on designing it and then we cut it out on our router.

Attempt 1


Maybe a little fast… turning down the speed for attempt 2

Second try!

This was at the very end of the meeting, so we only shot these two balls. Much more testing to come…

Prototype Specs

  • Adjustable compression by changing hood holes and thickness (currently 0.8" of compression)
  • 6" flywheel (currently 2x Colsons)
  • 2x NEOs either 1:1 or 1.33:1 (currently 1 NEO 1:1)
  • Additional top flywheel attachment powered by another NEO either 1:1 or 1.33:1, similar to the GreyT Shooter V2 (not shown or tested yet)
  • 2" Kicker wheel powered by an UltraPlanetary (not used yet)
  • Mounting holes for a “ball tower” to store and feed the balls

Next Steps

  • Prototype and integrate ball tunnel into the shooter to feed balls
  • Test how rapidly two balls can be fed
  • Test the range where we can still make shots with a fixed hood angle
  • Experiment with
    • Wheel type
    • Compression
    • Hood material, potentially foam
    • Release angle
    • Upper flywheel
    • Flywheel speed(s)

CAD

The CAD team also worked on our drivetrain CAD which is shown below. There are still some final details, but it is almost there. Currently it is 29W x 30L and has an 11:66 gear ratio.

We're off today, so we should have a week 1 summary post up on Sunday, covering our progress so far and tomorrow's meeting.

-Max

19 Likes

gif game is 100%

5 Likes

Software Update #1: Setting the Foundation

Our robot code work this week was focused on setting up a solid foundation for our 2022 code base. Below are a few key takeaways; while none of this is particularly revolutionary, we hope that it might prove useful.

AdvantageKit, Logging, and Multi-Robot Support

This is our first full project to make use of AdvantageKit and our logging framework. This means that hardware interaction in each subsystem is separated into an “IO layer” (see more details here) such that data can be replayed in a simulator. An extra bonus of this system is that it’s easy to swap out the hardware implementation. Currently, the drive subsystem supports SparkMAXs, TalonSRXs, or the WPILib drive sim depending on the current robot. We can test our code on older robots just by changing a constant, which has proved useful as we develop code long before any competition robot takes shape. Each subsystem and command defines constants for each robot, meaning that we don’t rely on all of them to behave identically. We’ll continue to share our progress with logging as the robot project grows beyond just a single subsystem.
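As a rough illustration of the IO-layer pattern (the names here are hypothetical, not our actual classes; see the linked details for the real structure), each subsystem only touches hardware through a small interface, and the robot container picks an implementation per robot:

```java
// Hedged sketch of an AdvantageKit-style IO layer. Class and method names are
// illustrative only.
public interface DriveIO {
  /** Every input the subsystem reads from hardware; logged so it can be replayed. */
  class DriveIOInputs {
    public double leftPositionMeters = 0.0;
    public double rightPositionMeters = 0.0;
    public double leftVelocityMetersPerSec = 0.0;
    public double rightVelocityMetersPerSec = 0.0;
  }

  /** Reads sensor values from the hardware (or simulator) into the inputs object. */
  default void updateInputs(DriveIOInputs inputs) {}

  /** Sends voltage commands to the motors. */
  default void setVoltage(double leftVolts, double rightVolts) {}
}

// Swapping hardware is then just a matter of choosing the implementation:
//   new Drive(new DriveIOSparkMAX());  // competition robot
//   new Drive(new DriveIOTalonSRX());  // older practice robot
//   new Drive(new DriveIOSim());       // WPILib drive simulation
```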

Operator Interface

During the offseason, we built a new operator board (see below) focused on using Xbox controllers. It contains wells for the driver and operator controllers, plus a series of physical override switches. The panel under the laptop is magnetically attached, and lifts up for access to the cables. This makes it much easier to carry devices to the field, and means that we don’t need to rely on dashboard buttons for critical overrides.

Last year, our scheme for finding and using joysticks was extremely flexible (see this post). However, we felt that the capability just wasn’t worth the complexity. Rather than supporting 7+ control schemes, we’ve reduced down to just 2. During competitions, the driver and operator use separate Xbox controllers. For testing or demos, all of the controls can also be mapped to a single Xbox controller. Here’s how the code is set up internally:

  • All of the driver and operator controls are defined in this interface (it’s very minimal right now). Based on the number of Xbox controllers, either the single controller or dual controller implementations are instantiated.

  • This class handles the override switches on the operator board. It reads the value of each switch when the joystick is attached, but will default to every switch being off if we’re running without the board.

  • The OISelector class scans the connected joysticks to determine which versions of the OI classes to instantiate. When the robot is disabled, the code continuously scans for joystick changes and recreates the OI objects when necessary.
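A bare-bones sketch of that selection pattern (hypothetical names, not the linked classes) might look like this:

```java
// Minimal sketch of the OI selection idea: an interface with safe defaults plus
// a selector that picks an implementation based on the connected joysticks.
// Names are illustrative only.
import edu.wpi.first.wpilibj.DriverStation;

interface HandheldOI {
  default double getDriveSpeed() { return 0.0; }
  default double getDriveRotation() { return 0.0; }
}

class SingleHandheldOI implements HandheldOI { /* everything mapped to one controller */ }

class DualHandheldOI implements HandheldOI { /* separate driver and operator controllers */ }

class OISelector {
  /** Called while disabled whenever the set of connected joysticks changes. */
  static HandheldOI findHandheldOI() {
    int xboxCount = 0;
    for (int port = 0; port < DriverStation.kJoystickPorts; port++) {
      if (DriverStation.getJoystickIsXbox(port)) {
        xboxCount++;
      }
    }
    return xboxCount >= 2 ? new DualHandheldOI() : new SingleHandheldOI();
  }
}
```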

This command is responsible for converting joystick positions to drive speeds. The joystick mode is selectable on the dashboard, and we currently support 3 modes (again, simplifying from 10 modes last year).

  • Tank: Left stick controls left speed, right stick controls right speed. While not a very intuitive system, this is still useful when testing the drive train.
  • Split Arcade: Left stick controls forward speed, right stick controls the speed differential. This is a reliable and easy to use option for testing, demos, etc.
  • Curvature: This isn’t quite a traditional curvature drive, but a mix between full curvature and split arcade. It’s a scheme we used during the at-home challenges last year to overcome the key limitation of curvature drive: not functioning while stationary. While a “quick turn” button is an effective solution, we felt that it was too unintuitive. Instead, we slowly transition between split arcade and curvature drive up to 15% speed (0% is split arcade, 15% is curvature, and 7.5% averages the outputs from both modes). This provides the benefits of curvature drive at high speed while maintaining the ease of use from split arcade.
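Here's a rough sketch of that blend (the 15% threshold comes from the description above; everything else, including the class name, is illustrative):

```java
// Hedged sketch of the split-arcade/curvature blend. Below 15% commanded speed
// the output fades linearly from split arcade (at 0%) to curvature drive (at 15%).
public class HybridCurvatureSketch {
  private static final double CURVATURE_THRESHOLD = 0.15;

  /** Returns {leftSpeed, rightSpeed} in the range -1 to 1. */
  public static double[] calculate(double throttle, double turn) {
    // Split arcade: the turn input directly adds a speed differential.
    double arcadeLeft = throttle + turn;
    double arcadeRight = throttle - turn;

    // Curvature: the turn input scales with forward speed (constant-radius turning).
    double curvatureLeft = throttle + Math.abs(throttle) * turn;
    double curvatureRight = throttle - Math.abs(throttle) * turn;

    // 0 at standstill (pure arcade), 1 at or above the threshold (pure curvature).
    double blend = Math.min(1.0, Math.abs(throttle) / CURVATURE_THRESHOLD);

    double left = arcadeLeft * (1.0 - blend) + curvatureLeft * blend;
    double right = arcadeRight * (1.0 - blend) + curvatureRight * blend;

    // Normalize so neither side exceeds full output.
    double maxMagnitude = Math.max(1.0, Math.max(Math.abs(left), Math.abs(right)));
    return new double[] {left / maxMagnitude, right / maxMagnitude};
  }
}
```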

Odometry, Motion Profiling, and Field Constants

Our odometry system is fairly basic right now, but it will evolve as we continue to explore vision options. Odometry is handled by the drive subsystem, with getters and setters for pose. One key change from last year is that everything uses meters instead of inches. This has already made our lives much easier when it comes to setting up motion profiling.

As with other areas, we took this as an opportunity to simplify older code. Our ~800 line motion profiling command from last year is now much simpler and easier to use (see here). We focused on the most essential features while removing the less useful ones (like circular paths, which were only useful for the at-home challenges). This new command is also much more maintainable when we need to make fixes and improvements.

Before long, we'll need to start putting together profiles for auto routines. Unfortunately, this year's field layout is a bit of a mess when it comes to defining the positions of game elements (did the edges of the tarmac really need to be nominally tilted by 1.5°?). To save many headaches later in the season, we wrote this class with lots of useful constants. It defines four "reference points" (see the diagram below) along the tarmac. The cargo positions are defined using translations from those references. The same principle of starting at a reference and translating could be used to define robot starting positions or waypoints on a profile to collect cargo. The class also includes constants for the hub vision target.
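To give a flavor of the approach (the numbers and names below are placeholders, not the real field measurements; see the linked class for those), a cargo position is built by translating from a reference pose:

```java
// Hedged sketch of the reference-point idea: define a corner of the tarmac once,
// then derive nearby game-element positions from it. Values are placeholders.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

public final class FieldConstantsSketch {
  // One tarmac reference corner, measured from the field drawings (placeholder values, meters).
  public static final Pose2d referenceA =
      new Pose2d(new Translation2d(7.0, 4.5), Rotation2d.fromDegrees(-21.0));

  // A cargo position defined relative to that reference instead of in raw field coordinates.
  public static final Translation2d cargoNearReferenceA =
      referenceA
          .transformBy(new Transform2d(new Translation2d(1.1, 0.0), new Rotation2d()))
          .getTranslation();
}
```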

SysId

SysId is an incredibly useful tool, but the process of setting up a new project is quite tedious (selecting motors, encoders, conversion factors, etc.). Instead, we wrote a command that communicates with SysId but makes use of the code we've already set up to control each subsystem. See this example of using the command. Each subsystem just needs a method to set the voltage output (with no other processing except voltage compensation), and a method to return encoder data like position and velocity.
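In other words, the only surface a subsystem has to expose is something like the following (a hedged sketch; the interface and method names are made up for illustration, not part of SysId or of our linked code):

```java
// Illustrative sketch of the minimal subsystem surface our characterization
// command needs, per the description above. Names are hypothetical.
public interface CharacterizableDrive {
  /** Applies a raw voltage to each side, with only voltage compensation applied. */
  void driveVoltage(double leftVolts, double rightVolts);

  /** Left drive position, in a consistent unit such as rotations. */
  double getLeftPosition();

  /** Right drive position. */
  double getRightPosition();

  /** Left drive velocity, in the same unit per second. */
  double getLeftVelocity();

  /** Right drive velocity. */
  double getRightVelocity();
}
```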

Utilities

Neither of these utilities is new this year, but it's always worth mentioning them again:

  • We use a custom Shuffleboard plugin called NetworkAlerts for displaying persistent alerts. The robot side class can be used to alert the drivers of potential problems.

  • For constants (like PID gains) that require tuning, we use our TunableNumber class. During normal operation, each acts as a constant. When the robot is in “tuning mode” (enabled via a global constant), all of the TunableNumbers are published to NT and can be changed on the fly.
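A stripped-down sketch of the TunableNumber idea (the real class linked above has more to it; the names and NT keys here are illustrative):

```java
// Hedged sketch of the TunableNumber pattern: a plain constant by default, a
// dashboard-backed value in tuning mode. Not the actual class.
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class TunableNumberSketch {
  private static final boolean TUNING_MODE = false; // global constant in the real code
  private final String key;
  private final double defaultValue;

  public TunableNumberSketch(String key, double defaultValue) {
    this.key = "TunableNumbers/" + key;
    this.defaultValue = defaultValue;
    if (TUNING_MODE) {
      SmartDashboard.putNumber(this.key, defaultValue);
    }
  }

  /** Returns the dashboard value in tuning mode, otherwise the compiled-in default. */
  public double get() {
    return TUNING_MODE ? SmartDashboard.getNumber(key, defaultValue) : defaultValue;
  }
}
```

A command can then call get() each loop and, for example, push new PID gains to a controller whenever the value changes.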

Our next major software project is to explore vision options with the hub target (and maybe cargo too). We’ll be sure to post an update with our findings. In the meantime, we’re happy to answer any questions.

17 Likes

8 Likes

Scouting and Strategy Week 1 Update

This past week, the Scouting and Strategy Team worked to further analyze the game as well as begin work on our scouting app for this year. Below you can find a summary of what we have accomplished, what we have found useful, and what we haven’t.

Strategy

Throughout the week, we looked at ways to gain further insight into the game as well as possible adjustments to our priority list. Early on, we looked at the Monte Carlo simulation made by Team 4926, which can be found here. When using it, we found that we were unable to extrapolate meaningful information from it when the distribution of performance was flat, as opposed to on a curve. We are looking to modify this in the near future, both so we can project how the game will play out with it, and also so we can generate mock data for our analysis systems with it.
We have also been looking at ways to predict the amount of cargo on the field as a match progresses with robots of certain capabilities. We attempted this using a sort of Excel algorithm; however, we found it couldn't take in enough data to give useful results. We think this sort of analysis has potential, and we are looking at Python solutions.

We have also discussed and listed below our adjustments to our initial priority list. If you click on any of the changes, you can see our reasoning.

Being able to go under the low bar: Should Have -> Must Have
  • As more and more Open Alliance and Ri3D information comes out regarding the cramped nature of the Hangar, we believe it will be very important to be able to approach the hangar rungs from the middle of the field as opposed to having 2-3 robots enter from the side, turn 90 degrees and awkwardly slide into place.

  • We expect cargo to make its way over into the hangars and it will be greatly beneficial to be able to enter from either side of the hangar.

  • We expect defense to be strong against robots taller than the low rung, which enter a hangar and can get trapped in an artificial chokepoint with a defender between the hangar trusses.

  • It's only a few inches of height sacrifice.

Active Cargo Settling: Should Have -> Could Have
  • From our own early prototyping as well as other Open Alliance teams, we have found that intakes which settle balls bouncing more than a couple inches off the ground are challenging, to say the least. We feel it may currently be better to focus on having an excellent ball-on-ground intake.

  • We don't expect bouncing cargo to be as much of a plaguing issue as we did on kickoff weekend. Pre-Champs, we expect to be able to thrive without this capability. We may assess pursuing this later in the season depending on how early events look.

Score cargo in lower hub from tarmac: Rephrased to "Score cargo from one robot's length away from fender"
  • This was more or less the intended understanding from kickoff, just a clarification.

  • Goal is to be able to “score over defender,” similar to 254 2014.

  • Further back on the tarmac seems unnecessary and unreliable/unrealistic.

Score cargo from one robot's length away from fender: Must Have -> Should Have

Having more than one low-scoring position may hinder other aspects of the design, and we don't believe it is absolutely necessary for a good low-scoring robot.

High Bar: Could Have -> Should Have
  • From early prototypes from other Open Alliance teams, we expect this challenge to be slightly easier than some of us originally foresaw.

  • We expect a large portion of teams to have a mid climb, and a high climb + a mid climb = the bonus RP. In order to retain as much control over our destiny in the rankings as possible, we would prefer not to require 3 robots to climb for the ranking point.

Umbrella: Won’t Have -> Could Have

Umbrella refers to a mechanism that can selectively prevent cargo from entering an open hopper. If the goal is to allow bouncing Cargo to fall into the robot, we would potentially want to be able to prevent undesired Cargo from doing the same.

Operation Double Trouble: Unlisted -> Could Have


Wearing a Mask: Must Have -> Must Have


5 Cargo Auto: Unlisted -> Unlisted


These have likely settled into place for the most part as we are quickly approaching the main archetype discussions of the robot. We will have a post prior to our Week 1 event to share our strategies going into it.

Scouting

We have been working hard to get our scouting app off the ground quickly, as we share a large amount of student resources with the programming subteam. To start off, we compiled a list of the data points that we want to be able to collect, both as match data and as pit scouting data. This can be found below.

Match Data
  • Alliance color
  • Start position (tap for location)
  • Taxi
  • Ratings (intake, driver, defense, avoid defense)
  • Shooting position (tap for location)
  • Upper success/failure
  • Lower success/failure
  • Climb level (attempted, success)
  • Climb time for each level (collected in background)
  • Penalties (#)
Pit Data
  • Team #
  • Picture
  • Climb level
  • Height
  • Dimensions
  • Multiple drive teams?
  • Shoot high? low? both?
  • Auto mode
  • Start preference
  • Shoot from fender? tarmac? launchpad?
  • Drivetrain type
  • Holding capacity

One of the more unique aspects of our data collection is that we are recording the "precise" location that robots scored from, as well as their accuracy from that position. We are doing this by having scouts tap the field image on their tablet screen at the place where they believe the robot scored from. A pop-up then appears near where they pressed, prompting them to select how many cargo were scored and missed for each hub height. In the background, we are collecting the timestamp of each scoring action, as well as climb timings. We are unsure how we will use this data or how accurate it will be, but we are working on the principle of collecting all the data we have access to that doesn't impede the collection of other data, and selecting what we want to use later down the line.

We have spent more time than any of us would like figuring out the most effective way to display the field in the app.

Getting the dimensions of field elements and scaling them into our app was annoying.

We want this degree of precision so when we collect scoring location x,y data, we can overlay it onto an image of a field and have it be accurate-ish.

Pit scouting portion of the app, in the browser and on the Kindle.

App Field Status as of Saturday. You can probably see the slight imperfections in the Hub deflectors and tarmac lines. Trust me, it bothers us as much as it bothers you.

We have started looking at some ways we can use the substantial amount of data we are collecting. My 3 personal favorites are below:

Density/Heat Map of scoring locations/accuracy by team/event/driver station

Since we have access to every team’s scoring location as well as their accuracy from that position and their driver station, we can model this to our heart’s content. Below is an example of a model that looks kinda like how we imagine this could look.

Accuracy of event or team over distance from the goal

This is certainly less useful for low goal scorers, but as more robots try to score from range, we can model their accuracy from range and potentially know where they are the most effective, and take appropriate in match actions.

New York Times Spiral Graph

If you want to take a look at what we have going on code-wise, our repo can be found here. The 2022 specific code can be found in the Ayush branch, found here.

As always, any questions, comments, criticism, or suggestions are highly appreciated.

-Connor


16 Likes

Software Update #2: “One (vision) ring to rule them all”

Last year, we started our vision journey by auto-aiming just with the horizontal angle from our Limelight. However, we quickly realized the utility of calculating the robot’s position based on a vision target instead. By integrating that data with regular wheel odometry, we could auto-aim before the target was in sight, calculate the distance for the shot, and ensure our auto routines ended up at the correct locations (regardless of where the robot was placed on the field).

Our main objective over the past week was to create a similar system for the 2022 vision target around the hub. This meant both calculating the position of the robot relative to the target and smoothly integrating that position information with regular odometry.

While both the Limelight and PhotonVision now support target grouping, we wanted to fully utilize the data available to us by tracking each piece of tape around the ring individually (more on the benefits of that later). Currently, only PhotonVision supports tracking multiple targets simultaneously; for our testing, we installed PhotonVision on the Limelight for our 2020 robot.

The Pipeline

This video shows our full vision/odometry pipeline in action. Each step is explained in detail below.

  1. PhotonVision runs the HSV thresholding and finds contours for each piece of tape. Using the same camera mount as last year, we found that the target remains visible from ~4ft in front of the ring to the back of the field (we’ll adjust the exact mount for the competition robot of course). In most of that range, 4 pieces of tape are visible. We’ve never seen >5 tracked consistently, and less optimal spots will produce just 2-3. Currently, PhotonVision is running at 960x720; the pieces of tape can be very small on the edges, so every pixel helps. The robot code reads the corners of each contour, which are seen on the left of the video.
  2. Using the coordinates in the image along with the camera angle and target heights, the code calculates a top-down translation from the camera to each corner. This requires separating the top and bottom corners and calculating each set with the appropriate heights. These translations are plotted in the middle section of the video.
  3. Based on the known radius of the vision target, the code fits a circle to the calculated points. This is where we see the key benefit of plotting 12+ points rather than just 2 (as we did last year). When the robot is stationary, the position of the circle stays within a range of just 0.2-0.5 inches frame-to-frame. Last year, we could easily see a range of >3 inches. While the translations to each individual corner are still noisy, the circle fit is able to average all of that out and stay in almost exactly the same location. It's also able to continue solving even when just two pieces of tape are visible on the side of the frame; so long as the corners fall somewhere along the circumference of the vision ring, the circle fit will be reasonably accurate. (A rough sketch of one approach to this kind of fit appears after this list.)
  4. Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose. This “pure vision” pose is visible as a translucent robot to the right of the video. Based on measurements of the real robot, this pose is usually within ~2 inches of the correct position. For our purposes, this is more than enough precision.
  5. Finally, the vision pose needs to be combined with regular wheel odometry. We started by utilizing the DifferentialDrivePoseEstimator class, which makes use of a Kalman filter. However, we found that adding a vision measurement usually took ~10ms, which was impractical during a 20ms loop cycle. Instead, we put together a simpler system; each frame, the current pose and vision pose are combined with a weighted average (~4% vision). This means that after one second of vision data, the pose is composed of 85% vision. It also uses the current angular velocity to adjust this gain — the data tends to be less reliable when the robot is moving. This system smoothly brings the current pose closer to the vision pose, making it practical for use with motion profiling. The final combined pose is shown as the solid robot to the right of the video.
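The circle fit in step 3 can be done several ways; here is a hedged sketch of one simple option, a fixed-point iteration on the known-radius least-squares problem. This is illustrative only and not necessarily the exact algorithm in our circle fitting class linked below:

```java
// Sketch: fit a circle of known radius to a set of corner translations.
// Each point "votes" for a center exactly one radius away from it, along the
// direction toward the current center estimate; repeating converges toward a
// least-squares center. Seed choice matters in practice (e.g. previous frame).
import edu.wpi.first.math.geometry.Translation2d;
import java.util.List;

public class CircleFitSketch {
  public static Translation2d fit(double radius, List<Translation2d> points) {
    // Seed with the centroid of the points.
    double cx = points.stream().mapToDouble(Translation2d::getX).average().orElse(0.0);
    double cy = points.stream().mapToDouble(Translation2d::getY).average().orElse(0.0);

    for (int iteration = 0; iteration < 50; iteration++) {
      double sumX = 0.0;
      double sumY = 0.0;
      for (Translation2d point : points) {
        double dx = cx - point.getX();
        double dy = cy - point.getY();
        double distance = Math.hypot(dx, dy);
        if (distance < 1e-9) {
          // Degenerate case: keep the current estimate as this point's vote.
          sumX += cx;
          sumY += cy;
        } else {
          sumX += point.getX() + dx / distance * radius;
          sumY += point.getY() + dy / distance * radius;
        }
      }
      cx = sumX / points.size();
      cy = sumY / points.size();
    }
    return new Translation2d(cx, cy);
  }
}
```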

Where applicable, these steps also handle latency compensation. New frames are detected using an NT listener, then timestamped (using the arrival time, latency provided by PhotonVision, and a constant offset to account for network delay). This timestamp is used to retrieve the correct gyro angle and run the averaging filter. Note that the visualizations shown here don’t take this latency into account, so the vision data appears to lag behind the final pose.

More Visualizations

This video shows the real robot location next to the calculated vision pose. The pose shown on the laptop is based on the camera and the gyro, but ignores the wheel encoders. This video was recorded with an older version of the vision algorithm, so the final result is extra noisy.

This was a test of using vision data during a motion profile - it stops quickly to intentionally offset the odometry. The first run was performed with the vision LEDs lit, meaning the robot can correct its pose and return to the starting location. The second run was performed with the LEDs off, meaning the robot couldn’t correct its pose and so it returned to the incorrect location.

This graph shows the x and y positions from pure vision along with the final integrated pose.

  • Vision X = purple
  • Vision Y = blue
  • Final X = yellow
  • Final Y = orange

The final pose moves smoothly but stays in sync with the vision pose. The averaging algorithm is also able to reject noisy data when the robot moves quickly (e.g. 26-28s).
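For reference, here is a hedged sketch of that per-frame weighted average (the 4% base gain comes from step 5 above; the angular-velocity scaling and all names are illustrative):

```java
// Rough sketch of blending the vision pose into the odometry pose each frame.
// The gain and its scaling with angular velocity are placeholders.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class VisionPoseBlendSketch {
  private static final double BASE_GAIN = 0.04; // ~4% vision per frame

  /**
   * Nudges the current odometry pose toward the vision-derived pose. The gain
   * shrinks as angular velocity rises, since vision is less reliable while turning.
   */
  public static Pose2d blend(Pose2d currentPose, Pose2d visionPose, double angularVelocityRadPerSec) {
    double gain = BASE_GAIN / (1.0 + Math.abs(angularVelocityRadPerSec)); // illustrative scaling
    Translation2d translation =
        currentPose.getTranslation().times(1.0 - gain)
            .plus(visionPose.getTranslation().times(gain));
    // Rotation stays gyro-based; vision only corrects the translation here.
    return new Pose2d(translation, currentPose.getRotation());
  }
}
```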

Logging

This project has been a perfect opportunity to use our new logging framework.

For example, at one point during the week, I found that the code was only using one corner of each piece of tape and solving it four times (twice using the height of the bottom of the tape and twice using the top). I had grabbed a log file earlier to record a video, back when the robot was running code without any uncommitted changes. On a whim, I checked out that original commit and replayed the log in a simulator with some breakpoints during the vision calculation. I noticed that the calculated translations looked odd, and was able to track down the issue. After fixing it locally, I could replay the same log and see that the new odometry positions were shifted by a few inches throughout the test.

Logging was also a useful resource when putting together the pipeline visualization in this post. We had a log file from a previous test, but it was a) based on an older version of the algorithm and b) didn’t log all of the necessary data (like the individual corner translations). However, we could replay the log with up-to-date code and record all of the extra data we needed. Advantage Scope then converted the data to a CSV file that Python could read and use to generate the video.

Code Links

  • NT listener to read PhotonVision data (here)
  • Calculation of camera-to-target translation (here)
  • Circle fitting class (here)
  • Method to combine vision and wheel odometry (here)

As always, we’re happy to answer any questions.

39 Likes

where’s our gif

9 Likes

Jonah, 6328 has some terrifyingly impressive programming, and I love reading about this stuff. I’ve been doing programming in FRC for 10 years now and you have way better ideas (and execution) than I do. (And definitely better ideas than @came20 )

It was great meeting you at Chezy. Keep it up, dude.

19 Likes

My sincerest apologies.


15 Likes

Thank you to @Tyler_Olds and FUN for hosting us on the Open Alliance show yesterday. The students did a great job talking about where we are in our build season and what goals are next.

4 Likes

Since I see you have been prototyping catapults, I thought I would mention the white paper from Team 230 back in 2016 about electric catapults. We used the information provided to create a prototype with a ~22" arm driven by a 20:1 NEO that launches a ball nicely and makes for a relatively light mechanism. Also, shout out to the Gaelhawks for sharing the excellent work they did.

Link to white paper: https://www.chiefdelphi.com/uploads/default/original/3X/c/4/c4fd84c75983c0cd32572ca0f4ea36f3d4f962e5.pdf

edit: I had to add a gif of the prototype to fit in here

18 Likes

Does the arm hit the stop on the upright or do you use the motor to stop at a certain encoder tick?

This uses the soft limit feature and brake mode to stop at a specific encoder position. Repeatability was reasonably good but would likely be improved by smarter software setup or physical stops.

3 Likes

How are you getting Pose from the target? I understand how you can get distance, but aren’t there multiple places on the field where the target looks identical?

5 Likes

Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose

The gyro is necessary here to get a field pose from vision. Think of polar coordinates with the gyro as theta and the calculated distance as r, with the origin as the center of the hub in field coordinates.
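As a tiny illustration of that idea (the hub coordinates and names below are placeholders, and this ignores the camera-to-robot offset):

```java
// Sketch: recover a field pose from the calculated distance to the hub plus the
// gyro heading, assuming the camera is pointed at the hub center.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class PolarPoseSketch {
  // Placeholder hub center position in field coordinates (meters).
  private static final Translation2d HUB_CENTER = new Translation2d(8.23, 4.11);

  /** Field pose given the distance to the hub center and the gyro heading. */
  public static Pose2d robotPose(double distanceMeters, Rotation2d gyroHeading) {
    // The robot sits one "radius" (the measured distance) away from the hub,
    // in the direction opposite to where it is facing.
    Rotation2d hubToRobot = gyroHeading.plus(Rotation2d.fromDegrees(180.0));
    Translation2d robotTranslation = HUB_CENTER.plus(new Translation2d(distanceMeters, hubToRobot));
    return new Pose2d(robotTranslation, gyroHeading);
  }
}
```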

1 Like

Ahh, I don’t trust the gyro angle enough. I was hoping to use this to account for the drift in the gyro.

Yea, that's the problem this year with the vision target. Your "global" vision measurements are affected by your local measurement drift. I'm hoping the new Pigeon 2/navX only drifts 1 or 2 degrees during a match.

2 Likes