FRC 6328 Mechanical Advantage 2020-2021 Build Thread

This is a different type of virus…


I assume this is stage 1, stage 2, and stage 3


I love the League against Science and Technology!

"Deploying 5 lines of code into the shared cache to achieve stage 1"
"Continuing to deploy code into the infected cores of the CPU"

If I had one suggestion to give, it would be to rename some of these actions. That’s a mouthful! You need something short and sweet enough that strategy teams, drivers, coaches etc can make themselves clear quickly. Maybe use acronyms for LOC (lines of code) and ICs/UCs (infected cores)?


Thank you! I am going off of what the game design team is saying, and I would assume that's right, but I will get back to you ASAP after discussing it with them.


Did you guys find that the accelerator wheels were necessary? My team's new, and we're trying to talk ourselves out of the additional motors…


You will gain more from simplicity than from accelerators.

841 didn’t need them anywhere up to the front of the wheel - if you’re shooting from beyond it, YMMV.

Accelerators will let your overall volley get a couple tenths faster - 95% of teams have better things to do than chase tenths of seconds in individual game actions.


Hey everyone,

Team 6328’s business team has been staying active these past few months, and we wanted to share some updates about our outreach and awards submissions. With COVID-19 there were a number of limits on how we could go about outreach, but our switch to virtual formats has still allowed us to reach different audiences and log over 1,660 outreach hours.


Since kickoff, both students and mentors on our team have been helping mentor our sister rookie Team 8604 Alpha Centauri. Some of our team members are regularly attending 8604’s different meetings - such as Innovation Challenge/Girl Up Advocacy, Game Design Challenge, programming, CAD and assembly, social hour, and all-team meetings - to join in on discussions and share our ideas, suggestions, equipment, and experiences.

6328 is excited to announce that we are partnering with 8604 to host an Innovation Challenge Practice Interviews Day for teams participating in the challenge on Sunday, March 21. (Note: Our team has opted not to participate in this specific challenge.) There will be 10-12 adult judges paired up to listen to teams’ 15-minute presentation/Q&A and then together provide written feedback. Judges are provided with a rubric to evaluate teams’ performances, and the event is completely free. Registration can be completed at . Here is the flyer:

This coming week on Wednesday March 10 at 7 pm, 6328 and 8604 are also hosting a free and virtual college counseling seminar together. After a ton of positive feedback last year, Nancy Federspiel of College Consulting Services is returning to discuss the college admissions process, including changes over the past year. We have several teams participating from New England and New York. There are still some slots available for anyone interested, but attendance will be capped so sign up as soon as possible! Register at College Admissions Information Webinar – Littleton Robotics . Here is the flyer:

Back in mid-December, we gave two livestream presentations as part of the 24 Hours of STEM initiative, organized by teams across the globe. Our software and programming team members presented “Software Training with Zumos” to talk about the in-house, summer-to-fall training program for students to build or strengthen their foundation in software. Our outreach team members presented “Moving from Rookie to Veteran” to talk about team sustainability and common issues that can come with the different life stages of an FRC team. The presentations were held in a Zoom meeting that was broadcast on YouTube. Links:
Software Training with Zumos:
Moving from Rookie to Veteran:

Bolton FLL has been running 3 teams since September with 19 kids, and will be competing in the upcoming Qualifier in a few weeks. In advance of the competition, Bolton FLL is hosting a small scrimmage. Littleton’s FLL Challenge team of 6 students and 1 mentor has been meeting since late October, with team members ranging from 5th grade to 8th grade. Due to COVID, we paused the in-person meetings in November but plan to start back up soon. In the meantime, weekly 30-minute online project meetings have kept the students busy, totaling over 80 hours of meetings and preparation. 50% of the team is new to FLL, showing the continued expansion of our family tree.

Back in November, one of our founding mentors gave a presentation at the Beach Blitz virtual event, “Helping Girls Gain Confidence in STEM.” At the offseason event, many workshops/panels were streamed on Twitch and there were technical competitions including the Minibot Competition and Scouting Hackathon.
Girls in STEM presentation:
Beach Blitz event:

In the past several weeks we have set up Zoom meetings with Team 811 The Cardinals to discuss outreach, marketing, social media, and how we’ve been working during COVID. Glad to be of help!


For our Chairman’s Award submission, we decided to continue the theme of family through describing our team as one big family tree with several branches growing in different directions. We separated the essay into three parts to talk about the different aspects of our team: the roots, branches, and trunk. Under each part, we covered different topics including our background, team structure, sponsorship, outreach, sustainability, and how the pandemic has impacted us. The pdfs for the submission will be uploaded soon.

Please feel free to ask any questions and thank you for reading!


I don’t think they’re absolutely necessary; it will depend on your hopper and ball path to the shooter. You can check out our initial prototypes without an accelerator wheel earlier in this thread. These never got high-fidelity enough to be a fair comparison to our final shooter, but they can show you our evolution.

We decided to go with the accelerators on separate motors for controllability, so that the actual flywheel only had contact with one ball at a time. This made it easier to maintain a constant RPM for shooter accuracy. The accelerators were a lot less sensitive to variations, so those could probably be combined into the hopper if that works for your design. In our case, with the V hopper we needed some kind of additional motor to feed the balls up to our shooter, so having those wheels serve as accelerators made sense. If you have any more questions feel free to reach out and we can walk through more of our thought process and findings.


Here are copies of our Chairman’s submission essay and executive summary questions.

2021 Chairman’s Essay.pdf (59.3 KB)
2021 Chairman’s Executive Summary Questions.pdf (62.5 KB)

(edited because I originally forgot to attach the second file. Oops!)


Hi, I had a few questions about your code. I’ve been banging my head against trying to implement it. I’ve gotten it to the point where it will run the paths, but the distances and angles are always off and never consistent.

My questions are:
How were kS, kV, and kA calculated? They are commented as volts, volt seconds per meter, and volt seconds squared per meter.
How did you guys calculate your velocities and accelerations? I see that some accelerations and velocities are the same, but for our robot the acceleration is greater than our velocity.
How did you guys calculate the PIDF for this? And why is D such an extremely high value?
How do the Ramsete values B and Zeta affect how the robot interacts with the path?

Here is a link to my github repo

Thank you for your time,
Connor Shelton
Team 68 Programming Subteam leader


Not 6328 but I know these will be of great help:

Introduction to Robot Characterization — FIRST Robotics Competition documentation

Trajectory Tutorial Overview — FIRST Robotics Competition documentation

Trajectory Generation and Following with WPILib — FIRST Robotics Competition documentation


If you haven’t taken a look already, the resources linked above will all be very helpful in answering your questions. I’ll try to address them more specifically as best I can.

kS, kV, and kA are all retrieved from the robot characterization tool (see above). It’s definitely important to be careful about the units here. The variables in our code are commented as using meters, but this is because updateConstants converts them from inches to meters. I’d suggest either setting them as inches in updateConstants and letting it convert or removing the method altogether and doing everything in meters. There is an option in the characterization tool to select your units. Also keep in mind that these values are just used for the voltage constraint, which keeps acceleration within the limits of your robot’s electrical system. In general, they shouldn’t have a major effect on the quality of your tracking. The characterization tool will also give you a value for your empirical track width (which is often slightly greater than the true track width because of wheel scrub).
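To make the unit caveat concrete, here is a minimal sketch (not 6328's actual updateConstants) of how the characterization gains convert between inch and meter units. kS is a pure voltage and doesn't change; kV and kA scale by the inverse of the length conversion, because "per inch" becomes "per meter". All names here are illustrative.

```java
/**
 * Sketch of converting characterization gains from inch units to
 * meter units. Not the team's code; names are made up.
 */
public class FeedforwardUnits {
    static final double METERS_PER_INCH = 0.0254;

    // volt seconds per inch -> volt seconds per meter
    static double kVToMeters(double kVInches) {
        return kVInches / METERS_PER_INCH;
    }

    // volt seconds^2 per inch -> volt seconds^2 per meter
    static double kAToMeters(double kAInches) {
        return kAInches / METERS_PER_INCH;
    }

    public static void main(String[] args) {
        double kV = 0.0722; // V*s/in, the value quoted later in this post
        System.out.println(kVToMeters(kV)); // same gain expressed in V*s/m
    }
}
```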

You can determine the robot’s theoretical maximum velocity using kV. The profile’s maximum voltage is limited to 10 volts, so dividing 10 by kV in volt seconds per inch will give you inches per second (or meters/second when working in meters). For example, our kV is 0.0722 so 10/0.0722 is ~138 inches per second. For acceleration, using the same value is a good starting point but certainly not required. Driving the robot around manually, you probably can get a good sense of what is or is not reasonable (for example, how many seconds does it take to reach top speed?). When in doubt, I’d suggest starting with low velocities and accelerations then working your way up until the profile can’t track accurately anymore. Keep in mind that the voltage constraint will also act as an upper limit on your acceleration.
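The back-of-envelope math above can be written out as a tiny sketch. This is just the 10/kV division from the post, wrapped in a method; the class and method names are made up for illustration.

```java
/**
 * Theoretical top speed estimate from kV, per the post: with the
 * profile's voltage capped at 10 V, max speed is roughly 10 / kV.
 * Units follow kV (in/s if kV is in V*s/in).
 */
public class MaxSpeedEstimate {
    static double theoreticalMaxSpeed(double maxVolts, double kV) {
        return maxVolts / kV;
    }

    public static void main(String[] args) {
        double kV = 0.0722; // V*s/in, value quoted in the post
        System.out.println(theoreticalMaxSpeed(10.0, kV)); // ~138 in/s
    }
}
```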

As for the PID gains, I’m assuming you’re referring to the velocity PID on the drive? Those constants are here in our code. We tune these by running at a target velocity for short distances and logging the actual velocity via NetworkTables. Using Shuffleboard, we can graph that value to check the response curve. We wrote this command to help with that process, which uses our TunableNumber class to update gains based on values sent over NetworkTables.

This page provides a good explanation of the gains for the Ramsete controller. However, these values are robot-agnostic so they shouldn’t require any tuning.

I don’t see any immediate issues with the code you’re running, so my best guess is that the issues you’re seeing might be caused by a poorly tuned velocity PID. This page also provides some useful troubleshooting steps. If the velocity PID looks OK, the next thing I’d suggest is logging odometry x and y over NetworkTables to ensure that it’s tracking the position accurately (that doesn’t have to be with a profile, you can just drive in tele-op or push the robot by hand).

We’re happy to answer any other questions you have.


We’ve talked a little bit about the work we’ve been doing for the Galactic Search challenge, and we’d like to share some updates on our progress. The general process we’ve defined for completing the challenge is this: we first select a path and place the robot in its starting position. While the robot is disabled, the operator pushes a button to run the vision pipeline. The robot determines which path to run based on the visible power cells and puts the selection on the dashboard. The operator then confirms that the path is correct before enabling (so that we don’t crash into a wall). Since there are no rules requiring that the selection take place after enabling, we thought that this was a safer option than running the pipeline automatically. The profiled path for each option is defined ahead of time from a known starting position, which means we can manually optimize each one.

We also considered whether continuously tracking the power cells would be a better solution. However, we decided against this for two reasons:

  1. Continuously tracking power cells is a much more complicated vision problem that would likely require use of our Limelight. Making a single selection can be done directly on the RIO with a USB camera (see below for details). Tracking also becomes difficult/impossible when going around tight turns, where the next power cell may not be visible.
  2. Motion profiles give us much more control over the robot’s path through the courses, meaning we can easily test & optimize with predictable results each time.

The Paths

These are the four paths as they are currently defined:





These trajectories are defined using cubic splines, which means the intermediate waypoints don’t include headings. This is different from the AutoNav courses, which use quintic splines. For this challenge, we don’t require as tight control over the robot’s path and so a cubic spline is more effective.

The starting positions for each trajectory are placed as far forward as possible such that the bumpers break the plane of the starting area. Given the locations of the first power cells, we found that starting the red paths at an angle was beneficial (just make sure to reset the odometry’s starting position accurately :wink:).

You may notice that our trajectories don’t perfectly match up with the power cells. This is for two reasons:

  1. The trajectory defines the location of the robot’s center, but we need the intake (which deploys in front of the bumper) to contact the power cells. Often, this means shifting our waypoints 6-12 inches. For example, the path for the second power cell in B/Blue tracks to the right of the power cell such that the intake grabs it in the center during the turn.
  2. Our profiles don’t always track perfectly toward the end of the profile, meaning we can end up contacting power cells on the sides of our intake. Shifting the trajectory to compensate is a quick fix for those types of problems.

The other main issue we had to contend with is speed. The robot’s top speed is ~130 in/s, but our current intake only works effectively up to ~90 in/s. Since most power cells are collected while turning, our centripetal velocity constraint usually slows down the robot enough that this isn’t an issue. However, we needed to add custom velocity constraints for some straight sections (like the first power cell of b/blue). There were also instances where the intake contacted the power cell before the robot began turning enough to slow down. Through testing it was fairly easy to check where it needed a little extra help.
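The region-based slowdown described above can be illustrated with a toy version of the idea; it is similar in spirit to combining WPILib's RectangularRegionConstraint with a MaxVelocityConstraint, which is one way to implement "slow down for this power cell" sections. This is a pure-Java sketch, not the team's code, and all the numbers are hypothetical.

```java
/**
 * Toy illustration of a region-based velocity constraint: inside a
 * pickup zone, cap the speed at what the intake can handle; elsewhere,
 * allow the robot's full top speed.
 */
public class RegionSpeedCap {
    // Returns the allowed speed at a given x position along the path.
    static double allowedSpeed(double x, double robotMax, double intakeMax,
                               double regionStart, double regionEnd) {
        boolean inPickupZone = x >= regionStart && x <= regionEnd;
        return inPickupZone ? Math.min(robotMax, intakeMax) : robotMax;
    }

    public static void main(String[] args) {
        // Cap at 90 in/s between x = 60 and x = 120 (made-up numbers)
        System.out.println(allowedSpeed(90.0, 130.0, 90.0, 60.0, 120.0)); // 90.0
        System.out.println(allowedSpeed(30.0, 130.0, 90.0, 60.0, 120.0)); // 130.0
    }
}
```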

Here’s an example of the robot running the B/Red path:

The code for each path is available here:


Running the vision pipeline on only single frames vastly simplifies our setup, as opposed to continuously tracking power cells. Rather than using any separate hardware, we can plug our existing driver cam directly into the RIO for the processing. The filtering for this is simple enough that setting it up using our Limelight was overkill (and probably would have ended up being more complicated anyway). Our Limelight is also angled specifically to look for the target, which doesn’t work when finding power cells on the ground. When the operator pushes the button to run the pipeline, the camera captures an image like this:


Despite this being a single frame, we quickly realized that scaling down the full 1920x1080 resolution was necessary to allow the RIO to process it. Our pipeline runs at 640x360, which is plenty to identify the power cells. Using GRIP, we put together a simple HSV filter that processes the above image into this:


To determine the path, we need to check for the existence of power cells at known positions. Since we don’t need to locate them at arbitrary positions, finding contours is unnecessary. That also means tuning the HSV filter carefully is less critical than with traditional target-tracking (for example, the noise on the left from a nearby power cube has no impact on the logic).

Using the filtered images, our search logic scans rectangular regions for white pixels and calculates if they make up more than ~5-10% of the area. This allows us to distinguish all four paths very reliably. The rectangular scan areas are outlined below, with red & green indicating whether a power cell would be found.
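The white-pixel-fraction check can be sketched in a few lines. This is an illustrative re-implementation of the searchArea idea, not the team's actual method (which operates on OpenCV Mats); a boolean[][] stands in for the thresholded image to keep the logic visible.

```java
/**
 * Illustrative version of the searchArea(...) check: count white pixels
 * in a rectangular region of a binary (HSV-thresholded) image and
 * compare the fraction against a minimum threshold.
 */
public class SearchAreaSketch {
    // image[y][x] is true where the HSV filter marked a pixel white.
    static boolean searchArea(boolean[][] image, double minFraction,
                              int x0, int y0, int x1, int y1) {
        int white = 0;
        int total = (x1 - x0) * (y1 - y0);
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                if (image[y][x]) white++;
            }
        }
        return (double) white / total > minFraction;
    }

    public static void main(String[] args) {
        boolean[][] frame = new boolean[360][640]; // 640x360, as in the post
        // Paint a fake power cell blob near (330, 100)
        for (int y = 95; y < 120; y++)
            for (int x = 320; x < 350; x++)
                frame[y][x] = true;
        System.out.println(searchArea(frame, 0.1, 315, 90, 355, 125)); // true
        System.out.println(searchArea(frame, 0.1, 100, 90, 140, 125)); // false
    }
}
```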



It starts by searching the large area closest to the center, which determines whether the path is red or blue. The second search area shifts slightly based on that result such that a power cell will only appear on one of the two possible paths. This result is then stored and set in NetworkTables so that the operator knows whether to enable.

Here’s our code for the vision selection along with the GRIP pipeline.

The “updateVision” method handles the vision pipeline, then the command is scheduled during autonomous. We update the vision pipeline when a button is pressed, though it could also be set to run periodically while disabled.

This command can be easily customized for use with a different camera. Depending on where power cells appear in the image, the selection sequence seen below can be restructured to determine the correct path. Similarly, any four commands will be accepted for the four paths.

if (searchArea(hsvThreshold, 0.1, 315, 90, 355, 125)) { // Red path
  if (searchArea(hsvThreshold, 0.05, 290, 65, 315, 85)) {
    path = GalacticSearchPath.A_RED;
    SmartDashboard.putString("Galactic Search Path", "A/Red");
  } else {
    path = GalacticSearchPath.B_RED;
    SmartDashboard.putString("Galactic Search Path", "B/Red");
  }
} else { // Blue path
  if (searchArea(hsvThreshold, 0.05, 255, 65, 280, 85)) {
    path = GalacticSearchPath.B_BLUE;
    SmartDashboard.putString("Galactic Search Path", "B/Blue");
  } else {
    path = GalacticSearchPath.A_BLUE;
    SmartDashboard.putString("Galactic Search Path", "A/Blue");
  }
}

Overall, we’ve found this setup with vision + profiles to work quite reliably. We’re happy to answer any questions.


Today we’ll talk about a software structure that we use to support and easily switch between multiple forms of driver and operator controls.


For a while, we had a single set of operator controls. In our rookie year we built a driver station on which were mounted two Logitech joysticks for the driver, plus an operator button-board with a couple of E-Stop Robotics CCI interface boards for subsystem controls. (It also had an Arduino to drive status LEDs, but for this post we’re going to focus on input controls only.) Simple enough. After a few years, though, things began to get complicated…

  • We found that for demos, dragging the whole, large driver station assembly around was not ideal. A common refrain was, “Can’t we just bring a laptop?” :man_shrugging:
  • At demos, we often wished we could have a control arrangement for one person to drive all the main subsystem controls “well enough”, rather than needing two people to operate all the controls.
  • We developed another, more advanced button-board operator interface, while the old one remained in use as well.
  • Some of our drivers/operators expressed a preference for handheld controllers in place of the dual joysticks, or to use handheld controllers in place of the button board.
  • For offseason competitions, we needed to be able to run a 2nd robot, sometimes with different controls than it usually used. (For example, the second robot might need to use our “other” driver station, or just a laptop and controllers.)

Trying to support these sorts of variations in the software posed quite a challenge, since doing so would mean fixed mappings no longer worked. And requiring one of our programmers to go adjust the code every time there was a need to run with a different controller was not practical. We needed to be able to “plug and go” so that in situations like driver tryouts where we gave the students an opportunity to try any of the control schemes, we could swap between them quickly.

A better solution was in order.

Dividing Up The Controls

Our current controls can be logically divided into 3 groups, and some years, there may be a 4th. Currently we have:

  • Driver controls: all the things that the driver is responsible for, including:

    • Controlling the drivetrain
    • Aiming (auto-aim button)
    • Shooter ball feed
  • Driver “overrides” : controls we need to support but that aren’t normally used on the competition field. In our original driver station, these were on toggle switches with flip-up “fighter pilot” covers, located on the operator’s console. These include:

    • Drive Disable - for locking out the drivetrain for occasions where the robot needs to be enabled but can’t move for safety reasons. We use this in the pit a lot.
    • Open Loop Drive - our normal drive is closed-loop velocity control for precision. This override runs the drivetrain open-loop instead. It’s needed for testing when the robot is on blocks on the cart, or if there’s an encoder issue that is creating a problem. (This saved us in our rookie year when we had to play a match without working encoders after a marathon emergency drivetrain repair.)
    • Limelight LED disable - for rules compliance and annoyance reduction.
  • Operator controls: all the things that the operator is responsible for:

    • Intake extend/retract and rollers on/off
    • Flywheel control
    • Climber controls - deploying and climbing

The specific controls needed vary year to year, but this sort of grouping is typical. Some robots might have operator overrides too (for example, a limit override on an elevator) but our current one does not.

Basically, supporting flexible controls comes down to mapping these control groups, and the specific elements for each, to available physical inputs from the connected controllers. Here are some arrangements we support. If a controller is in a group, that means at least one input in that group comes from that controller.

Each arrangement requires that the specific controls within the grouping are mapped to a particular controller object and physical input (button, joystick, slider, etc.) so that the robot code can use it. This requires a fair bit of flexibility, since:

  • The control inputs come from anywhere between one and four separate controllers
  • Sometimes control groups are spread across multiple controllers (as with dual joysticks)
  • Sometimes control groups are overlapped onto one controller
  • If a control isn’t available, it should be quietly unavailable without requiring special effort (or causing the code to crash)

Mapping the Concept to Software

So how do we take our concept of flexible, decoupled controls inputs and turn that into working software?


Representing the above in software requires some abstraction, which we provide through Java Interfaces. Each controls grouping is represented as an interface that specifies methods for getting the button for a specific function, reading specific analog controller values, and so on. Even better, Java allows us to have default methods in the interface so that if a particular implementation doesn’t support one of them, the default method provides a (dummy) version of it. Here is an excerpt from our Operator OI interface:

public interface IOperatorOI {

    static final Trigger dummyTrigger = new Trigger();

    public Trigger getShooterFlywheelRunButton();

    public Trigger getShooterFlywheelStopButton();

    public default Trigger getClimbEnableSwitch() {
        return dummyTrigger;
    }

    public default double getClimbStickY() {
        return 0;
    }

    public default double getClimbStickX() {
        return 0;
    }
}
Here, we see a few things:

  • Methods that we always expect to have implementations for, including the shooter-flywheel controls.
  • Methods that might not always have implementations, like getClimbEnableSwitch, which have a default implementation that just returns a dummy Trigger. This way, if we’re on a controller that doesn’t have a mapping for that function, nothing special needs to be done; the trigger object is valid, it will just never activate.
  • Interface methods that return an analog input reading. Here, the getClimbStick methods return a double value - and as before, there’s a default implementation that just returns 0.

The full interface definitions can be seen in our source code:

OI Implementations

Our code has a series of different implementations that each support one or more interfaces. Each takes one or two controller IDs and uses those to map buttons and other inputs appropriately. Depending on what controllers the main robot code detects - more on that later - it decides which implementation objects to instantiate. Some, like the “All In One”, are used alone, while others are used in combinations (for example, OIHandheldWithOverrides might be used with OIOperatorHandheld). Later, we’ll talk about how this selection logic works, but first let’s see what implementations our code currently provides:

| Implementation | Description | # of IDs | Implements Interfaces |
| --- | --- | --- | --- |
| OIArduinoConsole | Operator console based on an Arduino 32U4 that presents as 2 joysticks | 2 | IOperatorOI, IDriverOverrideOI |
| OIDualJoysticks | Dual Logitech Attack 3 joysticks | 2 | IDriverOI |
| OIeStopConsole | Operator console based on 2 eStop Robotics CCI boards | 2 | IOperatorOI, IDriverOverrideOI |
| OIHandheld | Xbox driver controller (drive only) | 1 | IDriverOI |
| OIHandheldWithOverrides (extends OIHandheld) | Xbox driver controller (drive and overrides) | 1 | IDriverOI, IDriverOverrideOI |
| OIOperatorHandheld | Xbox operator controller | 1 | IOperatorOI |
| OIHandheldAllInOne (extends OIHandheldWithOverrides) | Xbox driver controller (all functions) | 1 | IDriverOI, IDriverOverrideOI, IOperatorOI |

Let’s look at part of how one of these is constructed:

public class OIeStopConsole implements IDriverOverrideOI, IOperatorOI {
  private Joystick oiController1;
  private Joystick oiController2;

  private Button openLoopDrive;
  private Button driveDisableSwitch;
  private Button limelightLEDDisableSwitch;

  private Button shooterFlywheelRunButton;
  private Button shooterFlywheelStopButton;

  public OIeStopConsole(int firstID, int secondID) {
    oiController1 = new Joystick(firstID);
    oiController2 = new Joystick(secondID);

    openLoopDrive = new JoystickButton(oiController2, 10);
    driveDisableSwitch = new JoystickButton(oiController2, 9);
    limelightLEDDisableSwitch = new JoystickButton(oiController2, 8);

    shooterFlywheelRunButton = new JoystickButton(oiController2, 4);
    shooterFlywheelStopButton = new JoystickButton(oiController2, 3);
  }

  public Trigger getOpenLoopSwitch() {
    return openLoopDrive;
  }

  public Trigger getDriveDisableSwitch() {
    return driveDisableSwitch;
  }

  public Trigger getLimelightLEDDisableSwitch() {
    return limelightLEDDisableSwitch;
  }

  public Trigger getShooterFlywheelRunButton() {
    return shooterFlywheelRunButton;
  }

  public Trigger getShooterFlywheelStopButton() {
    return shooterFlywheelStopButton;
  }
}
We can see that this is implementing both the operator and driver-override interfaces. Its constructor is passed two controller IDs, which it uses to build two Joystick objects. It then maps specific functions to buttons on these controllers, and provides methods for accessing these Button objects. The above is just a piece of the code to serve as an example. You can look at the code in our robot/oi folder to see the complete definition of this and the other implementations.

Finding & Using Current Configuration

On the driver station, joystick devices show up on the USB tab. The code needs to evaluate which controllers are connected and decide, based on that, which OI objects to instantiate. Most importantly, all of the code accesses the OI through three interface objects:

private IDriverOI driverOI;
private IDriverOverrideOI driverOverrideOI;
private IOperatorOI operatorOI;

At the end of controller evaluation, all three of these need to have OI objects assigned. Because these are Interface objects, each can be assigned any object that implements that interface. This means that in some cases, they share the same object! For example:

  • When the eStop console is used, both the operatorOI and driverOverrideOI use the same OIeStopConsole object. The driverOI object is separate (and could be of several types).
  • When the All-In-One controller is used, all three objects are the same OIHandheldAllInOne object. This is possible since it implements all 3 interfaces!
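The sharing trick above boils down to plain Java interface polymorphism. Here is a minimal, self-contained demo of the pattern; the interface and class names are simplified stand-ins, not the actual ones from our repo.

```java
/**
 * One object implementing several interfaces can be assigned to all of
 * the corresponding interface references, just like the All-In-One
 * controller in the post.
 */
interface DriverOI { default double getThrottle() { return 0; } }
interface OperatorOI { default boolean getFlywheelButton() { return false; } }

class AllInOneController implements DriverOI, OperatorOI {
    public double getThrottle() { return 0.5; }
    public boolean getFlywheelButton() { return true; }
}

public class OIAssignmentDemo {
    public static void main(String[] args) {
        AllInOneController pad = new AllInOneController();
        DriverOI driverOI = pad;     // same object...
        OperatorOI operatorOI = pad; // ...seen through two interfaces
        System.out.println(driverOI.getThrottle());         // 0.5
        System.out.println(operatorOI.getFlywheelButton()); // true
    }
}
```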

More importantly, the code that uses these interface objects needs to know nothing about what the actual controllers are. That detail is entirely abstracted - the rest of the code simply gets and uses buttons and input values as needed, via the public methods specified in the interface.

If no mapping can be found, there is a dummy OI object that gets assigned and provides dummy methods for the things that don’t already have a default in their interfaces. This is really just a fallback to prevent crashing - it doesn’t do anything useful.

Our selection logic is in the updateOIType method, which is called at the start of teleopInit, autonomousInit, and every 10 seconds in disabledPeriodic. This approach makes sure that controllers can be switched around and the code will reconfigure itself automatically without needing a restart.

Selection Logic

The update method follows the basic sequence:

  • Loop through connected joysticks looking for controllers that represent dedicated operator interfaces. For us, this is the eStop and Arduino controllers. If either of these is found, we know that the operator controls and driver-overrides will map to these controllers.
  • Then loop through again looking for driver controllers:
    • If we see a Logitech Attack 3, store it, and wait to find the second one. When we find the second one, make an OIDualJoysticks object.
    • For Xbox controllers, store the first one found. If we find another, and don’t already have an operator interface, then the second becomes the operator interface.
  • Then sort through other possible conditions where we don’t have a complete mapping yet. Here is where we figure out that we have a handheld driver control with overrides, or an all-in-one handheld.
  • When a selection is made for operator and driver controls, a message is printed to the console to confirm what the code is using. This is very helpful when plugging and unplugging controllers!

The selection logic does, necessarily, depend on the names of the controllers and the order in which they show up in the USB tab. But this means that all students need to do is connect the right controllers and drag them to the right order (generally, driver above operator, and left joystick above right if using dual joysticks), and the code sorts out what makes sense. With some simple instructions, anyone on the team can get their desired controls working.

Pay attention to the names your devices show up as; sometimes there can be more than one representation of a specific device. An example: “Controller (XBOX 360 For Windows)” and “XBOX 360 For Windows (Controller)” are both handled in our code.
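The name-matching part of the selection sequence can be sketched on its own. The real code reads joystick names from the Driver Station; this hypothetical version takes them as a plain array so the matching logic stands alone, and the return strings are illustrative labels rather than our actual class names.

```java
/**
 * Simplified sketch of name-based controller detection, in the spirit
 * of the update logic described above.
 */
public class ControllerDetect {
    static String pickOI(String[] joystickNames) {
        int xboxCount = 0;
        int attack3Count = 0;
        for (String name : joystickNames) {
            // Dedicated operator interfaces win immediately
            if (name.contains("eStop")) return "eStopConsole";
            if (name.contains("XBOX 360")) xboxCount++;
            if (name.contains("Attack 3")) attack3Count++;
        }
        if (attack3Count >= 2) return "DualJoysticks";
        if (xboxCount >= 2) return "HandheldDriver+HandheldOperator";
        if (xboxCount == 1) return "HandheldAllInOne";
        return "Dummy"; // fallback so the code never crashes
    }

    public static void main(String[] args) {
        System.out.println(pickOI(new String[] {
            "Controller (XBOX 360 For Windows)"})); // HandheldAllInOne
    }
}
```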

Drive Modes

To further enhance the flexibility of our controls, in addition to supporting different physical controllers, we support multiple drive modes, selectable through a SmartDashboard dropdown menu. Currently, we support:

| Drive Mode | Description |
| --- | --- |
| Tank | Standard 2-joystick tank drive |
| Split Arcade | Standard split arcade (differential drive) |
| Southpaw Split Arcade | Left-handed version of the above |
| Hybrid Curvature Drive | Curvature drive with automatic low-speed turning |
| Southpaw Hybrid Curvature Drive | Left-handed version of the above |
| Manual Curvature Drive | Curvature drive with manual low-speed turning |
| Southpaw Manual Curvature Drive | Left-handed version of the above |
| Trigger Drive | Xbox trigger drive (triggers for forward/reverse, steer by joystick) |
| Trigger Hybrid Curvature | Like hybrid curvature, but trigger-operated |
| Trigger Manual Curvature | Like manual curvature, but trigger-operated |

This covers pretty much all of the common drive modes and supports both right- and left-handed drivers, whatever their preference. Note as well that the drive mode is selected entirely independently of the controllers in use. The source that supports this can be found in our file.
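To illustrate the “hybrid” idea, here’s a minimal sketch of curvature drive with automatic low-speed turning. All names and the 0.15 threshold are illustrative, not our actual tuning; WPILib’s `DifferentialDrive.curvatureDrive` provides a production version of this behavior.

```java
// Hypothetical sketch of hybrid curvature drive: below a throttle
// threshold, fall back to arcade-style turning so the robot can
// rotate in place; above it, use curvature (constant-radius) steering.
class HybridCurvature {
  static final double QUICK_TURN_THRESHOLD = 0.15; // illustrative value

  /** Returns {left, right} wheel outputs in [-1, 1]. */
  static double[] drive(double throttle, double rotation) {
    boolean quickTurn = Math.abs(throttle) < QUICK_TURN_THRESHOLD;
    double left, right;
    if (quickTurn) {
      left = throttle + rotation;  // arcade: rotation adds directly
      right = throttle - rotation;
    } else {
      double angular = Math.abs(throttle) * rotation; // curvature: turn rate scales with speed
      left = throttle + angular;
      right = throttle - angular;
    }
    // Normalize so neither side exceeds full output.
    double max = Math.max(1.0, Math.max(Math.abs(left), Math.abs(right)));
    return new double[] {left / max, right / max};
  }
}
```

The “manual” variants replace the threshold check with a driver-held button.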

One other 6328-specific drive feature you may see mentioned in the code is “Sniper Mode”, which we created in 2017 and have carried forward. It’s a precision maneuvering mode in which top speed is reduced to a fraction of the normal maximum, so the entire joystick analog range can be used for precise slow-speed positioning. This was ideal for placing gears on pegs in Steamworks and other similar game challenges.
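Sniper Mode itself is conceptually tiny. A minimal sketch (the 0.3 scale factor here is an illustrative value, not our actual tuning):

```java
// Minimal sketch of Sniper Mode: scale the drive output so the full
// joystick throw maps to a reduced top speed, preserving the entire
// analog range for precise slow-speed positioning.
class SniperMode {
  static final double SNIPER_SCALE = 0.3; // illustrative fraction of max speed

  static double apply(double joystickValue, boolean sniperEnabled) {
    return sniperEnabled ? joystickValue * SNIPER_SCALE : joystickValue;
  }
}
```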

Final Thoughts

We’re very glad we implemented this OI abstraction layer; it enables all sorts of flexibility that would be hard to have any other way. There is some effort involved in setting up the interfaces, and if you support as many options as we have, the selection logic can be a little complex. We think it’s worth it, though: drivers get to use the controllers they prefer, we get flexibility for testing and demos, and our primary robot code is decoupled from needing to know what controllers are in use.

As always, we’re available to answer any questions!


To accommodate the different gameplay this year, we decided to modify the middle position of our shooter hood. The hood has three positions: the outer and inner positions are actuated by a piston, and the middle position is a catch made of two sheet-metal hooks and a pair of solenoids.

First, we used the existing design to find the shot angle and height, and brought this information (along with the linear velocity of the shot, determined to be 104.72 ft/s at 6000 RPM) into a design calculator. The calculator computed the shot trajectory and showed that, at the middle hood position, the closest we can be to the power port is 134 inches (accounting for air resistance).
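As a sanity check on that number: a flywheel’s surface speed is v = π · d · (RPM / 60). Assuming a 4-inch wheel and a ball leaving at full surface speed (both assumptions on our part, not stated in the post), 6000 RPM works out to the quoted 104.72 ft/s:

```java
// Sanity check: flywheel surface speed from diameter and RPM.
// Assumes the ball exits at full surface speed (no slip factor).
class FlywheelSpeed {
  static double surfaceSpeedFtPerSec(double wheelDiameterIn, double rpm) {
    double circumferenceFt = Math.PI * wheelDiameterIn / 12.0; // ft per revolution
    return circumferenceFt * (rpm / 60.0);                     // revolutions per second -> ft/s
  }
}
```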

The programming team then tested the shooter at different distances (x axis), recording the corresponding flywheel speeds (y axis), as seen here:

The middle hood position is represented by the red curve. During testing we noted that point H, which is closer than 134 inches, missed a few times. We then decided that the middle hood position should overlap with both the range of the trench shot and the wall shot, making shots between 80 and 190 inches. By adjusting the shot angle to 40 degrees, we were able to achieve this: at 6000 RPM, we can make a shot from 82 inches away, a huge improvement over the previous calculation. This hood position also allows us to shoot from a variety of locations using software that dynamically adjusts the shot speed based on distance from the goal; for example, by lowering the shot velocity to 40 ft/s, we can make shots from the trench zone. This achieves our goal of a middle shot profile that overlaps with both other profiles, and, by adjusting the shot velocity, we should be able to shoot into the inner goal as well.

We then used this geometry to move the stop for the middle hood position in our CAD. To do this, we drew a line at a 40 degree angle to simulate the 40 degree shot angle, and used the existing solenoid location to move the catch up a few inches from its existing position.

This required the mechanical team to swap out the shooter’s two catch plates; the new plates arrived a few days ago and were powder coated on Saturday. We installed them yesterday, and the programming team is going to characterize the new shooter hood position later this week. We’re planning to post an update then, and of course let us know if you have any questions!


I’m Lizzy, a student on FRC 6328! We typically set up a display in the pits with our information. We wanted to release all of this information online before the upcoming Chairman’s interviews.

Chairman’s Essay:
6328ChairmansEssay2021.pdf (72.9 KB)

Executive Summary:
6328ExecutiveSummaryQuestions2021.pdf (63.3 KB)

6328CADefinitionChart.pdf (61 KB)

Outreach Tracking:
Public Outreach Tracking 2020-2021.pdf (39.5 KB)

Resource Guides:

Remote Curriculum:

Let me know if you have any questions!


Making note of the questions that were asked during our students’ Deans List interviews last week. (If there’s another/better place to put a copy of these, I’m happy to do that as well.)

  • Have you read your nomination submission?
  • What skills did you learn in FLL that helped your transition to FRC? (both of our DL students started in FLL)
  • What inspired you to work with XXX? (each student was asked this about something specific in their nomination, followed up by a couple of project-specific questions)
  • What did you learn from XXX (again, a specific project or responsibility from the nomination)?
  • What are your future goals for your FRC team?
  • How were you able to contribute to sponsor relationships?
  • What has been your greatest contribution to your FRC team?
  • What are your plans for continuing with FIRST after high school?

6328’s Chairman’s interview is scheduled for tonight, and we’ll post those questions after as well.


And the Chairman’s interview questions from tonight:

  • Have you come up with plans to sustain outreach in a post-pandemic world?
  • You state that all students are part of the biz team; how do you ensure that happens?
  • You talk about supporting female students; please expand on that.

And then several questions asking students to provide more details on team-specific programs and initiatives discussed in the exec summary and essay.


In this post we mentioned some of the changes we’ve been working on with our shooter, and we’d like to give an update. To recap, we identified an issue with the three positions of our pneumatic hood. Between wall (the lowest position) and line (the middle position), we had a dead zone where making shots was nearly impossible. We adjusted the angle of the middle position to bridge this gap. To make accurate shots, we manually tune flywheel speed at various distances for each hood position and model the curve using a quadratic regression. This allows the robot to calculate the flywheel speed for any distance. After adjusting the line position, our data looks like this:
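The structure of that distance-to-speed lookup can be sketched roughly like this. The coefficients, ranges, and names below are made-up placeholders, not our real calibration; the actual values come from the manual tuning described above.

```java
// Hypothetical sketch: each hood position stores a tuned quadratic
// (rpm = a*d^2 + b*d + c) plus its usable distance range; the code
// picks the position covering the current distance and evaluates it.
class HoodModel {
  enum Position { WALL, LINE, TRENCH }

  static class Profile {
    final Position pos;
    final double minIn, maxIn; // usable distance range, inches
    final double a, b, c;      // quadratic regression coefficients

    Profile(Position pos, double minIn, double maxIn, double a, double b, double c) {
      this.pos = pos; this.minIn = minIn; this.maxIn = maxIn;
      this.a = a; this.b = b; this.c = c;
    }

    double rpmAt(double d) { return a * d * d + b * d + c; }
  }

  // Placeholder calibration: one quadratic per hood position.
  static final Profile[] PROFILES = {
    new Profile(Position.WALL,    20,  80, 0.05, -4.0, 3000),
    new Profile(Position.LINE,    80, 190, 0.02,  2.0, 2500),
    new Profile(Position.TRENCH, 190, 300, 0.01,  5.0, 2000),
  };

  /** Pick the hood position whose usable range contains the distance. */
  static Profile select(double distanceIn) {
    for (Profile p : PROFILES) {
      if (distanceIn >= p.minIn && distanceIn <= p.maxIn) return p;
    }
    return null; // dead zone: no position covers this distance
  }
}
```

A dead zone shows up in this model as a distance where `select` finds no covering range, which is exactly the gap the fourth hood position closes.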

The x axis represents distance to the inner port and the y axis represents flywheel speed in RPM. Blue is the wall position (low), green is the line position (middle), and red is the trench position (high). The data points visible here represent the maximum usable ranges for each position. As you can see, we have eliminated the gap between wall and line, but introduced a new dead zone between line and trench. Unfortunately, that dead zone covers one of the zones for Interstellar Accuracy, as well as our preferred shooting location for the Power Port challenge. For the next iteration, we added a fourth hood position at the same location as the original line. After identifying the issue, we had the new plates designed and cut in under 24 hours:

After installing them, we found that adding back the original line position perfectly fills in our range:

The code automatically selects hood position and flywheel speed by combining pure wheel odometry and Limelight data. Adding a fourth hood position makes maneuvering the hood a little more complex, since the main lift now presses against our stops from two directions. This also means that disabling doesn’t always return the hood to a known position (it can apply enough pressure to the stops that they are unable to retract). The code will detect when that might be happening and automatically do a full reset the next time it moves the hood. It’s nothing a state machine can’t solve!
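That reset logic can be sketched as a small state machine. This is a hypothetical illustration; the states, names, and reset behavior are simplified from the real code.

```java
// Hypothetical sketch of the hood state machine: if the hood may be
// pinned against a retractable stop when the robot disables, mark the
// position unknown and run a full reset before the next commanded move.
class HoodStateMachine {
  enum State { KNOWN, UNKNOWN }

  State state = State.KNOWN;
  int position = 0; // current stop index, meaningful only when KNOWN

  /** Called on disable: position can't be trusted if the lift was
   *  pressing a stop hard enough to keep it from retracting. */
  void onDisable(boolean pressingStop) {
    if (pressingStop) state = State.UNKNOWN;
  }

  /** Returns true if a full reset sequence had to run before moving. */
  boolean moveTo(int target) {
    boolean neededReset = (state == State.UNKNOWN);
    // (a real implementation would drive to a hard home position here)
    state = State.KNOWN;
    position = target;
    return neededReset;
  }
}
```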

For the shooting challenges this year, we’ve been using a flat power port. For a 3D power port, the distance to the target when calculating flywheel speed is relative to the inner port. For a flat target, we simply move the “inner port” forward. This means that our characterization data for most positions works in its current state. The only exception is the wall, because our previous data relied on bouncing the power cell off of the inside of the structure and into the inner port. Clearly, this doesn’t work for a flat target. We created a separate model for the wall position when using a flat power port (colored purple):

The minimum distance is much lower than the model for the 3D power port because the robot can get physically closer to the “inner” target. This introduces a new dead zone between wall and line, but it doesn’t affect any important shooting locations. After making these improvements, we are now turning our attention to the two shooting challenges. We’ll be doing formal runs of Interstellar Accuracy soon, but our initial testing has shown good results. For the Power Port Challenge, we decided that an autonomous routine would be best for minimizing the shooting time. Here’s an example (some speed improvements to come):

The robot stays on the left side of the field throughout the challenge, meaning all of the power cells return in the center or to the right. That means human players can collect and return them quickly without running across the field. Like Interstellar Accuracy, we’ll be moving into formal runs of this challenge soon. We’re happy to answer any questions.

Love those flywheel calibration curves. I don’t think very many teams have shared that data, so thank you. Looks like you made the right call adding a fourth hood angle stop.

Did you consider replacing the pneumatic cylinder with a linear servo so you could choose any hood angle you like?