FRC 6328 Mechanical Advantage 2020-2021 Build Thread

Hi, I had a few questions about your code. I've been banging my head against trying to implement your code. I've gotten it to the point where it will run the paths, but the distances and angles are always off and never consistent.

My questions are,
How were kS, kV, and kA calculated? They are commented as volts and volt seconds per meter.
How did you guys calculate your velocities and accelerations? I see that some accelerations and velocities are the same, but for our robot the acceleration is greater than our velocity.
How did you guys calculate PIDF for this? And why is the D term such an extremely high value?
How do the Ramsete values B and Zeta affect how the robot interacts with the path?

Here is a link to my github repo

Thank you for your time,
Connor Shelton
Team 68 Programming Subteam leader

2 Likes

Not 6328 but I know these will be of great help:

Introduction to Robot Characterization — FIRST Robotics Competition documentation (wpilib.org)

Trajectory Tutorial Overview — FIRST Robotics Competition documentation (wpilib.org)

Trajectory Generation and Following with WPILib — FIRST Robotics Competition documentation

2 Likes

If you haven’t taken a look already, the resources linked above will all be very helpful in answering your questions. I’ll try to address them more specifically as best I can.

kS, kV, and kA are all retrieved from the robot characterization tool (see above). It’s definitely important to be careful about the units here. The variables in our code are commented as using meters, but this is because updateConstants converts them from inches to meters. I’d suggest either setting them as inches in updateConstants and letting it convert or removing the method altogether and doing everything in meters. There is an option in the characterization tool to select your units. Also keep in mind that these values are just used for the voltage constraint, which keeps acceleration within the limits of your robot’s electrical system. In general, they shouldn’t have a major effect on the quality of your tracking. The characterization tool will also give you a value for your empirical track width (which is often slightly greater than the true track width because of wheel scrub).

You can determine the robot’s theoretical maximum velocity using kV. The profile’s maximum voltage is limited to 10 volts, so dividing 10 by kV in volt seconds per inch will give you inches per second (or meters/second when working in meters). For example, our kV is 0.0722 so 10/0.0722 is ~138 inches per second. For acceleration, using the same value is a good starting point but certainly not required. Driving the robot around manually, you probably can get a good sense of what is or is not reasonable (for example, how many seconds does it take to reach top speed?). When in doubt, I’d suggest starting with low velocities and accelerations then working your way up until the profile can’t track accurately anymore. Keep in mind that the voltage constraint will also act as an upper limit on your acceleration.
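As a minimal sketch of that setup (working in meters, with example constants rather than anyone's real characterization values), the max velocity and voltage constraint might look like this using WPILib's trajectory classes:

double kS = 0.15;  // volts (example value)
double kV = 2.84;  // volt seconds per meter (example value, roughly 0.0722 V*s/in converted)
double kA = 0.35;  // volt seconds squared per meter (example value)

double maxVelocity = 10.0 / kV;        // ~3.5 m/s theoretical top speed at 10 volts
double maxAcceleration = maxVelocity;  // the same number is a reasonable starting point

TrajectoryConfig config = new TrajectoryConfig(maxVelocity, maxAcceleration)
    .addConstraint(new DifferentialDriveVoltageConstraint(
        new SimpleMotorFeedforward(kS, kV, kA),
        new DifferentialDriveKinematics(0.65),  // track width in meters (example value)
        10.0));                                 // cap the profile at 10 volts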

As for the PID gains, I’m assuming you’re referring to the velocity PID on the drive? Those constants are here in our code. We tune these by running at a target velocity for short distances and logging the actual velocity via NetworkTables. Using Shuffleboard, we can graph that value to check the response curve. We wrote this command to help with that process, which uses our TunableNumber class to update gains based on values sent over NetworkTables.
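As a rough sketch of that pattern (simplified, not our exact TunableNumber class), a tunable gain can just wrap a SmartDashboard entry so it can be edited while the robot is running:

public class TunableNumber {
  private final String key;
  private final double defaultValue;

  public TunableNumber(String key, double defaultValue) {
    this.key = key;
    this.defaultValue = defaultValue;
    SmartDashboard.setDefaultNumber(key, defaultValue);  // publish without overwriting an edited value
  }

  public double get() {
    return SmartDashboard.getNumber(key, defaultValue);
  }
}

// e.g. in a tuning command's execute(): pidController.setP(kP.get());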

This page provides a good explanation of the gains for the Ramsete controller. However, these values are robot-agnostic so they shouldn’t require any tuning.
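When working in meters, WPILib's recommended defaults are b = 2.0 and zeta = 0.7:

RamseteController ramseteController = new RamseteController(2.0, 0.7);  // b, zeta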

I don’t see any immediate issues with the code you’re running, so my best guess is that the issues you’re seeing might be caused by a poorly tuned velocity PID. This page also provides some useful troubleshooting steps. If the velocity PID looks OK, the next thing I’d suggest is logging odometry x and y over NetworkTables to ensure that it’s tracking the position accurately (that doesn’t have to be with a profile, you can just drive in tele-op or push the robot by hand).
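A minimal sketch of that logging (assuming a DifferentialDriveOdometry field named odometry in the drive subsystem) could just publish the pose every loop:

public void logOdometry() {
  Pose2d pose = odometry.getPoseMeters();
  SmartDashboard.putNumber("Odometry X", pose.getTranslation().getX());
  SmartDashboard.putNumber("Odometry Y", pose.getTranslation().getY());
}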

We’re happy to answer any other questions you have.

6 Likes

We’ve talked a little bit about the work we’ve been doing for the Galactic Search challenge, and we’d like to share some updates on our progress. The general process we’ve defined for completing the challenge is this: we first select a path and place the robot in its starting position. While the robot is disabled, the operator pushes a button to run the vision pipeline. The robot determines which path to run based on the visible power cells and puts the selection on the dashboard. The operator then confirms that the path is correct before enabling (so that we don’t crash into a wall). Since there are no rules requiring that the selection take place after enabling, we thought that this was a safer option than running the pipeline automatically. The motion profile for each path is defined ahead of time from a known starting position, which means we can manually optimize each one.

We also considered whether continuously tracking the power cells would be a better solution. However, we decided against this for two reasons:

  1. Continuously tracking power cells is a much more complicated vision problem that would likely require use of our Limelight. Making a single selection can be done directly on the RIO with a USB camera (see below for details). Tracking also becomes difficult/impossible when going around tight turns, where the next power cell may not be visible.
  2. Motion profiles give us much more control over the robot’s path through the courses, meaning we can easily test & optimize with predictable results each time.

The Paths

These are the four paths as they are currently defined:

A/Blue:

A/Red:

B/Blue:

B/Red:

These trajectories are defined using cubic splines, which means the intermediate waypoints don’t include headings. This is different from the AutoNav courses, which use quintic splines. For this challenge, we don’t require as tight control over the robot’s path and so a cubic spline is more effective.
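The difference shows up in which TrajectoryGenerator overload you call (example waypoints only, and given some TrajectoryConfig named config): the cubic version takes interior points as Translation2d with no headings, while the quintic version takes full Pose2d waypoints.

// Cubic: interior waypoints are positions only, headings are solved for automatically
Trajectory cubic = TrajectoryGenerator.generateTrajectory(
    new Pose2d(0.0, 0.0, new Rotation2d(0.0)),                            // start pose
    List.of(new Translation2d(1.5, 0.5), new Translation2d(3.0, -0.5)),   // interior waypoints (example)
    new Pose2d(4.5, 0.0, new Rotation2d(0.0)),                            // end pose
    config);

// Quintic: every waypoint is a full Pose2d, so intermediate headings are specified too
Trajectory quintic = TrajectoryGenerator.generateTrajectory(
    List.of(new Pose2d(0.0, 0.0, new Rotation2d(0.0)),
        new Pose2d(2.0, 1.0, Rotation2d.fromDegrees(45.0)),
        new Pose2d(4.5, 0.0, new Rotation2d(0.0))),
    config);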

The starting positions for each trajectory are placed as far forward as possible such that the bumpers break the plane of the starting area. Given the locations of the first power cells, we found that starting the red paths at an angle was beneficial (just make sure to reset the odometry’s starting position accurately :wink:).

You may notice that our trajectories don’t perfectly match up with the power cells. This is for two reasons:

  1. The trajectory defines the location of the robot’s center, but we need the intake (which deploys in front of the bumper) to contact the power cells. Often, this means shifting our waypoints 6-12 inches. For example, the path for the second power cell in b/blue tracks to the right of the power cell such that the intake grabs it in the center during the turn.
  2. Our profiles don’t always track perfectly towards the end of the profile, meaning we end up contacting power cells on the sides of our intake. Shifting the trajectory to compensate is a quick fix for those types of problems.

The other main issue we had to contend with is speed. The robot’s top speed is ~130 in/s, but our current intake only works effectively up to ~90 in/s. Since most power cells are collected while turning, our centripetal velocity constraint usually slows down the robot enough that this isn’t an issue. However, we needed to add custom velocity constraints for some straight sections (like the first power cell of b/blue). There were also instances where the intake contacted the power cell before the robot began turning enough to slow down. Through testing it was fairly easy to check where it needed a little extra help.
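One way to express that kind of section-specific limit (a hypothetical constraint, not necessarily how our code does it) is a custom TrajectoryConstraint that caps velocity only while the robot is inside a rectangular region of the field:

import edu.wpi.first.wpilibj.geometry.Pose2d;
import edu.wpi.first.wpilibj.geometry.Translation2d;
import edu.wpi.first.wpilibj.trajectory.constraint.TrajectoryConstraint;

public class RegionVelocityConstraint implements TrajectoryConstraint {
  private final Translation2d min;   // bottom-left corner of the region (meters)
  private final Translation2d max;   // top-right corner of the region (meters)
  private final double maxVelocity;  // velocity cap inside the region (meters per second)

  public RegionVelocityConstraint(Translation2d min, Translation2d max, double maxVelocity) {
    this.min = min;
    this.max = max;
    this.maxVelocity = maxVelocity;
  }

  @Override
  public double getMaxVelocityMetersPerSecond(Pose2d pose, double curvature, double velocity) {
    Translation2d p = pose.getTranslation();
    boolean inRegion = p.getX() >= min.getX() && p.getX() <= max.getX()
        && p.getY() >= min.getY() && p.getY() <= max.getY();
    return inRegion ? maxVelocity : Double.POSITIVE_INFINITY;  // no extra limit outside the region
  }

  @Override
  public MinMax getMinMaxAccelerationMetersPerSecondSq(Pose2d pose, double curvature, double velocity) {
    return new MinMax();  // leave acceleration limits to the other constraints
  }
}

A constraint like this gets added to the TrajectoryConfig with addConstraint, the same way as the voltage and centripetal constraints.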

Here’s an example of the robot running the B/Red path:

The code for each path is available here:

Vision!

Running the vision pipeline on only single frames vastly simplifies our setup, as opposed to continuously tracking power cells. Rather than using any separate hardware, we can plug our existing driver cam directly into the RIO for the processing. The filtering for this is simple enough that setting it up using our Limelight was overkill (and probably would have ended up being more complicated anyway). Our Limelight is also angled specifically to look for the target, which doesn’t work when finding power cells on the ground. When the operator pushes the button to run the pipeline, the camera captures an image like this:

b-red-lowres

Despite this being a single frame, we quickly realized that scaling down the full 1920x1080 resolution was necessary to allow the RIO to process it. Our pipeline runs at 640x360, which is plenty to identify the power cells. Using GRIP, we put together a simple HSV filter that processes the above image into this:

b-red-threshold
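A rough sketch of that single-frame capture and threshold (not the exact 6328 pipeline; the HSV bounds here are placeholders that would come from GRIP tuning):

UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(640, 360);
CvSink sink = CameraServer.getInstance().getVideo();

Mat frame = new Mat();
Mat hsv = new Mat();
Mat hsvThreshold = new Mat();

if (sink.grabFrame(frame) != 0) {  // grabFrame returns 0 on timeout/error
  Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
  Core.inRange(hsv, new Scalar(20, 100, 100), new Scalar(40, 255, 255), hsvThreshold);  // placeholder bounds
}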

To determine the path, we need to check for the existence of power cells at known positions. Since we don’t need to locate them at arbitrary positions, finding contours is unnecessary. That also means tuning the HSV filter carefully is less critical than with traditional target-tracking (for example, the noise on the left from a nearby power cube has no impact on the logic).

Using the filtered images, our search logic scans rectangular regions for white pixels and calculates if they make up more than ~5-10% of the area. This allows us to distinguish all four paths very reliably. The rectangular scan areas are outlined below, with red & green indicating whether a power cell would be found.

a-red-overlayb-red-overlay

a-blue-overlayb-blue-overlay

It starts by searching the large area closest to the center, which determines whether the path is red or blue. The second search area shifts slightly based on that result such that a power cell will only appear on one of the two possible paths. This result is then stored and set in NetworkTables so that the operator knows whether to enable.

Here’s our code for the vision selection along with the GRIP pipeline.

The “updateVision” method handles the vision pipeline, then the command is scheduled during autonomous. We update the vision pipeline when a button is pressed, though it could also be set to run periodically while disabled.

This command can be easily customized for use with a different camera. Depending on where power cells appear in the image, the selection sequence seen below can be restructured to determine the correct path. Similarly, any four commands will be accepted for the four paths.

if (searchArea(hsvThreshold, 0.1, 315, 90, 355, 125)) { // Red path
  if (searchArea(hsvThreshold, 0.05, 290, 65, 315, 85)) {
    path = GalacticSearchPath.A_RED;
    SmartDashboard.putString("Galactic Search Path", "A/Red");
  } else {
    path = GalacticSearchPath.B_RED;
    SmartDashboard.putString("Galactic Search Path", "B/Red");
  }
} else { // Blue path
  if (searchArea(hsvThreshold, 0.05, 255, 65, 280, 85)) {
    path = GalacticSearchPath.B_BLUE;
    SmartDashboard.putString("Galactic Search Path", "B/Blue");
  } else {
    path = GalacticSearchPath.A_BLUE;
    SmartDashboard.putString("Galactic Search Path", "A/Blue");
  }
}
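For reference, here is a minimal sketch of what a searchArea helper like the one above could look like (parameter order inferred from the snippet: a minimum white-pixel fraction followed by the rectangle's corner coordinates):

private boolean searchArea(Mat hsvThreshold, double minFraction, int x1, int y1, int x2, int y2) {
  Mat region = hsvThreshold.submat(new Rect(new Point(x1, y1), new Point(x2, y2)));
  double fraction = (double) Core.countNonZero(region) / (region.rows() * region.cols());
  return fraction > minFraction;  // true when enough white pixels indicate a power cell
}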

Overall, we’ve found this setup with vision + profiles to work quite reliably. We’re happy to answer any questions.

15 Likes

Today we’ll talk about a software structure that we use to support and easily switch between multiple forms of driver and operator controls.

Background

For a while, we had a single set of operator controls. In our rookie year we built a driver station on which were mounted two Logitech joysticks for the driver plus an operator button-board with a couple of E-Stop Robotics CCI interface boards for subsystem controls. (It also had an Arduino to drive status LEDs, but for this post we’re going to focus on input controls only.) Simple enough. After a few years, though, things began to get complicated…

  • We found that for demos, dragging the whole, large driver station assembly around was not ideal. A common refrain was, “Can’t we just bring a laptop?” :man_shrugging:
  • At demos, we often wished we could have a control arrangement for one person to drive all the main subsystem controls “well enough”, rather than needing two people to operate all the controls.
  • We developed another, more advanced button-board operator interface, while the old one remained in use as well.
  • Some of our drivers/operators expressed a preference for handheld controllers in place of the dual joysticks, or to use handheld controllers in place of the button board.
  • For offseason competitions, we needed to be able to run a 2nd robot, sometimes with different controls than it usually used. (For example, the second robot might need to use our “other” driver station, or just a laptop and controllers.)

Trying to support these sorts of variations in the software posed quite a challenge, since doing so would mean fixed mappings no longer worked. And requiring one of our programmers to go adjust the code every time there was a need to run with a different controller was not practical. We needed to be able to “plug and go” so that in situations like driver tryouts where we gave the students an opportunity to try any of the control schemes, we could swap between them quickly.

A better solution was in order.

Dividing Up The Controls

Our controls can be logically divided into 3 groups (some years, there may be a 4th). Currently we have:

  • Driver controls: all the things that the driver is responsible for, including:

    • Controlling the drivetrain
    • Aiming (auto-aim button)
    • Shooter ball feed
  • Driver “overrides” : controls we need to support but that aren’t normally used on the competition field. In our original driver station, these were on toggle switches with flip-up “fighter pilot” covers, located on the operator’s console. These include:

    • Drive Disable - for locking out the drivetrain for occasions where the robot needs to be enabled but can’t move for safety reasons. We use this in the pit a lot.
    • Open Loop Drive - our normal drive is closed-loop velocity control for precision. This override runs the drivetrain open-loop instead. It’s needed for testing when the robot is on blocks on the cart, or if there’s an encoder issue that is creating a problem. (This saved us in our rookie year when we had to play a match without working encoders after a marathon emergency drivetrain repair.)
    • Limelight LED disable - for rules compliance and annoyance reduction.
  • Operator controls: all the things that the operator is responsible for:

    • Intake extend/retract and rollers on/off
    • Flywheel control
    • Climber controls - deploying and climbing

The specific controls needed vary year to year, but this sort of grouping is typical. Some robots might have operator overrides too (for example, a limit override on an elevator) but our current one does not.

Basically, supporting flexible controls comes down to mapping these control groups, and the specific elements for each, to available physical inputs from the connected controllers. Here are some arrangements we support. If a controller is in a group, that means at least one input in that group comes from that controller.

Each arrangement requires that the specific controls within the grouping are mapped to a particular controller object and physical input (button, joystick, slider, etc.) so that the robot code can use it. This requires a fair bit of flexibility, since:

  • The controls inputs represent anywhere from one to four separate controllers
  • Sometimes controls groups are supported across multiple controllers (as with dual joysticks)
  • Sometimes controls groups are overlapped onto one controller
  • If a control isn’t available, it should be quietly unavailable without requiring special effort (or causing the code to crash)

Mapping the Concept to Software

So how do we take our concept of flexible, decoupled controls inputs and turn that into working software?

Interfaces

Representing the above in software requires some abstraction, which we provide through Java Interfaces. Each controls grouping is represented as an interface that specifies methods for getting the button for a specific function, reading specific analog controller values, and so on. Even better, Java allows us to have default methods in the interface so that if a particular implementation doesn’t support one of them, the default method provides a (dummy) version of it. Here is an excerpt from our Operator OI interface:

public interface IOperatorOI {

    static final Trigger dummyTrigger = new Trigger();

    public Trigger getShooterFlywheelRunButton();

    public Trigger getShooterFlywheelStopButton();

    public default Trigger getClimbEnableSwitch() {
        return dummyTrigger;
    }

    public default double getClimbStickY() {
        return 0;
    }

    public default double getClimbStickX() {
        return 0;
    }
}

Here, we see a few things:

  • Methods that we always expect to have implementations for, including the shooter-flywheel controls.
  • Methods that might not always have implementations, like getClimbEnableSwitch, which have a default implementation that just returns a dummy Trigger. This way, if we’re on a controller that doesn’t have a mapping for that function, nothing special needs to be done; the trigger object is valid, it will just never activate (see the sketch after this list).
  • Interface methods that return an analog input reading. Here, the getClimbStick methods return a double value - and as before, there’s a default implementation that just returns 0.
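For example, a command binding can use one of the optional methods without caring whether the connected OI actually maps it (a hypothetical binding with a made-up climber subsystem, not from our actual code):

// If the active OI doesn't override getClimbEnableSwitch(), this binds to the dummy Trigger
// from the interface default, which is valid but simply never fires.
operatorOI.getClimbEnableSwitch().whenActive(new InstantCommand(climber::enable, climber));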

The full interface definitions can be seen in our source code:

IDriverOI.java

IOperatorOI.java

IDriverOverrideOI.java

OI Implementations

Our code has a series of different implementations that each support one or more interfaces. Each takes one or two controller IDs and uses those to map buttons and other inputs appropriately. Depending on what controllers the main robot code detects - more on that later - it decides which implementation objects to instantiate. Some, like the “All In One”, are used alone, while others are used in combinations (for example, OIHandheldWithOverrides might be used with OIOperatorHandheld). Later, we’ll talk about how this selection logic works, but first let’s see what implementations our code currently provides:

Each implementation below is listed with its description, the number of controller IDs it takes, and the interfaces it implements:

  • OIArduinoConsole: operator console based on an Arduino 32U4 that presents as 2 joysticks; 2 IDs; implements IOperatorOI, IDriverOverrideOI
  • OIDualJoysticks: dual Logitech Attack 3 joysticks; 2 IDs; implements IDriverOI
  • OIeStopConsole: operator console based on 2 eStop Robotics CCI boards; 2 IDs; implements IOperatorOI, IDriverOverrideOI
  • OIHandheld: Xbox driver controller (drive only); 1 ID; implements IDriverOI
  • OIHandheldWithOverrides (extends OIHandheld): Xbox driver controller (drive and overrides); 1 ID; implements IDriverOI, IDriverOverrideOI
  • OIOperatorHandheld: Xbox operator controller; 1 ID; implements IOperatorOI
  • OIHandheldAllInOne (extends OIHandheldWithOverrides): Xbox driver controller (all functions); 1 ID; implements IDriverOI, IDriverOverrideOI, IOperatorOI

Let’s look at part of how one of these is constructed:

public class OIeStopConsole implements IDriverOverrideOI, IOperatorOI {
  private Joystick oiController1;
  private Joystick oiController2;

  private Button openLoopDrive;
  private Button driveDisableSwitch;
  private Button limelightLEDDisableSwitch;

  private Button shooterFlywheelRunButton;
  private Button shooterFlywheelStopButton;
  
  public OIeStopConsole(int firstID, int secondID) {
    oiController1 = new Joystick(firstID);
    oiController2 = new Joystick(secondID);

    openLoopDrive = new JoystickButton(oiController2, 10);
    driveDisableSwitch = new JoystickButton(oiController2, 9);
    limelightLEDDisableSwitch = new JoystickButton(oiController2, 8);

    shooterFlywheelRunButton = new JoystickButton(oiController2, 4);
    shooterFlywheelStopButton = new JoystickButton(oiController2, 3);
  }

  @Override
  public Trigger getOpenLoopSwitch() {
    return openLoopDrive;
  }

  @Override
  public Trigger getDriveDisableSwitch() {
    return driveDisableSwitch;
  }

  @Override
  public Trigger getLimelightLEDDisableSwitch() {
    return limelightLEDDisableSwitch;
  }

  @Override
  public Trigger getShooterFlywheelRunButton() {
    return shooterFlywheelRunButton;
  }

  @Override
  public Trigger getShooterFlywheelStopButton() {
    return shooterFlywheelStopButton;
  }
}

We can see that this is implementing both the operator and driver-override interfaces. Its constructor is passed two controller IDs, which it uses to build two Joystick objects. It then maps specific functions to buttons on these controllers, and provides methods for accessing these Button objects. The above is just a piece of the code to serve as an example. You can look at the code in our robot/oi folder to see the complete definition of this and the other implementations.

Finding & Using Current Configuration

On the driver station, joystick devices show up on the USB tab. The code needs to evaluate which controllers are connected, and decide based on that which OI objects to instantiate. Most important, all of the code accesses the OI through three interface objects:

private IDriverOI driverOI;
private IDriverOverrideOI driverOverrideOI;
private IOperatorOI operatorOI;

At the end of controller evaluation, all three of these need to have OI objects assigned. Because these are Interface objects, each can be assigned any object that implements that interface. This means that in some cases, they share the same object! For example:

  • When the eStop console is used, both the operatorOI and driverOverrideOI use the same OIeStopConsole object. The driverOI object is separate (and could be of several types). This arrangement is sketched below.
  • When the All-In-One controller is used, all three objects are the same OIHandheldAllInOne object. This is possible since it implements all 3 interfaces!
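In code, the eStop arrangement might be assigned like this (hypothetical port numbers, greatly simplified from our actual selection logic):

OIeStopConsole console = new OIeStopConsole(2, 3);  // eStop console detected on two USB ports
operatorOI = console;                               // operator controls come from the console
driverOverrideOI = console;                         // ...and so do the driver overrides
driverOI = new OIDualJoysticks(0, 1);               // driving itself uses the dual joysticks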

More importantly, the code that uses these interface objects needs to know nothing about what the actual controllers are. That detail is entirely abstracted - the rest of the code simply gets and uses buttons and input values as needed, via the public methods specified in the interface.

If no mapping can be found, there is a dummy OI object that gets assigned and provides dummy methods for the things that don’t already have a default in their interfaces. This is really just a fallback to prevent crashing - it doesn’t do anything useful.

Our selection logic is in the method updateOIType in RobotContainer.java. This is called at the start of teleopInit, autonomousInit, and every 10 seconds in disabledPeriodic. This approach makes sure that controllers can be switched around and will reconfigure themselves automatically without needing to restart the code.

Selection Logic

The update method follows the basic sequence:

  • Loop through connected joysticks looking for controllers that represent dedicated operator interfaces. For us, this is the eStop and Arduino controllers. If either of these is found, we know that the operator controls and driver-overrides will map to these controllers.
  • Then loop through again looking for driver controllers:
    • If we see a Logitech Attack 3, store it, and wait to find the second one. When we find the second one, make an OIDualJoysticks object.
    • For Xbox controllers, store the first one found. If we find another, and don’t already have an operator interface, then the second becomes the operator interface.
  • Then sort through other possible conditions where we don’t have a complete mapping yet. Here is where we figure out that we have a handheld driver control with overrides, or an all-in-one handheld.
  • When a selection is made for operator and driver controls, a message is printed to the console to confirm what the code is using. This is very helpful when plugging and unplugging controllers!

The selection logic does, necessarily, depend on the names of the controllers and the order in which they show up in the USB tab. But this means that all students need to do is connect the right controllers and drag them to the right order (generally, driver above operator, and left joystick above right if using dual joysticks), and the code sorts out what makes sense. With some simple instructions, anyone on the team can get their desired controls working.

Pay attention to the names your devices show up as; sometimes there can be more than one representation of a specific device. An example: “Controller (XBOX 360 For Windows)” and “XBOX 360 For Windows (Controller)” are both handled in our code.
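As a rough sketch (assumed name strings, greatly simplified from the real updateOIType logic), the detection loop looks something like this:

DriverStation ds = DriverStation.getInstance();
for (int port = 0; port < DriverStation.kJoystickPorts; port++) {
  String name = ds.getJoystickName(port);
  if (name.contains("Attack 3")) {
    // Logitech Attack 3 found; remember it and pair it with a second one for OIDualJoysticks
  } else if (name.contains("XBOX 360 For Windows")) {
    // Xbox controller found; the first becomes the driver, a second one can become the operator
  }
  // (dedicated operator consoles like the eStop and Arduino boards are matched by name first, omitted here)
}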

Drive Modes

To further enhance the flexibility of our controls, in addition to supporting different physical controllers, we support multiple drive modes, selectable through a SmartDashboard dropdown menu. Currently, we support:

  • Tank: standard 2-joystick tank drive
  • Split Arcade: standard split arcade (differential drive)
  • Southpaw Split Arcade: left-handed version of the above
  • Hybrid Curvature Drive: curvature drive with automatic low-speed turning
  • Southpaw Hybrid Curvature Drive: left-handed version of the above
  • Manual Curvature Drive: curvature drive with manual low-speed turning
  • Southpaw Manual Curvature Drive: left-handed version of the above
  • Trigger Drive: Xbox trigger drive (triggers for forward/reverse, steer by joystick)
  • Trigger Hybrid Curvature: like hybrid curvature, but trigger-operated
  • Trigger Manual Curvature: like manual curvature, but trigger-operated

This covers pretty much all of the common drive modes and supports both right- and left-handed drivers, whatever their preference. Note as well that the drive mode is selected entirely independently of the controllers in use. The source that supports this can be found in our DriveWithJoysticks.java file.

One other 6328-specific drive feature you may see mentioned in the code is “Sniper Mode”, which we created in 2017 and have carried forward. It’s a precision maneuvering mode in which top speed is reduced to a fraction of the normal maximum, so the entire joystick analog range can be used for precise slow-speed positioning. This was ideal for placing gears on pegs in Steamworks and other similar game challenges.

Final Thoughts

We’re very glad we implemented this OI abstraction layer; it enables all sorts of flexibility that would be hard to have any other way. There is some effort involved in setting up the interfaces, and if you support as many options as we have, the selection logic can be a little complex. We think it’s worth it, though: drivers get to use the controllers they prefer, we get flexibility for testing and demos, and our primary robot code is decoupled from needing to know what controllers are in use.

As always, we’re available to answer any questions!

16 Likes

To accommodate the different gameplay this year, we decided to modify our shooter's middle hood position. Our shooter hood has three positions: the outer and inner positions are actuated using a piston, and the middle position is a catch with two sheet metal hooks and a pair of solenoids.

First, we used the existing design to find the shot angle and height and brought this information (along with the linear velocity of the shot, which was determined to be 104.72 ft/s at 6000 rpm) into a design calculator. The design calculator determined the shot trajectory and that the closest we can be to the power port at the middle hood position is 134 inches (accounting for air resistance).

The programming team then tested the shooter at different distances (x axis) and their corresponding flywheel speeds (y axis), as seen here:

The middle hood position is represented by the red curve, and during testing we noted that point H, which is closer than 134 inches, missed a few times. It was then decided that the middle hood position should overlap with both the range of the trench shot and the wall shot, and that it should make shots between 80 and 190 inches. By adjusting the shot angle to 40 degrees, we were able to achieve this: at 6000 RPM, we can make a shot at 82 inches away, a huge improvement from previous calculations. This hood position also allows us to shoot from a variety of locations using software that dynamically changes the shot speed depending on the distance from the goal. So, by changing the shot velocity to 40 ft/s, we can make shots from the trench zone. This achieves our goal of creating a middle shot profile that overlaps with both other profiles, and, by adjusting the shot velocity, we should be able to shoot into the inner goal as well.

We then used this geometry to move the stop for the middle hood position in our CAD. To do this, we drew a line at a 40 degree angle to simulate the 40 degree shot angle, and used the existing solenoid location to move the catch up a few inches from its existing position.

This required the mechanical team to swap out the two catch plates in the shooter, which arrived a few days ago and were powder coated on Saturday. We did this yesterday, and the programming team is going to characterize the new shooter hood position later this week. We’re planning on posting an update then, and of course let us know if you have any questions!

13 Likes

Hi,
I’m Lizzy, a student on FRC6328! We typically have a display we set up in the pits with our information. We wanted to release all of this information online before the upcoming chairman’s interviews.

Chairman’s Essay:
6328ChairmansEssay2021.pdf (72.9 KB)

Executive Summary:
6328ExecutiveSummaryQuestions2021.pdf (63.3 KB)

Definitions:
6328CADefinitionChart.pdf (61 KB)

Outreach Tracking:
Public Outreach Tracking 2020-2021.pdf (39.5 KB)

Resource Guides:
https://littletonrobotics.org/about-6328/team-resources/

Remote Curriculum:
https://littletonrobotics.org/remote-learning/

Let me know if you have any questions!

16 Likes

Making note of the questions that were asked during our students’ Deans List interviews last week. (If there’s another/better place to put a copy of these, I’m happy to do that as well.)

  • Have you read your nomination submission?
  • What skills did you learn in FLL that helped your transition to FRC? (both of our DL students started in FLL)
  • What inspired you to work with XXX? (each student was asked this about something specific in their nomination, followed by a couple of project-specific questions)
  • What did you learn from XXX (again, a specific project or responsibility from the nomination)?
  • What are your future goals for your FRC team?
  • How were you able to contribute to sponsor relationships?
  • What has been your greatest contribution to your FRC team?
  • What are your plans for continuing with FIRST after high school?

6328’s Chairman’s interview is scheduled for tonight, and we’ll post those questions after as well.

5 Likes

And the Chairman’s interview questions from tonight:

  • Have you come up with plans to sustain outreach in a post-pandemic world?
  • You state that all students are part of the biz team; how do you ensure that happens?
  • You talk about supporting female students; please expand on that.

And then several questions asking students to provide more details on team-specific programs and initiatives discussed in the exec summary and essay.

6 Likes

In this post we mentioned some of the changes we’ve been working on with our shooter, and we’d like to give an update. To recap, we identified an issue with the three positions of our pneumatic hood. Between wall (the lowest position) and line (the middle position), we had a dead zone where making shots was nearly impossible. We adjusted the angle of the middle position to bridge this gap. To make accurate shots, we manually tune flywheel speed at various distances for each hood position and model the curve using a quadratic regression. This allows the robot to calculate the flywheel speed for any distance. After adjusting the line position, our data looks like this:

The x axis represents distance to the inner port and the y axis represents flywheel speed in RPM. Blue is the wall position (low), green is the line position (middle), and red is the trench position (high). The data points visible here represent the maximum usable ranges for each position. As you can see, we have eliminated the gap between wall and line, but introduced a new dead zone between line and trench. Unfortunately, that dead zone covers one of the zones for Interstellar Accuracy and is our preferred shooting location for the Power Port challenge. For the next iteration, we added a fourth hood position at the same location as the original line. After identifying the issue, we had the new plates designed and cut in under 24 hours:

After installing them, we found that adding back the original line position perfectly fills in our range:

The code automatically selects hood position and flywheel speed by combining pure wheel odometry and Limelight data. Adding a fourth hood position makes maneuvering the hood a little more complex, since the main lift now presses against our stops from two directions. This also means that disabling doesn’t always return the hood to a known position (it can apply enough pressure to the stops that they are unable to retract). The code will detect when that might be happening and automatically do a full reset the next time it moves the hood. It’s nothing a state machine can’t solve!
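As a sketch of the idea only (hypothetical position names, distance boundaries, and regression coefficients, not our actual characterization data), the selection could look something like this:

public enum HoodPosition { WALL, LINE_FRONT, LINE_BACK, TRENCH }

// Coefficients {a, b, c} of rpm = a*d^2 + b*d + c for each position, filled in from the regression fits
private final Map<HoodPosition, double[]> flywheelModels = new HashMap<>();

// Pick a hood position based on distance to the inner port in inches (boundaries are made up here)
public HoodPosition selectHood(double distanceInches) {
  if (distanceInches < 80.0) {
    return HoodPosition.WALL;
  } else if (distanceInches < 140.0) {
    return HoodPosition.LINE_FRONT;
  } else if (distanceInches < 190.0) {
    return HoodPosition.LINE_BACK;
  } else {
    return HoodPosition.TRENCH;
  }
}

// Evaluate the quadratic model for the selected position
public double flywheelRpm(HoodPosition hood, double distanceInches) {
  double[] c = flywheelModels.get(hood);
  return c[0] * distanceInches * distanceInches + c[1] * distanceInches + c[2];
}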

For the shooting challenges this year, we’ve been using a flat power port. For a 3D power port, the distance to the target when calculating flywheel speed is relative to the inner port. For a flat target, we simply move the “inner port” forward. This means that our characterization data for most positions works in its current state. The only exception is the wall, because our previous data relied on bouncing the power cell off of the inside of the structure and into the inner port. Clearly, this doesn’t work for a flat target. We created a separate model for the wall position when using a flat power port (colored purple):

The minimum distance is much lower than the model for the 3D power port because the robot can get physically closer to the “inner” target. This introduces a new dead zone between wall and line, but it doesn’t affect any important shooting locations. After making these improvements, we are now turning our attention to the two shooting challenges. We’ll be doing formal runs of Interstellar Accuracy soon, but our initial testing has shown good results. For the Power Port Challenge, we decided that an autonomous routine would be best for minimizing the shooting time. Here’s an example (some speed improvements to come):

The robot stays on the left side of the field throughout the challenge, meaning all of the power cells return in the center or to the right. That means human players can collect and return them quickly without running across the field. Like Interstellar Accuracy, we’ll be moving into formal runs of this challenge soon. We’re happy to answer any questions.

13 Likes

Love those flywheel calibration curves. I don’t think very many teams have shared that data, so thank you. Looks like you made the right call adding a fourth hood angle stop.

Did you consider replacing the pneumatic cylinder with a linear servo so you could choose any hood angle you like?

9 Likes

We certainly considered both options. From a mechanical perspective, the pneumatic hood is both simpler and more consistent than a servo. It’s much easier to guarantee that we will always hit the exact same positions using pneumatics. That sacrifices some variability, but changing flywheel speed as we do now gives us essentially equivalent accuracy. Maneuvering the hood in software is also a challenge, particularly with the fourth position. However, once we got it working (which was also an excellent introduction to state machines), it moves very quickly between positions compared to many servo-based mechanisms.

6 Likes

Just dropping in to share some goodness. Still a lot of room for improvement with all of these, but wanted to share some of the best times from our recent runs. All Autonav stuff will be up in a few days as well. Feel free to ask any questions!

Galactic Search FRC6328 Red A (4.0 seconds)

Galactic Search FRC6328 Red B (3.7 seconds)

Barrel Racing Path FRC6328 HyperDrive Challenge Run 1 (10.1 Seconds)

Bounce Path FRC6328 HyperDrive Challenge Run 1 (8.0 Seconds)

Lightspeed Path FRC6328 HyperDrive Challenge Run 1 (15.5 Seconds)

Slalom Path FRC6328 HyperDrive Challenge Run 1 (7.0 Seconds)

PowerPort Challenge FRC6328 (69 Pts)

Interstellar Accuracy Challenge FRC6328 (44 Pts!!!?!?!?!11ahadsgah)

22 Likes

And some Autonav paths; if you have any questions, feel free to ask!

Slalom Path FRC6328 Autonav Challenge Run 7 (9.2 Seconds)

Bounce Path FRC 6328 Autonav Challenge Run 2 (9.0 Seconds)

Barrel Racing Path FRC6328 Autonav Challenge Run 3 (9.9 Seconds)

14 Likes

By my unofficial count (i.e. looking things up here) that puts 6328 at 3rd, 3rd, 2nd, T-17th, T-4th in the world across the various challenges. Very impressive!

14 Likes

Thanks Karthik!

The team has been putting in an incredible amount of work over the last few months. It’s been challenging, but we’re enjoying the new challenge!

There’s still lots of room for improvement in a lot of these challenges for us, so hopefully in the last week or so we can push our way a little higher. I’m hoping some more teams share their scores on the leaderboard as we get closer to the deadline; it’s very fun to see the scores stack up against everyone else in the world!

9 Likes

6328 is really showing the world you don’t need swerve to be top tier. Great work you guys and can’t wait to see how much you can improve these already nuts scores!

6 Likes

Hello all,
After a long few months of hard work, team 6328’s Game Design Challenge group has had its interview with the judges and is currently waiting for the results from the first round. So, we would like to take this opportunity to share with the community everything that we submitted and presented at our game design interview.

For our submission, we had a game overview summary, notable field elements, expected robot actions, an ELEMENT description, a CAD model of the field, supplementary information, a game video, and our presentation from the interview. Here it is:

Game Overview:
In MALWARE MAYHEM, two alliances of 3 robots each work to protect FIRST against a malware attack from the anti-STEM organization LAST (League Against Science & Technology) and ultimately save future FRC game files including those for the top-secret water game. Each alliance and their robots work towards collecting lines of Code and deploying them in the Infected Cores in the CPU. Near the end of the match, robots race to share Code into a Shared Cache, and at the end of the match robots collect Firewalls and install them into the CPU.

During the 15 second autonomous period, robots must follow pre-programmed instructions. Alliances score points by:

  1. Moving from the Initiation Line
  2. Deploying lines of Code into the Infected Cores of the CPU
  3. Deploying lines of Code into the Uninfected Cores of the CPU

During the 75 second tele-op period, drivers take control of their robots. Alliances score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU

During the 30 second positioning period, robots score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU
  3. Deploying 5 lines of Code into the Shared Cache to achieve stage 1
  4. Deploying 10 lines of Code into the Shared Cache to achieve stage 2
  5. Deploying 15 lines of Code into the Shared Cache to achieve stage 3

During the 30 second deployment period, robots score points by:

  1. Continuing to deploy Code into the Shared Cache
  2. Installing Firewalls into the CPU
  3. Hanging from installed Firewalls

The alliance with the highest score at the end of the match wins.

Notable Field Elements:
CPU: A large structure that separates the two halves of the field, consisting of seven CORES on each side and a central opening (DATA BUS) for interaction between alliances. A total of 5 CORES span the upper level of the CPU, and there are an additional 2 CORES on either side of the DATA BUS near the edge of the field. Throughout the match, different CORES will be randomly highlighted with LEDs, marking them infected. The three high CORES in the center have a slot below the scoring opening for installation of the FIREWALL. An additional scoring area for the FIREWALL is a slot below the lower CORES on both sides of the DATA BUS.

Security Context: A platform that houses 3 FIREWALL units: One directly in front, and two angled on either side. The Security Context is located in front of the alliance station walls of the same alliance color.

Shared Cache: A scoring location on the opposite alliance’s driver station wall. The scoring location is a single window the same size as the windows on the CPU and is directly over the center of the alliance wall. CODE can only be scored here during the POSITIONING and DEPLOYMENT PERIODS, and both alliances must reach shared scoring tiers to receive equal points for both alliances.

Player Stations: There are four PLAYER STATIONS located in the four corners of the field. There are blue and red PLAYER STATIONS on the blue alliance station wall and the same for the red alliance station wall, for a total of two per alliance. The PLAYER STATIONS on the opposite side of the field of the alliance station have PROTECTED ZONES around them, while the stations on the same side do not.

Expected Robot Actions:
Auto: During the Autonomous Period, teams are tasked with moving off of the INITIATION LINE such that no part of their ROBOT is over the line. Teams may score in CORES, and they may collect code from the PLAYER STATION.

Tele-Operated Period: During the 75 second Tele-Operated portion of the game, the teams’ main task is to shoot CODE into the CORES of the CPU. The primary locations from which teams can receive CODE are the PLAYER STATIONS on their own side or on the opposite side of the field.

Positioning Period: During the POSITIONING PERIOD, which spans the second-to-last 30 seconds of the match, teams may retrieve FIREWALLS from the SECURITY CONTEXT, and prepare for the DEPLOYMENT PERIOD. Teams may also race to the other end of the field and deposit CODE into the SHARED CACHE. However, any robot that passes through the DATA BUS and does not return by the end of the POSITIONING PERIOD must remain on that side for the remainder of the match.

Deployment Period: During the DEPLOYMENT PERIOD (the final 30 seconds), teams are no longer allowed to pass through the DATA BUS. During this period, teams are expected to deploy FIREWALLS by inserting a FIREWALL into the CPU and pulling themselves completely off of the ground. Teams on the opposite alliance’s side are expected to play defense on climbing ROBOTS, deploy CODE into the SHARED CACHE, and/or try to prevent FIREWALL deployment. However, teams must be careful not to cross the INITIATION LINE.

ELEMENT:
The ELEMENT is the main game piece in MALWARE MAYHEM. The ELEMENT represents lines of anti-malware CODE, and ROBOTS must deploy the CODE into the CORES, INFECTED CORES, or the SHARED CACHE to earn points. The ELEMENT itself is an 18” plastic linkage chain with 2” links. Each CODE will have a magnetic component on one end, allowing it to be automatically scored as it passes through scoring locations. All CORES will have a magnetic detector that will detect any CODE that is scored through that specific CORE. The ELEMENT enters the field through the four PLAYER STATIONS on the field. During auto, CODE will not be located on the field; instead, all CODE will be either pre-loaded into the ROBOT or collected from the PLAYER STATIONS.

CAD Model:

Supplementary Information:

POINT VALUES

SHARED CACHE Stages

Points from shooting in the SHARED CACHE come in 3 stages. Each alliance in a match will only receive points after a certain number of lines of CODE have been deployed into the SHARED CACHE by both alliances. Once both alliances have deployed 5 lines of CODE each, stage 1 is achieved. Once both alliances have deployed 10 lines of CODE each, stage 2 is achieved. Once both alliances have deployed 15 lines of CODE each, stage 3 is achieved.

ROLE OF HUMAN PLAYER

Both alliances can have a maximum of three human players and there are two primary positions that the players can take during the game. One of the positions will be to put CODE onto the field through the PLAYER STATIONS on each side of the field, and since there are two PLAYER STATIONS per alliance, there will be two human players for this position. The other position of the human players is to move CODE from OVERFLOW to the PLAYER STATIONS on their alliance station side. A blue human player will be at one OVERFLOW and the red human player will be at the other. An OVERFLOW is where scored CODE in the CPU ends up.

Initiation Line (6): At the start of each match, each team’s ROBOT must start on the INITIATION LINE, a long strip of white spanning the entire width of the field on their alliance’s side of the field. ROBOTS moving entirely off of the INITIATION LINE into either the climbing zone or scoring zone will earn points for their team. During the endgame, this plane acts as a defense line: in the last 30 seconds of the match, robots of the opposite alliance are not allowed to cross it so as not to interfere with climbing.

Collection Line (5): This is the line that spans the width of the field at the end of the retrieval zones on each side of the field.

Collection Zone (12): This is the area of the field that spans from the COLLECTION LINE to the alliance wall. While in this zone, ROBOTS cannot attempt to deploy CODE. A ROBOT must move completely outside of the zone to be able to deploy CODE.

Scoring Zone: During TELEOP and the POSITIONING PERIOD, this is the area of the field that is in between the COLLECTION LINE (5) and CPU on either side of the field. During the DEPLOYMENT PERIOD, this zone is in between the COLLECTION LINE, and the INITIATION LINE (6) on either side of the field.

PROTECTED ZONES

Retrieval Zone (11): These zones are the marked areas on the field surrounding the PLAYER STATIONS on the opposite side of an alliance’s driver stations. At all times during the match, defense is not permitted on ROBOTS that are in these zones.

Climbing Zone (10): This zone becomes active only during the DEPLOYMENT PERIOD. This is the area of the field in between the two INITIATION LINES on either side of the field. During the DEPLOYMENT PERIOD, no defense is permitted on a ROBOT in this zone.

GAME SPECIFIC PENALTIES

  • Shooting outside the Scoring Zone: 5 points per CODE segment
  • Deploying the FIREWALL prior to the DEPLOYMENT PERIOD: 15 points
  • Contacting an opponent in their RETRIEVAL ZONE: 5 points per contact
  • Passing through the DATA BUS during the DEPLOYMENT PERIOD: 15 points
  • Contacting an opposing ROBOT in the CLIMBING ZONE during the DEPLOYMENT PERIOD: climb awarded to the opponent

GAME RULES: ROBOTS

G1. ROBOT height, as measured when it’s resting normally on a flat floor, may not exceed 36 in. (~91 cm) above the carpet during the match, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

ROBOT CONSTRUCTION RULES

R1. The ROBOT (excluding bumpers) must have a frame perimeter that consists of fixed, non-articulated structural elements of the ROBOT.

R2. In the starting configuration (the physical configuration in which a ROBOT starts a match), no part of the ROBOT shall extend outside the vertical projection of the frame perimeter, with the exception of its bumpers.

R3. A ROBOT’S starting configuration may not have a frame perimeter greater than 120 in. (~304 cm) and may not be more than 36 in. (~91 cm) tall.

R4. ROBOTS may not extend beyond their frame perimeter, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

R5. The ROBOT weight must not exceed 125 lbs. (~56 kg). When determining weight, the basic ROBOT structure and all elements of all additional mechanisms that might be used in a single configuration of the ROBOT shall be weighed together. The following items are excluded:

  1. ROBOT bumpers
  2. ROBOT battery

R6. A bumper is a required assembly which attaches to the ROBOT frame. Bumpers protect ROBOTS from damaging/being damaged by other ROBOTS and field elements

GAME TERMS

AUTO - The first 15 seconds of the match where the ROBOT moves by using autonomous code

CODE - An 18 inch long chain that is the main game piece, ELEMENT

CORE - One of 7 scoring locations that are on each side of the CPU

CPU - The central scoring unit that divides the two halves of the field

DATA BUS - The tunnel under the CPU that allows for interaction between alliances

DEPLOYMENT PERIOD - the last 30 second period following POSITIONING PERIOD

FIREWALL - A suitcase-shaped object with a handle

FIREWALL INSTALLATION PORT - location on the CPU below CORES where FIREWALLS are scored

INFECTED CORE - Scoring areas on the CPU that are highlighted with LEDs

INITIATION LINE - The line 8 feet from the CPU

LAST - League Against Science & Technology

MALWARE MAYHEM - The name of Team 6328’s concept game

PLAYER STATION - the station where human players interact with CODE to give to the ROBOTS on the field

POSITIONING PERIOD - The 30 second period following TELEOP

PROTECTED ZONES - Areas on the field in which defense is penalized

ROBOT - the mechanism used to complete the tasks

SECURITY CONTEXT - A trapezoidal platform which stores the FIREWALLS prior to deployment

SHARED CACHE - Scoring location for the Coopertition mission

SHARING CODE - This is used to allow the alliances to cooperate

TELEOP - The time after AUTO where players control the ROBOT (2:15)

UNINFECTED CORE - Scoring areas on the CPU that are not highlighted with LEDs

RANKING SYSTEM

For Malware Mayhem, the points your alliance earns in each match are your team’s score for that match. Teams are ranked by their total score, which combines the scores from all of their matches at that event. In addition, teams earn a 50 point bonus if they win the match, or a 25 point bonus (to each alliance) if the match is tied.

  1. PLAYER STATION
  2. SHARED CACHE
  3. SECURITY CONTEXT
  4. FIREWALL
  5. SHOOTING LINE
  6. INITIATION LINE
  7. OVERFLOW
  8. CORE
  9. FIREWALL INSTALLATION PORT
  10. CLIMBING ZONE
  11. RETRIEVAL ZONE
  12. COLLECTION ZONE

Game Video:

Interview Presentation:
Malware Mayhem (2).pdf (3.8 MB)

All of the questions that the judges asked us after our presentation were game-specific questions, like clarifications and further inquiries about how our game worked.

6328’s Game Design Challenge team had a very fun time going through all of the steps in developing our game and really enjoyed the alternative challenge for our team’s strategy/scouting sub-teams, who made up the majority of the Game Design team. We would be happy to answer any clarification questions about our game or any game specific questions.

15 Likes

I think 6328 wins just based on the super villain organization L.A.S.T.:sunglasses:

6 Likes

A couple of updated videos: the programming team has been pushing the limits and making great strides.

Slalom Path FRC6328 Autonav Challenge Run 8 (6.8 Seconds)
Improvement of 2.4 seconds

Bounce Path FRC 6328 Autonav Challenge Run 3 (7.8 Seconds)
Improvement of 1.2 seconds

The 45 point Interstellar Accuracy challenge has evaded us thus far, but we’ll keep pushing.

12 Likes