FRC 6328 Mechanical Advantage 2020-2021 Build Thread

Today we’ll talk about a software structure that we use to support and easily switch between multiple forms of driver and operator controls.

Background

For a while, we had a single set of operator controls. In our rookie year we built a driver station on which were mounted two Logitech joysticks for the driver plus an operator button-board with a couple of E-Stop Robotics CCI interface boards for subsystem controls. (It also had an Arduino to drive status LEDs, but for this post we’re going to focus on input controls only.) Simple enough. After a few years, though, things began to get complicated…

  • We found that for demos, dragging the whole, large driver station assembly around was not ideal. A common refrain was, “Can’t we just bring a laptop?” :man_shrugging:
  • At demos, we often wished we could have a control arrangement that let one person drive and run all the main subsystem controls “well enough”, rather than needing two people to operate all the controls.
  • We developed another, more advanced button-board operator interface, while the old one remained in use as well.
  • Some of our drivers/operators expressed a preference for handheld controllers in place of the dual joysticks or the button board.
  • For offseason competitions, we needed to be able to run a 2nd robot, sometimes with different controls than it usually used. (For example, the second robot might need to use our “other” driver station, or just a laptop and controllers.)

Trying to support these sorts of variations in the software posed quite a challenge, since doing so would mean fixed mappings no longer worked. And requiring one of our programmers to go adjust the code every time there was a need to run with a different controller was not practical. We needed to be able to “plug and go” so that in situations like driver tryouts where we gave the students an opportunity to try any of the control schemes, we could swap between them quickly.

A better solution was in order.

Dividing Up The Controls

Our controls can be logically divided into three groups, and some years there may be a fourth. Currently we have:

  • Driver controls: all the things that the driver is responsible for, including:

    • Controlling the drivetrain
    • Aiming (auto-aim button)
    • Shooter ball feed
  • Driver “overrides”: controls we need to support but that aren’t normally used on the competition field. In our original driver station, these were on toggle switches with flip-up “fighter pilot” covers, located on the operator’s console. These include:

    • Drive Disable - for locking out the drivetrain for occasions where the robot needs to be enabled but can’t move for safety reasons. We use this in the pit a lot.
    • Open Loop Drive - our normal drive is closed-loop velocity control for precision. This override runs the drivetrain open-loop instead. It’s needed for testing when the robot is on blocks on the cart, or if there’s an encoder issue that is creating a problem. (This saved us in our rookie year when we had to play a match without working encoders after a marathon emergency drivetrain repair.)
    • Limelight LED disable - for rules compliance and annoyance reduction.
  • Operator controls: all the things that the operator is responsible for:

    • Intake extend/retract and rollers on/off
    • Flywheel control
    • Climber controls - deploying and climbing

The specific controls needed vary year to year, but this sort of grouping is typical. Some robots might have operator overrides too (for example, a limit override on an elevator) but our current one does not.

Basically, supporting flexible controls comes down to mapping these control groups, and the specific elements for each, to available physical inputs from the connected controllers. Here are some arrangements we support. If a controller is in a group, that means at least one input in that group comes from that controller.

Each arrangement requires that the specific controls within the grouping are mapped to a particular controller object and physical input (button, joystick, slider, etc.) so that the robot code can use it. This requires a fair bit of flexibility, since:

  • The control inputs may come from anywhere between one and four separate controllers
  • Sometimes a control group is split across multiple controllers (as with dual joysticks)
  • Sometimes several control groups are combined onto one controller
  • If a control isn’t available, it should be quietly unavailable without requiring special effort (or causing the code to crash)

Mapping the Concept to Software

So how do we take our concept of flexible, decoupled controls inputs and turn that into working software?

Interfaces

Representing the above in software requires some abstraction, which we provide through Java Interfaces. Each controls grouping is represented as an interface that specifies methods for getting the button for a specific function, reading specific analog controller values, and so on. Even better, Java allows us to have default methods in the interface so that if a particular implementation doesn’t support one of them, the default method provides a (dummy) version of it. Here is an excerpt from our Operator OI interface:

public interface IOperatorOI {

    static final Trigger dummyTrigger = new Trigger();

    public Trigger getShooterFlywheelRunButton();

    public Trigger getShooterFlywheelStopButton();

    public default Trigger getClimbEnableSwitch() {
        return dummyTrigger;
    }

    public default double getClimbStickY() {
        return 0;
    }

    public default double getClimbStickX() {
        return 0;
    }
}

Here, we see a few things:

  • Methods that we always expect to have implementations for, including the shooter-flywheel controls.
  • Methods that might not always have implementations, like getClimbEnableSwitch, which have a default implementation that just returns a dummy Trigger. This way, if we’re on a controller that doesn’t have a mapping for that function, nothing special needs to be done; the trigger object is valid, it will just never activate.
  • Interface methods that return an analog input reading. Here, the getClimbStick methods return a double value - and as before, there’s a default implementation that just returns 0.
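
As a rough illustration of how the rest of the code consumes these methods, here is a minimal sketch of a button binding written against the interface. SpinFlywheel, StopFlywheel, and RunClimber are placeholder command names for this example, not our actual classes:

// Sketch of bindings written once against the interface. The command names
// here are placeholders for illustration.
private void configureButtonBindings() {
    operatorOI.getShooterFlywheelRunButton().whenActive(new SpinFlywheel(shooter));
    operatorOI.getShooterFlywheelStopButton().whenActive(new StopFlywheel(shooter));

    // If the active controller has no climb mapping, this returns the dummy
    // Trigger from the interface default and the binding simply never fires.
    operatorOI.getClimbEnableSwitch().whileActiveContinuous(
        new RunClimber(climber, operatorOI::getClimbStickY));
}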

The full interface definitions can be seen in our source code:

IDriverOI.java

IOperatorOI.java

IDriverOverrideOI.java

OI Implementations

Our code has a series of different implementations that each support one or more interfaces. Each takes one or two controller IDs and uses those to map buttons and other inputs appropriately. Depending on what controllers the main robot code detects - more on that later - it decides which implementation objects to instantiate. Some, like the “All In One”, are used alone, while others are used in combinations (for example, OIHandheldWithOverrides might be used with OIOperatorHandheld). Later, we’ll talk about how this selection logic works, but first let’s see what implementations our code currently provides:

| Implementation | Description | # of IDs | Implements Interfaces |
| --- | --- | --- | --- |
| OIArduinoConsole | Operator console based on an Arduino 32U4 that presents as 2 joysticks | 2 | IOperatorOI, IDriverOverrideOI |
| OIDualJoysticks | Dual Logitech Attack 3 joysticks | 2 | IDriverOI |
| OIeStopConsole | Operator console based on 2 eStop Robotics CCI boards | 2 | IOperatorOI, IDriverOverrideOI |
| OIHandheld | Xbox driver controller (drive only) | 1 | IDriverOI |
| OIHandheldWithOverrides (extends OIHandheld) | Xbox driver controller (drive and overrides) | 1 | IDriverOI, IDriverOverrideOI |
| OIOperatorHandheld | Xbox operator controller | 1 | IOperatorOI |
| OIHandheldAllInOne (extends OIHandheldWithOverrides) | Xbox driver controller (all functions) | 1 | IDriverOI, IDriverOverrideOI, IOperatorOI |

Let’s look at part of how one of these is constructed:

public class OIeStopConsole implements IDriverOverrideOI, IOperatorOI {
  private Joystick oiController1;
  private Joystick oiController2;

  private Button openLoopDrive;
  private Button driveDisableSwitch;
  private Button limelightLEDDisableSwitch;

  private Button shooterFlywheelRunButton;
  private Button shooterFlywheelStopButton;
  
  public OIeStopConsole(int firstID, int secondID) {
    oiController1 = new Joystick(firstID);
    oiController2 = new Joystick(secondID);

    openLoopDrive = new JoystickButton(oiController2, 10);
    driveDisableSwitch = new JoystickButton(oiController2, 9);
    limelightLEDDisableSwitch = new JoystickButton(oiController2, 8);

    shooterFlywheelRunButton = new JoystickButton(oiController2, 4);
    shooterFlywheelStopButton = new JoystickButton(oiController2, 3);
  }

  @Override
  public Trigger getOpenLoopSwitch() {
    return openLoopDrive;
  }

  @Override
  public Trigger getDriveDisableSwitch() {
    return driveDisableSwitch;
  }

  @Override
  public Trigger getLimelightLEDDisableSwitch() {
    return limelightLEDDisableSwitch;
  }

  @Override
  public Trigger getShooterFlywheelRunButton() {
    return shooterFlywheelRunButton;
  }

  @Override
  public Trigger getShooterFlywheelStopButton() {
    return shooterFlywheelStopButton;
  }
}

We can see that this is implementing both the operator and driver-override interfaces. Its constructor is passed two controller IDs, which it uses to build two Joystick objects. It then maps specific functions to buttons on these controllers, and provides methods for accessing these Button objects. The above is just a piece of the code to serve as an example. You can look at the code in our robot/oi folder to see the complete definition of this and the other implementations.

Finding & Using Current Configuration

On the driver station, joystick devices show up on the USB tab. The code needs to evaluate which controllers are connected, and decide based on that which OI objects to instantiate. Most important, all of the code accesses the OI through three interface objects:

private IDriverOI driverOI;
private IDriverOverrideOI driverOverrideOI;
private IOperatorOI operatorOI;

At the end of controller evaluation, all three of these need to have OI objects assigned. Because these are Interface objects, each can be assigned any object that implements that interface. This means that in some cases, they share the same object! For example:

  • When the eStop console is used, both the operatorOI and driverOverrideOI use the same OIeStopConsole object. The driverOI object is separate (and could be of several types).
  • When the All-In-One controller is used, all three objects are the same OIHandheldAllInOne object. This is possible since it implements all 3 interfaces!
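
In code, those assignments end up looking something like this. This is only a sketch; the port numbers and exact constructor arguments are placeholders:

// eStop console case: one object fills two interface roles.
OIeStopConsole console = new OIeStopConsole(2, 3);
operatorOI = console;
driverOverrideOI = console;
driverOI = new OIDualJoysticks(0, 1);

// All-in-one case: a single object fills all three roles.
OIHandheldAllInOne allInOne = new OIHandheldAllInOne(0);
driverOI = allInOne;
driverOverrideOI = allInOne;
operatorOI = allInOne;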

More importantly, the code that uses these interface objects needs to know nothing about what the actual controllers are. That detail is entirely abstracted - the rest of the code simply gets and uses buttons and input values as needed, via the public methods specified in the interface.

If no mapping can be found, there is a dummy OI object that gets assigned and provides dummy methods for the things that don’t already have a default in their interfaces. This is really just a fallback to prevent crashing - it doesn’t do anything useful.

Our selection logic is in the method updateOIType in RobotContainer.java. This is called at the start of teleopInit and autonomousInit, and every 10 seconds in disabledPeriodic. This approach makes sure that controllers can be swapped around and the code will reconfigure itself automatically without needing to be restarted.

Selection Logic

The update method follows the basic sequence:

  • Loop through connected joysticks looking for controllers that represent dedicated operator interfaces. For us, these are the eStop and Arduino consoles. If either of these is found, we know that the operator controls and driver overrides will map to these controllers.
  • Then loop through again looking for driver controllers:
    • If we see a Logitech Attack 3, store it, and wait to find the second one. When we find the second one, make an OIDualJoysticks object.
    • For Xbox controllers, store the first one found. If we find another, and don’t already have an operator interface, then the second becomes the operator interface.
  • Then sort through other possible conditions where we don’t have a complete mapping yet. Here is where we figure out that we have a handheld driver control with overrides, or an all-in-one handheld.
  • When a selection is made for operator and driver controls, a message is printed to the console to confirm what the code is using. This is very helpful when plugging and unplugging controllers!
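
To make that concrete, here is a heavily simplified sketch of the kind of name-based scan described above. The real updateOIType handles many more combinations, and the name strings and structure here are illustrative only:

// Heavily simplified sketch of controller detection; not the full updateOIType.
// Assumes operatorOI may already have been set by the dedicated-console pass.
DriverStation ds = DriverStation.getInstance();
Integer leftJoystickPort = null;
Integer firstXboxPort = null;

for (int port = 0; port < DriverStation.kJoystickPorts; port++) {
    String name = ds.getJoystickName(port);
    if (name.contains("Attack 3")) {
        if (leftJoystickPort == null) {
            leftJoystickPort = port;  // remember the first joystick, wait for the second
        } else {
            driverOI = new OIDualJoysticks(leftJoystickPort, port);
        }
    } else if (name.contains("XBOX 360")) {
        if (firstXboxPort == null) {
            firstXboxPort = port;  // first Xbox controller belongs to the driver
        } else if (operatorOI == null) {
            operatorOI = new OIOperatorHandheld(port);  // second becomes the operator
        }
    }
}

// Confirm the selection on the console so it's obvious what the code is using.
System.out.println("Driver OI: "
    + (driverOI == null ? "none" : driverOI.getClass().getSimpleName()));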

The selection logic does, necessarily, depend on the names of the controllers and the order in which they show up in the USB tab. But this means that all students need to do is connect the right controllers and drag them to the right order (generally, driver above operator, and left joystick above right if using dual joysticks), and the code sorts out what makes sense. With some simple instructions, anyone on the team can get their desired controls working.

Pay attention to the names your devices show up as; sometimes there can be more than one representation of a specific device. An example: “Controller (XBOX 360 For Windows)” and “XBOX 360 For Windows (Controller)” are both handled in our code.

Drive Modes

To further enhance the flexibility of our controls, in addition to supporting different physical controllers, we support multiple drive modes, selectable through a SmartDashboard dropdown menu. Currently, we support:

| Drive Mode | Description |
| --- | --- |
| Tank | Standard 2-joystick tank-drive |
| Split Arcade | Standard split arcade (differential drive) |
| Southpaw Split Arcade | Left-handed version of above |
| Hybrid Curvature Drive | Curvature drive with automatic low speed turning |
| Southpaw Hybrid Curvature Drive | Left-handed version of above |
| Manual Curvature Drive | Curvature drive with manual low speed turning |
| Southpaw Manual Curvature Drive | Left-handed version of above |
| Trigger Drive | Xbox trigger drive (triggers for forward/reverse, steer by joystick) |
| Trigger Hybrid Curvature | Like hybrid curvature, but trigger-operated |
| Trigger Manual Curvature | Like manual curvature, but trigger-operated |

This covers pretty much all of the common drive modes and supports both right- and left-handed drivers, whatever their preference. Note as well that the drive mode is selected entirely independently of the controllers in use. The source that supports this can be found in our DriveWithJoysticks.java file.
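
For anyone curious how a selector like this gets wired up, here is a minimal sketch using WPILib’s SendableChooser. The JoystickMode enum, class name, and option list are illustrative placeholders; the full version lives in DriveWithJoysticks.java:

import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class DriveModeSelector {
    // Placeholder enum for illustration; the real code supports the full list above.
    public enum JoystickMode { TANK, SPLIT_ARCADE, HYBRID_CURVATURE, TRIGGER_DRIVE }

    private final SendableChooser<JoystickMode> chooser = new SendableChooser<>();

    public DriveModeSelector() {
        chooser.setDefaultOption("Split Arcade", JoystickMode.SPLIT_ARCADE);
        chooser.addOption("Tank", JoystickMode.TANK);
        chooser.addOption("Hybrid Curvature Drive", JoystickMode.HYBRID_CURVATURE);
        chooser.addOption("Trigger Drive", JoystickMode.TRIGGER_DRIVE);
        SmartDashboard.putData("Drive Mode", chooser);  // dropdown on the dashboard
    }

    public JoystickMode getMode() {
        return chooser.getSelected();  // read by the drive command each loop
    }
}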

One other 6328-specific drive feature you may see mentioned in the code is “Sniper Mode”, which we created in 2017 and have carried forward. It’s a precision maneuvering mode in which top speed is reduced to a fraction of the normal maximum, so the entire joystick analog range can be used for precise slow-speed positioning. This was ideal for placing gears on pegs in Steamworks and other similar game challenges.
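
Mechanically it’s just a scale factor applied to the commanded output while the mode is active. A quick sketch, where the 0.25 fraction is illustrative rather than our actual tuned value:

// Sketch of sniper mode: the full joystick range maps to a reduced top speed.
double scale = sniperMode ? 0.25 : 1.0;         // sniperMode toggled by a button
double leftOutput = leftJoystickValue * scale;  // precise low-speed positioning
double rightOutput = rightJoystickValue * scale;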

Final Thoughts

We’re very glad we implemented this OI abstraction layer; it enables all sorts of flexibility that would be hard to have any other way. There is some effort involved in setting up the interfaces, and if you support as many options as we have, the selection logic can be a little complex. We think it’s worth it, though: drivers get to use the controllers they prefer, we get flexibility for testing and demos, and our primary robot code is decoupled from needing to know what controllers are in use.

As always, we’re available to answer any questions!

16 Likes

To accommodate for the different gameplay this year, we decided to modify our middle shooter hood position. Our shooter hood has three positions: the outer and inner positions are actuated using a piston, and the middle position is a catch with two sheet metal hooks and a pair of solenoids.

First, we used the existing design to find the shot angle and height and brought this information (along with the linear velocity of the shot, which was determined to be 104.72 ft/s at 6000 rpm) into a design calculator. The calculator produced the shot trajectory and showed that the closest we can be to the power port at the middle hood position is 134 inches (accounting for air resistance).
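For reference, 104.72 ft/s is exactly the surface speed of a 4 in diameter wheel at 6000 rpm: v = π × (4/12 ft) × (6000/60 rev/s) ≈ 1.047 ft × 100 rev/s ≈ 104.7 ft/s. That suggests the calculation treats the ball as leaving at roughly the wheel’s surface speed; the 4 in figure is an assumption for this arithmetic rather than a documented spec.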

The programming team then tested the shooter at different distances (x axis) and their corresponding flywheel speeds (y axis), as seen here:

The middle hood position is represented by the red curve, and during testing we noted that point H, which is closer than 134 inches, missed a few times. It was then decided that the middle hood position should overlap with both the range of the trench shot and the wall shot, and that it should make shots between 80 and 190 inches. By adjusting the shot angle to 40 degrees, we were able to achieve this: at 6000 RPM, we can make a shot at 82 inches away, a huge improvement from previous calculations. This hood position also allows us to shoot from a variety of locations using software that dynamically changes the shot speed depending on the distance from the goal. So, by changing the shot velocity to 40 ft/s, we can make shots from the trench zone. This achieves our goal of creating a middle shot profile that overlaps with both other profiles, and, by adjusting the shot velocity, we should be able to shoot into the inner goal as well.

We then used this geometry to move the stop for the middle hood position in our CAD. To do this, we drew a line at a 40 degree angle to simulate the 40 degree shot angle, and used the existing solenoid location to move the catch up a few inches from its existing position.

This required the mechanical team to swap out the shooter’s two catch plates, which arrived a few days ago and were powder coated on Saturday. We did this yesterday, and the programming team is going to characterize the new shooter hood position later this week. We’re planning on posting an update then, and of course let us know if you have any questions!

13 Likes

Hi,
I’m Lizzy, a student on FRC6328! We typically have a display we set up in the pits with our information. We wanted to release all of this information online before the upcoming chairman’s interviews.

Chairman’s Essay:
6328ChairmansEssay2021.pdf (72.9 KB)

Executive Summary:
6328ExecutiveSummaryQuestions2021.pdf (63.3 KB)

Definitions:
6328CADefinitionChart.pdf (61 KB)

Outreach Tracking:
Public Outreach Tracking 2020-2021.pdf (39.5 KB)

Resource Guides:
https://littletonrobotics.org/about-6328/team-resources/

Remote Curriculum:
https://littletonrobotics.org/remote-learning/

Let me know if you have any questions!

16 Likes

Making note of the questions that were asked during our students’ Dean’s List interviews last week. (If there’s another/better place to put a copy of these, I’m happy to do that as well.)

  • Have you read your nomination submission?
  • What skills did you learn in FLL that helped your transition to FRC? (both of our DL students started in FLL)
  • What inspired you to work with XXX (each student was asked this about something specific in their nomination, followed up by a couple project-specific questions)
  • What did you learn from XXX (again, a specific project or responsibility from the nomination)?
  • What are your future goals for your FRC team?
  • How were you able to contribute to sponsor relationships?
  • What has been your greatest contribution to your FRC team?
  • What are your plans for continuing with FIRST after high school?

6328’s Chairman’s interview is scheduled for tonight, and we’ll post those questions after as well.

5 Likes

And the Chairman’s interview questions from tonight:

  • Have you come up with plans to sustain outreach in a post-pandemic world?
  • You state that all students are part of the biz team; how do you ensure that happens?
  • You talk about supporting female students; please expand on that.

And then several questions asking students to provide more details on team-specific programs and initiatives discussed in the exec summary and essay.

6 Likes

In this post we mentioned some of the changes we’ve been working on with our shooter, and we’d like to give an update. To recap, we identified an issue with the three positions of our pneumatic hood. Between wall (the lowest position) and line (the middle position), we had a dead zone where making shots was nearly impossible. We adjusted the angle of the middle position to bridge this gap. To make accurate shots, we manually tune flywheel speed at various distances for each hood position and model the curve using a quadratic regression. This allows the robot to calculate the flywheel speed for any distance. After adjusting the line position, our data looks like this:

The x axis represents distance to the inner port and the y axis represents flywheel speed in RPM. Blue is the wall position (low), green is the line position (middle), and red is the trench position (high). The data points visible here represent the maximum usable ranges for each position. As you can see, we have eliminated the gap between wall and line, but introduced a new dead zone between line and trench. Unfortunately, that dead zone covers one of the zones for Interstellar Accuracy, which is also our preferred shooting location for the Power Port Challenge. For the next iteration, we added a fourth hood position at the same location as the original line. After identifying the issue, we had the new plates designed and cut in under 24 hours:

After installing them, we found that adding back the original line position perfectly fills in our range:

The code automatically selects hood position and flywheel speed by combining pure wheel odometry and Limelight data. Adding a fourth hood position makes maneuvering the hood a little more complex, since the main lift now presses against our stops from two directions. This also means that disabling doesn’t always return the hood to a known position (it can apply enough pressure to the stops that they are unable to retract). The code will detect when that might be happening and automatically do a full reset the next time it moves the hood. It’s nothing a state machine can’t solve!
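
To illustrate the speed-for-distance piece, here is a rough sketch of evaluating a per-position quadratic model and picking a hood position by distance band. The coefficients and range limits below are placeholders rather than our measured data, and only three positions are shown for brevity:

public enum HoodPosition { WALL, LINE, TRENCH }

// Sketch only: the range limits here are placeholders, not our characterization data.
public HoodPosition selectHoodPosition(double distanceInches) {
    if (distanceInches < 80) {
        return HoodPosition.WALL;
    } else if (distanceInches < 190) {
        return HoodPosition.LINE;
    } else {
        return HoodPosition.TRENCH;
    }
}

// Sketch only: the coefficients here are placeholders, not our regression results.
public double flywheelSpeedRpm(HoodPosition position, double distanceInches) {
    switch (position) {
        case WALL:  return quadratic(distanceInches, 0.50, -20.0, 3500.0);
        case LINE:  return quadratic(distanceInches, 0.20, 10.0, 2500.0);
        default:    return quadratic(distanceInches, 0.10, 12.0, 2800.0);
    }
}

// RPM = a*d^2 + b*d + c, with (a, b, c) from the quadratic regression for that position.
private double quadratic(double d, double a, double b, double c) {
    return a * d * d + b * d + c;
}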

For the shooting challenges this year, we’ve been using a flat power port. For a 3D power port, the distance to the target when calculating flywheel speed is relative to the inner port. For a flat target, we simply move the “inner port” forward. This means that our characterization data for most positions works in its current state. The only exception is the wall, because our previous data relied on bouncing the power cell off of the inside of the structure and into the inner port. Clearly, this doesn’t work for a flat target. We created a separate model for the wall position when using a flat power port (colored purple):

The minimum distance is much lower than the model for the 3D power port because the robot can get physically closer to the “inner” target. This introduces a new dead zone between wall and line, but it doesn’t affect any important shooting locations. After making these improvements, we are now turning our attention to the two shooting challenges. We’ll be doing formal runs of Interstellar Accuracy soon, but our initial testing has shown good results. For the Power Port Challenge, we decided that an autonomous routine would be best for minimizing the shooting time. Here’s an example (some speed improvements to come):

The robot stays on the left side of the field throughout the challenge, meaning all of the power cells return in the center or to the right. That means human players can collect and return them quickly without running across the field. Like Interstellar Accuracy, we’ll be moving into formal runs of this challenge soon. We’re happy to answer any questions.

13 Likes

Love those flywheel calibration curves. I don’t think very many teams have shared that data, so thank you. Looks like you made the right call adding a fourth hood angle stop.

Did you consider replacing the pneumatic cylinder with a linear servo so you could choose any hood angle you like?

9 Likes

We certainly considered both options. From a mechanical perspective, the pneumatic hood is both simpler and more consistent than a servo. It’s much easier to guarantee that we will always hit the exact same positions using pneumatics. That sacrifices some variability, but changing flywheel speed as we do now gives us an essentially equivalent accuracy. Maneuvering the hood in software is also a challenge, particularly with the fourth position. However, once we got it working (which was also an excellent introduction to state machines), it moves very quickly between positions compared to many servo-based mechanisms.

6 Likes

Just dropping in to share some goodness. Still a lot of room for improvement with all of these, but wanted to share some of the best times from our recent runs. All Autonav stuff will be up in a few days as well. Feel free to ask any questions!

Galactic Search FRC6328 Red A (4.0 seconds)

Galactic Search FRC6328 Red B (3.7 seconds)

Barrel Racing Path FRC6328 HyperDrive Challenge Run 1 (10.1 Seconds)

Bounce Path FRC6328 HyperDrive Challenge Run 1 (8.0 Seconds)

Lightspeed Path FRC6328 HyperDrive Challenge Run 1 (15.5 Seconds)

Slalom Path FRC6328 HyperDrive Challenge Run 1 (7.0 Seconds)

PowerPort Challenge FRC6328 (69 Pts)

Interstellar Accuracy Challenge FRC6328 (44 Pts!!!?!?!?!11ahadsgah)

22 Likes

And some Autonav paths, any questions feel free to ask!

Slalom Path FRC6328 Autonav Challenge Run 7 (9.2 Seconds)

Bounce Path FRC 6328 Autonav Challenge Run 2 (9.0 Seconds)

Barrel Racing Path FRC6328 Autonav Challenge Run 3 (9.9 Seconds)

14 Likes

By my unofficial count (i.e. looking things up here) that puts 6328 at 3rd, 3rd, 2nd, T-17th, T-4th in the world across the various challenges. Very impressive!

14 Likes

Thanks Karthik!

The team has been putting in an incredible amount of work over the last few months. It’s been challenging, but we’re enjoying the new challenge!

There’s still lots of room for improvement on a lot of these challenges for us, so hopefully in the last week or so we can push our way a little higher. I’m hoping more teams share their scores on the leaderboard as we get closer to the deadline; it’s very fun to see the scores populate against everyone else in the world!

9 Likes

6328 is really showing the world you don’t need swerve to be top tier. Great work you guys and can’t wait to see how much you can improve these already nuts scores!

6 Likes

Hello all,
After a long few months of hard work, team 6328’s Game Design Challenge group has had its interview with the judges and is currently waiting for the results from the first round. So, we would like to take this opportunity to share with the community everything that we submitted and presented at our interview for game design.

For our submission, we had a game overview summary, notable field elements, expected robot actions, an ELEMENT description, a CAD model of the field, supplementary information, a game video, and our presentation from the interview. Here it is:

Game Overview:
In MALWARE MAYHEM, two alliances of 3 robots each work to protect FIRST against a malware attack from the anti-STEM organization LAST (League Against Science & Technology) and ultimately save future FRC game files including those for the top-secret water game. Each alliance and their robots work towards collecting lines of Code and deploying them in the Infected Cores in the CPU. Near the end of the match, robots race to share Code into a Shared Cache, and at the end of the match robots collect Firewalls and install them into the CPU.

During the 15 second autonomous period, robots must follow pre-programmed instructions. Alliances score points by:

  1. Moving from the Initiation Line
  2. Deploying lines of Code into the Infected Cores of the CPU
  3. Deploying lines of Code into the Uninfected Cores of the CPU

During the 75 second tele-op period, drivers take control of their robots. Alliances score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU

During the 30 second positioning period, robots score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU
  3. Deploying 5 lines of Code into the Shared Cache to achieve stage 1
  4. Deploying 10 lines of Code into the Shared Cache to achieve stage 2
  5. Deploying 15 lines of Code into the Shared Cache to achieve stage 3

During the 30 second deployment period, robots score points by:

  1. Continuing to deploy Code into the Shared Cache
  2. Installing Firewalls into the CPU
  3. Hanging from installed Firewalls

The alliance with the highest score at the end of the match wins.

Notable Field Elements:
CPU: A large structure that separates the two halves of the field, consisting of seven CORES on each side and a central opening (DATA BUS) for interaction between alliances. A total of 5 CORES span the upper level of the CPU, and there are an additional 2 CORES on either side of the DATA BUS near the edge of the field. Throughout the match, different CORES will be randomly highlighted with LEDs, marking them infected. The three high CORES in the center have a slot below the scoring opening for installation of the FIREWALL. An additional scoring area for the FIREWALL is a slot below the lower CORES on both sides of the DATA BUS.

Security Context: A platform that houses 3 FIREWALL units: One directly in front, and two angled on either side. The Security Context is located in front of the alliance station walls of the same alliance color.

Shared Cache: A scoring location on the opposite alliance’s driver station wall. The scoring location is a single window the same size as the windows on the CPU and is directly over the center of the alliance wall. CODE can only be scored here during the POSITIONING PERIOD and DEPLOYMENT PERIOD, and teams will have to reach the different scoring tiers to receive points, which are awarded equally to both alliances.

Player Stations: There are four PLAYER STATIONS located in the four corners of the field. There are blue and red PLAYER STATIONS on the blue alliance station wall and the same for the red alliance station wall, for a total of two per alliance. The PLAYER STATIONS on the opposite side of the field of the alliance station have PROTECTED ZONES around them, while the stations on the same side do not.

Expected Robot Actions:
Auto: During the Autonomous Period, teams are tasked with moving off of the INITIATION LINE such that no part of their ROBOT is over the line. Teams may score in CORES, and they may collect code from the PLAYER STATION.

Tele-Operated Period: During the 75 second Tele-Operated portion of the game, the team’s main task is to shoot CODE into the CORES of the CPU. The primary locations from which teams can receive CODE are the PLAYER STATIONS on their own side and on the opposite side of the field.

Positioning Period: During the POSITIONING PERIOD, which spans the second-to-last 30 seconds of the match, teams may retrieve FIREWALLS from the SECURITY CONTEXT, and prepare for the DEPLOYMENT PERIOD. Teams may also race to the other end of the field and deposit CODE into the SHARED CACHE. However, any robot that passes through the DATA BUS and does not return by the end of the POSITIONING PERIOD must remain on that side for the remainder of the match.

Deployment Period: During the DEPLOYMENT PERIOD (the final 30 seconds), teams are no longer allowed to pass through the DATA BUS. During this period, teams are expected to deploy FIREWALLS by inserting a FIREWALL into the CPU and pulling themselves completely off of the ground. Teams on the opposite alliance’s side are expected to play defense on climbing ROBOTS, deploy CODE into the SHARED CACHE, and/or try to prevent FIREWALL deployment. However, teams must be careful not to cross the INITIATION LINE.

ELEMENT:
The ELEMENT in MALWARE MAYHEM is the main game piece in our game. The ELEMENT represents lines of anti-malware CODE, and ROBOTS must deploy the CODE into the CORES, INFECTED CORES, or the SHARED CACHE to earn points. The ELEMENT itself is an 18” plastic linkage chain with 2” links. Each CODE will have a magnetic component on one end, allowing it to be automatically scored as it passes through scoring locations. All CORES will have a magnetic detector that will detect any CODE that is scored through that specific CORE. The ELEMENT enters the field through the four PLAYER STATIONS on the field. During auto, CODE will not be located on the field; instead all CODE will be either pre-loaded into the ROBOT or collected from the PLAYER STATIONS.

CAD Model:

Supplementary Information:

POINT VALUES

SHARED CACHE Stages

Points from shooting in the SHARED CACHE come in 3 stages. Each alliance in a match will only receive points after a certain number of lines of CODE have been deployed into the SHARED CACHE by both alliances. Once both alliances have deployed 5 lines of CODE each, stage 1 is achieved. Once both alliances have deployed 10 lines of CODE each, stage 2 is achieved. Once both alliances have deployed 15 lines of CODE each, stage 3 is achieved.

ROLE OF HUMAN PLAYER

Both alliances can have a maximum of three human players and there are two primary positions that the players can take during the game. One of the positions will be to put CODE onto the field through the PLAYER STATIONS on each side of the field, and since there are two PLAYER STATIONS per alliance, there will be two human players for this position. The other position of the human players is to move CODE from OVERFLOW to the PLAYER STATIONS on their alliance station side. A blue human player will be at one OVERFLOW and the red human player will be at the other. An OVERFLOW is where scored CODE in the CPU ends up.

Initiation Line (6): At the start of each match, each team’s ROBOT must start on the INITIATION LINE, a long white strip spanning the entire width of the field on their alliance’s side of the field. ROBOTS moving entirely off of the INITIATION LINE into either the climbing zone or scoring zone will earn points for their team. During the DEPLOYMENT PERIOD (the last 30 seconds of the match), this line acts as a defense line: robots of the opposite alliance are not allowed to cross it so as not to interfere with gameplay (climbing).

Collection Line (5): This is the line that spans the width of the field at the end of the retrieval zones on each side of the field.

Collection Zone (12): This is the area of the field that spans from the COLLECTION LINE to the alliance wall. While in this zone, ROBOTS cannot attempt to deploy CODE. A ROBOT must move completely outside of the zone to be able to deploy CODE.

Scoring Zone: During TELEOP and the POSITIONING PERIOD, this is the area of the field that is in between the COLLECTION LINE (5) and CPU on either side of the field. During the DEPLOYMENT PERIOD, this zone is in between the COLLECTION LINE, and the INITIATION LINE (6) on either side of the field.

PROTECTED ZONES

Retrieval Zone (11): These zones are the marked areas on the field surrounding the PLAYER STATIONS on the opposite side of an alliance’s driver stations. At all times during the match, defense is not permitted on ROBOTS that are in these zones.

Climbing Zone (10): This zone becomes active only during the DEPLOYMENT PERIOD. This is the area of the field in between the two INITIATION LINES on either side of the field. During the DEPLOYMENT PERIOD, no defense is permitted on a ROBOT in this zone.

GAME SPECIFIC PENALTIES

| Violation | Penalty |
| --- | --- |
| Shooting outside the Scoring Zone | 5 points per CODE segment |
| Deploying the FIREWALL prior to the DEPLOYMENT PERIOD | 15 points |
| Contacting an opponent in their RETRIEVAL ZONE | 5 points for every contact |
| Passing through the DATA BUS during the DEPLOYMENT PERIOD | 15 points |
| Contacting an opposing ROBOT in the CLIMBING ZONE during the DEPLOYMENT PERIOD | Climb awarded to opponent |

GAME RULES: ROBOTS

G1. ROBOT height, as measured when it’s resting normally on a flat floor, may not exceed 36 in. (~91 cm) above the carpet during the match, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

ROBOT CONSTRUCTION RULES

R1. The ROBOT (excluding bumpers) must have a frame perimeter that consists of fixed, non-articulated structural elements of the ROBOT.

R2. In the starting configuration (the physical configuration in which a ROBOT starts a match), no part of the ROBOT shall extend outside the vertical projection of the frame perimeter, with the exception of its bumpers.

R3. A ROBOT’S starting configuration may not have a frame perimeter greater than 120 in. (~304 cm) and may not be more than 36 in. (~91 cm) tall.

R4. ROBOTS may not extend beyond their frame perimeter, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

R5. The ROBOT weight must not exceed 125 lbs. (~56 kg). When determining weight, the basic ROBOT structure and all elements of all additional mechanisms that might be used in a single configuration of the ROBOT shall be weighed together. The following items are excluded:

  1. ROBOT bumpers
  2. ROBOT battery

R6. A bumper is a required assembly which attaches to the ROBOT frame. Bumpers protect ROBOTS from damaging/being damaged by other ROBOTS and field elements

GAME TERMS

AUTO - The first 15 seconds of the match where the ROBOT moves by using autonomous code

CODE - An 18 inch long chain that is the main game piece, ELEMENT

CORE - One of 7 scoring locations that are on each side of the CPU

CPU - The central scoring unit that divides the two halves of the field

DATA BUS - The tunnel under the CPU that allows for interaction between alliances

DEPLOYMENT PERIOD - the last 30 second period following POSITIONING PERIOD

FIREWALL - A suitcase-shaped object with a handle

FIREWALL INSTALLATION PORT - location on the CPU below CORES where FIREWALLS are scored

INFECTED CORE - Scoring areas on the CPU that are highlighted with LEDs

INITIATION LINE - The line 8 feet from the CPU

LAST - League Against Science & Technology

MALWARE MAYHEM - The name of Team 6328’s concept game

PLAYER STATION - The station where human players interact with CODE to give to the ROBOTS on the field

POSITIONING PERIOD - The 30 second period following TELEOP

PROTECTED ZONES - Areas on the field in which defense is penalized

ROBOT - the mechanism used to complete the tasks

SECURITY CONTEXT - A trapezoidal platform which stores the FIREWALLS prior to deployment

SHARED CACHE - Scoring location for the Coopertition mission

SHARING CODE - This is used to allow the alliances to cooperate

TELEOP - The time after AUTO where players control the ROBOT (2:15)

UNINFECTED CORE - Scoring areas on the CPU that are not highlighted with LEDs

RANKING SYSTEM

For MALWARE MAYHEM, your team’s score for each match is the number of points your alliance earns in that match. Teams are ranked by their total score, which is the sum of their alliance’s scores across all of their matches at the event. In addition, teams earn a 50 point bonus if they win the match, or a 25 point bonus (to each alliance) if the match is tied.

  1. PLAYER STATION
  2. SHARED CACHE
  3. SECURITY CONTEXT
  4. FIREWALL
  5. SHOOTING LINE
  6. INITIATION LINE
  7. OVERFLOW
  8. CORE
  9. FIREWALL INSTALLATION PORT
  10. CLIMBING ZONE
  11. RETRIEVAL ZONE
  12. COLLECTION ZONE

Game Video:

Interview Presentation:
Malware Mayhem (2).pdf (3.8 MB)

All of the questions that the judges asked us after our presentation were game-specific questions, like clarifications and further inquiries about how our game worked.

6328’s Game Design Challenge team had a very fun time going through all of the steps in developing our game and really enjoyed the alternative challenge for our team’s strategy/scouting sub-teams, who made up the majority of the Game Design team. We would be happy to answer any clarification questions about our game or any game specific questions.

15 Likes

I think 6328 wins just based on the super villain organization L.A.S.T.:sunglasses:

6 Likes

A couple of updated videos; the programming team has been pushing the limits and making great strides.

Slalom Path FRC6328 Autonav Challenge Run 8 (6.8 Seconds)
Improvement of 2.4 seconds

Bounce Path FRC 6328 Autonav Challenge Run 3 (7.8 Seconds)
Improvement of 1.2 seconds

The 45 point Interstellar Accuracy challenge has evaded us thus far, but we’ll keep pushing.

12 Likes

It is SUPER frustrating when you can’t miss a shot. I have at least three videos where we literally miss the last shot every time. You’ll get it!

3 Likes

Hey everyone,
We have a few new videos to share and some technical updates.

Videos

After many dozens of attempts and over 6 hours of footage, we finally achieved a 45 on Interstellar Accuracy!

We’ve also been doing more runs of the Power Port Challenge. Our best score is now 84 points:

We also have a couple of updated videos for the Hyperdrive challenge:

Bounce Path (7.7 seconds)

Slalom Path (7.0 seconds)

Interstellar Accuracy

One of our biggest problems with this challenge was that in order to make the most accurate shots possible, we needed to shoot power cells individually. This allows the flywheel to fully recover and stabilize at the correct RPM. However, our robot doesn’t have a system for achieving this in a purely mechanical way. We tried several methods for working around that:

  • Our first attempt was to have the drivers manually pulse the feed system. That worked OK, but was subject to human error. The timing for this is quite precise, so accidentally getting two or more shots in a row was common. This also meant power cells were stopped in the feed system, which affected their velocity before they entered the flywheel and decreased our accuracy.
  • For several runs, we resorted to loading power cells from the reintroduction zone individually. We could stay within the five minute time limit, but this method is obviously tedious for the drive team. It also meant that we couldn’t attempt as many runs of the challenge since each took quite a while. Much of this challenge comes down to luck, so we wanted to be able to run as many attempts as possible.

Given these problems, we worked to create a better solution - software indexing! This is an alternative shooting mode which attempts to feed balls individually without any extra input from the driver. Here’s a video of the system in action:

This goes by quickly, so here’s a breakdown of what’s happening:

  1. As the flywheel spins up, the rollers are running in reverse. This makes sure that all power cells are fully within the hopper and not the feed system.
  2. The software detects when the flywheel speed is within 3% of the setpoint for two seconds continuously. This ensures that there is minimal remaining oscillation.
  3. The hopper and rollers run forwards to feed power cells towards the flywheel.
  4. Once the flywheel speed error is greater than 3% of the setpoint, we know that the first power cell is being shot. The rollers continue running for a tenth of a second to ensure that it doesn’t lose any extra velocity.
  5. To prevent the next shot, the rollers are immediately reversed and the second power cell is forced to stay in the hopper (or it’s ejected from the feed system if it made it that far). This process repeats for as long as the driver holds the shooting button.
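
Here is a simplified sketch of that sequence as a small state machine. The 3% and timing thresholds match the description above; the shooter and hopper method names, and the setpointRpm field, are placeholders for illustration:

// Simplified sketch of software indexing; subsystem method names are placeholders.
private enum IndexingState { SPIN_UP, WAIT_FOR_STABLE, FEED, SHOT_IN_PROGRESS }

private IndexingState state = IndexingState.SPIN_UP;
private final Timer timer = new Timer();

public void execute() {
    double error = Math.abs(shooter.getSpeedRpm() - setpointRpm) / setpointRpm;

    switch (state) {
        case SPIN_UP:
            hopper.run(-1.0);  // run rollers in reverse so no cells sit in the feeder
            timer.reset();
            timer.start();
            state = IndexingState.WAIT_FOR_STABLE;
            break;

        case WAIT_FOR_STABLE:
            hopper.run(-1.0);
            if (error > 0.03) {
                timer.reset();  // must stay within 3% of the setpoint continuously
            }
            if (timer.get() > 2.0) {  // stable for two full seconds
                state = IndexingState.FEED;
            }
            break;

        case FEED:
            hopper.run(1.0);  // feed power cells toward the flywheel
            if (error > 0.03) {  // speed dropped: a cell is being shot
                timer.reset();
                state = IndexingState.SHOT_IN_PROGRESS;
            }
            break;

        case SHOT_IN_PROGRESS:
            hopper.run(1.0);  // keep feeding briefly so the cell isn't slowed down
            if (timer.get() > 0.1) {
                timer.reset();
                state = IndexingState.WAIT_FOR_STABLE;  // reverse and wait to stabilize again
            }
            break;
    }
}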

AutoNav

Throughout this season, we’ve made progress in setting up our motion profiling system to track long paths accurately and quickly. However, we knew that even with all of these improvements we weren’t pushing our robot to its mechanical limits. Previously, we were using the following constraints on the profiles, which allowed for accurate tracking:

  • Max velocity = 130 in/s
  • Max acceleration = 130 in/s^2
  • Max centripetal acceleration = 120 in/s^2

Based on our testing, we determined the theoretical capabilities of the robot. These are our new constraints:

  • Max velocity = 150 in/s
  • Max acceleration = 250 in/s^2

We removed the centripetal acceleration constraint as well. While these constraints allow for much faster profiling, they come at the expense of good odometry tracking. This means that each path had to be manually tuned via trial and error, correcting points based on the robot’s true location. We’ve started looking at better solutions for tracking paths quickly, but this solution works well enough for now given our time pressure.
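
For reference, constraints like these map onto WPILib trajectory generation roughly as follows. WPILib works in meters, so the inch values are converted; the package paths are the 2021-era ones, and this is a sketch rather than our exact setup:

import edu.wpi.first.wpilibj.trajectory.TrajectoryConfig;
import edu.wpi.first.wpilibj.trajectory.constraint.CentripetalAccelerationConstraint;
import edu.wpi.first.wpilibj.util.Units;

// Old constraints: 130 in/s max velocity, 130 in/s^2 max acceleration,
// 120 in/s^2 max centripetal acceleration.
TrajectoryConfig oldConfig =
    new TrajectoryConfig(Units.inchesToMeters(130), Units.inchesToMeters(130))
        .addConstraint(new CentripetalAccelerationConstraint(Units.inchesToMeters(120)));

// New constraints: 150 in/s, 250 in/s^2, no centripetal constraint.
TrajectoryConfig newConfig =
    new TrajectoryConfig(Units.inchesToMeters(150), Units.inchesToMeters(250));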

In addition to simply correcting the waypoints, we found that a useful tool was to break the path into separate sections. Before each, we reset the odometry system based on the robot’s approximate location. This keeps our path from drifting so far as to be impossible to correct. We also have to keep an eye on the battery voltage, since each path is very sensitive to this (as expected given how janky this method is). Generally, we considered a battery “low” if it was under 12.6 volts. Below are examples of each of the paths using this new method. Videos of the robot following them were posted previously.
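
A small sketch of that per-section reset, where the resetOdometry helper on our drive subsystem and the pose values are placeholders:

// Sketch: before running each path section, snap odometry to an approximate known pose.
// Pose2d, Rotation2d, and Units are from WPILib's geometry/util packages.
Pose2d sectionStart =
    new Pose2d(Units.inchesToMeters(60), Units.inchesToMeters(90), Rotation2d.fromDegrees(0));
drive.resetOdometry(sectionStart);
// ...then schedule the trajectory-following command for this section.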

Each path took 3+ hours of tuning, but we were able to match times with the Hyperdrive challenge within 0.2 seconds on each path:

  • Barrel Racing - 9.9 AutoNav, 10.1 Hyperdrive
  • Slalom - 6.8 AutoNav, 7.0 Hyperdrive
  • Bounce - 7.8 AutoNav, 7.7 Hyperdrive

We think that for both AutoNav and Hyperdrive, we have essentially reached the mechanical limits of this drivetrain. We’ll keep everyone updated as we make further improvements, and we’re happy to answer any questions.

27 Likes

So, it’s been a couple months since this thread was active, but this is part of 6328’s goals for the 2021 season so I’m including it here.

Over the last couple of years, 6328 has been fortunate enough to work with some truly amazing student mentors from WPI. They’ve made the leap from being an FRC student to being an FRC mentor incredibly well, and we’ve all worked hard to (I hope!) support them in that process. Over the last few months, 6328 has been documenting what has and hasn’t worked in making that student-to-mentor leap and just published our recommended transition program to our website here: FRC Student-to-Mentor Transition Program.

Every team is different, of course, so if you think this would be useful to your team please adapt it as needed. The vision is to be able to support students as they grow from FLL age to FTC/FRC age to mentor age, expanding on FIRST’s progression of programs.

The 6328 students are working on a season wrap up post before closing this year’s Open Alliance build thread, but our 2021 season isn’t quite finished yet. Good luck to all the teams that are still working on final interviews and presentations!

24 Likes

This is fantastic, thank you for sharing

3 Likes