FRC 6328 Mechanical Advantage 2020-2021 Build Thread

We certainly considered both options. From a mechanical perspective, the pneumatic hood is both simpler and more consistent than a servo; it’s much easier to guarantee that we will always hit the exact same positions using pneumatics. That sacrifices some adjustability, but varying flywheel speed as we do now gives us essentially equivalent accuracy. Controlling the hood in software was also a challenge, particularly with the fourth position. However, once we got it working (which was also an excellent introduction to state machines), it can move between positions much more quickly than many servo-based mechanisms.

6 Likes

Just dropping in to share some goodness. Still a lot of room for improvement with all of these, but wanted to share some of the best times from our recent runs. All Autonav stuff will be up in a few days as well. Feel free to ask any questions!

Galactic Search FRC6328 Red A (4.0 seconds)

Galactic Search FRC6328 Red B (3.7 seconds)

Barrel Racing Path FRC6328 HyperDrive Challenge Run 1 (10.1 Seconds)

Bounce Path FRC6328 HyperDrive Challenge Run 1 (8.0 Seconds)

Lightspeed Path FRC6328 HyperDrive Challenge Run 1 (15.5 Seconds)

Slalom Path FRC6328 HyperDrive Challenge Run 1 (7.0 Seconds)

PowerPort Challenge FRC6328 (69 Pts)

Interstellar Accuracy Challenge FRC6328 (44 Pts!!!?!?!?!11ahadsgah)

22 Likes

And here are some Autonav paths; feel free to ask any questions!

Slalom Path FRC6328 Autonav Challenge Run 7 (9.2 Seconds)

Bounce Path FRC 6328 Autonav Challenge Run 2 (9.0 Seconds)

Barrel Racing Path FRC6328 Autonav Challenge Run 3 (9.9 Seconds)

13 Likes

By my unofficial count (i.e. looking things up here) that puts 6328 at 3rd, 3rd, 2nd, T-17th, T-4th in the world across the various challenges. Very impressive!

14 Likes

Thanks Karthik!

The team has been putting in an incredible amount of work over the last few months. It’s been challenging, but we’re enjoying it!

There’s still lots of room for improvement on many of these challenges, so hopefully in the last week or so we can push our way a little higher. I’m hoping more teams share their scores on the leaderboard as we get closer to the deadline; it’s very fun to see the scores stack up against everyone else in the world!

9 Likes

6328 is really showing the world you don’t need swerve to be top tier. Great work you guys and can’t wait to see how much you can improve these already nuts scores!

6 Likes

Hello all,
After a long few months of hard work, team 6328’s Game Design Challenge group has had its interview with the judges and is currently waiting for the results of the first round. So, we would like to take this opportunity to share with the community everything that we submitted and presented at our game design interview.

For our submission, we had a game overview summary, notable field elements, expected robot actions, an ELEMENT description, a CAD model of the field, supplementary information, a game video, and our presentation from the interview. Here it is:

Game Overview:
In MALWARE MAYHEM, two alliances of 3 robots each work to protect FIRST against a malware attack from the anti-STEM organization LAST (League Against Science & Technology) and ultimately save future FRC game files including those for the top-secret water game. Each alliance and their robots work towards collecting lines of Code and deploying them in the Infected Cores in the CPU. Near the end of the match, robots race to share Code into a Shared Cache, and at the end of the match robots collect Firewalls and install them into the CPU.

During the 15 second autonomous period, robots must follow pre-programmed instructions. Alliances score points by:

  1. Moving from the Initiation Line
  2. Deploying lines of Code into the Infected Cores of the CPU
  3. Deploying lines of Code into the Uninfected Cores of the CPU

During the 75 second tele-op period, drivers take control of their robots. Alliances score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU

During the 30 second positioning period, robots score points by:

  1. Continuing to deploy Code into the Infected Cores of the CPU
  2. Continuing to deploy Code into the Uninfected Cores of the CPU
  3. Deploying 5 lines of Code into the Shared Cache to achieve stage 1
  4. Deploying 10 lines of Code into the Shared Cache to achieve stage 2
  5. Deploying 15 lines of Code into the Shared Cache to achieve stage 3

During the 30 second deployment period, robots score points by:

  1. Continuing to deploy Code into the Shared Cache
  2. Installing Firewalls into the CPU
  3. Hanging from installed Firewalls

The alliance with the highest score at the end of the match wins.

Notable Field Elements:
CPU: A large structure that separates the two halves of the field, consisting of seven CORES on each side and a central opening (DATA BUS) for interaction between alliances. Five CORES span the upper level of the CPU, and there are an additional two CORES, one on either side of the DATA BUS, near the edge of the field. Throughout the match, different CORES will be randomly highlighted with LEDs, marking them as infected. The three high CORES in the center have a slot below the scoring opening for installation of the FIREWALL. An additional FIREWALL scoring area is a slot below the lower CORES on both sides of the DATA BUS.

Security Context: A platform that houses 3 FIREWALL units: One directly in front, and two angled on either side. The Security Context is located in front of the alliance station walls of the same alliance color.

Shared Cache: A scoring location on the opposite alliance’s driver station wall. The scoring location is a single window the same size as the windows on the CPU, directly over the center of the alliance wall. CODE can only be scored here during the POSITIONING PERIOD and DEPLOYMENT PERIOD, and both alliances must reach the same scoring tiers for each to receive points.

Player Stations: There are four PLAYER STATIONS located in the four corners of the field. There are blue and red PLAYER STATIONS on the blue alliance station wall and the same for the red alliance station wall, for a total of two per alliance. The PLAYER STATIONS on the opposite side of the field of the alliance station have PROTECTED ZONES around them, while the stations on the same side do not.

Expected Robot Actions:
Auto: During the Autonomous Period, teams are tasked with moving off of the INITIATION LINE such that no part of their ROBOT is over the line. Teams may score in CORES, and they may collect code from the PLAYER STATION.

Tele-Operated Period: During the 75 second Tele-Operated portion of the game, each team’s main task is to shoot CODE into the CORES of the CPU. The primary locations where teams can receive CODE are the PLAYER STATIONS on their own side or on the opposite side of the field.

Positioning Period: During the POSITIONING PERIOD, which spans the second-to-last 30 seconds of the match, teams may retrieve FIREWALLS from the SECURITY CONTEXT, and prepare for the DEPLOYMENT PERIOD. Teams may also race to the other end of the field and deposit CODE into the SHARED CACHE. However, any robot that passes through the DATA BUS and does not return by the end of the POSITIONING PERIOD must remain on that side for the remainder of the match.

Deployment Period: During the DEPLOYMENT PERIOD, the final 30 seconds of the match, teams are no longer allowed to pass through the DATA BUS. During this period, teams are expected to deploy FIREWALLS by inserting a FIREWALL into the CPU and pulling themselves completely off of the ground. Teams on the opposite alliance’s side are expected to play defense on climbing ROBOTS, deploy CODE into the SHARED CACHE, and/or try to prevent FIREWALL deployment. However, teams must be careful not to cross the INITIATION LINE.

ELEMENT:
The ELEMENT in MALWARE MAYHEM is the main game piece in our game. The ELEMENT represents lines of anti-malware CODE, and ROBOTS must deploy the CODE into the CORES, INFECTED CORES, or the SHARED CACHE to earn points. The ELEMENT itself is an 18” plastic linkage chain with 2” links. Each CODE will have a magnetic component on one end, allowing it to be automatically scored as it passes through scoring locations. All CORES will have a magnetic detector that detects any CODE scored through that specific CORE. The ELEMENT enters the field through the four PLAYER STATIONS. During auto, CODE will not be located on the field; instead, all CODE will be either pre-loaded into the ROBOT or collected from the PLAYER STATIONS.

CAD Model:

Supplementary Information:

POINT VALUES

SHARED CACHE Stages

Points from shooting in the SHARED CACHE come in 3 stages. Each alliance in a match will only receive points after a certain number of lines of CODE have been deployed into the SHARED CACHE by both alliances. Once both alliances have deployed 5 lines of CODE each, stage 1 is achieved. Once both alliances have deployed 10 lines of CODE each, stage 2 is achieved. Once both alliances have deployed 15 lines of CODE each, stage 3 is achieved.
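The stage rule above boils down to a small check: a stage unlocks only once BOTH alliances have each deployed the threshold number of CODE. A quick illustrative sketch (hypothetical Java, not part of the actual submission):

```java
// Sketch of the SHARED CACHE stage rule: a stage is achieved only once
// both alliances have each deployed 5/10/15 lines of CODE.
class SharedCache {
    /** Returns the achieved stage (0-3) given each alliance's deployed CODE count. */
    static int stage(int redCode, int blueCode) {
        int both = Math.min(redCode, blueCode); // gated by the lower alliance
        if (both >= 15) return 3;
        if (both >= 10) return 2;
        if (both >= 5) return 1;
        return 0;
    }
}
```

Note that one alliance deploying 20 lines alone unlocks nothing; the cooperative gate is the minimum of the two counts.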

ROLE OF HUMAN PLAYER

Both alliances can have a maximum of three human players, and there are two primary positions that the players can take during the game. One position is to put CODE onto the field through the PLAYER STATIONS on each side of the field; since there are two PLAYER STATIONS per alliance, two human players fill this position. The other position is to move CODE from the OVERFLOW to the PLAYER STATIONS on their alliance station side. A blue human player will be at one OVERFLOW and a red human player at the other. An OVERFLOW is where CODE scored in the CPU ends up.

Initiation Line (6): At the start of each match, each team’s ROBOT must start on the INITIATION LINE, a long strip of white spanning the entire width of the field on their alliance’s side. ROBOTS moving entirely off of the INITIATION LINE into either the climbing zone or scoring zone earn points for their alliance. During the DEPLOYMENT PERIOD, the last 30 seconds of the match, this line acts as a defense line: robots of the opposite alliance are not allowed to cross it so as not to interfere with climbing.

Collection Line (5): This is the line that spans the width of the field at the end of the retrieval zones on each side of the field.

Collection Zone (12): This is the area of the field that spans from the COLLECTION LINE to the alliance wall. While in this zone, ROBOTS cannot attempt to deploy CODE. A ROBOT must move completely outside of the zone to be able to deploy CODE.

Scoring Zone: During TELEOP and the POSITIONING PERIOD, this is the area of the field that is in between the COLLECTION LINE (5) and CPU on either side of the field. During the DEPLOYMENT PERIOD, this zone is in between the COLLECTION LINE, and the INITIATION LINE (6) on either side of the field.

PROTECTED ZONES

Retrieval Zone (11): These zones are the marked areas on the field surrounding the PLAYER STATIONS on the opposite side of an alliance’s driver stations. At all times during the match, defense is not permitted on ROBOTS that are in these zones.

Climbing Zone (10): This zone becomes active only during the DEPLOYMENT PERIOD. This is the area of the field in between the two INITIATION LINES on either side of the field. During the DEPLOYMENT PERIOD, no defense is permitted on a ROBOT in this zone.

GAME SPECIFIC PENALTIES

Shooting outside the SCORING ZONE: 5 points per CODE segment
Deploying the FIREWALL prior to the DEPLOYMENT PERIOD: 15 points
Contacting an opponent in their RETRIEVAL ZONE: 5 points per contact
Passing through the DATA BUS during the DEPLOYMENT PERIOD: 15 points
Contacting an opposing ROBOT in the CLIMBING ZONE during the DEPLOYMENT PERIOD: Climb awarded to opponent

GAME RULES: ROBOTS

G1. ROBOT height, as measured when it’s resting normally on a flat floor, may not exceed 36 in. (~91 cm) above the carpet during the match, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

ROBOT CONSTRUCTION RULES

R1. The ROBOT (excluding bumpers) must have a frame perimeter that consists of fixed, non-articulated structural elements of the ROBOT.

R2. In the starting configuration (the physical configuration in which a ROBOT starts a match), no part of the ROBOT shall extend outside the vertical projection of the frame perimeter, with the exception of its bumpers.

R3. A ROBOT’S starting configuration may not have a frame perimeter greater than 120 in. (~304 cm) and may not be more than 36 in. (~91 cm) tall.

R4. ROBOTS may not extend beyond their frame perimeter, with the exception of ROBOTS intersecting their alliance’s defense zone during the DEPLOYMENT PERIOD.

R5. The ROBOT weight must not exceed 125 lbs. (~56 kg). When determining weight, the basic ROBOT structure and all elements of all additional mechanisms that might be used in a single configuration of the ROBOT shall be weighed together. The following items are excluded:

  1. ROBOT bumpers
  2. ROBOT battery

R6. A bumper is a required assembly which attaches to the ROBOT frame. Bumpers protect ROBOTS from damaging/being damaged by other ROBOTS and field elements.

GAME TERMS

AUTO - The first 15 seconds of the match where the ROBOT moves by using autonomous code

CODE - An 18 inch long chain that is the main game piece, ELEMENT

CORE - One of 7 scoring locations that are on each side of the CPU

CPU - The central scoring unit that divides the two halves of the field

DATA BUS - The tunnel under the CPU that allows for interaction between alliances

DEPLOYMENT PERIOD - The last 30 seconds of the match, following the POSITIONING PERIOD

FIREWALL - A suitcase-shaped object with a handle

FIREWALL INSTALLATION PORT - location on the CPU below CORES where FIREWALLS are scored

INFECTED CORE - Scoring areas on the CPU that are highlighted with LEDs

INITIATION LINE - The line 8 feet from the CPU

LAST - League Against Science & Technology

MALWARE MAYHEM - The name of Team 6328’s concept game

PLAYER STATION - The station where human players interact with CODE to give to the ROBOTS on the field

POSITIONING PERIOD - The 30 second period following TELEOP

PROTECTED ZONES - Areas on the field in which defense is penalized

ROBOT - the mechanism used to complete the tasks

SECURITY CONTEXT - A trapezoidal platform which stores the FIREWALLS prior to deployment

SHARED CACHE - Scoring location for the Coopertition mission

SHARING CODE - This is used to allow the alliances to cooperate

TELEOP - The time after AUTO where players control the ROBOT (2:15)

UNINFECTED CORE - Scoring areas on the CPU that are not highlighted with LEDs

RANKING SYSTEM

For MALWARE MAYHEM, the points your alliance earns in a match are your team’s score for that match. Teams are ranked by their total score: the sum of their alliance’s scores across all of their matches at that event. In addition, teams earn a 50 point bonus if they win the match, or a 25 point bonus (to each alliance) if the match is tied.
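The ranking math above can be sketched as a couple of small functions (an illustrative Java sketch, not part of the submission):

```java
// Sketch of the MALWARE MAYHEM ranking system: a team's event total is the
// sum of its alliance's match scores plus win/tie bonuses.
class Ranking {
    /** Ranking contribution of one match: alliance score plus any bonus. */
    static int matchPoints(int allianceScore, int opponentScore) {
        if (allianceScore > opponentScore) return allianceScore + 50; // win bonus
        if (allianceScore == opponentScore) return allianceScore + 25; // tie bonus
        return allianceScore; // no bonus for a loss
    }

    /** Event total across matches; each row is {allianceScore, opponentScore}. */
    static int eventTotal(int[][] matches) {
        int total = 0;
        for (int[] m : matches) total += matchPoints(m[0], m[1]);
        return total;
    }
}
```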

  1. PLAYER STATION
  2. SHARED CACHE
  3. SECURITY CONTEXT
  4. FIREWALL
  5. SHOOTING LINE
  6. INITIATION LINE
  7. OVERFLOW
  8. CORE
  9. FIREWALL INSTALLATION PORT
  10. CLIMBING ZONE
  11. RETRIEVAL ZONE
  12. COLLECTION ZONE

Game Video:

Interview Presentation:
Malware Mayhem (2).pdf (3.8 MB)

All of the questions that the judges asked us after our presentation were game specific questions, like clarifications and further inquiries about how our game worked.

6328’s Game Design Challenge team had a very fun time going through all of the steps in developing our game and really enjoyed the alternative challenge for our team’s strategy/scouting sub-teams, who made up the majority of the Game Design team. We would be happy to answer any clarification questions about our game or any game specific questions.

15 Likes

I think 6328 wins just based on the super villain organization L.A.S.T.:sunglasses:

6 Likes

A couple of updated videos: the programming team has been pushing the limits and making great strides.

Slalom Path FRC6328 Autonav Challenge Run 8 (6.8 Seconds)
Improvement of 2.4 seconds

Bounce Path FRC 6328 Autonav Challenge Run 3 (7.8 Seconds)
Improvement of 1.2 seconds

The 45 point Interstellar Accuracy challenge has evaded us thus far, but we’ll keep pushing.

12 Likes

It is SUPER frustrating when you can’t miss a shot. I have at least three videos where we literally miss the last shot every time. You’ll get it!

3 Likes

Hey everyone,
We have a few new videos to share and some technical updates.

Videos

After many dozens of attempts and over 6 hours of footage, we finally achieved a 45 on Interstellar Accuracy!

We’ve also been doing more runs of the Power Port Challenge. Our best score is now 84 points:

We also have a couple of updated videos for the Hyperdrive challenge:

Bounce Path (7.7 seconds)

Slalom Path (7.0 seconds)

Interstellar Accuracy

One of our biggest problems with this challenge was that in order to make the most accurate shots possible, we needed to shoot power cells individually. This allows the flywheel to fully recover and stabilize at the correct RPM. However, our robot doesn’t have a system for achieving this in a purely mechanical way. We tried several methods for working around that:

  • Our first attempt was to have the drivers manually pulse the feed system. That works OK, but is subject to human error. The timing for this is quite precise, so accidentally getting two or more shots in a row was common. This also means power cells are stopped in the feed system, which affects their velocity before they enter the flywheel and decreases our accuracy.
  • For several runs, we resorted to loading power cells from the reintroduction zone individually. We could stay within the five minute time limit, but this method is obviously tedious for the drive team. It also meant that we couldn’t attempt as many runs of the challenge since each took quite a while. Much of this challenge comes down to luck, so we wanted to be able to run as many attempts as possible.

Given these problems, we worked to create a better solution - software indexing! This is an alternative shooting mode which attempts to feed balls individually without any extra input from the driver. Here’s a video of the system in action:

This goes by quickly, so here’s a breakdown of what’s happening:

  1. As the flywheel spins up, the rollers are running in reverse. This makes sure that all power cells are fully within the hopper and not the feed system.
  2. The software detects when the flywheel speed is within 3% of the setpoint for two seconds continuously. This ensures that there is minimal remaining oscillation.
  3. The hopper and rollers run forwards to feed power cells towards the flywheel.
  4. Once the flywheel speed error is greater than 3% of the setpoint, we know that the first power cell is being shot. The rollers continue running for a tenth of a second to ensure that it doesn’t lose any extra velocity.
  5. To prevent the next shot, the rollers are immediately reversed and the second power cell is forced to stay in the hopper (or it’s ejected from the feed system if it made it that far). This process repeats for as long as the driver holds the shooting button.
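The five steps above amount to a small state machine. Here’s a rough, self-contained sketch of that logic (all names and constants are illustrative, not 6328’s actual code; the real version runs against WPILib motor controllers):

```java
// Sketch of the software-indexing state machine: reverse rollers during
// spin-up, feed once the flywheel is stable, detect a shot via speed error,
// feed 0.1 s longer, then reverse to hold the next ball.
public class SoftwareIndexer {
    enum FeedState { SPIN_UP, WAIT_STABLE, FEED, RECOVER }

    static final double TOLERANCE = 0.03;       // 3% of setpoint
    static final double STABLE_TIME = 2.0;      // seconds at speed before feeding
    static final double EXTRA_FEED_TIME = 0.1;  // keep feeding 0.1 s into the shot

    private FeedState state = FeedState.SPIN_UP;
    private double stableTimer = 0.0;
    private double extraFeedTimer = 0.0;

    /** Returns the roller command (+1 feed, -1 reverse) for this loop cycle. */
    public double update(double flywheelRpm, double setpointRpm, double dt) {
        double error = Math.abs(flywheelRpm - setpointRpm) / setpointRpm;
        switch (state) {
            case SPIN_UP:
                if (error <= TOLERANCE) { state = FeedState.WAIT_STABLE; stableTimer = 0; }
                return -1.0; // rollers reversed: keep cells in the hopper
            case WAIT_STABLE:
                if (error > TOLERANCE) { state = FeedState.SPIN_UP; return -1.0; }
                stableTimer += dt;
                if (stableTimer >= STABLE_TIME) state = FeedState.FEED;
                return -1.0;
            case FEED:
                if (error > TOLERANCE) { // speed dipped: first cell is being shot
                    state = FeedState.RECOVER;
                    extraFeedTimer = 0;
                }
                return 1.0;
            case RECOVER:
                extraFeedTimer += dt;
                if (extraFeedTimer >= EXTRA_FEED_TIME) {
                    state = FeedState.SPIN_UP; // reverse to block the next cell
                    return -1.0;
                }
                return 1.0;
        }
        return 0.0;
    }
}
```

Calling `update()` from the 50 Hz robot loop while the shoot button is held would cycle the machine once per power cell, which matches the repeat-while-held behavior described in step 5.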

AutoNav

Throughout this season, we’ve made progress in setting up our motion profiling system to track long paths accurately and quickly. However, we knew that even all of these improvements weren’t pushing our robot to its mechanical limits. Previously, we were using the following constraints on the profiles, which allowed for accurate tracking:

  • Max velocity = 130 in/s
  • Max acceleration = 130 in/s^2
  • Max centripetal acceleration = 120 in/s^2

Based on our testing, we determined the theoretical capabilities of the robot. These are our new constraints:

  • Max velocity = 150 in/s
  • Max acceleration = 250 in/s^2

We removed the centripetal acceleration constraint as well. While these constraints allow for much faster profiling, they come at the expense of good odometry tracking. This means that each path had to be manually tuned via trial and error, correcting points based on the robot’s true location. We’ve started looking at better solutions for tracking paths quickly, but this solution works well enough for now given our time pressure.

In addition to simply correcting the waypoints, we found that a useful tool was to break the path into separate sections. Before each, we reset the odometry system based on the robot’s approximate location. This keeps our path from drifting so far as to be impossible to correct. We also have to keep an eye on the battery voltage, since each path is very sensitive to this (as expected given how janky this method is). Generally, we considered a battery “low” if it was under 12.6 volts. Below are examples of each of the paths using this new method. Videos of the robot following them were posted previously.
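Structurally, the sectioned approach is just "reset, follow, repeat." Here’s a minimal sketch of that idea with invented interfaces standing in for our odometry and profile-following code (the real implementation uses WPILib types):

```java
// Sketch of splitting a long path into sections, resetting odometry to a
// known approximate pose before each so drift can't accumulate end-to-end.
import java.util.List;

class SectionedPathRunner {
    interface Odometry { void reset(double x, double y, double headingDeg); }
    interface ProfileFollower { void follow(String trajectoryName); }

    static class Section {
        final double x, y, headingDeg;
        final String trajectoryName;
        Section(double x, double y, double headingDeg, String trajectoryName) {
            this.x = x; this.y = y; this.headingDeg = headingDeg;
            this.trajectoryName = trajectoryName;
        }
    }

    private final Odometry odometry;
    private final ProfileFollower follower;

    SectionedPathRunner(Odometry o, ProfileFollower f) { odometry = o; follower = f; }

    void run(List<Section> sections) {
        for (Section s : sections) {
            odometry.reset(s.x, s.y, s.headingDeg); // bound accumulated drift
            follower.follow(s.trajectoryName);
        }
    }
}
```

The key design point is that each section starts from a trusted pose, so waypoint corrections only have to fight the drift of one section rather than the whole run.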

Each path took 3+ hours of tuning, but we were able to match times with the Hyperdrive challenge within 0.2 seconds on each path:

  • Barrel Racing - 9.9 AutoNav, 10.1 Hyperdrive
  • Slalom - 6.8 AutoNav, 7.0 Hyperdrive
  • Bounce - 7.8 AutoNav, 7.7 Hyperdrive

We think that for both AutoNav and Hyperdrive, we have essentially reached the mechanical limits of this drivetrain. We’ll keep everyone updated as we make further improvements, and we’re happy to answer any questions.

27 Likes

So, it’s been a couple months since this thread was active, but this is part of 6328’s goals for the 2021 season so I’m including it here.

Over the last couple of years, 6328 has been fortunate enough to work with some truly amazing student mentors from WPI. They’ve made the leap from being an FRC student to being an FRC mentor incredibly well, and we’ve all worked hard to (I hope!) support them in that process. Over the last few months, 6328 has been documenting what has and hasn’t worked in making that student-to-mentor leap and just published our recommended transition program to our website here: FRC Student-to-Mentor Transition Program.

Every team is different, of course, so if you think this would be useful to your team please adapt it as needed. The vision is to be able to support students as they grow from FLL age to FTC/FRC age to mentor age, expanding on FIRST’s progression of programs.

The 6328 students are working on a season wrap up post before closing this year’s Open Alliance build thread, but our 2021 season isn’t quite finished yet. Good luck to all the teams that are still working on final interviews and presentations!

24 Likes

This is fantastic, thank you for sharing

3 Likes

Hey y’all! It’s been a while since we’ve done an update, so just wanted to circle back and talk a little bit on a high level about what the team has been up to.


“Dave’s going away party, July 2020”

Since the last update, the team has been fortunate enough to add 3(!!!) new banners to the collection, plus a half dozen more awards: “Excellence in Engineering” for NEON, “Engineering Design” for Game Design, At-Home NEON group champions, NE district-level Chairman’s, and NE District Champs Chairman’s, along with a Dean’s List semi-finalist. Most importantly, we’ve graduated a handful of seniors, all heading off to great colleges.

I couldn’t be more proud of everything the team has accomplished this year. It feels surreal to type out that list of awards, and I’m incredibly grateful the team has pushed to make it happen.


“6328 NE Awards Watch Party - May 2021”

Even though summer has quickly arrived and we’ve said our goodbyes (for now) to our seniors, the season still isn’t over. This weekend, our Chairman’s team will be presenting for Championship Chairman’s, the youngest team to do so in Detroit. The team has been working incredibly hard to polish up the presentation, and with the help of some amazing friends, it’s coming together as the best presentation the team has done to date. Fingers crossed for them; bring it home for NE!


“bling heckin bling”

Lastly, even though it wasn’t required this year, our media team went above and beyond to create a 2021 Chairman’s video, so please check it out below and give our lead editor some praise (he’s only 13 years old, already blowing my mind and bringing a tear to my eye!)!

We’ll have another real season wrap-up with data about the entire OA process and some thoughts on this whole social experiment (jk), plus (finally) a thread-wrapping post from the students after the Champs award ceremony. As always, please feel free to ask any questions!

Love yall,
David

22 Likes

That brought a tear to my eye

2 Likes

Thanks for the promo, but what can I say, it’s not in 4K :grinning_face_with_smiling_eyes:

3 Likes

Hey all,

This is Jack, a new(ish) CAD & design mentor on 6328 - I joined the team for the 2021 season and I’m having a blast so far. I absolutely love the team’s willingness to openly share how they operate, and I’m incredibly excited to be a part of the build threads in future seasons. As always, please never hesitate to reach out with any questions or comments you may have - I’d love to have a conversation!

It’s been a while since our last update, so we figured it would be good to let y’all know what’s gone on since.

BattleCry Prep

Software (by @jonahb55)

In the weeks leading up to BattleCry, our software team was busy working on a variety of improvements based on our experience during (and after) the At-Home challenges.

We polished up our six ball auto routine during build season for the At-Home challenges, but hadn’t touched it in several months. Using some of the shooting and odometry improvements we made for the Interstellar Accuracy and Power Port challenges, we revised the routine once again. It’s now more reliable and flexible than before, including being able to deal with a wider variety of starting positions. There are still occasional issues with getting our auto-aim function to converge in time, but this should be solvable with some more tuning.

We’re also in the process of developing more robust tools for robot logging, but decided to start small for BattleCry. One of the key systems we wanted to be able to examine in more detail was odometry, particularly looking at how it’s affected by real field conditions (hard use, defense, occasionally running into walls or power cells, etc.) We created a simple logging setup which saves pose data every tenth of a second to a CSV file on a USB stick. The robot code is here, which can remain relatively simple given that data is only used from a single subsystem. It also records when data from the Limelight is being used to reset pose. This information is visualized using a custom web app packaged into a local HTML file (available here). See below for our results during BattleCry.
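The logging side of that setup is intentionally tiny: one CSV row per sample. Here’s a stripped-down sketch (the file path, column names, and formatting are illustrative; see the linked robot code for the real version):

```java
// Minimal pose logger: one header row, then one CSV row per 0.1 s sample.
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Locale;

class PoseLogger {
    private final PrintWriter out;

    PoseLogger(Writer sink) {
        out = new PrintWriter(sink);
        out.println("timestamp,x,y,heading,visionReset");
    }

    /** Append one sample; intended to be called periodically from the robot loop. */
    void log(double timestamp, double x, double y, double headingDeg, boolean visionReset) {
        out.printf(Locale.ROOT, "%.1f,%.2f,%.2f,%.2f,%b%n",
                timestamp, x, y, headingDeg, visionReset);
        out.flush(); // flush each row so data survives a sudden power-off
    }
}
```

Writing to a `Writer` rather than a hard-coded file path makes the logger easy to point at a file on the USB stick in robot code and at an in-memory buffer in tests.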

For the At-Home shooting challenges, we added new logic to automatically select one of four hood positions and calculate the appropriate flywheel speed based on odometry. This proved successful during the season, even behaving fairly well for power cells of varied wear. However, upon returning to this we found that all of our shots from behind the initiation line were systematically low. We spent many hours attempting to pinpoint the issue, and seemed to eliminate all of the likely candidates (ball wear, flywheel not up to speed, odometry error, nonfunctional motor, etc.). In an attempt to work around this, we manually compensated by adding 200-400 RPM to each shot. We’ll continue investigating now that the event is over.
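The select-a-hood-then-pick-a-speed logic looks roughly like this. All the distance bands and RPM numbers below are made up for illustration; only the structure (four discrete hood positions, interpolated flywheel speed from odometry distance) reflects what’s described above:

```java
// Sketch: choose one of four hood positions by distance band, then linearly
// interpolate flywheel RPM within the band. Numbers are placeholders.
class ShotSelector {
    enum Hood { WALL, LINE, TRENCH, LONG }

    static class Shot {
        final Hood hood;
        final double rpm;
        Shot(Hood hood, double rpm) { this.hood = hood; this.rpm = rpm; }
    }

    static Shot select(double distanceInches) {
        if (distanceInches < 60) return new Shot(Hood.WALL, 3000);
        if (distanceInches < 140) return new Shot(Hood.LINE, interp(distanceInches, 60, 140, 3200, 4000));
        if (distanceInches < 220) return new Shot(Hood.TRENCH, interp(distanceInches, 140, 220, 4200, 5000));
        return new Shot(Hood.LONG, 5400);
    }

    private static double interp(double x, double x0, double x1, double y0, double y1) {
        return y0 + (x - x0) * (y1 - y0) / (x1 - x0);
    }
}
```

A manual compensation like the 200-400 RPM offset mentioned above would just be an additive term on the returned `rpm`.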

Mechanical

A lot of our time was spent performing maintenance on the climber that hadn’t been used since April of 2020… Is time even real?

After such a long break, we ended up having a couple of issues getting the arms to deploy consistently due to binding in the telescoping arms. Investigating, we found that after the one event that we did get to play in 2020, our PLA hardstops were in pretty rough shape :sweat_smile:.

To remedy this, we remade the Delrin plates that are located between stages (the old ones didn’t look too great) and ensured that the rest of the parts operated smoothly.

Since we were racing against the clock at this point, we decided that a simple solution to reduce future damage would be to deploy the arms only part way and to use the rope as a hardstop instead.

After tightening some things up and giving the robot a once-over, it was time for BattleCry!

BattleCry

What an event! We had an absolute blast competing for the first time in a while. Below is a breakdown of what went down and how we handled the event:

Software (by @jonahb55)

Odometry logging proved to be an excellent addition, both for debugging and as a demonstration of what the software is capable of (I guess looking at coordinates on a dashboard isn’t for everyone). Below is a visualization of data from our last qualification match next to the video of the robot’s true position. The bright green line indicates when Limelight data is being used. Note that this tracking can be done off of the opponent target as well as our own. While the odometry tends to drift over time as expected, it’s still effectively able to reset itself using vision.

The Limelight LEDs are always on when the driver activates our auto-aim function, but using vision for resetting odometry means we’d like to be able to see the target even when driving around. We avoid keeping the LEDs on continuously as this may be annoying for those around the field, so the software can be configured to use different logic for deciding when to activate the lights. At the start of the event, they would blink once every eight seconds when the target was not visible and every three seconds once a target was spotted. However, we found that odometry often drifted significantly during wall shots where the target wasn’t visible directly. To fix this, we switched the Limelight to blink every three seconds normally and to hold the LEDs on as long as a target was visible. This allowed the robot to more consistently reset odometry while driving to the wall. Of course, we were ready to change any of these timings if they proved problematic for people near the robot.
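The revised LED policy reduces to a few lines of logic. A simplified sketch (the blink length and period here are placeholder values, and the real code also handles the earlier two-rate blink configuration):

```java
// Sketch of the revised LED logic: hold the LEDs on while a target is
// visible (or auto-aim is active), otherwise blink briefly every 3 seconds.
class LedController {
    static final double BLINK_PERIOD = 3.0;  // seconds between blinks
    static final double BLINK_LENGTH = 0.25; // how long each blink lasts

    /** Returns true if the LEDs should be lit this loop cycle. */
    static boolean ledsOn(double timeSec, boolean targetVisible, boolean autoAimActive) {
        if (autoAimActive || targetVisible) return true; // hold on for odometry resets
        return timeSec % BLINK_PERIOD < BLINK_LENGTH;    // periodic blink to search
    }
}
```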

Unfortunately, even with accurate odometry the shooting issues observed at the shop continued to affect us throughout the event. We tweaked our manual speed compensation, but found that shots from behind the initiation line continued to be unreliable. Luckily, we could work around the issue by shooting from the protected area of the wall where flywheel speed doesn’t need to be controlled as precisely. We also added an indicator to the dashboard to show when the flywheel reaches its setpoint.

Despite some missed shots, we were glad to see that the robot experienced no major issues…until finals. During finals one and three, the robot began spinning rapidly during auto. As we discovered later, the navX connected to the RIO’s MXP port had come loose after a year and a half of hard use. The software thought that it was perpetually at zero degrees, so the turning controller aggressively tried to bring it to a target angle but saw no response from the gyro. Thus, we continued spinning until the end of auto. This also meant that our odometry data was terrible, as it relies on the gyro. Below is a sped up replay of the robot’s odometry from finals three. The Limelight was able to figure out the distance to the target well enough that we made a few shots, but clearly this wasn’t the most reliable data feed.

Unfortunately, the navX wasn’t the only major issue during finals. A problem with hopper feeding, plus some CAN errors, resulted from one of the breakers for our hopper motor controllers becoming partially dislodged during the day. Between finals one and two, we swapped in a new breaker. The other problems were that the RSL and pressure sensor worked only intermittently, but we hadn’t yet identified the source of the issues before putting the robot back on the field for finals two. The pressure sensor reading is used by the software to decide when it can move our pneumatic shooter hood. Without the sensor, it refused to move the hood and thus prevented us from shooting. The only upside was that while waiting for the hood, it never got to the part of the auto routine which caused uncontrollable spinning. After finals two, we disabled the pressure check and so were able to shoot again in finals three. After the event we quickly found that the Swyft board on the RIO had come loose in addition to the navX. This caused both of the problems we observed. Once we figured out the issue, fixing it was as simple as pressing the Swyft board back into place. We’re looking at potential solutions to prevent this from happening in the future. Overall, we still like the Swyft board as it makes the individual connections much more secure compared to the stock RIO connectors.

In an unfortunate irony, the software actually detected the missing navX and printed an error message to the console. However, that message was easily lost in the stream of other output; no one noticed it, so we didn’t figure out the cause of the robot’s spinning until after the last match. The disconnected pressure sensor was similarly detectable. Ideally, we want these sorts of errors to 1) be very obvious to anyone looking at the driver station and 2) show persistently until the issue is resolved. There doesn’t seem to be a good way to achieve this using the built-in features of Shuffleboard, so after the event we wrote a custom plugin to handle it instead. The plugin adds a new widget, pictured below. The robot code can send alert messages at various levels of urgency via NetworkTables to be displayed there (these are also still printed to the console as before). Here are the relevant resources:

[image: Shuffleboard alerts widget]
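To illustrate the idea (this is a sketch of the concept, not the actual plugin or robot-side code), the robot side mostly needs to track a set of alerts, each with a severity and an active flag, and group the active ones by severity. The resulting string arrays are what would be published to NetworkTables for the widget to render:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of robot-side alert bookkeeping. Alerts are keyed by their
 * text, carry a severity, and can be toggled active/inactive. The arrays
 * returned by activeStrings() are what a NetworkTables publisher would send
 * to the dashboard widget. Names here are hypothetical.
 */
public class Alerts {
    public enum Severity { ERROR, WARNING, INFO }

    private record Alert(Severity severity, String text, boolean active) {}

    private final List<Alert> alerts = new ArrayList<>();

    /** Create or update an alert; active=false hides it from the widget. */
    public void set(Severity severity, String text, boolean active) {
        alerts.removeIf(a -> a.text().equals(text)); // replace existing entry
        alerts.add(new Alert(severity, text, active));
    }

    /** Active alert texts for one severity, e.g. for an "errors" NT entry. */
    public String[] activeStrings(Severity severity) {
        return alerts.stream()
            .filter(a -> a.active() && a.severity() == severity)
            .map(Alert::text)
            .toArray(String[]::new);
    }
}
```

Because alerts stay in the list until explicitly deactivated, the widget can keep showing the message persistently rather than letting it scroll away like a one-shot console print.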

Scouting/Strategy

Despite not being at the event in person, our scouting and strategy team did a great job guiding the team to a win. We’ve been working to involve the scouting and strategy team more and more, and this event was a huge step in the right direction. They’re currently performing a post-event analysis of BattleCry; once they’re done, they’ll update this thread with their thought processes and decision making throughout the event.

We can’t thank everyone involved with BattleCry enough for hosting such a safe and fun event. A special shout-out to our alliance partners 2168, 2262, and 1735 for forming such a collaborative and exciting alliance and helping us take home a gold medal!

[image]

What’s next?

To our pleasant surprise, we were able to recruit 16 new students to the team this year! To keep them engaged and to hit the ground running next season, 6328 will focus largely on training for the next few months. We’ll do our best to keep our website updated as these training sessions progress so that other teams can adapt or use some of our material for themselves.

Additionally, we’ve been developing the CAD for a new intake that we can (hopefully) build and use at another off-season event this year. When that design is cleaned up a bit, we’ll certainly share it here.

Please don’t hesitate to ask questions or reach out - I’ll be sure to get you in contact with the appropriate person.

Have a great day and enjoy your weekend,
Jack

30 Likes

Always love reading what y’all share with the community. It was awesome to see you guys compete AND to play you in the finals yet again (a different outcome this time, but it was a great time).

Do you guys plan on doing any robot upgrades or potentially a different robot for potential future offseason events?

6 Likes

5 seconds:
I’m sorry for laughing when the robot yeeted out of the field


Thanks for the very informative post. Great job on powering through these hard times and best of luck next season!

3 Likes

I haven’t touched Shuffleboard in a long time at this point, and I’ve never written a custom plugin for it. Is it possible to have multiple instances of this alert system on the dashboard at once? I figure one dedicated to CAN issues could be useful.

3 Likes