wake up new jonah post just dropped
Good stuff as always, appreciate you sharing.
Could you please tell us more about the simulation software, and how we can get it?
Specifically showing the robot driving around the field.
There are two parts to our simulation system — the robot code (which runs the physics sim) and the visualizer (the field diagram and graphs). I’ll also include this link to WPILib’s simulation tutorials. It’s a great resource for teams just looking to get started with simulation. With that said, here’s how we do things:
For the robot code, we structure each of our subsystems such that all of the hardware interaction (the “IO layer”) is isolated from the main control logic. This is key in allowing our logging framework (AdvantageKit) to function, since we can replay data from a log without a hardware implementation. More details about the structure are available here (the “Subsystems” section). A side effect of being able to “turn off” the hardware is that we can also replace it with a different implementation. For example, our drive subsystem supports both Spark MAX and Talon SRX controllers depending on the selected robot. This same concept extends naturally to a physics sim implementation. We treat the simulated robot as a separate robot with its own set of constants and IO implementations, but all of the control logic is the same as any real robot (hence its usefulness when testing). Here are a few useful links:
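The IO-layer pattern described above can be sketched in plain Java. Everything here is illustrative: the interface, the class names, and the crude steady-state "physics" are stand-ins for the team's actual AdvantageKit IO interfaces and WPILib's sim classes.

```java
// Hypothetical sketch of an IO-layer subsystem. The subsystem talks only to
// an interface; real hardware and a physics sim are interchangeable.
interface FlywheelIO {
    void setVoltage(double volts);   // command the motor
    double getVelocityRpm();         // read back sensor data
}

// Simulated implementation: a toy steady-state model standing in for a real
// physics sim. The kV constant is made up for illustration.
class FlywheelIOSim implements FlywheelIO {
    private static final double KV = 500.0; // rpm per volt (hypothetical)
    private double velocityRpm = 0.0;

    @Override public void setVoltage(double volts) {
        velocityRpm = KV * volts; // instantaneous response, no dynamics
    }

    @Override public double getVelocityRpm() {
        return velocityRpm;
    }
}

// The control logic never knows which implementation it has, so the same
// code runs on the real robot, in sim, and in log replay.
class Flywheel {
    private final FlywheelIO io;

    Flywheel(FlywheelIO io) {
        this.io = io;
    }

    void runAtVolts(double volts) {
        io.setVoltage(volts);
    }

    boolean atSpeed(double targetRpm) {
        return Math.abs(io.getVelocityRpm() - targetRpm) < 50.0;
    }
}
```

Because the subsystem only ever sees the interface, swapping `FlywheelIOSim` for a hardware implementation (or removing hardware entirely during log replay) requires no changes to the control logic.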
All of the visualizations are produced with Advantage Scope, which connects to our logging framework. This means that all of the same views work on the real robot. WPILib also includes a Field2d widget which accomplishes something similar for odometry data specifically.
Let me know if you have any questions.
Thanks! That is an impressive system, and I appreciate the rundown. I’ll show this to the rest of the team for off-season.
Before Granite State, we checked for compliance with Rule 611, the one that causes your Friendly Robot Inspector to ask for your voltmeter. To prove your frame is isolated from your electrical system, you turn the main breaker on and then measure the resistance from each of the power input terminals to the frame (at several points). Normally it's very high and all is well. So we did our test, and we passed… barely. We were measuring approximately 3.5 kΩ to both the 12V and ground connections. This is much lower than we'd expect to see, and just barely clears the specified minimum of 3 kΩ.
A cursory inspection didn’t uncover any obvious faults and all subsystems were fully operational. Since we were passing, and time was short, we let it go. We did pass inspection, although the inspector commented on the low reading (but as our mechanical folks like to say - “clearance is clearance”). However, we were determined to track down the fault before the next event, as something was not right.
This fault can require some time-consuming effort to locate, so it’s best to check for it well before your events. I thought it might be instructive to go through the process and what we ultimately found, for those teams who might not have had to deal with this before.
FRC-legal electrical components are isolated - controllers, motors, etc. are not supposed to have any connection from their electrical terminals to their cases, mounting points, etc. But there are a number of ways that we can end up with a connection to the frame:
Let’s first envision a few different kinds of faults with the help of this extremely simplified schematic:
The input terminals are on the left, with the robot’s electrical system represented as a resistor R1. If we imagine the frame to the right, there are three possible points where we could have a connection:
Before we get too deep, it’s worth checking the obvious. Run a systems check - is everything working? Do a visual inspection, using a light to peer into dark areas. Look for loose wires, signs of abrasion, wires catching on mechanisms. Focus especially where wires attach to moving elements - intakes, pivots, etc. - or pass near moving parts.
Next, rule out custom circuits. Completely disconnect anything like vision cameras, LEDs, proximity sensors, etc. If that makes the problem go away, add them back one by one until you find the culprit, then figure out how it’s making contact.
Finally, we have to start isolating motor and pneumatics circuits. If you’ve got a short to the 12V input - A in the diagram above - you may be able to just pull breakers until you find the circuit with the issue, since disconnecting a breaker isolates the positive side. More likely, though, you’re going to need to disconnect both + and - on each circuit in turn, in order to fully isolate it and rule it out as the culprit. This might mean pulling wires from your PDP/PDH. Our motor controllers have PowerPoles on them (for ease of swapping in the event of a failure) so for most circuits it was a bit simpler - we could disconnect those pairs one by one. Of course, this approach left the “pigtail” with the PowerPole on it still connected to the power distribution, so we were assuming the fault wasn’t in that part. If there’s no significant change in reading when you disconnect a circuit, you can put it back and move on to the next.
In most cases there’s likely a single fault, but in the worst case you might have to consider the possibility of multiple faults.
After painstakingly working our way through each motor circuit without luck, we reached the very last one, our "hopper" motor controller… and the instant it was disconnected, the reading jumped 1000x to over 3 MΩ. So the fault was clearly on that circuit. Next we reconnected the controller and disconnected the motor… still good. That isolated the fault to a NEO 550 motor mounted to an UltraPlanetary gearbox. We replaced that assembly and all was well.
We disassembled the faulty UltraPlanetary and discovered that if we loosened the M3x8mm bolts that held the plate onto the motor just slightly, the fault disappeared. These are the standard, recommended bolts, but somehow one was contacting an internal component (or coil) and causing our frame short. That motor also still works just fine, but is flagged and will never again go on a competition robot, as we don’t know what might be damaged.
If you’ve been through this and have other tips and tricks, please share in the comments!
As we complete the final preparations before shipping our robot to Houston, I’d like to take a moment to reflect on some software topics from our last event at Greater Boston.
With a new motorized hood to control, we had to rely on accurate vision and odometry data more than ever before. As I’ve discussed previously, we use an averaging system for odometry where the weight of the vision data vs. existing drive data is tunable (and also depends on the robot’s angular velocity). Tuning the gains for that algorithm is tricky in the shop, since defense and match strategy play a huge role in how quickly we need to maneuver and how much we’ll be pushed around between shots.
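As a rough sketch of the kind of weighted averaging described above: the base gain and the angular-velocity scaling below are hypothetical stand-ins for the team's tuned constants, and the pose is reduced to a single dimension for clarity.

```java
// Hedged sketch of blending drive odometry with vision measurements.
// The gain values here are illustrative, not the team's tuned numbers.
class OdometryFuser {
    private static final double BASE_VISION_GAIN = 0.1; // assumed value

    // Nudge the drive-odometry estimate toward a vision measurement.
    // The faster the robot spins, the less we trust vision (latency makes
    // measurements stale while rotating), so the gain is scaled down.
    static double fuse(double driveEstimate, double visionMeasurement,
                       double angularVelRadPerSec) {
        double gain = BASE_VISION_GAIN / (1.0 + Math.abs(angularVelRadPerSec));
        return driveEstimate + gain * (visionMeasurement - driveEstimate);
    }
}
```

The key tunable here is how aggressively the estimate snaps toward vision: too low and drive drift accumulates under defense, too high and noisy vision data makes the pose jitter.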
For example, we faced some very impressive defense in QM 38, which the odometry system wasn’t properly tuned to handle — the drive data drifted significantly from reality, and it didn’t adjust quickly enough when receiving vision updates. This resulted in several volleys of missed shots. I think I’ve now examined the log of this match for longer than the rest of the matches combined…
Based on data from QM 38, we were able to retune the odometry system before our next match. Using the replay feature of our logging framework, we could try a variety of gains and see how they would have performed if we ran them on the field. Here’s an example of the robot adjusting odometry based on vision data — the top shows the real calculations performed during a match, and the bottom shows a retuned system.
As we worked on refining the vision system based on log data, we also realized that it would be useful to visualize the raw corner positions (and top-down translations) from the Limelight. The data was included in the log files, but we didn't have a built-in way to visualize it with Advantage Scope. After the event, we added a generic 2D point visualizer. We can now view visualizations like the one below using just a log file — no preprocessing or video editing required. The colored points show the corner data from the Limelight, and the white points are the top-down translations to each corner (the input data used for circle fitting).
We’re currently in the process of analyzing log data from the event to evaluate any other improvements we can make to the vision system (like better rejection of invalid targets).
For playoffs, we deployed a new three cargo auto routine that picked up cargo from our partner 5563. We also ejected an opponent cargo into the hangar for good measure. Here’s an example of the full routine in action:
We had a large suite of auto routines prepared for the event (I believe @Connor_H would like me to say that there are technically “30” routines). However, this particular auto wasn’t one of them. As we had to write the full routine during lunch, simulation was essential in checking that the sequence would function correctly:
We’re continuing to explore options for new auto routines, and will have more details soon.
As with our first event, we’ve compiled some fun statistics based on the robot’s log files. The detailed data is available in this document for anyone curious to dig deeper. I’d like to highlight this graph in particular, comparing a few statistics between our two events:
I think it's fascinating to try to explain some of these comparisons. Our loop cycle count, distance driven, and power usage were all similar. The shot count increased by 104, which is probably an accurate reflection of our increasing scoring ability (largely thanks to our new motorized hood). Intake actuations increased by a factor of four because we changed the operator control scheme — the intake automatically retracted when it wasn't being used to avoid penalties. The oddest change to me is the decreased number of flywheel rotations. My best guess is that we just never ran it as much in the pit for testing. The majority of flywheel rotations happened off-field at Granite State, while the majority happened on-field at Greater Boston. I also enjoy examining the distance driven broken down by match:
We can see that in QM 38 (with the heavy defense), we drove significantly farther than in the preceding matches. The semifinals and finals also brought some heavy driving and rapid cycles, which is nicely reflected in the data.
In running this analysis, we replaced the count of hood actuations with a total measure of degrees traveled. Throughout the event, the hood moved 13867° (about 38.5 rotations). Here’s a useless statistic — that means the hood moved 0.06% as far as the flywheel. I’m so happy that we can finally find answers to important questions like this.
This is a feature that we added before Granite State but just forgot to mention. I guess it’s better late than never…
Now that we’re logging data from every match and practice session, we realized that it would be very useful to associate the battery in the robot to each log file. If we ever saw some suspicious behavior, we could test the battery more thoroughly or pull it out of our competition rotation. To accomplish this, we mounted a barcode scanner to the back of the robot, pointed at a QR code taped to the battery. Each battery is given a unique identifier based on the year, and the scanned code is saved as metadata in each log file.
The scanner connects to the RIO over USB, and we wrote a BatteryTracker class to interface with it. While we haven't had any catastrophic problems that would require this data, we can use it to compile more fun statistics (did you really not see that coming?).
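Here's a minimal sketch of what a class like `BatteryTracker` might do once a scan arrives. The real class talks to the USB scanner; this toy version only validates the year-number ID format mentioned above (e.g. "2022-005") before recording it as metadata.

```java
// Illustrative sketch of battery-ID bookkeeping; scanner I/O is omitted.
class BatteryTracker {
    // IDs follow a year-number pattern like "2022-005" per the post.
    private static final java.util.regex.Pattern ID_PATTERN =
        java.util.regex.Pattern.compile("\\d{4}-\\d{3}");

    private String batteryId = "UNKNOWN";

    // Accept a scanned string only if it looks like a valid battery ID.
    boolean recordScan(String scanned) {
        if (scanned != null && ID_PATTERN.matcher(scanned).matches()) {
            batteryId = scanned;
            return true;
        }
        return false; // bad scan; keep the previous value
    }

    String getBatteryId() {
        return batteryId;
    }
}
```

The returned ID would then be written into the log file's metadata so every log can be joined back to a specific battery.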
This graph includes all of the data from Granite State, Greater Boston, as well as practice and tuning sessions. We favor 2021 and 2022 batteries in match play, though it seems we have a bit of a grudge against battery 2022-005. We’ll have to make sure to put a few extra cycles on it before Houston…
What a nice simple question that definitely doesn’t require dozens of log files to answer. It’s right there in the manual!
Given network latency and other complicating factors, we were curious how long the robot is actually enabled for. That's especially important during auto; in our case, our five cargo auto is designed to use all 15 seconds to the fullest — any premature disable could prevent our final shots from being fired.
With 42 matches of data, here are the measured lengths of autonomous, teleoperated, and the disabled “gap” between them:
It seems that the FMS already accounts for the possibility of latency when enabling, consistently giving an extra 0.3-0.4 seconds in auto and teleop. We haven’t seen this documented anywhere yet, but hopefully it proves useful to someone.
We’re still hard at work on improvements for Houston, so keep an eye out for more posts in the near future. We’re happy to answer any questions.
Here’s a quick update with some of our final changes before heading to Houston! During our last event, we noticed that this happened a few times:
After extensive analysis, the strategy team has concluded that this sort of behavior is apparently “bad.” To prevent this from happening in the future, we added a color sensor along the ball path, connected via a Raspberry Pi Pico using this library.
We found that the measurements of color and proximity were quite reliable — changes in lighting made very little difference to the detected color since the ball passes right in front of the sensor. To distinguish between cargo colors, we compare the red and blue channels (if one is greater than 150% of the value of the other, a valid color is detected). Here's some sample data from intaking a few balls:
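The 150% channel comparison is simple enough to show directly. This is a minimal sketch; only the threshold is taken from the post, and the channel values are whatever raw units the sensor reports.

```java
// Classify a cargo ball from raw red/blue channel readings. A color is
// only "valid" when one channel exceeds 150% of the other (per the post).
enum CargoColor { RED, BLUE, NONE }

class ColorClassifier {
    static CargoColor classify(int red, int blue) {
        if (red > blue * 1.5) return CargoColor.RED;
        if (blue > red * 1.5) return CargoColor.BLUE;
        return CargoColor.NONE; // ambiguous reading, likely no ball in view
    }
}
```

The ratio test is deliberately insensitive to overall brightness: scaling both channels by the same lighting change leaves the ratio, and therefore the classification, unchanged.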
Based on the detected color, the robot can automatically eject opponent cargo out the shooter (for the first ball) or out the intake (for the second ball). Here’s a demo of the system in action:
The controls for intaking, shooting, and ejecting are now significantly more complex, so we did a complete refactor of the feeder controls. Everything is built around a central feeder subsystem that controls the hopper, tower, and kicker wheels using a state machine.
It takes the operator controls, proximity sensors, and color sensor as inputs to decide what the feeder should be doing (including temporarily taking over other subsystems like the flywheel as necessary). Here’s our whiteboard version of the state diagram — apologies for the messiness, but hopefully you get the idea.
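As a toy illustration of a state-machine-driven feeder: the states and priority ordering below are simplified guesses for the sake of the example, not a transcription of the actual whiteboard diagram.

```java
// Hypothetical simplified feeder state machine. Inputs are operator
// buttons plus the color sensor's verdict; ejection takes priority.
enum FeederState { IDLE, INTAKING, SHOOTING, EJECTING }

class Feeder {
    FeederState state = FeederState.IDLE;

    // Called every loop cycle with the current inputs; returns the new state.
    FeederState update(boolean intakePressed, boolean shootPressed,
                       boolean opponentCargoDetected) {
        if (opponentCargoDetected) {
            state = FeederState.EJECTING;  // get rid of wrong-color cargo first
        } else if (shootPressed) {
            state = FeederState.SHOOTING;
        } else if (intakePressed) {
            state = FeederState.INTAKING;
        } else {
            state = FeederState.IDLE;
        }
        return state;
    }
}
```

Centralizing the decision in one `update` call is what lets the feeder temporarily take over other subsystems: whatever state it lands in can dictate what the flywheel and intake should be doing that cycle.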
This feature also allows the drive team to intake without needing to be as cautious around opponent cargo, which should hopefully decrease our average cycle time.
With a working color sensor, we can now revisit an autonomous routine that we first attempted before Greater Boston. This is our new three cargo auto on the alternative side of the field:
For those who aren’t familiar, this auto steals a cargo from the opposite side of the field based on the fine print in this rule:
By knocking the cargo off of its starting location, the robot can collect it without earning a tech foul. We found that the trickiest part of the auto was lining up the shot from the intake, since we need to hit the cargo on the opposite side directly in the center (otherwise it could roll unpredictably). Even resetting odometry from the hub target wasn’t quite accurate enough to do the job properly. Thus we can return to a quote from one of my earlier posts.
Our solution was to raise the hood to this ridiculous position, spin the robot 180°, use the Limelight to track the cargo, and spin back 180° (relative to the aligned position). We just barely have enough time to make this maneuver, and throughout our testing it has worked much more reliably than any other alignment method.
Once both cargo are collected, the robot uses the color sensor to detect the order of the red and blue balls. This allows it to shoot only the correct color while ejecting the opponent cargo at a low speed. The order tends to be pretty consistent, but the color sensor acts as a backup in case the balls roll differently (such as if we misaligned the intake shot a little bit).
The code for the auto is here: ThreeCargoAutoCrossMidline.java. I'll also note that the version of the auto in the video above is a little out of date. We've since improved the alignment to the cargo and stopped the blue alliance version from crashing into the wall quite as hard. We're very excited to add this new routine to our existing suite of autos.
Houston, here we come!
How are you reliably doing this? We ran into major issues attempting to track cargo and abandoned the project in favor of trajectory following, but it seems you guys are confident enough in the reliability of your LL pipeline to risk the Tech Foul if it doesn't work.
We were initially concerned about the reliability of this, but tracking the cargo for this auto is a very different problem from tracking it during tele-op or to run more complex navigation. We’re only looking for a single piece of cargo in a relatively known location (roughly centered once we turn around and 3-4 feet away). We don’t need to worry about shape detection either, since robot bumpers wouldn’t fit between us and the ball. This specific situation is really a best case scenario for tracking the cargo, which is why we’re willing to use it.
Congratulations on the division win! You all played an incredible set of matches. You’re making New England proud and representing the open alliance well.
Good luck on Einstein!
Three years ago, we won Battlecry 2019 with 6328 and their postseason rebuild. It was clear that something really special was brewing. Congrats on a well-deserved trip to Einstein.
What a rollercoaster of a week. First off, it was great to meet many of you who follow along with us and hear about the things you took inspiration from. It’s stories like that which make us love being open and continuing to work to be a better resource for the community.
Our season goals were relatively modest robot-wise and centered around having a simple and reliable robot capable of performing well at the district level. I think we've been able to push this robot much farther than any of us ever expected, and I'm so proud of the team for what we've managed to accomplish. Having won five blue banners, one of which was for our division, is an absolutely surreal feeling.
I’d like to give a huge thanks to our Roebling alliance partners 5940, 2471, and 3534. Everyone was great to work with and stepped up when they needed to.
You’ve probably seen this already, but we were very excited to get to try out some fun tricks on the biggest stage of the year.
Overall the robot performed extremely well with no major mechanical failures and minimal repairs required. Simplicity and reliability were a major goal for this year, so we were very happy to have a calm pit the majority of the time. The issues we did experience were:
You’re probably sensing a common theme here. The intake is definitely top of the list for offseason redesigns. Conveniently we would have to do that anyways if our ground clearance suddenly got slightly higher… somehow.
Here is our scouting data from the Roebling division. Normal disclaimer that we haven't validated this for accuracy; a few issues were uncovered in the strategy meeting but didn't get fixed in the raw data.
If you're interested in the raw shooting locations, that data is available here. What everything means is explained above.
ChiefDelphi_ShootingData_Export.xlsx (326.0 KB)
We will have a more detailed season recap along with some cool data comparisons between the odometry our robot had and the Zebra tags during the round robin sometime soon. We will also be sharing what we are up to in the offseason through trainings, robot upgrades, new robots, more software development, shoe consumption, attempting to move sideways, and hopefully much more.
We will also be asking for feedback from you all on how we can be more helpful and a better resource to the community (when we figure out the best format to do that).
Was great to be on your alliance!
Best gif of the entire build thread right there.
And let me say publicly, THANK YOU, Max, for your guidance and leadership. Our students aren’t just learning the technical skills to build a robot, they are (more importantly) learning to thoughtfully work through the entire engineering process while building connections within the team and with the larger FIRST community. And, for me, that’s what this is all about. Great work by everyone.
I had the opportunity to see a few practice Chairman's presentations from 6328, virtually in 2021 and in the lead-up to Champs, and in person at Champs. I came away thoroughly impressed and very convinced that they're on a Championship Chairman's Award trajectory. This team is making measurable and sustainable impacts in a multitude of areas. It's obvious that you need to watch them on the field, but their impact off the field is just as impressive. The total package.
Congrats y’all, getting to be there with you to watch this happen was an incredible moment I’ll never forget. I know how hard the team has worked over the last bunch of years to get to this point and you deserve it more than anyone.
To all the 6328 seniors reading this, congrats on a great final year. I was sad to leave y’all a few years ago, but I knew the team was in good hands, go out and make the world a better place now, you got this!!!
Love you guys
Thank you so much Karthik! Your support for the past 2 years, and especially last week in-person was so meaningful and calming for all of us.
3467 thoroughly enjoyed cheering on our Roebling friends. Congratulations on an exceptional performance.
We had a similar issue with our launcher at NE DCMP Calcium Finals. I have some working theories on our issue, but would love to stay in the loop on your investigation. Did the DS logs show any current draw on the motor at that time?