Software Update #4: Pre-Competition Smorgasbord
As we ramp up to our first competition at Granite State next weekend, we’ve been making lots of refinements in both software and hardware (plus many hours of driver practice). I’ll focus mostly on software features; others can add on anything I’ve missed.
Many thanks to our friends at Teams 1100 and 2168 for allowing us to visit and practice on their fields (as well as helping with a few repairs)! It was also fantastic to chat with Teams 78 and 7407 on Saturday, and we wish you all the best of luck in the coming weeks.
The Name!
After an internal voting process, we have selected the name of our 2022 robot:
We’re pleased to welcome the latest addition to our Bot-Bot family, joining Bot-Bot Strikes Back (2020/2021), Bot-Bot Begins (2019), The Revenge of Bot-Bot (2018), and of course the original Bot-Bot (2017).
Auto-Drive and Auto-Align
We’ve focused our efforts on two key shots: scoring from directly against the fender and from the line at the back of the tarmac. To help the driver align quickly for those shots, we put together two assistance features.
To align while at the back of the tarmac, we use a fairly traditional “auto-aim” that takes over controlling the robot’s angular movement and points towards the hub. As with our auto-aim last year, this is based on odometry data that’s updated when the Limelight sees the target (see my previous posts for more details). One key improvement we made over our previous auto-aim is that the driver can continue to control the linear movement, allowing them to begin aligning as they approach the line. This means that aligning doesn’t have to be a separate action; once they reach the right distance, they can begin shooting immediately.
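For the curious, here’s a minimal sketch of that idea, assuming a PID controller on the odometry-based heading to the hub (the hub position, gains, and class names are illustrative, not our actual AutoAim command):

```java
// Minimal auto-aim sketch: PID on the odometry-based angle to the hub.
// Only the angular velocity command is generated here, so the driver's
// translation inputs pass through untouched.
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class AutoAimSketch {
  // The 2022 hub sits at the center of the field.
  private static final Translation2d HUB = new Translation2d(8.23, 4.11);
  private final PIDController thetaController = new PIDController(5.0, 0.0, 0.1); // example gains

  public AutoAimSketch() {
    // Treat -pi and pi as the same angle so the robot takes the short way around.
    thetaController.enableContinuousInput(-Math.PI, Math.PI);
  }

  /** Returns an angular velocity command (rad/s); linear control stays with the driver. */
  public double calculate(Pose2d robotPose) {
    Translation2d toHub = HUB.minus(robotPose.getTranslation());
    double targetAngle = Math.atan2(toHub.getY(), toHub.getX());
    return thetaController.calculate(robotPose.getRotation().getRadians(), targetAngle);
  }
}
```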
The fender shot is trickier, since just pointing at the target isn’t the key issue. We wanted to reduce the amount that the driver needs to maneuver once they reach the right distance. To solve this, we use an “auto-drive” system that guides the driver on a smooth path towards the nearest fender. The driver remains in control of linear movement while the software handles turning. It points the robot towards a point projected out from the fender by 75% of the robot’s current distance (shown as a transparent robot in the video). We’ve found this system to be especially helpful on the opposite side of the field where visibility is low.
- AutoAim.java — Traditional auto-aim for the tarmac line shot.
- DriveToTarget.java — Auto-drive for the fender shot.
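The heart of the auto-drive guidance is just that projected aim point. Here’s a minimal sketch of the calculation, assuming a Pose2d for the fender face with its rotation pointing outward (names and structure are illustrative, not the code linked above):

```java
// Auto-drive aim point sketch: project out from the fender along its outward
// normal by 75% of the robot's current distance. As the robot closes in, the
// aim point converges onto the fender, producing a smooth approach path.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class DriveToTargetSketch {
  /** fenderFace is the fender's position, with its rotation pointing away from the hub. */
  public static Translation2d aimPoint(Pose2d robotPose, Pose2d fenderFace) {
    double distance = robotPose.getTranslation().getDistance(fenderFace.getTranslation());
    Translation2d offset = new Translation2d(0.75 * distance, fenderFace.getRotation());
    return fenderFace.getTranslation().plus(offset);
  }
}
```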
Vision & Driver Cameras
Our vision system was originally built to use PhotonVision, since it can report the corners of multiple targets to the robot code. All of this extra data feeds into our circle fitting algorithm, which produces odometry data. When it came time to connect our two driver cameras as well, PhotonVision was appealing as it supports multiple simultaneous streams. All three cameras were originally set up to run through our Limelight with PhotonVision.
Unfortunately, our results with this system were less than satisfactory. Here are a few of the problems we encountered:
- PhotonVision appears to be very picky about what cameras it will talk to successfully. It took several iterations of trying different driver cameras to find ones that (almost) worked reliably. Many simply wouldn’t connect properly (e.g. showing up as multiple devices where none worked correctly), or caused PhotonVision to freeze up. Even after lots of experimentation, one of our driver cameras never ran above 3 FPS.
- Also, the Limelight hardware doesn’t seem to be capable of running three streams simultaneously at full speed (where one involves vision processing to find the target). All of the streams ran at fairly low framerates. This problem seems predictable in hindsight, but definitely prevents us from using one device for everything.
- Even with no driver cameras connected, we’ve encountered repeated issues with PhotonVision. The stream often starts at the wrong resolution, covering the cameras briefly can sometimes cause target tracking to fail until we flip into Driver Mode, and most significantly it often fails to connect to the RIO at all despite extensive experimentation with network settings.
While many of these issues could be worked around with enough effort, the Limelight 2022.2.2 update now provides the corner data our algorithm requires. We’ve been very satisfied with the reliability of the stock Limelight software over the past couple of years, so we’ve switched back for the time being. Supporting this in robot code was trivial since all of the interaction with PhotonVision is abstracted to an IO layer. We just wrote a Limelight implementation and were tracking again without touching the rest of the code!
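As a rough illustration of that structure (the interface and field names here are hypothetical, not our exact code):

```java
// Sketch of the vision IO layer: the subsystem only sees this interface, so
// swapping PhotonVision for the stock Limelight is just a new implementation.
public interface VisionIO {
  /** Inputs the vision subsystem consumes each cycle. */
  class VisionIOInputs {
    public double captureTimestamp = 0.0;
    public double[] cornerX = new double[0]; // target corner pixel coordinates
    public double[] cornerY = new double[0];
  }

  /** Reads the latest corner data from the camera into the inputs object. */
  void updateInputs(VisionIOInputs inputs);
}
```

One implementation fills the inputs from PhotonVision over NetworkTables, while a Limelight version reads the equivalent corner entries published by the stock firmware; nothing above the interface has to change.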
For the driver cameras, we mounted a separate Raspberry Pi running WPILibPi. This has been working flawlessly with two camera streams, so we’re feeling much better about our vision setup overall.
LEDs!
We mounted several strips of LEDs controlled by a REV Blinkin. It’s driven over PWM to change patterns and indicate robot state (e.g. intaking, two cargo held, aligned to target). Here are the classes we’ve written to control it:
- BlinkinLedDriver.java — A simple wrapper for a Blinkin that includes an enum for all of the available patterns, since a similar class doesn’t appear to be included in REVLib. This makes pattern definitions much cleaner in the rest of the code (e.g. blinkin.setMode(BlinkinLedMode.SOLID_GREEN) instead of spark.setSpeed(0.77)). A sketch of the idea follows this list.
- LedSelector.java — Our class for selecting the current LED state. We didn’t want to set up the LEDs as a subsystem required by commands, since this affects which commands are allowed to run simultaneously. Instead, each command/subsystem writes its own state to this object, which keeps a prioritized list of which patterns to use (including which to skip during auto). It also supports a “test mode,” where a list of all the supported patterns is displayed on the dashboard. We had quite a fun time looking through all of the options to pick the pattern for each state.
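For a sense of what the wrapper boils down to, here’s a minimal sketch (only a few pattern values are shown; they come from REV’s published Blinkin pattern table):

```java
// Blinkin wrapper sketch: the Blinkin is controlled like a PWM motor
// controller, with each output value mapping to a pattern. Only a few
// of the ~100 enum entries are shown here.
import edu.wpi.first.wpilibj.motorcontrol.Spark;

public class BlinkinLedDriver {
  public enum BlinkinLedMode {
    SOLID_RED(0.61),
    SOLID_GREEN(0.77),
    SOLID_BLUE(0.87);

    private final double value;

    BlinkinLedMode(double value) {
      this.value = value;
    }
  }

  private final Spark spark;

  public BlinkinLedDriver(int channel) {
    spark = new Spark(channel); // driven like a PWM motor controller
  }

  public void setMode(BlinkinLedMode mode) {
    spark.set(mode.value);
  }
}
```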
During driver practice, we had some issues with the Blinkin failing to drive the LEDs and not responding to user input, even after a power cycle. Thus far, these issues have been fixed by a factory reset (or just by waiting long enough, apparently?). We’re still investigating this, but we’re hopeful that we can mitigate these issues.
Driver Practice & Shot Tuning
We’ve spent many hours tuning the fender and tarmac line shots, including several iterations of the shooter design (such as connecting the main flywheel to the top rollers mechanically). Others can add more details about those changes. Below are several videos from driver practice demonstrating those shots.
After some tuning of the angle (so that both shots go just over the rim of the hub), bounce-outs seem to be pretty rare. Making reliable shots also depended on getting the velocity PID control of the flywheel working well. Below is a graph of our flywheel speed during a set of shots in auto.
When ramping up, we found it useful to implement a trapezoidal profile to reduce overshoot (this enforces acceleration and jerk limits). The flywheel is tuned such that it dips to a similar speed for each shot, meaning the arcs are consistent.
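Conceptually, the ramp looks something like this minimal sketch using WPILib’s TrapezoidProfile (2022-era API; the limits are placeholders, not our tuned values):

```java
// Flywheel ramp sketch: profiling the *velocity* setpoint with a trapezoid
// means the profile's velocity/acceleration limits act as acceleration/jerk
// limits on the flywheel.
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public class FlywheelRampSketch {
  private static final TrapezoidProfile.Constraints CONSTRAINTS =
      new TrapezoidProfile.Constraints(
          2000.0, // acceleration limit (RPM/s), the profile's "velocity"
          8000.0); // jerk limit (RPM/s^2), the profile's "acceleration"

  private TrapezoidProfile.State state = new TrapezoidProfile.State(0.0, 0.0);

  /** Steps the profile by one 20ms loop and returns the RPM setpoint for the velocity PID. */
  public double calculate(double goalRpm) {
    TrapezoidProfile profile =
        new TrapezoidProfile(CONSTRAINTS, new TrapezoidProfile.State(goalRpm, 0.0), state);
    state = profile.calculate(0.02);
    return state.position; // the profile's "position" is the flywheel velocity setpoint
  }
}
```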
Mid-Rung Climb
We’re planning to stick with a simple mid-rung climb for our week one competition, with higher levels to come later. Below is the first test of our climber, which is driven by a position PID controller to extend and retract quickly.
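The control side is nothing fancy; a minimal sketch, with placeholder gains rather than our tuned values:

```java
// Climber position control sketch: a simple PID on the measured extension.
import edu.wpi.first.math.controller.PIDController;

public class ClimberSketch {
  private final PIDController controller = new PIDController(6.0, 0.0, 0.0); // example gains

  /** Returns a motor output in [-1, 1] driving the climber toward the target extension (meters). */
  public double calculate(double measuredExtension, double targetExtension) {
    double output = controller.calculate(measuredExtension, targetExtension);
    return Math.max(-1.0, Math.min(1.0, output));
  }
}
```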
Five Cargo Auto
At last! After running the five cargo auto in simulation or with just an intake for weeks, it’s very satisfying to see the routine working in full. With our first shooter design, we initially had doubts about whether we could pull off this auto. However, later iterations could shoot far enough back from the hub to make it viable again.
The path has also changed a little bit throughout our testing. Rather than driving backwards before each shot, the first set no longer requires moving (instead, the robot angles itself as it intakes, and there’s a turn-in-place to maneuver towards the next cargo). The second shot still moves backwards to avoid a sharp turn while intaking the cargo. The video below shows the full path running in a simulator:
Ultimately, we decided not to use vision data during this routine. We found that the data could sometimes offset otherwise reliable odometry (especially when moving quickly while far from the target). The vision system is still essential during tele-op where precise odometry is harder to maintain.
Another interesting note: we realized that every moment spent at the terminal makes the HP’s job much easier since the timing is less precise. With this in mind, the routine is set up to always finish at exactly 14.9 seconds, with any spare time spent at the terminal. This was very useful as we worked on refining the rest of the path, and we feel that the current version provides a reasonable length of time for the HP to deposit the fifth cargo. We also set up the LEDs to indicate when the HP should throw the cargo (the LEDs weren’t working during this particular run of the five cargo auto, but you can see them in the next video).
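That padding is just a wait condition in the command sequence. Here’s a hypothetical sketch (the remaining-path duration is a placeholder for the measured length of everything after the terminal):

```java
// Terminal wait sketch: hold at the terminal until leaving "now" would land
// the end of the routine exactly at 14.9 seconds.
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj2.command.WaitUntilCommand;

public class TerminalWaitSketch {
  private static final double END_TIME = 14.9; // target finish time in auto
  private static final double REMAINING_PATH_SECONDS = 3.0; // placeholder duration

  /** autoTimer should be started at the beginning of the autonomous period. */
  public static WaitUntilCommand waitAtTerminal(Timer autoTimer) {
    return new WaitUntilCommand(
        () -> autoTimer.get() >= END_TIME - REMAINING_PATH_SECONDS);
  }
}
```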
The command for our five cargo auto is here for those interested.
…And Other Autos Too
Of course, we also tested the rest of our suite of auto routines from our “standard” four cargo down to the three, two, and one cargo autos. We want to be well prepared with fallbacks in case the more complex autos aren’t functioning reliably (or aren’t needed based on the match strategy).
This is a fun variant of our normal four cargo auto, which starts in the far tarmac and crosses the field to collect cargo from the terminal. This is meant to run alongside a three cargo auto on the closer tarmac, though the strategic use case is admittedly niche. (Also I have a sneaking suspicion that @Connor_H may have violated H507 in this example…)
-Jonah
Do you mind joining the PhotonVision Discord and providing some more information regarding your experience so we can fix these issues for users in the future? Appreciate it.
Awesome work 6328, it was really awesome seeing you and your bot work this weekend. As a whole everything was extremely smooth, and we all know that smooth is fast. After seeing your shooting and how consistent it was, I went to look at the CAD, and I noticed that the shooter you were running over the weekend doesn’t match what your public Onshape doc shows. Your doc has a single back roller, and at the field you had three. What made you switch to the three rollers? How much angular range do you have in hood rotation? And any chance you could post the CAD to that so we could dig in?
I was going to make a post on this shortly, but with the madness of preparing for the event I haven’t gotten around to the full write-up. Up until Friday we had what is in the CAD with a single roller, but weren’t happy with how it was performing. We had problems with bounce-out due to our trajectory being too vertical, and the upper hood position wasn’t optimized for shooting from the tarmac, as it was mostly just for the low goal. Our test space is very limited, so we only fully discovered these pitfalls when practicing for the first time on a full field (thanks 2168 for having us).
We were previously thinking about multiple top rollers as a future upgrade path, but seeing these issues accelerated that to before week 1. Our redesign ended up being heavily inspired by 2168’s shooter. Due to the haste and experimental nature of the redesign, this CAD was done in a separate branch in Onshape. We didn’t realize branches aren’t public, so sorry about that; it was an oversight on our part. We can try to get someone to clean that up and merge it into mainline, but it’s currently a disaster that will break many other things which also need fixes, and we have limited bandwidth.
Here’s a photo of the setup. We basically just replaced the three gussets on our hood 1x1 with new plates that accept two more shafts, and replaced the printed hood with the tuning forks that ride between the wheels to support the ball in the top position.
The pistons on the hood provide a 19° rotation from bottom to top.
Sorry I couldn’t be more helpful; looking back at our exploded assembly and all the red constraints has given me a headache thinking about fixing it. We’ll be sure to get this cleaned up and merged when time allows.
-Max
Thank you for such a fast reply! Don’t put too much effort into cleaning up the public version on my account, you’ve provided plenty of help already. If I’m seeing this correctly, the tuning forks are fixed to the non-moving part of your shooter, they just slide between the wheels, and they sit below the surface of the wheels by some small amount. If they were to flex, they would presumably just contact the spacers on the shafts? It’s really clever, and I wish I had looked more at both bots’ shooters just to pick up on those subtleties.
Yep, that’s correct. We actually had an initial oversight where the forks were tangent with the surface of the wheels, which didn’t make a whole lot of sense.
Just as you mention, we were also a bit worried about the movement/flex of these forks as the ball went through the shooter. In an attempt to reduce that, we have some bearings on the roller shafts that contact those forks - so in the event that they do want to move backwards/flex, they’re supported by those bearings. I can’t say that this was an original idea either - see Max’s above post.
Pre-Granite State Climber Update
We originally thought this would be an upgrade for after week 1, but it seems we’ve miraculously gotten it working. Here’s a little teaser from onboard the driver cameras.
JK about teasers, here’s the full thing:
Our Low & Mid elevator mounting/gearbox was designed to have holes for a traversal to mount to, but at the time we didn’t know exactly how it would work.
This design is very similar to many this year, with the largest exception being that everything besides the elevator is passive. The hooks are spring-loaded to get out of the way as the bar comes down; when the elevator is pulled back up, the weight transfers onto the passive hooks, which tilt out under the weight of the robot. We then extend and hook onto the next bar, exploiting the fact that the top of the hook is sloped and that we are swinging into the bar. This allows us to avoid having any pneumatics on the High/Traversal system.
Because the climb somewhat relies on the timing of the swing, the entire sequence is automated and triggered with a single button push.
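As a rough illustration of what a timed sequence like this can look like (placeholder timings and a generic setpoint consumer, not our actual code):

```java
// Timed traversal climb sketch: the swing is handled purely by timing, so
// the sequence is just alternating elevator setpoints and waits.
import java.util.function.DoubleConsumer;
import edu.wpi.first.wpilibj2.command.InstantCommand;
import edu.wpi.first.wpilibj2.command.SequentialCommandGroup;
import edu.wpi.first.wpilibj2.command.WaitCommand;

public class TraversalClimbSketch extends SequentialCommandGroup {
  /** setExtension accepts an elevator position setpoint in meters. */
  public TraversalClimbSketch(DoubleConsumer setExtension) {
    addCommands(
        new InstantCommand(() -> setExtension.accept(0.0)), // pull up; weight moves to passive hooks
        new WaitCommand(1.0), // let the swing develop
        new InstantCommand(() -> setExtension.accept(0.6)), // extend toward the next bar mid-swing
        new WaitCommand(1.5), // hook catches as the robot swings into the bar
        new InstantCommand(() -> setExtension.accept(0.0))); // pull up onto the next bar
  }
}
```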
Good luck to everyone competing week 1 and excited to hit the field for the first time!
-Max
One of the less scary traversal climbs I have seen, but all of these swinging robots are going to make my blood pressure skyrocket during matches.
I’m glad my team’s doesn’t hit the wall very hard. That was something we wanted to avoid.
Here’s another fun tidbit: during our testing, we briefly looked at using the robot’s pitch to control the automated climb. While we eventually switched to a timed sequence, the pitch data is still part of our log files! Below is an example from one of the traversal climb tests. The perfect sinusoid makes me so happy…
Why’d you end up switching?
The swing is predictable enough that a timed sequence seems to work reliably. Also, relying on the gyro would risk the auto sequence stopping because (for example) the swing wasn’t quite far enough. We think it’s preferable to always try to continue, with the operator canceling if necessary.
Software Update #5: Thirty-One Football Fields
As we take some time to relax after a successful weekend at Granite State, I’d like to share a few interesting statistics. We created our logging framework for debugging first and foremost (this was very helpful during the event), but a side-effect is that we can now produce some interesting summaries of what the robot was doing over three days of an FRC competition. Be sure to keep an eye out for some more detailed post-event updates soon.
- All of the log files generated by the robot are included in this analysis, both on-field and off-field. This includes 98 log files totaling 470 MB of data.
- In total, the robot drove 2.13 miles during the event. That’s the length of about 31 football fields!
- According to the robot’s proximity sensors, it shot 388 balls over the course of the event.
- The code executed 1.9 million loop cycles at 50Hz.
- The main flywheel rotated 121k times.
- The shooter hood was actuated 644 times and the intake was actuated 224 times.
- A total of 860 watt hours were consumed, calculated based on the battery voltage and total current draw (a sketch of the calculation follows this list).
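For anyone curious, the watt-hour figure is a simple numeric integration over the logged samples. Here’s a minimal sketch, assuming parallel arrays of 50Hz voltage and current samples (not our actual analysis code):

```java
// Watt-hour sketch: integrate power (V * A) over each 20ms loop cycle,
// then convert joules to watt-hours.
public class EnergySketch {
  public static double wattHours(double[] voltageVolts, double[] currentAmps) {
    double joules = 0.0;
    for (int i = 0; i < voltageVolts.length; i++) {
      joules += voltageVolts[i] * currentAmps[i] * 0.02; // power * loop period
    }
    return joules / 3600.0; // 1 Wh = 3600 J
  }
}
```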
We can also break down this data in other ways, like looking at the distance driven by match. Apparently, we drove way more than usual during qualification 39.
Since the robot is logging at all times (including in the pits and on the practice field), we can also compare the totals on-field vs. off-field:
You can find more details on the data here, including all of the statistics broken down by match. The log files for all of our matches are available here and can be viewed using Advantage Scope for anyone curious to dig deeper.
I’m very excited to see what this data looks like by the end of the season…
-Jonah
Hello from team 7913: Bear’ly Functioning! After seeing your design at this weekend’s competition, our team was considering creating a climber system similar to yours to get our robot a little more functioning. I just have a few questions:
- Approximately how long would it take a team of 13 people to replicate your system? Additionally, would it be feasible to have it done for week 5?
- What did you use for materials? How much did it cost?
- Is there anything that you would recommend to be done differently? Time-savers, cost-cuts, better performance/durability, etc.
Any help would be greatly appreciated. Thanks!
– Nick
EDIT: Did some proofreading.
Oops, I totally forgot to specify that we’re only looking into redesigning the climber. My apologies. I’ve edited my above post, too.