Thanks! (And send my regards to building maintenance…)
(Since I don’t think Keith is here)
We are lucky enough to have a fantastic workspace but we also “get” to do our own maintenance
As promised, attached are our redesigned business plans for FRC and FLL. Students had a rough time condensing from 49 pages to 3. Feedback welcomed.
(Three appendices - org chart, SWOT graphic, and balance sheets - will be included in the final version.)
Eh, we only had a student actually drive a robot through the drywall that one time…
Actually, as a result of that incident most of the walls in our drive space now have wood/plywood up about 3’ - including the wall shown in the video above, thankfully! Though drive practices have lately been “helping” me find all the spots I still need to cover.
This week, we now have a drivable robot, a prototype intake, and a motion profiling system ready to go. What is there to do but write a few auto routines? Even without a shooter, we can still test most of the important functionality. After some discussion with the strategy team, we began work on the following routines:
- One cargo (from any position)
- Two cargo (collect any of the three cargo around the tarmac)
- Three cargo (collect two cargo from the lower tarmac)
- Four cargo (intake one from the terminal)
- Five cargo (intake from the terminal, including the HP cargo)
See the videos below for the three, four, and five cargo autos in action.
(The three cargo test had an extended shooting duration.)
While we don’t have a shooter on the robot yet, we know enough about the design to control it in software. The structure of the subsystem code allows the hardware to be replaced while preserving the control logic — we can switch between SparkMAX and CTRE motor controllers, disable the hardware in replay, or turn it into a physics simulator. Throughout these tests, the robot is running the flywheels in a simulation, meaning the routine can check if they’re up to speed when shooting. We use the WPILib FlywheelSim class. The graph below shows the flywheel speeds produced by the robot during the video of the five cargo auto. The red boxes show when the feed system was “running.”
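To illustrate the idea, the “up to speed” gate can be sketched with a toy first-order flywheel model. WPILib’s FlywheelSim uses a proper state-space model; the time constant, tolerance, and class below are made-up placeholders, not our actual code.

```java
// Toy first-order flywheel model illustrating the "up to speed" gate the
// auto routines rely on. The constants here are assumed placeholders.
class FlywheelSimSketch {
    static final double TIME_CONSTANT = 0.3;   // seconds (assumed)
    static final double TOLERANCE_RPM = 100.0; // at-speed tolerance (assumed)

    double velocityRpm = 0.0;

    /** Step the simulated flywheel toward the setpoint by dt seconds. */
    void update(double setpointRpm, double dt) {
        double alpha = 1.0 - Math.exp(-dt / TIME_CONSTANT);
        velocityRpm += (setpointRpm - velocityRpm) * alpha;
    }

    /** The routine only runs the feed once this returns true. */
    boolean atSpeed(double setpointRpm) {
        return Math.abs(setpointRpm - velocityRpm) < TOLERANCE_RPM;
    }

    public static void main(String[] args) {
        FlywheelSimSketch sim = new FlywheelSimSketch();
        int loops = 0;
        while (!sim.atSpeed(3000.0)) { // 3000 RPM goal (placeholder)
            sim.update(3000.0, 0.02);  // 20 ms robot loop
            loops++;
        }
        System.out.println("At speed after " + loops + " loops");
    }
}
```

The shoot command presumably just waits on the equivalent of `atSpeed()` before running the feeder, which is exactly what the simulated flywheels let the routines exercise without a real shooter.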
We decided to position the intake and shooter on opposite sides of the robot, which makes these routines much simpler. The three and four cargo paths are fairly intuitive, as they aren’t significantly limited by time. On the five cargo path, we considered two possible ways to run the routine:
- Shoot one cargo immediately, collect two from around the tarmac, shoot two, collect two from the terminal, shoot two.
- Collect one cargo from around the tarmac, shoot two, collect one from the tarmac, shoot one, collect two from the terminal, shoot two.
We ended up going with the second option for several reasons. First, shooting immediately requires waiting for the flywheels to spin up. While this may happen quickly, we don’t know for sure right now. Second, moving between the two cargo around the tarmac in a single path is tricky. With a differential drive, the fastest method seems to be a “back up and turn,” which is complex and still inefficient.
Another unknown variable is the time required for shooting. While the three and four cargo routines can be fairly generous by sitting still for multiple seconds, we don’t have the same luxury during the five cargo auto. The middle shot (with one cargo) requires that the feed be timed such that the robot continues moving. We may take a similar approach with the other shots as well, depending on the speed of the feeders.
As we haven’t yet mounted a camera to this practice bot, we aren’t using vision to aid with odometry tracking. The trajectory constraints are therefore tuned to reduce wheel slip as much as possible — in the future, we hope to adjust these constraints to shorten some of the paths. This may allow us to spend more time shooting or intaking from the terminal during the five ball auto.
The robot was using simulated flywheels while running the routines, but we can also run the autos entirely in a simulator for testing. The video below shows the five ball auto running without a real robot at all.
As with the flywheel, this is as simple as replacing the hardware implementation of the drive with a physics simulation. We use the WPILib DifferentialDrivetrainSim class. Running the routine in a simulator allows us to iterate very quickly, and it means we’re more confident that everything will work when we test it on the real robot.
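The swappable-hardware structure can be sketched roughly like this (interface and class names are illustrative, and the “physics” is deliberately trivial; a real sim implementation would wrap WPILib’s DifferentialDrivetrainSim):

```java
// Sketch of the swappable IO layer: the drive subsystem talks to an
// interface, and the implementation (real hardware, replay stub, or
// physics sim) is chosen at startup. Names here are illustrative.
interface DriveIO {
    void setVoltage(double leftVolts, double rightVolts);
    double getLeftPositionMeters();
}

class SimDriveIO implements DriveIO {
    private double leftPositionMeters = 0.0;

    @Override
    public void setVoltage(double leftVolts, double rightVolts) {
        // Crude model: 0.4 m/s per volt at steady state, stepped at 20 ms.
        leftPositionMeters += 0.4 * leftVolts * 0.02;
    }

    @Override
    public double getLeftPositionMeters() {
        return leftPositionMeters;
    }
}

class DriveIoSketch {
    public static void main(String[] args) {
        DriveIO io = new SimDriveIO(); // a RealDriveIO would go here on-robot
        for (int i = 0; i < 50; i++) { // one second of 20 ms loops
            io.setVoltage(6.0, 6.0);
        }
        System.out.println("Position: " + io.getLeftPositionMeters() + " m");
    }
}
```

Because the auto commands only ever see the interface, the same routine runs unchanged against the simulator or the real robot.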
- Motion profiling command (here)
- Five cargo auto (here)
- Four cargo auto (here)
- Three cargo auto (here)
- Two cargo auto (here)
- One cargo auto (here)
We’ll continue to work on refining these routines in the near future, like making sure that they stay clear of the lower exits to the hub. In the meantime, we’re happy to answer any questions.
In past years, we’ve gotten incredibly useful feedback from doing practice Chairman’s presentations with other teams. In the spirit of Open Alliance, we’re working with 7407 to pilot a Chairman’s Alliance project so more teams have that experience. We’re hoping to bring together a few teams (virtually) to run through their presentations, field practice questions, and receive actionable feedback, all led by an experienced facilitator.
These practices are meant to be working meetings, and presentations are expected to be rough drafts. We’re scheduling these early enough that teams can incorporate feedback and still have plenty of practice time before their official Chairman’s presentation.
Is this your team’s first year presenting for the Chairman’s Award, or are you a rookie team? We invite you to come observe, learn, and ask questions. Sign up through the registration form.
We will do our best to accommodate if there is a date/time conflict so don’t hesitate to sign up if it’s just the exact date that’s the barrier.
The first practice will be Tuesday Feb 22 for teams that are presenting at Week 1 or Week 2 events. Teams that are presenting later are welcome, of course, but we know that presentations may not be ready even in rough draft that far in advance. If there’s enough interest, we will organize another practice in mid-March.
More info in the registration form.
Going into this build season, we wanted a simple way to operate motors on prototypes. Before we went mostly brushless in 2020, we’d hook up prototypes to the Talon SRXs on a kit-bot, but that was always a little clunky. For 2020 prototyping, we ran NEOs’ SparkMax controllers on a janky SB50-to-PowerPole cable without overcurrent protection, each controller hooked to a laptop to run the Rev application. For efficient and safer prototyping, we needed a better solution.
For prototyping complex assemblies, we knew we needed to run multiple motors at once. Commercial single-channel PWM solutions, like the Thrifty Throttle, were not cost effective in a 4-channel configuration. At the other extreme, we could have built a mini-control system around a Rio and done full-up control of however many motors we wanted, but we didn’t have a spare Rio nor did we want the complexity of needing robot software and a driver station just to run motors.
Fortunately for us, capable microcontrollers are ubiquitous and cheap! We settled on a 4-channel solution, which could be run on a standard Arduino Nano. To keep complexity down, there’s a simple set of physical controls: each channel has a button to toggle it on and off, an LED to show its state, and a center-detent slider for speed control. While this is overall a pretty simple build, we’re sharing it here in hopes it may be useful to other teams, either to build as-is or as a starting point for a different variant.
Here’s the schematic of the control board. Note that pins 2 & 3 on JP1 through JP4 are reversed; the PWM signal should be on the third, not middle, pin.
And here’s a picture of the board, which one of our students built with point-to-point wiring on a stock prototyping PCB:
The control software is a quick sketch of around 100 lines:
This uses the standard Servo library to generate the PWM signals, which works very well for driving FRC controllers’ 1000 µs to 2000 µs pulse-width inputs. Channel state is toggled based on debounced reads of the buttons. The software continuously scans the analog sliders and scales the values to the appropriate PWM value.
Of course, the control board is only half the solution; we need motor controllers and a safe way to power them. For four channels, we used these power bus bars and a set of cheap automotive-style self-resetting 40A breakers.
You can, of course, use whatever controllers you wish. Initially we built the board with two Spark controllers for brushed motors, and two SparkMax controllers for brushless. This allowed us to use either motor type without having to switch modes on the SparkMax’s; we feared that if left in the wrong mode we could easily damage a motor. Almost immediately, of course, we needed to run a prototype with 3 Neos. That wouldn’t be the last time, so we added two more SparkMax’s; with PowerPoles on both the Sparks and the SparkMax’s, it’s quick and easy to use either type (though it made the wiring look less neat than in the original build). The four PWM outputs on the control board can likewise be plugged into whichever controller is needed.
Here’s the finished build. It’s perhaps not the prettiest thing in the shop, but it is very functional and adaptable.
Some prototypes need motors running at the same speed; a Y cable can be used to run two controllers from one channel. Sometimes those motors need to rotate in opposite directions. Brushed motors, of course, easily achieve this by swapping positive and negative leads. For brushless, we use the Rev hardware client to reverse the output on one of the pair and save the configuration to flash.
The main cost of this rig is in the motor controllers. In our case, we had Sparks in a drawer, and two of the SparkMax’s had previously had an issue of uncertain origin that caused us to flag them as “not for competition use” anyway. A Nano clone was about $5, most of the small electronics parts were on hand, and we spent less than $50 on the rest.
Sadly, we did not include a hyperspace control.
Sorry for the delay in posts; we’ve been super busy finishing up the design and getting all the facts straight. We’ve been making and assembling lots of parts already too, but that will be saved for a slightly later post to keep this one reasonable.
We’re still putting some finishing touches on the climber and are hoping to put a lot of effort into a high-rung climb after we have most of the robot assembled and running. Ideally we’d have a group of mentors & students devoting the majority of their time to that while the rest of the robot is being manufactured & assembled, but due to our current mentor & student count, that’s unfortunately not entirely realistic for us.
Overall, we’re pretty happy with how modular the design turned out. Thanks to the gusset+tube construction style that we’re utilizing more of, we should be able to fairly easily adjust portions of the design when something inevitably doesn’t work / breaks / etc.
Overall weight: ~94.7 lb (not including battery or bumpers)
Onshape predicts our robot to be roughly 110 lbs, which gives us at least some headroom for the rest of our climb and whatever high-rung mechanism we come up with.
Center of mass
We think that it’ll be important for our drivers to just be able to punch it and not have to worry about the robot tipping (a couple of software mechanisms will help with that as well). Hopefully this can get even lower with any ballast we add to get up to 120 lbs.
Length x Width x Height: 30” x 29” x 40”
Banana for scale (except it was just photoshopped in and may not be the correct scale).
- Drivetrain: 6 NEO
- Intake: 1 NEO
- Hopper: 1 NEO 550
- Tower: 1 NEO 550
- Feeder Wheel: 1 NEO 550
- Flywheel: 2 NEO
- Climber: 2 NEO
There haven’t been too many changes made to the intake since our last update post, but we were able to shorten the link lengths which should help reduce at least some of the floppiness.
As in our prototype videos, we’re also using a dropdown bar - we found that this helped our prototype apply more favorable compression on the bumper, so we’ll be sticking with that for our final intake as well.
The dropdown bar starts spring-loaded up via surgical tubing - it will be knocked down by our intake at the start of the match, at which point the surgical tubing over-centers and is consequently spring-loaded down throughout the rest of the match.
Gear ratio: 60:16
We picked this ratio to be roughly in the right ballpark based on our prototype (which packaged an N:16 reduction), and it worked well enough that we didn’t feel the need to change it. The top roller, however, is just slightly slower than the other 2 rollers - the hope is that this will help reduce the tendency for balls to jump out the top of the “hopper” area.
Hopper is probably the wrong term here as all it does is center the balls and not store them, but for lack of a better term that’s what we’re going with.
We had good success with this in our prototype testing and were surprised that it worked pretty well with balls entering from the edges of the intake. There will be some adjustments that we’ll need to make when it’s actually on the robot (the height of the omnis, maybe the distance between the omnis and the bumper, etc.) and we may end up adding a couple pieces of polycarb around the “hopper” if we encounter issues with cargo bouncing out of the robot.
We chose to go with a tube-style hood (as opposed to plate) so that we’d have an easier time fixing any small issues that we may encounter with locations of rollers/cylinders/etc., as well as potentially improving it as the season progresses.
From our testing, it seems like a top roller helps improve consistency and removes some of the backspin that the main flywheel imparts on the ball - something that may (or may not) help reduce the likelihood of cargo bouncing out of the goal.
Right now the CAD shows the top roller being controlled independently of the main 6” flywheel - the hope here is that we can get more flexibility in the speed & the amount of backspin that we put on the ball, which will hopefully let us control our shot trajectory a bit. This seemed to work from our prototype testing, but we’re a bit nervous that them being mechanically separate may lead to shot inconsistency due to the recovery time of each being different. If this approach ends up being unsuccessful, we’ll just belt the second motor to the 6” flywheel and print that double pulley with a hex bore instead of a bore for bearings.
The tower is copied from our prototype tower/shooter and simply swapped to the real construction style. The exact control scheme for how we will index the balls in the hopper is tbd after playing with it more, but we have a few ideas involving 1 or 2 beam break sensors.
The climber is a single-stage telescope with a 2x2 outer tube and 1x1 inner. It’s pulled up by constant-force springs and winched down by 2 NEOs through MAXPlanetary gearboxes. The stages slide on two 3D printed blocks, the top of which has bearings pressed in on pins. A WCP shifter shaft and PTO kit block acts as a brake, which goes directly into the MAXPlanetary output.
This subsystem still has the most details left to finalize before manufacturing. We will also be adding (at some point) a set of passive tilting hooks for a 2013-style High/Traversal climb (or at least that’s the dream). More details to come in a later post.
We didn’t immediately arrive at this design; there were many small (and not so small) iterations made before actually manufacturing a first revision. Look out for a post soon going through some of the changes that led to what you see now.
Singulator is the term we use.
6328 Mechanical Advantage shows off their impressive 3, 4 & 5 ball autonomous routines, provides an overview of their completed CAD and details their robot assembly https://youtu.be/NUEpCExVE2Q
Can you all talk more about how this is working out for you and the process you use to get this command to work with SysID?
SysIdCommand communicates with the SysId logger using the same NetworkTables protocol as the standard SysId robot code. Essentially, we just looked through the source of the project that’s deployed to work out the protocol and provide the same data. When using the command, we connect to the robot using the “logger” in SysId while skipping the “generator.” When starting each test, we enable the robot with the correct auto routine selected; the rest of the process is identical to the standard usage of SysId. This process worked well for us and definitely saved time compared to figuring out the project configuration each time. We didn’t notice any significant difference between the values generated by SysId using the standard project and our command.
Ultimately, we found that we only ended up using a very limited set of values from SysId: kS, kV, and track width. Given the relative simplicity of calculating these values based on the raw data, we realized that the process of running SysId was more time consuming than necessary (running a separate application, doing more tests than necessary for track width, saving and reopening the data, etc.). Instead, we put together the following commands, which handle analysis directly on the robot:
- FeedForwardCharacterization - Calculates kS and kV using a quasistatic test.
- TrackWidthCharacterization - Calculates track width by spinning in a circle.
These are set up as auto routines (here), and the results are printed directly to the console after the robot is disabled. We’ve found that while this process doesn’t match the more advanced capabilities of SysId, it’s a much better fit for our needs. Running one of these tests is now trivial since they’re always available with no extra setup. For example, this means that we could quickly check for mechanical changes during a competition in case of issues.
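For the quasistatic test, the analysis reduces to a linear fit: along a slow voltage ramp, applied voltage ≈ kS + kV·velocity, so kS and kV fall out of ordinary least squares. Here is a standalone sketch of that fitting step (it mirrors the math, not the structure of our actual FeedForwardCharacterization command):

```java
import java.util.ArrayList;
import java.util.List;

// kS/kV extraction from quasistatic data: fit voltage = kS + kV * velocity
// with ordinary least squares. Class and method names are illustrative.
class FeedForwardFit {
    private final List<Double> velocities = new ArrayList<>();
    private final List<Double> voltages = new ArrayList<>();

    void addSample(double velocity, double voltage) {
        velocities.add(velocity);
        voltages.add(voltage);
    }

    /** Returns {kS, kV} from the least-squares line voltage = kS + kV*velocity. */
    double[] solve() {
        int n = velocities.size();
        double sumX = 0, sumY = 0, sumXX = 0, sumXY = 0;
        for (int i = 0; i < n; i++) {
            double x = velocities.get(i), y = voltages.get(i);
            sumX += x; sumY += y; sumXX += x * x; sumXY += x * y;
        }
        double kV = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double kS = (sumY - kV * sumX) / n;
        return new double[] {kS, kV};
    }

    public static void main(String[] args) {
        FeedForwardFit fit = new FeedForwardFit();
        // Fake quasistatic data generated from kS = 0.2 V, kV = 2.0 V/(m/s).
        for (double v = 0.1; v <= 2.0; v += 0.1) {
            fit.addSample(v, 0.2 + 2.0 * v);
        }
        double[] gains = fit.solve();
        System.out.printf("kS=%.3f kV=%.3f%n", gains[0], gains[1]);
    }
}
```

Since the fit runs on-robot, the gains can be printed to the console the moment the test ends, which is what makes re-characterizing at a competition so cheap.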
It seems like we removed the original SysIdCommand when we switched to the new characterization commands. I’ve added it back as reference or in case we change directions in the future. It’s available here.
Really proud of our Chairman’s team this year. We have a very experienced crew (mostly seniors) who have been working with a Shadow team to transition their knowledge and process in the name of team sustainability. The written submissions were actually completed a bit ahead of schedule (unexpected for anything about FRC!)
- Fill out the presentation script and PRACTICE PRACTICE PRACTICE (we are presenting at a Week 1 event)
- Pull together the video in the next 10 days
- Chairman’s Alliance practices next week over Zoom with the teams that signed up
2022 Chairman’s Submission 6328.pdf (661.2 KB)
Our high-level robot concept has been the same for a while, but there are tons of updates to the lower-level details. This post walks through where we were before and the high-level feedback that led to our design changes. This feedback comes from a combination of internal discussions and reviews with @davepowers. There are plenty of small details we weren’t able to cover here, but this touches on the major ones.
At first glance this looks very similar to our current design, but without much of the assembly details there.
Feedback: Try to keep the CG as low as possible. The tower can comfortably hold 2 balls already, so it should be able to shrink down without issue.
Result: Lowered the tower height by 6 inches which results in one ball being held at the top of the tower and another on the curve at the bottom. This was a remarkably easy change (updating 1 number) in the CAD due to how we derive the top level geometry of the robot off a main sketch. We expected everything to explode when doing this, but it somehow updated without issue.
Feedback: Minimize the weight at the top of the tower.
Result: Removed the top 2 motor mounts and made the different belt spacings in line with each other at the bottom. This limited us to 2 motors between the main flywheel and hood wheel, but this seemed more than sufficient from our testing.
Feedback: The intake arms are incredibly floppy side to side.
Result: This was by design, but it was still somewhat crazy. To reduce this, we shortened the main link with a shorter belt, which moved the pivot point closer to the frame.
Feedback: The drop down bar pivot is also very long; it should be possible to move its pivot much closer to the bumper.
Result: In looking at the geometry again, it was possible to move the pivot point of the passive drop down bar to the mounting plate the piston pivots from, making the drop down bar much more compact.
Feedback: The cross bar currently blocks the electronics, we should try to minimize components in that area to make access easier.
Result: The hopper became mostly two independent parts, with only a belt crossing the center. This made it much easier to access the compressor and pneumatic hub without taking the hopper fully off.
We should be mounting the climber - the last major component on our robot - tonight, and we’ll share the final robot shortly once everything is running.
As we ramp up to our first competition at Granite State next weekend, we’ve been making lots of refinements in both software and hardware (plus many hours of driver practice). I’ll focus mostly on software features; others can add on anything I’ve missed.
Many thanks to our friends at Teams 1100 and 2168 for allowing us to visit and practice on their fields! (As well as helping with a few repairs). It was also fantastic to chat with Teams 78 and 7407 on Saturday, and we wish you all the best of luck in the coming weeks.
After an internal voting process, we have selected the name of our 2022 robot:
We’re pleased to welcome the latest addition to our Bot-Bot family, along with Bot-Bot Strikes Back (2020/2021), Bot-Bot Begins (2019), The Revenge of Bot-Bot (2018), and of course the original Bot-Bot (2017).
We’ve focused our efforts on two key shots; scoring from directly against the fender and from the line at the back of the tarmac. To help the driver align quickly for those shots, we put together two assistance features.
To align while at the back of the tarmac, we use a fairly traditional “auto-aim” that takes over controlling the robot’s angular movement and points towards the hub. As with our auto-aim last year, this is based on odometry data that’s updated when the Limelight sees the target (see my previous posts for more details). One key improvement we made over our previous auto-aim is that the driver can continue to control the linear movement, allowing them to begin aligning as they approach the line. This means that aligning doesn’t have to be a separate action; once they reach the right distance, they can begin shooting immediately.
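The blend of driver throttle and automatic rotation can be sketched as a simple P controller on heading error feeding the angular axis of an arcade-style drive. The gain, clamping, and names below are illustrative, not our actual tuning:

```java
// Sketch of the auto-aim blend: the driver keeps the linear (throttle)
// axis while a P controller on heading error supplies the angular
// command. The gain is an assumed placeholder.
class AutoAimSketch {
    static final double KP = 2.0; // turn command per radian of error (assumed)

    /** Shortest signed angle from the robot's heading to the hub, in radians. */
    static double headingError(double robotHeading, double angleToHub) {
        double error = angleToHub - robotHeading;
        return Math.atan2(Math.sin(error), Math.cos(error)); // wrap to [-pi, pi]
    }

    /** Returns {linear, angular} percent commands for an arcade-style drive. */
    static double[] calculate(double driverThrottle, double robotHeading,
                              double angleToHub) {
        double angular = KP * headingError(robotHeading, angleToHub);
        angular = Math.max(-1.0, Math.min(1.0, angular)); // clamp to [-1, 1]
        return new double[] {driverThrottle, angular};
    }

    public static void main(String[] args) {
        // Driver pushes half throttle while the robot is 30 degrees off target.
        double[] cmd = calculate(0.5, 0.0, Math.toRadians(30));
        System.out.printf("linear=%.2f angular=%.3f%n", cmd[0], cmd[1]);
    }
}
```

Because the driver's throttle passes straight through, aiming overlaps with the approach instead of being a separate stop-and-align step.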
The fender shot is trickier, since just pointing at the target isn’t the key issue. We wanted to reduce the amount that the driver needs to maneuver around once they reach the right distance. To solve this, we use an “auto-drive” system that guides the driver on a smooth path towards the nearest fender. The driver remains in control of linear movement while the software handles turning. It points towards a point projected out from the fender at 75% of the robot’s distance (shown as a transparent robot in the video). We’ve found this system to be especially helpful on the opposite side of the field, where visibility is low.
- AutoAim.java — Traditional auto-aim for the tarmac line shot.
- DriveToTarget.java — Auto-drive for the fender shot.
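The aim-point geometry for the fender auto-drive can be sketched like this. This is our reading of the description (a point projected out along the fender’s normal at 75% of the robot’s distance); the names and exact math are illustrative, not the team's actual DriveToTarget code:

```java
// Sketch of the fender auto-drive aim point: rather than pointing at the
// fender itself, point at a spot projected outward along the fender's
// normal at 75% of the robot's current distance. As the robot closes in,
// the aim point slides toward the fender, bending the driver onto a
// smooth approach. All names and values here are illustrative.
class DriveToTargetSketch {
    /** Returns {x, y} of the aim point, given a unit outward normal. */
    static double[] aimPoint(double robotX, double robotY,
                             double fenderX, double fenderY,
                             double normalX, double normalY) {
        double distance = Math.hypot(robotX - fenderX, robotY - fenderY);
        double projection = 0.75 * distance; // fraction from the post text
        return new double[] {fenderX + normalX * projection,
                             fenderY + normalY * projection};
    }

    public static void main(String[] args) {
        // Robot 4 m from a fender whose outward normal points along +X.
        double[] p = aimPoint(4.0, 0.0, 0.0, 0.0, 1.0, 0.0);
        System.out.printf("aim=(%.2f, %.2f)%n", p[0], p[1]);
    }
}
```

The software then only has to point the robot at this moving aim point, leaving the throttle to the driver.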
Our vision system was originally built to use PhotonVision, since it can report the corners of multiple targets to the robot code. All of this extra data feeds into our circle fitting algorithm, which produces odometry data. When it came time to connect our two driver cameras as well, PhotonVision was appealing as it supports multiple simultaneous streams. All three cameras were originally set up to run through our Limelight with PhotonVision.
Unfortunately, our results with this system were less than satisfactory. Here are a few of the problems we encountered:
- PhotonVision appears to be very picky about what cameras it will talk to successfully. It took several iterations of trying different driver cameras to find ones that (almost) worked reliably. Many simply wouldn’t connect properly (e.g. showing up as multiple devices where none worked correctly), or caused PhotonVision to freeze up. Even after lots of experimentation, one of our driver cameras never ran above 3 FPS.
- Also, the Limelight hardware doesn’t seem to be capable of running three streams simultaneously at full speed (where one involves vision processing to find the target). All of the streams ran at fairly low framerates. This problem seems predictable in hindsight, but definitely prevents us from using one device for everything.
- Even with no driver cameras connected, we’ve encountered repeated issues with PhotonVision. The stream often starts at the wrong resolution, covering the cameras briefly can sometimes cause target tracking to fail until we flip into Driver Mode, and most significantly it often fails to connect to the RIO at all despite extensive experimentation with network settings.
While many of these issues could be worked around with enough effort, the Limelight 2022.2.2 update now provides the corner data our algorithm requires. We’ve been very satisfied with the reliability of the stock Limelight software over the past couple of years, so we’ve switched back for the time being. Supporting this in robot code was trivial since all of the interaction with PhotonVision is abstracted to an IO layer. We just wrote a Limelight implementation and were tracking again without touching the rest of the code!
For the driver cameras, we mounted a separate Raspberry Pi running WPILibPi. This has been working flawlessly with two camera streams, so we’re feeling much better about our vision setup overall.
We mounted several strips of LEDs controlled by a REV Blinkin. It’s driven over PWM to change patterns and indicate robot state (e.g. intaking, two cargo are held, aligned to target, etc.). Here are the classes we’ve written to control it:
- BlinkinLedDriver.java — A simple wrapper for a Blinkin that includes an enum for all of the available patterns, since it appears a similar class isn’t included in REVLib. This makes pattern definitions much cleaner in the rest of the code.
- LedSelector.java — Our class for selecting the current LED state. We didn’t want to set up the LEDs as a subsystem required by commands, since this affects which commands are allowed to run simultaneously. Instead, each command/subsystem writes its own state to this object, which has a prioritized list of which patterns to use (including which to skip during auto). It also supports “test mode,” where a list of all the supported patterns is displayed on the dashboard. We had quite a fun time looking through all of the options to pick the pattern for each state.
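A rough sketch of the two pieces together: the Blinkin is commanded like a PWM motor controller, with each pattern mapped to an output in [-1, 1], and a prioritized state list decides which pattern wins. The pattern-to-output values below are examples drawn from REV’s published table (treat them as placeholders), and the class structure is a simplified stand-in for our actual code:

```java
// Sketch of a Blinkin pattern enum plus a prioritized LED state selector.
// Pattern output values are examples from REV's table; verify against the
// Blinkin user manual before relying on them.
class LedSketch {
    enum BlinkinPattern {
        RAINBOW(-0.99), SOLID_GREEN(0.77), STROBE_GOLD(-0.07);

        final double output; // value written to the PWM output
        BlinkinPattern(double output) { this.output = output; }
    }

    // States in priority order: the first active one wins.
    enum LedState { TARGETED, TWO_CARGO, INTAKING, IDLE }

    private final boolean[] active = new boolean[LedState.values().length];

    /** Commands/subsystems each write their own state flag here. */
    void setState(LedState state, boolean isActive) {
        active[state.ordinal()] = isActive;
    }

    /** Highest-priority active state, falling back to IDLE. */
    LedState currentState() {
        for (LedState state : LedState.values()) {
            if (active[state.ordinal()]) {
                return state;
            }
        }
        return LedState.IDLE;
    }

    public static void main(String[] args) {
        LedSketch leds = new LedSketch();
        leds.setState(LedState.INTAKING, true);
        leds.setState(LedState.TARGETED, true);
        System.out.println(leds.currentState()); // higher-priority state wins
    }
}
```

Because each command only sets its own flag, no subsystem ownership is needed and the scheduler never has to arbitrate over the LEDs.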
During driver practice, we had some issues with the Blinkin failing to drive LEDs and not responding to user input, even after a power cycle. Thus far, these issues have been fixed by a factory reset (or just waiting long enough, apparently?). We’re still investigating, but we’re hopeful that we can mitigate these issues.
We’ve spent many hours tuning the fender and tarmac line shots, including several iterations of the shooter design (such as connecting the main flywheel to the top rollers mechanically). Others can add more details about those changes. Below are several videos from driver practice demonstrating those shots.
After some tuning of the angle (so that both shots go just over the rim of the hub), bounce-outs seem to be pretty rare. Making reliable shots also depended on getting the velocity PID control of the flywheel working well. Below is a graph of our flywheel speed during a set of shots in auto.
When ramping up, we found it useful to implement a trapezoidal profile to reduce overshoot (this enforces an acceleration and jerk limit). The flywheel is tuned such that it dips to a similar speed for each shot, meaning the arcs are consistent.
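The trick is that the profile runs on the flywheel’s *velocity* setpoint: the profile’s rate limit then acts as an acceleration limit, and a full trapezoid’s acceleration limit becomes a jerk limit. For brevity, this sketch applies only the acceleration limit (a slew-rate limiter); the limits are assumed placeholders, not our tuned values:

```java
// Sketch of profiling the flywheel setpoint during spin-up: slew the
// commanded speed toward the goal at a bounded rate so the velocity PID
// chases a ramp instead of a step, reducing overshoot. Limits are assumed.
class FlywheelRamp {
    static final double MAX_ACCEL_RPM_PER_SEC = 4000.0; // assumed limit
    static final double DT = 0.02;                      // 20 ms loop

    double setpointRpm = 0.0;

    /** Move the commanded speed toward the goal at a bounded rate. */
    double step(double goalRpm) {
        double maxStep = MAX_ACCEL_RPM_PER_SEC * DT;
        double error = goalRpm - setpointRpm;
        setpointRpm += Math.max(-maxStep, Math.min(maxStep, error));
        return setpointRpm;
    }

    public static void main(String[] args) {
        FlywheelRamp ramp = new FlywheelRamp();
        for (int i = 0; i < 38; i++) {
            ramp.step(3000.0); // ramp toward a 3000 RPM goal (placeholder)
        }
        System.out.println("Setpoint: " + ramp.setpointRpm);
    }
}
```

WPILib’s TrapezoidProfile (or SlewRateLimiter) provides this out of the box; the point is only that the ramped setpoint, not the raw goal, is what the velocity controller chases.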
We’re planning to stick with a simple mid-rung climb for our week one competition, with higher levels to come later. Below is the first test of our climber, which is driven by a position PID controller to extend and retract quickly.
At last! After running the five cargo auto in simulation or with just an intake for weeks, it’s very satisfying to see the routine working in full. With our first shooter design, we were initially unsure whether we could pull off this auto. However, later iterations could shoot far enough back from the hub to make it viable again.
The path has also changed a little bit throughout our testing. Rather than driving backwards before each shot, the first set doesn’t require moving anymore (instead, it angles itself as it intakes, and there’s a turn-in-place to maneuver towards the next cargo). That second shot still moves backwards to avoid a sharp turn while intaking the cargo. The video below shows the full path running in a simulator:
Ultimately, we decided not to use vision data during this routine. We found that the data could sometimes offset otherwise reliable odometry (especially when moving quickly while far from the target). The vision system is still essential during tele-op where precise odometry is harder to maintain.
Another interesting note: we realized that every moment spent at the terminal makes the HP’s job much easier since the timing is less precise. With this in mind, the routine is set up to always finish at exactly 14.9 seconds, with any spare time spent at the terminal. This was very useful as we worked on refining the rest of the path, and we feel that the current version provides a reasonable length of time for the HP to deposit the fifth cargo. We also set up the LEDs to indicate when the HP should throw the cargo (the LEDs weren’t working during this particular run of the five cargo auto, but you can see them in the next video).
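The fixed-length budget is simple arithmetic, but it is worth spelling out: whatever time the paths and shots don’t use becomes extra dwell at the terminal. The durations in this sketch are made-up placeholders, not measurements from our routine:

```java
// Sketch of the fixed-length auto budget: the routine always ends at
// 14.9 s, so any time the driving and shooting don't use is spent
// waiting at the terminal for the human player. Durations are placeholders.
class AutoBudget {
    static final double AUTO_LENGTH_SECONDS = 14.9;

    /** Spare terminal time given the total driving/shooting time. */
    static double terminalWait(double pathSeconds) {
        return Math.max(0.0, AUTO_LENGTH_SECONDS - pathSeconds);
    }

    public static void main(String[] args) {
        // If driving and shooting take 13.2 s (placeholder), the robot can
        // sit at the terminal for the remaining time.
        System.out.printf("wait=%.1f s%n", terminalWait(13.2));
    }
}
```

Budgeting this way means any improvement elsewhere in the path automatically becomes more slack for the human player, with no retiming needed.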
The command for our five cargo auto is here for those interested.
Of course, we also tested the rest of our suite of auto routines from our “standard” four cargo down to the three, two, and one cargo autos. We want to be well prepared with fallbacks in case the more complex autos aren’t functioning reliably (or aren’t needed based on the match strategy).
This is a fun variant of our normal four cargo auto, which starts in the far tarmac and crosses the field to collect cargo from the terminal. This is meant to run alongside a three cargo auto on the closer tarmac, though the strategic use case is admittedly niche. (Also I have a sneaking suspicion that @Connor_H may have violated H507 in this example…)