Hey all, over the past couple of years we’ve really been trying to figure out how to distribute commands (intake, shooter, auto-align, etc.) between the driver and operator controllers. We’re interested in seeing how other teams do it. We want to take pressure off the driver so they can focus on driving, but that means more communication with the operator, so we’re looking for inspiration on striking that balance.
We’re also interested in what non-essential commands teams are making this year to help the driver/operator better control the robot (autonomous functions, Shuffleboard, LEDs, etc).
Driver controls the drive train, operator controls the robot mechanisms, like intakes, shooters, and arms, often with help from the presets my team codes beforehand.
We use LEDs, Elastic (an alternative to Shuffleboard with a very user-friendly/customizable UI), and, new this year, controller rumble, coded by yours truly.
We do largely the same thing as Asha; however, one thing you might want to put on the driver is intake. Intake can be useful to have on the driver because they have the best idea of where the robot is, and you want intaking to be as quick as possible.
Also, don’t be afraid to put controls on both drivers. We have had an automated shoot button on the driver with a backup manual shoot on the operator’s controls. It depends on the year and the amount of practice.
Generally we start with just drivetrain and add stuff as they get more experienced.
Some teams build simple enough robots that they only need one driver and can have the other one find game pieces or look out for other robots. One example is 329 from 2023.
It really depends; see what your drive team wants.
I think we are going to try to do this, but so we can have one human player at the Amp and the other at the Source, especially for early-season qualification matches.
My team has the driver control the drivebase, the intake, and any commands that involve control over the drivebase, such as auto-aligning with the amp station. Some of that auto-alignment may change this year if my team decides to go the route of an automatic climb; in that case, we may give it to the operator. Otherwise, our elevator, shooter, pivot, and climber subsystems will all be controlled by the operator, likely via a button board.
I think anything that relies on robot position / timing with defense needs to be in the main driver’s control.
What we did in 2023: the main driver had a score button, an intake button, and the driving. The secondary driver controlled everything else, including where to score, which piece to intake, etc.
This way there are fewer timing issues from miscommunication. The secondary driver can select which intake state the main driver’s intake button will go to, but the main driver only ever has one intake button. Same for scoring.
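The "operator pre-selects, driver has one button" pattern above can be sketched in a few lines. This is a hypothetical plain-Python illustration of the idea, not the poster's actual code; in WPILib this would map onto command bindings, and the mode names here are made up:

```python
# Sketch (illustrative only): the operator changes what the driver's
# single button does; the driver never needs a second intake button.

class SelectableAction:
    """Operator picks the mode; the driver's one button runs it."""

    def __init__(self, actions, default):
        self.actions = actions      # mode name -> callable
        self.selected = default

    def select(self, name):
        # Operator control: change what the driver's button will do.
        if name not in self.actions:
            raise ValueError(f"unknown mode: {name}")
        self.selected = name

    def trigger(self):
        # Driver control: always the same single button.
        return self.actions[self.selected]()


intake = SelectableAction(
    {"ground": lambda: "deploy ground intake",
     "shelf": lambda: "raise to shelf height"},
    default="ground",
)

intake.select("shelf")     # operator pre-selects while the driver drives
print(intake.trigger())    # driver presses the one intake button
```

The same structure works for scoring: the operator selects the node/level, and the driver's score button always fires whatever is selected.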
The primary driver should be in charge of pulling the trigger on shooting. No matter how good the communication is between primary and secondary, only the primary driver really knows whether they will adjust the bot at the last second, which could miss the shot. The secondary should be doing all the things that make that trigger pull easier (making sure the shooter is at the correct angle, shooter wheels up to speed, basically anything your team hasn’t figured out how to automate). This year I don’t see the secondary having a lot of active robot tasks, so maybe make them almost a secondary coach to scout game pieces?
Everything on the robot should be automated enough such that a single driver can do everything.
Only possible exception being the climb at the end (we’re not doing the trap).
Driving swerve, shooting, intaking, scoring in the amp can all be done with a single controller with thoughtful automation. Who knows, maybe climb could be thrown in there too.
Driver - anything that affects the motion of the drivetrain (auto-align, boost, etc) [though in 2023 we gave driver the confirm button for arm to place the piece]
Operator - anything else
In the end it breaks down to personal preference, and this is what we ended up liking. Season-wise, the conversation should be something along the lines of software asking the driver/operator who wants what, and adjusting throughout drive practice.
My philosophy is that the main driver should have as much control over the robot as reasonably possible. Any actions that interact simultaneously with movement should be primarily under the driver’s control (that includes both intaking and scoring, and some end game mechanisms). The operator (secondary driver) can then have manual operations over various mechanisms should automation fail (such as manual arm/elevator/conveyor positioning if the driver’s pre-sets aren’t working properly), or devices that are largely independent of driving (some end game, deployable mechanisms, etc).
The analogy I use with my team is to imagine you’re playing a first person shooter video game. You have the ability to move your character and aim, but then you have to shout at someone sitting on the couch next to you to actually pull the trigger. I think we all would understand the frustration that would cause. So why put that frustration on our drivers?
The ability to anticipate is huge in maximizing your driving in RC cars, video games, and naturally also FRC. Driver and operator pairs can eventually build that trust and anticipation with each other, but it takes a lot of time and practice (and the right pairing of individuals). The more we can allow one driver to do it naturally, the better.
We haven’t yet truly achieved this philosophy in our end results, but I hope that we can.
Last year we had every single robot movement done by the driver (this means driving, ground intaking, shelf loading, and scoring). The only things the operator did were controlling the LED strips to signal to the human player and programming where to score the game piece (though the driver was the one that actually triggered the movement).
This served two purposes. First, minimizing communication for robot actions is very beneficial because it means we can run cycles faster. I don’t have to tell my operator to put down the intake, or have it down earlier than necessary; rather, I can just put it down while I’m driving. Second, the operator can serve as a second coach; last year, my operator was mostly focused on play-by-play calls (get that game piece, score it in this node) and my coach was focused on what other teams were doing, making sure we got links and RPs, et cetera.
Edit: I’d like to add that this was made possible by a lot of automation. The best way to minimize controls for the operator is to minimize controls overall; only one button to put the intake down, and it comes back up on its own, or one button to put our scoring arm out and when you release, the object scores. It’s definitely more work than necessary but it pays off.
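The "one button does everything" idea above (press to deploy, release to score and stow, nothing left over for a second person to sequence) can be sketched like this. This is a hedged plain-Python illustration, not the poster's code; in a real robot the press/release edges would come from the framework's trigger bindings:

```python
# Sketch (illustrative only): one button drives the whole arm sequence.

class OneButtonArm:
    """Press: arm out. Release: score, then stow. No second control."""

    def __init__(self):
        self.state = "stowed"
        self.log = []

    def on_press(self):
        # Driver holds the button: arm extends.
        self.state = "extended"
        self.log.append("extend arm")

    def on_release(self):
        # Release performs the score and the stow automatically,
        # so there is nothing for an operator to sequence.
        self.log.append("score piece")
        self.log.append("stow arm")
        self.state = "stowed"


arm = OneButtonArm()
arm.on_press()     # driver holds the button: arm goes out
arm.on_release()   # driver lets go: piece scores, arm stows itself
print(arm.log)     # ['extend arm', 'score piece', 'stow arm']
```

An intake that "comes back up on its own" is the same shape: press deploys, and the retraction is triggered by release or by a sensor, not by another human.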
Thank you, all! Any thoughts on how to do auto-align? There are a ton of things to auto-align with this year (amp, speaker, source, stage, notes), and we’re debating whether to have one button for note align and another for the amp, speaker, source, and stage, or to split them across five distinct buttons. The former would be easier to manage button-wise, but then there’s the issue of whether the camera can see multiple AprilTags at once.
We’ve got automatic pathfinding in place for the amp and the source, so hopefully we can have one button scoring (including driving) for the amp. We might not use the source pathfinding, as it might end up being faster to pick up off the ground rather than be parked right up against the source.
For the speaker we’ve got auto-aim, with eyes set on automatically finding the distance to adjust shooter angle/speed accordingly. It’s set to also take the robot’s own velocity into account, hopefully allowing for shooting on the move. It will probably need a bit of adjustment to get working correctly.
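The core of the velocity compensation mentioned above is simple vector math: for the note to travel toward the target relative to the field, the shooter must impart the desired shot vector minus the robot's own velocity. Here's a rough 2D sketch under that assumption; the function name and numbers are illustrative, not anyone's actual implementation:

```python
# Sketch (illustrative only): subtract robot velocity from the desired
# field-relative shot vector to get what the shooter must supply.
import math

def compensate_shot(target_dx, target_dy, shot_speed, robot_vx, robot_vy):
    """Return (aim_angle_rad, launch_speed) the shooter should use."""
    dist = math.hypot(target_dx, target_dy)
    # Field-relative velocity we want the note to have.
    want_vx = shot_speed * target_dx / dist
    want_vy = shot_speed * target_dy / dist
    # The robot's motion adds to whatever the shooter imparts,
    # so the shooter supplies the difference.
    shoot_vx = want_vx - robot_vx
    shoot_vy = want_vy - robot_vy
    return math.atan2(shoot_vy, shoot_vx), math.hypot(shoot_vx, shoot_vy)

# Stationary robot: aim straight at the target at the nominal speed.
angle, speed = compensate_shot(3.0, 0.0, 10.0, 0.0, 0.0)
print(round(angle, 3), round(speed, 3))   # 0.0 10.0

# Strafing sideways at 2 m/s: the aim leads against the motion.
angle, speed = compensate_shot(3.0, 0.0, 10.0, 0.0, 2.0)
```

A real implementation also has to fold in shot flight time (so the in-flight drop and travel matter) and the vertical axis, which is where most of the tuning lives.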
Last year, we had a touchscreen with buttons for the different scoring levels. Operator selected the level; driver controlled when the extension and retraction occurred.
I know of at least one other team whose operator controlled the levels by holding buttons on a gamepad, but the driver still controlled when.
I could see either of these being options for this year – let the driver hit the “align to x” button and let the operator be the one who has pre-selected x. Or, if you’re doing full pose estimation, have the auto-align be location-dependent instead.
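One way the "location-dependent" option could work is a single align button whose target is chosen from the robot's estimated pose, e.g. by nearest distance. A minimal sketch, assuming made-up field coordinates (the target positions below are purely illustrative):

```python
# Sketch (illustrative only): one align button, target picked by pose.
import math

# Hypothetical target positions (x, y) in meters -- not real field data.
TARGETS = {
    "amp":     (1.8, 7.7),
    "speaker": (0.2, 5.5),
    "source":  (15.7, 0.7),
    "stage":   (4.8, 4.1),
}

def nearest_target(robot_x, robot_y):
    """Return the name of the target closest to the robot's pose."""
    return min(TARGETS,
               key=lambda n: math.dist((robot_x, robot_y), TARGETS[n]))

print(nearest_target(1.0, 6.5))   # near the amp/speaker corner
```

This also sidesteps the multiple-AprilTags-in-view problem, since the pose estimate, not the camera frame, decides which target you meant.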
Another thing to look into as a control surface for the operator is an Adafruit RP2040 Macropad. It’s pretty cheap and can be programmed to show up as a standard gamepad, making it easy to integrate into robot code without having to run a separate utility on the driver station laptop (like a MIDI-to-gamepad bridge).
I’ll be sharing the CircuitPython code I wrote pretty soon (as long as I remember to do it). So far I’ve gotten it to show up as a standard controller (as mentioned) and have the small OLED display show what each button does and when it’s pressed.