Recently I watched a video on how a team used sensors to alert their operator when a ball was in their storage. It got me thinking about how automated an FRC robot can become with some programming magic.
Out of curiosity, how has your team automated your 2022 bot or improved the feedback your drive team receives during a match? What kinds of sensors or code do you use? What features have you added to take some stress off the drive team?
- vibration in the driver's Xbox controller when the shooter is up to RPM/aligned
- one button for align + spin up, and another one for shoot
- climber telescoping setpoints, but we are working on a full auto climb
- and ofc 5-ball
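The rumble feedback above can be sketched as a simple readiness check. This is a hypothetical Python sketch (on a real robot this would drive something like WPILib's controller rumble API); the tolerance values are made-up placeholders:

```python
# Sketch of controller-rumble feedback: rumble only when the shooter is at
# speed AND the turret is aligned, so the driver feels "ready to shoot".
# Tolerances are illustrative assumptions, not real tuned values.

RPM_TOLERANCE = 75      # assumed acceptable flywheel error, in RPM
ALIGN_TOLERANCE = 2.0   # assumed acceptable heading error, in degrees

def rumble_strength(flywheel_rpm, target_rpm, heading_error_deg):
    """Return 0.0-1.0 rumble intensity for the driver controller."""
    at_speed = abs(flywheel_rpm - target_rpm) < RPM_TOLERANCE
    aligned = abs(heading_error_deg) < ALIGN_TOLERANCE
    return 1.0 if (at_speed and aligned) else 0.0
```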
We mainly use encoders for odometry and flywheel control, IR sensors for ball detection in the indexer, and REV color sensors for wrong-color ball detection.
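Wrong-ball detection from a color sensor boils down to comparing the red and blue channels against the alliance color. A minimal sketch, assuming raw red/blue readings like those a REV Color Sensor V3 exposes (the threshold is a made-up placeholder):

```python
# Hypothetical sketch of wrong-ball detection from a color sensor's raw
# red/blue channel readings. The 1.3 ratio threshold is an assumption.

def detect_ball_color(red, blue, threshold=1.3):
    """Classify a cargo as 'red', 'blue', or None (no confident reading)."""
    if red > blue * threshold:
        return "red"
    if blue > red * threshold:
        return "blue"
    return None

def is_wrong_ball(red, blue, alliance):
    """True if a confidently-detected ball doesn't match our alliance."""
    color = detect_ball_color(red, blue)
    return color is not None and color != alliance
```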
Our biggest driver feedback was our robot's LEDs. We strung addressable LEDs along our shooter plates and used them to indicate the number of balls (dotted for zero, dashed for one, solid for two) and whether we had a vision lock on the goal (red for no, blue for yes). Our full shooting sequence was automated, so the driver only had one button to line up and shoot, and our routing was fully automated. In the end our driver mainly used two buttons: intake (extend and spin) and shoot. There were a few others to help with edge cases too, but that was 95% of it.
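The dotted/dashed/solid encoding can be sketched as a per-pixel on/off pattern plus a lock color. This is an illustrative Python sketch (on the robot the pattern would be written to something like WPILib's AddressableLED buffer); strip length and exact spacing are assumptions:

```python
# Sketch of the ball-count LED encoding described above: dotted for zero,
# dashed for one, solid for two. Spacing choices are illustrative.

def led_pattern(num_balls, strip_length=12):
    """Return a list of per-pixel on/off values for the strip."""
    if num_balls == 0:                      # dotted: every 4th pixel on
        return [i % 4 == 0 for i in range(strip_length)]
    if num_balls == 1:                      # dashed: alternating pairs
        return [(i // 2) % 2 == 0 for i in range(strip_length)]
    return [True] * strip_length            # solid: two balls

def led_color(has_vision_lock):
    """Blue for a vision lock on the goal, red otherwise."""
    return (0, 0, 255) if has_vision_lock else (255, 0, 0)
```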
Our big driver feedback mechanism was LEDs along the Limelight mounts that displayed different states for our driver and operator. The lights for an unloaded robot were the alliance color (primary red or blue), and changed to light blue when loaded (as indicated by the beam-break sensors in the indexer, which also controlled the different feed wheels to properly position the cargo). They changed to light purple during aiming, then dark purple when the Limelight had locked on the target and the shooter was spun up and ready to shoot. The LEDs also changed to green to indicate when the climb controls were activated. It worked out pretty well, giving our driver a lot of info without requiring him to take his eyes off the robot.
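That state progression is effectively a priority chain. A minimal sketch, with state names taken from the post but the priority ordering being an assumption:

```python
# Sketch of the LED state priority described above: climb mode wins, then
# shooter-ready, aiming, loaded, and finally the alliance color as default.

def led_state(alliance, loaded, aiming, locked_and_spun_up, climbing):
    """Return the LED color name for the current robot state."""
    if climbing:
        return "green"
    if locked_and_spun_up:
        return "dark purple"
    if aiming:
        return "light purple"
    if loaded:
        return "light blue"
    return alliance  # "red" or "blue"
```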
That’s cool. For the flywheel, did you set it to run at idle throughout the match, or would you spool it up from zero every time?
Are you guys running preprogrammed paths during teleop? That is impressive. How do you react to defense with that setup?
That’s really smart. I’ve been seeing a lot of strategic usage of LEDs this year.
We had an auto-shoot mechanism that fired when the criteria were met (turret has the target in camera view, correct RPM for the distance, robot motion below a threshold). Similar to, but not as robust or sophisticated as, 1690’s, because we’re still camera-dependent.
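The three criteria form a simple gate. A minimal sketch, with placeholder thresholds (the real values would be tuned on the robot):

```python
# Sketch of the auto-shoot gate: fire only when all three criteria hold
# (target visible, RPM on target, robot nearly stationary). Thresholds
# are made-up placeholders.

def ready_to_shoot(target_visible, rpm_error, robot_speed_mps,
                   max_rpm_error=100, max_speed=0.25):
    """True when every auto-shoot criterion is satisfied."""
    return (target_visible
            and abs(rpm_error) <= max_rpm_error
            and robot_speed_mps <= max_speed)
```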
Our initial idea was status LEDs like some others have mentioned, but then we decided to just use the info directly. We still had a manual mode for the shooter, though.
[edit2] Depending on the game design, my opinion is that we could see a robot capable of fully autonomous operation for the whole game as early as next season.
We did not have preprogrammed paths, but we did use a PID loop on rotation to line up with the target. Paths could have been cool, but they’d have the problems you mentioned.
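A rotation PID to a vision target typically drives the camera's horizontal offset to zero. A minimal sketch, assuming a Limelight-style `tx` angle; the gains are made up, not tuned values:

```python
# Minimal PID sketch for rotating toward a vision target: the controller
# drives the camera's horizontal offset (tx) to zero. Gains are made-up
# placeholders; the output would feed the drivetrain's turn command.

class TurnPID:
    def __init__(self, kp=0.03, ki=0.0, kd=0.004):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, tx_degrees, dt=0.02):
        """tx_degrees: horizontal angle to target; returns a turn command."""
        error = -tx_degrees          # drive the offset to zero
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```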
It has a very, very high moment of inertia, so we ran it at the speed for our most common shot (the tarmac line).
The main design philosophy I’d recommend to all teams: Start by defining the minimum amount of information that has to be transferred human->robot, and robot->human.
The smaller you can make these two buckets, the more time the human will have to make match strategy decisions. Which, ultimately, is what they’ll pretty much always be better at than the robot.
For our design, it came down to:
- Drivetrain motion (3 axes)
- Pull cargo into the robot
- Shoot cargo high goal
- Shoot cargo low goal
- Climbing desired/not-desired
- Raise/lower climber
Small enough that we could fit it onto one controller and one set of LEDs.
Everything else that we could automate was automated. The sequence of lowering the intake, running one or more motors in the intake and serializer, spooling up the shooter wheel, feeding balls into the shooter wheel at the appropriate pace and only when the wheel was in range to shoot accurately…
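The "feed only when the wheel is in range" part of that sequence can be sketched as a tiny state machine, in the spirit of command-based sequencing. All names here are illustrative, not the team's actual code:

```python
# Sketch of the automated shoot sequence as a tiny state machine:
# SPOOL_UP -> FEED -> DONE, pausing the feed whenever the flywheel
# drops out of its accurate speed range. Purely illustrative.

def shooter_step(state, flywheel_at_speed, ball_at_feeder):
    """Advance one step of the shoot sequence; returns the next state."""
    if state == "SPOOL_UP":
        return "FEED" if flywheel_at_speed else "SPOOL_UP"
    if state == "FEED":
        if not flywheel_at_speed:
            return "SPOOL_UP"        # wheel sagged after a shot: pause feeding
        return "DONE" if not ball_at_feeder else "FEED"
    return "DONE"
```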
I still have this pipe dream of doing all the control on a fancy touchscreen tablet, where the drive team just presses buttons to add/remove/reorder tasks in a queue, and the robot simply chunks through the queue as fast as it can. The biggest missing pieces IMO are accurate-enough full-field odometry, and “just the right” sensor solution to execute a collision-avoidance algorithm.
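The queue side of that idea is straightforward; the hard parts are the odometry and collision avoidance it would sit on top of. A minimal sketch of the add/remove/reorder queue (purely illustrative):

```python
# Sketch of the "task queue" idea: the drive team edits a queue from a
# tablet UI, and the robot pops and executes tasks in order. Illustrative.

from collections import deque

class TaskQueue:
    def __init__(self):
        self.tasks = deque()

    def add(self, task):
        self.tasks.append(task)

    def remove(self, task):
        self.tasks.remove(task)

    def reorder(self, task, new_index):
        """Move an existing task to a new position in the queue."""
        self.tasks.remove(task)
        tasks = list(self.tasks)
        tasks.insert(new_index, task)
        self.tasks = deque(tasks)

    def next_task(self):
        """Pop the next task for the robot to execute, or None if empty."""
        return self.tasks.popleft() if self.tasks else None
```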
Would such a thing be better than the best human drivers? Probably not. Would it show off the crazy cool things this control system is capable of? You betcha.