Intro
I am Silas, and I am the programming lead on 4265 Secret City Wildbots. We won two separate regionals and went to Worlds this year with our robot Sidewinder, and we have been there ten out of the eleven years we have been a team. Our robot for this year includes a dual intake, a shooter, a mid-rung climber, and best of all: a shifting swerve! This post is an overview of our LabVIEW code for Sidewinder, a description of our custom Python-based path planner, and repositories for our previous and current code.
Shifting Swerve Code
Kinematics and Controllers
Our swerve drive automatically handles failures and calibrations. If an azimuth module fails, we switch the motor to coast mode, disable the drive motor for the affected module, and remove the module from the robot pose calculations. To prevent the swerve modules from opposing each other while transitioning to a new azimuth heading, we slow down any module that is going against the others. To reduce system complexity, only relative azimuth encoders are used, requiring additional error checking to recover from possible motor controller or RoboRIO brownout conditions. We also assist the driver by locking the robot's orientation at high speeds, which prevents the robot from drifting, and orientation control is automatically switched on while we are climbing for easier adjustments and proper line-up. New this year, an angle unwrapping algorithm allows the azimuth control loop to run on the motor controllers while still choosing the optimal (per-module) rotation and drive motor directions. We also automatically control whether the swerve drive is in robot-oriented or field-oriented mode based on driver inputs and emergency overrides. Finally, we use customizable driver profiles so each driver can have personalized settings, and while we have not had much time to practice it, drivers can also change the swerve rotation center to enable complex maneuvers.
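To illustrate the unwrapping idea: given the continuous angle reported by the relative encoder and a goal heading, pick the nearest equivalent setpoint, and reverse the drive motor instead of ever rotating more than 90 degrees. Our real implementation is LabVIEW logic feeding the motor controllers; this Python sketch is just a model of the math:

```python
import math

def unwrap_azimuth(current_deg: float, goal_deg: float) -> tuple[float, float]:
    """Pick the nearest equivalent azimuth setpoint and a drive direction.

    current_deg: continuous (unwrapped) angle from the relative encoder
    goal_deg:    requested module heading in [-180, 180)
    Returns (setpoint_deg, drive_sign). The setpoint stays continuous, so
    the control loop on the motor controller never sees a +/-180 wrap.
    """
    # Smallest signed difference between the goal and current heading
    delta = (goal_deg - current_deg + 180.0) % 360.0 - 180.0
    drive_sign = 1.0
    # Turning more than 90 deg is never optimal: rotate to the opposite
    # heading and reverse the drive motor instead (per-module choice)
    if abs(delta) > 90.0:
        delta -= math.copysign(180.0, delta)
        drive_sign = -1.0
    return current_deg + delta, drive_sign
```

Because the setpoint stays continuous, a module can accumulate whole rotations without the onboard control loop ever being asked to take the long way around.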
Shifting
Each swerve module shifts pneumatically (i.e., between low gear and high gear) independently. We use two photo-interrupters on each module to determine the real shifter state and compare it to our requested shifter state. Using this information, we can determine whether the robot is in high or low gear so that we can use the correct unit conversion factors, feedforward coefficients, and control saturation scalings. We can also compare the requested and actual shifter states to know when we are transitioning between gears, stutter the ball shifter if it gets stuck while transitioning, and detect any shifter failures. The robot pose is used to latch the azimuth angles during the rapid deceleration that can occur when down-shifting, to prevent the robot from tipping at high speeds. The pose state information is also used to estimate the robot velocity during shifting, when the drive motor encoders are decoupled from the drive wheel.
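As a rough model of the state logic (our real code is LabVIEW, and the timeout below is an illustrative placeholder, not our calibration), the two photo-interrupters classify the shifter state like so:

```python
from enum import Enum, auto

class Gear(Enum):
    LOW = auto()
    HIGH = auto()
    BETWEEN = auto()   # neither photo-interrupter tripped: mid-shift
    FAULT = auto()     # both tripped at once: sensor/shifter failure

def actual_gear(low_sensor: bool, high_sensor: bool) -> Gear:
    """Classify the real shifter state from the two photo-interrupters."""
    if low_sensor and high_sensor:
        return Gear.FAULT
    if low_sensor:
        return Gear.LOW
    if high_sensor:
        return Gear.HIGH
    return Gear.BETWEEN

def should_stutter(requested: Gear, actual: Gear,
                   time_since_request_s: float,
                   timeout_s: float = 0.25) -> bool:
    """Pulse the shift cylinder if the ball shifter sticks mid-shift.
    timeout_s is an illustrative tuning value, not our calibration."""
    return actual is not requested and time_since_request_s > timeout_s
```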
Autonomous Code
Robot Autonomy and Path Planner
Before a match, our autonomous program loads a text file of commands to execute the autonomous play. We parse these user-input commands into parallel drive and subsystem instruction sequencers. These commands have multiple modes including a configurable delay that enables us to synchronize with our alliance partners. To drive complex paths, we created a custom path planner that lets the user specify multiple waypoints, translational velocities, rotational velocities, and orientations that the robot must reach. Our path planner automatically smooths sharp turns, limits the robot’s translational acceleration, and ramps the rotational velocity during these paths. Our LabVIEW path follower is a modified pure-pursuit algorithm (see more info below). It includes both a feed-forward controller that calculates our goal location along the path and a feed-back controller that compensates for the inherent errors in following the path. New for next year, the path planner will allow us to input calibration points based on measurements made on the competition field.
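For illustration, the parsing step might look like the following Python sketch. The command syntax shown in the comments is hypothetical, not our real play format:

```python
# Hypothetical command format: "<sequencer> <command> [args...]", e.g.
#   drive follow_path ball2.path
#   subsystem delay 1.5
#   subsystem intake on
def load_play(path: str) -> dict[str, list[list[str]]]:
    """Parse a text-file play into parallel instruction sequencers."""
    sequencers: dict[str, list[list[str]]] = {"drive": [], "subsystem": []}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()   # allow comments and blanks
            if not line:
                continue
            target, *tokens = line.split()
            if target not in sequencers:
                # Unknown sequencer name: flag the typo before the match
                raise ValueError(f"unknown sequencer: {target}")
            sequencers[target].append(tokens)
    return sequencers
```

Validating the file at load time is also what lets us announce autonomous play typos before a match rather than discovering them on the field.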
Once we finish writing a play or path, we can immediately test autonomous, even while our robot is stationary on the cart. This autonomous test mode allows us to simulate or bypass crucial sensor readings, such as the IMU and LimeLight. Because the robot is stationary on the cart, we had difficulty visualizing its estimated position on the field, so we created a map that displays the robot pose, shoot-while-moving compensation, and the path the robot has taken, using either the physical sensors during a match or simulated sensor readings in test mode. To help us easily debug any problems, we record this path, audible warnings, and the robot's status during each match using OBS Studio. Our guiding design principle has been to allow us to develop new auto plays and paths "in the queue line" and have them complete successfully during the match without any testing on the field.
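The bypass pattern itself is simple. The sketch below only illustrates the idea; the `hw` and `sim_pose` objects and their fields are hypothetical stand-ins, not our real interfaces:

```python
def read_sensors(test_mode: bool, hw, sim_pose) -> dict:
    """Return real or simulated readings so auto plays can run on the cart.

    hw (hardware interfaces) and sim_pose (a pose integrated from the
    commanded path) are hypothetical stand-ins for illustration.
    """
    if test_mode:
        return {
            "imu_yaw_deg": sim_pose.yaw_deg,   # simulated IMU heading
            "limelight_valid": False,          # bypass vision entirely
        }
    return {
        "imu_yaw_deg": hw.imu.yaw_deg(),
        "limelight_valid": hw.limelight.has_target(),
    }
```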
On the field, our robot estimates its position and orientation (pose) by constantly fusing data from multiple sensors, including the IMU, swerve drive encoders, swerve azimuth encoders, and the LimeLight vision camera. We continue to run our autonomous modes after the autonomous period of a match ends, until the driver moves the joystick. This feature allows us to leverage the crucial seconds between the end of autonomous and when our drivers can pick up their game controllers.
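The handoff logic is conceptually just a deadband check; something like this sketch, where the deadband value is an illustrative placeholder:

```python
def select_commands(auto_cmds, driver_cmds, deadband=0.1):
    """Continue running the auto play after the autonomous period ends,
    handing control back the moment the driver moves a stick.
    deadband is an illustrative threshold, not our calibration."""
    driver_active = any(abs(c) > deadband for c in driver_cmds)
    return driver_cmds if driver_active else auto_cmds
```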
Autonomous Path Follower
Essentially, we have a feed-forward path follower with a feed-back pure-pursuit controller running on top of it (a simplified sketch follows this list):
- First, we generate a smooth path using a custom path planner. This computes smooth translational and rotational velocity profiles and does some basic acceleration limiting, but we haven't explored curvature-related limits yet.
- Based on the robot's pose, we identify the closest smooth point on the path within our search window (which we prevent from ever extending backwards, effectively preventing the robot from traveling backwards along the path).
- From this closest point we grab the following information:
  - Distance along the path from the start.
  - Distance along the path from the end.
  - The feed-forward velocity magnitude.
  - The feed-forward velocity angle (for us this is essentially the pointing direction for our swerve modules).
  - The robot orientation goal (for us this is an independent degree of freedom).
  - The robot orientation feed-forward velocity magnitude.
- We then identify our look-ahead point; this is where the robot should be at any given time and is used to calculate the error used by the feed-back controller. The look-ahead point is defined as either the smooth point that is the look-ahead time away from our current position or the "chase" point, whichever is closer to us (assuming neither is behind us). The chase point is where the robot should currently be located along the path based entirely on the feed-forward values generated by the path planner. This (a) compensates for the robot drifting off of the path due to imperfections in the feed-forward controller and (b) keeps us moving along the path when we stop to touch points, like in the 2021 bounce path.
- Once we have this look-ahead point, we grab the following information:
  - Distance error from the current pose to the look-ahead point.
  - Pointing angle from the current pose to the look-ahead point (for us this is essentially the pointing direction for our swerve modules).
- Next, we have to do some conversions. The feed-forward velocity comes from the path planner in inches/s and must be converted to our standardized non-dimensional drive command [0,1] based on the maximum ground velocity we can achieve. The distance error is in inches and is "converted" into a non-dimensional drive command [0,1] by a PID controller. The independent heading error is "converted" from degrees to a non-dimensional rotation command [-1,1] by another PID controller.
- The feed-forward and feed-back drive commands are broken into their vector components using the feed-forward velocity angle and the feed-back pointing angle, respectively. They are then summed, and the result is field-centric X and Y drive commands.
- These drive and rotation commands get passed to all of our TeleOp swerve commands. Downstream, we run the Falcons in voltage compensation mode during auto and apply another linearizing feed-forward controller to convert the non-dimensional motor commands [-1,1] to non-dimensional motor power commands [-1,1] that scale linearly with motor RPM.
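To make the flow concrete, here is a minimal Python sketch of one follower iteration. Our real follower is written in LabVIEW, so this is an illustration rather than our actual code: `pose` is assumed to have `x`, `y`, and `theta` fields, `pid_dist`/`pid_rot` stand in for our PID controllers, `i_chase` is the chase-point index (advanced elsewhere from the feed-forward velocity), and the 25-point search window is a placeholder:

```python
import math
from dataclasses import dataclass

@dataclass
class PathPoint:
    x: float; y: float   # field position (inches)
    s: float             # distance along the path from the start (inches)
    v_ff: float          # feed-forward speed (inches/s)
    v_angle: float       # feed-forward velocity direction (rad)
    heading: float       # independent robot-orientation goal (rad)

def follow_step(pose, path, i_prev, i_chase, lookahead_t,
                v_max, pid_dist, pid_rot):
    """One follower iteration; returns field-centric drive/rotation commands."""
    # Closest smooth point, searched forward-only from the previous index
    # so the goal can never move backwards along the path
    window = range(i_prev, min(i_prev + 25, len(path)))
    i_close = min(window, key=lambda k: math.hypot(path[k].x - pose.x,
                                                   path[k].y - pose.y))
    close = path[i_close]

    # Look-ahead point: the smooth point lookahead_t seconds further along
    # the path, or the feed-forward chase point, whichever is closer to us
    # (and never behind the closest point)
    i_time = i_close
    while (i_time + 1 < len(path)
           and path[i_time + 1].s - close.s < close.v_ff * lookahead_t):
        i_time += 1
    look = path[max(i_close, min(i_time, i_chase))]

    # Feed-forward drive command: inches/s -> non-dimensional [0, 1]
    ff = close.v_ff / v_max
    # Feed-back drive command: PID "converts" inches of error to [0, 1]
    err = math.hypot(look.x - pose.x, look.y - pose.y)
    fb = pid_dist(err)
    fb_angle = math.atan2(look.y - pose.y, look.x - pose.x)

    # Sum the vector components -> field-centric X/Y drive commands
    cmd_x = ff * math.cos(close.v_angle) + fb * math.cos(fb_angle)
    cmd_y = ff * math.sin(close.v_angle) + fb * math.sin(fb_angle)
    # Independent heading error -> non-dimensional rotation command [-1, 1]
    heading_err = (look.heading - pose.theta + math.pi) % (2 * math.pi) - math.pi
    cmd_rot = pid_rot(heading_err)
    return cmd_x, cmd_y, cmd_rot, i_close
```

The returned `i_close` seeds the next iteration's search window, which is what keeps the follower from ever backing up along the path.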
Robot Pose
Our robot fuses several data sources to locate its own position on the field. First, an estimate of our pose is computed from our previous pose (initially set from manual coordinates on the dashboard), how far we have measured the azimuth and drive encoders to move, and measurements from our IMU (odometry). This only provides an estimate, however, so we fuse this odometry with data received from the LimeLight when it is available. We use a Kalman filter to align the times that the LimeLight data and encoder data were captured, because the LimeLight adds extra delay. We then check afterward that the robot is within the field boundaries for safety purposes. This produces our final calculated pose, which is displayed on the dashboard. Initially, we struggled quite a bit with network table latency, particularly with the LimeLight; after adjusting how we send data, we fixed this issue.
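The sketch below illustrates the timestamp-matching idea in Python, with a fixed blend factor standing in for the Kalman gain; the gain, history length, and class structure are illustrative, not our real filter (the field dimensions reflect the 2022 field, roughly 54 ft by 27 ft):

```python
from collections import deque

FIELD_X_IN, FIELD_Y_IN = 648.0, 324.0   # 2022 field: 54 ft x 27 ft, in inches

class LatencyCompensatedPose:
    """Illustrative fusion of delayed LimeLight poses with odometry.
    A fixed blend factor k stands in for the Kalman gain here."""
    def __init__(self, k: float = 0.2, history_s: float = 0.5):
        self.k = k
        self.history_s = history_s
        self.history = deque()   # (timestamp, x, y) odometry snapshots
        self.x = self.y = 0.0

    def update_odometry(self, t: float, dx: float, dy: float) -> None:
        """Integrate encoder/IMU motion and remember where we were when."""
        self.x += dx
        self.y += dy
        self.history.append((t, self.x, self.y))
        while self.history and self.history[0][0] < t - self.history_s:
            self.history.popleft()

    def update_vision(self, t_capture: float, vx: float, vy: float) -> None:
        """Apply a LimeLight pose captured at (earlier) time t_capture."""
        if not self.history:
            return
        # Compare the vision pose against the odometry pose at capture time,
        # not the current one, to cancel out the LimeLight's extra delay
        _, px, py = min(self.history, key=lambda h: abs(h[0] - t_capture))
        self.x += self.k * (vx - px)
        self.y += self.k * (vy - py)
        # Safety check: never report a pose outside the field boundaries
        self.x = min(max(self.x, 0.0), FIELD_X_IN)
        self.y = min(max(self.y, 0.0), FIELD_Y_IN)
```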
Subsystems
We have a shooter on a turret which automatically tracks the target based on the estimated robot pose. The robot shoots automatically when requested by the driver and when ready (i.e., when the shooter is at speed and the robot is properly aimed). We also calculate the future pose of our robot based on our current velocity and use that to shoot while moving. Compensations are made to adjust for the changing robot pose with different turret angles and shooter speeds.
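Conceptually, the shoot-while-moving compensation projects the pose forward by the total shot latency and aims from there. A hedged Python sketch (the `lead_time_s` value and the target coordinates are placeholders, and our real code also folds in the turret-angle and shooter-speed compensations):

```python
import math

def aim_while_moving(x, y, vx, vy, target_x, target_y, lead_time_s):
    """Aim using the pose the robot will occupy when the ball leaves.
    lead_time_s (shot latency plus flight time) is illustrative."""
    # Project the current pose forward along the measured field velocity
    fx = x + vx * lead_time_s
    fy = y + vy * lead_time_s
    # Turret angle and range are then computed from the predicted pose
    turret_rad = math.atan2(target_y - fy, target_x - fx)
    range_in = math.hypot(target_x - fx, target_y - fy)
    return turret_rad, range_in
```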
Our ball handling starts with the intake. The intake's encoder velocity is measured to detect jammed balls, which are removed by reversing the intake direction. We also have photon cannons mounted above the intake which are turned on whenever the intake is active; this helps the driver see the balls they are collecting for faster ball collection. In the indexer and tower, we have two beam-breaks and a color sensor, allowing us to see how many balls we have and automatically reject wrong-color balls (shooting them toward the nearest HANGAR or driver station wall) while auto-targeting.
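The jam detection amounts to comparing measured and commanded intake speed and pulsing the motor in reverse; a sketch of that idea in Python, with the threshold and timings as illustrative placeholders:

```python
def intake_jammed(measured_rpm: float, commanded_rpm: float,
                  stall_fraction: float = 0.3) -> bool:
    """Flag a jam when the intake spins far slower than commanded.
    stall_fraction is an illustrative threshold, not our calibration."""
    return commanded_rpm > 0.0 and measured_rpm < stall_fraction * commanded_rpm

class JamRecovery:
    """Reverse the intake briefly to eject a jammed ball (timings illustrative)."""
    def __init__(self, reverse_s: float = 0.5):
        self.reverse_until = 0.0
        self.reverse_s = reverse_s

    def command(self, t: float, jammed: bool, forward_cmd: float) -> float:
        if jammed and t >= self.reverse_until:
            self.reverse_until = t + self.reverse_s   # start a reverse pulse
        return -forward_cmd if t < self.reverse_until else forward_cmd
```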
We also have a climber which is capable of a mid bar climb. While in climbing mode (activated by the manipulator), the robot automatically faces in the correct direction and orientation control is switched on for easier line-ups and faster climbs.
Driver Feedback and Dashboard Code
Our dashboard code is the primary interface with the user. To help drivers navigate the dashboard, we use three main tabs: Setup/Autonomous, TeleOp, and Tune. Information for the manipulator appears in the TeleOp tab, which we access during the match. The tablet provides continuous driver feedback with warning lights, voice commands, state indicators, and camera overlays, and it enables faster UI and robot design iterations. The TeleOp screen also includes multiple overrides, including a safety override for emergencies. The Tune tab includes crucial information such as Driver Profile settings, numerous calibrations, swerve module information (i.e., temperature and angle), and a custom actuator testing mode that allows each motor, piston, and even the compressor to be tested individually.

Physical indicators such as the robot LEDs and the driver's joystick rumble provide further feedback by showing the robot mode. Our LEDs have different modes to communicate different types of information. In the default mode, the state of the robot (autonomous, TeleOp, or disabled) and master alarms and cautions are shown. We also have a debug mode with information about different modules, sensors, and the turret; a locate-LED mode to pick specific LEDs from the strip; a choose-hue mode to select the best colors from the LEDs; and Party mode for fun! The joystick vibrations are used to indicate active intakes. We also use audible warnings with a voice we named "KAREN!" KAREN! alerts the drivers about specific sensor and swerve module failures, motor temperatures, electrical faults, pressure leaks, control latency, autonomous play typos, tablet failures, and calibration instructions. Finally, we have added an error map diagram overlay to display which parts of the robot have errors, as well as a joystick command diagram overlay.
I am happy to answer any questions you might have about the information listed here.
Repositories