4265's 2020 Code and Technical Resources

Hi, I’m on the Secret City Wildbots and here is our code and resources from 2020. I am happy to answer any questions! :grinning: :robot:

Secret City GitHub



Our autonomous code uses a pure pursuit algorithm with rotation control. We use two separate sequencers, the drive sequencer and the subsystem sequencer, which run in parallel during autonomous mode. Autonomous plays are written as text files and uploaded to the roboRIO via FileZilla, and the appropriate play is selected before the match via the FRC 4265 Dashboard. For each sequencer, every step contains a command and a parameter string of arbitrary length. The drive sequencer selects commands using a case structure, while the subsystem sequencer selects commands using static VI references; we chose two different dispatch mechanisms because of their differing memory-handling and ease-of-creation tradeoffs. Our code automatically checks for and handles errors, especially while parsing the user-created autonomous play file.

We can also test new plays and paths while the robot is on the cart by simulating our autonomous code. This simulated autonomous mode required us to bypass or imitate crucial sensors, such as the LimeLight, the LIDAR, and the IMU. Throughout both of these autonomous modes, status updates are sent to the FRC Driver Station and the FRC 4265 Dashboard.

We continue to run our autonomous modes after the autonomous period of a match ends, until the driver moves the joystick. This feature lets us leverage the crucial seconds between the end of autonomous and when our drivers can pick up their game controllers. We also instituted a configurable delay at the beginning of each autonomous play that enables us to synchronize with our alliance partners.
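To make the play-file idea concrete, here is a minimal Python sketch of parsing a text play into command/parameter steps and dispatching them through a lookup table (standing in for LabVIEW's case structure). The play format, command names, and error handling here are assumptions for illustration, not 4265's actual format:

```python
# Hypothetical sketch of a text-based autonomous play parser and dispatcher.
# The play format and command names are invented, not 4265's actual format.

def parse_play(text):
    """Parse a play: one step per line, 'COMMAND param1 param2 ...'.
    Blank lines and '#' comments are skipped; unknown commands raise
    early, so a typo is caught when the play loads, not mid-match."""
    steps = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        command, *params = line.split()
        if command not in COMMANDS:
            raise ValueError(f"line {lineno}: unknown command {command!r}")
        steps.append((command, params))
    return steps

# Dispatch table standing in for LabVIEW's case structure.
COMMANDS = {
    "DRIVE_PATH": lambda params: f"following path {params[0]}",
    "WAIT":       lambda params: f"waiting {float(params[0])}s",
    "SHOOT":      lambda params: f"shooting {int(params[0])} balls",
}

def run_play(steps):
    return [COMMANDS[cmd](params) for cmd, params in steps]

play = """
# Example play
WAIT 0.5
DRIVE_PATH trench_run.csv
SHOOT 3
"""
print(run_play(parse_play(play)))
```

Validating commands at parse time mirrors the post's point about catching errors in the user-created play file before the match starts.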

Path planner

Our path planner for the pure pursuit algorithm allows the user to upload a picture of the field and add points to generate a planned path for the robot. The user can change each point’s “x” and “y” values, velocity, and orientation. We then apply a linear interpolation of the points and smooth out sharp turns. The path planner also automatically limits the robot’s acceleration and displays a scale model outline of the robot to assist the user with planning the paths. Path information, including the estimated completion time, is exported as both .csv files and .jpg image files.
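Two of the steps described above, densifying waypoints by linear interpolation and limiting acceleration, can be sketched in Python as follows. The spacing and acceleration limit are illustrative values, and this is a simplification of what a real path planner does:

```python
# Sketch of two path-planner steps: linear interpolation between waypoints,
# then an acceleration limit applied with forward/backward velocity passes.
# Spacing and max_accel values are illustrative assumptions.
import math

def interpolate(points, spacing=0.1):
    """Insert evenly spaced points between consecutive (x, y) waypoints."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        n = max(1, int(dist / spacing))
        for i in range(1, n + 1):
            t = i / n
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def limit_acceleration(velocities, spacing=0.1, max_accel=2.0):
    """Clamp each point's velocity so v1^2 <= v0^2 + 2*a*d holds when
    traversing the path in either direction (accel and decel limits)."""
    v = list(velocities)
    for i in range(1, len(v)):                      # forward pass (accel)
        v[i] = min(v[i], (v[i - 1] ** 2 + 2 * max_accel * spacing) ** 0.5)
    for i in range(len(v) - 2, -1, -1):             # backward pass (decel)
        v[i] = min(v[i], (v[i + 1] ** 2 + 2 * max_accel * spacing) ** 0.5)
    return v

path = interpolate([(0, 0), (1, 0)], spacing=0.5)
profile = limit_acceleration([0.0, 5.0, 0.0], spacing=0.5, max_accel=2.0)
```

The two-pass velocity clamp is a common way to enforce both acceleration and deceleration limits along a fixed path; smoothing of sharp turns is omitted here for brevity.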

Utility VIs

We have easily reusable utility VIs in our code. For example, we have a VI that loads .csv calibration files and applies linear interpolation, a custom PID algorithm (co-developed with MARS 2614), and a flexible state selector that converts boolean triggers into a robot state string, among many others.
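The state-selector idea translates naturally to text code. Here is a hedged Python sketch, with invented state names and a first-match-wins priority ordering that is an assumption about how such a selector might behave:

```python
# Sketch of a flexible state selector: an ordered list of boolean triggers
# is reduced to a single robot-state string. State names, the priority
# order, and the default are invented for illustration.

def select_state(triggers, default="IDLE"):
    """triggers: ordered (name, active) pairs; the first active
    trigger wins, so earlier entries have higher priority."""
    for name, active in triggers:
        if active:
            return name
    return default

state = select_state([
    ("CLIMBING", False),
    ("SHOOTING", True),   # highest-priority *active* trigger wins
    ("INTAKING", True),
])
print(state)
```

Centralizing the boolean-to-state mapping like this keeps priority conflicts in one place instead of scattered across subsystem code.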


Our swerve drive automatically handles failures and calibrations and works to assist the driver. If an azimuth module fails, we switch its motor to coast mode, disable the drive motor for the affected module, and remove the module from the robot pose calculations. We can also easily correct misaligned swerve modules by zeroing and recalibrating them in the code. Additionally, we assist the driver by locking the robot’s orientation at high speeds, which prevents the driver from accidentally rotating the robot and helps when driving under the control panel. We also automatically switch the swerve drive between robot-oriented and field-oriented mode based on driver inputs and emergency overrides, and customizable driver profiles let each driver have personalized settings and preferences.
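The failure-handling policy described above can be sketched as follows. Class and field names are invented; the point is the three-part response (coast the azimuth, disable the drive motor, exclude the module from pose math):

```python
# Illustrative sketch of the azimuth-failure response: coast the steering
# motor, disable the drive motor, and drop the module from pose math.
# Class and field names are assumptions, not 4265's actual code.

class SwerveModule:
    def __init__(self, name):
        self.name = name
        self.failed = False
        self.drive_enabled = True
        self.azimuth_mode = "brake"

    def mark_failed(self):
        self.failed = True
        self.drive_enabled = False     # stop driving the broken module
        self.azimuth_mode = "coast"    # let the wheel swivel freely

def modules_for_pose(modules):
    """Only healthy modules contribute to odometry/pose estimation."""
    return [m for m in modules if not m.failed]

modules = [SwerveModule(n) for n in ("FL", "FR", "BL", "BR")]
modules[1].mark_failed()
print([m.name for m in modules_for_pose(modules)])
```

Excluding a failed module from pose calculations matters because a dragged, uncontrolled wheel produces encoder data that would corrupt the odometry estimate.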

General architecture

We have a “super test mode” which allows us to test robot states and behaviors without enabling the robot. We also automatically manage the bandwidth for the CAN bus and network table by minimizing updates. Rather than constantly sending updates, we only update the CAN bus and network table when a given value changes. Similarly, we have an emergency power mode to conserve battery when the battery is low and while the robot is climbing. The emergency power mode automatically disables less important components, such as the LEDs and the compressor, to save power. We also use robot profiles to ensure that our code can be run on robots with different hardware specifications. These profiles allow certain sensors to be bypassed in the code depending on the robot’s hardware capabilities.
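The change-only publishing pattern is simple to illustrate: cache the last value sent per key and skip the write when nothing changed. The class and key names below are invented for the sketch:

```python
# Minimal sketch of change-only publishing: cache the last value sent per
# key and suppress the CAN/network-table write when the value is unchanged.
# Names are invented for illustration.

class ChangeOnlyPublisher:
    def __init__(self, send):
        self._send = send          # e.g. a network-table or CAN write call
        self._last = {}

    def publish(self, key, value):
        if self._last.get(key) != value:
            self._last[key] = value
            self._send(key, value)
            return True            # actually transmitted
        return False               # suppressed: value unchanged

sent = []
pub = ChangeOnlyPublisher(lambda k, v: sent.append((k, v)))
for v in (12.6, 12.6, 12.5, 12.5):
    pub.publish("battery_volts", v)
print(sent)   # only the two distinct values were transmitted
```

In practice slowly varying analog values are often quantized or deadbanded first, since raw sensor noise would otherwise defeat the change check.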

Pose and Targeting

To estimate the robot’s pose, we constantly fuse data from multiple sensors, including the IMU, swerve drive encoders, swerve azimuth encoders, LimeLight, and LIDAR (laser range finder). These sensors, combined with the pose estimate, are used to continuously estimate the robot’s current range and angle to the target. The robot pose can be reset on boot from the Autonomous Sequencer, on command from the FRC 4265 Dashboard, or from the driver’s joystick.
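As a deliberately simplified illustration of fusing those sources, here is a complementary-filter-style blend of fast-but-drifting odometry with an occasional absolute vision fix. 4265's actual fusion is certainly more involved; the gain and the two-source structure are assumptions:

```python
# Simplified illustration of pose fusion: fast-but-drifting odometry is
# nudged toward a slow-but-absolute vision fix when one is available.
# The blend gain (alpha) and two-source structure are assumptions.

def fuse(odometry_xy, vision_xy, vision_valid, alpha=0.05):
    """Return odometry blended toward the vision fix; odometry alone
    if vision has no valid target this cycle."""
    if not vision_valid:
        return odometry_xy
    ox, oy = odometry_xy
    vx, vy = vision_xy
    return ((1 - alpha) * ox + alpha * vx,
            (1 - alpha) * oy + alpha * vy)

pose = (2.0, 1.0)                       # drifted odometry estimate
pose = fuse(pose, (2.4, 1.2), True)     # vision fix pulls it back slightly
print(pose)
```

A small gain keeps the estimate smooth while still bounding odometry drift over time; a larger gain trusts vision more at the cost of jitter.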


We have an auto-targeting feature that enables drivers to automatically align the robot with the target and shoot when ready by holding a joystick button. We also constantly track the number of balls in the robot using a beam break sensor.
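Counting balls with a single beam-break sensor comes down to edge detection on the sensor signal. A minimal sketch, assuming one count per broken-to-clear transition (real code would also debounce the signal):

```python
# Sketch of ball counting with one beam-break sensor: count broken->clear
# transitions, i.e. one count per ball that fully passes the beam.
# The sample stream is illustrative; real code would debounce.

def count_balls(samples):
    """samples: sequence of booleans, True = beam broken."""
    count = 0
    prev = False
    for broken in samples:
        if prev and not broken:
            count += 1
        prev = broken
    return count

stream = [False, True, True, False, False, True, False, True, True, False]
print(count_balls(stream))
```

Tracking edges rather than the raw beam state means a ball that lingers in the beam is still counted exactly once.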

Unfinished Features

We have started creating the framework for additional subsystems that we do not yet have physically on the robot, so the code is unfinished. We have started the state controller for the turret, the framework for a shifting swerve module that automatically changes between high and low gear, and a modification to the shooter that changes the shooting angle with a servo rather than changing the shooter power output.

Driver Feedback/Dashboard Code

Our dashboard code is the primary interface with the user. To help drivers navigate the dashboard code, we used three main tabs: the Setup/Autonomous tab, the TeleOp tab, and the Tune tab.

Rather than using a joystick or physically mapped buttons, we used a touchscreen tablet for our manipulator’s controls, which appear in our TeleOp tab. The tablet expedites pre-match setup and configuration; provides continuous driver feedback with warning lights, state indicators, and camera overlays; and enables faster UI and robot design iterations.

Physical indicators such as robot LEDs and the driver’s joystick rumble provide further feedback by showing the robot mode. We also use audible warnings with a voice we named “KAREN!” KAREN! alerts the drivers about specific sensor and motor failures, motor temperatures, electrical faults, pressure leaks, control latency, autonomous play typos, tablet failures, and calibration instructions.
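One way to drive audible warnings like KAREN! is to rank active faults and speak only the most urgent one, suppressing repeats. This sketch is an assumption about how such an announcer might be structured; the fault names, priorities, and messages are invented:

```python
# Hypothetical sketch of a prioritized voice-alert announcer ("KAREN!"):
# speak only the most urgent active fault, once per change. Fault names,
# priorities, and messages are invented for illustration.

ALERTS = {  # fault id -> (priority, spoken message); lower = more urgent
    "motor_overtemp": (1, "Motor over temperature"),
    "pressure_leak":  (2, "Pneumatic pressure leak"),
    "play_typo":      (3, "Autonomous play failed to parse"),
}

class Announcer:
    def __init__(self, speak):
        self.speak = speak    # e.g. a text-to-speech call on the dashboard
        self.last = None

    def update(self, active_faults):
        """Speak the highest-priority active fault, once per change."""
        if not active_faults:
            self.last = None
            return
        fault = min(active_faults, key=lambda f: ALERTS[f][0])
        if fault != self.last:
            self.last = fault
            self.speak(ALERTS[fault][1])

spoken = []
karen = Announcer(spoken.append)
karen.update({"pressure_leak", "play_typo"})
karen.update({"pressure_leak", "play_typo"})    # unchanged: stays quiet
karen.update({"motor_overtemp", "pressure_leak"})
print(spoken)
```

Suppressing repeats keeps the voice channel useful; an alert that fires every loop iteration would quickly be tuned out by the drivers.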

TeleOp Tab

The TeleOp Tab contains our manipulator’s driver controls, along with sensor and mode indicators.

Tune Tab

The tune tab has sensor readings, setpoints, and calibration options for the joysticks, swerve wheel alignment, and turret. There are also safety and camera overrides in case of sensor failures and an option to test individual actuators independently.

Setup/Autonomous Tab

The Setup/Autonomous tab has driver profiles that control the robot’s settings. It also contains a robot profile, which enables and disables hardware so the code can run on robots with different hardware capabilities. Drivers can prepare for autonomous by pulling up specific instructions for each autonomous play, and an autonomous test mode lets us test autonomous code while the robot is stationary on the cart. This autonomous test mode required us to simulate or bypass crucial sensor readings, such as the IMU and LimeLight, as well as the ability to visually gauge robot position. We addressed this by creating a digital map of the field that shows the robot position and enables drivers to recalibrate the robot’s pose.
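The sensor-bypass idea behind the autonomous test mode can be sketched as a thin wrapper that returns a simulated reading whenever the robot profile marks a sensor as bypassed, so the same autonomous code runs unmodified on the cart. All names here are invented:

```python
# Hedged sketch of profile-driven sensor bypass for cart testing: a wrapper
# substitutes a simulated reading when the sensor is bypassed, so the same
# autonomous code runs on the cart. Names are invented for illustration.

class SensorSource:
    def __init__(self, read_hw, simulate, bypassed=False):
        self.read_hw = read_hw      # real hardware read (e.g. IMU heading)
        self.simulate = simulate    # model-based substitute for the cart
        self.bypassed = bypassed    # set from the robot profile

    def read(self, sim_state):
        return self.simulate(sim_state) if self.bypassed else self.read_hw()

# On the cart: the IMU is bypassed and heading comes from the simulated pose.
imu = SensorSource(read_hw=lambda: 0.0,
                   simulate=lambda s: s["heading_deg"],
                   bypassed=True)
print(imu.read({"heading_deg": 87.5}))
```

Keeping the bypass decision in one wrapper means the autonomous code never needs to know whether it is running on the field or on the cart.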

Technical Resources:

2020 Controls Lecture Series:


Robot Networking: https://drive.google.com/file/d/11SwYzjsO9nnkduIVMoo3SCxNQE-JnXIG/view?usp=sharing

Robot Troubleshooting: https://drive.google.com/file/d/1a0Ur7nMAvsKbcw7A-g5Ii1qTuH2nKmF-/view?usp=sharing

Software Installation Instructions:


Hardware Interfaces Spreadsheet: https://drive.google.com/file/d/1d3QiPxATptbsmm-oDSJnSBLX5UVoqzkS/view?usp=sharing

Lessons Learned:

Controls Hardware Masses:



Sweet, thank you for sharing! LabVIEW usually doesn’t get much publicity around here, so it’s good to see some information presented in that context.