Team 1678 2019 Code Release

Team 1678 is proud to release the code for our 2019 robot, Buzz Lime-year!


  • Custom build/deploy system using Bazel
  • HTML dashboard for data viewing and auto selection
  • Automated testing with gtest-based unit tests for each subsystem
  • Message queues for thread-safe communication; every message sent is logged
  • Continuous Integration testing using Buildkite CI
  • Motion-magic control via TalonSRX on wrist and elevator
  • State-machine + bounding logic to prevent wrist, elevator, cargo intake, and hatch intake from conflicting
  • Automated climb sequence
  • Nonlinear state feedback controller for drivetrain path following
  • Model-based feedforward and drive profile parametrization
  • Quintic Hermite Spline path generation
  • Field relative positioning using encoders + gyro
  • Custom vision-align control algorithm in teleop and auto, with open-loop linear and closed-loop angular arc following via TalonSRX AuxPIDF
  • $1200 worth of limelights
  • 117 PRs merged during build season
  • 3 force pushes past midnight before a competition
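
For the curious, the quintic Hermite splines mentioned above interpolate position, velocity, and acceleration at both endpoints of a segment. Below is a generic one-dimensional evaluation as a sketch; it is not the actual muan/ implementation, just the standard basis functions:

```cpp
#include <cmath>

// Evaluate a 1-D quintic Hermite spline at parameter t in [0, 1].
// p0/p1: endpoint positions, v0/v1: endpoint first derivatives,
// a0/a1: endpoint second derivatives. A 2-D path uses one spline per axis.
double QuinticHermite(double p0, double v0, double a0,
                      double p1, double v1, double a1, double t) {
  const double t2 = t * t, t3 = t2 * t, t4 = t3 * t, t5 = t4 * t;
  const double h0 = 1 - 10 * t3 + 15 * t4 - 6 * t5;            // weights p0
  const double h1 = t - 6 * t3 + 8 * t4 - 3 * t5;              // weights v0
  const double h2 = 0.5 * t2 - 1.5 * t3 + 1.5 * t4 - 0.5 * t5; // weights a0
  const double h3 = 0.5 * t3 - t4 + 0.5 * t5;                  // weights a1
  const double h4 = -4 * t3 + 7 * t4 - 3 * t5;                 // weights v1
  const double h5 = 10 * t3 - 15 * t4 + 6 * t5;                // weights p1
  return h0 * p0 + h1 * v0 + h2 * a0 + h3 * a1 + h4 * v1 + h5 * p1;
}
```

Because the second derivative is matched at the joints, chained segments give the continuous-curvature paths a drivetrain can actually follow.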

General library code lives in muan/; year-specific code lives in c2019/.

Please feel free to ask questions about what we’ve done - we’re happy to answer!




I mean, we did the same, but we just somehow decided that 2 extra was a good idea…


As a team that used one…what benefit did using 3 give you? I could see 2…one for the front and one for the back…but 3?


Haha, we bought one and never used it, programming couldn’t figure it out. Summer project for sure.


It’s interesting to see you all move away from the state-space controllers that you used in previous years. Could you elaborate on the reasoning behind this?


For us, the reasons for the switch were twofold:

  1. Using CAN makes wiring incredibly simple and clean, as every speed controller is daisy-chained. Hardware like the CANifier and SRX breakout boards also lets us centralize our GPIO sensors.
  2. State space was not sustainable training-wise. Since neither Linear Algebra nor Calculus is in our school's required curriculum, teaching students a host of math and physics really didn't make sense. The robustness of control offered by state space is easily matched by the 1 kHz PIDF loop running on the SRX. In fact, using the Mag Encoders (4096 CPR) meant we got objectively tighter control compared to our old 512 CPR US Digital encoders.
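
As a sketch of point 2: the loop the SRX runs is plain PIDF at 1 kHz. The snippet below illustrates that control law; it is not CTRE's firmware, and the gains and clamp range are placeholders:

```cpp
#include <algorithm>

// Minimal PIDF update in the style of the 1 kHz loop on the Talon SRX.
// Error is in sensor units; output is clamped to [-1, 1] percent output.
struct PIDF {
  double kp, ki, kd, kf;
  double integral = 0.0;
  double last_error = 0.0;

  double Update(double setpoint, double measurement, double dt) {
    const double error = setpoint - measurement;
    integral += error * dt;
    const double derivative = (error - last_error) / dt;
    last_error = error;
    const double output =
        kf * setpoint + kp * error + ki * integral + kd * derivative;
    return std::clamp(output, -1.0, 1.0);
  }
};
```

Running this on the motor controller rather than the roboRIO is what buys the 1 kHz update rate and the tight control mentioned above.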

Actually, we initially had two limelights, one each for the front and back because of our wrist passthrough, mounted at an angle on both sides of the metal tube at the top of our robot. About halfway through build season, when we started testing elevator code and vision, we realized that the front limelight was directly blocked by the wrist whenever we went to the second scoring level, or by the bottom of the elevator carriage at the third level.

Enter “limelight-pricey”: mounted on top of our roboRIO at the bellypan, angled upwards. The code reads from different limelight feeds based on which goal we send the elevator; if we send a second-level score goal to the superstructure, it ignores the front limelight and reads from limelight-pricey until we give it a first-level goal.
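
The selection logic amounts to a small switch on the elevator goal. A hedged sketch, with illustrative names rather than the actual identifiers in c2019/:

```cpp
#include <string>

// Pick which Limelight feed to read based on the current scoring goal.
// Enum and function names here are illustrative, not the real code.
enum class ScoreLevel { kFirst, kSecond, kThird };

// "limelight-pricey" (on the bellypan, angled up) sees the target when the
// front camera is blocked by the wrist or carriage at the higher levels.
std::string LimelightForGoal(ScoreLevel level) {
  switch (level) {
    case ScoreLevel::kFirst:
      return "limelight-front";
    case ScoreLevel::kSecond:
    case ScoreLevel::kThird:
      return "limelight-pricey";
  }
  return "limelight-front";
}
```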


Ah okay. So similar to a problem we ended up having.
Our limelight was very slightly blocked at L2 by the bottom of our intake at champs.

Have you considered switching to Kotlin?


It looks like they use C++. Kotlin is more of a replacement for Java rather than C++. C++ has a number of advantages over Java/Kotlin.

We are actively considering a switch to Java/Kotlin and a more standard architecture.


Would you mind elaborating on how your vision alignment works?


Sure! First we calculate a very rough estimate of the distance to the target (we don’t use horizontal offset for this). We use the tx horizontal angle offset for our angular offset. We then calculate a linear voltage to apply to both sides of the drivetrain: it is scaled proportionally with distance, then scaled back based on current velocity and the magnitude of the angular offset. This voltage is passed to “PercentOutput” for both sides of the drivetrain. We then use “AuxPID” for heading, which adds to one side and subtracts from the other via PIDF on the gyro setpoint calculated from tx to maintain alignment with the target.
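
As a rough sketch of that scheme, under the caveat that the gains, scaling factors, and names below are illustrative rather than the actual values in c2019/:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative vision-align output: equal open-loop linear commands to both
// sides, with heading held separately by the Talon AuxPID loop.
struct DriveOutput {
  double left_percent;      // fed to PercentOutput
  double right_percent;     // fed to PercentOutput
  double heading_setpoint;  // gyro setpoint for the Talon AuxPID heading loop
};

DriveOutput VisionAlign(double distance_m, double tx_deg,
                        double current_velocity_mps,
                        double current_heading_deg) {
  // Linear command grows with distance to the target...
  double linear = std::clamp(0.3 * distance_m, 0.0, 0.8);
  // ...and is scaled back when already moving fast or pointed far off-target.
  linear *= std::clamp(1.0 - 0.2 * std::abs(current_velocity_mps), 0.2, 1.0);
  linear *= std::clamp(1.0 - std::abs(tx_deg) / 45.0, 0.2, 1.0);
  // The heading correction itself runs on the Talon: AuxPID adds to one side
  // and subtracts from the other to hold this gyro setpoint.
  return {linear, linear, current_heading_deg + tx_deg};
}
```

The key design point is the split: the RIO only picks a feedforward-style linear command, while the 1 kHz AuxPID loop on the Talon closes the angular loop.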


How has the queue-based architecture worked out for you? Has the increased complexity paid off? Would you ever consider switching to a more standard command-based or other framework?


What led you to use the Talon’s AuxPid as opposed to an on-RIO PID loop? We’ve avoided the Talon approach because it seemed a lot less approachable (plus we were up in the air about switching to NEOs), but I was wondering whether the performance benefits are high enough to offset that. On an unrelated note, how does your team process/visualize robot log data?


The Talon’s AuxPID runs at a kilohertz, and reading the Pigeon directly from the Talon reduces overhead on the CAN loop and in code processing. Overall, I’d advise using the internal PID loop whenever possible, since a) the roboRIO CAN stack is far from optimal and b) 1 kHz > 50 Hz.

All messages (inputs, outputs, goals, and statuses) are encoded as protobuf messages. These messages are passed around between threads and mechanisms via “queues”, which are globally accessible rotating buffers. Each time any message is written to a queue, the queue uses a reflecting CSV logger to log that message to a CSV file. Each message “type” (for example: drivetrain input) has its own CSV. Each time the code starts, it creates a new enumerated directory in which all logs are stored. We have a custom log viewer, which you can find in scripts/logs/.
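
A minimal sketch of the queue idea, assuming a fixed-capacity rotating buffer with a logging hook on every write; the real code templates over protobuf messages and uses a reflecting CSV logger, and these names are illustrative:

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <optional>

// Thread-safe rotating buffer that invokes a logging hook on every write.
template <typename T>
class MessageQueue {
 public:
  MessageQueue(std::size_t capacity, std::function<void(const T&)> logger)
      : capacity_(capacity), logger_(std::move(logger)) {}

  void Write(const T& message) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (buffer_.size() == capacity_) buffer_.pop_front();  // rotate out oldest
    buffer_.push_back(message);
    logger_(message);  // every message written is also logged
  }

  std::optional<T> ReadLast() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (buffer_.empty()) return std::nullopt;
    return buffer_.back();
  }

 private:
  std::size_t capacity_;
  std::function<void(const T&)> logger_;
  std::deque<T> buffer_;
  std::mutex mutex_;
};
```

Because logging lives inside `Write`, nothing that flows between subsystems can escape the logs.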


Currently, we have everything super integrated. Our web dashboard automatically registers new queues and displays them, and our logger automatically logs them.

However, the main issue with our current setup is that it is non-standard. To simplify interactions with CSAs, and to stay sustainable while the WPILib toolchains are constantly in flux, we are considering moving away from protobuf/Bazel.


Excellent stuff, will need to spend more time when I get home to look at this. Thank you again for posting!!

Questions on the above items. Any reason you chose to go with these solutions over the more-standard WPI supported alternatives (gradle & shuffleboard)?


We use our custom web dashboard for its integration with our QueueManager; it automatically displays data from all queues, and auto selection/camera stream selection is also integrated. Less importantly, we get enhanced customizability: if you can do it in JavaScript, it’s in the realm of possibility.

We have used Bazel these last few years because, when it works, it’s a better build system than Gradle. It’s much more friendly towards monorepo structures, and it allows us to build specific “packages” or sets of packages for quick tests. Bazel’s integration with gtest and protobuf (cc_test, cc_proto_library) is also extremely useful. In years prior we also used Bazel’s genrule functionality to generate state-space controller gain matrices from Python scripts.
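
As an illustration of why that per-package granularity is convenient, a typical subsystem BUILD file might look like the following sketch; the target, file, and repository names here are hypothetical, not taken from the release:

```python
# BUILD (Starlark) -- illustrative targets for one subsystem package.
proto_library(
    name = "drivetrain_proto",
    srcs = ["drivetrain.proto"],
)

cc_proto_library(
    name = "drivetrain_cc_proto",
    deps = [":drivetrain_proto"],
)

cc_library(
    name = "drivetrain",
    srcs = ["drivetrain.cpp"],
    hdrs = ["drivetrain.h"],
    deps = [":drivetrain_cc_proto"],
)

cc_test(
    name = "drivetrain_test",
    srcs = ["drivetrain_test.cpp"],
    deps = [
        ":drivetrain",
        "@com_google_googletest//:gtest_main",
    ],
)
```

With targets declared this way, `bazel test` on just that package builds and runs only that subsystem’s gtest suite.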

However, Bazel is (still) in alpha, and due to the lack of official support from WPI/FIRST, using it with the rapidly changing roboRIO toolchains and libraries is a growing pain, which is one reason we are considering a switch to GradleRIO.