AdvantageKit 2023 Beta: Log replay framework, now with WPILOG/NT4 integration

Last year, 6328 introduced our AdvantageKit logging framework as part of our OpenAlliance efforts. Whether you’ve never heard of AdvantageKit or are already using it yourself (thank you :heart:), we have some exciting updates to share. Feel free to skip to “What’s New For 2023?” if you’re already familiar with AdvantageKit and log replay.

What is AdvantageKit?

Most logging frameworks in FRC (like WPILib’s built-in data logging) are designed to record a limited number of fields that are explicitly logged by the robot code. AdvantageKit goes beyond this traditional approach by recording all of the data flowing into the robot code, not just its outputs. By isolating hardware interaction in subsystems, log data can be replayed in simulation. Forgot to log a field? Run a replay and inspect the code more closely. Want to retune an odometry pipeline? Run a replay and see how it would have affected your last match. Input capture and log replay completely redefine the types of problems logging can solve.
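To make the "isolating hardware interaction" idea concrete, here is a minimal sketch of the input-isolation pattern in plain Java. The names (FlywheelIO, FlywheelIOInputs, Flywheel) are illustrative, not AdvantageKit API; the point is that the subsystem only ever sees data that passes through the inputs object, so replay can fill that object from a log instead of from hardware.

```java
// Sketch of the IO-layer pattern: the subsystem never touches hardware
// directly, so logged inputs can be swapped in during replay.
interface FlywheelIO {
    class FlywheelIOInputs {
        public double velocityRadPerSec = 0.0;
        public double appliedVolts = 0.0;
    }

    // Real implementations read hardware here; in replay, the framework
    // fills the inputs object from the log instead.
    default void updateInputs(FlywheelIOInputs inputs) {}
}

class Flywheel {
    private final FlywheelIO io;
    private final FlywheelIO.FlywheelIOInputs inputs = new FlywheelIO.FlywheelIOInputs();

    Flywheel(FlywheelIO io) {
        this.io = io;
    }

    void periodic() {
        io.updateInputs(inputs); // all data entering the subsystem is captured here
    }

    double getVelocityRadPerSec() {
        return inputs.velocityRadPerSec;
    }
}
```

Because every value entering `periodic()` flows through the inputs object, recording that object each cycle is enough to reproduce the subsystem's behavior later.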

Does this feel a little abstract? Let’s look at an example. Below is some log data from 6328’s 2022 robot, showing the flywheel setpoint during a match. It’s set automatically based on the distance to the target, but it seems we forgot to log the measured distance :frowning_face:. So how can we check that the calculation is working?

Let’s just log the new data with this call…

Logger.getInstance().recordOutput(
    "TargetDistanceMeters",
    latestPose.getTranslation().getDistance(FieldConstants.hubCenter));

…run the code in replay…


…and enjoy our newly logged data:

We can confirm that the flywheel setpoint is increasing along with distance, so the calculation is working correctly. Using replay, we’re able to make that conclusion with complete certainty even though we’re using data that was never logged by the robot on the field.

Below are just a few more examples of how we’ve used log replay in the 2022 season. Solving any one of these issues would likely have taken multiple hours of testing on the real robot, but log replay took care of them with almost no time required for testing.

  • While working between shop meetings, we were able to debug a subtle issue with the code’s vision pipeline by replaying a log saved from a previous testing session and recording extra outputs. More details here.

  • During an event, we retuned our odometry pipeline to more aggressively adjust based on vision data (tuning that requires a normal match environment with other robots). We tested the change using replay, deployed it in the next match, and saw much more reliable odometry data for the remainder of the event. More details here.

  • After an event, we determined that our hood was not zeroing correctly during match conditions. This involved replaying the matches with a manually adjusted hood position and comparing the quality of the vision data processed with different angles (the robot’s Limelight was mounted to the hood). More details here.

For more detailed examples of AdvantageKit in action, check out the “What is AdvantageKit?” page in the documentation.

What’s New For 2023?

We’ve introduced a variety of new features for AdvantageKit’s 2023 update. Here are some of the most important changes:

  • WPILOG and NT4 have replaced our old RLOG format for logging and live data streaming, allowing AdvantageKit to interface with WPILib tools like Glass and Shuffleboard. AdvantageScope’s 2023 update adds support for WPILOG and NT4 (link at the end of this post).
  • Console output (STDOUT and STDERR) is now captured automatically by AdvantageKit, and can be displayed in AdvantageScope’s new console view.
  • The @AutoLog annotation automatically defines the toLog and fromLog methods for subsystem inputs, reducing the amount of boilerplate code.
  • New methods allow for convenient output logging of WPILib classes like Pose2d, Pose3d, and SwerveModuleState.
  • More fields and customization options have been added to the built-in system stats and power distribution logging.
  • Example projects are now provided with each AdvantageKit release: a skeleton project with just the basics, and a command-based example project with multiple subsystems.
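As a rough illustration of the @AutoLog item above, this is the shape of an annotated inputs class. To keep the snippet self-contained and compilable, the annotation is stubbed locally; in a real project it comes from org.littletonrobotics.junction.AutoLog, and building the project generates the matching *AutoLogged class:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-in for org.littletonrobotics.junction.AutoLog so this sketch
// compiles without the AdvantageKit annotation processor on the classpath.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
@interface AutoLog {}

interface ElevatorIO {
    // With the real annotation, building the project generates an
    // ElevatorIOInputsAutoLogged subclass with toLog and fromLog
    // implemented, so none of that boilerplate is written by hand.
    @AutoLog
    class ElevatorIOInputs {
        public double positionRad = 0.0;
        public double velocityRadPerSec = 0.0;
        public boolean limitSwitch = false;
    }

    default void updateInputs(ElevatorIOInputs inputs) {}
}
```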

The first 2023 beta of AdvantageKit has now been released. For information on getting started, check out the AdvantageKit documentation. We would welcome community feedback, questions, and suggestions as we continue our development effort.

We also just released the 2023 beta of AdvantageScope, our data visualization tool! More details here.




AdvantageKit 2023 Beta 2

We just released the second beta of AdvantageKit for 2023, which supports WPILib 2023.1.1-beta-6 (among other minor changes). The full changelog is available here. To update an existing project:

  • Click “WPILib: Manage Vendor Libraries” in VSCode, then “Check for updates (online)”.
  • Update the AutoLog annotation processor version to “2.0.0-beta-2” in “build.gradle”.
  • Update the WPILib version to “2023.1.1-beta-6” at the top of “build.gradle” (all robot code must be built with the same WPILib version as AdvantageKit).
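Concretely, the two “build.gradle” edits look something like this (a sketch; the plugin declaration shown is assumed to match the example projects):

```groovy
// build.gradle (excerpt)
plugins {
    // All robot code must use the same WPILib version as AdvantageKit
    id "edu.wpi.first.GradleRIO" version "2023.1.1-beta-6"
}

dependencies {
    // AutoLog annotation processor, updated for beta 2
    annotationProcessor "org.littletonrobotics.akit.junction:junction-autolog:2.0.0-beta-2"
}
```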

Hi Jonah,

I’m very impressed by the functionality of AdvantageKit and appreciate all of the work you guys have put into it. I have two questions about it though.

  1. This is my team’s first year logging, so we aren’t aware of the limit on the amount of data that can be logged before running into issues. Looking through 6328’s 2022 code, all data coming from motors is logged in addition to other things like OI. If we didn’t log some of the extraneous information we don’t care much about, and started logging some of the “outputs” that you run through the replay framework, could you give us an estimate of how many fields we could log without running into problems? (We are a little hesitant to switch to the IO class structure in our first year.)

  2. After trying to model our swerve code around the AdvantageKit framework, we noticed that you have a GyroIOInputsAutoLogged class that seemingly appears out of thin air in your code. When we initially set up the project, we made it as a copy of the command-based template you provide, and after checking the AutoLog plugin in build.gradle and looking for anything that stood out, we were unable to find where this class exists. For example, these two lines produce errors saying no such class exists.

import org.team5557.subsystems.gyro.GyroIOInputsAutoLogged;
private final GyroIOInputsAutoLogged gyroInputs = new GyroIOInputsAutoLogged();

Also, since I don’t know where this class even is, I’m not sure what it does either. Any explanation would be much appreciated. Thank You!!

The GyroIOInputsAutoLogged class is generated by the @AutoLog annotation on the GyroIO.GyroIOInputs class.

1 Like

That’s what I assumed, but the following code doesn’t seem to do that.

package org.team5557.subsystems.gyro;

import org.littletonrobotics.junction.AutoLog;

public interface GyroIO {

  @AutoLog
  public static class GyroIOInputs {
    public double azimuthDeg = 0.0;
    public double azimuthVelocityDegPerSec = 0.0;

    public double pitchDeg = 0.0;
    public double pitchVelocityDegPerSec = 0.0;
  }

  /**
   * Updates the set of loggable inputs.
   *
   * @param inputs the inputs to update
   */
  public default void updateInputs(GyroIOInputs inputs) {}
}
Is there any import or plugin I am missing? Do you have to run something from terminal to actually access the AutoLog plugin?

This is what the dependencies section of my build.gradle looks like:

dependencies {
    simulationDebug wpi.sim.enableDebug()
    simulationRelease wpi.sim.enableRelease()

    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.4.2'
    testImplementation 'org.junit.jupiter:junit-jupiter-params:5.4.2'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.4.2'

    annotationProcessor "org.littletonrobotics.akit.junction:junction-autolog:2.0.0"

    implementation "gov.nist.math:jama:1.0.3"
}
The AutoLog code is generated when you build the code. It should output the generated files to a build folder (possibly “.build”, which would be hidden).

Yup… building the code will do it! Thanks everyone for your help.

I’m glad you got the issue with auto logging resolved. For your other question, I wouldn’t be concerned about logging too many fields. For reference, our 2022 code logged >200 fields and had no issues. The overhead from logging each cycle was ~1ms, and we haven’t observed that the number of fields logged has a significant effect on that (within reason). AdvantageKit automatically records the cycle times for user code and logging code under “/RealOutputs/LoggedRobot/”, so you can always double check that you’re getting the performance you expect.

That’s awesome! Such a cool tool.

AdvantageKit v2.1.0: Mechanisms!

AdvantageKit v2.1.0 is now available, with support for logging the state of a Mechanism2d object as an output field:

Mechanism2d mechanism = new Mechanism2d(3, 3);
Logger.getInstance().recordOutput("MyMechanism", mechanism);

The mechanism data can be visualized using AdvantageScope v2.1.0 or WPILib’s built-in tools (Glass, the sim GUI, etc.).


Hey Jonah,
Our team is experimenting with multithreading to mitigate the effects of packet loss, and I was wondering how AdvantageKit handles logging the same key multiple times with different values between Driver Station packets. The code below suggests that WPILOGWriter will append all the updated data to the same table, with the same id and timestamp. This means that something probably very spooky happens when you try to simulate from a log, making multithreading with AdvantageKit a no-go.

public class Logger {
  /**
   * Periodic method to be called before robotInit and each loop cycle. Updates
   * timestamp and globally logged data.
   */
  void periodicBeforeUser() {
    if (running) {
      // Get next entry
      if (replaySource == null) {
        // ...

public class WPILOGWriter implements LogDataReceiver {
  public void putTable(LogTable table) {
    // Append data
    if (appendData) {
      int id = entryIDs.get(field.getKey());
      switch (field.getValue().type) {
        case Raw:
          log.appendRaw(id, field.getValue().getRaw(), table.getTimestamp());
          break;
        case Boolean:
          log.appendBoolean(id, field.getValue().getBoolean(), table.getTimestamp());
          break;
        case Integer:
          log.appendInteger(id, field.getValue().getInteger(), table.getTimestamp());
          break;
        // ...

This is just my initial assumption based on a quick skim through the logging code, so I thought I’d ask what the actual answer is before making a decision on how to use multithreading.

1 Like

AdvantageKit requires that the main loop be single-threaded, and each field can only have a single value per cycle. This is required for replay to work correctly because the state of the code at each cycle must be predictable. If the code interacted with another thread, those interactions would not be captured by the log, so the behavior in replay would not match the real robot (for example, if a background task took a different length of time to execute on the robot than in the simulator).

Multithreading in robot code tends to only be useful in very niche cases, so we’ve rarely found this to be a hindrance. If threads are truly required, they just need to exist within an IO implementation. This way, all of the data flowing from the thread to the main loop is logged and can be replayed (without the thread needing to run during replay).

Could you elaborate on the problem you’re trying to solve? If you’re dealing with packet loss, I suspect that multithreading is not the correct solution.


Thanks for the quick response! Essentially, we decided to copy 254’s idea of running the autonomous and subsystem code in a separate thread, which allows the autonomous routine to run cleanly without packet loss affecting the accuracy of the robot. Running a separate thread also allows us to run the auto faster than the default rate, increasing accuracy beyond what could be achieved with the normal autonomousPeriodic() frequency.

It sounds like you’re dealing with two different concerns — packet loss and auto accuracy. Packet loss never affects the execution of your code. WPILib used to include IterativeRobot that only ran a loop cycle when a DS packet was received, but the current classes (TimedRobot, LoggedRobot, etc) run with a constant period regardless of any DS packets. If too many DS packets are missed the RIO’s watchdog will disable motor outputs, but that safety measure is applied at a lower level than the robot code and will affect all outputs regardless of how they were set.

As for accuracy, are you trying to resolve a specific issue? The dynamics of a drivetrain are generally slow enough that a 20ms loop cycle is more than fast enough to develop very accurate autos. Typically we’ve found that accuracy issues are a result of suboptimal tuning rather than a limit of the loop period. Of course, you always have the option of running the main loop at a faster rate but that shouldn’t be necessary for most systems.


Interesting, I was told that TimedRobot (and by extension LoggedRobot) ran periodics only when a DS packet was received. Thanks for the clarification! As for the autonomous accuracy, it was a nice side effect of running a separate thread, never the main goal. However, in the hypothetical situation that a subsystem has limited time to achieve a goal and requires more cycles than the base 20ms period can provide in that timeframe, would addPeriodic allow for faster cycles while keeping logger synchrony? (The idea being that WPILib would know about the separate “thread” and run the logger every cycle, no matter which “thread” it’s in.) Regardless, I appreciate your help and the fact that I don’t have to deal with thread synchronization!

addPeriodic is also unsupported by AdvantageKit — LoggedRobot doesn’t have that method. In order for the extra periodic cycles to be useful, they would have to be able to read and write log data independently from the surrounding cycles. Essentially, every callback would carry the logging overhead of a full cycle. Overall, we felt that the use case was niche enough that it wasn’t worth the complexity and performance overhead. The closest equivalent is using a Notifier in an IO implementation, which we have used successfully in the past.
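The Notifier-in-an-IO-implementation approach can be sketched roughly like this. To keep the sketch self-contained it uses a plain Java ScheduledExecutorService where robot code would use WPILib’s Notifier, and all class names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: a high-rate sampler living entirely inside an IO implementation.
// The background task only writes to a buffer; the main loop drains it once
// per cycle through updateInputs(), so every value crossing the thread
// boundary is part of the logged inputs and can be replayed.
class OdometrySamplerIO {
    static class OdometryInputs {
        public double[] samples = new double[0];
    }

    private final List<Double> buffer = new ArrayList<>();
    private final Object lock = new Object();
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor(runnable -> {
            Thread thread = new Thread(runnable, "odometry-sampler");
            thread.setDaemon(true); // don't keep the JVM alive
            return thread;
        });

    OdometrySamplerIO() {
        // In robot code this would be a WPILib Notifier running at e.g. 250Hz
        executor.scheduleAtFixedRate(this::sample, 0, 4, TimeUnit.MILLISECONDS);
    }

    // Runs on the background thread (package-private so a test can call it)
    void sample() {
        synchronized (lock) {
            buffer.add(readSensor());
        }
    }

    protected double readSensor() {
        return 0.0; // placeholder for a real hardware read
    }

    // Called from the single-threaded main loop. The drained samples are the
    // only data the main loop ever sees from the background thread.
    void updateInputs(OdometryInputs inputs) {
        synchronized (lock) {
            inputs.samples =
                buffer.stream().mapToDouble(Double::doubleValue).toArray();
            buffer.clear();
        }
    }

    void stop() {
        executor.shutdown();
    }
}
```

Because the background thread never touches the main loop’s state directly, replay can reproduce the subsystem’s behavior from the logged samples without the thread running at all.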

In general, our strategy is always to start with the simplest solution and build complexity only as required. In FRC, there are very few use cases that truly require multithreading or extra periodic callbacks. Also keep in mind that having the occasional loop overrun is rarely a cause for concern — you can use the cycle durations from AdvantageKit to debug issues if necessary. I’d recommend that you only look at adding complexity like multithreading if you start to see a measurable performance impact that can’t be resolved by other means, which in our experience is very rare.

Is there a classpath setting somewhere in the gradle file that needs to be modified so the generated files can resolve the path of the original file when compiled for simulation? I’m finding that in a simulation build, the generated files can’t resolve the original modules they were generated from.

Example (I’ll borrow the one given in the AdvantageKit documentation):

Given the following class defined in src/main/java/frc/lib/example/:

package frc.lib.example;

import org.littletonrobotics.junction.AutoLog;

@AutoLog
public class MyInputs {
    public double myField = 0;
}

The following code is generated under REPO_PATH/bin/generated-sources/annotations/frc/lib/example/

package frc.lib.example;

import org.littletonrobotics.junction.LogTable;
import org.littletonrobotics.junction.inputs.LoggableInputs;

public class MyInputsAutoLogged extends MyInputs implements LoggableInputs, Cloneable {
    public void toLog(LogTable table) {
        table.put("MyField", myField);
    }

    public void fromLog(LogTable table) {
        myField = table.getDouble("MyField", myField);
    }

    public MyInputsAutoLogged clone() {
        MyInputsAutoLogged copy = new MyInputsAutoLogged();
        copy.myField = this.myField;
        return copy;
    }
}

When I build the code normally (“Build Robot Code” command in vscode), the generated java module compiles successfully. But, when I attempt to run it in the simulator (using the “Simulate Robot Code” command in vscode), the module fails to compile and shows errors wherever myField is referenced because it cannot be resolved as a variable.

My humble apologies in case I just missed some crucial step called out in the documentation.

1 Like

Please disregard the previous post. The problem seems to have been caused by an old generated/cached file that never got cleaned up. Deleting the offending file and reloading the project fixed the compile problem.


Hey Jonah,

We started using AdvantageKit and AdvantageScope this year and have been big fans. Last night, we were tuning our arm when we had what could be described as a “blip” and the arm jumped around. Looking through the logs, what stuck out to us was a 1-second gap between 2 lines of our logs, right when the arm freaked out, and a “ConsolePeriodicMS” of 959, which corresponds pretty closely to the delay.

Timestamp    ConsolePeriodicMS
30.482021    959.354
31.462051      0.043

What’s your take here? We don’t appear to be logging too much to the console; at least, what AK is reporting is literally 2 lines of logs in that time frame. Could this have been just an ill-timed garbage collection? Our CAN bus utilization also hit 100% for that single frame, but that doesn’t seem to correspond with the jump in ConsolePeriodicMS, as I can find other 100% frames with no jumps.

Could you try updating to AdvantageKit v2.1.2 and WPILib 2023.3.2? It looks like you’re on v2.1.1 and 2023.2.1 right now. We fixed an issue in 2.1.2 with the console logging that resulted in unnecessary memory allocations and string operations.