Introducing Monologue: Annotation-based Telemetry and Data Logging for Java Teams

monologue. n. A one-sided conversation.

Monologue is a Java annotation-based logging library for FRC. With Monologue, extensive telemetry and on-robot logging can be added to your robot code with a minimal code footprint and few design restrictions.

“Isn’t this what Oblog does?”

Monologue is intended as a successor to the popular logging library Oblog. Though Oblog still works for many teams, its original purpose, a better way to define Shuffleboard layouts from code, has become less compelling as Shuffleboard has grown unstable and largely unmaintained. Additionally, Oblog's dependence on the Shuffleboard API means it is not easily adaptable to logging via DataLogManager/WPILog, or to the new-for-2024 struct and protobuf formats for structured data types like the WPILib geometry classes.

Enter Monologue, which combines the ease of use of annotations with the new features below:

  • Automatic DataLog logging and NT logging in one annotation.
  • Individually updateable NT and DataLog providers, meaning NT logging can be disabled on-field to save bandwidth/loop time while still logging everything to the WPILog.
  • An additional imperative API to log temporary/non-class-scope values that can't be annotated.
  • Struct-based support for logging WPILib geometry types (other similar data types to come later).
  • Use of the NT4 APIs directly, instead of the Shuffleboard APIs.
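
To give a flavor of the workflow, here's a minimal sketch (the API is in alpha, so names like Logged, @Log, setupMonologue, and updateAll may shift between releases; the wiki is authoritative):

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.TimedRobot;
import monologue.Annotations.Log;
import monologue.Logged;
import monologue.Monologue;

public class Robot extends TimedRobot implements Logged {
  // Annotated fields are discovered at setup and logged every cycle.
  @Log private double flywheelRpm = 0.0;
  @Log private Pose2d estimatedPose = new Pose2d();

  @Override
  public void robotInit() {
    // Root the log tree at "Robot"; fileOnly = false also publishes to NT.
    Monologue.setupMonologue(this, "Robot", false, false);
  }

  @Override
  public void robotPeriodic() {
    Monologue.updateAll(); // push all annotated values this loop
  }
}

The point is that annotating a field once replaces hand-written publish/log calls for both backends.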

Check out the GitHub wiki for more details and installation instructions!

Credit

Monologue is inspired by @Oblarg's Oblog and modified from @oh-yes-10-fps's AutoLog framework.

23 Likes

v1.0.0-alpha2 has been released. Alpha 1 had a broken build configuration.

1 Like

I’m not going to lie to you - I read this as “@oblarg” and laughed quite a bit…

Do you have any details on this for those of us who haven't been following as closely?

2 Likes

Struct and protobuf are protocols for serializing structured data to binary and then parsing it back out on the other side. I'm not super knowledgeable on the particular benefits of one over the other; I just know that it's a better (more computer-readable?) way to send some datatypes than putting them into an array of numbers and agreeing developer-side that that particular array of numbers should be parsed as [x, y, heading], for example.

4 Likes

@Amicus1 summarized it well. I need to write a docs page on it, but essentially structured data serialization closes a gap in telemetry/datalogging for complex data types. Prior to adding this feature, you essentially had the following options for sending a complex piece of data such as a Pose2d (or Pose2d[]) over NetworkTables (or datalogging it):

  • Split it into separate topics (e.g. a topic for X, a topic for Y, and a topic for Rotation) and send updates as separate values/arrays
  • Flatten it into a double[] with an agreed-upon but essentially undocumented order (e.g. each Pose2d would be 3 doubles in the array, in X, Y, Rotation order)
  • Serialize it to a text format such as JSON
  • Manually serialize it to raw bytes

Each of these has significant disadvantages.

  • Multiple topics: NT makes no guarantees about when updates to two separate topics arrive relative to each other, so this approach risks data slicing: reading X from one pose and Y from a different pose, or, with arrays, having the two arrays be different sizes. With timestamps etc. you can disambiguate, but it's a lot of manual work.
  • Flattening to double[] (sketched after this list): works decently for things like Pose2d where all the pieces are doubles, but is inefficient for mixed numeric types and can't handle non-numeric types at all. The biggest downside is that the ordering must be agreed upon in advance by both sender and receiver, and there's no way to "discover" what that ordering is.
  • Serializing to JSON: text is very inefficient from both a time and a space perspective.
  • Manually serializing: works, but is a lot of manual work, and without standard support for communicating the encoding schema, it has the same discovery problem as flattening to double[]: there's no good mechanism for pulling the data apart dynamically.
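
For concreteness, the double[] flattening idiom looks roughly like this (a sketch; the topic name and layout are arbitrary, which is exactly the discoverability problem):

// Sender and receiver must both "just know" this layout: [x, y, rotation] per pose.
DoubleArrayPublisher posesPub =
    NetworkTableInstance.getDefault().getDoubleArrayTopic("poses").publish();

void publishPoses(Pose2d[] poses) {
  double[] flat = new double[poses.length * 3];
  for (int i = 0; i < poses.length; i++) {
    flat[3 * i] = poses[i].getX();
    flat[3 * i + 1] = poses[i].getY();
    flat[3 * i + 2] = poses[i].getRotation().getRadians();
  }
  posesPub.set(flat);
}

Nothing in the data stream records that stride-3 [x, y, rotation] layout; a reader has to learn it from the sender's code.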

What struct and protobuf offer is a standardized way to serialize to raw bytes. The schema (the description of how the data is encoded) is also published/logged so tools can dynamically decode the encoded data. The data is sent as a single topic, so there’s no data slicing concern, and it’s binary so it’s fast and small.

Struct encodes fixed-size structures (similar to a C struct). Structures can be nested (e.g. a Pose2d struct is composed of a Translation2d struct and a Rotation2d struct). Internal arrays (within the struct) must be fixed size; dynamically sized arrays are supported at the top level (e.g. you can send a Pose2d[]). Structs are fast: we've benchmarked them at about half the speed of flattening to double[]. The serialization implementation (at present) is written by hand, but since everything is fixed size this isn't too bad (just some ByteBuffer calls in Java).
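
For a sense of what that hand-written code looks like, here's a sketch of a struct implementation for a hypothetical ShooterState record, written against the 2024 Struct interface (method names may differ in later releases):

import java.nio.ByteBuffer;
import edu.wpi.first.util.struct.Struct;

// Hypothetical fixed-size data type with a hand-written struct definition.
public record ShooterState(double velocityRadPerSec, double hoodAngleRad) {
  public static final Struct<ShooterState> struct = new Struct<>() {
    @Override
    public Class<ShooterState> getTypeClass() {
      return ShooterState.class;
    }

    @Override
    public String getTypeString() {
      return "struct:ShooterState";
    }

    @Override
    public int getSize() {
      return kSizeDouble * 2;  // two fixed-size doubles
    }

    @Override
    public String getSchema() {
      // Published alongside the data so tools can decode it dynamically.
      return "double velocityRadPerSec;double hoodAngleRad";
    }

    @Override
    public ShooterState unpack(ByteBuffer bb) {
      return new ShooterState(bb.getDouble(), bb.getDouble());
    }

    @Override
    public void pack(ByteBuffer bb, ShooterState value) {
      bb.putDouble(value.velocityRadPerSec());
      bb.putDouble(value.hoodAngleRad());
    }
  };
}

Since every field is fixed size, pack/unpack are just sequential ByteBuffer reads and writes.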

Protobuf is much more powerful, but at a performance penalty: it's about 1/6 the speed of flattening to double[]. This is a Google-developed protocol that is quite common in industry. It offers several advantages over struct serialization, in that it allows variable-sized internal arrays and other dynamically sized elements (e.g. strings). The main disadvantage (other than speed) is that it doesn't support dynamically sized arrays at the top level: if you want an array of Pose2d, you have to create a protobuf that contains an array of Pose2d, and then send that. Serialization code is auto-generated from a .proto description file, but it's still necessary to write some glue code to connect it to the actual object.
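
To illustrate the top-level limitation: if you want to send an array, the standard workaround is a wrapper message with a repeated field (hypothetical message names here, not the actual WPILib .proto definitions):

syntax = "proto3";

message MyPose2d {
  double x = 1;
  double y = 2;
  double rotation = 3;
}

// protobuf can't publish MyPose2d[] directly, so wrap the array in a message.
message MyPose2dArray {
  repeated MyPose2d poses = 1;
}

The generated code handles MyPose2dArray (de)serialization; the glue code mentioned above maps it to and from the actual objects.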

From a user perspective, the API is very straightforward. The idiom is for classes that natively support these serialization methods to include the glue code as static objects in the class implementation (.struct for structs, and .proto for protobufs), but it’s possible to write the glue code outside the object (e.g. to add support for arbitrary Java objects if desired).

For a struct array publisher to NetworkTables:

NetworkTableInstance inst = NetworkTableInstance.getDefault();
StructArrayTopic<Pose2d> topic = inst.getStructArrayTopic("poses", Pose2d.struct);
StructArrayPublisher<Pose2d> pub = topic.publish();
Pose2d[] arr = new Pose2d[] {
  new Pose2d(3, 4, new Rotation2d(0.1)),
  new Pose2d(5, 6, new Rotation2d(0.2))
};
pub.set(arr);

And similarly for a single Pose2d to protobuf (arrays aren't supported at the top level):

ProtobufTopic<Pose2d> topic = inst.getProtobufTopic("pose", Pose2d.proto);
ProtobufPublisher<Pose2d> pub = topic.publish();
pub.set(new Pose2d(3, 4, new Rotation2d(0.1)));
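
On the receiving side (robot code rather than a dashboard), the same schema machinery hands you typed objects back; a minimal sketch for the struct array topic above:

StructArraySubscriber<Pose2d> sub =
    inst.getStructArrayTopic("poses", Pose2d.struct).subscribe(new Pose2d[] {});
Pose2d[] latest = sub.get();  // decoded into Pose2d objects via the schema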

The cool part is that, because the schema is communicated, the dashboard side can pull the data apart completely dynamically:

[screenshot: a dashboard dynamically decoding the published struct data]

8 Likes

Thank you both. I probably should have specified "more on WPILib's use of protobufs", but I think both those posts are great for folks not familiar with them. (A few years ago I found myself referencing the wire representation source for protobuf2 in a discussion and realized I needed to not do that.)

It also supports the concept of oneofs, which are similar to union structs but IMHO a lot clearer. The oneof functionality does share the restriction that oneofs aren't valid at the top level of a message.
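
For readers who haven't used them, a oneof looks like this (hypothetical message, standard proto3 syntax); setting one member automatically clears the others:

message SensorReading {
  oneof value {
    double distance_meters = 1;
    int32 encoder_ticks = 2;
    string fault_message = 3;
  }
}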

The biggest thing I like about protobufs is that you can autogenerate interface libraries across languages with them, as well as add custom extensions should you want that functionality.

One interesting performance note: the first 15 field numbers (1-15) take up less space in the wire format, so if you're trying to optimize for size, the most commonly used fields should be in that range. (I'm assuming most WPILib protos are fairly compact... some of the ones I've seen at work are not.)

Cool to hear WPILib is starting to use more industry-standard pieces. I used protobufs as my message-packing tool of choice in a personal project and it was really nice for communicating across languages.

1 Like

Most of this has been covered, but one other benefit I want to highlight is that the receiver can now identify the type of data being sent (which usually wasn't possible with a double[]). A double[] could represent a 2D pose, a 3D pose, an array of poses, etc., so it's easy to mix up data types.

For example, AdvantageScope 2024 allows you to use the new structured types on the odometry and 3D field views. When you drag out an object like a Pose3d or Trajectory or Translation2d[], only valid object types are offered — e.g. a Translation2d[] can be rendered as a trajectory or set of vision targets, but not a robot pose. The docs for each page list the supported data types, like for the 3D field. This should be a nice quality of life improvement compared to double arrays.

1 Like

Agreed. The main reason we are including struct is that the performance penalty of protobuf is just a bit too high for our current robot controller. Assuming we get a significant performance bump with the 2027 control system, I could see dropping struct at that point. We are using an alternate protobuf implementation for Java (QuickBuffers) for better performance / less GC pressure, but it's still much heavier than a simple ByteBuffer read/write.

Some quick Java performance comparisons on a Rio 1 for NT puts. Numbers are from multiple runs, averaged across 1000 calls each. These are all for Translation2d values and include the construction (new) of the Translation2d itself; a rough reproduction sketch follows the list.

  • double array: 25.8 us, 36.4 us, 26.0 us, 27.9 us
  • protobuf: 150.5 us, 149.0 us, 189.3 us, 206.1 us
  • struct: 59.5 us, 43.1 us, 86.2 us, 65.3 us
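
For a rough sense of the methodology, a timing loop in that spirit (a sketch, not the actual harness used for the numbers above) might look like:

// Time 1000 NT struct puts of freshly constructed Translation2d values,
// matching the numbers above, which include the allocation itself.
StructPublisher<Translation2d> pub =
    NetworkTableInstance.getDefault()
        .getStructTopic("bench", Translation2d.struct)
        .publish();

long start = System.nanoTime();
for (int i = 0; i < 1000; i++) {
  pub.set(new Translation2d(i, -i));
}
double avgUs = (System.nanoTime() - start) / 1000.0 / 1000.0;  // ns total -> us per call
System.out.println("average per put: " + avgUs + " us");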
1 Like

I’m actually shocked to hear the default Java protobuf implementation is not great, given the percentage of Google’s internal code that’s in Java, both the extensive use of protobufs and the vibe that they can be treated as POJOs. I know it was one of the habits I had to break initially, because I treated them as something heavy.

Google runs Java in a much different environment. We have real time constraints and very tight memory constraints on an embedded 667 MHz processor. The main difference with the QuickBuffers implementation is it is zero allocation during normal operation, so much less GC and memory allocation traffic. The QuickBuffers implementation is developed by a robotics company as well (Hebi Robotics) specifically for real time environments.

3 Likes

I doubt they ran it on anything quite as anemic as the Rio. I know I saw protos for a lot of the robotics side of the house, though, so I’d be shocked if they weren’t using them on the real-time side. (I was on the web side until I left.)

Do you have the MessagePack-equivalent numbers handy? Just curious.

I do not. In Java, MessagePack is typically done with something like Jackson, although there may be libraries more in the style of the C++ library where you do piecewise emit. I would expect piecewise emit to be somewhere between struct and protobuf in terms of performance; using something like Jackson would probably be substantially slower.

Note that in either case, one of the big downsides of MessagePack (and JSON) is that they don’t have schemas, so to communicate the "x", "y", and "rotation" names for the elements of the object, those literal strings must be output along with every data value. That’s not great for something like an array of poses, where there are many data elements with identical structure.
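
For example, a two-pose array as JSON repeats every field name for every element:

[
  {"x": 3.0, "y": 4.0, "rotation": 0.1},
  {"x": 5.0, "y": 6.0, "rotation": 0.2}
]

A struct encoding publishes those names once, in the schema, and each pose on the wire is then just 24 bytes of doubles.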

1 Like

How much difference does it make? I think it could trip some teams up, since math and controllers can be based on loop time.
I ran into a team at champs that was running at 20 Hz and didn’t look into speeding up their code because it would change the behavior of their math.

Between you, the rest of CD, and me: I actually started pronouncing it “oblarg” because I had never read just “oblog” until using the software for a whole season.

3 Likes

I straight up thought he wrote it until this thread. There’s a reason all the resources I publish are non-technical.

I did write Oblog, though it’s in life-support mode at the moment because the source is old and poorly-suited for refactoring.

I will likely be phasing out support in favor of redirecting people towards Monologue, which should perform substantially better, particularly for teams that are not using Shuffleboard.

Huge credit to @Amicus1 for writing a modern replacement; I think the use pattern Oblog/Monologue enable is extremely useful and I’m pleased it’ll continue to be available for teams.

9 Likes

Was that 20 Hz nominal, or 20 Hz because their code overran every loop? TimedRobot will insert waits in the loop if it finishes faster than the scheduled period.

What is the difference between Monologue and AdvantageScope?

I’m hoping to get a lot of telemetry going this season to monitor voltages and currents, looking for poor electrical connections. And I hope this quickly extends to many other features for understanding how the robot is functioning.

Monologue is a library that runs on the robot to collect data, and AdvantageScope is an application that runs on a laptop to view that data. Both tools support sending data via log files or over the network using NetworkTables.

You may be mixing up AdvantageScope and AdvantageKit, which is a more complex logging framework with different goals. If you’re just trying to log data like voltages and currents for debugging, AdvantageKit probably isn’t the right option for you. AdvantageScope is designed to work with any source of log data, including Monologue.

3 Likes