Is an occasional loop overrun normal during teleopPeriodic?

When our robot is in teleop mode, it will log a loop overrun message (typically 23-30ms instead of the 20ms limit) roughly once every minute or so.

We’re not doing any kind of significant calculations in teleopPeriodic. In fact, the only thing in the method is a call to run the command scheduler.

I’m not aware of any loops or blocking calls anywhere in the code.


  1. Is it normal to get occasional overruns like this? (Maybe the JVM GC is firing?)
  2. Is there a facility in the command scheduler that could help narrow down the overrun to a particular command task? (Assuming it’s the same task every time and not random like a GC)
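For question 2, what I'm imagining is something like this, sketched in plain Java (all names here are hypothetical, not a WPILib API), that I could wrap around chunks of teleopPeriodic to see which part ate the time:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal epoch timer: record named checkpoints during one loop
// iteration, then print a breakdown only when the loop went over budget.
public class EpochTimer {
    private final Map<String, Long> epochs = new LinkedHashMap<>();
    private long start;

    public void reset() {
        epochs.clear();
        start = System.nanoTime();
    }

    // Record the time elapsed since the previous checkpoint under a name.
    public void addEpoch(String name) {
        long now = System.nanoTime();
        epochs.put(name, now - start);
        start = now;
    }

    public double totalMs() {
        return epochs.values().stream().mapToLong(Long::longValue).sum() / 1e6;
    }

    // Print the per-epoch breakdown only if the total exceeded the budget.
    // Returns true if it printed anything.
    public boolean printIfOver(double budgetMs) {
        if (totalMs() <= budgetMs) {
            return false;
        }
        epochs.forEach((name, ns) ->
                System.out.printf("  %s: %.3f ms%n", name, ns / 1e6));
        return true;
    }
}
```

Usage would be: `reset()` at the top of teleopPeriodic, `addEpoch("scheduler")` after the scheduler call (and after any other chunk), then `printIfOver(20.0)` at the bottom so it only spams the console on a bad iteration.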

We had some loop overruns in our code, and it definitely is not normal. We were able to clean some things up to the point where our code now runs free of any errors. For us, the issue turned out to be making multiple calls to the Talons for various data and reporting it to NetworkTables. Print statements can also slow down your loop times. I'm no programmer (yet), but this is what I know so far. Does the DS event viewer log give you a stack trace on the error?

Could you post your code? It’s usually a good policy with programming questions.


There was a bug in an older Rio image that would cause this, check you are running the latest version.

Thanks for the info so far. Just to clarify a few things though:

  • The overrun messages are sporadic. The robot is sitting there doing nothing. No messages are being logged. Then out of nowhere an overrun message pops up. Then nothing happens for a while, then another overrun message. This happens roughly once a minute, but can be irregular.
  • These messages don’t seem to impact the behavior or performance of the robot. They’re just annoying in that they sit in the DS message window and slowly fill the log. They can also obscure the few, rare messages we are logging on purpose to try and debug things.

I’ll address the comments in reverse order:

  1. The sticker on that RIO says it’s running the v14 image from around March. I can validate this tomorrow. I’m not aware of a newer image.
  2. I would post code, but my options at this time are: post the single line that calls the scheduler, or post everything. Neither seemed like a good choice. This is part of why I was asking whether the Scheduler has any facilities to help identify which tasks are taking a long time, sort of like Task Manager, top, or any other tool like that.
  3. The specific message I’m seeing is the one that indicates a loop overrun during teleopPeriodic. It doesn’t provide a lot of information beyond the actual time taken and the expected time.
    Your comments about Talons are interesting. We have a couple of SPARK MAX controllers, and if their CAN calls are blocking, that could contribute to the problem. I'll have to look into that.

In any case, my initial question of “are occasional overruns normal in Java” appears to have been answered with a no. So I will proceed to dig into the problem. I’ll post back here with what I manage to track down.


We were having lots of loop overrun errors. The Scheduler itself can’t help you - what I’d do is run a code profiler as described in this thread: We are experiencing a lag issue in our code

We did that, and found that getting the current state of a solenoid was taking forever. So we instead now cache the last commanded state.
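Roughly what we did, sketched in plain Java (the `HardwareSolenoid` interface here is a stand-in for the real WPILib class, not its actual API):

```java
// Sketch of the caching pattern described above: remember the last
// commanded state instead of reading it back from the hardware.
public class CachedSolenoid {
    // Stand-in for the real (potentially slow) hardware interface.
    public interface HardwareSolenoid {
        void set(boolean on);
    }

    private final HardwareSolenoid solenoid;
    private boolean lastCommanded;

    public CachedSolenoid(HardwareSolenoid solenoid) {
        this.solenoid = solenoid;
    }

    public void set(boolean on) {
        solenoid.set(on);   // the actual hardware call
        lastCommanded = on; // cache what we asked for
    }

    // Fast: returns the cached value instead of querying hardware.
    public boolean get() {
        return lastCommanded;
    }
}
```

The trade-off is that `get()` reports what you commanded, not what the hardware is actually doing, but for a solenoid that's usually the same thing.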


Besides the Talons, a lot of other items like sensors (range finders, gyros, accelerometers) can only be polled/updated every so often. It's usually buried in some timing diagram in the datasheet. Also, any kind of I/O to the console or driver station is very time-consuming and should be minimized. Cache the values and only output what has changed.
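The "only output what has changed" part could look something like this in plain Java (names are hypothetical; the sink would be a NetworkTables put or a console print in practice):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of change-only output: keep the last value sent per key and
// skip the (slow) output call when nothing is new.
public class ChangeOnlyLogger {
    private final Map<String, Object> lastSent = new HashMap<>();

    // Returns true if the value was actually published.
    public boolean publish(String key, Object value, BiConsumer<String, Object> sink) {
        if (value.equals(lastSent.get(key))) {
            return false; // unchanged, skip the expensive I/O
        }
        lastSent.put(key, value);
        sink.accept(key, value); // e.g. a NetworkTables put or print
        return true;
    }
}
```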

Most sensors connected to the Rio's I/O ports are accessed by fast native calls to the FPGA and don't contribute much to loop dt. Also, all calls to get or set driver station inputs/outputs (besides console prints) are cached on the Rio and updated by a separate process that's always running, so calls to these functions are pretty fast.

Well, some things that are accessed over busses (SPI, I2C, UART) have certain transaction flows; e.g., you have to send a request to the device, wait a certain amount of time, then read your data out. The best (although more complex) way to handle this is to have a separate thread that’s polling the device, and that thread immediately returns the most recent value when queried by the application.
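The pattern I mean, sketched in plain Java (`readFromDevice` is a placeholder for the actual blocking bus transaction):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// Sketch of the polling-thread pattern: a daemon thread handles the
// slow device transaction; callers read the most recent cached value
// without blocking the main loop.
public class PolledSensor implements AutoCloseable {
    private final AtomicLong latestReading = new AtomicLong();
    private final Thread poller;
    private volatile boolean running = true;

    public PolledSensor(LongSupplier readFromDevice, long periodMs) {
        poller = new Thread(() -> {
            while (running) {
                // The slow request/wait/read cycle happens here,
                // off the main robot loop.
                latestReading.set(readFromDevice.getAsLong());
                try {
                    Thread.sleep(periodMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        poller.setDaemon(true);
        poller.start();
    }

    // Non-blocking: returns immediately with the last polled reading.
    public long get() {
        return latestReading.get();
    }

    @Override
    public void close() {
        running = false;
        poller.interrupt();
    }
}
```

The caveat is that readings are up to one poll period stale, so pick a period matched to the device's update rate.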

Yes, hence why I specified "most". Sensors connected to serial buses have inherent overhead, although they can often run in the background on a separate thread.

True, but "reasonably fast" sometimes isn't fast enough, otherwise you wouldn't get that error. In my experience every little bit helps, and as for sensors… well, most work very well, but we did get bitten a couple of times: once a few years ago with a gyro/accelerometer, once about two years ago by an ultrasonic range finder, and once about three years ago updating the sliders/LEDs on the driver station.

Agreed. This was happening to my team occasionally until we re-imaged the roboRIO. As soon as we did, it did not happen anymore. If you have not done this yet, definitely try it because it may be your problem…

