Configure Timers

We’re trying out Java this year on our robot after using LabVIEW the last few seasons (don’t ask)…

In LabVIEW we use the Configure Timer VIs to set the number of samples for averaging when using encoders or counters for speed control.

We can’t seem to find the equivalent method call in Java.
Does anyone know if this exists, or do we have to write our own averaging?

Never mind. Found my answer. Not available in C++ or Java.

Read this follow-up post:

It’s not hard to do. A fairly simple change to the WPILib source code.

Thanks Ether. We’ll look into modifying the encoder and/or counter class(es).

You can use the TimerTask and Timer classes, like we are doing. The Timer will fire off a TimerTask’s run() method every x milliseconds.
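A minimal sketch of that pattern using the standard java.util classes (the class name and period here are illustrative, not from any team’s code):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicTask extends TimerTask {
    static final AtomicInteger ticks = new AtomicInteger();

    @Override
    public void run() {
        // periodic work goes here, e.g. sampling an encoder
        ticks.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        // fire run() every 40 ms, starting immediately
        timer.schedule(new PeriodicTask(), 0L, 40L);
        Thread.sleep(200);
        timer.cancel();
        System.out.println("ran " + ticks.get() + " times");
    }
}
```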

Just curious: Have you ever measured the scheduling jitter? e.g. add code to toggle an IO pin at the start of the run() method and look at the pin on a scope.

I have, and the worst I have seen is 2 ms of jitter, measured on the cRIO using the System Timer. This was both running at a 40ms period and a 100ms period, where the run() method had a 1ms calculation time.

So for example, if I was running something with a 40 ms period, it would actually run between 38 and 42 ms, but for the most part the cRIO was pretty good about keeping it a constant 40.

All of the threads I run on the robot use the TimerTask class. The timer task does not preempt the running thread if they are at the same priority, so keep that in mind; it will affect your thread execution.

Two questions come to mind:

  1. Do you put code in each of your threads to monitor throughput margin and re-entrancy?

  2. Are there other scheduling options in FRC Java which will provide concurrent processing of same-priority threads?

Yes, I do. It is not necessary, but I do it out of habit for all my embedded projects.

Especially when I run embedded code on an operating system that doesn’t guarantee timing, I always put code in there to monitor the execution of each state.
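For example, each run() body can time itself and flag overruns. This is a sketch of the general idea, not anyone’s actual team code; the class and field names are invented:

```java
public class MonitoredTask extends java.util.TimerTask {
    private final long periodMs;
    long worstMs = 0;   // worst observed execution time
    int overruns = 0;   // times the body exceeded its own period

    MonitoredTask(long periodMs) { this.periodMs = periodMs; }

    @Override
    public void run() {
        long start = System.currentTimeMillis();
        doWork();
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > worstMs) worstMs = elapsed;
        if (elapsed > periodMs) overruns++; // throughput margin exhausted
    }

    void doWork() { /* control calculations go here */ }

    public static void main(String[] args) {
        MonitoredTask t = new MonitoredTask(40);
        t.run();
        System.out.println("worst " + t.worstMs + " ms, overruns " + t.overruns);
    }
}
```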

Unfortunately, the Squawk VM doesn’t support a higher-resolution timer, so timing is only good to the millisecond. But millisecond accuracy is good enough for anything we will need to do in FIRST, for sure.

When you use a TimerTask, it doesn’t run automatically; you need to create a separate scheduler which runs the task based on the periodicity set.

Something like:

this.executor = new java.util.Timer();
this.executor.schedule(new MyTimedTask(this), 0L, this.period);

There are different properties you can set on the scheduler. By default the scheduler doesn’t preempt the running thread, so let’s say you kicked off a TimerTask thread and then entered a busy loop like this:

  while (true) {
      System.out.println("Hello World");
  }

Running as fast as it could, you would see that your timer task would not run at your set period; it would only get to run between timeslices of the while loop, whenever the CPU was free for an instant.

However, in FRC Java you can use other scheduling options to control your concurrency. For example, you could use the scheduleAtFixedRate method, which I believe does a better job of holding the set period. (I never tested it, however.)
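One correction worth noting: in standard Java, scheduleAtFixedRate is a method of java.util.Timer (not TimerTask), and it does not preempt anything. The difference from schedule is that delayed executions are rescheduled to "catch up," so the long-run rate is preserved. A minimal sketch:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedRateDemo {
    static final AtomicInteger runs = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        TimerTask task = new TimerTask() {
            @Override public void run() {
                // with scheduleAtFixedRate, if one execution is delayed,
                // subsequent ones fire closer together to catch up
                runs.incrementAndGet();
            }
        };
        timer.scheduleAtFixedRate(task, 0L, 20L); // every 20 ms
        Thread.sleep(110);
        timer.cancel();
        System.out.println("executions: " + runs.get());
    }
}
```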

The better way to do it in a normal J2ME project would be to set the priority of each of your threads, using the Thread.setPriority() and Thread.getPriority() methods and the Thread.MIN_PRIORITY, NORM_PRIORITY, and MAX_PRIORITY static fields. You can have up to 10 priority levels, where higher priorities will preempt lower ones. However, these methods are not supported by the Squawk JVM for FRC, and all threads in FRC Java are created with the same normal priority.
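On a standard JVM (outside the FRC Squawk VM), that API looks like this:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(new Runnable() {
            public void run() { /* background work */ }
        });
        // priorities range from MIN_PRIORITY (1) to MAX_PRIORITY (10);
        // new threads default to NORM_PRIORITY (5)
        worker.setPriority(Thread.MIN_PRIORITY);
        System.out.println(worker.getPriority() + " "
                + Thread.NORM_PRIORITY + " " + Thread.MAX_PRIORITY);
        // prints: 1 5 10
    }
}
```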

The best way to do it in FRC Java would be to avoid the need for preemption: try to ensure your code is optimized and runs in a reasonable period, so that the CPU can be shared between all normal-priority tasks. If all of your FRC Java threads have relatively small execution times compared to their periods (and they should), then all your threads should have enough access to the CPU to run within their periodicity.

The Java VM in general is not a real-time system, so timing is never truly guaranteed, no matter what we do. Most of the inconsistencies arise when you have threads that rely on preemptive behavior instead of cooperatively giving up CPU time.

Hope that helps,

  • Kev

Thanks for your informative post. Let me take issue with the point above.

It’s not good enough for accurately measuring the speed of a frisbee-shooting wheel using the counts/time method.

It’s not good enough to measure the linear speed of a frisbee as it exits the shooter on your robot.

You’re very welcome!

I am not sure what you mean by this. This is a “to each his own” problem. How accurately do you think you need to measure the speed of a wheel? And what would happen if you were off by a couple of RPM? Can you overcome this with otherwise robust code?

I come from the defense world, where if timing is inaccurate, people could lose their lives. My point was that we obviously don’t have that problem in FIRST, and I believe our problems in FIRST can afford less accuracy in our software timers.

I have never observed good performance from the default encoder rate method, but I have written my own encoder rate code with performance of +/- 3 RPM without averaging samples, where loop times were ~200 ms, and that is far better than I need for anything I do with a shooter wheel. What desired accuracy is calculating the speed of a wheel, using time measured in milliseconds, not allowing you to achieve? What is driving this accuracy?

Here is some pseudocode from my 2013 robot giving the performance I stated above, with a 500 PPR encoder at 2300 RPM wheel speed. It is a class which extends Encoder, which is why it uses super.

	public double getRPM() {
		timeNow = System.currentTimeMillis();
		countNow = super.get(); // calls Encoder.get()
		rate = (double)(countNow - countBefore) / (timeNow - oldTime); // counts per millisecond

		countBefore = countNow; // remember this sample for the next call
		oldTime = timeNow;

		return rate * 1000 * 60 / PPR; // counts/ms -> RPM
	}

Again, how accurately do you need to measure this?

System.currentTimeMillis() is a software timer. To measure the linear speed of the disc, you first need something which can provide you external readings. Suppose, for example, you had two IR line sensors in the path of the disc, a set distance apart, saved the time each IR sensor was crossed by the disc, and then calculated the velocity using distance over time. Are you saying the accuracy you would get from that wouldn’t be good enough because the time was measured in milliseconds? Not good enough to accomplish what, exactly?
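To make the trade-off concrete, here is a quick numeric sketch of that two-sensor setup (the spacing and speed are made-up numbers, not from anyone’s robot), showing how a 1 ms timestamp resolution propagates into the computed velocity:

```java
public class TimingErrorDemo {
    public static void main(String[] args) {
        double d = 0.10;        // sensor spacing, meters (illustrative)
        double v = 15.0;        // true disc speed, m/s (illustrative)
        double dtTrue = d / v;  // ~6.7 ms between the two sensors

        // with 1 ms timestamp resolution, the measured dt can be off by ~1 ms
        double vLow  = d / (dtTrue + 0.001); // ~13.0 m/s
        double vHigh = d / (dtTrue - 0.001); // ~17.6 m/s
        System.out.printf("true %.1f m/s, measured %.1f..%.1f m/s%n",
                v, vLow, vHigh);
    }
}
```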

What brand and model encoder is that?

That response didn’t really answer my questions above. But to answer yours, information on the encoder in question can be found here:

It is an encoder I like to use on my personal projects. It won’t be the final encoder we use on our robot this year. The final encoders we use this year will be Grayhill 256 PPR.

Hope this helps,

In order to prepare a thoughtful analytical response, I needed to gather some technical information to clear up some ambiguities in your post.

But since you’re in a hurry, I’ll just ask you for clarification instead of taking the time to research it myself.

When you use the term PPR, what do you mean by that? Do you mean the same thing that US Digital does (see attachment)? Or do you use the term PPR to mean what US Digital calls CPR?

Grayhill specs their encoders in CPR, not PPR, and they define CPR the same way that US Digital does.

So is your “256 PPR” Grayhill encoder 256 CPR or 64 CPR?

And is your “500 PPR” Cytron encoder 500 CPR or 125 CPR?

If you don’t have this info, I will respond to your post without it, but my response will be in generalities and therefore less useful, I would think.

Not a problem; I am not really in a hurry. I am just curious, and as build season winds down your insight and viewpoint may show something we are overlooking when solving this problem, or an oversimplification we are applying which may be inaccurate for some reason.

I believe I have all bases covered but I won’t know for sure until you share your perspective and reasoning. But I do appreciate you taking the time to formulate a complete answer… so thank you.

I do not see an attachment with your post, so it’s hard for me to say how my definition lines up with US Digital’s. However:

When I use the term PPR, as in pulses per revolution, I mean it in terms of the square wave being generated. A pulse, in my definition, is a complete rising edge, width, and falling edge.

Therefore, if you are counting all rising and falling edges, a single pulse produces 2 counts.

If the encoder is quadrature, there are two digital outputs, each generating its own pulse train. Therefore, if you were to count all rising and falling edges from a quadrature encoder, each pulse would return 2 counts, but you have 2 channels to deal with, so the total is 4 counts per pulse.

So by my definition, a quadrature 500 PPR encoder can produce 2000 counts per revolution when counting all rising and falling edges on both channels.

I believe Grayhill defines this as cycles per revolution (CPR), which is another term used in industry but means exactly the same thing as my PPR. I avoid using CPR because, unless you are intimately familiar with the definitions, you may take CPR to mean “counts per revolution” (or at least I would, and that is completely wrong).

So the Grayhill 256 CPR encoder I mentioned would produce 1024 counts per rev if you were counting the rising and falling edges of both signals, or as I call it, 256 pulses per revolution (on a single channel).

Does that clarify things and answer your question?

Hope that helps,

In my haste to be responsive, I forgot to attach it. Here it is. It’s just a screenshot of a portion of page 1 of the E4P datasheet, which is available here.

So I am going to assume that the 500 PPR Cytron that you mentioned is actually 500 CPR (using the US Digital and Grayhill definitions of CPR).

You said you were controlling 2300 rpm wheel speed with that 500 CPR Cytron encoder, but you didn’t mention how you were decoding the signal, so it’s not clear how many counts per rev you were getting. I’ll just assume for the moment that you were using 4X quadrature decoding, so you’d be getting 2000 counts per rev.

You said you were polling the counts and computing the speed every 200ms. 2300 rpm is 7.7 revs every 200 ms. That’s 15,333 counts every 200ms. So it’s no wonder you are getting a clean speed signal: you are averaging the elapsed time of 15,333 counts to get your speed. That introduces lag. Lag in the sensor signal is generally not a good thing for closed loop control; it limits how fast you can make the response without causing oscillations.

Consider what would happen if you did this: use only one channel of the encoder, and configure the FPGA to count only the rising edges of that channel and to report the period based on the elapsed time between the 126 most recent counts. You would use the GetPeriod() method of the Counter class to get the period. The FPGA polls for rising edges at ~153KHz, and uses a 1MHz timer to measure the period. At 2300 rpm you should get single-digit rpm jitter with this setup, and with only 1/4 of a rev lag instead of seven and a half revs. You could ask for speeds at a 10ms rate and get a fresh reading each time.
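Converting the period reported by that setup into RPM is straightforward. This sketch assumes the period is in seconds per count and uses the 500 rising edges per revolution (single channel) described above:

```java
public class PeriodToRpm {
    // periodSeconds = seconds per count, countsPerRev = edges counted per rev
    static double rpmFromPeriod(double periodSeconds, int countsPerRev) {
        return 60.0 / (periodSeconds * countsPerRev);
    }

    public static void main(String[] args) {
        // at 2300 rpm with 500 rising edges/rev, one count every ~52.2 us
        double period = 60.0 / (2300.0 * 500.0);
        System.out.println(rpmFromPeriod(period, 500)); // ~2300.0
    }
}
```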

With a clean, noise free speed signal with minimal lag (as described above) it becomes possible to use a bang-bang wheel speed control algorithm. That provides the fastest spinup and recovery times. The code is so simple that it can be run at 10ms without sucking up CPU time. No tuning is required, and the iteration rate of the control algorithm can be quite sloppy without affecting the control.
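The bang-bang control law itself really is only a few lines. A sketch (the method and thresholds are illustrative; it is not WPILib API):

```java
public class BangBang {
    // full throttle below the setpoint, coast above it -- never reverse or
    // brake a shooter wheel; its inertia carries it between corrections
    static double bangBang(double measuredRpm, double setpointRpm) {
        return (measuredRpm < setpointRpm) ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        System.out.println(bangBang(2200.0, 2300.0)); // 1.0 (spin up)
        System.out.println(bangBang(2350.0, 2300.0)); // 0.0 (coast)
    }
}
```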

Here’s a link to the use of a microsecond timer to measure the exit speed of the frisbee from the shooter. A highly accurate feedback signal of frisbee exit speed would make it possible to tune the shooter wheel speeds to maintain consistent frisbee speed.

It’s getting late and this post is getting too long. We can talk about the other stuff later if you want.

Thanks. So yes, my PPR definition is equivalent to US Digital’s CPR definition.


Correct, I am using 4x decoding.

I think we both agree the hardware is counting all of the pulses, with a max rate of 153 kHz. Let’s assume for a second that the count was a continuous analog signal. I am simply sampling this analog signal every ~200 ms and then calculating the velocity between the two points.

Sampling the signal does not provide any lag. If we were to place the analog signal over the sampled version, all of the peaks would line up.

I introduce lag when I calculate the derivative of the signal in order to get velocity. Furthermore, I am introducing a small bit of error, because I am approximating the derivative of the signal between the two points with a straight line, but that is standard procedure when performing discrete differentiation, and it’s a small error I can tolerate within my robust control.

As for the lag, my speed signal lags my position signal by ~200 ms in this example. However, lag is present for any discrete differentiation; you will always lag by at least one time sample. Even using the FPGA, when calculating the rate you will need to wait at least one clock tick (granted, the clock tick is much smaller, so the lag is less, but below I will explain why I believe 200 ms is tolerable).

Note: I chose to run my loop at 200 ms. I can run it much faster, with approximately +/- 6-10 RPM hits, which is still acceptable for my control algorithms; it is less than 1% of my top speed.

So that covers lag: no matter whether we use a millisecond timer or a nanosecond timer, I can’t get rid of lag as long as I am calculating velocity; I can only shorten my sample time to reduce it. I do not understand what you mean by averaging, because the points are not being averaged. Can you elaborate?

I have never been able to get a clean signal from the getPeriod() function. This is what led me to write my own software rate function. Last year, for our 2012 robot, I counted rising edges only on a 256 PPR encoder and saw RPM spikes of +/- 50 RPM in some instances. It wasn’t something I could tolerate, so my solution was to average the data: I averaged the last 10 samples to get a better signal.

I know other teams have also experienced similar issues, and in one post I read that NI recommends setting an averager on the FPGA using its API.

I haven’t looked into why the getPeriod() function produces such a noisy signal, but I suspect it is because the division is done in software instead of hardware, by a constant number; that code can be delayed in software, making the result inaccurate. FRC has had trouble producing a clean getPeriod() signal for years.

Simply using the getPeriod() method won’t help, because I don’t think it will produce the clean signal you mention without averaging (which introduces lag), for the reasons I have given above and from what we have personally experienced.

But my original question was: why do you think a millisecond timer is not accurate enough for FIRST, and what is driving you to higher accuracy requirements than a millisecond timer?

It seems like you would like to do bang-bang control. Typically, bang-bang oscillates around its setpoint because you have only on/off control, so to remedy this oscillation you want to sample as fast as you can and update the control law quickly.

But a couple of things here. Why do I need to ask for a speed signal every 10 milliseconds? The Driver Station packets are sent every 20 milliseconds, so any calculations done faster than that are simply being thrown out and never make their way to the motors. Unless you run your code in a different thread that runs at a faster loop than teleopPeriodic, for example. Is this what you are assuming?

Secondly, you are ignoring inertia and Coulomb friction. Unloaded, my motors can react quickly, and it makes sense to do calculations rapidly. However, once I start adding inertia to my motor (i.e., arms, gears, wheels), its reaction time increases drastically. So even though you are calculating a command signal every 10 ms or 20 ms, those signals are not actually moving the motor, because a motor with inertia has a reaction time of, say, 50 ms or 100 ms. For a shooter wheel this is reduced when you’re at speed, but for position control you must overcome Coulomb friction every time you start up. Why calculate a signal every 10 ms when you’re not using it? My drivetrain doesn’t react in 20 ms; I wish it did, however. :)

As for bang-bang control: it is a good way to control the velocity of a wheel because of its simplicity, but you need to manage the oscillations around the setpoint. A better method might be a simple proportional controller to reduce oscillation, but then you must deal with steady-state error. Going to PID means you must deal with integral windup and derivative noise. Every control method has its pros and cons for the application.

But how fast is a fast recovery? And if you are sampling every 10 milliseconds, why do you need a timer that has better resolution than 1 millisecond?

If I can have a bang-bang controller which recovers in, hypothetically, 1 ms but oscillates around its setpoint so much that back-to-back shots vary, versus a PI controller which takes a little longer to get up to speed but is stable around its setpoint, which is the better controller?

Furthermore, what is the point of doing all that work at that fast a rate if the mechanism I have feeding the shooter can only send a disc through it every 500 ms at best, or every 1 second on average? See where I am going?

To each his own. We use a PID class that finishes its computation in 1-2 ms using the system clock, and it provided recovery times just under 1 second last year, and about 2.5 seconds from a stop.

My opinion is that nothing I have seen thus far in FIRST requires a faster software timer, especially since everything we do has the end result of moving some physical mass, which couldn’t react faster than milliseconds anyway.

I would love a nanosecond timer, but it’s more of a want; I don’t see the requirement for one.

It would be interesting to see if teams actually do this on the fly. This problem is not completely controllable, since you are not actively controlling all of the variables which determine muzzle velocity. Compression, surface friction, and wheel contact time all play a huge role in muzzle velocity, but you would only be controlling wheel speed. Furthermore, at some point you reach diminishing returns, because the faster your wheel spins, the less time it is in contact with the disc, so less energy is transferred.

Sampling the signal does not provide any lag… I do not understand what you mean by average, because the points are not being averaged.

Averaging causes lag. Speed is your process variable, not counts. In the code you posted, sampling the counts every 200 ms and dividing the difference by the difference in time produces a speed signal which is the average speed over the past 7.7 revolutions, not the instantaneous speed at the moment of sampling. It’s effectively a low-pass filter, which introduces lag. That average speed lags the instantaneous speed if there was any acceleration during the past 200 ms.
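A quick numeric illustration of that point (the acceleration and speed numbers are invented for the example): if the wheel is accelerating steadily, the Δcounts/Δt value over a 200 ms window equals the true speed at the *middle* of the window, i.e. it lags the present by half the window:

```java
public class AveragingLagDemo {
    // average speed reported by delta-counts/delta-t over [t, t+window]
    static double windowAverage(double v0, double accel, double window) {
        double revs = v0 * window + 0.5 * accel * window * window;
        return revs / window;
    }

    public static void main(String[] args) {
        double accel = 1000.0 / 60.0; // rev/s^2 (a steady 1000 rpm/s ramp)
        double v0 = 2000.0 / 60.0;    // rev/s at the start of the window
        double window = 0.2;          // the 200 ms sample window

        double avg  = windowAverage(v0, accel, window);  // what the code reports
        double inst = v0 + accel * window;               // true speed at sample time
        double mid  = v0 + accel * window / 2.0;         // true speed 100 ms earlier
        // avg matches mid, not inst: the reading lags by half the window
        System.out.printf("avg %.3f  instantaneous %.3f  mid-window %.3f rev/s%n",
                avg, inst, mid);
    }
}
```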

If instead you use the WPILib GetPeriod() method, this happens:

  1. the FPGA returns the elapsed time (measured with the FPGA’s 1MHz clock) between the most recent N+1 counts detected, then

  2. WPILib divides that elapsed time by N.

You can then use this in your code to calculate speed. Using this method, you don’t need to average the speed over seven and a half revolutions to get a clean signal. One quarter of a revolution will give you a clean speed signal, with less phase lag.

I can shorten my sample time in order to reduce lag.

Sure you can. But your computed speed will get noisier, due to the 1ms resolution of your timer and the discrete nature of the counts.

I have never been able to get a clean signal from the period() function.

The default value in WPILib is to set up the FPGA to return the elapsed time between the most recent 2 counts (i.e. N=1). Have you ever changed that default value? If not, I can understand why you couldn’t get it to give you a clean signal at high speeds.

I haven’t looked into why the getPeriod() function produces a signal so noisy

I have. At high speeds with many CPR, the FPGA 1MHz timer and 153KHz polling frequency aren’t fast enough to measure the elapsed time between just 2 counts without introducing excessive jitter. That’s why you need to configure the FPGA to return the elapsed time between the N+1 most recent counts, instead of the most recent 2 counts, when using an encoder. If you use a one-per-rev counter, the FPGA default value N=1 works well.

why do i need to ask for a speed signal every 10 milliseconds? The Driver Station Packets are sent every 20 milliseconds, so any calculations done faster than that, are simply being thrown out and not making its way to the robot.

The speed signal comes from the sensor on the robot, not from the Driver Station. The only time the 20ms would come into play is when the driver changes the speed setpoint, which is typically a step change, not something that’s continuously changing every 20ms.

As for the control of bang-bang, It is a good way to control velocity of the wheel for its simplicity, but you need to manage the oscillations around the setpoint… You are ignoring inertia

You’ve got that backwards. I’m not ignoring inertia, I’m depending on it. Inertia plays a critical role in making bang-bang speed control work cleanly. Bang-bang speed control likes a high inertia load, a fast control loop, and low phase lag in the speed feedback. Compared to the approach you are using, GetPeriod() gives a smaller phase lag for the same signal-to-noise. And bang-bang code is so simple that it’s not a problem to run it at 10ms. It also has the fastest spinup and recovery time.

I would love a nanosecond timer

GetPPCTimestamp(). It’s not nanosecond but it’s better than millisecond.

I haven’t been able to find that implemented in java.

Thanks for clearing that up. Agreed, it’s lag due to differentiating over one sample interval; in my case that’s 200 ms. Using the 1 MHz timer, at best it would be 1e-6 seconds, which would be nice if I can get it to work using your suggestions.

I have not. For Java, the function which configures the counter’s averager is not part of the API, so I would have to recompile the WPILibJ library. I have not gotten around to it yet, but it is on my to-do list to play with after we bag the robot. Right now the default is 1, from the Counter class.


Any recommendation for a new averaging value?

Not correct. The code is set up so that the periodic functions (i.e., autonomousPeriodic or teleopPeriodic) wait for a new Driver Station packet before executing. So any code within these blocks will only run once per new Driver Station packet, which is every 20 ms. If you have code which samples your signal at 10 ms but then only drives your motor in the teleopPeriodic block, you are wasting half of your calculations, because they are not being used to command the motor. The 20 ms comes into play any time you have code within any of the periodic blocks. It’s a pitfall.

If you want to run your motor faster than 20ms, you must place the drive code in a different loop, thread, etc that does not wait for new DS packets.

Here is a snippet from WPILib showing the wait:

        if (nextPeriodReady()) {
          didTeleopPeriodic = true;
          teleopPeriodic();
        }

        ...

        private boolean nextPeriodReady() {
          return this.m_ds.isNewControlData();
        }
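A sketch of the separate-loop approach using java.util.Timer. Note that computeControl and setMotor are placeholders for your own control law and speed-controller call, not WPILib API:

```java
import java.util.Timer;
import java.util.TimerTask;

public class FastLoop {
    private final Timer timer = new Timer();
    volatile int ticks = 0; // for illustration: counts loop executions

    // runs every 10 ms regardless of Driver Station packets
    void start() {
        timer.schedule(new TimerTask() {
            @Override public void run() {
                ticks++;
                double output = computeControl();
                setMotor(output); // placeholder for your speed-controller call
            }
        }, 0L, 10L);
    }

    void stop() { timer.cancel(); }

    double computeControl() { return 0.0; } // placeholder control law
    void setMotor(double out) { }           // placeholder motor output

    public static void main(String[] args) throws InterruptedException {
        FastLoop loop = new FastLoop();
        loop.start();
        Thread.sleep(100);
        loop.stop();
        System.out.println("iterations: " + loop.ticks);
    }
}
```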

You are taking what I said out of context; the statements about inertia and bang-bang control were totally separate. I was simply stating that performing calculations at such a fast rate (i.e., 10 ms) may be unnecessary, because there are other factors to be aware of. First, if you are going to calculate a signal every 10 ms, make sure you run in a separate loop from the periodic functions, or else your control gets chopped to 20 ms and you instantly lose half of your calculations. Second, the inertia on your motor might not allow it to react within 10 ms, so commanding it every 10 ms is unnecessary. Essentially, considering the physical constraints may reduce your need to calculate control signals at such high rates; maybe slower rates are just as acceptable, because your motor can only react in 50 ms or 100 ms.

Not available in Java. The Timer class in Java has getFPGATimestamp() (returns ms) and getUsClock(), although getUsClock() is deprecated; I am not sure why.