Our team is working on sensor fusion for the Limelight + encoders. A critical factor in fusing those values is the latency of the vision data. I’ve seen different estimates of that latency. So as far as I know, the components of that chain are:
Limelight Pipeline.
Network Tables.
Rio Loop Timing (i.e., where you are in the ‘read’ loop when the data becomes available).
I found one location that says something like this:
Total latency (Photons → Robot): 21-25 milliseconds
-Pipeline Latency: 3-7 milliseconds
-NetworkTables → Robot latency: 0.3 milliseconds
-(NT limits bypassed to instantly submit targeting data.)
But it sure seems like a lot of things are missing from that 21-25 milliseconds. I’d like to quantify it a little better, and reduce it where possible. I’m sure teams have already done this. Anyone have some starting points? For instance, how do you ‘bypass NT limits’?
One way I had considered for measuring the total latency was to compare the position from odometry, while driving toward a target, against the ‘distance’ calculated from the ty value reported by the Limelight. If you know your drive velocity, then you can calculate how far behind your actual location the Limelight’s estimate is and back out the latency. This would give you an experimental baseline.
However, this would include the Rio loop timing, along with any inconsistencies from your encoders (for instance, if you’re using CANcoders).
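In code form, that back-out is just this arithmetic (a minimal sketch; the parameter values are whatever your own odometry/vision code already computes, and the method name is made up):

```java
/**
 * Back out the total vision latency from the lag between odometry and the
 * ty-derived distance while driving toward a target at a roughly constant,
 * known closing speed. This is just the arithmetic; the inputs come from
 * your own code.
 */
public static double estimateLatencySeconds(
    double odometryDistanceMeters,   // "current" distance to target from odometry
    double visionDistanceMeters,     // distance computed from ty (lags behind the truth)
    double closingSpeedMetersPerSec) {
  // While closing on the target, the vision sample reflects where the robot was
  // `latency` seconds ago, so it reads longer than the odometry distance by v * latency.
  return (visionDistanceMeters - odometryDistanceMeters) / closingSpeedMetersPerSec;
}
```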
I am not sure if these values have been considered yet, so here they are.
The default flush rate for NT is 100 ms (10 Hz). If using PyNetworktables, that rate is 50 ms (20 Hz).
Thus the latency either one can introduce is anywhere from 1-99 ms for NT and 1-49 ms for PNT.
Both of these options have a “flush” feature that immediately sends all cached data. I know that PNT claims this feature is “rate limited”, whatever that means. I do not know if the same is true for NT. I am also not currently aware whether Limelight uses the default NT flush rate or a modified rate. I have a PM in to @Brandon_Hjelstrom to see if he can shed some light on the subject.
A really useful feature that would help with determining that total system latency would be a shared “system clock”. It may be possible to set up a time server on one of the devices on the robot network. Ideally this would be the Rio, but it by no means has to be. For instance, I have a vision co-processor with a UPS on it that also contains an RTC.
If one device could be set up as a time server, then all devices on that system (robot) could have their clocks synchronized, making timestamping of all activities much easier.
This could be set up in a test environment to help figure out the latency values, and then those values could be used on a competition bot without needing the time server to be there as well.
This can be kicked up to 10 ms, and I think the Limelight libraries should be using the techniques that get the latency down to… I want to say at least sub-millisecond. Peter explained the details to me at one point and I’ve since forgotten them, but it’s small.
The pipeline delay is the biggest factor from everything I’ve seen.
Keep an eye out for NT4 this coming year. Built into the spec is functionality to sync clocks across servers and clients, allowing the coprocessor to report samples and observations with respect to the FPGA timestamp.
I need to see if I can do that with PNT. I know there is a setting for the interval. I just don’t know if it will be stable or not. One way to find out…
NTP and PTP are a lot more complex (e.g., they require additional services running) and require additional open ports beyond what’s open on the radio firewall. NT4 builds in a very simple time synchronization mechanism that occasionally measures the RTT to calculate a time offset.
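As a sketch of that idea (this is not NT4’s actual code, just the offset math, with hypothetical suppliers standing in for the real clock read and network request):

```java
import java.util.function.LongSupplier;

/** Sketch of a simple RTT-based clock offset measurement. */
public final class ClockOffsetSketch {
  /**
   * @param localClockMicros  reads this device's local clock, in microseconds (hypothetical)
   * @param requestServerTime sends a request and returns the server's clock reading,
   *                          in microseconds, taken when it built the reply (hypothetical)
   * @return offset to add to the local clock to approximate server time
   */
  public static long measureOffsetMicros(LongSupplier localClockMicros,
                                         LongSupplier requestServerTime) {
    long t0 = localClockMicros.getAsLong();
    long serverTime = requestServerTime.getAsLong(); // the round trip happens inside this call
    long t1 = localClockMicros.getAsLong();

    long rtt = t1 - t0;
    // Assume the reply spent roughly half the round trip in flight, so the server's
    // clock has advanced by about rtt/2 by the time we read t1.
    return (serverTime + rtt / 2) - t1;
  }
}
```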
The NT4 timestamps are integer microseconds. They are synchronized to the server, which in robot networks is the robot code, so it makes sense to synchronize to the FPGA’s microsecond clock. UTC isn’t a good choice here for several reasons:
-UTC time is not necessarily correct when robot code starts, as the wall clock is set by NetComm when the DS connects, which may be many seconds after the robot code starts
-The FPGA timestamp is a lot faster to read than UTC time (it’s just an FPGA register read, no syscall required)
-FPGA timestamps are used for many purposes on the Rio, including notifiers; you can get FPGA timestamps for things like DIO changes by using DMA
-FPGA timestamps are significantly smaller for most robot purposes: a microsecond timestamp will run for about 70 minutes within 32 bits before needing to expand to 64 bits, so for most robot purposes it will stay in 32 bits. UTC timestamps need 8 bytes for microsecond resolution (assuming use of 32-bit unix time instead of 64-bit).
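(For scale on that 70-minute figure: 2^32 µs = 4,294.967296 s ≈ 71.6 minutes.)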
So the current version of NT (NT3) has both an automatic “background” flush to the network and a manual flush function call. Both are rate-limited to 10 ms (i.e., trying to flush more frequently than this is ignored). The approach to getting low latency is fairly straightforward: set the automatic flush period to be fairly long (e.g. 100 ms or more), and call the manual flush immediately after setting your data values. This gets the latency down to the sub-ms range as long as you aren’t trying to manually flush more frequently than every 10 ms.
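In WPILib Java (NT3 API), that approach looks roughly like this (the “vision”/“targetAngle” table and entry names are just placeholders for whatever you actually publish):

```java
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LowLatencyPublisher {
  private final NetworkTableInstance inst = NetworkTableInstance.getDefault();
  // Placeholder table/entry names; substitute your own.
  private final NetworkTableEntry targetAngle =
      inst.getTable("vision").getEntry("targetAngle");

  public LowLatencyPublisher() {
    // Set the automatic background flush period long so it stays out of the way
    // (argument is in seconds; 0.1 s = 100 ms).
    inst.setUpdateRate(0.1);
  }

  /** Call from the loop that produces the data. */
  public void publish(double angleDegrees) {
    targetAngle.setDouble(angleDegrees);
    // Manual flush right after setting the value; still rate-limited to 10 ms,
    // so don't try to flush more often than that.
    inst.flush();
  }
}
```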
I’ve not yet finalized all the aspects of how this will work for NT4. There’s currently no global flush mechanism (instead, there are per-value settings), but that creates a challenge when a user might want to get low latency across multiple values. There does need to be some constraint to prevent teams from unnecessarily consuming airtime on wifi connections.
Honestly, it really doesn’t matter what time source is used as long as all devices can be in sync. Heck, make up your own time, let’s say Boogaloo time. As long as everything is synchronized down to the millisecond, you can call it what you want and source it from wherever you want.
@Peter_Johnson , thanks for the suggestions on minimizing latency. I will definitely change my coding to take that approach. It’s simple, and easy to implement.
We were able to estimate this latency by sighting a target, turning off the Limelight LED, toggling a PH-controlled LED (unknown latency there), and seeing how much time there was between the toggle and the corresponding update to the Rio network table entry (determined by an NT callback). 10-15 ms sticks in my head as being close.
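For anyone wanting to reproduce that, here’s a rough sketch of the measurement side, assuming the NT3-era WPILib Java API and the Limelight’s default “limelight”/“tv” entries (the LED toggling itself is left to whatever hardware you have, and the PH latency is still unaccounted for):

```java
import edu.wpi.first.networktables.EntryListenerFlags;
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

/** Rough sketch: measure time from an LED toggle to the corresponding NT update on the Rio. */
public class VisionLatencyProbe {
  // Limelight publishes "tv" (target valid) in its default "limelight" table.
  private final NetworkTableEntry tv =
      NetworkTableInstance.getDefault().getTable("limelight").getEntry("tv");
  private volatile double toggleTimestamp = Double.NaN;

  public VisionLatencyProbe() {
    // Fires (on an NT thread) whenever the entry updates on the Rio side.
    tv.addListener(notification -> {
      double now = Timer.getFPGATimestamp();
      if (!Double.isNaN(toggleTimestamp)) {
        System.out.printf("LED toggle -> NT update: %.1f ms%n",
            (now - toggleTimestamp) * 1000.0);
        toggleTimestamp = Double.NaN;
      }
    }, EntryListenerFlags.kNew | EntryListenerFlags.kUpdate);
  }

  /** Call this at the same instant you toggle the LED the camera is sighting. */
  public void markLedToggle() {
    toggleTimestamp = Timer.getFPGATimestamp();
  }
}
```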
Since the Limelight runs custom code, there’s nothing to prevent them from disabling the rate limiting on NT updates.
@Peter_Johnson Thanks for the tip about setting the default update rate to something high so that explicit NT flushes are less likely to get rate limited. Going to try that…