Unix Timestamp is off

Hello,

So I’m working on a vision processing project for my team this summer with the Raspberry Pi. I’ve been pretty successful at getting the Pi and its server working; however, I’ve run into some trouble measuring the latency. I’ve been using Unix timestamps to try to figure out the difference between when a packet was sent and when it was received.

The problem is that the Rio timestamp seems to be off by a few seconds. My latency comes out to around negative 1.5 seconds, which I know is impossible because the robot moves around and doesn’t seem laggy at all. The Pi’s timestamp is almost exactly right, but the Rio seems to be about 2 seconds slower than an online Unix timestamp I checked.

I am programming in LabVIEW and using the basic “Get Date and Time” VI.
The Pi is running Python and uses time.time() to get its timestamp.

I’m pretty new to this so any help would be awesome.
Thanks

The Rio is probably just behind, simple as that. It’s pretty much never been connected to the internet since it was manufactured, so it hasn’t had a chance to sync with network time servers. I’d recommend running an NTP server on the driver station computer or the Raspberry Pi to make sure everything’s synced up.
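
If you just want to sanity-check how far off a given clock is before setting any of that up, something like the snippet below (run on the Pi or a laptop, not the Rio, which usually can’t reach the internet) will print the offset against a public NTP pool. This is only a quick check, not the sync itself, and it assumes the third-party ntplib package is installed:

```python
# Quick check of the local clock against public NTP (requires: pip install ntplib).
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
# response.offset is the estimated difference between the NTP server's clock
# and the local clock, in seconds.
print("local clock offset vs NTP: %.3f s" % response.offset)
```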

The roboRIO does not have a battery-backed real-time clock. It does receive time from the driver station, but I suspect that is not a tightly synchronized exchange like NTP is, and that that is the source of the time offset.
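
For what it’s worth, you can estimate that offset yourself with a simple NTP-style round trip. Here’s a rough sketch, written entirely in Python for illustration (on the roboRIO the echo side would really be LabVIEW, and port 5800 is just an example):

```python
# offset_probe.py -- rough NTP-style clock-offset estimate between two machines over UDP.
import socket
import struct
import time

def serve(port=5800):
    """Echo side (stand-in for the roboRIO): reply to each probe with our local time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(64)
        sock.sendto(data + struct.pack("!d", time.time()), addr)

def probe(host, port=5800, samples=10):
    """Client side: estimate (their clock - our clock), assuming a symmetric link."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    offsets = []
    for _ in range(samples):
        t1 = time.time()
        sock.sendto(struct.pack("!d", t1), (host, port))
        data, _ = sock.recvfrom(64)
        t2 = time.time()
        _, t_server = struct.unpack("!dd", data)
        offsets.append(t_server - (t1 + t2) / 2)   # midpoint assumption
    return sorted(offsets)[len(offsets) // 2]      # median is robust to stragglers
```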

Have you thought about other ways to synchronize, such as having one of them generate a digital clock pulse?

If anyone has the urge to write an RTC library for the roboRIO, I’ll throw parts toward the effort. It’s on my to-do list.

There are quite a few ways you could go about solving this, some of which have been mentioned in this thread. But before you do, I’d stop and consider whether you actually need to solve this problem. You can get very far just by assuming a fixed latency. Once you start trying to calculate the latency, you get into the very difficult problem of determining the actual amount of latency: even if you capture the timestamp as soon as you load the image into memory, there’s still ~100-150 ms of latency in USB webcams in general. And while there are solutions for synchronizing timestamps, it’s not a trivial problem.

FWIW, on 1678 we have just done fixed latency compensation (usually assuming 200 ms of latency) or no latency compensation at all. We could probably get better results from sending over the timestamp, but that’s difficult and doesn’t provide much more benefit, so we’ve usually had better uses for development time.

Unless you already have vision alignment working pretty much perfectly (converging in a few seconds), there are likely much easier ways to get better performance out of your vision system than doing latency compensation.

Some sources of latency are easy to measure and compensate for; others are not. On a small hardwired LAN you are talking no worse than a few milliseconds to transmit a packet between the Pi and RoboRIO (and you should use UDP if you really care about delays of this magnitude). Assuming you don’t care about this, you don’t need to worry about absolute timestamps - your RoboRIO can assume the packet was sent and received ~instantaneously, and can timestamp it using the hardware timer upon receipt.
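
In other words, the receiver just stamps the packet with its own clock the instant it arrives and trusts nothing inside the packet. Sketched in Python for illustration (on the RoboRIO the actual receive-and-timestamp would be done in LabVIEW against the FPGA timer, and the port number is arbitrary):

```python
# Receiving side of the vision link, sketched in Python.
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5800))                 # example port
while True:
    data, addr = sock.recvfrom(1024)
    arrival = time.monotonic()        # local clock only; no cross-device sync needed
    # ... parse the vision result out of `data` and tag it with `arrival` ...
```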

The two much bigger sources of latency are (a) image capture time and (b) image processing time.

(a) is inherent to most webcams; it can take tens to a couple hundred ms to expose the image, encode it, and transmit it over USB or Ethernet. In general, unless you are using an (expensive) computer vision camera with hardware triggering (or an Android phone :)), you have no way of knowing precisely when the image was captured. But, as Wesley says, the latency of this part is generally constant, and you can compensate for it by running some experiments to measure it.
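
One crude experiment (a hypothetical rig, not a calibrated procedure): wire an LED to a GPIO pin on the Pi, point the camera at it, flash it, and time how long until the captured frames actually brighten. Assuming OpenCV and RPi.GPIO, and an LED on BCM pin 18:

```python
# Rough capture-latency measurement: flash an LED and time until it shows up in a frame.
import time
import cv2
import RPi.GPIO as GPIO

LED_PIN = 18                          # hypothetical wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

cap = cv2.VideoCapture(0)
for _ in range(10):                   # let exposure settle, flush buffered frames
    cap.read()
ok, frame = cap.read()
baseline = frame.mean()               # average brightness with the LED off

GPIO.output(LED_PIN, GPIO.HIGH)
flash_time = time.monotonic()
for _ in range(300):                  # give up after a few hundred frames
    ok, frame = cap.read()
    if ok and frame.mean() > baseline * 1.5:   # crude "LED is now visible" test
        latency_ms = (time.monotonic() - flash_time) * 1000
        print("capture-to-read latency ~ %.0f ms" % latency_ms)
        break

GPIO.output(LED_PIN, GPIO.LOW)
GPIO.cleanup()
cap.release()
```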

(b) is trivially easy to compensate for, and can also account for tens to hundreds of ms depending on the sophistication of your processing. Timestamp the image when you first receive it, check the CPU time again when you are done processing, and take the difference of the two.
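
Concretely, on the Pi that’s just a couple of lines around your pipeline, using a monotonic clock so wall-clock jumps don’t matter (the helper name here is made up):

```python
# Sketch of measuring just the processing latency on the Pi.
import time

def run_pipeline(frame):
    # placeholder for the real vision processing (thresholding, contours, pose solve, ...)
    return {"target_angle": 0.0}

frame = ...                                # the image you just read from the camera
start = time.monotonic()                   # stamp as soon as the frame is in memory
result = run_pipeline(frame)
result["processing_latency"] = time.monotonic() - start
# Send processing_latency along with the result so the receiver can back-date
# its arrival timestamp by this amount (plus a fixed camera-latency estimate).
```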

How does one ‘compensate for’ latency such as this? Notionally, I can only think of using angular/linear velocities to predict actual position vs measured. Is there a more straightforward way?

254 actually had a good presentation on this, as well as vision in general.

It essentially hinges on keeping a log of robot poses (position + rotation, assuming no turret) and computing the difference between the robot’s pose at the time the picture was taken and its pose now. Then you can use some simple-ish trig to compute the corrected target rotation and power.
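
To give a concrete feel for the bookkeeping, here’s a minimal heading-only sketch of that idea in Python, with made-up names (this is not 254’s actual code, and it assumes the image capture timestamp and the history use the same clock):

```python
# Latency compensation via a short history of gyro headings.
import bisect
import collections
import time

class HeadingHistory:
    def __init__(self, maxlen=200):
        self.times = collections.deque(maxlen=maxlen)
        self.headings = collections.deque(maxlen=maxlen)

    def record(self, heading_deg, t=None):
        """Call every robot loop with the current gyro heading."""
        self.times.append(time.monotonic() if t is None else t)
        self.headings.append(heading_deg)

    def heading_at(self, t):
        """Heading closest to time t (a real implementation might interpolate)."""
        i = bisect.bisect_left(list(self.times), t)
        i = min(max(i, 0), len(self.headings) - 1)
        return self.headings[i]

def corrected_target_angle(history, measured_angle_deg, capture_time, current_heading_deg):
    """Angle to the target *now*, given the angle measured in an old frame."""
    heading_at_capture = history.heading_at(capture_time)
    turned_since = current_heading_deg - heading_at_capture
    return measured_angle_deg - turned_since
```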