We use OpenCV and Python on a Raspberry Pi to do vision processing, and the script itself works wonders. However, we transmit data over NetworkTables (using the wonderful pynetworktables library; go check it out!), which intrinsically adds some latency, and once you stack Ethernet lag on top of that, the total delay renders our autonomous simply useless. I’ve seen latencies over 300 ms, and at the speed a robot moves, after 300 ms we’re probably way past the goal.
Now, keep in mind, I’m no network engineer, but the way I solved this was to add an out-of-band signal (using an Arduino in my case) and trigger it from the Pi on startup. A loop runs on the roboRIO that resets an internal time offset every time the DIO line connected to the Arduino is driven high, essentially creating a synchronized timer on both processors. Once we have shared time, lag compensation is easy.
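To make that concrete, here’s a rough sketch of the roboRIO side, assuming the robot code is written with RobotPy (wpilib for Python); the DIO channel and class name are just illustrative, not from our actual code:

```python
# Rough sketch of the roboRIO side, assuming RobotPy (wpilib for Python).
# The DIO channel and class name are illustrative placeholders.
import wpilib

class SyncedClock:
    """Resets a time offset whenever the Arduino's sync line goes high."""

    def __init__(self, dio_channel: int = 0):
        self.sync_line = wpilib.DigitalInput(dio_channel)
        self.offset = 0.0
        self.last_state = False

    def update(self):
        """Call this every loop iteration (e.g. from robotPeriodic)."""
        state = self.sync_line.get()
        if state and not self.last_state:
            # Rising edge: the Pi just started its timer, so restart ours.
            self.offset = wpilib.Timer.getFPGATimestamp()
        self.last_state = state

    def now(self) -> float:
        """Seconds elapsed on the shared timer."""
        return wpilib.Timer.getFPGATimestamp() - self.offset
```

The Pi’s only job in this scheme is to drive its output line high once at startup; the Arduino relays that pulse to the roboRIO’s DIO.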
When sending data from the Pi, I also include the current Pi time and a rough estimate of the target’s velocity across the image (calculated from the target’s last position and timestamp) along with the coordinate data. When reading the data on the roboRIO, the Pi’s timestamp is subtracted from the synchronized timer, which gives us the total latency between the Pi and the RIO.
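On the Pi side, the publishing step might look something like this pynetworktables sketch; the table name, keys, and server address are placeholders, and pi_epoch stands in for whatever the Pi’s clock read when the sync pulse fired:

```python
# Pi side sketch: publish coordinates, a velocity estimate, and the
# shared-timer timestamp. Table/key names and the server address are
# placeholders, not our actual values.
import time
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")
table = NetworkTables.getTable("vision")

pi_epoch = time.monotonic()  # in practice, captured when the sync pulse fires

last_x = None
last_t = None

def publish_target(x_pixels: float):
    """Send target x, estimated x velocity, and the Pi's shared time."""
    global last_x, last_t
    now = time.monotonic() - pi_epoch  # seconds on the shared timer
    velocity = 0.0
    if last_x is not None and now > last_t:
        velocity = (x_pixels - last_x) / (now - last_t)  # pixels/second
    table.putNumber("target_x", x_pixels)
    table.putNumber("target_vx", velocity)
    table.putNumber("pi_time", now)
    last_x, last_t = x_pixels, now
```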
Finally, multiplying that latency by the velocity and adding the result to the coordinate data gives a pretty accurate estimate of the target’s actual current location, provided that our robot is rotating at a roughly constant speed.
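Putting it together on the roboRIO, the compensation itself is just one multiply and one add. This sketch reuses the SyncedClock and key names assumed above:

```python
# roboRIO side sketch: measure latency against the shared timer and
# project the target position forward. Reuses the assumed names above.
from networktables import NetworkTables

def compensated_target_x(clock: "SyncedClock") -> float:
    table = NetworkTables.getTable("vision")
    x = table.getNumber("target_x", 0.0)
    vx = table.getNumber("target_vx", 0.0)
    pi_time = table.getNumber("pi_time", 0.0)
    latency = clock.now() - pi_time  # total Pi-to-RIO delay, seconds
    return x + vx * latency  # where the target should be right now
```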
I know this may not seem like a lot, but it completely fixed all of our autonomous problems, and it was relatively easy for the huge gain. Hopefully, this will help a team out.