Running GRIP-Generated Code on the roboRIO

GRIP’s new code generation feature and WPI’s new vision-related features are very useful. However, having used GRIP on the driver station with about a quarter second of latency, I’m wondering: is the latency better or worse if you run the generated code on the roboRIO?

It depends on the image size, the driver station’s computing power, and a little bit of luck.

For reference, the GRIP pipeline bundled with this year’s vision examples generates a Java class that processes a 640x480 image in about 80 ms on the roboRIO. I can run the same GRIP file in about 12 ms on my laptop (i7-6700HQ), with an overall latency of probably around 150 ms on average.
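If you want to measure the per-frame processing time for your own pipeline, you can simply time the call to the generated class’s `process()` method. A minimal, self-contained sketch (the `GripPipeline`/`process` names in the comment are GRIP’s usual generated names, and the dummy workload here just stands in for real image processing):

```java
public class PipelineTiming {
    // Time a single run of some work (e.g. your pipeline's process() call)
    // and return the elapsed wall-clock time in milliseconds.
    public static long timeMs(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // In real robot code this would look something like:
        //   GripPipeline pipeline = new GripPipeline(); // GRIP-generated class
        //   long ms = timeMs(() -> pipeline.process(frame));
        // Here we time a dummy workload so the sketch compiles on its own.
        long ms = timeMs(() -> {
            double x = 0;
            for (int i = 0; i < 1_000_000; i++) x += Math.sqrt(i);
        });
        System.out.println("process() took ~" + ms + " ms");
    }
}
```

Averaging this over many frames gives a much better estimate than a single run, since JIT warm-up makes the first few iterations slower.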

So what you’re saying in the examples above is that, all things included, running on the roboRIO produces 80 ms of latency vs. 150 ms on the powerful laptop?

In general, yes. I should note that running vision code on the roboRIO is resource-intensive, since it essentially pegs a CPU core, which precludes running another resource-intensive thread alongside it.
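One common way to keep the pegged vision loop from starving the main robot loop is to run it on its own low-priority daemon thread (WPILib’s `VisionThread` does something similar). A hedged plain-Java sketch of that idea, with the pipeline loop left as a placeholder:

```java
public class VisionWorker {
    // Start the vision loop on a background thread that yields to the
    // main robot loop. The Runnable would grab a camera frame and run
    // the GRIP-generated pipeline's process() on it, in a loop.
    public static Thread startVisionThread(Runnable pipelineLoop) {
        Thread t = new Thread(pipelineLoop, "vision");
        t.setDaemon(true);                  // don't block JVM shutdown
        t.setPriority(Thread.MIN_PRIORITY); // yield CPU to the robot loop
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = startVisionThread(() -> {
            // placeholder for: while (true) { pipeline.process(frame); ... }
        });
        t.join();
        System.out.println("vision thread finished");
    }
}
```

Lowering the thread priority only helps the scheduler arbitrate contention; it does not reduce the total CPU the pipeline consumes, so a coprocessor is still the cleaner fix.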

The latency estimate I gave was very pessimistic (it assumed a 100 ms response time for NetworkTables and a slow image stream). Again, the actual processing only takes about 12 ms for me; the rest of the latency comes from the network.

The best solution is to use a coprocessor for vision processing, if you can.

Alright, sounds good! Thanks so much for the help!

You may also want to consider running the camera at a lower resolution or limiting where you do the processing. 320x240 has 1/4 as many pixels, and if that is good enough to identify the elements in your image, …

Greg McKaskle
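To make the pixel-count comparison above concrete: a 320x240 frame has exactly one quarter the pixels of a 640x480 frame, so every per-pixel stage of the pipeline has roughly a quarter of the work to do. A quick check:

```java
public class ResolutionMath {
    public static void main(String[] args) {
        int full = 640 * 480;    // 307,200 pixels
        int quarter = 320 * 240; //  76,800 pixels
        System.out.println(full / quarter); // prints 4
    }
}
```

In practice the speedup won’t be exactly 4x (some pipeline stages don’t scale with pixel count), but it is usually close for filter-heavy GRIP pipelines.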

Thanks for the help, everyone! I was able to get this working last night, but there is still one issue, mentioned in this thread. If you have encountered it or know how to solve it, please take a look.