12-22-2015, 09:15 AM
Greg McKaskle
FRC #2468 (Team NI & Appreciate)
 
Re: More Vision questions

I'd highly recommend setting up a test for latency. I've done it using an LED in a known position. The roboRIO toggles the LED, and you measure how long it takes before the camera and vision system see the LED change and the roboRIO is notified.
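A minimal sketch of that measurement loop, in Python for illustration (the hardware `set_led` and `vision_sees_led` hooks are hypothetical stand-ins; here they are faked with a simulated ~50 ms pipeline delay so the sketch runs without a robot):

```python
import time

def measure_led_latency(set_led, vision_sees_led, timeout=2.0):
    """Turn the LED on, then poll the vision result until it reports the
    change; return the elapsed time in seconds (or None on timeout)."""
    set_led(True)
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        if vision_sees_led():
            return time.monotonic() - start
        time.sleep(0.001)  # poll every 1 ms
    return None

# Stand-in "vision system" with ~50 ms of simulated pipeline delay.
_led_changed_at = None

def fake_set_led(on):
    global _led_changed_at
    _led_changed_at = time.monotonic()

def fake_vision_sees_led():
    return _led_changed_at is not None and time.monotonic() - _led_changed_at > 0.05

latency = measure_led_latency(fake_set_led, fake_vision_sees_led)
print(f"measured latency: {latency * 1000:.1f} ms")
```

On a real robot the same loop structure applies; only the two hooks change (a DIO write and a read of the latest vision report).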

A simpler test of this is to make a counter on the computer screen. In LV, just wire the loop counter (i) to an indicator on the panel. Delay the loop by 1 ms. Then point the camera at the screen. Place the source image and the display of the captured image side-by-side and take a picture of it -- cell-phone camera or screenshot. Subtract the count in the display from the count in the source for an idea of latency in the capture and transmission portions of the system.
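The arithmetic is trivial, but a quick worked example makes it concrete (the two counter readings below are hypothetical values read off one such photo):

```python
# Hypothetical readings from one photo showing both displays:
source_count = 1532      # counter on the live screen
captured_count = 1448    # counter visible inside the camera image
tick_ms = 1              # the loop increments once per millisecond

latency_ms = (source_count - captured_count) * tick_ms
print(latency_ms)  # capture + transmission latency in ms
```

Because the photo freezes both displays at the same instant, the difference is exactly the capture-and-transmission delay at that moment.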

The reason for this test is to learn what affects the latency and how to improve it. Camera settings such as exposure and frame rate directly determine how long the camera takes to capture an image. Compression, image size, and transmission path determine how long it takes to get the image to the machine that will process it. Decompression and your choice of processing algorithms determine how long it takes to make sense of the image. Communication mechanisms back to the robot determine how long it takes for the robot to learn of the new sensor value.
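Those stages add up, so it can help to write them out as a budget. The numbers below are rough, hypothetical values for a 320x240 stream at 30 fps -- the tests above are how you get real ones for your setup:

```python
# Rough, hypothetical end-to-end latency budget in milliseconds.
budget = {
    "exposure":        10,  # shutter time; grows in dim light
    "frame_wait":      33,  # worst case: just missed a frame at 30 fps
    "compress_send":   15,  # JPEG encode + network transfer
    "decode_process":  20,  # decompression + vision algorithm
    "report_to_robot":  5,  # sending the result back to the roboRIO
}
total = sum(budget.values())
print(f"end-to-end: {total} ms")
```

Whichever line dominates your budget is the one worth attacking first -- e.g. a huge "compress_send" suggests a smaller image or less compression work, not a faster algorithm.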

An Atom-based Classmate is really a pretty fast CPU compared to a cRIO or roboRIO. Plus, Intel has historically done quite a lot to make image processing libraries efficient on their architecture.

Any computer you bring to the field can be bogged down by poor selection of camera settings and processing algorithm. Similarly, if you identify what you need to measure and what tolerances you need, you can then configure the camera and select the image processing techniques in order to minimize processor load and latency.

Also, you may find that the bulk of the latency is in the communications and not in the processing. The LV version of network tables has always allowed you to control the update rate of the server and implemented a flush function so that you could shorten the latency for important data updates. Additionally, the LV implementation always turned the Nagle algorithm off for its streams. I believe you will see much of that available for the other language implementations and you may want to experiment with using them to control the latency. Most importantly, think of the camera as a sensor and not a magical brain-eye equivalent for the robot.
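For what turning Nagle off actually looks like, here is a minimal sketch using a plain Python TCP socket (NetworkTables does the equivalent internally on its own streams; this is just the underlying socket option, not the NetworkTables API):

```python
import socket

# Disabling Nagle's algorithm: small writes are sent immediately
# instead of being buffered waiting to be coalesced with later data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
print(nodelay != 0)  # True: small packets go out right away
```

Nagle's algorithm trades latency for fewer packets, which is exactly the wrong trade for small, time-critical sensor updates like vision results.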

Greg McKaskle