#1
Re: More Vision questions
Even if you do want to process every frame, the idea is that you can deal with latency by saving a snapshot of the relevant robot state at the time the image is captured, doing your processing on the image to obtain some result, and then using the saved state, the result, and the current robot state to compute a corrected result that is synced up with the present.
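A rough sketch of that bookkeeping, assuming the relevant state is a gyro heading and the vision result is a bearing to the target (the class and method names here are illustrative, not any particular library's API):

```python
import time

class LatencyCompensator:
    """Pairs each captured frame with the robot state at capture time."""

    def __init__(self):
        self.snapshots = {}  # frame_id -> (capture_time, heading_at_capture)

    def on_frame_captured(self, frame_id, current_heading):
        # Save the relevant robot state the moment the image is grabbed.
        self.snapshots[frame_id] = (time.monotonic(), current_heading)

    def on_result_ready(self, frame_id, bearing_to_target, current_heading):
        # The image is old by the time processing finishes; correct the
        # result by how far the robot has turned since the frame was taken.
        _, heading_at_capture = self.snapshots.pop(frame_id)
        heading_change = current_heading - heading_at_capture
        return bearing_to_target - heading_change
```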
#2
Re: More Vision questions
I'd highly recommend setting up a test for latency. I've done it using an LED in a known position: the roboRIO toggles the LED, and you measure how long it takes before the camera and vision system see the change and the roboRIO is notified.
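For the coprocessor side of that LED test, a sketch of the detector might look like this in Python with OpenCV (the roboRIO address, the LED's pixel region, and the brightness threshold are all assumptions; the roboRIO would compare the arrival time of the notification against the time it toggled its digital output):

```python
import socket
import cv2

ROBORIO_ADDR = ("10.TE.AM.2", 5800)           # placeholder robot address/port
LED_ROI = (slice(230, 250), slice(310, 330))  # assumed pixel region of the LED
THRESHOLD = 128                                # assumed brightness threshold

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)

led_was_on = False
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    led_is_on = gray[LED_ROI].mean() > THRESHOLD
    if led_is_on != led_was_on:
        # Notify the roboRIO the instant the change is observed; the roboRIO
        # subtracts the time it toggled the DIO to get end-to-end latency.
        sock.sendto(b"led-changed", ROBORIO_ADDR)
        led_was_on = led_is_on
```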
A simpler test of this is to make a counter on the computer screen. In LV, just wire the loop counter (i) to an indicator on the panel and delay the loop by 1 ms. Then point the camera at the screen, place the source image and the display of the captured image side by side, and take a picture of it -- cell-phone camera or screenshot. Subtract the time in the display from the time in the source for an idea of the latency in the capture and transmission portions of the system.

The reason for this test is to learn what affects the latency and how to improve it. Camera settings such as exposure and frame rate directly determine how long the camera takes to capture an image. Compression, image size, and transmission path determine how long it takes to get the image to the machine that will process it. Decompression and your choice of processing algorithms determine how long it takes to make sense of the image. Communication mechanisms back to the robot determine how long it takes for the robot to learn of the new sensor value.

An Atom-based Classmate is really a pretty fast CPU compared to a cRIO or roboRIO. Plus, Intel has historically done quite a lot to help make image processing libraries efficient on their architecture. Any computer you bring to the field can be bogged down by poor selection of camera settings and processing algorithm. Similarly, if you identify what you need to measure and what tolerances you need, you can then configure the camera and select the image processing techniques to minimize processor load and latency.

Also, you may find that the bulk of the latency is in the communications and not in the processing. The LV version of NetworkTables has always allowed you to control the update rate of the server and has implemented a flush function so that you can shorten the latency for important data updates. Additionally, the LV implementation has always turned the Nagle algorithm off for its streams. I believe you will see much of that available in the other language implementations, and you may want to experiment with using them to control the latency.

Most importantly, think of the camera as a sensor and not a magical brain-eye equivalent for the robot.

Greg McKaskle
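The on-screen counter test doesn't need LabVIEW specifically; any tight loop that repaints a timestamp works. A minimal sketch in Python with OpenCV (the window size, font, and Esc-to-quit key are arbitrary choices):

```python
import time
import cv2
import numpy as np

start = time.monotonic()
while True:
    canvas = np.zeros((120, 480, 3), dtype=np.uint8)
    elapsed_ms = int((time.monotonic() - start) * 1000)
    cv2.putText(canvas, str(elapsed_ms), (10, 80),
                cv2.FONT_HERSHEY_SIMPLEX, 2.0, (255, 255, 255), 3)
    cv2.imshow("latency counter", canvas)
    # Point the camera at this window, show the captured stream beside it,
    # and photograph both at once; the difference between the two readings
    # approximates the capture-plus-transmission latency.
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```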
#3
Re: More Vision questions
Thanks for all of the replies, everyone!
You were all very helpful. I think we're going to wait until kickoff to see exactly what type of vision processing is required (more simple or more complex). However, I'm really liking RoboRealm and the LabVIEW vision solutions.

As for latency (again, depending on the challenge), we might just have a button on the joystick that will "auto-aim" (process a single frame) rather than constantly processing.

I would still really like to hear a bit about OpenCV, but I'm getting the feeling that it will be a bit more complicated. Has anyone used a Raspberry Pi and the USB interface to transfer the data to the roboRIO (preferably with LabVIEW)? I'm not sure how you would read from the roboRIO's USB port.
#4
Re: More Vision questions
Quote:
There are quite a few papers talking about similar solutions:
http://www.chiefdelphi.com/media/search/results/2036425
http://www.chiefdelphi.com/media/search/results/2036426

I would not use USB to transfer data. You're going to end up having to emulate another device to get that to work correctly, though it's an interesting thought. USB is really meant for peripherals, not co-processors. Ethernet or one of the myriad of serial interfaces would be better.
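If you go the serial route from a coprocessor, the sending side can be only a few lines. Here's a sketch using pyserial (the port name, baud rate, and comma-separated message format are assumptions; the roboRIO end would have to parse the same framing from its serial port):

```python
import serial  # pyserial

# Port name depends on the adapter; /dev/ttyUSB0 is typical for a USB-serial cable.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send_target(angle_deg, distance_in):
    # One newline-terminated line per vision result keeps parsing trivial.
    port.write(("%.2f,%.2f\n" % (angle_deg, distance_in)).encode("ascii"))

send_target(12.5, 84.0)
```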
#5
Re: More Vision questions
Quote:
I thought I remembered someone talking about using USB to transfer data, but maybe they were talking about the other serial interfaces. With Ethernet, you would use UDP or something similar, I'm guessing?
#6
Re: More Vision questions
Quote:
UDP or TCP, depending on your tolerance for latency, the importance of the data actually arriving, etc.
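For the UDP case, both ends are only a few lines of plain sockets. A sketch, with the robot IP, port number, and message format as placeholders:

```python
import socket

ROBOT_ADDR = ("10.TE.AM.2", 5800)  # placeholder robot IP and port

# Vision computer: fire-and-forget; a lost packet is simply replaced by the
# next frame's result.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"angle=12.5,distance=84.0", ROBOT_ADDR)

# Robot side: bind to the same port and read whatever arrives.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("", 5800))
data, _ = listener.recvfrom(1024)
print(data.decode("ascii"))
```

UDP fits here because a stale vision result is worthless anyway; if delivery matters more than freshness, swap in a TCP socket instead.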