Thread: More Vision questions
#1 | 21-12-2015, 12:46
Jared Russell
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Team Role: Engineer
Re: More Vision questions

Quote:
Originally Posted by jojoguy10
These are great replies! Thanks!

For those of you that run vision off of the driver station:
1. The laptop needs to have enough "horsepower", correct? It can't just be a simple "classmate-like" laptop?
2. Was there a lot of lag introduced between what the robot saw and reacting to it (since the image had to be transferred over the network, processed, then sent back)?
1. It totally depends on what you are doing. The more complicated the vision code, the more you stand to benefit from a more powerful laptop (CPU speed is the operative specification here).

2. There was typically about a 100-300ms lag between the start of image capture and receipt of the processed result on the robot when I last did this in 2013. Some of this is due to transmission time in both directions, some is due to processing time on the laptop, and some of it is because image capture itself is not instantaneous (an issue that affects all processing methods).

That amount of lag can either be disastrous or a non-issue depending on how you are using vision. As a mental exercise, compare the following two approaches for turning your robot to face a vision target:

Approach 1:
Code:
while (true) {
    capture camera frame
    transmit frame to laptop
    detect target in image
    compute a drive turn command to place the target in the center of the image
    send command back to robot
    execute command
}
Approach 2:
Code:
while (true) {
    capture camera frame
    record robot heading from gyro at the moment the frame was captured
    transmit frame to laptop
    detect target in image
    send the heading offset to the target back to the robot
    add the recorded gyro heading (from capture time) to the offset returned by the vision code
    compute a drive turn command to turn to the new target heading
    execute command
}
Which approach would you expect to be more robust to variations in latency?
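To make the difference concrete, here is a minimal Java sketch of Approach 2. Only the gyro snapshot is the point; Frame, captureFrame(), requestTargetOffset(), and turnToHeading() are hypothetical placeholders for your camera, network, and drive code.

Code:
// Sketch of Approach 2: anchor the vision result to the heading recorded
// at capture time, so the laptop round trip cannot corrupt the goal.
void aimLoop() {
    while (true) {
        Frame frame = captureFrame();              // hypothetical camera wrapper
        double headingAtCapture = gyro.getAngle(); // snapshot the gyro immediately

        // Blocks for the full round trip: transmit, detect, reply.
        double offsetDegrees = requestTargetOffset(frame);

        // The robot may have turned while we waited, but this sum is still
        // the correct absolute heading of the target.
        turnToHeading(headingAtCapture + offsetDegrees);
    }
}
In Approach 1, that same latency shows up as stale error fed directly into the drive command, which is what makes it overshoot or oscillate as the lag grows.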
#2 | 21-12-2015, 15:39
jojoguy10 (Joe Kelly)
FRC #2990 (Hotwire Robotics) | Team Role: Alumni
Re: More Vision questions

Quote:
Originally Posted by Jared Russell
Which approach would you expect to be more robust to variations in latency?
That makes sense. I guess we don't need to process EVERY frame. We could just have a button that has the robot process the current frame and send the robot the correct commands to line up with the goal.

Thanks!
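(Something like this is what I have in mind: a rough sketch, with Joystick.getRawButton() from WPILib and processCurrentFrame() as a hypothetical stand-in for the capture-process-reply round trip.)

Code:
// Sketch: run vision once on a button press instead of continuously.
boolean pressed = stick.getRawButton(1);
if (pressed && !wasPressed) {                     // act on the rising edge only
    double offsetDegrees = processCurrentFrame(); // hypothetical one-shot vision call
    turnToHeading(gyro.getAngle() + offsetDegrees);
}
wasPressed = pressed;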
#3 | 21-12-2015, 16:47
Jared Russell
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Team Role: Engineer
Re: More Vision questions

Quote:
Originally Posted by jojoguy10
That makes sense. I guess we don't need to process EVERY frame. We could just have a button that has the robot process the current frame and send the robot the correct commands to line up with the goal.

Thanks!
Even if you do want to process every frame, the idea is that you can deal with latency by saving a snapshot of the relevant robot state at the time the image is captured, doing your processing on the image to obtain some result, and then using the saved state, the result, and the current robot state to compute a corrected result that is synced up with the present.
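One way to implement that bookkeeping (a sketch, not any particular library's API) is a short history of timestamped headings; when a result arrives tagged with its frame's capture time, look up the heading from that moment:

Code:
import java.util.TreeMap;

// Sketch: timestamped heading buffer for latency compensation.
// Record the gyro every loop; query it when a vision result arrives.
class HeadingHistory {
    private final TreeMap<Double, Double> history = new TreeMap<>(); // time (s) -> heading (deg)

    void record(double timestamp, double heading) {
        history.put(timestamp, heading);
        history.headMap(timestamp - 1.0).clear(); // keep only the last second
    }

    // Heading at (or just before) the frame's capture time.
    // Assumes record() has been called at least once beforehand.
    double headingAt(double captureTimestamp) {
        return history.floorEntry(captureTimestamp).getValue();
    }
}
When the laptop replies with (captureTimestamp, offset), the corrected setpoint is headingAt(captureTimestamp) + offset, no matter how long the processing took.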
#4 | 22-12-2015, 09:15
Greg McKaskle
FRC #2468 (Team NI & Appreciate)
Re: More Vision questions

I'd highly recommend setting up a test for latency. I've done it using an LED in a known position: the roboRIO toggles the LED, and you measure how long it takes before the camera and vision system see the change and the roboRIO is notified.

A simpler test is to put a counter on the computer screen. In LV, just wire the loop counter (i) to an indicator on the panel and delay the loop by 1 ms. Then point the camera at the screen, place the source counter and the display of the captured image side by side, and take a picture of both -- cell-phone camera or screenshot. Subtract the count in the captured display from the count in the source; at 1 ms per iteration, the difference is the latency of the capture and transmission portions of the system, in milliseconds.
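On the roboRIO side, the LED test can be as simple as the sketch below. DigitalOutput and Timer are real WPILib classes; visionReportsLedOn() is a placeholder for however your pipeline reports the detection back to the robot.

Code:
import edu.wpi.first.wpilibj.DigitalOutput;
import edu.wpi.first.wpilibj.Timer;

// Sketch: toggle an LED on a DIO pin, then time how long until the
// vision result reflects the change.
double measureVisionLatency(DigitalOutput led) {
    led.set(true);
    double start = Timer.getFPGATimestamp();
    while (!visionReportsLedOn()) {   // placeholder: poll your vision result
        Timer.delay(0.001);
    }
    return Timer.getFPGATimestamp() - start; // seconds of round-trip latency
}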

The reason for this test is to learn what affects the latency and how to improve it. Camera settings such as exposure and frame rate directly determine how long the camera takes to capture an image. Compression, image size, and transmission path determine how long to get the image to the machine which will process it. Decompression and your choice of processing algorithms will determine how long it takes to make sense of the image. Communication mechanisms back to the robot determine how long it takes for the robot to learn of the new sensor value.

An Atom-based Classmate actually has a pretty fast CPU compared to a cRIO or roboRIO. Plus, Intel has historically done quite a lot to make image processing libraries efficient on its architectures.

Any computer you bring to the field can be bogged down by a poor choice of camera settings and processing algorithm. Conversely, if you identify what you need to measure and to what tolerances, you can configure the camera and select image processing techniques that minimize processor load and latency.

Also, you may find that the bulk of the latency is in the communications and not in the processing. The LV version of NetworkTables has always let you control the update rate of the server, and it implements a flush function so that you can shorten the latency of important data updates. Additionally, the LV implementation has always turned the Nagle algorithm off for its streams. I believe you will see much of that available in the other language implementations, and you may want to experiment with using it to control latency. Most importantly, think of the camera as a sensor, not as a magical brain-eye equivalent for the robot.
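In Java, those knobs look roughly like the sketch below, assuming your NetworkTables version exposes them (as noted above, the flush function may not exist in every implementation, and the key names here are made up):

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Sketch: shorten NetworkTables latency for a time-critical vision result.
NetworkTable.setUpdateRate(0.010);                // push updates every 10 ms instead of the default
NetworkTable table = NetworkTable.getTable("vision");
table.putNumber("captureTimestamp", captureTime); // illustrative keys
table.putNumber("targetOffsetDegrees", offset);
NetworkTable.flush();                             // send now; don't wait for the next periodic update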

Greg McKaskle
#5 | 22-12-2015, 09:23
jojoguy10 (Joe Kelly)
FRC #2990 (Hotwire Robotics) | Team Role: Alumni
Re: More Vision questions

Thanks for all of the replies, everyone!

You were all very helpful. I think we're going to wait until kickoff to see exactly what type of vision processing is required (simpler or more complex).

However, I'm really liking the RoboRealm and LabVIEW vision solutions. As for latency (again, depending on the challenge), we might just have a button on the joystick that will "auto-aim" (process a single frame) rather than constantly processing.

I would still really like to hear a bit about OpenCV, but I'm getting the feeling that it will be a bit more complicated. Has anyone used a Raspberry Pi and the USB interface to transfer the data to the roboRIO (preferably with LabVIEW)? I'm not sure how you would read from the roboRIO's USB port.
#6 | 22-12-2015, 09:41
marshall
FRC #0900 (The Zebracorns) | Team Role: Mentor
Re: More Vision questions

Quote:
Originally Posted by jojoguy10
Thanks for all of the replies, everyone!

You were all very helpful. I think we're going to wait until kickoff to see exactly what type of vision processing is required (simpler or more complex).

However, I'm really liking the RoboRealm and LabVIEW vision solutions. As for latency (again, depending on the challenge), we might just have a button on the joystick that will "auto-aim" (process a single frame) rather than constantly processing.

I would still really like to hear a bit about OpenCV, but I'm getting the feeling that it will be a bit more complicated. Has anyone used a Raspberry Pi and the USB interface to transfer the data to the roboRIO (preferably with LabVIEW)? I'm not sure how you would read from the roboRIO's USB port.
It's not a Raspberry Pi but close enough: http://www.chiefdelphi.com/media/papers/3147

There are quite a few papers describing similar solutions:
http://www.chiefdelphi.com/media/search/results/2036425
http://www.chiefdelphi.com/media/search/results/2036426

I would not use USB to transfer data. You're going to end up having to emulate another device to get that to work correctly, though it's an interesting thought. USB is really meant for peripherals, not co-processors. Ethernet or one of the myriad serial interfaces would be better.
__________________
"La mejor salsa del mundo es la hambre" - Miguel de Cervantes
"The future is unwritten" - Joe Strummer
"Simplify, then add lightness" - Colin Chapman
#7 | 22-12-2015, 09:44
jojoguy10 (Joe Kelly)
FRC #2990 (Hotwire Robotics) | Team Role: Alumni
Re: More Vision questions

Quote:
Originally Posted by marshall
It's not a Raspberry Pi but close enough: http://www.chiefdelphi.com/media/papers/3147

There are quite a few papers describing similar solutions:
http://www.chiefdelphi.com/media/search/results/2036425
http://www.chiefdelphi.com/media/search/results/2036426

I would not use USB to transfer data. You're going to end up having to emulate another device to get that to work correctly, though it's an interesting thought. USB is really meant for peripherals, not co-processors. Ethernet or one of the myriad serial interfaces would be better.
Thanks! The last two links don't show anything. Are they searches of keywords?

I thought I remembered someone talking about using USB to transfer data, but maybe they were talking about other serial interfaces. With Ethernet, you would use UDP or something similar, I'm guessing?
#8 | 22-12-2015, 09:46
marshall
FRC #0900 (The Zebracorns) | Team Role: Mentor
Re: More Vision questions

Quote:
Originally Posted by jojoguy10
Thanks! The last two links don't show anything. Are they searches of keywords?

I thought I remembered someone talking about using USB to transfer data, but maybe they were talking about other serial interfaces. With Ethernet, you would use UDP or something similar, I'm guessing?
They were supposed to be white paper searches for "vision" and "opencv". There are a couple of good examples out there.

UDP or TCP, depending on your tolerance for latency, how important it is that every packet arrives, etc.
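For a concrete picture, a bare-bones UDP send from the coprocessor in Java might look like this sketch (the address follows the conventional 10.TE.AM.2 roboRIO scheme, here for team 2990; the port and CSV payload are illustrative choices, not a standard):

Code:
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch: fire-and-forget a vision result at the roboRIO over UDP.
// A lost packet is simply superseded by the next frame's result.
void sendVisionResult(DatagramSocket socket, double captureTime, double offsetDegrees)
        throws IOException {
    byte[] payload = (captureTime + "," + offsetDegrees).getBytes(StandardCharsets.US_ASCII);
    socket.send(new DatagramPacket(payload, payload.length,
            InetAddress.getByName("10.29.90.2"), 5800)); // illustrative address/port
}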