#9 | 21-12-2015, 12:46
Jared Russell
Taking a year (mostly) off
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Team Role: Engineer
 
Join Date: Nov 2002
Rookie Year: 2001
Location: San Francisco, CA
Posts: 3,077
Re: More Vision questions

Quote: Originally Posted by jojoguy10
These are great replies! Thanks!

For those of you that run vision off of the driver station:
1. The laptop needs to have enough "horsepower", correct? It can't just be a simple "classmate-like" laptop?
2. Was there a lot of lag introduced between what the robot saw and reacting to it (since the image had to be transferred over the network, processed, then sent back)?
1. It totally depends on what you are doing. The more complicated the vision code, the more you stand to benefit from a more powerful laptop (CPU speed is the operative specification here).

2. There was typically 100-300 ms of lag between the start of image capture and receipt of the processed result on the robot when I last did this in 2013. Some of this is due to transmission time in both directions, some is due to processing time on the laptop, and some is because image capture itself is not instantaneous (an issue that affects all processing methods).

That amount of lag can either be disastrous or a non-issue depending on how you are using vision. As a mental exercise, compare the following two approaches for turning your robot to face a vision target:

Approach 1:
Code:
while (true) {
    capture camera frame
    transmit frame to laptop
    detect target in image
    compute a drive turn command to place the target in the center of the image
    send command back to robot
    execute command
}
Approach 2:
Code:
while (true) {
    capture camera frame
    record robot heading from gyro at the moment the frame was captured
    transmit frame to laptop
    detect target in image
    send the heading angle to the target back to the robot
    add recorded gyro heading at time of capture to the heading angle from vision
    compute a drive turn command to turn to the new target angle
    execute command
}
Which approach would you expect to be more robust to variations in latency?
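To make the difference concrete, here is a toy simulation (Python, with made-up numbers and function names, not real robot code): the robot keeps turning while the frame is in the pipeline. Approach 1 applies the stale centering error on top of wherever the robot is by the time the result arrives; Approach 2 latches the gyro heading at capture time and commands an absolute target angle, so latency drops out entirely.

```python
# Toy latency simulation. The target sits at an absolute heading of 30 degrees,
# and the robot is turning at a constant rate while the frame is processed.
# All names and numbers are illustrative, not any real FRC API.

TARGET_HEADING = 30.0   # degrees, absolute heading of the vision target
TURN_RATE = 45.0        # degrees/second the robot turns during processing

def approach_1(heading_at_capture, latency):
    """Apply the raw centering error from the (now stale) image to the
    robot's *current* heading when the result arrives."""
    error_in_image = TARGET_HEADING - heading_at_capture  # what the camera saw
    heading_now = heading_at_capture + TURN_RATE * latency  # robot kept moving
    return heading_now + error_in_image

def approach_2(heading_at_capture, latency):
    """Latch the gyro at capture time and command an absolute goal heading;
    the latency argument is unused because the result does not depend on it."""
    error_in_image = TARGET_HEADING - heading_at_capture
    return heading_at_capture + error_in_image  # absolute target angle

h0 = 10.0  # gyro heading at the moment the frame was captured
for latency in (0.0, 0.1, 0.3):
    print(f"latency={latency:.1f}s  "
          f"approach 1 commands {approach_1(h0, latency):.1f} deg, "
          f"approach 2 commands {approach_2(h0, latency):.1f} deg")
```

With zero latency both approaches agree; as latency grows, Approach 1's commanded heading drifts away from the true target while Approach 2's stays fixed at the target's absolute heading.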