I’m currently using a tan table I compiled from a C program I wrote and stuck into my robot code. I take the angle between the camera and the light and use my lookup array to find the tan of that angle. Then my code subtracts the height of the camera from the height of the light and divides that difference by the tan value. Voila, distance from the rack! I was just wondering how everyone else finds the distance. Maybe something more creative?
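For what it’s worth, here’s a minimal sketch of that scheme in C. The scaled-integer tan table, the 30–60 degree range, and the height numbers are all my own assumptions for illustration, not the original table:

```c
#include <assert.h>

/* Hypothetical tan lookup: tan(angle in degrees) * 1000, rounded,
   covering 30..60 degrees in 5-degree steps. */
static const int tan_x1000[] = {
    577,  /* 30 deg */
    700,  /* 35 deg */
    839,  /* 40 deg */
    1000, /* 45 deg */
    1191, /* 50 deg */
    1428, /* 55 deg */
    1732  /* 60 deg */
};

/* Distance in inches, from tilt angle (degrees, 30..60 in 5-deg steps)
   and the light-height minus camera-height difference (inches). */
int distance_from_tilt(int angle_deg, int height_diff_in)
{
    int idx = (angle_deg - 30) / 5;  /* table starts at 30 degrees */
    return (height_diff_in * 1000) / tan_x1000[idx];
}
```

At a 45-degree tilt the distance comes out equal to the height difference, which makes a convenient bench check.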

I don’t find the distance, I just use a PD loop on the tilt angle to home in on the correct distance.

if (TILT_SERVO > 20)
{
    // drive towards the light
}
else
{
    // score
}

K.I.S.S.

The CMU camera reports a blob size.

If the object is far away, the blob size value will be small; if the object is close to you, it will be large. Here the blob is the green light.

Since the green light is a fixed size, and your camera’s FOV (field of view) is fixed, you can mark out calibration points on the floor to work out a size-to-distance relationship.

For example, set up your camera and light and look at the size number the camera reports.

If you’re ten feet away, record the size of the blob. Mark out set distances on the floor with masking tape: 0, 1, 2, 3, 5, 10, 20 feet. Move the camera to each distance and record the camera values.

You should get a fairly linear relationship in distance to size of blob the camera is reporting back to the RC.

Once you have this distance calibrated, you now can convert these values to a distance measurement.

For example, maybe 100 “blob size” units equal 5 feet. You should also be able to calculate your camera’s resolution value:

1 pixel = so many inches… This will be your robot’s accuracy.
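A sketch of turning those taped-out calibration points into a distance, by linear interpolation between the two bracketing measurements. The calibration pairs here are invented for illustration; you would substitute the values you actually recorded:

```c
#include <assert.h>

/* Hypothetical calibration: blob size recorded at marked distances (feet).
   Sizes fall as the camera moves away from the light. */
static const int cal_size[] = { 200, 100, 50, 20 };
static const int cal_feet[] = {   1,   5, 10, 20 };
#define NCAL 4

/* Interpolate linearly between the two bracketing calibration points,
   clamping outside the measured range. */
int feet_from_blob(int size)
{
    int i;
    if (size >= cal_size[0])        return cal_feet[0];
    if (size <= cal_size[NCAL - 1]) return cal_feet[NCAL - 1];
    for (i = 1; i < NCAL; i++) {
        if (size >= cal_size[i]) {
            int ds = cal_size[i - 1] - cal_size[i];
            int df = cal_feet[i] - cal_feet[i - 1];
            return cal_feet[i] - df * (size - cal_size[i]) / ds;
        }
    }
    return cal_feet[NCAL - 1];
}
```

Integer math keeps it friendly to the RC; the truncation error is well under the one-pixel accuracy limit discussed above.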

Word of caution. Calibration with this procedure will change if you:

- reteach the camera to a different color or RGB signature.
- refocus the camera lens by twisting it.

If you do either of the above, you must repeat the calibration procedure to get back to the highest accuracy.

if (tilt_servo < 220)
    // drive
else
    // at rack

We have a short lookup table that turns camera tilt into desired travel distance. If the tilt is too close to horizontal for the range of the table input, we know we’re “very far” and pretend the destination is six feet in front of the robot. If the tilt is too close to vertical, we know we’re “too near” and pretend the destination is six inches *behind* the robot.

The table was filled in empirically, by placing the robot exactly where we wanted and putting zero in the table for that tilt angle. Then we moved it back until the tilt angle changed and put the measured distance in the table. Repeat as desired, with some interpolation and lazy entries at the extremes. There’s actually a range of about five tilt angles that we call “close enough” to score; they all have zeros in their associated table entries.
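That scheme might look something like this in C. Every number here is made up for illustration: 140–160 stands in for the real tilt range, and the last five entries are the “close enough” zeros:

```c
#include <assert.h>

/* Hypothetical empirical table: desired travel (inches) indexed by tilt
   count, filled in by placing the robot at known spots. */
#define TILT_MIN 140
#define TILT_MAX 160
static const int travel_in[TILT_MAX - TILT_MIN + 1] = {
    72, 60, 50, 42, 36, 30, 25, 20, 16, 12,
     9,  6,  4,  2,  1,  1,  0,  0,  0,  0, 0
};

int travel_for_tilt(int tilt)
{
    if (tilt < TILT_MIN) return 72;  /* near horizontal: "very far", aim 6 ft ahead   */
    if (tilt > TILT_MAX) return -6;  /* near vertical: "too near", aim 6 in *behind* */
    return travel_in[tilt - TILT_MIN];
}
```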

This is how 1281 did it last year. It was accurate to about ±5cm for us. We calibrated just like you did. We got blob sizes at various known distances, then figured out a matching function.

Nitpick: The blob size decreases proportional to an inverse quadratic, not linearly. Although I suppose for a range of distances it can be approximated by a linear decrease.

I haven’t used it yet, but I think the formula works out like this

We take a cross-section of the field

The camera is looking at a light

We know that angle, it’s the tilt of the camera.

We also know the height of the light compared to the height of the camera, it’s a constant.

tan(camera_tilt_angle) = (height_of_light − height_of_camera) / distance_from_rack

that’s it.

I think…

Finding the distance is a bit unnecessary, and just adds extra load for your program. We just use the angle the camera is pointing, as mentioned earlier in this thread.

if (pid_isDone(&robot_dist) == 1) {
    auto_subr = rsm_scoring;
}

Team 1286 never really used a distance table, just the tilt angle position in counts (0 to 254). I found that the tilt position was about 170 counts at the starting position; near the rack, with the camera locked on, it increased to about 192 counts. So the main part of the program drove forward slowly, say 160 forward pwm counts (we have a tank tread design and lots of friction), for the entire 15 seconds. Next, a lookup table was used to subtract from the pwm counts as the tilt angle changed. So, say, the table at 170 counts gave 0 (no change) and at 192 counts gave 15 counts (enough to stop the robot).

As the robot moves forward, the tilt angle position in counts subtracts more and more from the forward pwm counts (160, 159, 158 to 145) which stops the robot.
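A hypothetical sketch of that approach: a constant forward pwm, minus a table value that grows as the tilt count rises near the rack. The table contents are invented; only the 170/192 endpoints and the 160/145 pwm values come from the description above:

```c
#include <assert.h>

#define FWD_PWM 160                  /* constant slow-forward pwm counts */
static const int slowdown[23] = {    /* indexed by (tilt - 170)          */
     0,  0,  1,  1,  2,  2,  3,  4,  5,  6,  7,  8,
     9, 10, 11, 12, 13, 14, 15, 15, 15, 15, 15
};

int drive_pwm(int tilt)
{
    int i = tilt - 170;
    if (i < 0)  i = 0;   /* clamp below the calibrated range */
    if (i > 22) i = 22;  /* clamp above it                   */
    return FWD_PWM - slowdown[i];
}
```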

BTW, you can use the location of the light in the raster to adjust the servo position reading to be more accurate. That is, you can add or subtract the pixel location of the light (scaled) from the servo position to get a truer angle.

You can even do this without trig, although this becomes less accurate as you get closer to the edge of the screen.
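A sketch of that trig-free correction. Both constants are assumptions for illustration, not CMUcam specifics; you would find the real centre row and scale factor by calibration:

```c
#include <assert.h>

#define IMG_CENTER_Y       72  /* hypothetical centre row of the frame    */
#define COUNTS_PER_PX_X8    3  /* hypothetical servo counts per pixel, x8 */

/* Refine the tilt reading with the blob's offset from the image centre.
   The x8 fixed-point scale keeps the math in integers. */
int corrected_tilt(int servo_counts, int blob_y)
{
    return servo_counts + ((blob_y - IMG_CENTER_Y) * COUNTS_PER_PX_X8) / 8;
}
```

As noted, a fixed counts-per-pixel scale is only an approximation of the trig, and it drifts near the edge of the screen.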

I did the research last year. I don’t have the code in front of me, so I can’t give details.

How do we find the distance from the rack? Our driver observes the light rays being bounced off the robot, field, and rack, and reacts based on this information.

But seriously, I tried a tan lookup and had no luck; it kept returning very negative numbers, and it was ship date when I started working on it, so I didn’t bother to troubleshoot; I scrapped it.

Auton doesn’t seem that important anyhow.

JBot

if (tilt_servo < 220)
    // drive
else
    // at rack

Ha-ha! Thanks for distilling what appears to be a complicated thing down to its essence!