I was wondering how everyone else was finding the distance to the target using the camera? I am using the algorithm mentioned in the paper Position Determination that I have posted below. My problem is that I have to keep changing the effective focal length depending on my distance from the target to keep measurements accurate. Does anyone know of any better methods, or what I'm doing wrong with my current method?

Page 9 of the vision white paper includes an equation for calculating the distance using the pixel width, actual target width, camera view angle, and some trig.
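That relationship can be sketched in a few lines. This is a hedged reconstruction, not the white paper's exact code; the 47° FOV and the example numbers below are hypothetical placeholders you'd replace with your own camera's spec and measurements:

```python
import math

def distance_to_target(target_width_ft, target_px, image_width_px, fov_deg):
    """Estimate distance from the target's pixel width, its real width,
    and the camera's horizontal field of view (similar triangles + trig)."""
    fov_rad = math.radians(fov_deg)
    # The full image width spans 2 * d * tan(FOV/2) feet at distance d,
    # so solve for d from the fraction of the image the target occupies.
    return (target_width_ft * image_width_px) / (2.0 * target_px * math.tan(fov_rad / 2.0))

# Example: a 2 ft wide target spanning 160 px in a 640 px image,
# with a hypothetical 47 degree horizontal FOV.
d = distance_to_target(2.0, 160, 640, 47.0)  # roughly 9.2 ft
```

Note that doubling the pixel width halves the computed distance, which is the inverse-proportional behavior described in the posts below.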

What I did was find the height of the bounding box in pixels at different known distances. They're inversely proportional, so the constant is k = d*h, and d = k/h. I got about 15000 for k, but you should run a few trials. It works at angles and is accurate to within at most three inches.
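A minimal sketch of that calibration. The sample data here is made up to illustrate the fit; run your own trials at known distances to get your k:

```python
def fit_k(samples):
    """Average d*h over (distance, pixel_height) calibration samples,
    since d and h are inversely proportional (d*h = k)."""
    return sum(d * h for d, h in samples) / len(samples)

def distance_from_height(k, pixel_height):
    """Estimate distance from the bounding-box height in pixels."""
    return k / pixel_height

# Hypothetical calibration runs: (distance in inches, bounding-box height in px)
samples = [(60, 250), (120, 125), (180, 83)]
k = fit_k(samples)  # this toy data gives k near 15000, like the post above
```

Averaging d*h over several trials smooths out measurement noise better than trusting a single reading.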

Thank you very much for the new formula, but I was wondering: does the “k” constant stay the same when the robot is not directly in front of the target but at an angle? I was also wondering why other teams I talked to had all this crazy math and were using other image libraries like JavaCV?

Yes, the height will change based on where the target is in the image. Ideally you’d measure right in the center, where the distortion is the least … but if not, you’ll have to apply some form of distortion correction to accommodate the border areas.

You should not need to change the focal length once it’s set. If you can’t figure this out, post some images with known distances and we can help check what the issue may be.

I think you’ll find that this is the same technique as what is in the white paper, but rather than use the constants about the camera and the target and work through the relationships, this measures it as a black-box system, determines the relationship, and uses field-data to model it. When systems get complex enough, this is a very good approach. Since the math involved is just a bit of trig, it should be pretty easy to show how to compute k and to compare it to the measured value.
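To show the trig connection: from the white-paper relationship, the pixel height at distance d is h = H * y_res / (2 * d * tan(vFOV/2)), so d*h is a constant and you can predict k from the camera and target constants. The target height and vertical FOV below are hypothetical numbers for illustration only:

```python
import math

def k_from_camera(target_height, y_res_px, vfov_deg):
    """Predict the d*h constant from first principles:
    h_px = target_height * y_res / (2 * d * tan(vFOV/2))  =>  d * h_px = k."""
    return target_height * y_res_px / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))

# Hypothetical: an 18 in tall target, 480 px vertical resolution,
# 37 degree vertical FOV (check your camera's spec sheet).
k_predicted = k_from_camera(18, 480, 37)
```

Comparing the predicted k to the field-measured one is a good sanity check on your FOV number and target dimensions.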

I initially ran into problems when testing the code used in that white paper because of skew when moving from side to side. I changed the code to use the Y value instead of the X, so you won’t run into problems with skew when moving from side to side on the field. If your robot relies on getting up close to the target you may still encounter problems with Y-axis skew, but since we’re shooting balls and maintaining range, and since we have a fairly high mounted camera, we don’t foresee Y skew becoming too much of a problem.

I have this as a separate VI right inside my image processing loops.

Basically what this does is:

(Actual height of the target in ft * Y resolution of the camera image) / (the processed rectangle’s height in pixels) / 2 / (the tangent of the camera’s FOV/2)

Be very careful with the number you’re using for FOV. The spec sheets all give you the FOV in degrees, and I used radians in my LabVIEW code.
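The same calculation outside LabVIEW, as a sketch (target height and FOV are placeholder values, not the real game or camera specs), with the degrees-to-radians conversion done explicitly since that's the easiest mistake to make:

```python
import math

def distance_ft(target_height_ft, y_res_px, rect_height_px, vfov_rad):
    """(target height * Y resolution) / (rectangle height in px) / 2 / tan(vFOV/2).
    vfov_rad must be in RADIANS; spec sheets quote degrees."""
    return (target_height_ft * y_res_px) / rect_height_px / 2.0 / math.tan(vfov_rad / 2.0)

# Hypothetical: 1.5 ft tall target, 480 px image height, rectangle 100 px tall,
# 37 degree vertical FOV converted to radians.
d = distance_ft(1.5, 480, 100, math.radians(37))  # roughly 10.8 ft
```

If you forget `math.radians()` and pass the degree value straight through, the result will be wildly wrong rather than subtly off, so it's usually easy to catch.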

EDIT: I accidentally have width*x_res written in that image; that should be y_res if you’re using the height of the rectangle.

So far it’s generally accurate to within a few inches. Tweak the FOV number a little if you need to. And keep in mind that the FOV will change depending on whether you’re using the Axis 206 or the Axis M1011 camera.

Since we’re using the height of the rectangle, you will notice increased error if you’re driving up really close to it, since your angle will start getting funky, but the alternative is using the width, and that’ll get thrown off by moving left to right at all. What we figure is since we’re shooting we’re not too likely to be up really close to the target anyways, so it’s better to make that tradeoff.


This wouldn’t really work very well unless you’re directly in line with the target, though. Being at all to the left or the right means both the x and y lengths skew, and not necessarily proportionally.