Our team is working on implementing vision processing, and we have rotation calculations down, but we are unsure how to approach calculating distance. I have seen that some people use a scale factor as a direct ratio to the height of the target, and that seems easy enough, but I have heard that this doesn't have the best properties (from 254). They recommend using "line-plane intersections", but I have had trouble finding examples of this. Where would I be able to find examples of these calculations? And are there any better ways to calculate distance?
We are using an ultrasonic sensor to determine range to the target, since I believe using vision itself to determine range is less reliable/accurate. That said, if you are still interested in doing it with vision, GeeTwo gave a good answer to this in another thread:
Last year we calculated distance and angle based on the x/y position of the target in the image and it worked pretty well.
If you know the FOV of your camera in degrees, the pixel resolution of the image, and the real-world dimensions of the target, then you can compute the distance.
This page should get you going. Especially the “Measurements” section about half-way down.
Identifying and Processing the Targets
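As a concrete illustration of that FOV-based calculation, here is a minimal Java sketch using a simple pinhole-camera model. The constants and names here are my own assumptions for illustration, not code from the linked page; substitute your camera's calibrated values.

```java
public class VisionDistance {
    // Assumed camera parameters -- replace with your own calibrated values.
    static final double VERTICAL_FOV_DEG = 41.0;   // camera's vertical field of view
    static final double IMAGE_HEIGHT_PX  = 480.0;  // vertical image resolution
    static final double TARGET_HEIGHT_IN = 14.0;   // real-world height of the target

    /**
     * Pinhole model:
     *   focalLengthPx = imageHeight / (2 * tan(vFov / 2))
     *   distance      = realHeight * focalLengthPx / pixelHeight
     */
    public static double distanceFromPixelHeight(double targetHeightPx) {
        double focalLengthPx =
            IMAGE_HEIGHT_PX / (2.0 * Math.tan(Math.toRadians(VERTICAL_FOV_DEG / 2.0)));
        return TARGET_HEIGHT_IN * focalLengthPx / targetHeightPx;
    }

    public static void main(String[] args) {
        // A 14 in target spanning 48 px works out to roughly 187 in away.
        System.out.printf("Distance: %.1f in%n", distanceFromPixelHeight(48.0));
    }
}
```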
Another way to calculate distance is to look at how high the target appears within the camera's field of view. Very close to the tower the vision target is at the top of the image, and as you back up it drops toward the middle of the image.
You can either use an empirical function to determine distance (beware: it's not linear) or use geometry. Our team uses geometry and it's been pretty successful; we typically calculate within 1-3 inches of the actual distance.
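The post doesn't include their formula, but a common geometric form of this approach converts the target's vertical pixel coordinate into a pitch angle and then solves the resulting right triangle. The sketch below is my own, under assumed mounting constants, and not necessarily their exact implementation:

```java
public class GeometryDistance {
    // Assumed mounting and field constants -- measure these on your robot.
    static final double CAMERA_HEIGHT_IN = 12.0;  // lens height above the floor
    static final double TARGET_HEIGHT_IN = 85.0;  // target center above the floor
    static final double CAMERA_PITCH_DEG = 25.0;  // upward tilt of the camera
    static final double VERTICAL_FOV_DEG = 41.0;
    static final double IMAGE_HEIGHT_PX  = 480.0;

    /**
     * Converts the target's y pixel (0 = top of image) into an angle above the
     * camera's optical axis, then uses
     *   distance = (targetHeight - cameraHeight) / tan(cameraPitch + pixelAngle).
     */
    public static double distanceFromPixelY(double targetYPx) {
        double focalLengthPx =
            IMAGE_HEIGHT_PX / (2.0 * Math.tan(Math.toRadians(VERTICAL_FOV_DEG / 2.0)));
        // Positive when the target is above the image center.
        double pixelAngleRad =
            Math.atan((IMAGE_HEIGHT_PX / 2.0 - targetYPx) / focalLengthPx);
        double totalAngleRad = Math.toRadians(CAMERA_PITCH_DEG) + pixelAngleRad;
        return (TARGET_HEIGHT_IN - CAMERA_HEIGHT_IN) / Math.tan(totalAngleRad);
    }
}
```

Note that taking atan of the pixel offset, rather than using a fixed degrees-per-pixel scale, keeps the angle accurate away from the image center.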
This method is also easy to calibrate: although the pixels-to-distance function isn't linear, the error in the calculated distance grows roughly linearly with actual distance, so finding the slope of that error line lets you adjust the camera-angle constant in the formula.
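One way to read that calibration tip: record a few (actual, computed) distance pairs with a tape measure, fit a line to the error, and nudge the camera-pitch constant until the slope is near zero. A rough sketch of the fitting step (the names and sample numbers here are mine, not from the post):

```java
public class DistanceCalibration {
    /**
     * Fits (error vs. actual distance) to a line by least squares and returns
     * the slope; a nonzero slope hints that the angle constant is off.
     */
    public static double errorSlope(double[] actual, double[] computed) {
        int n = actual.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            double x = actual[i];
            double y = computed[i] - actual[i]; // error at this range
            sumX += x;
            sumY += y;
            sumXY += x * y;
            sumXX += x * x;
        }
        // Least-squares slope of the error line.
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }

    public static void main(String[] args) {
        double[] actual   = { 60, 90, 120, 150 }; // tape-measured, inches
        double[] computed = { 61, 92, 123, 154 }; // from the vision code
        System.out.printf("Error slope: %.3f%n", errorSlope(actual, computed));
    }
}
```

If the slope stays positive after adjusting, the camera-pitch constant is likely too large (or too small), so tweak it and re-measure until the error flattens out.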