Possible Wrong Wiring in FRC Vision Tracking Example

In the “Rectangular Tracking - 2013.lvproj” project there is a “distance.vi”. I believe either something is wired incorrectly there, or this post is incorrect about which camera axis resolution you should use.

The post states that if you are trying to compute distance based on the height of the sides of the rectangle, then you should use the Y axis resolution, which is 240.

However, distance.vi (attached to this post) clearly uses the X axis resolution of 320, while the code still appears to compare against the height of the rectangles, which is what confuses me.

Is the post wrong, is the code wrong, or am I just reading this wrong?

Bad FRC Programming.PNG

That is a good question, and clearly that expression needs more documentation.

The pixel dimension taken from the camera resolution needs to match the view angle. So if the datasheet gave a vertical field of view, we’d use that. If it gave a diagonal, we’d use that. As best I could read the datasheet, it was the width that they specified; they list it on page 49 as the horizontal angle of view.

Greg McKaskle

Based on what I can infer, the field of view is the same in any direction: up, down, left, or right. The only difference is that, because of the camera aspect ratio, we go further left and right than up and down. However, that shouldn’t change anything, since we know how much further left-right we go versus up-down.

However, if there is an actual difference (meaning what I said above is incorrect, which it completely could be), then if we are using the width of the image, we should be basing our distance on the horizontal length of the rectangles, not their height.

Haha, yeah, more info on what they are trying to accomplish here would help a lot. Right now I don’t really want to change their example code, but it does seem like something is either misplaced, or at least that the whole story isn’t being told.

The things that need to match are the object size in pixels and in units, and the camera view in pixels and the FOV angle. If the pixels weren’t square, then we shouldn’t switch between height, width, or diagonals. The rest of that is some unit conversion from feet to inches.

The fundamental relationship being used is that at a given plane parallel to the camera sensor, the (size in pixels)/(size in inches) is relatively consistent for all objects in that plane. This means that knowing the physical width or height of an object in the scene lets you calculate a width and height of all other objects in the plane based on their pixel size. This relationship is also true for the visual boundary of the image. The 240x320 image now has a physical size. And that is where you can use the optics of the camera to determine the distance to that plane.
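
For anyone who wants to play with the numbers, here is a rough Python sketch of that relationship. This is not the actual LabVIEW wiring in distance.vi, and the function and parameter names are just placeholders for illustration:

```python
from math import tan, radians

def estimate_distance_ft(image_width_px, target_width_px, target_width_ft,
                         horizontal_view_angle_deg):
    """Rough distance estimate, assuming the target is roughly parallel to
    the camera sensor, so (size in pixels)/(size in feet) is the same for
    everything in that plane, including the full image boundary."""
    # Physical width of the whole image at the target's plane, in feet.
    image_width_ft = (image_width_px / target_width_px) * target_width_ft
    # Half the image width and half the view angle form a right triangle
    # whose adjacent side is the distance from the camera to that plane.
    return (image_width_ft / 2) / tan(radians(horizontal_view_angle_deg / 2))
```

The same sketch works with the image height, the target height, and a vertical view angle, as long as all three come from the same axis.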

Greg McKaskle

Just to test out your theory, I created an image that was 320 x 240 pixels. In the center, I created a hollow box that was 72 x 72 pixels, and I arbitrarily decided that this box represents a 3 ft x 3 ft target. If I run the equation from the previous post using the width of the image and a horizontal view angle of 48 degrees (closer to the 206 camera’s real-life view angle), I get the following:

(((320/72)*3)/2)/tan(48/2) = 14.97ft

Now, if I do the same thing with the height of the image and the same 48 degree angle, it does not come out to 14.97ft. However, if I change the view angle to 36.95 degrees, then I have a match.

(((240/72)*3)/2)/tan(36.95/2) = 14.97ft

This, I believe, proves that the camera’s vertical view angle is different from its horizontal view angle.

I also believe this shows how important it is in the above calculation that if you use the height of the image, you use the vertical view angle, and if you use the width of the image, you use the horizontal view angle. The key point is to use the right combination and not to mix them! I see that the code uses the X resolution of the camera together with the horizontal view angle of the camera, so we should be good to go.
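
To double-check the arithmetic, here is a quick Python sketch (not the example code; it just redoes the math above and derives the vertical angle from the 48 degree horizontal angle assuming square pixels and the 4:3 image):

```python
from math import tan, atan, radians, degrees

image_w_px, image_h_px = 320, 240   # test image resolution
target_px, target_ft = 72, 3.0      # 72 x 72 px box assumed to be 3 ft x 3 ft
h_fov_deg = 48.0                    # horizontal view angle used above

# Distance from the width / horizontal-angle pairing.
d_width = ((image_w_px / target_px) * target_ft / 2) / tan(radians(h_fov_deg / 2))

# Vertical view angle implied by square pixels and the 4:3 aspect ratio.
v_fov_deg = 2 * degrees(atan(tan(radians(h_fov_deg / 2)) * image_h_px / image_w_px))

# Distance from the height / vertical-angle pairing.
d_height = ((image_h_px / target_px) * target_ft / 2) / tan(radians(v_fov_deg / 2))

print(d_width, v_fov_deg, d_height)  # roughly 14.97 ft, 36.9 degrees, 14.97 ft
```

The roughly 36.9 degrees that falls out of the square-pixel assumption is essentially the 36.95 degrees I found by trial above, so the two pairings agree as long as they are not mixed.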

However, the post linked in the first part of the thread is incorrect, since it uses the Y resolution of the camera with the horizontal view angle of the camera. Just an observation.

Presentation1 (Custom).jpg

I think we are making progress here, but I’m a bit confused. The datasheet for the camera mentions one angle, and I believe at least one camera’s datasheet made it clear that this was for the horizontal direction. The code in the picture at the top of the thread uses the X resolution and the X view angle.

Do you now feel that the example is correct or incorrect?

Greg McKaskle

Sorry Greg,

Yes, the code in the picture at the top is correct from what I can tell. However, the example of what you are supposed to do in the mentioned “post” (the one that is a hyperlink in the first post of this thread) is incorrect, since it uses the Y resolution of the image with the X view angle of the camera.

Ah. I totally overlooked my old post. Sloppy on my part. We are in agreement. The pixel size and angle need to agree.

Greg McKaskle