pixel granularity with axis camera
We were reading the vision whitepaper on camera aiming to figure out whether the camera can be used for distance estimation. The document implies that it can.
However, there is an arithmetic error in the document. On page 9, it says Quote:
One of our team's mentors recalculated and concluded: Quote:
Re: pixel granularity with axis camera
That is correct, the white paper has an error in calculating the example distance, but the formulas are correct. Using the correct value of 5.7 gives a distance of 13.1 ft. As mentioned in the paper, the actual lens view angle is a bit different from the data sheet value; that calibration was not based on the example data. The example coded in LabVIEW, not surprisingly, performs the divide-by-two correctly and gives the correct distance value.
The calculations in the camera size section are not impacted by the previous error. To elaborate a bit, the point where a 2" element is 2 pixels wide is where the field of view is 320 pixels and 320" wide. Half of 320" is 160", or 13.3 ft, and 13.3 / tan(47 / 2) is a bit over 30 ft.

Using the equations from the paper, at 95" from the target the target is predicted to be 93 pixels wide; at 96" it will appear 92 pixels wide. So it would appear that we are in mathematical agreement. From this, the expected error in the distance at around 8 ft (95 in) from the target is plus or minus 0.5 inches. I don't really believe that is a problem. At 27 ft, it looks like the error term is plus or minus 6 inches. I'm not going to claim that is ideal, but I would expect the variability of the balls and other mechanical shooter elements will likely be similar.

I hope that helps explain things a bit.
Greg McKaskle
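To make the arithmetic above concrete, here is a small sketch of the same relationship, assuming the numbers quoted in the thread: a 320-pixel-wide image, a 47 degree horizontal view angle, and a 24" wide target (the target width is an assumption; it is the value consistent with the quoted 93/92 pixel counts).

```python
import math

# Assumed values taken from the numbers quoted above.
IMAGE_WIDTH_PX = 320
VIEW_ANGLE_DEG = 47.0
TARGET_WIDTH_IN = 24.0       # assumption consistent with the quoted pixel counts
HALF_TAN = math.tan(math.radians(VIEW_ANGLE_DEG / 2.0))

def pixels_at_distance(distance_in):
    """Predicted pixel width of the target at a given distance (inches)."""
    fov_width_in = 2.0 * distance_in * HALF_TAN
    return TARGET_WIDTH_IN / fov_width_in * IMAGE_WIDTH_PX

def distance_at_pixels(pixel_width):
    """Distance (inches) implied by a measured target width in pixels."""
    return TARGET_WIDTH_IN * IMAGE_WIDTH_PX / (2.0 * pixel_width * HALF_TAN)

print(round(pixels_at_distance(95)))   # ~93 px at 95"
print(round(pixels_at_distance(96)))   # ~92 px at 96"

# One-pixel granularity near 8 ft: about 1" total, i.e. +/- 0.5"
print(distance_at_pixels(92) - distance_at_pixels(93))

# One-pixel granularity near 27 ft (324"): about 12" total, i.e. +/- 6"
p27 = pixels_at_distance(324)
print(distance_at_pixels(p27 - 0.5) - distance_at_pixels(p27 + 0.5))
```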
Re: pixel granularity with axis camera
If this granularity is insufficient, you might try a few oversampling techniques to get sub-pixel resolution - I find that dithering works very well in these scenarios.
Sparks
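One way to read the oversampling suggestion above (an interpretation, not a specific tool named in the thread): the image naturally jitters by a fraction of a pixel from frame to frame, so averaging an integer width measurement over many frames can recover some sub-pixel information. A minimal sketch, where measure_width_px is a hypothetical caller-supplied function:

```python
# Sketch of frame averaging as a form of oversampling (an interpretation
# of the suggestion above, not a specific library call).
def averaged_width_px(measure_width_px, num_frames=20):
    """Average a per-frame integer pixel-width measurement.

    measure_width_px is a hypothetical caller-supplied function that
    grabs one frame and returns the target's bounding-box width in pixels.
    """
    samples = [measure_width_px() for _ in range(num_frames)]
    return sum(samples) / float(num_frames)
```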
Re: pixel granularity with axis camera
If you care more about precision than performance, you don't have to use the bounding box width. You can pick a location, say near the vertical center of the bounding rect, and use edge detection on the original image to find the distance between the vertical edges of the original monochrome image. That measurement can give sub-pixel accuracy, though keep in mind that you can't make something from nothing. The achievable accuracy improvement and the technique are described in the Vision Concepts manual -- C:\Program Files\National Instruments\Vision\Documentation\NIVisionConcepts.chm. Chapter one covers edge definition.
As usual, it is easiest to experiment with this using Vision Assistant. Take your color image, extract the luminance plane, and use the Edge Detector (Simple Edge Tool, First and Last, with a large edge strength of around 200). The edge strength depends on the brightness of your ring light; draw a line across the rectangle, and the graph will help you determine the actual strength that differentiates the edges. The X values are now sub-pixel and more accurate than the bounding box.
Greg McKaskle
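For reference outside Vision Assistant, here is a minimal sketch of the same idea (not NI Vision's implementation): scan one row of the luminance plane, find the first and last threshold crossings, and linearly interpolate between the straddling pixels to get sub-pixel edge positions. The threshold plays the role of the edge strength discussed above.

```python
import numpy as np

def subpixel_edges(row, threshold=200.0):
    """Return (left, right) sub-pixel edge positions along one image row."""
    row = np.asarray(row, dtype=float)
    bright = np.flatnonzero(row >= threshold)
    if bright.size == 0:
        return None                      # no target on this line
    first, last = bright[0], bright[-1]
    left = float(first)
    if first > 0 and row[first] != row[first - 1]:
        # Linear interpolation across the rising edge.
        left = first - 1 + (threshold - row[first - 1]) / (row[first] - row[first - 1])
    right = float(last)
    if last < row.size - 1 and row[last] != row[last + 1]:
        # Linear interpolation across the falling edge.
        right = last + (row[last] - threshold) / (row[last] - row[last + 1])
    return left, right

# Example: a bright stripe (the reflective target) on a dark background.
profile = [20, 30, 120, 240, 250, 250, 245, 230, 90, 25, 20]
left, right = subpixel_edges(profile)
print(left, right, right - left)         # fractional width, not an integer
```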