Chief Delphi > Technical > Control System > Sensors
#1 | 11-01-2012, 23:02
Jeanne Boyarsky
Java Mentor
FRC #0694 (StuyPulse)
Team Role: Mentor
 
Join Date: Jan 2010
Rookie Year: 2010
Location: New York
Posts: 97
pixel granularity with axis camera

We were reading the vision whitepaper on camera aiming to figure out whether the camera can be used to estimate distance. The document implies it works fine.

However, there is an arithmetic error in the document. On page 9, it says
Quote:
The target width measures 2 ft wide, and in the example images used earlier, the target rectangle measures 56 pixels when the camera resolution is 320x240. This means that the blue rectangle width is 2*320/56 or 11.4 ft. Half of the width is 6.7 ft, and the camera used is the M1011, so the view angle is ~47°, making Θ equal to 23.5°.
Half of 11.4 feet is 5.7 feet, not 6.7 feet, which implies that what follows is also inaccurate.

One of our team's mentors recalculated and concluded:
Quote:
Even with the best possible image, one has to be no more than 95 inches from the target for one inch of distance change to result in a one pixel change and no more than 68 inches for a 2 pixel change.
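For anyone who wants to check the numbers, here is a quick sketch (plain Python; the 47° view angle, 56-pixel example width, and 2 ft target are from the whitepaper, the rest is my own arithmetic) that redoes the page-9 example with the corrected half-width and derives the granularity thresholds our mentor described:

```python
import math

VIEW_ANGLE_DEG = 47.0          # M1011 horizontal view angle from the paper
THETA = math.radians(VIEW_ANGLE_DEG / 2)

IMAGE_PX = 320                 # horizontal resolution
TARGET_FT = 2.0                # target width
TARGET_PX = 56                 # measured target width in the example image

fov_ft = TARGET_FT * IMAGE_PX / TARGET_PX    # 11.43 ft (paper rounds to 11.4)
half_fov_ft = fov_ft / 2                     # 5.71 ft (paper mistakenly says 6.7)
distance_ft = half_fov_ft / math.tan(THETA)  # ~13.1 ft with the corrected half-width

# Granularity: target width in pixels at distance d inches is
# width_px(d) = 24 * 320 / (2 * d * tan(theta)) = C / d, so a 1-inch move
# changes the width by roughly C/d**2 pixels; set that to 1 px and solve for d.
C = 24.0 * IMAGE_PX / (2 * math.tan(THETA))
d_one_px = math.sqrt(C)        # distance for 1 px change per inch of motion
d_two_px = math.sqrt(C / 2)    # distance for 2 px change per inch of motion

print(round(distance_ft, 1), round(d_one_px), round(d_two_px))
```

With the nominal 47° angle these come out near 94" and 66"; the 95"/68" figures above presumably used a slightly different (calibrated) view angle.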
Thoughts? Is this discussed anywhere? (I did try searching.)
__________________
Team 694 mentor 2010-present, FIRST Volunteer and Co-organizer of FIRST World Maker Faire Tent
2012 NYC Woodie Flowers Finalist
2015 NYC Volunteer of the Year
#2 | 12-01-2012, 19:18
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,748
Re: pixel granularity with axis camera

That is correct: the white paper has an error in calculating the example distance, but the formulas are correct. Using the corrected value of 5.7 gives a distance of 13.1 ft. As mentioned in the paper, the actual lens view angle is a bit different from the data sheet, and that calibration was not based on the example data. The example coded in LabVIEW divides by two correctly and, not surprisingly, gives the correct distance value.

The calculations in the camera-size section are not affected by the earlier error. To elaborate a bit: the point where a 2" element is 2 pixels wide is where the field of view is 320 pixels and 320" wide. Half of 320" is 160", or 13.3 ft, and 13.3 / tan(23.5°) is a bit over 30 ft.

Using the equations from the paper, at 95" from the target the target should appear 93 pixels wide; at 96", it should appear 92 pixels wide. So it would appear that we are in mathematical agreement.
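That agreement is easy to script. This sketch is my own code, not the paper's, and assumes the nominal 47° view angle and the 24" target width:

```python
import math

THETA = math.radians(47.0 / 2)   # half view angle, nominal value from the paper

def target_px(d_in, target_in=24.0, image_px=320):
    """Apparent target width in pixels at distance d_in inches."""
    fov_in = 2 * d_in * math.tan(THETA)   # field-of-view width at the target plane
    return target_in / fov_in * image_px

print(round(target_px(95)), round(target_px(96)))   # 93 and 92 pixels

# Distance where a 2" element spans exactly 2 px, i.e. the FOV is 320" wide:
d_ft = (320 / 2) / math.tan(THETA) / 12
print(round(d_ft, 1))   # a bit over 30 ft
```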

That puts the expected error term of the distance at around 8 ft (95 in) from the target at plus or minus 0.5 inches, which I don't believe is a problem. At 27 ft, the error term looks like plus or minus 6 inches. I won't claim that is ideal, but I'd expect the variability of the balls and other mechanical shooter elements to be similar.

I hope that helps explain things a bit.

Greg McKaskle
#3 | 13-01-2012, 02:28
Sparks333
Robotics Engineer
AKA: Dane B.
FRC #1425 (Wilsonville Robotics)
Team Role: Alumni
 
Join Date: Feb 2004
Rookie Year: 2003
Location: Wilsonville, Oregon
Posts: 184
Re: pixel granularity with axis camera

If this granularity is insufficient, you might try a few oversampling techniques to get sub-pixel resolution - I find that dithering works very well in these scenarios.
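A toy sketch of the dithering idea (plain Python with made-up numbers, nothing camera-specific): if natural jitter dithers the true edge position across the pixel grid, averaging many whole-pixel readings recovers a sub-pixel estimate.

```python
import random
random.seed(1)

# Hypothetical "true" target width in pixels; no single quantized reading
# can resolve the fractional part, but the average of many dithered ones can.
true_width = 92.4
samples = [round(true_width + random.uniform(-0.5, 0.5)) for _ in range(200)]
estimate = sum(samples) / len(samples)
print(round(estimate, 1))   # close to 92.4, better than any single reading
```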

Sparks
__________________
ICs do weird things when voltage is run out of spec.

I love to take things apart. The fact that they work better when I put them back together it just a bonus.

http://www.ravenblack.net/random/surreal.html
#4 | 14-01-2012, 09:58
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,748
Re: pixel granularity with axis camera

If you care more about precision than performance, you don't have to use the bounding-box width. You can pick a location, say near the vertical center of the bounding rect, and use edge detection on the original monochrome image to find the distance between the vertical edges. That measurement can give sub-pixel accuracy, though keep in mind that you can't make something from nothing. The achievable accuracy improvement and the technique are described in the Vision Concepts manual -- C:\Program Files\National Instruments\Vision\Documentation\NIVisionConcepts.chm. Chapter one covers edge definition.

As usual, it is easiest to experiment with this in Vision Assistant. Take your color image, extract the luminance plane, and use the Edge Detector (Simple Edge Tool, First and Last, with a large edge strength, like 200). The right edge strength depends on the brightness of your ring light; draw a line across the rectangle and the graph will help you determine the strength that differentiates the edges. The X values are then sub-pixel and more accurate than the bounding box.
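A plain-Python sketch of the interpolation behind that (not the NI Vision API; the profile values are made up): find where a luminance profile crosses the edge-strength threshold and linearly interpolate between the two samples to get sub-pixel edge positions.

```python
def subpixel_edges(profile, threshold):
    """Return (first, last) sub-pixel x positions where profile crosses threshold."""
    crossings = []
    for x in range(len(profile) - 1):
        a, b = profile[x], profile[x + 1]
        if (a < threshold) != (b < threshold):
            # linear interpolation between the two bracketing samples
            crossings.append(x + (threshold - a) / (b - a))
    return crossings[0], crossings[-1]

# Bright target (~220) on a dark background (~20), with soft one-pixel edges:
profile = [20, 20, 60, 180, 220, 220, 220, 200, 90, 25, 20]
first, last = subpixel_edges(profile, 120)
print(first, last, last - first)   # edge positions and sub-pixel width
```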

Greg McKaskle