  #9   17-02-2006, 13:17
Keith Watson
Registered User
FRC #0957 (WATSON)
Team Role: Mentor
 
Join Date: Feb 2006
Rookie Year: 2006
Location: Wilsonville, OR
Posts: 112
Re: Camera Cycling And Angle

Quote:
Originally Posted by gnirts
This post is clear up until the last sentence. What procedure are you advocating? Are you saying that instead of a straight camera servo value to target distance calculation you should use the target center value in pixels?
The camera servo value gives you the angle between the robot and the camera center. The camera lock algorithm does not guarantee that the camera center is aligned with the target center; in fact, it rarely is. The pixel information provides the angle between the camera center and the target center. So now you have both the angle from the bot to the camera center AND the angle from the camera center to the target center.

Quote:
Originally Posted by gnirts
If you are:
Does this make a difference if the error tolerance is 0 or 1?
Is this more accurate? What is the math behind it?
I do not understand the error tolerance question. Using both angles mentioned previously is much more accurate than using just the first one. The accuracy can be calculated: the resolution of the pixel term, for example, is roughly the camera's field of view divided by its horizontal pixel count.

The bot-to-camera-center calculation is a straight linear interpolation of the form:

bot to camera angle = pwm * (a/b) + c

This is just the equation of the line defined by two points. To derive the constants, command two different pwm values (spaced far apart) and measure the camera angle at each.
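
As a sketch in C (the pwm values and angles below are made-up calibration numbers; substitute whatever you measure on your own camera mount):

Code:
/* Hypothetical calibration points: pan pwm 52 measured at -60 degrees,
   pan pwm 202 measured at +60 degrees.  Replace with your own measurements. */
#define CAL_PWM_1    52.0
#define CAL_ANGLE_1 (-60.0)
#define CAL_PWM_2   202.0
#define CAL_ANGLE_2  60.0

/* bot-to-camera angle (degrees) for a given pan servo pwm:
   angle = pwm * (a/b) + c, with a/b and c derived from the two points above */
double bot_to_camera_angle(double pwm)
{
    double slope     = (CAL_ANGLE_2 - CAL_ANGLE_1) / (CAL_PWM_2 - CAL_PWM_1);
    double intercept = CAL_ANGLE_1 - slope * CAL_PWM_1;
    return pwm * slope + intercept;
}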

The camera center to target center formula is similar:

camera to target angle = (target pixel - center pixel) * (camera fov angle / camera pixels)
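
A sketch of that in C (the 176-pixel width matches the CMUcam2's high-resolution mode, but treat the field-of-view number as a placeholder since it depends on your lens):

Code:
/* Hypothetical camera parameters -- substitute your own camera's numbers. */
#define CAMERA_PIXELS      176.0   /* horizontal resolution in pixels */
#define CAMERA_FOV_DEG      49.0   /* horizontal field of view, lens dependent */
#define CAMERA_CENTER_PIX   88.0   /* CAMERA_PIXELS / 2 */

/* camera-to-target angle (degrees): offset of the target from the image
   center, scaled by degrees per pixel */
double camera_to_target_angle(double target_pixel)
{
    return (target_pixel - CAMERA_CENTER_PIX) * (CAMERA_FOV_DEG / CAMERA_PIXELS);
}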

Then combine:

bot to target angle = bot to camera angle + camera to target angle
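
Putting the two hypothetical helpers from above together:

Code:
/* bot-to-target angle: pan servo angle plus the pixel-offset angle */
double bot_to_target_angle(double pan_pwm, double target_pixel)
{
    return bot_to_camera_angle(pan_pwm) + camera_to_target_angle(target_pixel);
}

The same two-angle idea applied to the tilt servo and the vertical pixel coordinate gives the elevation value mentioned below.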

Quote:
Originally Posted by gnirts
I guess the advantage would be that the target does not have to be in the center of the camera field of view to calculate distance.

Any other advantages?
You have already thought outside of the box and determined a way to use the new information. Not only do you have a more accurate distance calculation, you also have more accurate target azimuth and elevation values. It only requires a few extra calculations in the code to greatly increase the accuracy. You will have a better understanding of the system, and you can quantify the accuracy of its different parts.
__________________
Keith Watson - Professional Software Engineer
No relation to "Kevin" Watson, who created the camera tracking code.