During the search mode, the camera does its first sweep, and then when it moves back to make its second sweep at a higher angle, the camera catches a glimpse of the light, but not enough to lock, so it restarts the search function from the beginning. This happens over and over, and the camera never locks! Any suggestions? Thanks.
One other question as well. The tilt angle, when taken, does not vary enough to give the trig functions the information they need to compute distance properly; the angle intervals are spaced too far apart. Any suggestions? Any more accurate ways of finding distance? Thanks again.
We had the same problem. A very simple way to fix this is to change the values the camera goes to for pan and tilt when it begins a new search. Where the code is for the new search check, we changed our code to this…
We commented out the old values and just changed the search to continue from the current servo positions (you will notice a temporary “freeze” of your camera when it outputs the same values for a second time after it finds and loses the target in the sweep).
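In case it helps, here is a minimal sketch of that change. The variable and function names here are assumptions for illustration, not the stock camera code; the point is just that a restarted search picks up from the current servo positions instead of jumping back to fixed start values.

```c
/* Sketch of the "resume search from current position" fix.
   Names below (search_pan, search_tilt, Restart_Search) are
   hypothetical -- adapt to your own search-state variables. */
static unsigned char search_pan;
static unsigned char search_tilt;

void Restart_Search(unsigned char current_pan, unsigned char current_tilt)
{
    /* Old behavior: jump back to fixed start values, so a brief
       glimpse of the light resets the whole sweep:
       search_pan  = PAN_SEARCH_START;
       search_tilt = TILT_SEARCH_START; */

    /* New behavior: continue the sweep from wherever the camera
       is right now. */
    search_pan  = current_pan;
    search_tilt = current_tilt;
}
```

This is why you see the brief "freeze": the first iteration after the restart commands the servos to the positions they are already at.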
Denz, the math and the algorithms show there are several different things that affect accuracy. The camera can be mounted at the max robot height and still give plenty of accuracy if the math is understood and a few undocumented details are known. To pin down which part of the vertical vs. distance accuracy you are referring to, here is a list of things to consider:
Height of the camera center: max allowed is near 60".
Height to the center of the target: see the rules. It is 130".
Degrees per servo step: measure this experimentally. You can derive a simple formula to convert pwm to degrees.
Camera sensor size in pixels: the vertical size is given by the #define IMAGE_HEIGHT, 240 pixels.
Camera field of view in degrees: undocumented. This can be measured experimentally. People have reported 34-36 degrees vertically; ours was 35 degrees.
Target centroid x,y in pixels: undocumented. These are the mx, my values in the T_Packet_Data_Type struct.
“allowable error” in pixels. This is in the #define TILT_ALLOWABLE_ERROR_DEFAULT and the default is 6 pixels. You can read tracking.h and tracking.c to see what this does. The camera stops moving when the target center is within that many pixels of the camera center.
The allowable error can be modified to have the camera center closer to the actual target center.
The camera pwm values can be used for a coarse angle measurement then the target centroid pixels can be used for fine measurement.
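The coarse-plus-fine idea in the list above can be sketched in a few lines of C. The constants here are example values from this thread (0.8 degrees per pwm step, a 35 degree vertical field of view, 240 vertical pixels); the center pwm value is a made-up placeholder you would calibrate on your own camera.

```c
#include <math.h>

/* Coarse angle from the tilt servo pwm, fine correction from the
   target centroid pixel. Constants are example/measured values from
   this thread -- measure your own. TILT_CENTER_PWM is hypothetical. */
#define DEG_PER_PWM_STEP 0.8    /* measured tilt degrees per pwm count  */
#define CAMERA_FOV_DEG   35.0   /* measured vertical field of view      */
#define IMAGE_HEIGHT     240    /* vertical pixels                      */
#define TILT_CENTER_PWM  124    /* pwm where the camera looks level     */

double tilt_angle_deg(int tilt_pwm, int centroid_y)
{
    /* coarse: robot to camera-center angle */
    double coarse = (tilt_pwm - TILT_CENTER_PWM) * DEG_PER_PWM_STEP;
    /* fine: camera-center to target-center angle,
       35/240 is roughly 0.15 degrees per pixel */
    double fine = (centroid_y - IMAGE_HEIGHT / 2.0)
                  * (CAMERA_FOV_DEG / IMAGE_HEIGHT);
    return coarse + fine;
}
```

With the centroid pixel folded in, the camera no longer has to be perfectly centered on the target for the angle (and therefore the distance) to come out right.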
We decided to use blob size, which is effective to an extent. The camera stays at a tilt angle of 38 degrees for about 6 feet before it changes to anything else. I have the tilt error set at 2 pixels; maybe I should adjust it to 1? Anyhow, my math was definitely correct, and I converted degrees into radians and everything (I’m a math guy, don’t worry). The problem is that the tilt angle changes very little over big distances. We’re using blob size for now, but if I can get the distance calculation to work, that would be great!
Thanks Donut, I will try your suggestion, it looks like it works.
Thanks a lot for everyone’s help, lol, being a first-year programmer is pretty hard!
The camera servo resolution is 0.8 degrees per pwm step, but it is of course subject to the deadzone when the target center is near the camera center. The target center (in pixels), however, does not use that deadzone. Camera pixels (vertical) have a resolution of about 0.14 degrees per pixel. So if you want accuracy, use the camera pwm to get the bot-to-camera-center angle, then the target centroid pixels to get the camera-center-to-target-center angle.
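A quick arithmetic check of that per-pixel figure, using the measured values from earlier in the thread (35 degree vertical field of view, 240 vertical pixels):

```c
/* Degrees of angle represented by one vertical pixel.
   35 / 240 is roughly 0.146 degrees per pixel, about 5-6x finer
   than the 0.8 degree pwm step. */
double deg_per_pixel(double fov_deg, double pixels)
{
    return fov_deg / pixels;
}
```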
This post is clear up until the last sentence. What procedure are you advocating? Are you saying that instead of a straight camera-servo-value-to-target-distance calculation, you should use the target center value in pixels?
If you are:
Does this make a difference if the error tolerance is 0 or 1?
Is this more accurate? What is the math behind it?
I guess the advantage would be that the target does not have to be in the center of the camera field of view to calculate distance.
The camera servo value gives you the angle between the robot and the camera center. The camera lock algorithm does not guarantee that the camera center is aligned with the target center; in fact, it rarely is. The pixel information provides the angle between the camera center and the target center. So now you have both the angle from the bot to the camera center *and* the angle from the camera center to the target center.
I do not understand the error tolerance question. Using both angles mentioned previously is much more accurate than using just the first one. The accuracy can be calculated.
The bot to camera center calculation is a straight interpolation of the form:
bot to camera angle = pwm * (a/b) + c
This is the standard two-points-define-a-line formula. To derive the constants, use two different pwm values (spaced far apart) and measure the camera angle at each.
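The two-point fit can be done once on the bench. A small sketch (the pwm/angle pairs you would plug in are your own measurements, not fixed values):

```c
/* Fit angle = slope * pwm + intercept through two calibration
   measurements (pwm value, measured camera angle in degrees). */
typedef struct { double pwm; double angle_deg; } CalPoint;

void fit_line(CalPoint p1, CalPoint p2, double *slope, double *intercept)
{
    *slope     = (p2.angle_deg - p1.angle_deg) / (p2.pwm - p1.pwm);
    *intercept = p1.angle_deg - *slope * p1.pwm;
}
```

Spacing the two calibration points far apart, as suggested above, keeps the measurement error in each point from swamping the slope.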
The camera center to target center formula is similar:
camera to target angle = (target pixel - center pixel) * (camera fov angle / camera pixels)
bot to target angle = bot to camera angle + camera to target angle
You have already thought outside the box and determined a way to use the new information. Not only do you have a more accurate distance calculation, you also have more accurate target azimuth and elevation values. It only requires a few extra calculations in the code to greatly increase the accuracy, and you will have a better understanding of the system. You can quantify the accuracy of each part of the system.