# Calculating Angle to Target

So after getting some rudimentary rectangle detection working, I’ve found some big drawbacks in the “simple” approach of turning one way or the other until the robot faces the center of the target rectangle. The big problem (at least on our test robot) is that it takes a significant amount of motor power to start moving, and once you’re moving, you’re moving quite fast and tend to overshoot the target, leaving the robot swinging wildly back and forth.

So what I’d like to do is determine, from each image, the angle the robot needs to turn, and monitor a gyroscope to decide when to slow down and stop (using periodic, low-framerate images to update the angle to the target).

To do this, we’ll need to know the angle between our current facing and our desired facing. Eventually I came up with an idea I’d like to field here:

The camera has a fixed field of view; regardless of your distance from the target, the left edge of the image is a fixed number of degrees from the center, so it should be possible to determine the needed rotation by:
r = x * a

where r is the rotation needed (degrees), x is the target’s offset from the image center (pixels), and a is the camera’s degrees-per-pixel constant.

I do understand the edges of the image are a bit fisheyed (or perhaps the whole thing is), so it may not be quite this simple, but that should be easy enough to measure.
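To make the variables concrete, here’s the idea as a tiny sketch. The FOV and width constants are example values, not calibrated numbers, and the constant deg-per-pixel assumption ignores the fisheye issue above:

```python
# Sketch of the proposed calculation; constants are example values,
# not calibrated numbers.
FOV_DEG = 54.0       # assumed horizontal field of view, in degrees
IMAGE_WIDTH = 320    # assumed image width, in pixels

def rotation_needed(target_x):
    """Degrees to turn, given the target center's x coordinate in pixels."""
    a = FOV_DEG / IMAGE_WIDTH            # degrees per pixel (assumed constant)
    x = target_x - IMAGE_WIDTH / 2.0     # pixel offset from image center
    return x * a                         # r = x * a
```

A target at the image center returns 0, and a target at either edge returns half the field of view, signed.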

What does CD think… is this a good approach, or is there a better way of doing this that I’m overlooking?

We used that approach in 2006, and will be doing so again this year. It works well.

Just keep in mind that the camera image will have a significant lag.

That’s exactly the approach used in the example code from a couple of years ago (Breakaway?). Use the camera to determine how far to turn, then use the gyro to execute the turn.

The camera and vision processing has enough delay to pretty much guarantee that you’ll overshoot the target direction if that’s all you use.
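A rough sketch of the camera-then-gyro scheme described above, with the camera consulted once up front to get the turn amount and the gyro closing the loop. All names here are hypothetical stand-ins for your actual sensor and drivetrain code:

```python
def turn_to_angle(turn_degrees, gyro_read, set_motor, tolerance=1.0):
    """Execute a relative turn using the gyro, slowing as we approach.

    gyro_read() returns the accumulated heading in degrees;
    set_motor(power) commands the drivetrain turn power in [-1, 1].
    The camera supplies turn_degrees; later frames could re-seed the
    target while this loop runs.
    """
    start = gyro_read()
    target = start + turn_degrees
    while abs(target - gyro_read()) > tolerance:
        error = target - gyro_read()
        # Proportional slowdown near the target to avoid overshoot
        power = max(-0.5, min(0.5, error * 0.02))
        set_motor(power)
    set_motor(0.0)
```

The proportional gain and power clamp are placeholder values; on a real robot they’d be tuned to the drivetrain.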

I’m glad to hear my idea is not unique (and therefore not impossible/inaccurate). I’m a bit curious whether the image is close enough to flat to treat the deg/pixel value as a constant and still be accurate, or whether the edges should be treated with a different rate, etc.

Also, according to the datasheet the M206 has a field of view of 54 deg. Assuming linearity here, is that an accurate enough value to use?

Sorry if I’m asking for a handout a bit here; I’m just trying to avoid going overboard gathering a ton of sample images and doing a ton of trig, only to find out I could have just taken 54/320.

There is definitely some barrel distortion, and there exist algorithms to try to account for it.

That said, if you have images being processed at 15, 30, or 60 frames per second*, it doesn’t much matter: as you turn closer to the center of the object, you will receive new azimuth information that corrects for prior errors.

(*Our current prototype tracking code is running at 150 frames per second… can anyone beat that? It’s a shame the camera can’t go past 60.)
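That self-correcting effect is easy to see in a toy model: even with a deliberately mis-scaled degrees-per-pixel constant, turning by each new frame’s estimate shrinks the residual error geometrically. Everything here is illustrative, not real tracking code:

```python
def settle(true_angle, scale_error=1.2, frames=10):
    """Each 'frame' we re-measure the remaining angle and turn by our
    (mis-scaled) estimate; fresh measurements shrink the residual error.

    Returns the residual angle error after the given number of frames.
    """
    heading = 0.0
    for _ in range(frames):
        remaining = true_angle - heading     # new azimuth from this frame
        heading += remaining / scale_error   # turn using an imperfect constant
    return true_angle - heading
```

With a constant that’s 20% off, each frame still removes about five sixths of the remaining error, so the residual after one frame is a few degrees and after ten frames it’s negligible.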

I would start linear, and as a last resort you should be able to model it piecewise-linearly with probably just a few table lookups.
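A piecewise-linear correction could be as small as a sorted table of (pixel offset, measured degrees) calibration points with interpolation between them. The table entries here are made up for illustration; real ones would come from sample images of a target at known angles:

```python
# Hypothetical calibration table: (pixel offset from center, measured degrees).
CAL_TABLE = [(-160, -29.0), (-80, -14.0), (0, 0.0), (80, 14.0), (160, 29.0)]

def pixels_to_degrees(offset):
    """Interpolate the angle for a pixel offset using the calibration table."""
    # Clamp to the table's range
    if offset <= CAL_TABLE[0][0]:
        return CAL_TABLE[0][1]
    if offset >= CAL_TABLE[-1][0]:
        return CAL_TABLE[-1][1]
    # Walk adjacent pairs and linearly interpolate within the matching segment
    for (x0, d0), (x1, d1) in zip(CAL_TABLE, CAL_TABLE[1:]):
        if x0 <= offset <= x1:
            t = (offset - x0) / (x1 - x0)
            return d0 + t * (d1 - d0)
```

Note the end segments use a slightly steeper slope than the middle, which is the sort of shape barrel distortion tends to produce.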