How to find your robot's angle relative to a vision target


I am part of a rookie team and I am starting to do some vision tracking, but I am unsure how to find the angle of the robot relative to the vision target. I am using OpenCV.

How do you use vision to line up with the target, not just angle toward it?

My idea is to use fitLine and fancy math.


Also, if your goal is just to find how aligned you are with the target, you don't need the angle itself: the apparent distance between the two targets in the image shrinks both the farther away you are and the more angled you are relative to them, so you can use that spacing directly as an alignment cue.


The vision ScreenSteps have a lot of content to help you with vision:

Maybe next year your team should invest in a Limelight.


It is possible to figure out where something is by looking at where it appears in the image and at its apparent size. It will get smaller as it gets farther away, and, assuming your camera is mounted above what you're trying to recognize, it will also move higher in the image.
Similarly, the farther to the left or right it is, the greater the angle.

Figuring out the relationship between those things is really a matter of experiment, because the attributes of your camera, where it is placed on the robot, and how it is aimed all play a part. Luckily, it can all be figured out experimentally: place the object at a bunch of different points relative to your camera and take a bunch of measurements. (Remember that you need to sweep a semicircle around your camera, and not assume that just because something is 10 ft in front of the robot it is also 10 ft in front of the camera.)

However, you probably do not need all that math or experimentation. If your camera is pointed directly at where you want the thing to be and the thing is to the left of that, turn the robot to the left until the thing is centered. Ditto on the right. If it is smaller than you want it to be, drive forward. Lather, rinse, and repeat.

That is one of the uses of a PID loop.
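A minimal sketch of that turn-until-centered loop (the function name, thresholds, and motor powers here are all hypothetical, not the poster's actual code):

```python
def drive_command(target_x, target_area, desired_area, deadband=0.05):
    """Bang-bang alignment step. target_x is the target center
    normalized to [-1, 1] (negative = target is to the left);
    areas are in pixels. Returns (forward, turn) motor powers."""
    turn = 0.0
    forward = 0.0
    if target_x < -deadband:
        turn = -0.3          # target is to the left: turn left
    elif target_x > deadband:
        turn = 0.3           # target is to the right: turn right
    elif target_area < desired_area:
        forward = 0.4        # centered but still small: drive forward
    return forward, turn     # lather, rinse, repeat every camera frame
```

Calling this every frame and feeding the result to the drivetrain gives the "repeat until centered" behavior; replacing the fixed 0.3/0.4 outputs with values proportional to the error is exactly the PID idea.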

One suggestion: computer vision is not 100% accurate. You probably want a second feedback mechanism if you're doing something like placing a hatch panel, just to verify that you are, indeed, where you need to be. That mechanism could be a human player watching the video, a proximity sensor, or feelers.


Thanks for the great advice!


First of all, I want to say: make sure you have an electronics breadboard put together and that you can deploy code to the roboRIO, control motors, and use some of the more basic sensors like limit switches, potentiometers, and encoders. But as long as your team has those aspects under control…

It may be possible to find the angle with just the image, but the way we have done it is as follows.
Get a good gyro like the navX, which does not drift as much as some of the others, and know the horizontal field of view of your camera. I'm not familiar with OpenCV, but you want to find the center of the target in the image and do a bit of math to normalize it: if the center of the target is in the center of the image, targetX = 0; if it is at the far right edge of the image (it probably wouldn't be fully identified that far over, but you get the point), targetX = 1; at the far left edge, targetX = -1; and everything in between.
Then multiply targetX by half the horizontal field of view and you have the angle to the target relative to your camera. (This is a linear approximation; the exact mapping involves a tangent, but it is close near the center of the image.)
Add that to the gyro angle and you'll have the angle to the target relative to the field, except that it may not be in the range 0 to 360. To fix that, do targetAngleRelativeToField = targetAngleRelativeToField % 360, where '%' is the modulo operator. (Note that in Java, '%' can return a negative result for negative inputs, so you may need to add 360 before taking the remainder.)
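In code, that normalization and wrap-around might look like this (a sketch; the function names are made up, and the atan variant is the geometrically exact mapping if the linear approximation isn't accurate enough):

```python
import math

def camera_angle_deg(target_center_x, image_width, hfov_deg):
    # Normalize so the image center is 0, left edge -1, right edge +1
    target_x = (target_center_x - image_width / 2) / (image_width / 2)
    # Linear approximation: scale by half the horizontal field of view
    return target_x * hfov_deg / 2

def camera_angle_exact_deg(target_center_x, image_width, hfov_deg):
    target_x = (target_center_x - image_width / 2) / (image_width / 2)
    # Exact pinhole mapping (matters mostly near the image edges)
    return math.degrees(math.atan(target_x * math.tan(math.radians(hfov_deg / 2))))

def field_angle_deg(gyro_deg, camera_deg):
    # Wrap into [0, 360); Python's % is already non-negative for a
    # positive modulus, but in Java you may need to add 360 first
    return (gyro_deg + camera_deg) % 360
```

For example, with a 640-pixel-wide image and a 60° field of view, a target centered at pixel 640 (the right edge) is 30° to the right of the camera axis.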


If you are using OpenCV, the solvePnP function will tell you everything you need, and more. However, be cautious: it needs very accurate inputs in order to give a good answer, so you have to make sure you are actually getting the precise locations of the corners of the vision targets. Our team will be working on this ourselves over the next week. In the past, I've had pretty good results, estimating camera pose to within an inch or so using pretty cheap hardware, but that was in test conditions, not while actually driving a robot. We'll see soon how well that translates to the real world.

If you don't want to use solvePnP for some reason, you can also do some of the math yourself; it's all trigonometry and similar triangles. solvePnP is a sort of out-of-the-box solution, provided your inputs are good.

It might be, though, that you don't need a really great answer. solvePnP, which gives an estimate of full 3D position and orientation, might be more than you need. When all is said and done, you are probably trying to get onto the midline of the portal (i.e., put the portal directly in front of the robot) and then make sure that the target stays directly in front of you as you approach. If that is all you are doing, then all you need to know is that the left target and the right target are equidistant from that midline. Putting it a different way: do you actually need to know the angle, or do you just need to know whether it is greater than zero, less than zero, or approximately zero? If your targets are off to the left, rotate the robot in that direction; repeat until the robot is centered.


There are plenty of ways of finding the angle to a target relative to a camera, but I think that for 99% of FRC cases it's a completely unnecessary thing to do, and I'll explain why.

Rather, what if you got the position of the target's center relative to the width of the image, scaled from -1 to 1 (all the way to the left is -1, the center is 0, the right edge is 1), and then tuned your PID using that as your error? Much simpler, yet equally (if not more) effective.
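For instance (a sketch with made-up gains; kp/ki/kd would need tuning on the real robot):

```python
class TurnPID:
    """Minimal PID controller whose error is the target's normalized
    horizontal offset in the image, in [-1, 1]."""
    def __init__(self, kp=0.5, ki=0.0, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        # Output is the turn power to feed to the drivetrain
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each camera frame you'd compute the normalized offset, call update() with it, and send the result to the drivetrain's turn input.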

I’m in the process of writing a paper about this very topic. I’ll be sure to update this thread when I finish it.


When it comes to target angle, there are really two different angles to consider.

There is the “relative heading” of the vision target (which is a measure of how far off the centerline of the robot the target is). You can use this to turn the robot towards the target.

Then there is the “orientation” of the target, which is a measure of the angular difference between the robot’s primary axis and the target’s primary axis. This angle would be zero if the target and robot are parallel (regardless of relative heading).

The two angles get used differently.

If you only have the relative heading (obtained by simply measuring the position of the target on the camera screen), you can get TO the target, but when you do, you may be badly aligned, hitting the cargo ship/rocket with the corner of your bumper.

If you also have the target orientation, you can use that information to also move sideways (with an omni drive) to ensure that you hit the target squarely on. Orientation angle can be determined from the amount the target is distorted/skewed by being closer on one side vs the other.
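One rough way to estimate that orientation from the skew, assuming two targets of known real-world height and separation (all names and numbers here are hypothetical, and solvePnP gives you the same information more rigorously): the nearer target appears taller, so the difference in apparent heights reveals the depth difference across the pair.

```python
import math

def orientation_from_skew_deg(left_height_px, right_height_px,
                              target_height_m, separation_m, focal_px):
    """Estimate how far the robot's axis is rotated away from the
    target wall's normal, using the skew between the two targets."""
    # Pinhole model: distance = focal * real_height / pixel_height
    d_left = focal_px * target_height_m / left_height_px
    d_right = focal_px * target_height_m / right_height_px
    # Depth difference across the known separation gives the angle;
    # clamp against measurement noise pushing asin out of range
    ratio = max(-1.0, min(1.0, (d_right - d_left) / separation_m))
    return math.degrees(math.asin(ratio))
```

With equal apparent heights the estimate is zero (you are square to the wall); the sign tells you which way to strafe with an omni drive.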

Here is an animation we created a few years ago for FTC that uses target orientation to plan an optimal approach. It used Vuforia to do the target localization.


You might try this whitepaper from our team:

One of the sections goes into the math of finding the angles. But the short version is (as someone said earlier): solvePnP() will give you the relevant angles.


Hi David,

Have you used the solvePnP function on the Raspberry Pi with the new FRC image?

I'm interested in the performance: do you think it will be usable in competition?




I have not. I did use OpenCV on a Raspberry Pi for the experiments I mentioned earlier, trying to find a square target on a piece of paper from a distance of 2-3 feet. The results were usually within a few inches of the actual distance. That should stay pretty consistent, because the accuracy is determined by OpenCV, not by the Raspberry Pi or any particular platform. What may be different is the use of a low camera sensitivity and retroreflective tape, versus what I was doing, which was finding a printed green square on a white background. I think the real key is getting the exact locations of the corners.

We should be getting to this problem shortly, perhaps as soon as this weekend. If I get results, I'll post them. (Then again, software always takes longer than estimated, so maybe this weekend is optimistic.)


Limelight is the biggest waste of money a team could spend


No it’s not.


Limelight is a great option for many teams: especially new teams, those with few programming members or mentors, or just teams who'd prefer to spend their time on other software tasks. However, I think there's also definitely something to be said for writing your own vision processing software. My team does every year, and it's a great way to teach computer vision skills, for which there's a pretty huge job market at the moment. We also like the challenge, and in a game where vision is almost certainly going to be a key feature, the bragging rights of being able to say “we did that ourselves” are also great.


I have to disagree. Why do you say that?