PhotonVision and tag heights

Tag heights on the 2023 field are all at roughly robot-camera height. Doesn’t a very low azimuth to the targets result in a large “run over rise” magnification of camera position/tilt errors into large distance-calculation errors?
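To make the concern concrete (illustrative numbers, not measurements): with the trig method, range is

d = Δh / tan(θ)

where Δh is the camera-to-tag height difference and θ is the vertical angle to the tag. The sensitivity dd/dθ = -Δh / sin²(θ) blows up as θ → 0. For example, with Δh = 0.25 m, a tag observed at θ = 5° gives d ≈ 2.86 m, while a 1° tilt error (θ = 6°) gives d ≈ 2.38 m, roughly half a meter of error from one degree of tilt.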

We’re having a team design discussion on ideal PhotonVision camera placement height.

Why did the field design incorporate this seemingly non-ideal height configuration for the tags (at least for distance calculations)? Were they trying to accommodate aiming over pose determination? Or is PhotonVision somehow “immune” to this effect?

Thanks.

You don’t need to use the height delta to compute range with AprilTags – you can use 3D mode, which gives you the target’s 3D position and orientation relative to your camera. It does require camera calibration, though.
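For reference, here’s a minimal PhotonLib sketch of what 3D mode hands back (the camera name is a placeholder, and this assumes a calibrated camera with the pipeline in 3D mode):

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;
import edu.wpi.first.math.geometry.Transform3d;

PhotonCamera camera = new PhotonCamera("frontCamera"); // placeholder name
PhotonPipelineResult result = camera.getLatestResult();
if (result.hasTargets()) {
    PhotonTrackedTarget target = result.getBestTarget();
    // Full camera-to-tag transform from the solvePnP-based 3D pipeline;
    // no height delta involved, just the calibrated camera model.
    Transform3d cameraToTag = target.getBestCameraToTarget();
    double rangeMeters = cameraToTag.getTranslation().getNorm();
}
```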

PhotonVision must be using the height delta internally for that, no? I understand that it also uses target size, but how does it handle the inherent “low azimuth” problem for trigonometric distance calculation?

Why FIRST chose such a minimal use of AprilTags is a good question, and one can only speculate. Possibly because it is the first year using them. But the limited use is disappointing. Some additional tags, at placements other than robot height and the ends of the field, would have been helpful for more comprehensive use in localization.

Many here were disappointed when FIRST announced the switch from the 36h11 tags to 16h5, in part because 16h5 offers only 30 unique tags, in contrast to the over 500 in the 36h11 family. But, as we now know, the field didn’t even come close to using the 30 available.

Edited to add: I’ll defer to those more knowledgeable in the workings of 3D use of AprilTags, but I believe the size and perceived shape (i.e., the tag does not appear square when viewed from any position not directly orthogonal to the center of the tag) are used instead.


Nope! See OpenCV’s Perspective-n-Point (PnP) pose computation tutorial.

On mobile, so not particularly long-winded, but basically you can use the tag corners in the image, plus your camera’s focal length, to figure out where the tag must be located to have been observed at that set of corner locations in the image. A basic 1D version of this would be using the width/height of the tag in pixels to calculate distance.
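A sketch of that 1D version, with made-up numbers (the real solver uses all four corners at once, per the PnP link above):

```java
// 1D pinhole sketch: a tag of known physical size shrinks in pixels with range.
// range = (physical width * focal length in pixels) / observed width in pixels
double tagWidthMeters = 0.15;  // approximate printed tag width (assumed)
double focalLengthPx  = 600.0; // fx from your camera calibration (assumed)
double tagWidthPx     = 80.0;  // measured tag width in the image
double rangeMeters    = tagWidthMeters * focalLengthPx / tagWidthPx; // ~1.13 m
```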


You’re referring to size. Yes, that can be used. The question is how PhotonVision mixes that with horizontal angle to improve the estimate. The point is that shallow azimuths don’t help much (and a zero azimuth produces zero information).

Why does that happen if you’re using solvePnP?

As for Photon, it doesn’t care about the horizontal angle? I’m not sure I understand.


The answer is in the link that @thatmattguy posted. The solvePnP algorithm doesn’t really have any internal concept of “size” or “angle”. It’s just trying to find the transformation that best maps points in 3D space to pixels on the focal plane array.
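To make that concrete, here’s roughly what the solvePnP call looks like through OpenCV’s Java bindings (all numbers are placeholders; corner ordering must match between the two point sets):

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point;
import org.opencv.core.Point3;

// Tag corners in the tag's own frame (meters), z = 0 on the tag plane.
MatOfPoint3f objectPoints = new MatOfPoint3f(
    new Point3(-0.075,  0.075, 0), new Point3( 0.075,  0.075, 0),
    new Point3( 0.075, -0.075, 0), new Point3(-0.075, -0.075, 0));
// The same corners as detected in the image (pixels).
MatOfPoint2f imagePoints = new MatOfPoint2f(
    new Point(310, 220), new Point(390, 222),
    new Point(388, 300), new Point(308, 298));
// Intrinsics (fx, fy, cx, cy) from camera calibration.
Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
cameraMatrix.put(0, 0, 600.0); cameraMatrix.put(1, 1, 600.0);
cameraMatrix.put(0, 2, 320.0); cameraMatrix.put(1, 2, 240.0);
MatOfDouble distCoeffs = new MatOfDouble(); // assume an undistorted image
Mat rvec = new Mat(), tvec = new Mat();
Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
// tvec now holds the tag's 3D position in the camera frame; rvec its rotation.
```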

FYI, I think you’re misusing the word “azimuth”. I suspect you actually mean elevation.


I think you’re right, I meant elevation. Thanks! I think I see what you are saying: PhotonVision tries to solve distance and angle “all at once”. But I don’t think you can escape the fact that there is (compass) angle information only in the shape, and distance information only in the size and elevation data. So a very low elevation results in a very low (or even negative) contribution to the distance determination.

Well, here’s a thought experiment: what does elevation mean?

PhotonVision has no way of knowing what “level” is with respect to the floor or gravity. So, what if you tilt your camera up? The AprilTag is now lower in the frame. What does this change?

What is the purpose of the camera and target height entries in this parameter list, then? Using Target Data - PhotonVision Docs

PhotonVision takes camera tilt into account in the same API: Using Target Data - PhotonVision Docs

I think there is some confusion between the two different ways of estimating your position. The one that you’ve discovered uses trigonometry to figure out only the XY position of the target. And like you said, it does actually depend on the height difference. The second one solves the Perspective-n-Point problem that I linked above to give you the full 3D position and orientation of the target relative to the camera. The latter is probably the one that you want to be using this year.
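For the first (trig) method, PhotonLib’s helper makes the height dependence explicit. A sketch with example numbers you’d replace with measurements from your own robot (`target` is a PhotonTrackedTarget from the pipeline result):

```java
import org.photonvision.PhotonUtils;
import edu.wpi.first.math.util.Units;

// Trig range from the height delta and pitch angles.
double rangeMeters = PhotonUtils.calculateDistanceToTargetMeters(
    Units.inchesToMeters(20.0),                 // camera lens height (example)
    Units.inchesToMeters(18.0),                 // tag center height (example)
    Units.degreesToRadians(15.0),               // camera pitch (example)
    Units.degreesToRadians(target.getPitch())); // observed pitch to the target
```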


My suspicion is that these are not separate and distinct, but are instead “mixed” by PhotonVision. And that is my question: will a PV camera position that maximizes the horizontal distance/angle between the robot camera and the AprilTag provide any advantage (added accuracy) in the distance-calculation sense? Thanks.

In what way are they mixed, exactly? But to answer your question: not really, as far as I can imagine.

(PhotonVision dev here, btw)

Does PhotonVision use two different ways of calculating distance depending on which part of the API is used?


Yes. One will just return a distance value based on trigonometry; the other will return a full 3D pose using solvePnP / 3D pose estimation.

To my understanding, you can represent the way perspective warps the shape of an AprilTag as a set of matrix transformations. You can use these transformations to compute your pose.
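Roughly, in standard pinhole-camera terms (generic notation, not anything PhotonVision-specific):

s · [u, v, 1]ᵀ = K · (R · X + t)

where X is a tag corner in 3D, K is the calibrated camera matrix, and R and t are the rotation and translation being solved for. solvePnP searches for the R and t that best reproduce the observed pixel coordinates (u, v) of all four corners.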

I’m not sure why you’re fighting this so much…

Again, I’m not super knowledgeable on how the guts of this function, but think about what an AprilTag looks like when you’re close vs. far away. If you’re further away, the tag shrinks and the points get closer together.

Is it really minimal? To me it feels like a pretty good number of vision targets: four per side means most of the time you have at minimum one in view.