PhotonVision Beta 2023: AprilTags

Arducam with OV5647 Pi camera interface (model B00350) works fine albeit somewhat slowly. Cable management is a problem.

Just a warning before you order anything. Make sure the resolution goes as low as you’d like, especially if you are going to use more than one. We bought a couple ELP cameras last year for our Jetson and discovered that their lowest resolution/framerate combination used so much USB bandwidth that we had trouble using more than one at a time.


For my OV9281, the best performance requires MJPEG encoding, which encodes RGB anyway. Accessing it via OpenCV will give you a frame with 3 identical color channels.

I’m curious if keeping a strided view into the frame would give us any performance benefits. I imagine any amount of time saved preventing the copy would be peanuts compared to the time required for AprilTag detection.
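For what it's worth, a quick NumPy sketch of what that strided view would look like (the frame here is a stand-in for an OpenCV capture; since the three channels are identical, slicing one channel out is equivalent to a grayscale conversion without the copy):

```python
import numpy as np

# Stand-in for an MJPEG-decoded frame: 3 identical "color" channels (H x W x 3).
gray = np.full((480, 640), 127, dtype=np.uint8)
frame = np.stack([gray, gray, gray], axis=-1)

# Option 1: strided view into channel 0 -- no copy, shares memory with the frame.
view = frame[:, :, 0]
print(np.shares_memory(view, frame))   # True
print(view.flags["C_CONTIGUOUS"])      # False: element stride is 3 bytes

# Option 2: an explicit copy, roughly what a cv2.cvtColor grayscale
# conversion hands back (here the channels are identical, so the values match).
copy = np.ascontiguousarray(view)
print(np.shares_memory(copy, frame))   # False
```

One caveat: the view is non-contiguous, and C bindings that expect a packed grayscale buffer may force a copy anyway, which fits the intuition that any savings would be peanuts next to detection time.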


Yes, but not on PV.
Of all the configurations I have tested, it provides the highest FPS and detection rate.
That said, the version I used was a MIPI-CSI version, but it attaches as a V4L2 device, not a PiCamera.

It is astounding how well it sees the tags. Even when the displayed image looks almost black, it can still identify the target.
I just wish my Orange Pi 4 LTS had the same Pi Camera MIPI-CSI connection, I would love to test the performance with that!

This might do the trick.

FIRST just changed the tag size and family…

Time to redo all the testing. Related…


PV will change the size and family on the beta, but that and other features ready at this moment will likely be the LAST release compatible with 2022 WPILIB. We’re having issues backporting the new related WPILIB features.

Expect both the last 2022 beta and the first 2023 beta in the next few days.


How bad is this for the development effort?

Color cameras are magic.

There’s quite a lot of possible variability based on the silicon fab, Bayer mask deposition, and even how well the optics are matched to the sensor. Theoretically, a lot of the calibration should be done by the manufacturer and burned into the camera ROM, so that things “just work” for the end user. Obviously, the quality of that depends on how diligent the manufacturer is. If you really care about color, you’ll want to calibrate it yourself, using a calibrated color target and the raw, undeBayered data (definitely accessible with the Raspberry Pi HQ camera-- not sure about others).


Where’s a good place to get the 16h5 tags to print? Thanks.

We now have an upper bound on the number of tags in use: 15 per half of the arena.

Should be easier to get higher frame rates at lower resolution to reduce blur; maybe even 320x240 or so?

Is bit-error rejection already in the code, or is that what was referred to as being removed from the library because nobody wanted to spend the CPU time on it? Will that be a user-adjustable parameter?


And HOW are we to properly size and print these tags? They are a blurry mess!

Actually, if you use the PS file (easily converted to PDF), it’s essentially the correct size (it was for me, at least).

Thanks, I will give that a shot.

[EDIT] BINGO, worked like a charm!

With the Tag16h5 tags, I can extend my detection to 17 feet reliably. This is at 640x480.
With previous tags I was limited to about 12 feet. No other tweaks were made to achieve this. Further testing may yield improvements.


How’s your false positive rate?

So far, in my basement workshop, non-existent. More testing to come.

Setting a threshold of 0 for hamming distance (how many error bits were corrected) for this tag family might be a good idea to help reduce false positives, as there simply aren’t as many error-correction bits available.
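In case it's useful, a minimal Python sketch of that zero-hamming cutoff as a post-filter. The `Detection` record here is a hypothetical stand-in; real detector bindings such as pupil_apriltags report a `hamming` field per detection:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a detector result; bindings like
# pupil_apriltags expose the same `tag_id` and `hamming` fields.
@dataclass
class Detection:
    tag_id: int
    hamming: int  # number of error bits the decoder corrected

def reject_corrected(detections, max_hamming=0):
    """Keep only detections decoded with at most `max_hamming` corrected bits."""
    return [d for d in detections if d.hamming <= max_hamming]

raw = [Detection(5, 0), Detection(3, 1), Detection(7, 2)]
print([d.tag_id for d in reject_corrected(raw)])  # [5]
```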

Part of the reason I may not be seeing false positives is not that they aren’t there, but that my filtering for specific tags drops them out. For instance, I only search for tag ID #5 in the results.


That totally helps, yeah. At least for me…
Working on testing hamming distance limiting now as well


Did someone say max error bits and decision margin sliders in a new photon beta?? Also 16h5 support

(Edit) To expand on the above: max error bits is the maximum number of bits the detector is allowed to correct in the tag; 0-1 seems good for 16h5 because the family is so small that there’s little information to reconstruct from. The decision margin cutoff changes how much “margin” the detector has left before it rejects a tag; increasing it rejects poorer-quality tags. Without it, this happens lol
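Roughly what those two sliders amount to, sketched as a Python post-filter. The threshold values and the `Detection` record are illustrative, not PhotonVision's actual defaults; the underlying AprilTag C library reports `hamming` and `decision_margin` per detection:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a detector result with the two fields the
# sliders act on: corrected error bits and decode confidence margin.
@dataclass
class Detection:
    tag_id: int
    hamming: int            # error bits corrected during decode
    decision_margin: float  # confidence margin; higher = cleaner decode

def filter_detections(dets, max_error_bits=0, min_decision_margin=35.0):
    # Reject detections that needed too many bit corrections or decoded
    # with too thin a margin -- both common sources of 16h5 false positives.
    # The 35.0 default is an illustrative guess, not a recommended value.
    return [d for d in dets
            if d.hamming <= max_error_bits
            and d.decision_margin >= min_decision_margin]

raw = [
    Detection(5, 0, 80.0),   # clean decode: kept
    Detection(12, 1, 60.0),  # needed a bit correction: rejected
    Detection(9, 0, 10.0),   # thin margin: rejected
]
print([d.tag_id for d in filter_detections(raw)])  # [5]
```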