PhotonVision 2023 Official Release

PhotonVision wordmark stacked next to the photonvision logo on a blue background

After months of hard work and beta testing, the PhotonVision team is excited to announce the 2023 release of PhotonVision!

We’ve focused on adding support for AprilTag detection and increasing the stability of our software.

You can view instructions for installing the latest version of PhotonVision here. We recommend all users completely reimage/reinstall PhotonVision on their coprocessors for this release due to the number of changes from previous releases.

New features include:

  • AprilTag detection and best-in-class pose estimation support that allows you to detect and ignore ambiguous tag measurements, along with a more thoroughly documented camera calibration system to help you get up and running with accurate tag pose estimation as fast as possible.

  • PhotonLib updates to support getting your field relative robot pose, and other convenience methods to use with AprilTags. These will help ensure your team has a smooth and easy transition over to AprilTags.

  • RobotPoseEstimator class that will combine multiple tag measurements using different “strategies” so you can get your field relative pose in only a few lines of code.

  • Completely rewritten GPU acceleration support. This brings GPU acceleration for retroreflective tape vision and now AprilTags to the Pi 3 and, for the first time ever, to the Pi 4. This also unlocks support for GPU acceleration on many new global shutter cameras, thanks to libcamera.

  • Mini PC and Orange Pi 4 / 5 support. Using a Mini PC (Beelink N5095) or the Orange Pi 5, you can do AprilTag detection at 720p@30FPS with 30ms latency. See our hardware page for more information.
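The ambiguity-filtering idea described in the bullets above can be sketched in plain Java. This is a hypothetical, self-contained illustration only: the `TagMeasurement` type, the helper name, and the 0.2 cutoff are assumptions made for the sketch, not PhotonLib's actual API (see the PhotonLib documentation and the RobotPoseEstimator class for the real interfaces).

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the "lowest ambiguity" idea: each detected tag
// carries an ambiguity score (0 = unambiguous), and we keep only the most
// trustworthy measurement below a cutoff. Real PhotonLib types differ.
public class AmbiguityFilter {
    // Minimal stand-in for a tag measurement; not a PhotonLib class.
    public record TagMeasurement(int tagId, double ambiguity) {}

    // An assumed cutoff for the sketch; tune for your own camera setup.
    static final double AMBIGUITY_CUTOFF = 0.2;

    // Drop measurements above the cutoff, then take the least ambiguous one.
    public static Optional<TagMeasurement> bestMeasurement(List<TagMeasurement> targets) {
        return targets.stream()
                .filter(t -> t.ambiguity() < AMBIGUITY_CUTOFF)
                .min(Comparator.comparingDouble(TagMeasurement::ambiguity));
    }
}
```

In real code you would feed the surviving measurement into a pose-estimation strategy rather than use it directly; the point here is only the filter-then-pick-best shape.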


Take a moment to review our comprehensive documentation, and if you have any questions, feel free to post here on ChiefDelphi (please use the PhotonVision category) or join our Discord!


We want to thank everyone who contributed to PhotonVision in any way. This is a volunteer-run project, and community code contributions, beta testing, and feedback have been invaluable in making PhotonVision what it is. We’re always trying to improve, so if you have any issues or questions, please let us know on Chief Delphi or our Discord.

We understand that the build season is extremely busy for students and teams, but if you do run into issues with PhotonVision, please file an issue on our GitHub or let us know on Discord or Chief Delphi. Even if you don’t have the time to troubleshoot or debug, a bug report (especially with logs, which you can export from the settings page) is extremely helpful for us.

We wish you the best of luck in the 2023 season!

PhotonVision by the Numbers

  • 351 files changed and 41,288 new lines of code since 2022

  • 37 total contributors (all time)

  • 10 new contributors in 2023 :tada:

  • ~3,200 total beta downloads

  • ~750 unique documentation page visits / day in December


Thanks. Will use.


Excited to see all the awesome things teams do with the new AprilTags and Photon this season!


Does it support Limelight2?

This is the real question.

Limelight 2, yup! We already provide a Limelight image on our releases page. If you mean Limelight 3… should also be easy, but we haven’t had the chance to try yet. The Limelight 3 is just a CM4, and we know Photon works on CM4, so in theory it’s as easy as flashing the Raspberry Pi image. We’ll see once LL3s start shipping.


This is great to hear! Thank you!

Nice! Thanks

Will PhotonVision be adding something to compete with Limelight 3’s new learning-based vision?


The best part about Photon is that it’s open source! If you wanna develop some sort of ML pipeline in your own fork, you totally can, and I would personally consider it for upstreaming. I haven’t personally had experience with the subject before, but I know @mdurrani834's team did a couple years ago.

What’s the value proposition for it compared to more traditional methods (i.e., what useful things can it enable on an FRC field that other methods can’t)? I think that would determine whether it’s worth expending development effort to add to PhotonVision.


Does it work with LabVIEW?


The raw NT data is accessible from LabVIEW, but there is no LabVIEW vendordep (so no pose estimator class and utility functions).


There was one team that had success a year or two ago, but nothing mainlined yet. Our packet structure is at least documented for you to reimplement, though. On a plane rn, but I’ll send the link when I find it.

EDIT: found it! Photon Vision Lib for LabVIEW


It is designed to be run on a coprocessor such as a Raspberry Pi, among others.

Messing with our (open source!) canned PhotonLib AprilTag examples and the field vision sample images, and getting some really awesome-looking results! The two pose estimates from each tag are so close they are literally overlapping, LOL. Visualization is using the awesome AdvantageScope.


Yes. The PhotonVision LabVIEW code has been updated to format the new data packet. Use this repository — GitHub - jsimpso81/PhotonVisionLabVIEW: PhotonVisionLib for LabVIEW


Awesome work thus far. @mdurrani834 is there any development plan for new features to be added to future releases?

In general we try to prioritize support and bug fixes during the season. Since reliability is so important in this period, we try to avoid big changes, and right now there’s nothing super big on the roadmap, although if something seriously needed turns out to be missing, that could change.

One exception: a new image streaming system and an alternative AprilTag detector that can be a little faster are both mostly working, but they got cut from this release due to reliability issues. I wouldn’t be surprised to see those in the nearish future.


Anything specific in mind? Usually we will add features that are game relevant, like hue inversion.