PhotonVision 2022 Official Release

After months of hard work and beta testing, the PhotonVision team is excited to announce the 2022 release of PhotonVision! The features in this release have been contributed by people from all over the FRC community, and support for PhotonVision is provided continuously by volunteers on our Discord server and here on Chief Delphi. If you need any help, please ask on either of those platforms!

We’ve focused on adding a ton of new features, including ones designed to help in the 2022 game:

Colored Shape pipelines


The new “colored shape” pipeline type allows users to detect objects based on their shape. You can choose between circle, triangle, and polygon detection. Just click the new Pipeline Type drop-down next to your pipeline name and select “Shape”. In 2022, we expect this feature will mostly be useful for ball detection. The colored shape pipeline type is documented on this page and this page.

Multi-target grouping


PhotonVision has always supported grouping two targets (which is what makes it work with 2019-style targets). Because the 2022 targets include many separate pieces of tape, PhotonVision now supports grouping an arbitrary number of targets.

PhotonLib improvements
PhotonLib now sends the corners (in pixels) of each target’s minimum-area rectangle, which enables more advanced processing of detected retro-reflective tape segments.
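
For Java teams, here is a minimal sketch of reading those corner points through PhotonLib. The camera name and the corner accessor are assumptions (PhotonLib releases have used slightly different method names for the corner list), so check the PhotonLib API docs for your version:

```java
import java.util.List;

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;
import org.photonvision.targeting.TargetCorner;

public class CornerExample {
    // "gloworm" is a placeholder; use the camera nickname shown in the PhotonVision UI.
    private final PhotonCamera camera = new PhotonCamera("gloworm");

    public void printBestTargetCorners() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (!result.hasTargets()) {
            return; // nothing detected this frame
        }
        PhotonTrackedTarget target = result.getBestTarget();
        // Accessor name is an assumption; verify against your PhotonLib version's docs.
        List<TargetCorner> corners = target.getCorners();
        for (TargetCorner corner : corners) {
            System.out.println("Corner at (" + corner.x + ", " + corner.y + ") px");
        }
    }
}
```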

Documentation Improvements


The documentation has progressively improved over the past year, both to cover new features and changes, and to better describe hardware selection, troubleshooting, networking, and LED control. The documentation has also received some visual and organizational overhauls that should make it easier to find content.

Pre-made PhotonVision images for Raspberry Pi, Gloworm, Limelight, and SnakeEyes
There are now pre-made images that you can flash directly onto an SD card or vision module that will work on a stock Raspberry Pi, a Gloworm or Limelight, or a Raspberry Pi with SnakeEyes.

PhotonVision on Romi
Since a little after last year’s release it’s been possible to run PhotonVision on a Romi. You can read the PhotonVision on Romi installation guide here.

PhotonVision on Limelight
It’s also now possible to install PhotonVision on the Limelight, which is useful if you want to take advantage of PhotonVision’s full multi-camera support or its faster processing at higher resolutions. The installation process is now documented here. A processing speed comparison table is reproduced below:

| Resolution | PhotonVision on Limelight, Gloworm, or Pi 3/Zero 2W✝ with Pi Camera V1 | Limelight |
| --- | --- | --- |
| 320 x 240 | 90 FPS | 90 FPS |
| 640 x 480 | 85 FPS | Unsupported |
| 960 x 720 | 45 FPS | 22 FPS |
| 1920 x 1080 | 15 FPS | Unsupported |

Note: on the Pi Camera v2, PhotonVision can reach up to 120 FPS at 320x240.
✝ On the Pi Zero 2W, expect approximately 20-30% lower performance due to its lower-clocked CPU (1 GHz down from 1.4 GHz).

Networked device discovery


You can now view the IPs of your co-processor and any roboRIOs on your network from the PhotonVision web interface. The UI also displays NetworkTables connection info, which is useful for making sure your co-processor can talk to your roboRIO.

Offline update


We’ve made it easy to update PhotonVision without having to re-image your device. Just go to the “Settings” tab, click the “Offline Update” button, and upload the latest JAR file from the GitHub releases page.

Feature matrix
There are a lot of features to keep track of, so below is a feature matrix that also serves as a comparison with the Limelight software, as of their initial 2022 release. We hope this will be useful to people who haven’t been closely following PhotonVision development!

| Feature | PhotonVision | Limelight |
| --- | --- | --- |
| Retro-reflective tape tracking | :white_check_mark: | :white_check_mark: |
| Colored shape tracking | :white_check_mark: | * |
| Full multi-target tracking | :white_check_mark: | |
| Multi-target grouping | :white_check_mark: | :white_check_mark: |
| Multi-target outlier rejection | :white_check_mark: | |
| Target offset point | :white_check_mark: | :white_check_mark: |
| NetworkTables interface | :white_check_mark: | :white_check_mark: |
| WPILib “vendor dependency” interface | :white_check_mark: | |
| Vendor dependency with helpers for common calculations | :white_check_mark: | |
| Programmatic LED control (select hardware) | :white_check_mark: | :white_check_mark: |
| “3D” (PnP) target tracking | :white_check_mark: | :white_check_mark: |
| Built-in camera calibrator | :white_check_mark: | |
| GPU acceleration (select hardware) | | :white_check_mark: |
| Secondary driver camera | :white_check_mark: | :white_check_mark: |
| Arbitrary pipelines on multiple cameras | :white_check_mark: | |
| Python scripting support | | :white_check_mark: |
| GRIP support | | :white_check_mark: |

* The Limelight can theoretically track colored shapes, but as of this posting, it does not have the capability to detect specific shapes like circles, triangles, and polygons.

Conclusion
We’re really excited that teams will have access to these features, and we have even more new things on the way. If you just want to follow the project more closely (or contribute) then we’d love for you to join our Discord!

We’d also love to hear requests for new features in this thread!

33 Likes

Are Python and GRIP support on the PhotonVision roadmap?

GRIP is effectively unmaintained, so we don’t expect to support it. If there is real demand for Python scripting (readers of the thread, speak now!) then it may become a priority. It’s definitely a nontrivial feature, so developer time (which we’re always short on) is going to be the limiting factor, and that’s why we’d like to see a lot of demand before proceeding. We also think that about 90% of current FRC vision processing use cases are covered by what’s built into PhotonVision, and if users are at the point where they need something more, they’re probably also experienced enough to deal with something like WPILibPi to deploy their custom code. We could be wrong about this, so user feedback is key.

8 Likes

I remember people recommending the Pi Camera v1 because it works better with PhotonVision. I don’t remember their name, but I think one of the devs confirmed it too. Are there other things to consider besides FPS, or does the new PhotonVision work better with the Pi Camera v2?

Here’s a quick rundown of the field-of-view tradeoffs to consider.

2 Likes

Speaking of Python, I’ve just cut a new release (2022.1.4) that adds RobotPy support! Thanks to VCubed (idk your CD tag) for pushing this through.

2 Likes

Is there any way to integrate this with LabVIEW?

Everything is on NetworkTables, so that won’t be a problem. It might not be as easy as using PhotonLib, but it should be fine, especially for basic things like getting the yaw of a target.
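
As an illustration of what “everything is on NetworkTables” means, here is a sketch (shown in Java, but the same entries can be read with the LabVIEW NetworkTables VIs) of pulling the best target’s yaw straight from the raw table. The camera name and the exact entry keys (“hasTarget”, “targetYaw”) are assumptions based on the documented NetworkTables layout, so verify them against the PhotonVision docs for your version:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class RawNetworkTablesExample {
    public void printTargetYaw() {
        // "myCamera" is a placeholder; use the camera's nickname from the PhotonVision UI.
        NetworkTable cameraTable = NetworkTableInstance.getDefault()
                .getTable("photonvision")
                .getSubTable("myCamera");

        // Entry names assumed from the PhotonVision NetworkTables docs; verify for your version.
        boolean hasTarget = cameraTable.getEntry("hasTarget").getBoolean(false);
        double yawDegrees = cameraTable.getEntry("targetYaw").getDouble(0.0);

        if (hasTarget) {
            System.out.println("Best target yaw: " + yawDegrees + " degrees");
        }
    }
}
```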

4 Likes

Do you have a known issue with Pi Cameras? I have tried two different Pi Cameras (a v1 and a v2) with two different cables on two different Pis, and they see everything with a red hue. Is anyone else having this issue?

I was told to try the Pi Camera at a lower FPS to help with exposure time, but I haven’t tried that yet.

Hue is affected by white balance, which you can adjust by changing the “Gain” slider (yes, this name is confusing if you don’t hover over the slider and look at what the tooltip says—we will fix this in the future).

There is a thread here where they talk about this issue. I am also experiencing it. You can adjust the gain slider on the input tab to correct the color hue; however, the image is still rather dark and colors are not very prominent. In the current version of PhotonVision you won’t be able to get the exposure much higher. The developers are aware of the issue, and I believe they will be making some changes in future releases.

I am getting an error when attempting to install PhotonLib in VS Code. Has anyone else encountered this lately? We had installed it on other laptops a few weeks ago.

Command ‘WPILib: Manage Vendor Libraries’ resulted in an error (network timeout at: …://maven.photonvision.org/repository/internal/org/photonvision/PhotonLib-json/1.0/PhotonLib-json-1.0.json

1 Like

Are you at a school or on a network that might be blocking that location?

It’s possible. I pulled the JSON file from another laptop and built the code. It seems to be OK now.

Hi,

We have been testing PhotonVision on our Limelight 2 with mostly nice results but have a couple questions:

  1. What is necessary to reliably save and retain pipeline settings on the Limelight? We’ve tried clicking the yellow Save button and also exporting and re-importing, but the pipeline settings (we only have one pipeline defined) still seem to frequently get lost for reasons that are unclear to us.

  2. Is 1280x720 resolution supposed to be supported on the Limelight 2? When we select it, the target pitch and yaw angles output by PhotonVision are incorrect.

  3. Is there currently, or planned, any way to specify a cropped ROI within the image? This would seem valuable both to speed up processing and to eliminate the chance of false target detections in impossible areas (e.g. ceiling / floor).

Thanks!

The first two are both bugs. Pipeline settings should save automatically (unlike the settings on the “Settings” page, where you need to click the save button, since you don’t want an autosave to happen in the middle of changing your IP). The second issue is possibly the result of an incorrect FOV adjustment on our end: that video mode uses different binning, which affects the FOV; we do adjust for this, but it’s possible we got it wrong.
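
To illustrate why an incorrect FOV assumption leads to incorrect angles, here is a generic pinhole-camera sketch (this is not PhotonVision’s actual code, and the numbers are made up):

```java
public class FovSketch {
    /** Yaw (degrees) of a pixel x-coordinate under an assumed horizontal FOV, pinhole model. */
    static double yawDegrees(double targetCenterXPx, double imageWidthPx, double assumedHfovDeg) {
        // Focal length in pixels implied by the assumed horizontal FOV.
        double focalLengthPx = (imageWidthPx / 2.0) / Math.tan(Math.toRadians(assumedHfovDeg / 2.0));
        return Math.toDegrees(Math.atan((targetCenterXPx - imageWidthPx / 2.0) / focalLengthPx));
    }

    public static void main(String[] args) {
        // Illustrative numbers only: same detected pixel, two different horizontal FOVs.
        double withAssumedFov = yawDegrees(900.0, 1280.0, 60.0);
        double withTrueFov = yawDegrees(900.0, 1280.0, 54.0); // what binning/cropping might really give
        // If the video mode bins or crops differently than assumed, the reported yaw is off.
        System.out.println("Assumed-FOV yaw: " + withAssumedFov + ", true-FOV yaw: " + withTrueFov);
    }
}
```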

The third (a cropped ROI) is very doable and seems like it might be a good candidate for the next slate of features we implement.

1 Like

Thanks for the prompt reply! Is there any recommended workaround for the settings issue (e.g. an SCP or REST call from the roboRIO to push the settings file automatically on every startup)?

You could certainly try to SCP from the roboRIO, but this isn’t an ideal solution.

Someone else just reported a data-loss bug where one of their config files got corrupted when the battery voltage was very low. This can show up in the logs, so if you could click “Export Settings” on the “Settings” page and send the ZIP file here, that would be super helpful. We’ll also try to get someone to see if they can reproduce this.

Thanks. That certainly sounds like it could explain it, as we seem to notice it upon restarting the entire robot, so maybe there is a brownout condition. We’ll export and send the logs next time we notice it.

If we were to SCP the settings, what is the target file / path that contains the settings?