PhotonVision Offseason Development 2024


Hi There!

The PhotonVision Team would like to welcome you to the first Offseason Development thread!

This year, we’re looking to document the great work the community of developers is doing to update and improve PhotonVision in preparation for next season. We’re hoping to post periodic updates as we hit major milestones, and field some Q&A while we’re at it.

What’s happened so far…

Champs 2024 & Early Offseason

This year, we had an awesome opportunity to present a talk at Championships. As a result, a bunch of developers were able to meet in person, get to know each other a bit better, and talk to teams about their experiences this year.

Between the feedback thread, conversations on Discord, and our in-person chats, we started to form what our summer development priorities could be.

Finally, in early June, we hosted an open call on the Discord server to talk through the development priorities with the team and get a rough assessment of people’s availability.

All meeting notes are available here.

While things are always subject to change, here are some projects that are bubbling to the top of the priority list:

Early Summer PRs So Far

Data packing & unpacking

Since day one, PhotonVision has used an architecture where all the timestamp/target data for a frame is packed into a single byte array in NetworkTables. This ensures coherency between the different pieces of data associated with the same frame from a camera, and there’s no intent to lose that.

However, maintaining the code to correctly pack the data into an array on the PhotonVision side, and unpack it on the roboRIO in multiple languages, has proven difficult, to say the least. C++ was broken for the vast majority of the season (and no one noticed???)

A lot of different solutions exist for doing this, but all come with tradeoffs. Some bloat the number of bytes, and most are too slow to be used on a roboRIO in a ~20 ms loop.

The current path is to do code generation: a single “source of truth” JSON file will define the contents of the data packet, and a script will use that file to generate the Java, C++, and Python implementations for packing and unpacking.
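As a rough sketch of the idea (the schema shape and every field name here are illustrative, not the actual PhotonVision definition), a generator can read the JSON description and derive the packing logic from it. A real generator would emit Java/C++/Python source files rather than build functions at runtime, but the packing itself is the same:

```python
import json
import struct

# Hypothetical "source of truth" message definition.
MESSAGE_JSON = """
{
  "name": "TrackedTarget",
  "fields": [
    {"name": "yaw",         "type": "float64"},
    {"name": "pitch",       "type": "float64"},
    {"name": "fiducial_id", "type": "int32"}
  ]
}
"""

# Map schema types to struct format characters (big-endian for a
# well-defined wire format).
TYPE_FMT = {"float64": "d", "int32": "i"}

def generate_pack_fns(schema: dict):
    """Build a pack/unpack pair from the schema."""
    fmt = ">" + "".join(TYPE_FMT[f["type"]] for f in schema["fields"])
    names = [f["name"] for f in schema["fields"]]

    def pack(values: dict) -> bytes:
        return struct.pack(fmt, *(values[n] for n in names))

    def unpack(buf: bytes) -> dict:
        return dict(zip(names, struct.unpack(fmt, buf)))

    return pack, unpack

schema = json.loads(MESSAGE_JSON)
pack, unpack = generate_pack_fns(schema)
packet = pack({"yaw": 1.5, "pitch": -0.25, "fiducial_id": 7})
assert unpack(packet) == {"yaw": 1.5, "pitch": -0.25, "fiducial_id": 7}
```

Because all three language implementations would come from the same JSON file, a field added or reordered in one place changes all of them together, instead of relying on a human to keep three hand-written packers in sync.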

Update all the things

PhotonVision relies on a lot of external software: notably, a bunch of npm packages for the front end, WPILib (camera processing, NetworkTables, 3D geometry), and Raspberry Pi / Orange Pi Linux images. An important activity is getting these updated early in the summer, and then (ideally) keeping them stable between fall and the build season.

In the process, a few tweaks to the front end are in the works to give the user a bit more feedback about what’s happening behind the scenes.

Camera Lost Indicator

Previously, if the coprocessor had a runtime issue where a camera failed to provide a video frame, the net behavior was to keep showing the user the last good frame. This made it hard to tell whether the issue was network latency, something inside the camera, or PhotonVision itself.

This is now easier to tell apart: a “PhotonVision is running but the camera stopped talking to me” condition now produces a specific “Camera Lost” frame.
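The underlying logic is essentially a staleness watchdog. A minimal sketch, where the class name and timeout value are made up for illustration rather than taken from PhotonVision’s actual implementation:

```python
import time

CAMERA_LOST_TIMEOUT_S = 0.5  # illustrative threshold, not PhotonVision's actual value

class FrameWatchdog:
    """Track when the last good frame arrived; report 'lost' once it goes stale."""

    def __init__(self, timeout_s=CAMERA_LOST_TIMEOUT_S, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_frame_time = self.clock()

    def on_frame(self):
        """Call whenever the camera delivers a frame."""
        self.last_frame_time = self.clock()

    def camera_lost(self) -> bool:
        """True once no frame has arrived within the timeout window."""
        return (self.clock() - self.last_frame_time) > self.timeout_s
```

When `camera_lost()` flips to true, the stream can swap in the dedicated “Camera Lost” frame instead of freezing on the last good image, which is what makes the failure distinguishable from plain network latency.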

Docs into Source

Documentation is definitely lagging, and is a key priority for the summer and later this fall. We’ve already burned through one AprilTag-only FRC season with examples that still reference retroreflective targets, and this needs to change.

Doc updates will likely take place later in the fall, as we want to minimize churn induced by summer development.

For now, a pre-step is to move the documentation into the mega-repo to help make it more obvious when a code change needs a doc update.

No More Arbitrary 0-100% Exposure

Supporting arbitrary cameras is… hard. Really hard. We’ll-be-stomping-on-bugs-till-the-day-we-retire hard.

In an effort to reduce some of this pain, we updated the UI to no longer use an arbitrary 0-100% scale for exposure. Instead, the underlying driver is allowed to report its own range, and the UI lets users interact with that number directly.

This will hopefully reduce some of the pain teams saw this year, where the slider was either doing weird rounding or didn’t allow the full range of exposure to be selected.
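To illustrate why the percent scale hurt (the driver range below is hypothetical): mapping a whole-percent slider onto a large driver range both rounds and skips most of the selectable values, while passing the raw driver value through only needs clamping.

```python
def percent_to_driver_units(percent: float, lo: int, hi: int) -> int:
    """Old behavior (simplified): map a 0-100% slider onto the driver's range."""
    return lo + round((hi - lo) * percent / 100.0)

# With a hypothetical driver-reported range of 1..5000, a whole-percent
# slider can only ever reach 101 of the 5000 possible exposure values:
reachable = {percent_to_driver_units(p, 1, 5000) for p in range(101)}
assert len(reachable) == 101

def clamp_exposure(value: int, lo: int, hi: int) -> int:
    """New behavior: take the raw driver value, clamped to the reported range."""
    return max(lo, min(hi, value))

assert clamp_exposure(250, 1, 5000) == 250
assert clamp_exposure(9999, 1, 5000) == 5000
```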

In addition, we’ll want to revamp the docs for what cameras are recommended going forward. This will primarily be a function of what cameras play nicely with orange-pi and raspberry-pi linux drivers (which does change year to year), and what cameras developers have tested hands-on (which also changes year to year).

Can I help make changes too?

Probably more discussion for a future post but…

  1. Yes!
  2. Start by setting up a development environment, and make sure you can rebuild and run the latest release
  3. Find an issue or fix something that’s been annoying you! Fork the main repo, push, then PR your change back!
19 Likes

Sounds great! Just out of curiosity, are there any plans for more machine-learning technologies? (like bumper detection)

1 Like

Yup! Two main areas of progress on that front:

ML - What’s In Flight?

Allow multiple object-detection models

At a minimum, we’d like to get away from hardcoding exactly one object detection model, and allow users to upload multiple and select between them (or maybe even run multiple in parallel).

This would still be tied to OrangePi hardware, though.
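In code terms, the change is roughly from a single hardcoded model path to a small registry the UI can manage. A hypothetical sketch (none of these class, method, or file names are real PhotonVision API):

```python
class ModelRegistry:
    """Sketch: user-uploaded object-detection models, selectable by name."""

    def __init__(self):
        self._models = {}   # name -> path of the uploaded model file
        self._active = None

    def upload(self, name: str, path: str):
        self._models[name] = path

    def select(self, name: str):
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        self._active = name

    def active_model(self):
        return self._models[self._active] if self._active else None

registry = ModelRegistry()
registry.upload("note-detector", "/opt/models/note.rknn")
registry.upload("robot-detector", "/opt/models/robot.rknn")
registry.select("note-detector")
```

Running several models in parallel would then be a matter of activating more than one entry per pipeline, rather than a structural change.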

Support Object Detection Pipelines on more hardware

Moving past the orange-pi RKNN hardware assumptions will be a bigger update, but there are multiple other good options available. While PhotonVision doesn’t want to try to “boil the ocean” and support everything, picking a few alternate hardware options and helping funnel people into them is definitely a goal on the roadmap.

The Gaps

Model training remains mostly a “you just gotta know what you’re doing” exercise.

I’m not entirely sure this will change before the 2025 season, and there are still many “how should PhotonVision make this easier?” discussions needed. At a minimum, though, I’m personally hoping we can beef up the docs so an enthusiastic team can at least know how to convert to the RKNN hardware format from something else that’s common (YOLO?).

3 Likes

First of all, I want to thank all the PhotonVision developers for putting so much work into a system that is so helpful to so many of us.

About the documentation: could the object detection docs be made more explicit about how to use the detection in code, what properties are available, etc.? The feature itself is great, but it was pretty confusing to get it up and running and connected to the code. Again, thank you!

1 Like

Yeah, they certainly could be! If you see a good way to make things more clear, please take a stab at it and open a PR against the main photonvision repo (which now has our docs in it too).

This is also good because we can now compare message hashes, instead of PhotonVision versions, between photonlib and your camera. This means that, for free, we can detect when the interface definition changes, and force users to upgrade only when they actually need to. This idea was borrowed from rosmsg.
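A toy illustration of the hash-comparison idea (the field definitions and the normalization/hash choices here are made up; the real scheme is whatever the serialization work settles on):

```python
import hashlib

def interface_hash(definition: str) -> str:
    """Hash a normalized interface definition, rosmsg-style."""
    normalized = "\n".join(line.strip() for line in definition.strip().splitlines())
    return hashlib.md5(normalized.encode()).hexdigest()

v2024 = "float64 yaw\nfloat64 pitch\nint32 fiducial_id"
v2025 = "float64 yaw\nfloat64 pitch\nfloat64 skew\nint32 fiducial_id"

# Same definition on both sides -> hashes match, no forced upgrade.
assert interface_hash(v2024) == interface_hash(v2024)
# Any change to the wire format changes the hash, flagging a required upgrade.
assert interface_hash(v2024) != interface_hash(v2025)
```

Comparing hashes instead of version strings means a new PhotonVision release that didn’t touch the wire format doesn’t nag anyone, while an incompatible packet layout is caught even if the version numbers happen to line up.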

I mean, there isn’t much to do on the PV side. For the 2024 season, my team just used the detection box corners to find the midpoint and plugged that into a homography to get the actual distance to the note. I linked the code if you want to take a look.
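The approach described above can be sketched in a few lines (the corner coordinates are hypothetical, and the identity homography stands in for a real image-to-floor homography obtained from calibration, e.g. via `cv2.findHomography`):

```python
import math

def box_midpoint(corners):
    """Average the detection-box corners to get the target's image midpoint."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def apply_homography(H, pt):
    """Map an image point through a 3x3 homography (row-major nested lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Identity homography just to exercise the math; a real one comes from
# calibrating known image points against known floor points.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

mid = box_midpoint([(100, 200), (300, 200), (300, 400), (100, 400)])
floor_pt = apply_homography(H, mid)
distance = math.hypot(floor_pt[0], floor_pt[1])  # distance in floor-frame units
```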

How did this manifest?

1 Like

I believe they are referring to this issue

Basically, the packet item order was incorrect for C++, giving bad multitag results.
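A toy demo of that failure mode (the field names and types are illustrative): if the sender packs (yaw, fiducial_id) but the reader’s field order says (fiducial_id, yaw), the same bytes decode into gibberish on the roboRIO side.

```python
import struct

packet = struct.pack(">di", 12.5, 7)            # sender: float64 yaw, int32 id
bad_id, bad_yaw = struct.unpack(">id", packet)  # reader: int32 id, float64 yaw

# Everything from the mis-ordered field onward is corrupted.
assert bad_id != 7
assert abs(bad_yaw - 12.5) > 1.0
```

This is exactly the class of bug the JSON-driven code generation is meant to eliminate, since all three language implementations would share one field-order definition.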

2 Likes

Furthermore:

(thanks @ArchdukeTim)

If the trendline holds for 2024, there were 0 C++ teams.

Now, in reality, I know there were at least a few. But, at least for the 2024 season, the Venn Diagram of “Uses PhotonVision” and “Uses C++” was apparently two non-overlapping circles.

Not sure where this data is pulled from, but we definitely used photonlib with C++.

Also, not sure if it’s related, but rio-side multi-PnP seemed to be broken this season (running the same coprocessor and code as last year with 2024 photonlib produced gibberish results). My best guess was faulty calibration/intrinsics data being sent/read over NT.

2 Likes

The data is from all of WPILib, I believe. We also used C++ with photonlib and did multitag on the coproc.

That bug basically made it so all of the data past the multitag part of the packet was corrupted. So, on the rio, data from the coproc would definitely have been broken.

We used C++, but we also switched to Limelight when we were getting incorrect data. It explains why, when talking to some Java teams at comps, they were happy with PhotonVision while we were thinking we were doing something wrong.
Guess we still could have been, but this would explain the bad results we were seeing.

This graph is 4 years old. Here’s data from 2024 usage reporting:
[charts: 2024-lang-usage-percent-projected, 2024-lang-usage-totals]

| Year | Java | LabVIEW | C++ | Python | C# | Kotlin | Unknown | Totals |
|------|------|---------|-----|--------|----|--------|---------|--------|
| 2016 | 1526 | 1116 | 435 | 33 | 1 | 0 | 3 | 3114 |
| 2017 | 1873 | 979 | 428 | 43 | 6 | 0 | 2 | 3331 |
| 2018 | 2280 | 862 | 408 | 64 | 0 | 0 | 2 | 3616 |
| 2019 | 2595 | 709 | 371 | 77 | 0 | 0 | 8 | 3760 |
| 2020 | 1527 | 288 | 158 | 31 | 0 | 4 | 0 | 2008 |
| 2022 | 2479 | 277 | 249 | 46 | 0 | 2 | 0 | 3053 |
| 2023 | 2833 | 181 | 216 | 53 | 0 | 1 | 0 | 3284 |
| 2024 | 3099 | 93 | 176 | 92 | 0 | 13 | 1 | 3474 |
2 Likes

Tyler’s got the better graph (sorry, CD 502’d me for a half hour and I couldn’t update with the better one).

The issue was avoidable depending on which versions you were on, so it’s possible it worked.

The punchline I was trying to hit: we need a better way of supporting the multiple PhotonLib languages internally. I’m hopeful Matt’s serialization PR referenced above helps that effort, while still playing to the strengths of the current development team (which still makes a C++-wrapped-to-Python approach difficult).

Huh… that sudden jump in Kotlin usage is surprising. I wonder if there are any underlying causes.

I’d love to help with documentation in any way. PM me if you want/need the help. (Please be YPP compliant, so 18+ non-students only, please, or respond in the thread.)

2 Likes

Hey! We don’t DM people as a rule (for those reasons, and to make sure discussions are searchable), but stop by our Discord server or feel free to pick up a GitHub issue and start making PRs to get some feedback!

3 Likes

The Romi robot supports a toggle between read-only and read/write file systems to prevent corruption of the SD card when power is cut without running a shutdown command. Will this year’s PhotonVision support a similar read-only mode?

2 Likes

Have you experienced any problems with SD card corruption in 2024? In prior years, this had been a problem, but changes to the way that the settings are stored seem to have fixed it.