Hi There!
The PhotonVision Team would like to welcome you to the first Offseason Development thread!
This year, we’re looking to document the great work the community of developers is doing to update and improve PhotonVision in preparation for next year. We’re hoping to make periodic updates as we hit major milestones, and field some Q&A while we’re at it.
What’s happened so far…
Champs 2024 & Early Offseason
This year, we had an awesome opportunity to present a talk at Championships. As a result, a bunch of developers were able to meet in person, get to know each other a bit better, and talk to teams about their experiences this year.
Between the feedback thread, conversations on Discord, and our in-person chats, we started to form what our summer development priorities could be.
Finally, in early June, we hosted an open call on the Discord server to talk through the development priorities with the team, and get a rough assessment of people’s availability.
All meeting notes are available here.
While things are always subject to change, here are some projects that are bubbling to the top of the priority list:
Early Summer PRs So Far
Data packing & unpacking
Since day one, PhotonVision has used an architecture where all timestamp/target data for a frame is packed into a single byte array in NetworkTables. This ensures coherency between the different pieces of data associated with the same frame from a camera, and there’s no intent to lose that.
However, maintaining the code to correctly pack the data into an array on the PhotonVision side, and unpack it on the roboRIO side in multiple languages, has proven difficult, to say the least. C++ was broken for the vast majority of the season (and no one noticed???)
A lot of different solutions exist for doing this, but all come with tradeoffs. Some bloat the number of bytes, and most are too slow to be used on a roboRIO in a ~20ms loop.
The current path is code generation: a single “source of truth” JSON file will define the contents of the data packet, and a script will use that file to generate the Java, C++, and Python implementations for packing and unpacking.
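To make the idea concrete, here’s a toy sketch of the codegen approach in Python. The schema format, field names, and generated function shapes below are all hypothetical, not the actual PhotonVision schema; the point is just that one JSON description can drive `struct`-style packers in every target language:

```python
import json
import struct

# Hypothetical packet schema: field names and types, in wire order.
# (The real PhotonVision schema file will differ; this is only a sketch.)
SCHEMA = json.loads("""
{
  "name": "TargetResult",
  "fields": [
    {"name": "timestamp_us", "type": "int64"},
    {"name": "yaw",          "type": "float64"},
    {"name": "pitch",        "type": "float64"},
    {"name": "has_target",   "type": "bool"}
  ]
}
""")

# Map schema types to struct format characters (big-endian wire format).
FMT = {"int64": "q", "float64": "d", "bool": "?"}

def generate_source(schema):
    """Emit Python pack/unpack functions for the schema.
    The same walk over the JSON could emit Java or C++ instead."""
    fmt = ">" + "".join(FMT[f["type"]] for f in schema["fields"])
    names = ", ".join(f["name"] for f in schema["fields"])
    name_list = [f["name"] for f in schema["fields"]]
    return (
        f"def pack({names}):\n"
        f"    return struct.pack({fmt!r}, {names})\n\n"
        f"def unpack(buf):\n"
        f"    return dict(zip({name_list!r}, struct.unpack({fmt!r}, buf)))\n"
    )

source = generate_source(SCHEMA)
ns = {"struct": struct}
exec(source, ns)  # in practice the generated files are checked in, not exec'd

# Round-trip: pack on the "coprocessor", unpack on the "roboRIO".
buf = ns["pack"](1_234_567, 1.5, -2.25, True)
result = ns["unpack"](buf)
```

Because every language’s packer is generated from the same file, a schema change that isn’t regenerated everywhere fails loudly instead of silently desyncing one language, which is exactly the failure mode the hand-written C++ hit.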
Update all the things
PhotonVision relies on a lot of stuff - Notably, a bunch of npm packages for the front end, WPILib (camera processing, network tables, 3d geometry), and Raspberry Pi / Orange Pi linux images. An important activity is getting these things updated early in the summer, and then (ideally) keeping them stable between fall and the build season.
In the process, a few tweaks to the front end are in the works to give the user a bit more feedback as to what’s happening behind the scenes.
Camera Lost Indicator
Previously, if the coprocessor had a runtime issue where a camera failed to provide a video frame, the net behavior was to keep showing the user the last good frame. This made it hard to tell whether the issue was network latency, something inside the camera, or PhotonVision itself.
This is now easier to diagnose, as “PhotonVision is running but the camera stopped talking to me” now produces a specific “Camera Lost” frame.
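The core of this behavior is a watchdog on frame arrival. Here’s a minimal, self-contained sketch of that idea (not PhotonVision’s actual implementation; class and method names are invented for illustration):

```python
import time

class FrameWatchdog:
    """Track when a camera last delivered a frame; after `timeout_s`
    seconds of silence, report the camera as lost instead of silently
    re-serving the last good frame. Illustrative sketch only."""

    def __init__(self, timeout_s=0.5, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock          # injectable clock makes this testable
        self.last_frame_t = clock()
        self.last_frame = None

    def on_frame(self, frame):
        """Called whenever the camera produces a frame."""
        self.last_frame = frame
        self.last_frame_t = self.clock()

    def current_output(self):
        """Frame to show the user: the live frame while the camera is
        healthy, or a 'Camera Lost' placeholder once it goes quiet."""
        if self.clock() - self.last_frame_t > self.timeout_s:
            return "CAMERA_LOST_FRAME"
        return self.last_frame

# Simulated timeline using a fake clock:
t = [0.0]
wd = FrameWatchdog(timeout_s=0.5, clock=lambda: t[0])
wd.on_frame("frame1")
t[0] = 0.2   # camera healthy: still within the timeout
live = wd.current_output()
t[0] = 1.0   # camera silent for 1.0 s: past the timeout
lost = wd.current_output()
```

The key difference from the old behavior is that the stream output is a function of *when* the last frame arrived, not just *what* it contained.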
Docs into Source
Documentation is definitely lagging, and is a key priority for the summer and later this fall. We’ve already burned through one AprilTag-only FRC season with examples that still reference retroreflective targets, and this needs to change.
Doc updates will likely take place later in the fall, as we want to minimize churn induced by summer development.
For now, a pre-step is to move the documentation into the mega-repo to help make it more obvious when a code change needs a doc update.
No More Arbitrary 0-100% Exposure
Supporting arbitrary cameras is… hard. Really hard. We’ll-be-stomping-on-bugs-till-the-day-we-retire hard.
In an effort to reduce some of this pain, we updated the UI to no longer use an arbitrary 0-100% scale for exposure. Instead, the underlying driver is allowed to report its range, and the UI lets users interact with that number directly.
This will hopefully reduce some pain teams saw this year where the slider was either doing weird rounding, or didn’t allow the full range of exposure to be selected.
In addition, we’ll want to revamp the docs for what cameras are recommended going forward. This will primarily be a function of what cameras play nicely with Orange Pi and Raspberry Pi Linux drivers (which does change year to year), and what cameras developers have tested hands-on (which also changes year to year).
Can I help make changes too?
Probably more discussion for a future post but…
- Yes!
- Start by setting up a development environment, and make sure you can rebuild and run the latest release
- Find an issue or fix something that’s been annoying you! Fork the main repo, push, then PR your change back!