Second fastest way would be to donate an LL3 to a PhotonVision developer.
Haha yes we would super appreciate hardware to help build out our coprocessor test suite so fewer bugs make it to prod. Even that isn't a guarantee tho; I unfortunately must also pass classes and graduate college.
In what is most definitely not a blatant effort to increase my CD statistics, we would love to hear what kind of awesome things you've been doing with Photon in the last week! Feel free to share what's working well, a new favorite feature, or areas that have been pain points recently.
First off, we used photonvision last year and loved it, thanks so much for this amazing software. As of right now, we're struggling to get the latest version (2024.1.2) on our Orange Pi 5. We've tried reflashing with the image included in the latest release onto our Orange Pi, but continue to get the same version # (2023.4.2) displayed when we open up the photonvision service at photonvision.local:5800. Is there a step in the process we're missing?
Hey Folks!
Update from the team: A bug was discovered which was causing the .jar files deployed with photonlib to be 10x oversized. We don't expect this to play nicely with a lot of RIO setups, so in an abundance of caution we've removed the broken releases.
2024.1.4 is the new latest release - please update your robot projects to use it.
How does that work for multiple cameras on the same module? Do we get one estimated pose from all cameras or one pose per camera?
You get one multi-tag pose per camera, and must use those with your relevant drivetrain pose estimator.
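As a rough illustration (assuming the 2024 PhotonLib API; the camera names and robot-to-camera transforms below are placeholders), each camera gets its own PhotonPoseEstimator, and each camera's estimate is fed into the drivetrain pose estimator as a separate timestamped measurement:

```java
import java.util.List;
import java.util.Optional;

import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Transform3d;

public class MultiCameraVisionSketch {
    private final AprilTagFieldLayout layout =
            AprilTagFields.k2024Crescendo.loadAprilTagLayoutField();

    // One estimator per camera; replace the empty Transform3d()s with your real
    // robot-to-camera transforms.
    private final PhotonPoseEstimator frontEstimator = new PhotonPoseEstimator(
            layout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
            new PhotonCamera("front"), new Transform3d());
    private final PhotonPoseEstimator backEstimator = new PhotonPoseEstimator(
            layout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
            new PhotonCamera("back"), new Transform3d());

    /** Call periodically, passing your drivetrain's pose estimator. */
    public void addVisionMeasurements(SwerveDrivePoseEstimator driveEstimator) {
        for (PhotonPoseEstimator est : List.of(frontEstimator, backEstimator)) {
            Optional<EstimatedRobotPose> estimate = est.update();
            // Each camera contributes its own measurement at its own timestamp.
            estimate.ifPresent(pose -> driveEstimator.addVisionMeasurement(
                    pose.estimatedPose.toPose2d(), pose.timestampSeconds));
        }
    }
}
```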
Thanks to the insanely hard work of the following Discord users (+ more), Photon has an MVP using YOLOv5s-based object detection on 2024 NOTEs on the Orange Pi 5! This would not have been possible without everyone pulling together and cooperating on writing and debugging embedded code and figuring out how to train models from broken instructions.
The model above was trained on images from a collection of Roboflow NOTE datasets. A particular shoutout to nisala and the folks over at https://getbaseline.app/ who stepped up unsolicited to donate compute resources for model training and conversion. Our progress over even the last 12 hours wouldn't have been possible without them, as well as the following Discord users:
alex_idk
moto moto
js & asid61 & craig for testing code
All code is public and licensed under the GNU GPL V3 like normal, and all models we train will be released likewise. Artifacts are published to our Maven server by the JNI repo and consumed by the main photon repo in that pull request.
Photon code: Add RKNN / Object Detection Pipeline by mdurrani808 · Pull Request #1144 · PhotonVision/photonvision · GitHub
RKNN JNI code: GitHub - PhotonVision/rknn_jni: Java wrapper around rknn converted yolov5 model
Model conversion is still something of a dark art known only to alex_idk at this point, but docs on that process are in flight. Stuff probably isn't quite ready for general consumption, but as always drop by the discord and say hi! We are always happy to get more testers to find bugs for us.
Shoutout to @thatmattguy as well for all the JNI code and other things!
Looking forward to seeing how teams use this during the season.
Is there any resource on how to make this available for Limelight 3?
This specific ML stuff is only possible because of the RKNN accelerator present on Rockchip RK3588 (and other similar family) chips. The Pi's CPU will never be able to do this in any semblance of real time with this particular model, at least. OpenCV's DNN might be able to be coerced into using OpenCL with the Pi GPU to accelerate things a little bit that way? Worth digging into if you have bandwidth. I've got an OpenCV DNN PR open rn
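For the curious, a speculative sketch of that OpenCV DNN idea is below. The model file name is a placeholder (any ONNX-exported YOLOv5 would do for the experiment), and whether the OpenCL target actually helps on a Pi GPU is unverified:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import org.opencv.imgcodecs.Imgcodecs;

public class DnnOpenClSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder model path, not something Photon ships.
        Net net = Dnn.readNetFromONNX("yolov5s.onnx");
        net.setPreferableBackend(Dnn.DNN_BACKEND_OPENCV);
        net.setPreferableTarget(Dnn.DNN_TARGET_OPENCL); // ask OpenCV to try OpenCL

        Mat frame = Imgcodecs.imread("frame.jpg");
        Mat blob = Dnn.blobFromImage(frame, 1.0 / 255.0, new Size(640, 640),
                new Scalar(0), true, false);
        net.setInput(blob);
        Mat output = net.forward(); // raw YOLO output; decoding boxes is left out here
        System.out.println("output dims: " + output.dims());
    }
}
```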
And that same PR working on my "Cool Pi 4b" with RK3588S ($120 on AliExpress rn)! This is with dual cameras (Logitech C920 and Lifecam HD-3000) over a gigabit Ethernet switch directly to my Windows laptop, exposure/video mode settings in the screenshot. I was able to get it working without too much trouble, but the Opi5 is already ubiquitous and better supported.
PhotonVision v2024.2.0
What's Changed
First big feature update of 2024! Brings (still incubating!) object detection using RKNN with a hard-coded YOLOv5 model trained on notes, and an (also new) Limelight 3 image. Now that the features work, we'll be changing gears to focus on documenting new things. As always, please reach out on Discord with journalctl logs if you encounter any issues!
- Add RKNN / Object Detection Pipeline by @mdurrani808 in Add RKNN / Object Detection Pipeline by mdurrani808 · Pull Request #1144 · PhotonVision/photonvision · GitHub
- Add LL3 image by @BytingBulldogs3539 in LL3 by BytingBulldogs3539 · Pull Request #1166 · PhotonVision/photonvision · GitHub
- And other changes included since v2024.1.4:
- Load libquadmath on Windows by @mcm001 in Load libquadmath on Windows by mcm001 · Pull Request #1163 · PhotonVision/photonvision · GitHub (Mrcal calibration fixes for Windows)
- Bind-mount repo in image builder by @mcm001 in Bind-mount repo in image builder by mcm001 · Pull Request #1157 · PhotonVision/photonvision · GitHub (reduces total image size again)
- Update spotless by @rzblue in Update spotless by rzblue · Pull Request #1162 · PhotonVision/photonvision · GitHub
- [photon-lib java] Make targeting classes extend ProtobufSerializable by @ArchB1W in [photon-lib java] Fix classes with protobuf support not "announcing it" and as a result causing crashes with AdvantageKit by ArchB1W · Pull Request #1156 · PhotonVision/photonvision · GitHub
The nitty gritty
RKNN is the neural network accelerator hardware built into Rockchip CPUs used on SBCs like the Orange Pi 5. We're able to use this hardware to massively accelerate certain math operations like those needed for running ML-based object detection. The code for that lives here, and publishes to Maven. Our pre-trained note model lives here, and NeuralNetworkModelManager deals with extracting it to disk. If files named "note-640-640-yolov5s.rknn" and labels.txt do not exist at photonvision_config/models/, they'll be extracted from the JAR; otherwise, the ones already on disk will be used. This technically allows power users to replace the model and label files with new ones without rebuilding photon, though this feature is still largely untested, too.
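A minimal sketch of that extract-unless-present behavior (illustrative only; the names and paths here don't mirror NeuralNetworkModelManager's actual code):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ModelExtractorSketch {
    /** Copies a bundled resource to disk unless a file is already there. */
    public static Path ensureOnDisk(String resourceName, Path modelsDir) throws Exception {
        Path target = modelsDir.resolve(resourceName);
        if (Files.exists(target)) {
            // A file already on disk wins, so power users can swap in their own model.
            return target;
        }
        Files.createDirectories(modelsDir);
        try (InputStream in =
                ModelExtractorSketch.class.getResourceAsStream("/models/" + resourceName)) {
            Files.copy(in, target); // extract the bundled copy from the JAR
        }
        return target;
    }

    public static void main(String[] args) throws Exception {
        Path models = Path.of("photonvision_config", "models");
        ensureOnDisk("note-640-640-yolov5s.rknn", models);
        ensureOnDisk("labels.txt", models);
    }
}
```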
I've seen latencies of 20-25 ms with our current model, although further performance gains seem possible with model updates. Note that the note detector runs at 640x640 pixels: images larger or smaller are "letterboxed" up/down to this resolution. Picking a camera resolution with a width of 640px seems reasonable.
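A quick sketch of the letterboxing arithmetic, just to show why a 640px-wide mode is a comfortable fit (the numbers are an example, not Photon's exact implementation):

```java
public class LetterboxSketch {
    public static void main(String[] args) {
        int srcW = 640, srcH = 480; // example camera resolution
        int dst = 640;              // model input size (640x640)

        // Scale uniformly so the frame fits inside the square, then pad the rest.
        double scale = Math.min((double) dst / srcW, (double) dst / srcH);
        int scaledW = (int) Math.round(srcW * scale);
        int scaledH = (int) Math.round(srcH * scale);
        int padX = (dst - scaledW) / 2; // padding on each side, horizontally
        int padY = (dst - scaledH) / 2; // padding on each side, vertically

        System.out.printf("scale=%.3f scaled=%dx%d pad=(%d,%d)%n",
                scale, scaledW, scaledH, padX, padY);
        // For 640x480 this gives scale=1.0 with 80px of padding top and bottom,
        // so a 640px-wide camera mode avoids any extra resampling of the image.
    }
}
```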
Full Changelog: Comparing v2024.1.5...v2024.2.0 ¡ PhotonVision/photonvision ¡ GitHub
Got this from their Discord server: YOLOv5 to RKNN | Kaggle
Any way to run these models on x86?
No, ML is only supported on Orange Pi 5s (anything with an RK3588) at the moment.
Would I be able to run both an object detection pipeline and Apriltag pipeline on one OrangePi?
Our testing shows that running object detection on one camera does not significantly affect performance of an AprilTag pipeline on another camera on the same Orange Pi. Do note, you can't do detection and AprilTags on the same camera (nor would you want to, you need color for detection).
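In robot code that split just looks like two PhotonCamera instances, one per pipeline. A rough sketch (assuming 2024 PhotonLib; the camera names are whatever you configured in the Photon UI):

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

public class DualPipelineSketch {
    // Color camera running the note object-detection pipeline.
    private final PhotonCamera noteCam = new PhotonCamera("note-cam");
    // Second camera running an AprilTag pipeline.
    private final PhotonCamera tagCam = new PhotonCamera("tag-cam");

    public void periodic() {
        PhotonPipelineResult noteResult = noteCam.getLatestResult();
        if (noteResult.hasTargets()) {
            double noteYaw = noteResult.getBestTarget().getYaw();
            // ... steer the intake toward the note
        }

        PhotonPipelineResult tagResult = tagCam.getLatestResult();
        if (tagResult.hasTargets()) {
            // ... feed tag results into your pose estimator (see the earlier sketch)
        }
    }
}
```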
You shouldn't need a color camera; a grayscale model would just need to be trained. The note is such a distinct object on the field that it's not hard to detect.