Q: What does this require from me?
A: Uploading a jar using PhotonVision’s “offline update” feature.
Q: What cameras can be used with this?
A: Any USB/CSI color camera should work.
Q: What’s the performance of this?
A: My unit gets 60+ FPS with multiple cameras running at once. I'm looking for beta testers to validate these numbers.
Q: Do I need to train my own model?
A: You can if you want to, but I have many models ready to use, including YOLOv5n, YOLOv5s, and YOLOv8n at 640x640 and 960x960 resolutions, trained for NOTE detection as well as CUBES/CONES.
Q: How do I interface with it via code?
A: You use the same class you use for AprilTags. The tagId field is used for the class of the detected object, and the ambiguity is used for its confidence.
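For example, here's a minimal roboRIO-side sketch using PhotonLib. The camera name "detector_cam" is just a placeholder; use whatever name you configured in the PhotonVision UI.

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class NoteDetection {
    // Must match the camera name set in the PhotonVision UI ("detector_cam" is a placeholder).
    private final PhotonCamera camera = new PhotonCamera("detector_cam");

    public void readDetections() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (!result.hasTargets()) {
            return;
        }
        for (PhotonTrackedTarget target : result.getTargets()) {
            int classId = target.getFiducialId();          // repurposed: class of the detected object
            double confidence = target.getPoseAmbiguity(); // repurposed: detection confidence
            double yaw = target.getYaw();                  // horizontal angle to the object
            double pitch = target.getPitch();              // vertical angle to the object
            System.out.printf("class=%d conf=%.2f yaw=%.1f pitch=%.1f%n",
                    classId, confidence, yaw, pitch);
        }
    }
}
```

The yaw/pitch values can then feed a turn-to-target or drive-to-note command the same way they would for an AprilTag target.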
We tried this a few days ago and were very satisfied with the results. The latency is very low, and the model can recognize partially hidden Notes and sometimes even stacked ones. Note that the low FPS is because the LifeCam we used is capped at 30 FPS.
Everyone is welcome to use the latest beta release of the fork, available here.
Just flash your Orange Pi 5 with the included image, or use offline update to upload the supplied linuxarm64 jar.
Model Conversions
The Jupyter notebooks for training YOLOv5 and YOLOv8 and converting them into RKNN models are now available; links are in the Discord server! A lot of work went into both parts, so please share your feedback :)
The object detection runs on the NPU built into the RK3588S chip on the Orange Pi 5 and similar boards. RKNN is supported only on Rockchip devices, so it cannot run on a PC or a Raspberry Pi.
I am trying to use this. I installed your image on my Orange Pi 5, and it seems to work and can detect the notes. However, when I try to access the data from the roboRIO, I get an error saying that v2024.2.7 does not match v2024.2.6, even though 2.6 seems to be the latest version available in your repository. Is there a way to fix this?