Our team is starting on auto code, and given the two cargo colors, we wanted to use some sort of vision pipeline. We’ve used a Limelight in the past (2019) to line up to a vision target, but I’ve been reading a bit about PhotonVision and Axon. I haven’t used either, so I would like some input on which may work best (or which may be easier to work with). Any input would be greatly appreciated. If it is relevant, we code in C++.
The language should be irrelevant: as far as I know, both systems communicate over NetworkTables, so Java and C++ will both work.
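For reference, pulling those values into robot code from C++ looks roughly like this. The table and key names below follow Limelight's documented convention (tv/tx/ty); PhotonVision publishes under its own table, so adjust the names for whatever coprocessor you end up using:

```cpp
// Reading vision data over NetworkTables from roboRIO-side C++ code.
// Table/key names follow Limelight's documented convention (tv, tx, ty);
// PhotonVision publishes under its own table, so rename for your setup.
#include <networktables/NetworkTable.h>
#include <networktables/NetworkTableInstance.h>

void ReadVisionTarget() {
  auto table = nt::NetworkTableInstance::GetDefault().GetTable("limelight");
  bool hasTarget = table->GetNumber("tv", 0.0) > 0.5;  // 1.0 when a target is seen
  double yawDeg = table->GetNumber("tx", 0.0);         // horizontal offset, degrees
  double pitchDeg = table->GetNumber("ty", 0.0);       // vertical offset, degrees
  if (hasTarget) {
    // Feed yawDeg into a turn-to-target controller; use pitchDeg (plus camera
    // mounting geometry) to estimate distance.
  }
}
```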
As for your question, PhotonVision doesn’t take long to set up (similar to a Limelight), though how long it takes to program it into something useful will vary. I haven’t set up Axon yet; I’ve heard it isn’t easy to get running on Windows, and as the only Linux user on my team I’ve been busy with other things, so we haven’t tried it. There is also the image annotation and training phase, which depends on how much data you have and how fast your computer is.
My suggestion is to start with PhotonVision, and if it isn’t working for you, then try Axon.
A WPILibPi pipeline on a Raspberry Pi written in C++ works well. It’s frankly not that difficult if you know C++, which you do. I find that things like PhotonVision and Axon abstract a lot away (which is what they’re designed to do), but in this case the abstraction can do more harm than good.
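To give a sense of scale, here’s a rough sketch of what the core of such a pipeline can look like. Camera setup is simplified, and the HSV bounds, resolution, and table name are placeholders you’d tune for your own setup:

```cpp
// Rough sketch of a custom cargo pipeline for WPILibPi: grab frames with cscore,
// threshold in HSV with OpenCV, and publish the biggest blob over NetworkTables.
// The HSV bounds, resolution, and table name are placeholders to tune.
#include <vector>
#include <cameraserver/CameraServer.h>
#include <networktables/NetworkTableInstance.h>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

void RunCargoPipeline() {
  auto camera = frc::CameraServer::StartAutomaticCapture();
  camera.SetResolution(320, 240);
  cs::CvSink sink = frc::CameraServer::GetVideo();
  cs::CvSource debug = frc::CameraServer::PutVideo("Threshold", 320, 240);
  auto table = nt::NetworkTableInstance::GetDefault().GetTable("cargoVision");

  cv::Mat frame, hsv, mask;
  while (true) {
    if (sink.GrabFrame(frame) == 0) continue;  // timed out, try again

    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Placeholder range for blue cargo; red needs two ranges since hue wraps at 0.
    cv::inRange(hsv, cv::Scalar(100, 100, 50), cv::Scalar(130, 255, 255), mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0;
    double bestX = 0;
    for (const auto& c : contours) {
      double area = cv::contourArea(c);
      if (area > bestArea) {
        cv::Moments m = cv::moments(c);
        bestX = m.m10 / m.m00;  // centroid x, in pixels
        bestArea = area;
      }
    }

    table->PutBoolean("hasTarget", bestArea > 50);
    table->PutNumber("centerX", bestX);
    debug.PutFrame(mask);
  }
}
```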
Respectfully, I disagree. WPILibPi requires writing vision pipelines in OpenCV. Unless you are already proficient in OpenCV, PhotonVision (or for that matter Limelight, although open source is generally better) abstracts OpenCV pipelines in a way that makes them very simple to tune, edit, and interact with. You can also install PhotonVision on a Limelight (similarly to WPILibPi) very easily, which means you don’t have to switch hardware to take advantage of color detection. Axon does something completely different from PhotonVision, OpenCV, or Limelight: it is for machine learning. Axon is recommended to be paired with a Coral TPU, and given the difficulty of acquiring one right now, it may not be a good option.
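For what it’s worth, consuming PhotonVision results from C++ via PhotonLib is only a few lines. Here’s a rough sketch; the camera name is a placeholder, and exact headers/namespaces have shifted a bit between PhotonLib versions:

```cpp
// Sketch of reading PhotonVision results via the PhotonLib C++ vendor library.
// "mycamera" is a placeholder camera name; header/namespace details can differ
// between PhotonLib versions.
#include <photonlib/PhotonCamera.h>

void AimAtCargo() {
  photonlib::PhotonCamera camera{"mycamera"};  // normally a class member, not a local
  photonlib::PhotonPipelineResult result = camera.GetLatestResult();
  if (result.HasTargets()) {
    photonlib::PhotonTrackedTarget target = result.GetBestTarget();
    double yawDeg = target.GetYaw();      // horizontal offset to the target, degrees
    double pitchDeg = target.GetPitch();  // vertical offset, degrees
    // Feed yawDeg into a turn controller; pitchDeg plus camera geometry gives range.
  }
}
```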
I also respectfully disagree. We use a Limelight for goal detection, and it excels at things like that. The problem with systems like that for ball detection is that, based on my experience writing a ball-detection algorithm, you need more control over the processing than they give you, which you only get by writing your own OpenCV pipeline. That frankly isn’t very hard, and I believe it would be far more work to tune a packaged system for this than to just write it custom.
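As an example of the kind of control I mean: in a custom OpenCV pipeline you can reject non-round contours with a circularity test (4πA/P² is near 1 for a circle), something along these lines, where the thresholds are just starting points:

```cpp
// Example of the extra control a custom pipeline gives you: after thresholding,
// reject contours that aren't round enough. Circularity = 4*pi*area/perimeter^2
// is ~1.0 for a circle and much lower for bumper-shaped blobs. Threshold values
// here are just starting points.
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

std::vector<std::vector<cv::Point>> KeepRoundContours(
    const std::vector<std::vector<cv::Point>>& contours) {
  std::vector<std::vector<cv::Point>> round;
  for (const auto& c : contours) {
    double area = cv::contourArea(c);
    double perimeter = cv::arcLength(c, /*closed=*/true);
    if (area < 50 || perimeter <= 0) continue;  // drop noise
    double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
    if (circularity > 0.7) {  // balls score near 1.0, bumper strips much lower
      round.push_back(c);
    }
  }
  return round;
}
```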
That’s fair. I don’t have the experience to know if this would work well, but have you looked at PhotonVision’s option to detect colored shapes? That seems like it would work.
We moved to Axon and are waiting for our TPU. Maybe we should have worked on PhotonVision more; we found it was pretty good at identifying the colors in its pipelines, but it picked up bumpers as balls since it wasn’t as good with the shapes.
Have you tried adjusting the fullness option in Limelight contour filtering? This would effectively differentiate rectangular targets like numbers or the goal (fullness ~100%) from round targets like balls (fullness ~78%, since a circle fills π/4 of its bounding box). On top of that, balls should have an aspect ratio close to 1, which is also drastically different from bumpers. I haven’t had a chance to test it this season, but I was able to track the balls from 2019 as well as the cubes from 2018 using these Limelight options (with team 2605).
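If you want to sanity-check those numbers offline, the same criteria are easy to compute from an OpenCV contour. A rough sketch, with placeholder thresholds:

```cpp
// Quick offline check of the same criteria as the Limelight contour filter:
// fullness = contour area / bounding-box area (~pi/4 ~= 0.785 for a ball, ~1.0
// for a rectangle), and bounding-box aspect ratio (~1 for a ball). Thresholds
// are placeholders.
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

bool LooksLikeBall(const std::vector<cv::Point>& contour) {
  cv::Rect box = cv::boundingRect(contour);
  if (box.area() <= 0) return false;
  double fullness = cv::contourArea(contour) / static_cast<double>(box.area());
  double aspect = static_cast<double>(box.width) / box.height;
  return fullness > 0.6 && fullness < 0.9 &&  // round, not a solid rectangle
         aspect > 0.75 && aspect < 1.33;      // roughly square bounding box
}
```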
Another option could be to explore Python scripting (mentioned by Brandon here) to create your own pipeline, built specifically for tracking the game pieces, on the Limelight platform.
Thanks for all the replies! Since we don’t currently have a Raspberry Pi, we will begin by testing with the Limelight. Hopefully, we can get a Raspberry Pi to test PhotonVision or potentially the WPILibPi pipeline (though we don’t have any experience with OpenCV) within the next couple of weeks.
Hi! Just wanted to note that you can install PhotonVision on a Limelight pretty easily. Under the hood, a Limelight is essentially just a camera, LEDs, and a Raspberry Pi. It took me about 20-30 minutes (mostly download time) when I did it earlier this season. The great thing about this is that you can try PhotonVision’s colored shape pipeline feature and see whether you even need to write your own OpenCV pipeline or use ML.
Here’s the guide for PhotonVision: Installing PhotonVision on a Limelight - PhotonVision Docs
If someone has a resource or input on how to install WPILibPi on a Limelight (it should be possible, I think, since a Limelight is a Raspberry Pi under the hood), definitely chime in!
That’s very good to know, thank you!