I was looking at 4481’s post, and they made the switch to PV because it let them calibrate their cameras better (less reprojection error?). Thoughts?
Limelight’s software has its own advantages, like using the Google Coral for object detection and easier camera offsets and visualization. As far as tags go, Limelight’s software has worked very well; PV might have some advantages, but Limelight’s software is great.
Right now there are benefits to each, and there’s potentially benefit to running both on the same robot. I will say, however, that PhotonVision seems to be making vast improvements and may be the de facto standard in a few years.
What would be your reasons to run LL on said robot running both? Just curious, not arguing against it…
Photon for AprilTags, LL + Coral for object detection.
They are both pieces of software trying to do very similar things. If you don’t have LL hardware the choice is easy.
If you do have LL hardware, then there are several points to think about:
- How experienced is your team? PhotonVision is incrementally harder to set up, and for some teams that really matters.
- What features do you value? PhotonVision offers a lot of top-end performance features like a faster API, but some find that harder to use than the simple but slow NT structure (a minimal sketch of that NT interface follows this list).
- There is some advantage in designing around a single hardware choice versus all of PhotonVision’s options. Default calibration and MegaTag working well even at distance are nice. Meanwhile, PhotonPoseEstimator and camera simulation are also amazing features.
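On the NT point: reading a Limelight’s basic targeting data really is just a couple of NetworkTables calls. A minimal Java sketch, using the documented default Limelight table and entry names:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    // Limelight publishes its targeting data under the "limelight" table by default.
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Horizontal offset to the target in degrees (0.0 if nothing is published). */
    public double getTx() {
        return table.getEntry("tx").getDouble(0.0);
    }

    /** Whether the current pipeline sees a target ("tv" is 0 or 1). */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }
}
```

PhotonLib instead hands you timestamped result objects over its own protocol, which is more capable but also more to learn.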
We switched to Photon this year and are liking it. I was curious what LL might do better, since we still have them. We never got to object detection with them, as we had switched before trying it.
Soon after they made the switch, Brandon released the new LL OS with apparently much-improved calibration. There’s no data on how good it is or what level of accuracy to expect from a Limelight, however. Has anyone tried the new calibration?
PV has treated me well, but it’s best to use what you’re familiar with.
I have some thorough testing data here:
https://drive.google.com/drive/folders/1Av5txlrkcIjCsnPCeR7cmILKfKBOy4mZ
I do wonder why this is something the camera needs to do rather than something better suited to running on the robot. Homogeneous transformations aren’t terribly complicated and could probably be applied to a lot of other systems to let you reason about where your robot is… I’m envisioning a system where sensors and mechanisms each have 3D transformations so you can identify where a thing is happening. Maybe a space to explore in the offseason if there’s use for it.
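To make that concrete, here’s a minimal sketch using WPILib’s geometry classes (all the offsets and poses are made-up illustrative values): chaining a known tag field pose with a solved camera-to-tag transform and a robot-to-camera mount transform recovers the robot’s field pose.

```java
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;
import edu.wpi.first.math.util.Units;

public class FrameChains {
    // Hypothetical mount: camera 30 cm forward of robot center, 50 cm up, pitched up 15 degrees.
    static final Transform3d ROBOT_TO_CAMERA = new Transform3d(
        new Translation3d(0.30, 0.0, 0.50),
        new Rotation3d(0.0, Units.degreesToRadians(-15.0), 0.0));

    public static void main(String[] args) {
        // Suppose the tag's pose on the field is known (e.g. from the field layout)...
        Pose3d tagOnField = new Pose3d(5.0, 3.0, 1.2, new Rotation3d());
        // ...and the vision solver reported a camera-to-tag transform.
        Transform3d cameraToTag = new Transform3d(
            new Translation3d(2.0, 0.1, 0.7), new Rotation3d());

        // field->tag, back out tag->camera, then back out camera->robot:
        Pose3d robotOnField = tagOnField
            .transformBy(cameraToTag.inverse())
            .transformBy(ROBOT_TO_CAMERA.inverse());

        System.out.println(robotOnField);
    }
}
```

The same composition works for any sensor or mechanism: give each one a Transform3d relative to the robot, and chaining transforms tells you where a detection or an end effector sits in field coordinates.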
That’s reassuring to know; I’ll try to test it out to see how much the update helped. Side note - thanks for all of the research and testing you’ve done, specifically for vision and coprocessors. It has been very useful.
You’d be surprised. Having a visualizer for people to confirm that their 3D camera pose is correct is a big help.
It really depends on what you want to do with vision. This has been said before, but both systems have their advantages and disadvantages. That said, PhotonVision can have faster object detection, to my knowledge at least (I don’t know what teams have done with Limelight past 35 FPS). My team got object detection running at 49 FPS consistently, and it works well. Global shutter is also a big plus. Cheaper as well, actually.
But this far into the year, you’re probably better off sticking with whatever you’re using now.
Having used PV for AprilTags in 2023, we kept that this year. The PhotonPoseEstimator has been very reliable for us.
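For anyone curious, a minimal sketch of that wiring, assuming the 2024 PhotonLib API (the camera name and mount transform are placeholders to swap for your own):

```java
import java.util.Optional;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class TagPoseSource {
    // Name must match the camera name configured in the PhotonVision UI.
    private final PhotonCamera camera = new PhotonCamera("ov9281");
    private final PhotonPoseEstimator estimator;

    public TagPoseSource() {
        AprilTagFieldLayout layout =
            AprilTagFields.k2024Crescendo.loadAprilTagLayoutField();
        // Placeholder mount: 30 cm forward of robot center, 25 cm up, no rotation.
        Transform3d robotToCam = new Transform3d(
            new Translation3d(0.30, 0.0, 0.25), new Rotation3d());
        estimator = new PhotonPoseEstimator(
            layout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, camera, robotToCam);
    }

    /** Call periodically; feed results into your drivetrain's addVisionMeasurement(). */
    public Optional<EstimatedRobotPose> update() {
        return estimator.update();
    }
}
```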
We wanted to do object detection this year, so we bought a LL and Coral. Then PV added object detection on the Orange Pi 5. In the end, we found that PV worked better for us than the LL + Coral.
Our current rig is a single Orange Pi 5 with an OV9281 global-shutter cam for AprilTags and a LifeCam for object detection. Results have been very solid and reliable.
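For anyone setting up something similar, a minimal sketch of pulling from both pipelines with 2024-era PhotonLib (the camera names are hypothetical and must match what’s configured in the PhotonVision UI):

```java
import java.util.OptionalDouble;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class DualCameraVision {
    private final PhotonCamera tagCamera = new PhotonCamera("ov9281_tags");
    private final PhotonCamera noteCamera = new PhotonCamera("lifecam_notes");

    /** Yaw to the best detected game piece, in degrees, or empty if none seen. */
    public OptionalDouble getNoteYaw() {
        PhotonPipelineResult result = noteCamera.getLatestResult();
        if (!result.hasTargets()) {
            return OptionalDouble.empty();
        }
        PhotonTrackedTarget best = result.getBestTarget();
        return OptionalDouble.of(best.getYaw());
    }

    /** The tag camera would feed a PhotonPoseEstimator, as sketched earlier in the thread. */
    public PhotonCamera getTagCamera() {
        return tagCamera;
    }
}
```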
Hi. Do you ever observe the problem described in PhotonVision ObjectDetection camera sometimes missing on boot - Technical / PhotonVision - Chief Delphi?
I’m wondering if we should seriously look at Limelight + Coral when we have time to do so.
My team has been seeing some form of that, but we suspect it’s because we’re using two cameras of the same model and it’s getting confused. We haven’t had time yet to swap cameras out and fix things, but we should be able to do so soon.
Thanks. We gave up on using two LifeCams and deleted the PhotonVision config file, and we still see the problem.
I am no longer convinced that having two LifeCams confused the system. I think there is a serious underlying issue with object detection pipelines.