Limelight vs PhotonVision

Is there really any point in using a Limelight camera? I get that it's very good and all, but for like $400, what is the point? Should we just get one or two OV9281s and run them off an Orange Pi?

With Limelights you mainly get ease of use: you pay more, but you get a system that just works, with minimal setup and convenient hardware. PhotonVision can offer higher performance for a lower price, but it requires fairly significant setup, especially the first time you use it, monopolizing your preseason or possibly even more valuable build season time. I also don't think PhotonVision has an out-of-the-box AprilTag model comparable to Limelight's MegaTag2 (someone correct me if I'm wrong), unless you create a custom vision pipeline that combines 2D tag tracking with the gyro for pose determination.
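
For reference, here's roughly what the MegaTag2 flow looks like in robot code, using Limelight's published LimelightHelpers class (method names as of the 2024 releases). This is only a sketch: "limelight" is the default camera name, and the gyro values and poseEstimator field are placeholders for whatever IMU and WPILib pose estimator you're running.

```java
import edu.wpi.first.math.VecBuilder;

// Minimal MegaTag2 loop: feed the gyro heading to the Limelight, then
// consume the resulting field-relative pose estimate.
public void updateVisionPose(double gyroYawDegrees, double gyroRateDps) {
    // MegaTag2 needs the robot's current orientation before it can localize.
    LimelightHelpers.SetRobotOrientation("limelight",
        gyroYawDegrees, 0, 0, 0, 0, 0);

    LimelightHelpers.PoseEstimate estimate =
        LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight");

    // Skip empty results and fast rotation, where MegaTag2 is least reliable.
    if (estimate != null && estimate.tagCount > 0 && Math.abs(gyroRateDps) < 720) {
        // poseEstimator: your SwerveDrivePoseEstimator (assumed to exist).
        poseEstimator.addVisionMeasurement(
            estimate.pose,
            estimate.timestampSeconds,
            VecBuilder.fill(0.5, 0.5, 9999999)); // heading already comes from the gyro
    }
}
```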

Overall, PhotonVision is great if you have time and programmers to throw at it; Limelight is great if you want a vision system working ASAP with minimal work.

I agree completely; they are not comparable. Limelight can't touch the effectiveness and accuracy of PhotonVision in our experience.

MegaTag2 appears to be pretty bad, actually; I've seen a lot of posts complaining about totally random issues. MegaTag2 is more stable than PhotonVision's multi-tag, but because you need to spend so much longer calibrating it to get good results, it kind of evens out.

LL is easier to get going, but it's not going to be very good without some legwork. Installing PV on an Orange Pi 5 is very easy; the problems are just powering it (you need to actually wire it up) and stability concerns with PV. I've definitely had PV crash more than LL, but I know many teams that, once set up, have had no issues.

Personally, I find the price tag of the LL too high for its components, and the customer support / technical documentation is basically non-existent, so I prefer PV. But if you have money and not a lot of programmers, LL is better.

You can find my test results and resources here: Coprocessor - Google Drive

(Disclaimer: 555 only uses Limelights and hasn't tested MegaTag2 results since the 2024.9 release.)

When it was released, MegaTag2 was pretty bad while rotating (which is most of the time) because of the latency involved in sending your gyro measurements to the LL with no timestamp (over NT3).

Limelight’s 2024.9 release supposedly fixes (or at least hopefully mitigates) this by using NT4’s “getAtomic()” method for timestamped gyro measurements.
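
(For anyone curious, the NT4 mechanism here is that a subscriber can read a value together with the time it was recorded instead of assuming "now." A minimal illustration, with a made-up topic name:)

```java
import edu.wpi.first.networktables.DoubleSubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.TimestampedDouble;

public class GyroYawReader {
    // Example topic name only; Limelight's actual internal topic differs.
    private final DoubleSubscriber yawSub = NetworkTableInstance.getDefault()
        .getDoubleTopic("/SmartDashboard/gyroYaw").subscribe(0.0);

    public void read() {
        // getAtomic() pairs the value with its NT timestamp, which is what
        // enables latency compensation on the consumer side.
        TimestampedDouble yaw = yawSub.getAtomic();
        double yawDegrees = yaw.value;
        double measuredAtSeconds = yaw.timestamp / 1e6; // NT time is in microseconds
    }
}
```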

@gopherguts2 While PhotonVision doesn't currently have the same structure for sending timestamped gyro measurements, that doesn't necessarily mean MegaTag2 is better than what PhotonVision has. There is also currently a PR open for it that will hopefully be merged soon.
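
For comparison, PhotonVision's stock AprilTag path on the robot side goes through PhotonLib's PhotonPoseEstimator. The sketch below uses the 2024-era API; the camera name and robot-to-camera transform are placeholders you'd fill in for your own robot:

```java
import java.util.Optional;

import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;

import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Transform3d;

public class Vision {
    private final PhotonCamera camera = new PhotonCamera("frontCam");
    private final PhotonPoseEstimator estimator = new PhotonPoseEstimator(
        AprilTagFields.k2024Crescendo.loadAprilTagLayoutField(),
        PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
        camera,
        new Transform3d()); // measure your real robot-to-camera transform

    public Optional<EstimatedRobotPose> getEstimate() {
        // Multi-tag solve on the coprocessor, falling back to a single-tag
        // strategy when only one tag is visible.
        return estimator.update();
    }
}
```

Notice that nothing here ingests the gyro; that's exactly the gap the open PR is about.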

Our team used PhotonVision with an Orange Pi 5 in 2023 but gave it up in favor of Limelights in 2024 because of the numerous, significant position-tracking issues with PhotonVision that plagued us during and after the season (even one of the PhotonVision promotional videos posted within the past year shows these tracking errors). Our team hasn't really looked at PhotonVision again since January, so it's possible those issues have since been resolved, but overall we found Limelights to be the far more stable and consistent solution.

Also, of the teams that did successfully use Orange Pis for vision in 2023 (and we spoke with a number of them at Worlds that year), most were running one Orange Pi per camera due to the performance limitations and framerate demands of accurate vision processing. At least one of those teams even had an additional Orange Pi with no cameras, just for taking the inputs from the camera-equipped Orange Pis and doing the position computations (in order to free up resources on the roboRIO).

While you can run one Orange Pi 5 per camera, it's not a big performance hit to run two cameras on one board; it's by no means necessary. At the end of the day, the LL has about half the processing power of an Orange Pi.

Others have said the right things, so to drive the point home:

How valuable is your time?

Sure, the Orange Pi is more powerful, but software optimizations come into play too when comparing PhotonVision with Limelight, even in spite of the hardware differences. I will agree, though: when we used PhotonVision in 2023, we also ran two cameras on one Orange Pi (though we also had two 20k RPM server fans to keep it from throttling). It's definitely doable, just perhaps not optimal.

My observations about one Orange Pi per camera were just what we had seen from other teams we thought had done vision better than us that year. When it comes to FRC vision there's "good" and there's "usable," and it really just depends on where your team draws that line.

So, follow-up question: I am, and will continue to be, the only programmer for my team. Would you say just buying the Limelight and forgetting about it would be worth it, to spare me the stress of balancing vision against programming the rest of the robot (and, frankly, probably doing electrical too)?

Also, is it worth buying the Google Coral?

Buying the Coral is only useful if you want to do note detection (mostly for the purpose of automatically aligning to and intaking a note).

We used a Google Coral this year and it is extremely simple to set up, especially since Limelight releases their own detection models. Teams that did note detection by color rather than machine learning might have a more informed opinion on its viability, but I know it can be done.
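
To give a sense of how little robot-side code the detector pipeline needs, here's a hedged sketch of note-chasing with LimelightHelpers. The camera name, gain, and drive object are all placeholders for your own setup, and the rotation sign depends on your drivetrain conventions:

```java
// Steer toward a detected note while driving forward. Assumes the
// Limelight is running a neural detector pipeline (Coral attached)
// and "drive" is a DifferentialDrive (or similar) defined elsewhere.
public void chaseNote() {
    if (LimelightHelpers.getTV("limelight")) {           // note detected?
        double tx = LimelightHelpers.getTX("limelight"); // degrees off center
        double kP = 0.02;                                // tuning guess
        drive.arcadeDrive(0.5, -kP * tx);                // tx > 0 = turn clockwise
    } else {
        drive.arcadeDrive(0.0, 0.0);
    }
}
```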

This really depends on your team's finances and your willingness to tinker with PhotonVision and the hardware a bit more. You could also argue that the hardware for PV is more future-proof, since the Orange Pi (if you're going for that) is (AFAIK) substantially more powerful than any hardware Limelight is currently packaging.

I personally love Limelights for their ease of use, but they're also pretty expensive given the hardware inside.

If you're the only programmer, get a LL. But do know that vision processing isn't a must for most games, and you should make sure your fundamentals like PID and odometry are solid before you integrate vision. Point-to-target should be your goal to start with, and LL is good for that.
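
A minimal point-to-target starter, sketched with WPILib's PIDController on the Limelight's tx value (gains and camera name are placeholders, not tuned recommendations):

```java
import edu.wpi.first.math.controller.PIDController;

// Drive the Limelight's horizontal offset (tx, in degrees) to zero.
private final PIDController aimController = new PIDController(0.03, 0.0, 0.001);

public double aimRotationOutput() {
    double tx = LimelightHelpers.getTX("limelight");
    // calculate(tx, 0) yields roughly -kP * tx: a positive tx (target to
    // the right) produces a negative, i.e. clockwise, rotation command
    // under WPILib's CCW-positive convention.
    return aimController.calculate(tx, 0.0);
}
```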

You have four months to learn almost everything you’ll need to know about programming and electrical. Go back to this pertinent question by taking an inventory of what the team wants to accomplish in programming and electrical and compare that to an inventory of what you know.

Do you need to be fluent in swerve programming, accurate CTRE drivetrain odometry, PIDF controllers (with and without motion profiles), Choreo, PathPlanner, chasing game pieces, driving, logging, LED signaling, etc.? Are you already fluent? What is vision to be used for, and do you already know much about it? Do you need to, and do you have the time to, devote a couple of hours a day for the rest of the year to this?

Everything takes longer than you expect. Limelight claims to be easy, but I'll point out that my team consumed days trying to handle pose ambiguity. If you have expectations of getting to Worlds again, then it's hard (impossible?) to put in enough hours to do so. If you want to field a credible robot that is better than last season's and might get picked to advance in your state, then hard work could get you there if you pick the right subjects (battles) to learn about and do well.
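
For what it's worth, the usual mitigation (sketched here with PhotonVision's per-target ambiguity score, since that's the API that exposes one directly) is to simply reject sketchy single-tag estimates. The 0.2 cutoff is a common rule of thumb, not gospel:

```java
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

// Accept a single-tag result only when the PnP solver's two candidate
// poses are clearly distinguishable (ambiguity near 0, not near 1).
public boolean isTrustworthy(PhotonPipelineResult result) {
    if (!result.hasTargets()) {
        return false;
    }
    PhotonTrackedTarget best = result.getBestTarget();
    double ambiguity = best.getPoseAmbiguity();
    return ambiguity >= 0 && ambiguity < 0.2; // -1 means ambiguity wasn't computed
}
```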
