Limelight OS 2023.1 Performance Tests on LL2+

Update: I have an LL2+ and I am able to replicate Brandon’s 44fps example, but the range is awful. Raising the black level offset makes my image disappear completely. The lighting in my house is not super bright, but I have a set of very overpowered LEDs in one fixture that are more than bright enough for other cameras to perform well. On other coprocessors, I have had no trouble detecting tags from 24 feet away at 640x480 with no downscaling, and 1280x720 scrapes 32+ feet (the length of my dining and living room).

Here’s a link to all of my testing:

A sneak peek at the performance in that doc if you don’t want to click a link:

Now, this is with the same target that I used in my other tests: a 16h5 tag of official size with no lamination or other coating, i.e. non-gloss. Maybe Brandon is using an official target or a shiny vinyl target of some kind.

I will update the testing doc with the results of Photonvision running on the Limelight by Tuesday night or Wednesday. I also need to get my hands on a stock Pi 3 and a Pi 4 to repeat the same battery of tests with Photonvision.

I will not draw any definitive conclusions about LL OS until Brandon publishes official guidance on how to perform these tests, because these results are so far quite poor compared to the other coprocessors I have tested, and well below the marketing numbers for the LL.

Also, please note that my tests are benchmarks meant to be compared against the other tests in my collection, and your results might be different, as Brandon mentioned in his original post. The biggest factor is that while my house is well lit, team shops and fields are definitely brighter, so the maximum range (when limited by ambient brightness) might increase by maybe 20-30% depending on the settings. However, I would not expect dramatically better performance than what I’m getting out of the box, and I certainly would not expect 40-50fps unless you tank your range to under 8 feet.

I have no plans to test 3D localization performance, as that depends heavily on how many targets are in view, which in turn depends on where you are on the field and how much range you have… that’s a statistics problem for sure.

A graph comparing framerate among different coprocessors, from my big folder:


Note that the LL bar has ~8% less range than the other processors listed.

For now, if your team is looking at purchasing a Limelight, I strongly recommend considering other options until the firmware is dramatically improved or performance numbers for the LL3 are released, with range information included. It’s clear that Limelight OS 2023.1 struggles to match Photonvision’s performance in its current state, unless I have completely screwed up the tuning, which is entirely possible; I hope someone can point me in the right direction if that’s the case.


I have completed my tests with Photonvision. In most tests on the LL2+, Photonvision provides higher framerates and longer range. LL OS performs better at very low resolutions with high decimate values, most likely because it seems to do better with darker images; Photon prefers brighter images.
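The resolution/decimate tradeoff above can be sketched with a simple pinhole-camera estimate: the tag’s on-sensor size shrinks linearly with distance and with the decimate factor, so halving the effective resolution roughly halves your maximum detection range. Everything here is an illustrative assumption on my part (the function name, tag size, FOV, and defaults are not from the Limelight or Photonvision docs):

```python
import math

def tag_pixel_width(distance_m, tag_size_m=0.1524, hfov_deg=63.0,
                    image_width_px=640, decimate=1):
    """Approximate on-sensor width of an AprilTag, in pixels, after decimation.

    Pinhole model: the focal length in pixels is derived from the horizontal
    FOV, and the tag's projected width falls off linearly with distance.
    All defaults (6-inch tag, 63-degree HFOV, 640px width) are assumptions
    for illustration, not measured camera values.
    """
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return (focal_px * tag_size_m / distance_m) / decimate

# The detector needs the tag to span some minimum number of pixels, so
# doubling decimate (or halving resolution) roughly halves max range.
for feet in (8, 16, 24, 32):
    px = tag_pixel_width(feet * 0.3048)  # convert feet to meters
    print(f"{feet:2d} ft -> ~{px:.1f} px across the tag")
```

This is why the low-resolution, high-decimate settings that hit 40-50fps also collapse the usable range: the pixel budget per tag just isn’t there.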

I have updated the testing document linked in the original post with the full test results.

Processor roundup updated chart: