What does the detector downscale do? The Limelight docs say, “Increasing detector downscale will always increase pipeline framerate. It will decrease effective range, but in some cases this may be negligible. It will not affect 3D accuracy, 3D stability, or decoding accuracy.” But when I increase the detector downscale, the FPS in the resolution+FPS selector stays the same (22 FPS).
In computer vision, “downscale” refers to combining pixels, effectively reducing the resolution (and sometimes the pixel magnitudes, depending on how you do it, e.g. max vs. mean pooling).
Less data to process → faster processing rate.
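A minimal sketch of what that combining looks like, in plain NumPy (a hypothetical helper for illustration, not Limelight's actual implementation):

```python
import numpy as np

def downscale(img, factor, mode="mean"):
    """Combine factor x factor pixel blocks into one pixel.

    mode="mean" averages each block; mode="max" keeps the brightest pixel.
    Hypothetical helper for illustration only, not Limelight's code.
    """
    h, w = img.shape
    h -= h % factor  # trim so the dimensions divide evenly by the factor
    w -= w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    if mode == "mean":
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake grayscale frame
small = downscale(frame, 2)  # 240x320: a quarter of the data per frame
print(frame.shape, "->", small.shape)
```

Either way, the detector sees a 2x downscaled frame as one quarter of the original pixels, which is where the speedup comes from.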
“Effective range” likely refers to the resolution and thus the ability to resolve detail: with fewer pixels, the camera has to be closer to a fixed-size object (the tag) before it can be detected.
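A rough back-of-the-envelope with a simple pinhole model (the focal length and distance here are made-up numbers, not Limelight's calibration):

```python
# Pinhole model: tag_width_px ~= focal_length_px * tag_width_m / distance_m.
# Halving the resolution halves focal_length_px, so the tag covers half as many
# pixels and you must get ~2x closer to resolve the same detail.
focal_length_px = 600.0  # assumed full-resolution focal length (made up)
tag_width_m = 0.165      # roughly the 6.5 in black square of an FRC AprilTag
distance_m = 3.0

full_res = focal_length_px * tag_width_m / distance_m
downscaled = (focal_length_px / 2) * tag_width_m / distance_m
print(f"~{full_res:.0f} px wide at full res, ~{downscaled:.0f} px after 2x downscale")
```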
Depending on your application, downscaling (or the degree to which you downscale) may impact accuracy.
As an aside: in other areas of science, “downscale” can refer to increasing the resolution, so it can get very confusing when you are using computer vision in a domain where the term has the opposite meaning.
Where do I check how much the FPS has increased?
There may be something else in the pipeline that is acting as a bottleneck.
Limelight isn’t my forte, so others will have to chime in or you will have to read the documentation.
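That said, one rough way to check the actual processed-frame rate (a sketch, assuming I'm reading the Limelight NetworkTables docs right: they describe an “hb” heartbeat entry that increments once per processed frame) is to count heartbeats over a few seconds with pynetworktables. The server address below is a placeholder:

```python
import time
from networktables import NetworkTables

# Connect to NetworkTables (replace the address with your robot/Limelight's).
NetworkTables.initialize(server="10.0.0.2")
time.sleep(1.0)  # give the client a moment to connect and sync

table = NetworkTables.getTable("limelight")
start = table.getNumber("hb", 0)  # "hb" heartbeat, per the Limelight NT docs
time.sleep(5.0)
frames = table.getNumber("hb", 0) - start
print(f"pipeline fps ~= {frames / 5.0:.1f}")
```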
Oh, thank you, I didn’t see that.