Limelight Neural Net Specifications

Hello everyone,

I’ve been trying to train a detection model for Limelight 3. It works reasonably well without quantization, but I’ve read that quantization can significantly improve model FPS and latency.

I’ve been experimenting with tensorflow/tflite_model_maker using efficientdet_lite. I tried dynamic range and float16 quantization, both of which break the Limelight pipeline and essentially require reimaging the Limelight. I haven’t tried int8 quantization yet. For reference, here’s roughly what I’ve been running (a sketch; the dataset paths, label map, and hyperparameters are placeholders for my setup):
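
```python
# Minimal sketch: train EfficientDet-Lite0 with tflite_model_maker and
# export with full-int8 post-training quantization.
# Dataset paths and the label map below are placeholders.
from tflite_model_maker import model_spec, object_detector
from tflite_model_maker.config import QuantizationConfig

spec = model_spec.get('efficientdet_lite0')

# Load a Pascal VOC style dataset (images + XML annotations).
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir='images/train',
    annotations_dir='annotations/train',
    label_map={1: 'game_piece'},  # placeholder label
)

model = object_detector.create(train_data, model_spec=spec, epochs=50,
                               batch_size=8, train_whole_model=True)

# Full-int8 quantization; the training DataLoader doubles as the
# representative dataset the converter uses to calibrate activations.
config = QuantizationConfig.for_int8(representative_data=train_data)
model.export(export_dir='.', tflite_filename='model_int8.tflite',
             quantization_config=config)
```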

Earlier I used yolov8 instead of efficientdet_lite, which the Limelight does not support (I learned this when it blew up a pipeline), and now it seems that some quantization settings also break the pipeline. Does anyone know the exact specifications and requirements for tflite models on the Limelight? I can’t find this information anywhere beyond the fact that it needs to be a tflite model.

Thanks so much.

Ah, so that explains why our Limelight bricked when I tried one of the models today…

The Limelight uses a Google Coral, which has strict restrictions on what types of models you can run on it. You can read more on their model compatibility page, but generally, you have to train a lightweight model like EfficientDetLite0/1 or MobileNetSSDV2, quantize it to int8, and then recompile it for the Coral with the Edge TPU compiler. Roughly, the conversion path looks like the sketch below (the SavedModel path and representative dataset generator are placeholders):
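
```python
# Sketch of a Coral-compatible conversion: full-int8 post-training
# quantization of a SavedModel, followed by the Edge TPU compiler.
# 'saved_model/' and representative_images() are placeholders.
import numpy as np
import tensorflow as tf

def representative_images():
    # Yield a few hundred preprocessed sample frames shaped like the
    # model input, e.g. (1, 320, 320, 3) float32; random data here
    # only as a stand-in for real calibration images.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
# Force every op to int8 so the Edge TPU can map the whole graph.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_int8.tflite', 'wb') as f:
    f.write(converter.convert())

# Then compile for the Coral on a Linux machine:
#   edgetpu_compiler model_int8.tflite
```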


Thanks so much for the link. It looks like the model has to be 8-bit integer quantized, which explains why float16 and dynamic range broke the pipeline. For anyone who finds this later, here’s a quick way to check what an exported model actually ended up as before loading it onto the Limelight (the model filename is a placeholder):
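
```python
# Sanity check: inspect the input/output dtypes of an exported .tflite
# model. Coral-compatible models should report uint8 or int8 tensors.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_int8.tflite')
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print('input dtype:', inp['dtype'])    # expect uint8/int8, not float32
print('output dtype:', out['dtype'])
```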
