Hello everyone,
I’ve been trying to train a detection model for the Limelight 3. It works reasonably well without quantization, but I’ve read that quantization can significantly improve model FPS and latency.
I’ve played around with tensorflow/tflite_model_maker using efficientdet_lite. I tried dynamic range and float16 quantization, both of which broke the Limelight pipeline and essentially required reimaging the Limelight. I haven’t tried int8 quantization yet.
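For reference, here’s what I’ve been attempting for full int8 quantization, written with the plain TFLiteConverter API rather than tflite_model_maker (I assume the converter settings are equivalent; the tiny stand-in model and random representative dataset below are just placeholders for illustration):

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice this would be the trained efficientdet_lite model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,)),
])

# int8 quantization needs a representative dataset so the converter can
# calibrate activation ranges; real training images would go here.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 ops and force int8 input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()

# Sanity check: the converted model's input tensor should be int8.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_dtype = interpreter.get_input_details()[0]["dtype"]
print(input_dtype)
```

I don’t know whether the fully-int8 input/output tensors are what the Limelight pipeline actually expects, which is part of what I’m asking.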
Earlier I used YOLOv8 instead of efficientdet_lite, which Limelight does not support (I learned this when it blew up a pipeline). Now it seems that some quantization settings also break the pipeline. Does anyone know the exact specifications and requirements for TFLite models on the Limelight? I can't find this information anywhere beyond the fact that it needs to be a TFLite model.
Thanks so much.