@veg Thanks! We are getting similar performance using last year's wpilibpi version on a Raspberry Pi 4B with a Coral, between 14 and 16 fps. We are labeling and training with Roboflow to TensorFlow (TFRecord), then converting to TFLite.
We did test a bit with Teachable Machine, but it appeared it would only create classification models; is that correct?
Using last year's basic Python inference script, you can easily test the model on your device and see the labeled bounding-box detection in action.
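The core of that kind of script is the post-processing step: a TFLite SSD-style detector typically returns four output tensors (boxes, class ids, scores, detection count), which get filtered by a confidence threshold before drawing. A minimal sketch of that step, with illustrative label names and threshold (not the exact wpilibpi script):

```python
def parse_detections(boxes, classes, scores, count, labels, min_score=0.5):
    """Return (label, score, box) tuples for detections above min_score.

    boxes: list of [ymin, xmin, ymax, xmax] in normalized 0-1 coordinates,
    as produced by a typical TFLite SSD detection model.
    """
    results = []
    for i in range(int(count)):
        if scores[i] >= min_score:
            results.append((labels[int(classes[i])], scores[i], tuple(boxes[i])))
    return results

# Example with synthetic tensor values (labels are made up):
labels = ["cone", "cube"]
boxes = [[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.1, 0.1]]
dets = parse_detections(boxes, [0, 1], [0.9, 0.3], 2, labels)
# Only the first detection passes the 0.5 threshold.
```

The `(label, score, box)` tuples can then be drawn on the frame or published for the robot code to consume.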
I was wondering if labeling could be partially automated using a tool like this, or maybe using a basic blob-detection routine. But then I think: if an algorithm could reliably auto-label the images, there would be no need to train a net.
Automated labeling is available. In reference to @joseppi's post, you can use trained projects on Roboflow for labeling, whether it's your own project or one that is publicly available on Roboflow Universe:
Auto-labeling solutions are all using a trained model in the background. Whether it is a trained model that is exposed/known to the user, or a “hidden” model that the user can’t identify.
By the way, for full disclosure, I work in Developer Experience at Roboflow.
I'm seeing something unexpected here. I am writing an app to run various .tflite models, and coded it to run either with or without a Coral attached. On an RPi 400 with 4 GB of RAM I am getting about the same performance with or without the Coral, roughly 7.5-15 fps.
I am not running the Coral at its 2x clock speed, so it can probably still beat the Pi. But this makes me wonder whether teams without a Coral could still consider running ML models.
Have you benchmarked without a Coral, and if so, do you see anything similar?
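For anyone wanting to reproduce this comparison, one simple approach is to count frames against wall-clock time around the inference loop. A minimal sketch (the counter is generic; the actual `interpreter.invoke()` call would go inside the loop):

```python
import time

class FPSCounter:
    """Rolling frames-per-second counter for benchmarking an inference loop."""

    def __init__(self):
        self.frames = 0
        self.start = time.monotonic()

    def tick(self):
        # Call once per processed frame.
        self.frames += 1

    def fps(self):
        elapsed = time.monotonic() - self.start
        return self.frames / elapsed if elapsed > 0 else 0.0

# Usage sketch: run the same loop twice, once with the EdgeTPU delegate
# loaded and once without, and compare counter.fps() after ~100 frames.
```

Running the identical model and script in both configurations is the key to a fair comparison; small models can be memory-bound on the Pi, which would explain similar numbers with and without the accelerator.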
Our team has not been using an ML approach this season, so I have not carried this project any further. It should be trivial to take the information that is being overlaid on the video stream and publish it to NetworkTables, but I don't have a working setup at the moment to be able to test this here.
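A sketch of what that NetworkTables publishing could look like, untested and with assumed table/entry names and detection format. Since NetworkTables entries hold flat arrays, the detections first get split into parallel arrays:

```python
def flatten_detections(detections):
    """Split [(label, score, (ymin, xmin, ymax, xmax)), ...] into
    parallel arrays suitable for NetworkTables array entries."""
    labels = [d[0] for d in detections]
    scores = [d[1] for d in detections]
    boxes = [coord for d in detections for coord in d[2]]  # 4 values per box
    return labels, scores, boxes

# With pynetworktables (untested sketch; "ML" table name and the standard
# 10.TE.AM.2 roboRIO address are illustrative):
#   from networktables import NetworkTables
#   NetworkTables.initialize(server="10.TE.AM.2")
#   table = NetworkTables.getTable("ML")
#   labels, scores, boxes = flatten_detections(dets)
#   table.putStringArray("labels", labels)
#   table.putNumberArray("scores", scores)
#   table.putNumberArray("boxes", boxes)
```

The robot code would then read the three arrays each loop and re-associate them by index.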