Hey Brandon, is there an ETA for the pre-trained object detection model? The documentation says game-specific models should arrive mid-season, but it's unclear whether that means the middle of the whole, build, or competition season.
If you are trying to align an intake on a game piece, rather than locate game pieces across the larger field, you may be better served by a Python OpenCV color pipeline. That is what our team decided to use; we expect it to be both faster and more reliable than an ML pipeline.
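For anyone curious, here is a rough sketch of what that kind of pipeline can look like. The HSV bounds and the camera index are placeholders, not values we have tuned; you would adjust them for your own camera and lighting.

```python
# Minimal HSV color-threshold sketch for intake alignment.
# LOWER_HSV/UPPER_HSV and the camera index are assumed placeholder values.
import cv2
import numpy as np

LOWER_HSV = np.array([20, 100, 100])   # assumed lower bound for a yellow game piece
UPPER_HSV = np.array([35, 255, 255])   # assumed upper bound for a yellow game piece

cap = cv2.VideoCapture(0)              # assumed USB camera at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Threshold in HSV space and clean up noise with a morphological open.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Take the largest blob and compute its horizontal offset from image center,
    # which a drivetrain controller could use to center the intake on the piece.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        offset_px = (x + w / 2) - frame.shape[1] / 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("color pipeline", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```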
Further question: will the object detection model be usable only through the Limelight + USB Coral, or will the .tflite models be released to the community?
@Tom_Bottiglieri It’s the only thing we’re working on at this point and we really want it out before the weekend. Future models certainly won’t take as long as this one.
@darthnithin You can run it wherever you want. We will only have quantized, Coral-optimized models available.
Most have indicated that they only care about cones and cubes, so this model is going to focus on high-quality cone and cube detection (no robots, no tipped cones). We can split some of the data out and train a model that also detects tipped vs upright cones if there is demand for this.
Thanks @Brandon_Hjelstrom. IMHO, tipped-cone detection is a "nice to have". If it delays the model for cones and cubes, or reduces its accuracy, I would leave it out.
@Brandon_Hjelstrom we connected our Limelight 3 to a D-Link network switch and the Ethernet lights didn't turn on at all. When we connected it to the radio's second port, it worked fine. The Limelight 2+ works with the same network switch without issue. We tried setting a static IP, but it didn't help.
If I'm not mistaken, one must create a regular TensorFlow model in the process of creating an Edge TPU model anyway. Why not release those as well? This would allow many teams to explore unique vision solutions.
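For context, the standard Coral export path looks roughly like the sketch below: a regular TensorFlow model is quantized to an ordinary .tflite file first, and only then compiled for the Edge TPU, so the un-compiled model exists as an intermediate. The SavedModel path, input shape, and representative-dataset generator here are placeholders, not details of Limelight's actual pipeline.

```python
# Rough sketch of the usual TF -> quantized .tflite -> Edge TPU flow.
# "exported_model/" and the 320x320x3 input shape are assumed placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a handful of sample inputs shaped like the model's input.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

# This writes a plain quantized .tflite model that runs on any TFLite runtime.
with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# The Edge TPU-specific step is a separate compile of that file, e.g.:
#   edgetpu_compiler model_quant.tflite
```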