Did Madtown use Google Teachable for their machine learning classifier?

On the Limelight page, they tell you to use Google Teachable to train a classifier model to detect game objects. I know that option is very convenient and simple, but I was wondering whether Madhouse might have used something better than that for the 2023 game, such as a custom machine learning algorithm for their classifier. My team recently bought a Google Coral, so we’d very much like to consider such possibilities.

Cheers guys

1 Like

I’m pretty sure they just used the pre-trained model that Limelight provides on their website (same as 4414 in the offseason).

3 Likes

Odd, I had always assumed it came out after the competition, since I saw teams talking about which algorithms to use, which shouldn’t have been the case if a model was already available.

I am aware of the pre-built model, but I’m more interested in making it detect future game pieces.

I’d look into training your own tflite model

1 Like

Honestly, yeah, I feel like I should have tested making a model and compared it to Google Teachable before posting here, but it’s still good to know what to consider beforehand.

I can ask them at the Madhouse Botjoust this weekend.

13 Likes

Many thanks!

3 Likes

may have needed a /s

(@d2i-23 Sorry to be the CD grammar police, but it’s Madtown not Madhouse lol.)

The one that is bundled with Limelight works really well as is. My team (341) started testing with it during the offseason, and it helped us finally hit our 3 piece auto. Video

We are the red robot on the bottom left. Near the end of our auto path we switch over to the Limelight on our arm running the pre-trained tflite model. That model outputs an angle relative to the camera (all of the outputs are documented in the Limelight docs), and all we do is turn in that direction and drive until the beam break on our intake is tripped. Then we switch back to our normal PathPlanner path and do the same thing with the second game piece. (The announcer made a mistake when saying we have lidar in the video; I think he meant to say Limelight, as we don’t have lidar.)

The main shortcoming we have found with the pre-trained Limelight model is that it doesn’t tell you the game piece’s (cone’s) orientation, only where it is relative to you. This means that if you want to pick up cones automatically with AI, it requires some extra precision/accuracy. If you were to train your own model, I would include extra labels for the orientation of the cone.
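To make the “turn toward the angle and drive until the beam break trips” idea concrete, here’s a minimal sketch (not our actual code) of that logic. It assumes a robotpy project, a Limelight publishing “tv”/“tx” on the “limelight” NetworkTables table, a beam break on DIO 0, and a hypothetical drivetrain wrapper with an arcade-style drive method.

```python
# Minimal sketch: drive toward a detected game piece using the Limelight's
# reported horizontal angle until an intake beam break trips.
import ntcore
import wpilib


class PieceChase:
    def __init__(self, drivetrain):
        self.drivetrain = drivetrain  # hypothetical drivetrain wrapper
        table = ntcore.NetworkTableInstance.getDefault().getTable("limelight")
        self.tv = table.getEntry("tv")  # 1.0 when a detection is valid
        self.tx = table.getEntry("tx")  # horizontal offset to detection, degrees
        self.beam_break = wpilib.DigitalInput(0)  # reads False when blocked (typical wiring)
        self.kP = 0.02           # turn gain: degrees -> [-1, 1] rotation command
        self.forward_speed = 0.4

    def execute(self) -> bool:
        """Returns True once the game piece is in the intake."""
        if not self.beam_break.get():      # beam broken -> piece acquired
            self.drivetrain.arcade_drive(0.0, 0.0)
            return True
        if self.tv.getDouble(0.0) < 0.5:   # no detection, hold still (or keep path-following)
            self.drivetrain.arcade_drive(0.0, 0.0)
            return False
        turn = self.kP * self.tx.getDouble(0.0)
        self.drivetrain.arcade_drive(self.forward_speed, turn)
        return False
```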

3 Likes

Yes, that is what I noticed too after working with it not long ago, especially the cone orientation and the questionable bounding box accuracy. This just reinforces my belief that there’s no way Madtown used this.

I’m currently looking at training my own, but one thing that piqued my interest is the potential use of a GNN on the tips of a cone to determine its orientation, plus a YOLO algorithm to detect the keypoints that feed the GNN. This should allow determining the angle plus the orientation, similar to how AI body-motion detection works, where dots (nodes) are placed all over the person and their motion is determined by the formation of the nodes connected by edges. I view that as a more accurate solution to the orientation problem, but I will have to test its plausibility.
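As a rough illustration of the keypoint idea (purely hypothetical, and skipping the GNN entirely): if a keypoint detector can give you the pixel location of the cone’s tip and the centroid of its base, the in-image orientation falls out of a single atan2.

```python
# Illustrative only: computing a cone's in-image orientation from two
# hypothetical keypoints (tip and base centroid) produced by a keypoint detector.
import math


def cone_orientation_deg(tip_xy, base_xy):
    """Angle of the base->tip vector in image coordinates, in degrees.

    0 deg = cone pointing right in the image, 90 deg = pointing "up"
    (image y grows downward, hence the negated dy).
    """
    dx = tip_xy[0] - base_xy[0]
    dy = tip_xy[1] - base_xy[1]
    return math.degrees(math.atan2(-dy, dx))


# Example: tip to the upper-right of the base -> roughly 45 degrees.
print(cone_orientation_deg((420.0, 180.0), (380.0, 220.0)))
```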

No, Madtown used this. Its accuracy is actually quite good; it just requires some tuning. If you check out Madtown’s Behind the Bumpers, at 7:50 they mention how they use the pre-trained model. Also, if there was a misunderstanding: the angle that we get is not the angle of the cone, but rather the angle from the center of the game piece’s bounding box to the crosshair of the Limelight.

1 Like

Oh I see, my apologies. Do you have any tips on tuning it?

I don’t quite remember all of the parameters you can change, but one of the most important for us was cropping the area where we were looking for pieces. That way we were only looking for them roughly where we expected them to be, not across the whole field. The cropped region can be changed during the match, so we had one crop for auto and one for our driver in teleop. Make sure you tune the brightness of the camera, and another important parameter is the sensitivity of the model (I forget exactly what this one is called, sorry, but it is obvious from the Limelight web interface). Tuning it for us was mostly trial and error until we got it right.
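For the dynamic cropping part, here’s a sketch of how that could look from robot code. It assumes the “crop” NetworkTables entry described in the Limelight docs (a [X0, X1, Y0, Y1] array normalized to -1..1, with the pipeline’s crop left at its default in the web UI); double-check the current docs, and the crop values themselves are just placeholders.

```python
# Sketch: switching the Limelight's crop window between a narrow "auto" crop
# (looking only where staged game pieces should be) and a full-frame view for teleop.
import ntcore

limelight = ntcore.NetworkTableInstance.getDefault().getTable("limelight")

# Example values only; tune them to where you expect pieces to appear.
AUTO_CROP = [-0.5, 0.5, -0.8, 0.0]    # narrow window low in the image
TELEOP_CROP = [-1.0, 1.0, -1.0, 1.0]  # full frame


def set_crop(window):
    limelight.getEntry("crop").setDoubleArray(window)


# Call set_crop(AUTO_CROP) in autonomousInit() and set_crop(TELEOP_CROP)
# in teleopInit(), or whenever you switch modes.
```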

1 Like

Just as a note, Limelight will most likely have another pre-trained model for next year’s game pieces.

1 Like

Kinda cool seeing their autos. I haven’t seen their code, but it seems like you can see their object detection kick in.

1 Like

Yeah it is really cool to watch, Madtown does it really well. It’s something that I wish we had thought about earlier in the season because we were having issues with auto reliability all year. I think we were something like 0 for 30 on our 3 piece high link auto. It was so infuriating having the auto work perfectly on the practice field, and then miss every time during a match. AI is definitely something teams should keep in mind for next year to improve auto reliability.

Proper pose estimation is really the most important part. There’s a lot of good stuff out there about teams doing it, such as 6328. My guess is Madtown is still doing that, but then somehow switches into a feedback loop on the object.

Part of it too is how well designed their machine is. We could never get anything beyond 3 low working because of the amount of time it took us to score high. They drop that first cone mid in under a second, and can just smoothly drive backwards for object 2, with a very smooth spin around for object 3. They are able to drive slower through the whole thing which I’m sure helps with limiting odometry drift.

1 Like

We would have had the same problem as you; our arm is nowhere near as fast as Madtown’s. What allowed us to do the 3 piece high was that we copied 118’s one-time-use cube catapult. This means we don’t have to waste time at the start of auto scoring the game piece we start with.

I played around with Google Teachable models a bit last season. They were super easy to create, but I could only find ways to make a classifier model (to tell you what objects are visible to the camera), not one that would provide bounding rectangles or positional information for those objects. I was probably overlooking something, but without that capability it seems that Google Teachable would have limited application for game piece acquisition.

Is it possible to get Teachable to output position/bounding boxes?

1 Like

Yes, but you need to train your own model with TensorFlow, specifically in the Google Colab notebooks (tflite-model-maker does not work on Windows). Don’t worry, it’s a bit of Python code, but the model itself is provided.

Preparing the data for these models also requires software to draw the bounding boxes, but that should be covered in the link.
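For reference, here is a rough outline of what that Colab workflow tends to look like with tflite-model-maker. The label names, paths, and hyperparameters here are placeholders, not anything from the posts above.

```python
# Rough outline of training a TFLite object-detection model with
# tflite-model-maker (run in Google Colab; the package does not install
# cleanly on Windows). Paths, labels, and hyperparameters are placeholders.
from tflite_model_maker import model_spec, object_detector

# Labeled images in Pascal VOC format (e.g. exported from a labeling tool
# such as labelImg): one folder of images, one folder of XML annotations.
label_map = {1: "cone", 2: "cube"}
train_data = object_detector.DataLoader.from_pascal_voc(
    "dataset/train/images", "dataset/train/annotations", label_map)
val_data = object_detector.DataLoader.from_pascal_voc(
    "dataset/val/images", "dataset/val/annotations", label_map)

# EfficientDet-Lite0 is the smallest of the provided detector specs and a
# common starting point for Coral / embedded targets.
spec = model_spec.get("efficientdet_lite0")

model = object_detector.create(
    train_data,
    model_spec=spec,
    validation_data=val_data,
    epochs=50,
    batch_size=8,
    train_whole_model=True,
)

print(model.evaluate(val_data))

# Exports a quantized .tflite file; for a Google Coral you would then run
# it through the edgetpu_compiler tool.
model.export(export_dir=".", tflite_filename="gamepiece_detector.tflite")
```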