Query on Machine Learning Plans for 2023

What are other teams planning on experimenting with for Machine Learning on the bot this year?
What hardware do you plan to use?

Vision Plans for 2023
  • WPILibPi w Google Coral
  • Jetson
  • Google Coral Dev Board
  • Khadas VIM3/4
  • RockChip
  • Intel OpenVINO
  • Can’t Get Parts so Why Bother
  • Other Hardware


I dunno what (if anything) 449 plans to do, but we do still have a Jetson lying around from a few years ago and it’d be a shame if we never used it.


Our programmers are experimenting with the Jetson TX we got a few years ago from FIRST Choice. They’re trying to do some machine learning relating to visual target recognition, but the details are beyond me (not being a programmer).


I’m curious to know if teams had success with ML in 2022. I know a lot of teams looked to do ball tracking with machine learning in 2022, but I wasn’t able to find many posts/threads of teams touting huge success with this sort of thing.

I built a ML model for the yellow balls in 2020 - it was fun and a neat little proof of concept, but it was pretty expensive to label + train (I outsourced the labeling - so expensive to label, resource/time intensive to train). Once I was done I realized I’d solidly over-engineered a pretty simple problem - finding a bright yellow object on grey carpet.

The 2022 balls were obviously different - the red/blue balls presented the challenge of distinguishing them from other red/blue field elements (bumpers). A lot of basic CV solutions proved to be pretty mediocre (Pixy, RPi + PhotonVision, Limelight, etc.). A lot of this seemed due to a mix of calibration and differing lighting conditions, nuances with different cameras, the speed of available coprocessors, etc.

I started wondering if ML would have made for a more robust solution - but I’m not necessarily convinced. I imagined going to an event with an ML ball tracking solution and having an issue with tracking balls on a field due to different lighting conditions - which was an issue for a lot of teams this year. Changing the camera input and running the model against some images to get a level of confidence might be the best tuning route - but it seems VERY time consuming, compared to changing sliders in PhotonVision or Limelight and seeing the results in realtime. The idea of retraining the model at the event with new labeled data from video taken during a practice match or something seems unlikely (although obviously - that’s the last lever you’d pull).

I’m also not sure that running a ML model is computationally lighter than PhotonVision, so I’d still have concerns about running on something like a RPi and getting the performance we’d want. (Edit: This post talks about running an ML model on a RPi with/without a Coral - maxing out at 30fps with a Coral, which is better than the 3-5fps without, but still not great for a moving robot)

And finally - at the end of the day, it’s still just trying to solve the problem of finding a colored circle on grey carpet. I’m not sure ML would do this THAT much better than the existing CV algorithms available to teams in order to justify the cost of getting it all up and running.

I’d love to hear counter takes to this, success stories, or teams using ML for something besides ball tracking in 2022. ML is super cool - I wish I could teach students about how it works and get them interested in this sort of thing. It’s much more inspirational than changing sliders in a UI someone else built. The cost just seems very high, and the effectiveness does not seem better than simpler approaches.


Is that the only problem ML is capable of solving?

The ML worked pretty well (even though our driver didn’t use it)

We had a Pi + Coral solution. The model would detect a ball, then we added Python code to determine whether it was red or blue and publish that to NetworkTables.
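A minimal sketch of that "detect, then classify" step. This assumes the detector has already produced a bounding-box crop as a list of RGB pixel tuples; the helper names (`mean_channels`, `classify_ball`) are illustrative, not from any team's actual code.

```python
# Sketch: classify a detected ball as red or blue by channel dominance.
# The detector's output format and these function names are assumptions.

def mean_channels(pixels):
    """Average (r, g, b) over a list of RGB tuples from the cropped box."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def classify_ball(pixels):
    """Label the detection 'red' or 'blue' by whichever channel dominates."""
    r, _, b = mean_channels(pixels)
    return "red" if r > b else "blue"

# In the real pipeline the label would then be published to NetworkTables,
# e.g. table.putString("ballColor", classify_ball(crop)) -- omitted here.
```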

I was hoping to find a way to get a more turnkey vision system that could run both an OpenCV pipeline and ML.

Then the students could spend the time on making the model and the rest of the bot program.

In the real world my company uses it for a whole lot more. Not just vision, but anomaly detection and automatic systems remediation.

We both know it’s not. That’s why I ended the post by asking what other applications teams have had success with for ML in FRC.


I don’t know anything. I’m just here so I don’t get fined.


For those that say you can’t get parts… For about the price of a Falcon 500 you can get a solution that gives you 40+ fps and is actually available.

I just tested it on my mini server based on a Celeron N5095 using OpenVINO and the integrated GPU.




A challenge I’d pose to teams is to try to use machine learning to identify system degradation.

In the past this didn’t make sense for most teams: their comp bot saw so little wear that detecting degradation was hard. But now that many teams aren’t building practice bots and are putting that wear on their comp bots, having a system that can say “this part is worn out” may be viable.

A first proof of concept I’d love to see would be detecting that tread needs to be changed. The metrics I’d start observing would be acceleration and, in the case of swerve, azimuth current draw.
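A rough sketch of what that check could look like: watch one telemetry metric (here, azimuth current draw) and flag when recent values drift well above a baseline window. The window sizes and z-score threshold are made-up placeholders, not tuned values.

```python
# Sketch: flag possible tread/mechanism wear when the recent mean of a
# telemetry metric drifts several sigmas above an early-season baseline.
# baseline_n, recent_n, and z_threshold are arbitrary assumptions.

from statistics import mean, stdev

def wear_flag(samples, baseline_n=50, recent_n=10, z_threshold=3.0):
    """Return True if the recent mean sits z_threshold sigmas above baseline."""
    if len(samples) < baseline_n + recent_n:
        return False  # not enough telemetry collected yet
    baseline = samples[:baseline_n]
    recent = samples[-recent_n:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False  # degenerate baseline; can't compute a z-score
    return (mean(recent) - mu) / sigma > z_threshold
```

This is deliberately simpler than "machine learning" - a drift check like this is probably the baseline any learned model would need to beat.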


It’d be simpler to have a wheel odometer with a service interval.
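The odometer-with-service-interval idea is small enough to sketch directly. The class name and the interval value are placeholders, not recommendations.

```python
# Sketch: the "simpler" alternative -- accumulate wheel travel and flag
# service at a fixed interval, like a car's oil-change reminder.
# service_interval_m is an arbitrary placeholder value.

class TreadOdometer:
    def __init__(self, service_interval_m=5000.0):
        self.distance_m = 0.0
        self.service_interval_m = service_interval_m

    def add_travel(self, meters):
        """Accumulate travel from encoder deltas; wear is direction-agnostic."""
        self.distance_m += abs(meters)

    def needs_service(self):
        return self.distance_m >= self.service_interval_m

    def reset(self):
        """Call after the tread is replaced."""
        self.distance_m = 0.0
```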

Simpler, but less accurate. Plus, wheel tread isn’t the only part teams probably want to monitor closely for wear or failure, and not all such wear is going to occur at a constant, predictable rate.

This type of thing can probably be handled off-bot and not in real time, if you’ve collected the telemetry. Push that data into an analysis tool and see what it gives you.

Elasticsearch comes to mind, but I know there are many others.

Cars, airplanes, construction machines all have this type of thing. Inspect at X number of hours/miles/etc.

If we can solve it in FRC many industries would swarm over the students.

I appreciate anyone identifying other sources of parts for robots. I have no doubt that a performant system is achievable within the current cost limit. I never said it wasn’t possible.

That being said, that system is not a replacement for a Jetson Xavier NX or an AGX or an AGX Orin.

Nope. But I did find some in stock for $599. (NX)

I was also going out of my way to find a solution even low-budget teams could utilize. I am not sure of the exact specs, but I think the general compute of the Intel system is faster (the GPU isn’t even close). It could maybe also be used for logging, serving dashboards, and other roboRIO offload tasks in addition to the ML.

Yes. It being simple is part of the point.

Build the tooling and infra around a simple task before tackling the harder one.

If you deploy your system you have some baked in evaluation metrics too.

Yeah, they are available now. I agree. Those aren’t dev kits, though, and they use the eMMC variant of the module. They’re still more than usable.

I love that you’re sitting here going “nah nah! I found one for $599!!! See!!!”… meanwhile a drive base is $699 without motors or encoders, and swerve kits are something else entirely…

I appreciate your viewpoint but I do not believe that removing the cost limit on computing devices would result in any drastic change to FRC.

Well, in-stock availability is the bigger metric these days. FRC should take market price OR MSRP, whichever is LOWER.

Lots of horsepower in a Ryzen-based mini-PC; too bad it’s not great for ML.