Using the Intel Neural Compute Stick 2 with Axon and WPILibPi

So I have the Intel Neural Compute Stick 2 all configured for use with the Raspberry Pi, but the Python script that comes with Axon is set up for the Coral TPU. From what I can tell, this is the part I need to replace in order to get the Compute Stick working:
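It's the section that loads the Edge TPU delegate, something along these lines (I'm paraphrasing from memory, so the names may not match the Axon script exactly):

```python
# The Coral-specific part of the script: tflite_runtime loading the
# Edge TPU delegate. Model path and variable names are illustrative.
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model.tflite",
    # This delegate call is what ties inference to the Coral Edge TPU.
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
```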


I assume I would have to replace “Coral Edge TPU” with something else for the Compute Stick, but I'm not sure what. Does anyone know, and has anyone else used the Intel Compute Stick with Axon and WPILibPi yet?

So, does anyone have an answer to this? I plan to try ML during the offseason to learn it and hopefully apply it next year.

I want to try the INCS2 myself to get more frames, since the Raspberry Pi 4 I'm using only outputs 7 fps with Texas Torque's dataset (1000+ images of 2022's game balls for vision), and the Google Coral TPU is out of stock anyway.

Side question: How do you initially configure the INCS2 to work with the Pi?

Side Notes:
I’ve read other Chief Delphi threads about ML with a Pi, and whenever the INCS2 comes up, the topic just dies off.
I’ve also read that team 834 supposedly used the INCS2 with a custom ML solution, but I can’t find where they’ve released it, if they have at all. (Machine Learning Model is Bad - #21 by CAP1Sup)

We tried ML. I was the project leader for it. I’m going to be honest with you… ML isn’t worth it for FIRST. That’s the hard truth. It took me many hours to realize it, and I would advise others not to follow in our footsteps.

Machine learning performance is great… but it’s not nearly as consistent as something like a Limelight. If you really want vision, I’d recommend PhotonVision running on a Pi with a Snake Eyes hat from Playing With Fusion.

The thing is, I really don’t see a use case that would justify ML. The time cost is just too high for something as minor as picking up balls autonomously. You’d be better off putting your time into more sophisticated logic for processing game pieces.

If you’re still dead set… well… I wish you the best of luck. We worked with alwaysAI (back when they were just a tiny startup). They were super helpful, and we were extremely grateful for their help. They even trained a model for us.

Here is our 2020 code: https://github.com/FRCTeam834/EVS-2020
Here is a set of tools to help with annotation conversions: https://github.com/FRCTeam834/MLFlex
Here is a generator for creating projects with alwaysAI: https://github.com/FRCTeam834/EVS_GUI

Like I said, I think it’s a dead end. Still, I wish you the best of luck; feel free to reach out to me if you have issues with the code. We have since moved to PhotonVision because of its ease of use.

Thank you for your reply!

I still want to try ML during the offseason to gain some experience with it. Then, when kickoff rolls around, if next year’s game has different things to recognize and collect, that experience may prove useful. Even if it doesn’t, it’s good to retain the knowledge, and since this is an offseason project, I can put it on hold whenever I need to focus on more important things.

As for PhotonVision, we are starting to switch from the Limelight software to PhotonVision for targeting, so it wouldn’t hurt to also try detecting game pieces with PhotonVision using a small camera (USB or Pi) during the offseason. Learning more about PhotonVision and its features is on the bucket list anyway.

I would also like to note that the generator link does not work and returns a 404.

I don’t mean to revive a very old, dead thread, but the solution I ended up with was writing my own application using Intel’s OpenVINO library and tools, structured similarly to the Python application that comes with Axon. As many others have already found, ML currently isn’t good enough for most use cases in FRC.
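The core of it looks roughly like the sketch below. This is a simplified version using the 2021-era OpenVINO Inference Engine Python API; the model paths, capture source, and variable names are placeholders rather than my exact code:

```python
# Minimal OpenVINO (Inference Engine API) detection loop targeting the
# NCS2. Paths, thresholds, and names here are placeholders.
import cv2
from openvino.inference_engine import IECore

ie = IECore()
# "MYRIAD" is OpenVINO's device name for the NCS2; if the stick is set up
# correctly it will appear in ie.available_devices.
print("Available devices:", ie.available_devices)

# Load an IR model (converted beforehand with OpenVINO's Model Optimizer).
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
_, _, h, w = net.input_info[input_blob].input_data.shape

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reorder HWC BGR -> NCHW batch, as the IR model expects.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[None, ...]
    result = exec_net.infer({input_blob: blob})
    detections = result[output_blob]
    # ... filter detections by confidence and publish them to
    # NetworkTables, much like the Axon script does with its results.
```

To answer the earlier side question: if ie.available_devices doesn’t list “MYRIAD”, the stick isn’t visible to OpenVINO yet, which usually means the OpenVINO runtime or its NCS udev rules aren’t installed on the Pi.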
