Any success with Axon?

Hi. Another WPILib developer would be able to describe how to install cscore and ntcore on Ubuntu better than I could (@Peter_Johnson maybe?). For installing the tflite runtime (which runs the machine learning model), refer to the pip install instructions here: https://github.com/iCorv/tflite-runtime
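As a rough sketch (the exact wheel depends on your Python version and CPU architecture, so check that repo for the right one), installing and sanity-checking the runtime looks something like:

    # install the standalone TFLite runtime (wheel name/URL is platform-specific)
    pip3 install tflite-runtime

    # verify the interpreter imports cleanly
    python3 -c "from tflite_runtime.interpreter import Interpreter; print('tflite runtime OK')"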

The build process should be very similar to that used for the Pi, which uses cmake. See

Thanks guys, I’ll look into it.

I made some good progress trying to build cscore on the Jetson, but I’ve hit a wall:

Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2

Here’s a quick summary of how I got this far:

  1. Install Java Corretto 11
  2. Set JAVA_HOME
  3. Install Jinja2
    sudo apt install python3-jinja2
  4. Install libx dev package
    sudo apt install -y libxinerama-dev
    sudo apt install -y libxcursor-dev
    
  5. Setup and run cmake
    mkdir build-cmake
    cd build-cmake
    cmake ..
    
  6. Run make
    make -j6

That error doesn’t give me enough info… can you run make without -j and see what the last set of compiler errors are?

Note you only need libxinerama, libxcursor, etc. if you’re building the GUI elements. For coprocessor use, and to build only cscore and ntcore, you should use the following cmake options: -DWITH_GUI=OFF -DWITH_SIMULATION_MODULES=OFF -DWITH_TESTS=OFF -DWITH_WPILIB=OFF -DWITH_WPIMATH=OFF and potentially -DWITH_JAVA=OFF (if you don’t need the Java libraries). The Java libraries in particular can be a bit tricky, as for cscore we need the OpenCV Java packages installed to build the Java cscore bindings.

Edit: it actually looks like Axon uses RobotPy components instead (since it is Python on the inferencing side), so you can avoid building allwpilib entirely and instead follow these directions:
https://robotpy.readthedocs.io/en/stable/install/cscore.html
https://robotpy.readthedocs.io/en/stable/install/pynetworktables.html
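For what it’s worth (the linked pages are the authoritative instructions, and the robotpy-cscore install steps vary by platform), pynetworktables itself is just a pip install:

    pip3 install pynetworktables
    # robotpy-cscore may need OpenCV and a from-source build on ARM; follow the cscore doc above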

I did try turning off a bunch of the modules, but I missed -DWITH_GUI=OFF because it is not documented, so I was getting errors with Glass.

I had success with these options!
cmake .. -DWITH_JAVA=OFF -DWITH_TESTS=OFF -DWITH_WPIMATH=OFF -DWITH_WPILIB=OFF -DWITH_SIMULATION_MODULES=OFF -DWITH_GUI=OFF

Now a little guidance on how to run the Axon vision script would be greatly appreciated!

As I was typing, I saw your edit about RobotPy. I’ll head over there and check that out…

I’m not too familiar with ML. Is this a workload that’s well suited to parallelization?

@tomjwinter we (team 3707) are having the same issues as A): it finishes after 2 minutes, says it is done, but there are no checkpoints. We’ve tried multiple data sets and parameters, on both Windows and Macs. We did notice, using dev tools in a browser, that GraphQL does not appear to be returning anything; see images:

Our team can train on one cargo (red), but then the blue cargo was also picked up and labeled red. When we labeled both red and blue classes in the training set, still only red labels show up around the cargo, no matter what the actual color is. We are stuck. Can anyone help?

I’ve had some success using the Python NetworkTables library and populating it with the detected objects from the NVIDIA detectnet examples.

It works at least for prototyping without having cscore.

(for use with the Jetson Nano or another SBC)
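Roughly, the glue code looks like this (a minimal sketch; the NT server address, table/key names, and camera URI are placeholders, and it assumes the standard jetson-inference Python bindings plus pynetworktables):

    # sketch: publish detectnet detections to NetworkTables (placeholder names throughout)
    import jetson.inference
    import jetson.utils
    from networktables import NetworkTables

    NetworkTables.initialize(server="10.0.0.2")   # placeholder: your roboRIO / NT server IP
    table = NetworkTables.getTable("ML")          # placeholder table name

    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.videoSource("csi://0")  # or "/dev/video0" for a USB camera

    while True:
        img = camera.Capture()
        detections = net.Detect(img)
        # flatten each detection into parallel arrays the robot code can read
        table.putNumberArray("class_ids", [d.ClassID for d in detections])
        table.putNumberArray("confidences", [d.Confidence for d in detections])
        table.putNumberArray("center_x", [d.Center[0] for d in detections])
        table.putNumberArray("center_y", [d.Center[1] for d in detections])
        NetworkTables.flush()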

@nabgilby For the one project of ours that had that problem (finishes after 2 minutes but finds no checkpoints - which was trial #2 in my results chart above), we were able to fix it by going into Docker Preferences, the Resources tab, and changing Memory from 8GB to 14GB and Swap from 1GB to 3GB. This was running on a Mac. YMMV - it seems Axon is very touchy and often fails without explaining why or giving you guidance about what to fix.


@JTHuskies Did you train your model with both red and blue cargo? If you only train on red cargo, the machine learning is just as likely to be identifying the objects by their shape rather than their color.


We created two classes in Supervise.ly, one for red cargo and one for blue, and both are identified in each image. However, in the test, both cargos are labelled red.

Does Axon support GPU acceleration on Linux?

Are you using Axon with this solution?

We are also prototyping detectnet on the Jetson with a lot of success. We’re training the model on our Jetson TX2 with the train_ssd.py script, not Axon. We found that Supervisely was not compatible, so we’re using CVAT.
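For anyone else going this route, the jetson-inference workflow around train_ssd.py looks roughly like this (a sketch; data/cargo and models/cargo are placeholder paths, and the flags should be double-checked against the jetson-inference pytorch-ssd docs):

    # train an SSD-Mobilenet model on a Pascal VOC style dataset exported from CVAT
    python3 train_ssd.py --dataset-type=voc --data=data/cargo --model-dir=models/cargo --batch-size=4 --epochs=30

    # export the trained checkpoint to ONNX so TensorRT/detectnet can load it
    python3 onnx_export.py --model-dir=models/cargo

    # run it against a camera stream with detectnet
    detectnet --model=models/cargo/ssd-mobilenet.onnx --labels=models/cargo/labels.txt \
              --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0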


I think there are two major threads going on here:

  1. Getting a model to use with ML (Axon, etc)
  2. Using that model on something other than a Pi and feeding the data to the robot.

I haven’t got a model trained yet with the cargo targets; I’m just using the sample models on the Jetson.

I think in order for the Jetson to work, the model needs to be converted to TensorRT? I am not sure yet on that part.

Maybe I should give up on the Jetson Nano. I have a USB Coral.

I created a separate topic to discuss the Jetson for 2022.


Just to update this thread - I’ve been experimenting with a bunch of different image sets for classifying the cargo this year and I have come to a few conclusions. I finally have a model that I feel is performing at an acceptable level.

I’ve tested an image set of just 80 images vs. an image set of 270. The performance difference in my case was massive (20% better with the larger image set), especially with changing light levels and backgrounds. The model with more data was able to pick up on balls from a further distance and stay locked on at higher speeds.

I did run into a lot of problems with color classification, so I decided to say screw it and tuned the model just for finding “cargo”. I then sample the pixels within a range inside my bounding boxes to determine whether the cargo is blue or red, through just some basic pixel averaging and tolerancing. This has also helped me get rid of a lot of the false positives that I was getting before.

One other thing I had to do was edit the run script to allow less confident bounding boxes to be drawn. The script filters out a lot of bounding boxes for not being confident enough (I believe it removes everything below 50% confidence), but I like to include them in my testing because confidence seemed to drop when a ball was moving around, so this solved that issue, at least for this specific model.
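The color check is nothing fancy. Something along these lines works (a minimal sketch; the margin and tolerance values here are placeholders rather than my tuned numbers):

    # Classify a detected "cargo" bounding box as red or blue by averaging pixels.
    # frame_bgr: HxWx3 NumPy array in BGR order (as read from OpenCV/cscore).
    # box: (left, top, right, bottom) in pixels.
    def classify_cargo_color(frame_bgr, box, margin=0.25, tolerance=10.0):
        left, top, right, bottom = [int(v) for v in box]
        w, h = right - left, bottom - top
        # shrink the box so we mostly sample the ball, not the background
        x0, x1 = left + int(w * margin), right - int(w * margin)
        y0, y1 = top + int(h * margin), bottom - int(h * margin)
        patch = frame_bgr[y0:y1, x0:x1]
        if patch.size == 0:
            return "unknown"
        mean_b, mean_g, mean_r = patch.mean(axis=(0, 1))  # per-channel averages
        if mean_r > mean_b + tolerance:
            return "red"
        if mean_b > mean_r + tolerance:
            return "blue"
        return "unknown"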

My testing parameters:

  • My image set was 270 images from a 120 fps camera recorded at 640x480 resolution.
  • Epoch 225 was the sweet spot for me (checkpoint generation set to 15).
  • Setting the percent validation vs. percent testing towards maximum testing always provided a better model. In my case it would only train at 80% or less, so I trained at 80%.
  • Batch size was 40.
  • Time to train was only about 30 minutes.
  • Again, only one label for cargo. No red/blue specific labels.

The labeling took a while, but this task can be easily split up between a few team members on Supervisely, and I can see our model probably needing ~1000 labeled images before it becomes “good enough” for competition.


@tomjwinter thanks, this unblocked us. We are using the ML videos which already have tags and ran an agent in Supervisely to split them into images (Playing Field | FIRST).

We made it through the whole cycle and got the model running on the Raspberry Pi and, like @JTHuskies, we are detecting balls, but the blue balls are detected as red_cargo. We haven’t had much time to debug this yet. Also, our precision in Axon with the playing field data, after building the models with the suggested parameters, is only 0.32. Perhaps this is because that data also tags reflections and there are not many of those objects tagged in the data?

Is anyone else using the playing field example data and did you have problems properly recognizing blue balls?


@kiwi0401 Thank you for all the information on your experimentation. A couple of questions:

  • What was the precision value Axon gave you on your model?
  • Are you tracking reflections from the plexiglass?