WPILibPi-Romi missing Machine Learning Dependencies

I imaged my Romi with the latest WPILibPi_image-v2021.2.1-Romi.
When I try to run a python script that uses edgetpu and PIL, the vision console states that the modules are not installed.
Yet, the latest release states “Added machine learning dependencies to image (#186)”, and the update includes a commit that re-adds installation of the edgetpu module.
Did the release only add the machine learning dependencies to the non-Romi image, and not the Romi image? Last September I had no problem using these libraries on the non-Romi image.
Thanks in advance!

Hi!

There’s a new inference script in the works. It works right now, but it doesn’t yet output data over NetworkTables. I’m going to get to that on Monday. Check it out here: uploaded.py

Please let me know if this script does not work for you.

Thanks for the script! But in the Vision Status Console Output, I’m getting the following error:
    Traceback (most recent call last):
      File "uploaded.py", line 236, in <module>
        tester = Tester(config_parser)
      File "uploaded.py", line 97, in __init__
        experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
      File "/usr/local/lib/python3.7/dist-packages/tflite_runtime/interpreter.py", line 207, in __init__
        custom_op_registerers_by_func))
    ValueError: Could not open 'model.tflite'.

Well, you’re going to need to upload a trained model. If you have a .tar file, just upload it through the Pi’s web interface, and make sure you flip the “extract tar/zip” switch.

Yeah, I had already uploaded a model.tar.gz file containing model.tflite and map.pbtxt, and I had checked the extract tar/zip switch.
On further inspection, it appears that even though the switch was checked and the uploaded model_tar.tar.gz contained both files, they ended up stuck in a “model_tar” subfolder. Seems odd? The script worked once I modified it to open “model_tar/model.tflite” instead of “model.tflite” and “model_tar/map.pbtxt” instead of “map.pbtxt”.
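For what it’s worth, the script could tolerate either layout instead of hard-coding the “model_tar/” prefix. A minimal sketch (the `find_model_file` helper below is my own invention, not something in the inference script):

```python
from pathlib import Path

def find_model_file(name, search_root="."):
    """Locate an uploaded model file.

    Checks the upload root first, then falls back to any immediate
    subdirectory, which covers tars that extract into a subfolder
    (e.g. model_tar/model.tflite) as well as flat uploads.
    """
    root = Path(search_root)
    direct = root / name
    if direct.exists():
        return direct
    matches = sorted(root.glob(f"*/{name}"))
    if matches:
        return matches[0]
    raise FileNotFoundError(f"{name} not found under {root}")
```

The script would then call `find_model_file("model.tflite")` and `find_model_file("map.pbtxt")` instead of opening fixed paths.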
Thanks for your help!

Hmm, that’s really weird. How did you train your model? Was it with SageMaker? Or did you happen to use the new Axon tool that is almost, but not quite, ready for release?

No, I didn’t use Axon. I used an edgetpu-compiled ImageNet model, and I also tried a Google Cloud-trained model that worked last September with the old WPILibPi image. Both hit the upload error above. The newest problem is that although the model now loads, the code can’t access the camera. My console output is as follows:
    Connecting to Network Tables
    Starting camera server
    CS: USB Camera 0: Connecting to USB camera on /dev/video0
    Cameras: [{'fps': 30, 'height': 120, 'name': 'rPi Camera 0', 'path': '/dev/video0', 'pixel format': 'mjpeg', 'stream': {'properties': []}, 'width': 160}]
    160 120 DIMS
    Starting mainloop
    Image failed

That “Image failed” message repeats every 5 seconds on the Vision Console, so self.cvSink.grabFrame(self.img) can’t return a camera frame. Interestingly, when I kill the inference.py script, I can view the camera stream in the web UI (port 1181), but while inference.py is running, the stream won’t load. Is there some conflict between the web UI camera streaming and the .py script? It’s odd, since I didn’t have this problem last September. Is this an issue with the updated WPILibPi image or with the new inference script?

Definitely the new inference script; it is just a PR, after all. TBH, I would have thought the `continue` would let the code idle until the camera connects. Maybe I should add a sleep in there.
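A sleep-and-retry loop could look something like this (the `grab_with_retry` wrapper and its parameters are hypothetical, not code from the PR):

```python
import time

def grab_with_retry(grab, img, attempts=30, delay=1.0):
    """Retry grab(img) until it returns a nonzero frame time.

    cvSink.grabFrame() returns 0 when no frame is available, so
    instead of spinning on `continue`, sleep between attempts to
    give the camera time to connect. Returns 0 if every attempt fails.
    """
    for _ in range(attempts):
        frame_time = grab(img)
        if frame_time != 0:
            return frame_time
        time.sleep(delay)
    return 0

# In the inference script this would be used roughly as:
#   frame_time = grab_with_retry(self.cvSink.grabFrame, self.img)
```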

This code works on my Romi, so I am open to a PR for my PR if you find a solution in the meantime.

I resolved the error message, it was a problem on my end. Let me know once you have a script that works with Network Tables. Thanks!


Do you have an updated script that works with Network Tables yet? Thanks!

Hi,

Here it is: Axon/uploaded.py at Romi · GrantPerkins/Axon · GitHub

I was hoping someone would approve my PR; still waiting.

It outputs a JSON string to NetworkTables. You will obviously need to parse the data, but the structure of the JSON should be self-explanatory. I’m still working on documentation.
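On the robot side, parsing boils down to `json.loads` on the string read from NetworkTables. A minimal sketch (the field names in the sample payload are my own assumption for illustration; check the actual string in OutlineViewer to see the real schema):

```python
import json

def parse_inference(raw):
    """Parse the JSON detection string read from NetworkTables.

    Returns an empty list when the entry is missing or empty, so the
    robot loop can run before the first inference result arrives.
    """
    if not raw:
        return []
    return json.loads(raw)

# Hypothetical example payload:
sample = '[{"label": "ball", "box": {"xmin": 20, "ymin": 10, "xmax": 60, "ymax": 50}}]'
for det in parse_inference(sample):
    print(det["label"], det["box"])
```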

