Team 88 has collected over 6000 labeled images of cones and cubes. I don't know the best way to share these images with the community. The compressed tarball is 27.5 GB! Does anyone know of a (preferably free) service where I can post this file? Is there community interest in sharing our labeled data in one place?
Wherever these images end up, I'll also post the neural net we trained from them. We use YOLOv5 v7.0 and PyTorch, similar to last year.
The neural net runs in ~0.0125 s on my GTX 1080. On the Jetson Xavier NX we run, it clocks in at ~0.035 s.
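If anyone wants to compare timings on their own hardware, here's a rough sketch of the kind of measurement I mean (loading custom weights through torch.hub; `best.pt` and the image path are placeholders, not our actual files):

```python
import time
import torch

# Load a custom-trained YOLOv5 model through torch.hub
# ("best.pt" is a placeholder for the trained weights).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

img = "sample_frame.jpg"  # placeholder path to one capture frame

# Warm-up pass so model/CUDA setup isn't counted in the timing.
model(img)

runs = 50
start = time.time()
for _ in range(runs):
    model(img)
print(f"average inference time: {(time.time() - start) / runs:.4f} s")
```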
The playback is running at ~40 FPS. The original capture is running at 15 FPS.
What would be an easy way to run that model on a Google Coral accelerator? We ordered one a few days before kickoff, and I'm still trying to figure out whether it will be useful for this kind of task before we try to get our hands on a Xavier NX.
I ran 500 epochs for the final model. For a first draft with 4000 images, I ran 150. The first draft was OK but would sometimes mix up cones and cubes. I haven't seen the final model do that yet.
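For anyone curious about the training side, it's basically the stock YOLOv5 v7.0 workflow; here's a rough sketch of the kind of run I mean, with the dataset YAML, image size, and batch size as placeholders rather than our exact settings:

```python
# Run from inside a clone of ultralytics/yolov5 (v7.0), where train.py lives.
# The dataset YAML, image size, and batch size here are placeholders.
import train

train.run(
    data="cone_cube.yaml",   # hypothetical dataset config pointing at images/labels
    weights="yolov5s.pt",    # COCO-pretrained checkpoint to fine-tune from
    epochs=500,
    imgsz=640,
    batch_size=16,
)
```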
I'm not sure if I'm doing something wrong, but when grabbing the dataset from Hugging Face, all of the text files that should contain the annotations are blank, while the images download perfectly fine.
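For reference, this is roughly the check I used to spot the blank files; the `labels` directory name is just an assumption about where the .txt annotations land after download:

```python
from pathlib import Path

# Directory holding the YOLO-format .txt annotation files
# ("labels" is an assumed name; point this at your download).
label_dir = Path("labels")

txt_files = sorted(label_dir.rglob("*.txt"))
empty = [p for p in txt_files if p.stat().st_size == 0]

print(f"{len(empty)} of {len(txt_files)} label files are empty")
for p in empty[:10]:  # show a few examples
    print("  ", p)
```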
Is the Jetson Xavier NX going to be a legal part this year? I got one a couple years ago and didn’t think it would be a problem when I saw you were using it, but looking at current prices and availability… I’m not sure it gets under the FMV price limit.
Our team has also been experimenting with the Jetson Nano. We like to use the Jetson Inference library, but there is no official support for YOLOv5 yet (as far as my research goes), so I re-uploaded all of the photos from Hugging Face to Roboflow here so we could test it out.
With this tool, you are able to download the dataset in many different formats. We use the Pascal VOC format. One thing to note is that Roboflow exports into train, valid, and test folders, which differs from the official folder structure, but with a quick Python script (sketched below) we were able to reorganize everything correctly.
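Here's a rough sketch of that kind of reorganization, assuming the Roboflow Pascal VOC export layout (train/valid/test folders with the .jpg and .xml files side by side) and a standard VOC target tree (JPEGImages, Annotations, ImageSets/Main); the paths and folder names are assumptions, so adjust to your own export:

```python
import shutil
from pathlib import Path

# Assumed locations: "export" is the unzipped Roboflow download,
# "VOCdevkit/VOC_cone_cube" is where the VOC-style tree will go.
src_root = Path("export")
dst_root = Path("VOCdevkit/VOC_cone_cube")

img_dir = dst_root / "JPEGImages"
ann_dir = dst_root / "Annotations"
set_dir = dst_root / "ImageSets" / "Main"
for d in (img_dir, ann_dir, set_dir):
    d.mkdir(parents=True, exist_ok=True)

# Roboflow puts each image and its VOC .xml annotation together
# in train/, valid/, and test/ folders.
for split in ("train", "valid", "test"):
    names = []
    for xml in sorted((src_root / split).glob("*.xml")):
        jpg = xml.with_suffix(".jpg")
        shutil.copy(jpg, img_dir / jpg.name)
        shutil.copy(xml, ann_dir / xml.name)
        names.append(xml.stem)
    # VOC expects per-split image lists; Roboflow's "valid" maps to VOC's "val".
    list_name = "val" if split == "valid" else split
    (set_dir / f"{list_name}.txt").write_text("\n".join(names) + "\n")
```

This copies rather than moves, so the original Roboflow export stays intact if something goes wrong.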
I mixed up which files my archive contained. I had made a backup of every image we've taken (labeled and unlabeled). That set is substantially larger, and that's where the 27.5 GB came from. I'll see if I can edit the original post.