This weekend, one of the tasks we started working on was computer vision to detect the red and blue cargo. This will ideally be useful in auto.
Our algorithm mostly works pretty well. Blue and red obviously aren't as easy to pick out as a yellow ball was, but it's still pretty good.
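Roughly, the pipeline looks something like the sketch below (not our exact code; the HSV ranges, `min_area`, and the `find_cargo` helper are just placeholders to show the idea):

```python
# Minimal sketch of an HSV-threshold cargo detector in OpenCV (Python).
# The HSV ranges and min_area are placeholders; tune them for your camera/lighting.
import cv2
import numpy as np

def find_cargo(frame_bgr, lower_hsv, upper_hsv, min_area=200):
    """Return (cx, cy, area) of the largest blob in the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # Clean up speckle noise before looking for contours
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area:
        return None
    m = cv2.moments(largest)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(largest))

# Blue is a single hue band; red wraps around hue 0, so it needs two ranges OR'd
# together, which is part of why red/blue are fussier than last season's yellow.
BLUE_LO, BLUE_HI = np.array([100, 120, 60]), np.array([130, 255, 255])
```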
I know this would also be a good image recognition problem for machine learning. If we are only trying to find the red and blue balls, we could label a pretty large dataset relatively quickly. However, due to the lack of hardware (Google Coral USB accelerators), I've been hesitant to even head that direction, as I don't think it'll be performant enough. Computer-vision-based algorithms have no problem keeping up with the FPS on our Raspberry Pi 4s, but from the benchmarks I've seen, TensorFlow Lite on the Pi 4 looks pretty slow.
So, a few questions I have for the group:
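If anyone wants to sanity-check that on their own Pi, a rough timing loop with the tflite_runtime package looks something like this (the `detect.tflite` model file is a placeholder for whatever model you're testing):

```python
# Rough sketch for timing TensorFlow Lite inference on a Pi 4 (CPU only).
# "detect.tflite" is a placeholder model filename.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Dummy frame matching the model's expected input shape and dtype
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

runs = 50
start = time.monotonic()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
print(f"{runs / (time.monotonic() - start):.1f} FPS average")
```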
Have you found that machine learning for this task would work better than straight computer vision algorithms? (we do ours in OpenCV)
If you do machine learning instead, what kind of hardware do you use? We use Raspberry Pi 4s for our offboard vision.
WPILib has Axon, which is a new tool for image recognition using machine learning. However, the last release was six months ago and there was no kickoff release, so I'm not sure how competition-ready it is (from the docs it seemed so, but a WPILib developer would know better).
Also, Coral USB accelerators don't seem too expensive ($60), but it looks like they're out of stock.
That's the problem, haha: we can't seem to get a Google Coral, which is why we've held off on a machine-learning-based approach, since the Coral seems to be pretty necessary. They've been out of stock for months.
A Limelight or a (Raspberry Pi + webcam) running PhotonVision should work pretty well. I have been playing with it with just my laptop webcam. I haven't run it on our old Limelight yet to verify, though.
It's going to be harder this year for gyro + vision to correct for drift, since the target looks the same from many angles around the field.
I heard PhotonVision can do colored shape detection, so let's hope that works.
I have a Google Coral, but I am not sure of its legality if it's out of stock everywhere. A backup plan could be a Jetson Nano 2GB running a vision app with a hardcoded IP: a Python script that starts at boot and pushes the results into NetworkTables.
If I recall, Axon is a tool for building the recognition model; you then upload the model to the WPILibPi image.
You don't need Axon if you can build the model using other methods. Perhaps someone could build the model and publish it somewhere. There are other SBCs with TPUs, too.
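The NetworkTables half of that is pretty small with pynetworktables. A sketch, where the roboRIO IP, table/entry names, and `detect_cargo()` are all placeholders for whatever actually runs on the Jetson:

```python
# Sketch of the coprocessor-side push loop using pynetworktables.
# The server IP, table name, and detect_cargo() are placeholders.
import time
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # hardcoded roboRIO IP for your team number
vision = NetworkTables.getTable("Vision")

while True:
    target = detect_cargo()  # hypothetical: the OpenCV/ML detection running on the Jetson
    vision.putBoolean("hasTarget", target is not None)
    if target is not None:
        vision.putNumber("yaw", target.yaw)
        vision.putNumber("distance", target.distance)
    time.sleep(0.02)  # ~50 Hz is plenty for cargo tracking
```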
Unless you are shooting on the move, I am not sure what the gyro has to do with it. Turn your bot or turret until your target is in the sweet spot and shoot. The field-relative angle is not important, just the target angle, and computer vision gives that well for the shooting target.
Detecting different colored balls for intake is harder than what the old Limelight software can do, which is why I pointed out PhotonVision.
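In code that can be as simple as a proportional turn on the yaw the camera reports (just a sketch; `K_P` and the tolerance are made-up numbers to tune on the real robot):

```python
# Sketch: aim purely off the target angle from vision, no gyro involved.
# K_P and TOLERANCE_DEG are made-up values.
K_P = 0.02
TOLERANCE_DEG = 1.5

def turn_command(target_yaw_deg):
    """Map the vision yaw error (degrees) to a turn output in [-1, 1]."""
    if abs(target_yaw_deg) < TOLERANCE_DEG:
        return 0.0  # in the sweet spot, ready to shoot
    return max(-1.0, min(1.0, K_P * target_yaw_deg))
```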
It's just about keeping the pose as accurate as possible, so you know whether it's faster to turn left or right to get the target in view; the drift probably won't be big enough for that to be a problem. (Once the target is in view, it becomes the new reference.)
There is an additional problem (if using the low goal) of getting the right angle offset or position to avoid the pillars.
There might be interest in a Jetson Nano + NetworkTables-based machine learning package.
My team is working on a cargo detector; we should post some updates soon.
Last year we sent the camera stream from the roboRIO, ran the ML model on the Driver Station PC, then sent the data back to the robot, so it can be done without Coral USB accelerators.
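For anyone curious, the laptop side of that approach can be sketched roughly like this; the stream URL, table name, and `run_model()` are placeholders, and the results go back over NetworkTables the same way a coprocessor would send them:

```python
# Rough sketch of the driver-station approach: pull the robot's MJPEG camera
# stream into OpenCV, run the model on the laptop, and send results back over
# NetworkTables. The stream URL and run_model() are placeholders.
import cv2
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # placeholder roboRIO address
results = NetworkTables.getTable("MLVision")

stream = cv2.VideoCapture("http://roborio-TEAM-frc.local:1181/stream.mjpg")  # placeholder URL
while True:
    ok, frame = stream.read()
    if not ok:
        continue  # stream hiccup; keep trying
    detections = run_model(frame)  # hypothetical detector returning boxes/labels
    results.putNumber("cargoCount", len(detections))
```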
Ah, cool. I guess my only concern is latency going that route, but if you're doing it again, you must have had good results last year.
For now, we will probably just plan on doing it with computer vision. I just feel like an ML model would probably be a little better; there's a lot of blue and red on the field.
I made a model with Axon last night based on 40 photos, and it detected the ball fairly well on my computer (>90% accuracy); however, my sample photos didn't have a lot of variability, so of course it wasn't perfect. It seems like a really easy way to get rookie programmers involved, since Supervisely allows multiple people to work on labeling. I'm also worried I can't do the ML route, though, due to not being able to get a TPU.
However, we also have PixyCams, which we have used in the past (basically a VERY easy HSV-filter object detector). While it isn't quite as accurate as a fleshed-out ML model (I'm worried about it detecting bumpers at the moment), it's also rookie-friendly and simple to use.
Vision and ML both seem like good options, so I would test both, figure out which one is easier for your programmers, and put more focus elsewhere. If you just want a color detector to distinguish between types of balls, I'd recommend classic vision like the PixyCam or OpenCV; if you want an algorithm to find, pick up, and track the balls, ML seems like a fun and possibly more accurate way to do that (based on limited testing), if you can get your hands on a TPU. You could also consider how this applies to the Control Award if you wanted to.
I will be grabbing and testing a Pi later this week, as all of mine are in use, lol.
We've enjoyed using PixyCams in the past, and I love using vision projects to introduce rookies to the ideas behind more complex programming (hence my emphasis on both options being rookie-friendly or easy to "send home").
I haven't used the Jetson (though I've heard of it plenty), but I have been looking into it today. I'd love some pointers to helpful sites if you have any.
Here is the tutorial I was following if we needed to go down this route:
And here is the Python NetworkTables code:
It'd be a bit of programming to put the two together: have it automatically run on boot and have it automatically connect (and reconnect) to the roboRIO.
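For the run-on-boot part, something like a systemd service is probably the least painful route (this is just an illustrative unit; the paths, names, and user are placeholders), and pynetworktables keeps retrying the connection on its own, so reconnects to the roboRIO are mostly handled for you:

```ini
# Illustrative systemd unit (placeholder paths/names), e.g. saved as
# /etc/systemd/system/vision.service
[Unit]
Description=Cargo vision pipeline
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/vision/main.py
Restart=always
RestartSec=2
User=pi

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable vision.service` makes it start on every boot, and `Restart=always` brings it back if the script crashes.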
I'd personally hold off on Axon for now. In the most recent WPILibPi release, the software supporting the Coral accelerator is outdated. Hopefully that gets resolved soon.