Hi, my name is Cody with team 7485. We are setting up a Pixy, but we have a concern: how do we get the Pixy to distinguish between the balls and other teams' bumpers?
Pixy and Pixy2 are color sensors. While we have only used the original Pixy (and pretty successfully), I'm unaware of any way to get it to distinguish between shapes. The bumper and game piece colors are close enough that we probably would not try using a Pixy for game piece detection this season.
So, how does your team do the game piece recognition? Do you use a Limelight?
For this game, our drive team uses the Mark I Eyeball for gamepiece recognition.
Can you share a link to it?
Apologies if I used an uncommon expression. Our drive team uses these, though theirs are still attached.
Deep Learning. We’re using a camera-equipped Nvidia Jetson Nano running TensorRT with a trained YOLOv5 model. The idea is similar to AXON from WPI, but it uses different technology.
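To give a flavor of the detection side, here's a minimal sketch of exercising a trained YOLOv5 model in plain PyTorch on a dev machine. The TensorRT conversion for the Jetson is a separate export step, and the weights filename here is a made-up placeholder.

```python
# Minimal sketch: run a custom-trained YOLOv5 model on one image in plain
# PyTorch. "cargo.pt" is a hypothetical weights file; the real deployment
# exports the same model to TensorRT for the Jetson Nano.
import torch

# Load custom weights through the YOLOv5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="cargo.pt")
model.conf = 0.5  # confidence threshold; tune for your lighting

results = model("test_image.jpg")  # accepts file paths, arrays, or URLs

# Each detection row is x1, y1, x2, y2, confidence, class index.
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"class={int(cls)} conf={conf:.2f} box={[round(v) for v in box]}")
```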
If you are interested, I’ll tell you more about it, but it might be a little late in the season to take this on.
How does your team do game piece recognition during autonomous? We aren't using the Pixy during teleop, so we can't just use our eyes when the robot is running in autonomous.
Our autonomous is based on odometry, gyro readings, and trajectory planning. We have no game piece recognition during autonomous.
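If it helps, here's a rough sketch of that approach using robotpy's wpimath. The waypoints and speed limits are placeholder numbers, not values from our robot.

```python
# Sketch of a trajectory-based auto: drive a pre-planned path from odometry,
# no game piece vision needed. Waypoints and limits are placeholders.
from wpimath.controller import RamseteController
from wpimath.geometry import Pose2d, Rotation2d, Translation2d
from wpimath.trajectory import TrajectoryConfig, TrajectoryGenerator

config = TrajectoryConfig(2.0, 1.0)  # max velocity (m/s), max accel (m/s^2)
trajectory = TrajectoryGenerator.generateTrajectory(
    Pose2d(0, 0, Rotation2d(0)),                   # starting pose
    [Translation2d(1.0, 0.3)],                     # interior waypoint
    Pose2d(2.0, 0.6, Rotation2d.fromDegrees(20)),  # expected cargo location
    config,
)

ramsete = RamseteController()

def follow(t: float, current_pose: Pose2d):
    """Called each loop; returns ChassisSpeeds to feed the drivetrain."""
    desired = trajectory.sample(t)  # where we should be at time t
    return ramsete.calculate(current_pose, desired)
```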
We’ve had some great results in the lab, but I’ll preface this by saying we are not using vision tracking of cargo on our competition robot. At least not yet.
As mentioned above, the challenge with this year’s game is that the cargo are the same colors as bumpers. So traditional methods that look for blobs of a particular color will tend to give you too many false positives on the field. Some implementations factor in the shape or aspect ratio of the blobs, and this can help, but it can still easily be fooled by bumpers. I haven’t used this myself, but I have read good things about Photon.
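For concreteness, here's roughly what that traditional pipeline looks like in OpenCV. The HSV bounds and size cutoff are placeholders you'd tune, and the comments point out where a bumper can sneak through.

```python
# Classic color-blob detection with an aspect-ratio check. A red bumper
# segment can still pass both tests, which is the false-positive problem.
import cv2
import numpy as np

def find_cargo_candidates(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Placeholder "red-ish" range; real red wraps the hue axis and usually
    # needs two ranges OR'd together.
    mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:  # ignore tiny specks
            continue
        # A ball is roughly square in its bounding box...
        if 0.75 < w / h < 1.33:
            # ...but so is a bumper corner or a partially occluded bumper,
            # so this check alone still yields false positives.
            candidates.append((x, y, w, h))
    return candidates
```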
We’ve been experimenting with Axon and machine learning to detect cargo. Unlike simple color filtering, machine learning uses a neural network to look for certain objects that you train it to recognize by showing it a lot of pictures of the thing(s) you want it to find for you.
The first model I trained was using a little over 1,600 pictures I downloaded from the Open Images collection that is linked from within Axon. The trainer ran slowly and often without much visual feedback, but that is to be expected when you realize how much computation goes into this process. I was surprised when it only took a couple hours, not days, to generate several usable checkpoints. (It took about 4 hours and 2 cups of coffee from the time I started installing Axon to the time I had my first trained model.)
You can see the result of this first test in the following video. The original video was provided as part of the kickoff materials; I merely ran it through my model to draw boxes around any targets that it recognizes. The percentages are how confident the machine is in its answer.
Like other vision systems, you will need a Raspberry Pi and a camera to run Axon. Unlike other systems, you will want to add a Google Coral TPU hardware accelerator to get more than a couple of frames per second of output. (We get about 30 fps with the Coral, up from 3 to 5 fps without it.) Due to chip shortages and supply chain issues, both the Pi and the Coral are extremely hard to find and expensive when you find them. I believe you could make do with 3 fps if you write some clever robot code, but I wouldn’t try to use it inside a tight control loop without acceleration.
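By "clever robot code" I mean something like caching the most recent detection with a timestamp and only trusting it while it's fresh. Here's a sketch; the NetworkTables table and key names are assumptions for illustration, not Axon's documented schema.

```python
# Coping with a low detection rate (~3 fps without a Coral): hold the last
# detection and gate on its age. "ML" and "boxes" are assumed names.
import time
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # roboRIO address placeholder
table = NetworkTables.getTable("ML")

last_boxes: list = []
last_seen = 0.0
MAX_AGE = 0.5  # seconds; at 3 fps a detection is already ~0.33 s old

def periodic():
    """Call from the robot's periodic loop."""
    global last_boxes, last_seen
    boxes = table.getNumberArray("boxes", [])
    if boxes:
        last_boxes, last_seen = list(boxes), time.monotonic()

    if time.monotonic() - last_seen < MAX_AGE:
        pass  # steer gently toward last_boxes; the data may be stale
    else:
        pass  # fall back to odometry/trajectory only
```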
You can read more about using Axon in this thread, which also has some helpful links to resources shared by other teams that may get you going quicker.
How do you handle other robots getting in your way, and how do you pick up the cargo?
Over our last two events we haven’t had a robot cross our path in auto. Once the ability to run a 4- or 5-ball auto has been shown, our alliance partners are happy to give us some space. Picking up the cargo is nothing more than running the pickup at the time and place the cargo is expected to be. It’s not perfect, but it’s been working pretty well so far.
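In command-based terms it's something like the sketch below, where FollowTrajectory and RunIntake are hypothetical stand-ins for a team's real commands.

```python
# Run the pickup for the whole path segment that drives through the cargo's
# known location; exact timing doesn't matter if odometry is accurate.
# FollowTrajectory and RunIntake are hypothetical commands.
import commands2

def pickup_segment(drive, intake, trajectory):
    return commands2.ParallelDeadlineGroup(
        FollowTrajectory(drive, trajectory),  # deadline: ends the group
        RunIntake(intake),                    # spins until the path finishes
    )
```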