Does anyone know how I could track all cargo this year using just one ball-tracking camera? I have gotten a Limelight to accurately track a ball, although I can only do one color at a time.
We want to track our alliance color's balls so we can autonomously pick them up, and to know when we have picked up an opposing alliance's ball so we can automatically shoot it off somewhere else (similar to what 1690 is doing here). I would just use a REV color sensor for this, but with the I2C port being unreliable, I was looking into using vision instead, to avoid having to use a Raspberry Pi, which no one on the team has experience with.
I looked into using GRIP for this somewhat, but it seems to only be able to track one color at a time, similar to the Limelight. Would making one Limelight pipeline for red balls and one for blue, and then constantly switching between them, work?
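One way to picture the pipeline-switching idea is below. This is a minimal sketch, assuming pipeline index 0 is your red pipeline and 1 is your blue one (those indices are made up - use whatever slots you configured). On a real robot you would write the returned index to the `pipeline` entry of the Limelight's NetworkTable each loop; the class here only handles the alternation logic.

```python
# Sketch of alternating between two hypothetical Limelight pipelines
# (index 0 = red cargo, 1 = blue cargo - assumed slot assignments).
# On the robot, write the returned index to the Limelight's
# "pipeline" NetworkTables entry once per loop.

class PipelineAlternator:
    """Toggle the active pipeline every `dwell` loop iterations so
    each color gets a share of the camera's frames."""

    def __init__(self, dwell: int = 5):
        self.dwell = dwell   # loops to stay on one pipeline
        self.count = 0
        self.active = 0      # 0 = red, 1 = blue (assumed order)

    def update(self) -> int:
        """Call once per robot loop; returns the pipeline index to use."""
        if self.count >= self.dwell:
            self.count = 0
            self.active = 1 - self.active  # toggle 0 <-> 1
        self.count += 1
        return self.active

alt = PipelineAlternator(dwell=3)
indices = [alt.update() for _ in range(9)]  # [0,0,0,1,1,1,0,0,0]
```

One caveat: right after a switch the camera is likely still serving a frame from the old pipeline, so you would probably want to ignore the first frame or two after each toggle before trusting the target data.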
I am looking at the GitHub and it says to plug into the MXP port on the RIO. But we already have our NavX gyro there, so are there any wiring alternatives? Or am I missing something?
How would this work? Just have the Limelight look for any circles and then check their color in my code?
It would basically be: run the feeder and the shooter slowly while there isn't a target (your cargo), then spin up to full speed when your cargo is there.
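That logic is just a two-state setpoint. Here is a tiny sketch of it; the RPM values are placeholders I made up, and `has_target` stands for "the vision pipeline currently sees your alliance's cargo":

```python
# Two-state shooter setpoint for spitting out wrong-color cargo.
# Speeds are invented placeholders - tune for your shooter.
EJECT_RPM = 1200   # assumed slow speed: enough to dump an opponent ball
SHOOT_RPM = 4000   # assumed full shooting speed

def shooter_setpoint(has_target: bool) -> int:
    """Slow eject while there's no alliance-color target; full speed
    once your own cargo is detected."""
    return SHOOT_RPM if has_target else EJECT_RPM
```

In the robot loop you would feed this the target-valid flag from vision and pass the result to your shooter's velocity controller.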
Try creating a pipeline that includes 2 HSV thresholds, Find Contour and Filter Contour - one for each color. Make sure to follow Limelight's docs about generating output and collecting input.
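To make the two-threshold idea concrete, here is a rough plain-Python illustration of the per-pixel classification step. The HSV ranges are guesses (OpenCV-style hue, 0-179) and would need tuning under your field lighting; note that red wraps around hue 0, so it needs two hue windows. A real Limelight pipeline does this thresholding plus the contour stages for you.

```python
# Dual HSV threshold sketch. All ranges below are assumptions to
# tune on your own camera; hue is OpenCV-style (0-179).
RED_HUES = [(0, 10), (170, 179)]   # red wraps around hue 0
BLUE_HUES = [(100, 130)]
MIN_SAT, MIN_VAL = 100, 80         # assumed: reject washed-out pixels

def classify_pixel(h: int, s: int, v: int):
    """Return 'red', 'blue', or None for one HSV pixel."""
    if s < MIN_SAT or v < MIN_VAL:
        return None
    if any(lo <= h <= hi for lo, hi in RED_HUES):
        return "red"
    if any(lo <= h <= hi for lo, hi in BLUE_HUES):
        return "blue"
    return None
```

In the actual pipeline you'd apply each threshold to the whole image, then run Find Contour / Filter Contour on each mask and report the two results separately.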
Please note - searching for two colors at once will (I assume) cost you some performance, but it shouldn't be bad - Limelight's regular speed is 90 FPS, which is faster than most FRC teams actually need.
Although using a color sensor is way more efficient and reliable, using vision to recognize cargo is a great feature.
This task could be difficult with a Limelight, for sure. You can do it with the new custom Python scripts (the first thing my team tested), but that was very slow, at least with circle detection (which we used rather than color detection, to avoid picking up bumpers and such).
What we have been using successfully is a Raspberry Pi + Google Coral with Axon machine learning. It took a while to get working well, but we have been able to rely on it for a while now.