Hey all, I'm aware that there is retroreflective tape on each switch (only on the alliance-facing side), but I was wondering: what is your team's approach to computer vision tracking this year?
Do you think vision tracking is going to be useful this year and why/how?
Will your team utilize or focus on vision tracking if you don't already have it, and why?
If you already have it, will you use it just because you can, or because it is a good idea? Please explain.
If you’re looking for framerate, I’d check out https://limelightvision.io. As for my plans for vision, the first thing I want to try is tracking the cubes. I don’t know exactly how I want to go about this yet, but that’s what tomorrow is for. If cube tracking works in auton, that could mean a two-cube auton (obviously only useful if it is consistent, but still worth a try in my book). If not, I don’t think vision will be very useful, as the retroreflective tape is only on the switch, and that is not NEARLY as precise a target as the peg last year. I’d hope that most teams would easily be able to dead reckon to the switch.
I doubt the vision targets on the switch will actually be useful this year. The switch is a very large target in relation to the cubes, so virtually any form of encoder-based autonomous driving will be accurate enough to get a cube on the switch. Even if a team is going for a two-cube auto, the cubes will be consistent enough in location that a good intake plus motion profiling should be enough to grab another cube, making vision tracking for the cubes unnecessary.
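For anyone skeptical about how simple the encoder dead reckoning really is, here’s a back-of-envelope sketch. The wheel size, encoder resolution, and the roughly 140 in from the alliance wall to the switch fence are my assumed numbers, not anything official:

```python
import math

# Back-of-envelope encoder math for dead reckoning: convert encoder
# ticks to inches traveled. Wheel diameter and counts-per-rev are
# example values; check your own drivetrain.
WHEEL_DIAMETER_IN = 6.0
TICKS_PER_REV = 4096  # e.g. a typical magnetic encoder

def ticks_to_inches(ticks):
    # One wheel revolution covers pi * diameter inches.
    return ticks * (math.pi * WHEEL_DIAMETER_IN) / TICKS_PER_REV

# Roughly 140 in from the alliance wall to the switch fence
# (an assumed figure for illustration):
target_ticks = 140 / (math.pi * WHEEL_DIAMETER_IN) * TICKS_PER_REV
print(round(ticks_to_inches(target_ticks), 1))  # -> 140.0
```

Even a few percent of wheel slip on that distance is only a handful of inches of error, which the width of the switch easily absorbs.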
Vision is DEFINITELY not necessary this year. It’ll be a fun task, but not necessary at all, especially for single-cube auto. Can’t wait to see what some teams might come up with though. If I know anything about FIRST, it’s that lots of brilliant people think of lots of brilliant things.
And if they can’t dead reckon it, there are lights on the plates. Those lights are just as easy to find as the tape. It might be harder to find their exact location, but if you can see some pretty blue or red lights, you know you have a 3 x 4 platform on the other side of them. Precise location isn’t quite as significant. I don’t see any reason to mount an LED ring when you are able to see the lights.
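If someone did want to find those lights in software, a toy sketch might look like the following. This is plain NumPy with made-up thresholds and a made-up region of interest; a real implementation would need tuned values for the actual field lighting:

```python
import numpy as np

# Toy classifier for the plate lights: crop a region of the image
# where the lights should appear, then vote on red- vs blue-dominant
# pixels. All thresholds and the ROI are illustrative guesses.
def plate_color(img_rgb, roi):
    x, y, w, h = roi
    patch = img_rgb[y:y + h, x:x + w].astype(int)
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    red_votes = int(np.count_nonzero((r > 150) & (r > b + 50)))
    blue_votes = int(np.count_nonzero((b > 150) & (b > r + 50)))
    if max(red_votes, blue_votes) < 20:  # not enough lit pixels
        return "unknown"
    return "red" if red_votes > blue_votes else "blue"

# Synthetic frame with a fake patch of blue lights:
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[10:30, 10:40] = (0, 40, 255)
print(plate_color(frame, (0, 0, 100, 50)))  # -> blue
```

As the post says, you don’t need a precise location here; a simple red-vs-blue vote over a crop is enough to know which plate is yours.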
Tracking cubes for a multi-cube autonomous seems the way to go, for sufficiently advanced users. (We’ll find out by the end of build season whether that includes me. I’m going to try…but I’ll remember the immortal words of Dean Kamen, encouraging us to fail repeatedly.)
I have also always thought that positioning in autonomous could be greatly aided by the tape lines they gave us on the field. If going for the scale, some vision feedback about where you are in relation to those lines could compensate for an awful lot of encoder inaccuracy. In the past we’ve had to set the camera sensitivity so low to pick out only the tape marks that it wasn’t worth trying, but if we don’t mount an LED ring, this will be the year I make a go at it. I’ve worked out the math; now to translate it into code and test, test, test.
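For the curious, the math in question is the usual camera-geometry trick: turn the pixel row where the tape line appears into an angle below horizontal, then use the camera’s mounting height to get distance. A sketch with made-up mounting numbers (not measurements from any real robot):

```python
import math

# Estimate distance to a floor tape line from where it appears in the
# image. Camera height, pitch, and field of view are example values.
CAM_HEIGHT_IN = 24.0   # lens height above the carpet
CAM_PITCH_DEG = -20.0  # camera tilted 20 degrees downward
VFOV_DEG = 45.0        # vertical field of view
IMG_HEIGHT_PX = 240

def distance_to_line(line_row_px):
    # Fraction of the half-image the line sits below center (+1 = bottom edge).
    frac = (line_row_px - IMG_HEIGHT_PX / 2) / (IMG_HEIGHT_PX / 2)
    pixel_angle = -frac * (VFOV_DEG / 2)  # rows below center look farther down
    total = math.radians(CAM_PITCH_DEG + pixel_angle)
    # The tape is on the floor, so distance = height / tan(angle below horizontal).
    return CAM_HEIGHT_IN / math.tan(-total)

# A line near the bottom of the frame is close; one near center is far.
print(round(distance_to_line(220), 1))
print(round(distance_to_line(140), 1))
```

The nice property is that lines lower in the frame map to shorter distances, so a single calibration of mount height and pitch gives you a usable range estimate for every frame.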
The first logo isn’t part of the tracking; I just chose to do it first because I figured it would be more difficult than a solid side. Here is a picture of the side as well.
That is awesome! How did you do that? Are you shining a light at it like you would retroreflective tape, or are you just looking for the color of the case without any light? Is it reliable? This might be a very good use of vision this year!
Since the color yellow isn’t very common on the FRC field, I turned up the camera exposure and had the Limelight look for yellow targets within a specific range of aspect ratios. If you have the camera level with the cube, you can easily figure out that range (the largest being when you look at a cube corner-on, and the smallest being when you look at it from the short side). The green light did reflect off the fabric a little, but the yellow color is what made it possible.
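To make the idea concrete, here’s a rough sketch of that pipeline in plain NumPy (this is not the actual Limelight internals, and every threshold is a placeholder): mask yellow-ish pixels, take a bounding box, and check the aspect ratio against the range you’d expect from a cube.

```python
import numpy as np

# Rough yellow mask + aspect-ratio check. All thresholds are
# illustrative guesses, not tuned values from a real field.
def find_cube(img_rgb, min_ratio=0.75, max_ratio=1.6):
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    # "Yellow" = strong red and green, weak blue.
    mask = (r > 150) & (g > 120) & (b < 100)
    ys, xs = np.nonzero(mask)
    if xs.size < 50:  # too few pixels to plausibly be a cube
        return None
    w = int(xs.max() - xs.min() + 1)
    h = int(ys.max() - ys.min() + 1)
    if min_ratio <= w / h <= max_ratio:
        return (int(xs.min()), int(ys.min()), w, h)  # bounding box
    return None

# Tiny synthetic test image: a 30x20 yellow patch on black.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 50:80] = (255, 220, 0)
print(find_cube(frame))  # -> (50, 40, 30, 20)
```

The aspect-ratio gate is what keeps random yellow blobs (bumper numbers, reflections) from counting as cubes, for roughly the reasons described in the post above.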
All your images are looking at a cube directly in line, with the image pretty close to a square.
What about when you are off at an angle? Then you are going to get a projection of the cube, whose silhouette will have four to six sides. It will still be bright yellow, but it will sometimes look like a hexagon.
Light identification is pretty hard, much harder than just getting dead reckoning working properly. I do see the merit in going for a multi-cube auto, but auto is already going to be significantly harder this year without the added challenge of vision. That won’t stop us from trying it, though…
I am looking for suggestions on using optical sensors to determine what color the lights are on the scale in front of our starting position in auton. Recommendations?