So, in light of the issues that came up at the Houston Champs, I was wondering what teams prefer, in terms of ease and reliability, for how they aim (in any game). I know dead reckoning is easier and doesn't depend on as many external factors, but vision is very useful when it's working right.
Since teams on Houston Einstein were not allowed to recalibrate their vision when day turned to night in Minute Maid Park, do you think dead reckoning is the way to go from now on?
My team settled on a hybrid approach. Dead-reckon to a spot that you know is pretty close, then try to find the vision target and keep going. In theory if the dead reckoning worked the target “should” be directly in front. If no target is found (due to lighting conditions or whatever) then proceed directly forward anyway.
In practice this allows for minor corrections of a few degrees/inches off. It was a trade-off: too much dead reckoning and you don’t give the vision system enough of a chance to correct if you’re off, but too much vision tracking and you risk going completely off course if the system locks onto the wrong target.
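For what it's worth, the handoff logic can be as simple as the sketch below. This is just an illustration, not our actual code: the drive/vision helpers and all of the distances and angles are hypothetical placeholders for whatever your own drivetrain and vision code provide.

```java
/**
 * Hybrid auto sketch: dead-reckon most of the way, then let vision correct.
 * The abstract helpers and the numbers are placeholders, not a real robot's code.
 */
public abstract class HybridPegAuto {
    protected abstract void driveStraightInches(double inches);
    protected abstract void turnDegrees(double degrees);
    protected abstract boolean visionTargetVisible();
    protected abstract double visionTargetOffsetDegrees(); // + means target is right of center

    public void run() {
        // Phase 1: dead reckon to a spot that should leave the peg straight ahead.
        driveStraightInches(85.0);
        turnDegrees(60.0);

        // Phase 2: if vision sees the target, correct the heading before the final approach.
        // If it doesn't (lighting or whatever), just trust the dead reckoning and go anyway.
        if (visionTargetVisible()) {
            turnDegrees(visionTargetOffsetDegrees());
        }

        // Phase 3: final approach onto the peg.
        driveStraightInches(30.0);
    }
}
```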
It really depends on how much accuracy you need. Getting the peg into the gear means you can’t be off by more than a few inches. But the airships and the other field dimensions had a tolerance of up to 5". And you need to be able to consistently place your robot at the same starting point, without the benefit of measuring tools. (Our drive team actually used our driver’s shoe to measure distance from the center line. Hey, it worked…)
So you have to compensate for variations one way or another: by using vision, by tightening up your alignment/placement process, and/or by adjusting your dead reckoning code.
We had a lot of trouble with auto this year. At our first event the vision system wasn’t working properly so we had to rely solely on the dead reckoning, and adjusting inch by inch for the two airships. By the second event our vision system worked, but our encoder failed, so our dead-reckoning component started to lead us astray. Finally at District Champs everything worked the way it was supposed to and we had a consistent, reliable auto.
I think this is a bit of an overreaction to a situation that occurred once (once!) to no more than 24 teams. FIRST is often looking to improve upon their mistakes, so I wouldn’t be so certain this will be the case next year.
Now, that said, in my experience, vision is far more reliable than dead reckoning, and always worth it if you have the time to pull it off. We used it for aiming in 2013 and 2016, and it was wonderful. This year, we tried to use it for our side peg auto, but didn’t have the time, and our consistency massively suffered for it (it’s pretty hard to reliably line up a robot with an arbitrary point on a wall that changes depending on which side/alliance you’re on, not to mention field variation between events).
Honestly, you need to evaluate your team’s capability before deciding. If your team has the programming ability to do vision, then it is a very good option when working. Using a flashlight to aid dead reckoning is a very good option when you have an experienced driver. I am curious whether using a long-range photo eye opens up a third option for targeting that would be more resistant to lighting conditions.
That makes me sad. I’m glad it worked out and I hope you had a planned backup solution and it wasn’t just something you had to implement on the fly.
FIRST (and FRC in particular) seems to suffer from a lack of vision (pun intended) when it comes to autonomous and vision systems on these robots. Autonomous hasn’t changed much since 2005 at this point and that’s really sad considering the advances we have seen outside of FIRST.
We used dead reckoning (via Talon SRX motion profiling) and were able to repeatably hit the same spot on the floor within an inch or two.
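For anyone trying to replicate this, something along these lines is a reasonable starting point. This sketch uses Motion Magic (the Talon’s built-in trapezoidal profiling) through the CTRE Phoenix API rather than a streamed motion profile, and the device ID, gains, and unit conversions are all made up; treat it as a rough outline, not our actual code.

```java
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.FeedbackDevice;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class ProfiledDrive {
    // Placeholder conversion: 4" wheel, 4096-count encoder mounted 1:1.
    private static final double TICKS_PER_INCH = 4096.0 / (Math.PI * 4.0);

    private final TalonSRX leftMaster = new TalonSRX(1); // placeholder device ID

    public ProfiledDrive() {
        leftMaster.configSelectedFeedbackSensor(FeedbackDevice.QuadEncoder, 0, 10);
        leftMaster.setSelectedSensorPosition(0, 0, 10);

        // Closed-loop gains (tune on your own drivetrain).
        leftMaster.config_kF(0, 0.2, 10);
        leftMaster.config_kP(0, 0.5, 10);

        // Profile constraints: cruise velocity in sensor units per 100 ms,
        // acceleration in sensor units per 100 ms per second.
        leftMaster.configMotionCruiseVelocity(6000, 10);
        leftMaster.configMotionAcceleration(3000, 10);
    }

    /** Command the drivetrain to a distance using the Talon's built-in profiling. */
    public void driveToInches(double inches) {
        leftMaster.set(ControlMode.MotionMagic, inches * TICKS_PER_INCH);
    }
}
```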
Unfortunately, we did not have any particularly good plan for handling field tolerances (especially for differences between each individual side peg!), and were ultimately reduced to making small adjustments between matches. By the end of CHS district championships, we had finally successfully delivered to the side peg - but it took a long time to get there.
We are hoping to have some vision code ready to go for St. Louis. In the future, we will not use dead reckoning alone for tasks that require this degree of precision without first coming up with a robust, reliable method of field measurement/robot positioning (if anyone has good suggestions for this, please share them!).
We did the exact same thing. We drove forward and turned using dead reckoning, but then vision kicked in to properly align with the side peg. It worked pretty well and accounted for field variation.
This is absolutely the hardest part of building a successful dead-reckoning based autonomous mode.
It is not trivial to build an auto mode that does the same thing every time, but it is achievable for any team with a little bit of tuning and application of best practices.
It is pretty tough to build an auto mode that you can transfer to a new field and have work the first time you run it. This isn’t helped by the fact that the dimensions that matter for auto mode are often not explicitly specified in field drawings, and knowing which dimensions are more likely to be invariant than others requires detailed knowledge of how the field is manufactured and how the tolerances stack up as the field is assembled. Talk to your local neighborhood FTA.
Thankfully FIRST now gives teams a period of time to measure the field, but getting the dimensions you care about is sometimes tricky as well. Field borders are often not straight, it’s hard to ensure you measure at right angles over long distances, etc.
Lastly, even if you have “perfect” measurements of all of the quantities you care about, you need to have an autonomous driving routine that is parametric enough that you can translate those measurements into the corrections you desire. This can get tricky, as a “repeatable” auto mode is not necessarily a parametric one.
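One way to keep an auto routine parametric is to pull the field measurements out into named, tunable values instead of burying distances in the code, for example via WPILib’s Preferences class. The key names and defaults below are purely illustrative:

```java
import edu.wpi.first.wpilibj.Preferences;

public class AutoConstants {
    // Field measurements as tunable parameters rather than magic numbers.
    // Key names and default values are illustrative; measure your own field.
    private static final Preferences prefs = Preferences.getInstance();

    public static double wallToAirshipInches() {
        return prefs.getDouble("auto/wallToAirshipInches", 114.0);
    }

    public static double sidePegApproachAngleDegrees() {
        return prefs.getDouble("auto/sidePegApproachAngleDeg", 60.0);
    }
}
```

If the auto routines compute their drive segments from values like these, a tape-measure correction during field calibration becomes a dashboard edit instead of a re-deploy.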
Dead reckoning + vision is a nice solution, since you can be pretty sloppy but still get into the right ballpark using dead reckoning, and then hand off to vision.
One other thing to note is that a very well-tuned vision system wouldn’t be as affected by issues like the daylight situation (I recall 254 mentioning in their 2016 lecture at Worlds that they tuned their vision outdoors and never had to recalibrate it at all). I’m unsure how difficult this would be with a device like the Pixy, which uses a built-in algorithm for vision processing, but tuning a GRIP or similar solution to work in several lighting situations isn’t too difficult.
Despite this consideration, I would still always have a backup (perhaps a flashlight or streaming camera) in case you do have an issue with vision and need to run without it!
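For reference, the usual recipe is to turn the camera exposure way down, light the target with your own LED ring, and threshold mostly on hue and saturation so ambient brightness matters less. A minimal OpenCV sketch of that kind of GRIP-style pipeline might look like the following; the HSV bounds assume a green LED ring and are only a starting point, not a drop-in solution.

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TargetPipeline {
    // HSV bounds for a green LED ring at low camera exposure; placeholders that
    // will need tuning for your camera and lighting.
    private static final Scalar HSV_LOWER = new Scalar(55, 100, 60);
    private static final Scalar HSV_UPPER = new Scalar(95, 255, 255);

    /** Returns contours of candidate retroreflective targets in a BGR frame. */
    public List<MatOfPoint> findTargets(Mat bgrFrame) {
        Mat hsv = new Mat();
        Mat mask = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Threshold mostly on hue/saturation; with low exposure, changes in ambient
        // light mostly show up in the value channel, which is left wide open here.
        Core.inRange(hsv, HSV_LOWER, HSV_UPPER, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}
```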
This isn’t helped by the fact that the dimensions that matter for auto mode are often not explicitly specified in field drawings
Literally one of the most annoying parts of my season was doing trig for really basic auto distances. It would be really nice if FIRST gave us a drawing covered in useful dimensions, instead of making us work them out ourselves.
Why don’t more teams use ultrasonic sensors in their auto modes? We used one this year and it worked really well for the center peg. We just drove forward until the ultrasonic sensor read back a certain distance, then stopped and backed up. We didn’t have to do much field calibration since we were working off the measured distance from the robot to the airship rather than the distance driven from the alliance wall.
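If anyone wants to try it, the whole thing fits in a few lines. This sketch assumes an analog MaxBotix-style rangefinder, and the analog channel, scaling factor, and stop distance are placeholders you’d pull from your own sensor’s datasheet and testing:

```java
import edu.wpi.first.wpilibj.AnalogInput;

public class UltrasonicApproach {
    // Analog MaxBotix-style rangefinder on analog input 0 (placeholder channel).
    // VOLTS_TO_INCHES depends on the specific sensor; check its datasheet.
    private static final double VOLTS_TO_INCHES = 1.0 / 0.0098;
    private static final double STOP_DISTANCE_INCHES = 14.0; // placeholder target range

    private final AnalogInput rangefinder = new AnalogInput(0);

    public double getRangeInches() {
        return rangefinder.getVoltage() * VOLTS_TO_INCHES;
    }

    /** Forward throttle to feed your drivetrain each loop: creep forward until at range, then stop. */
    public double desiredForwardSpeed() {
        return getRangeInches() > STOP_DISTANCE_INCHES ? 0.3 : 0.0;
    }
}
```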
Yeah, and I annoyed our CAD guy as well by making him crash his laptop trying to open up the field CAD to get some distances that were completely underivable from the field diagram (he eventually was able to get it to work in simplified mode).
I’ve heard that 118 had to make a similar call. I wonder if any teams on Einstein were able to successfully use vision - and if so, what kind of setup worked under those conditions. Even with the exposure set as short as possible, a glaring afternoon sun through a gigantic wall of glass is likely going to blow out much of the image that can be captured by the cameras most teams are using.
Sonars work well, as long as you understand the detection range and detection cone that the many available models offer. But sometimes it is better to have a very small spot for distance measurement. The LIDAR-Lite, at $149, is more expensive but well within the robot budget. Also, the STMicro VL53L0X is very precise at less than 3 feet. It is now post-season for many teams, so now is the time to play with different sensors for autonomous. Learn to integrate a gyro, navX, and distance measurement into your auton. When that is mastered, begin to investigate vision.
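As one starting point for that kind of integration, here is a minimal gyro-corrected drive segment in WPILib-style Java. The gyro class, motor controllers, channel numbers, gains, and distance-per-pulse are all placeholders for whatever your robot actually uses (a navX would swap in for the ADXRS450 shown here):

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.Spark;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class GyroStraightDrive {
    // Channels, gain, and distance-per-pulse below are placeholders for your robot.
    private static final double KP_HEADING = 0.03; // turn correction per degree of heading error

    private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();
    private final Encoder leftEncoder = new Encoder(0, 1);
    private final DifferentialDrive drive =
            new DifferentialDrive(new Spark(0), new Spark(1));

    public GyroStraightDrive() {
        leftEncoder.setDistancePerPulse(0.05); // inches per pulse (placeholder)
    }

    public void start() {
        gyro.reset();
        leftEncoder.reset();
    }

    /** Call each loop: hold the gyro's zero heading until the encoder reaches distanceInches. */
    public boolean driveStraight(double distanceInches) {
        if (leftEncoder.getDistance() >= distanceInches) {
            drive.arcadeDrive(0.0, 0.0);
            return true; // segment complete
        }
        double headingError = -gyro.getAngle(); // degrees off of straight; drift right => turn left
        drive.arcadeDrive(0.5, KP_HEADING * headingError);
        return false;
    }
}
```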