Hey everyone! Three other WPI students and I are working on localization for FRC as our senior project. Localization means your robot can figure out its position on the field. We need to set goals and criteria for our system, and we need your help defining them: how accurate should it be, how cheap, etc. Please fill out this survey so we can get it right!
My team’s answers to the first six questions won’t fit your form, so here they all are:
1, 2, 3, 4 (required accuracy and its value): It depends. Angles were important in 2012, 2016, and 2017, not so much in 2013, 2014, and 2015. Position to within 6" has been good enough lately, but if that were easily available, the GDC would make it tougher.
5, 6 (Shop): Our shop is about 110 sq ft, and we can leave stuff up. We also have a similarly sized programming/media room and another 300 sq ft of storage (in four separate spaces) where we can leave stuff up, plus a 600 sq ft room we can use for practice but cannot leave anything up in (we'd probably have to roll up the carpet every night).
7, 8 (Sensors): botcam, webcam, drive encoders, IR and ultrasonic rangefinders, photo interrupters (curb feelers for the 2015 platforms as well as on manipulators), capacitive proximity sensors, potentiometers (manipulators), pressure-sensitive resistors (manipulator), limit switches.
9 (development platform): Java.
Edit:
Ha ha! What I called our shop is actually our main assembly room. Our cutting (drill presses & chop saws & such) takes place on a carport. I agree that in this context, OP probably means “practice space”.
What are you referring to when you say shop? When I think of our shop, I think of where we have all our machining resources. However, I get the feeling, given the topic of this survey, that you're looking more for a team's practice or programming “shop”.
Yeah, by shop I mean practice space. Basically: how big is the space where you might test and use a localization system?
Neat project! What approaches are you currently looking at? Remember, any individual COTS part can cost a maximum of $400, so a lot of LIDAR and stereo options are out of the question.
We’d love to have a complete localization system. We’re developing one ourselves, but the challenge is immense. One thing you have to keep in mind (that we didn’t at first) for any visual odometry or SLAM-based approach is that loop closure can only be done if you have a constant chain of correspondences, which is very difficult in any environment, but especially so on a competition field where you’re getting knocked around and having your vision obstructed by other robots. It works great in a static environment, but not so much on the field, as we’ve found.
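To make the correspondence problem concrete, here's a toy sketch (illustrative only, not our actual code) of the bookkeeping: track which feature IDs survive from frame to frame, and note that once a single frame shares nothing with the previous one, the chain is severed and loop closure back through that gap is off the table.

```java
import java.util.HashSet;
import java.util.Set;

/** Toy illustration of the correspondence-chain requirement for loop closure. */
public class CorrespondenceChain {
    private Set<Integer> previousFeatures = new HashSet<>();
    private boolean chainIntact = true;

    /** Call once per camera frame with the IDs of the features matched in it. */
    public void onFrame(Set<Integer> featureIds) {
        Set<Integer> shared = new HashSet<>(featureIds);
        shared.retainAll(previousFeatures);
        if (!previousFeatures.isEmpty() && shared.isEmpty()) {
            // Nothing survived from the last frame (e.g., a robot drove
            // through your view): the pose chain is severed, and a later
            // loop-closure candidate can't be linked back across the gap.
            chainIntact = false;
        }
        previousFeatures = featureIds;
    }

    public boolean canCloseLoop() {
        return chainIntact;
    }
}
```

One robot parked in front of your camera for a few frames is enough to flip that flag for the rest of the match.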
That leaves you with a few different possibilities. Point filtering doesn’t quite work because of, once again, the presence of other robots on the field. 1706 built a neat system in 2014 where they used 360° video and derived their position from target size, but I don’t think that’s feasible unless you work directly with FIRST to provide always-in-sight vision targets like in FTC. Though, given WPI’s relationship with them, that’s not out of the question!
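For reference, the core of that target-size trick is just the pinhole camera model. Here's a minimal sketch (not 1706's actual code); the focal length and target width are hypothetical numbers you'd calibrate and measure for your own setup.

```java
/** Pinhole-model range from apparent target size: d = W * f / w_px. */
public class TargetRange {
    // Hypothetical calibration values -- measure these for your own setup.
    static final double FOCAL_LENGTH_PX = 700.0; // from camera calibration
    static final double TARGET_WIDTH_M = 0.508;  // a 20" target, for example

    static double distanceMeters(double targetWidthPx) {
        return TARGET_WIDTH_M * FOCAL_LENGTH_PX / targetWidthPx;
    }

    public static void main(String[] args) {
        // A target spanning 100 px would be ~3.56 m away with these numbers.
        System.out.println(distanceMeters(100.0));
    }
}
```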
We’re looking forward to seeing what you come up with!
Edit to elaborate on a few questions:
Heading/position accuracy really doesn’t matter too much, at least from our experience with on-field localization. Given +/- 6"/15° or so, you can use the field vision targets to do the rest (a sketch of that correction step is below). What localization would really help with is following long paths that are actively generated on the fly (LIDAR-based pathfinding, anyone?). Those really don’t need much precision or accuracy, assuming you don’t have any particularly tight spaces on the field; and if you do, that’s what point filtering is for.
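To illustrate what I mean by “do the rest”: once the coarse estimate tells you which target you’re sighting, a range-plus-bearing measurement and a gyro heading pin the position down. Rough sketch with hypothetical names, not anyone’s actual code:

```java
/** Correct a coarse pose using a sighted target at known field coordinates. */
public class TargetCorrection {
    /**
     * @param targetX  known field x of the sighted target (m)
     * @param targetY  known field y of the sighted target (m)
     * @param range    camera-measured distance to the target (m)
     * @param bearing  camera-measured angle to the target, robot-relative (rad)
     * @param heading  gyro heading, field-relative (rad)
     * @return corrected {x, y} field position of the robot
     */
    static double[] correctedPose(double targetX, double targetY,
                                  double range, double bearing, double heading) {
        double rayAngle = heading + bearing; // field-relative ray to the target
        // The target sits `range` meters along that ray, so walk backwards
        // from the known target position to get the robot position.
        double x = targetX - range * Math.cos(rayAngle);
        double y = targetY - range * Math.sin(rayAngle);
        return new double[] { x, y };
    }
}
```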
Path following is the #1 reason we at 2898 are excited about the possibilities of localization, and is what makes it an instant sell for us. We can detect other robots on the field in real time (neural network), and last season we integrated that with a dynamic pathfinding algorithm to, in theory, do point-to-point pathfinding on the field. The issue, of course, was localization: we had acceptable drift (+/- 6") over the typical 15-20 second “return to base” path when testing, but over a full match that error just kept adding up. We tried recalibrating by finding the boiler decals, but running that at an acceptable speed would have required yet another GPU coprocessor.
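For anyone who hasn't fought this before, here's a minimal dead-reckoning sketch (assumed differential drive, hypothetical names) showing why the error keeps adding up: the pose update integrates encoder deltas, so every bit of slip or scrub goes in and never comes back out without an external correction.

```java
/** Minimal differential-drive dead reckoning; drift accumulates in (x, y). */
public class Odometry {
    private double x, y, heading; // field-relative pose (m, m, rad)

    /** Call each loop with the distance each side traveled since last call. */
    public void update(double leftDeltaM, double rightDeltaM, double trackWidthM) {
        double forward = (leftDeltaM + rightDeltaM) / 2.0;
        double turn = (rightDeltaM - leftDeltaM) / trackWidthM;
        heading += turn;
        // Any wheel slip, scrub, or encoder noise baked into `forward` and
        // `turn` is integrated here and stays in the pose forever; nothing
        // in this loop can remove it without an external fix.
        x += forward * Math.cos(heading);
        y += forward * Math.sin(heading);
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeadingRad() { return heading; }
}
```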
Honestly, judging from how much localization could help both for learning and on the field, and assuming the approach was robust, could last year to year, etc., we’d probably be happy to drop a lot of money on it. Robot CAW (cost accounting worksheet) is a problem to keep in mind, though: a lot of the teams with the software chops to use localization are building robots complex enough that they don’t have much CAW headroom left. That said, I can’t imagine many ways to really brush the $1k mark on the robot without using SWEEP + ZED + TX2, and that combination has a lot of problems.
You should probably also put an option for Python on the form; it’s an up-and-coming language option, and more and more teams are choosing it over Java or C++.
Also, please don’t make me plaster our robot in AprilTags this year.