Quote:
Originally Posted by EDesbiens
That is a lot more positive than the last comment... I'll try the NavX... Do you think that triangulation is a possible option? Maybe with a fisheye camera on top of a bot or something like that...
The triangulation approach you mention has been done w/active landmarks like wireless beacons (requiring installation of multiple wifi transmitters), w/passive landmarks (well-known "checkerboard" patterns at well-known positions, w/identification and range calculated by a scanning camera), and by systems not dependent upon landmarks at all, including range-finding vision sensors (stereoscopic cameras, structured-light sensors like the Kinect, which are very short range, e.g. a living room), scanning LIDAR sensors, and scanning (or, as you say, fisheye-lens) video cameras w/image recognition.
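To make the landmark idea concrete, here's a minimal triangulation sketch (not any team's actual code). It assumes the robot can measure absolute bearings to two landmarks at known field positions, e.g. by combining a camera angle w/a gyro or navX heading; the function name and coordinates are hypothetical.

import math

def triangulate(l1, b1, l2, b2):
    """Return (x, y) of the robot given landmark positions l1, l2 and
    absolute bearings b1, b2 (radians) measured from the robot to each."""
    det = math.sin(b1 - b2)
    if abs(det) < 1e-9:
        raise ValueError("landmarks are collinear with the robot")
    dx, dy = l1[0] - l2[0], l1[1] - l2[1]
    # Distance along the first bearing ray (Cramer's rule on the 2x2 system
    # robot + r_i * (cos b_i, sin b_i) = l_i for i = 1, 2).
    r1 = (dy * math.cos(b2) - dx * math.sin(b2)) / det
    return (l1[0] - r1 * math.cos(b1), l1[1] - r1 * math.sin(b1))

# e.g. landmarks at (0, 0) and (10, 0), bearings of 135 and 45 degrees,
# places the robot at (5, -5):
# print(triangulate((0, 0), math.radians(135), (10, 0), math.radians(45)))

With three or more landmarks you'd solve the same system in a least-squares sense, which also gives you some protection against a single bad bearing measurement.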
The optical approaches are non-trivial, rarely used in FIRST, and typically considered university-level material. So a lot of folks tend to get a funny look on their faces when these topics come up, and wonder why we're making things so complicated.
The algorithms for localization are becoming more and more available online, and there's free online courseware from MIT on probabilistic localization and related technologies.
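For a flavor of what "probabilistic localization" means in practice, here's a toy particle filter in the spirit of that courseware. The 10 m strip, the tape positions, and the noise figures are all made-up assumptions, just to show the motion-update / measurement-update / resample loop.

import random

TAPE = [2.0, 5.0, 8.0]   # hypothetical landmark positions (meters)
WORLD = 10.0             # hypothetical 10 m strip, wrapping at the ends

def sees_tape(x):
    return any(abs(x - t) < 0.25 for t in TAPE)

def localize(motions, readings, n=1000):
    particles = [random.uniform(0, WORLD) for _ in range(n)]
    for move, reading in zip(motions, readings):
        # Motion update: shift every particle, w/some slip noise.
        particles = [(p + move + random.gauss(0, 0.05)) % WORLD
                     for p in particles]
        # Measurement update: weight particles that agree w/the sensor.
        weights = [0.9 if sees_tape(p) == reading else 0.1
                   for p in particles]
        # Resample in proportion to weight.
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n   # crude estimate: mean of the particle cloud

# e.g. the robot drives 1 m at a time and reports its tape sensor:
# print(localize([1.0] * 5, [False, True, False, False, True]))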
Closer to home, the Zebracorns have published their 2015 vision software as open source; it used OpenCV-based classifiers trained to recognize game pieces, pushing forward the state of the art in FIRST vision processing. If the same approach were used to recognize fixed-position field pieces, and the range to those objects were then calculated and fused w/a map of the field (known ahead of time, in FIRST), you'd have what you're looking for. Again, non-trivial, but for those with interest, worth looking into in my opinion.
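Here's a sketch of that idea, not the Zebracorns' actual code: detect a fixed-position field target w/a trained OpenCV cascade classifier, then estimate range from its apparent width using the pinhole model (range = f * W / w). The cascade file, target width, and focal length below are all hypothetical placeholders.

import cv2
import math

FOCAL_PX = 600.0      # camera focal length in pixels (from calibration)
TARGET_W_M = 0.5      # real-world width of the field target, meters

detector = cv2.CascadeClassifier("field_target_cascade.xml")

def ranges_to_targets(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in hits:
        distance = FOCAL_PX * TARGET_W_M / w   # pinhole range estimate
        # Bearing relative to the camera axis, from horizontal pixel offset.
        bearing = math.atan2((x + w / 2) - frame.shape[1] / 2, FOCAL_PX)
        results.append((distance, bearing))
    return results

Range and bearing to two or more such targets could then feed the triangulation sketch above, or be fused w/the known field map to get a full position estimate.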
Looking ahead, a colleague of mine pointed out this fascinating new research from MIT that fuses object recognition with SLAM.