Place identifiers using vision?

First, let me state that I am not a biologist, nor am I a programmer. I just read this article (behind a paywall, sorry) and immediately thought of FRC applications.

Evidently our brains, like those of other mammals, encode “place” based on sensory inputs. A relatively small number of neurons can identify many places at different scales. Pretty cool! Whatever inputs the mammal has, the brain evidently matches what it’s currently sensing against what it remembers and recognizes a place (near food? near water? near danger?).

I know many teams use IMU input and wheel-odometry-based systems to track location on the field and plan paths for autonomous driving. Has anyone tried to establish position on the field using only what the environmental sensors (camera, lidar, ultrasonic) produce?

As fields get more complex, with more obstacles (looking at some of this year’s game design competition winners), having vision/lidar input provide a decent approximation of location, or “place”, without needing wheel-rotation input could be hugely beneficial. At a certain “place”, the robot would know to slow down for a bump in the field, or that a wall should be nearby on its left, etc.
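To make the idea concrete (and with the caveat that I’m not a programmer, so all the names and numbers here are placeholders), something like this nearest-neighbor “place matching” sketch is the kind of thing I have in mind: remember a sensor signature for each place, then report whichever remembered place best matches what the robot currently sees.

```java
import java.util.List;

/**
 * A minimal "place recognition" sketch (hypothetical, not any team's real code):
 * store a range-scan signature for each remembered place, then report the
 * remembered place whose signature best matches the current scan.
 */
public class PlaceMatcher {
    /** A remembered place: a label plus the range scan that was seen there. */
    public record Place(String name, double[] signature) {}

    private final List<Place> remembered;

    public PlaceMatcher(List<Place> remembered) {
        this.remembered = remembered;
    }

    /** Sum of squared differences between two scans of equal length. */
    private static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    /** Return the remembered place that best matches the current scan. */
    public Place recognize(double[] currentScan) {
        Place best = null;
        double bestDist = Double.MAX_VALUE;
        for (Place p : remembered) {
            double d = distance(p.signature(), currentScan);
            if (d < bestDist) {
                bestDist = d;
                best = p;
            }
        }
        return best;
    }
}
```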

Anyway, thought this was cool. Maybe there are teams already doing this?!

https://team900.org/blog/ZebraVision-7.0/

971 has also been doing PF localization.
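For anyone unfamiliar with the term, PF means particle filter: keep many candidate poses, re-weight each one by how well it predicts the current sensor readings, and resample. A minimal one-step sketch is below; it is illustrative only, not 971’s actual code, and the field length, wall position, and sensor model are all assumptions.

```java
import java.util.Random;

/**
 * A minimal one-step particle filter (PF) sketch -- purely illustrative,
 * not 971's actual implementation. Each particle is a guess at the robot's
 * x position along the field; particles are re-weighted by how well they
 * predict a single measured distance to a wall at a known position.
 */
public class ParticleFilterSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);
        final double fieldLength = 16.5;   // meters (assumed)
        final double wallX = 16.5;         // wall at the far end of the field (assumed)
        final double measuredRange = 4.0;  // lidar/ultrasonic reading (assumed)
        final double sensorSigma = 0.2;    // measurement noise, meters (assumed)

        int n = 1000;
        double[] x = new double[n];
        double[] w = new double[n];

        // Spread particles uniformly along the field (no odometry assumed).
        for (int i = 0; i < n; i++) {
            x[i] = rng.nextDouble() * fieldLength;
        }

        // Measurement update: weight each particle by the likelihood of the reading.
        double totalW = 0.0;
        for (int i = 0; i < n; i++) {
            double expectedRange = wallX - x[i];
            double err = measuredRange - expectedRange;
            w[i] = Math.exp(-(err * err) / (2 * sensorSigma * sensorSigma));
            totalW += w[i];
        }

        // Weighted mean of the particles is the position estimate.
        double estimate = 0.0;
        for (int i = 0; i < n; i++) {
            estimate += x[i] * (w[i] / totalW);
        }
        System.out.printf("Estimated x: %.2f m (should be near %.2f m)%n",
                estimate, wallX - measuredRange);
    }
}
```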
