How does everyone plan to keep track of their position, especially in autonomous mode (but having several applications in user control as well)? Some ideas we’ve thrown around are:
Hall effect sensors
Optical (mouse?) tracking
GPS (probably too expensive and hard to interface)
A small idler wheel with an encoder
The camera
A gyro would probably be necessary in all of these cases, except perhaps GPS (which would likely only be accurate to within about 2 feet anyway).
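For the encoder-plus-gyro approaches above, the basic math is just dead reckoning: integrate the distance the encoders report along the heading the gyro reports. Here's a minimal sketch of one update step (the function name and the two-encoder averaging are my own illustration, not any team's actual code):

```python
import math

def update_pose(x, y, d_left, d_right, heading_rad):
    """One dead-reckoning step: advance (x, y) by the average
    encoder distance traveled, along the gyro heading.

    d_left, d_right: distance each side moved since the last update
    heading_rad: current gyro heading in radians (0 = +x axis)
    """
    d = (d_left + d_right) / 2.0  # average of both sides approximates chassis travel
    x += d * math.cos(heading_rad)
    y += d * math.sin(heading_rad)
    return x, y

# Example: drive 12 inches straight ahead at heading 0
x, y = update_pose(0.0, 0.0, 12.0, 12.0, 0.0)
print(x, y)  # -> 12.0 0.0
```

Call it every control loop with the encoder deltas since the last call. The obvious weakness is drift: wheel slip corrupts the encoder distances and the gyro accumulates error over time, which is why an idler wheel (less slip) or periodic corrections from vision help.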
I know a lot of these ideas are covered in other threads; I'm just wondering how your team plans to do it, and what kind of success/accuracy you've had with it in the past. I don't see anyone realistically capping the vision tetra in autonomous without some kind of positioning system.
