So what we've been doing (with partial success) is implementing a grid on which our robot keeps track of where it "thinks" it is, based on its calculated heading relative to the field, the angle its wheels are facing relative to its center line, and encoder counts for distance.
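For reference, the dead reckoning described above can be sketched roughly like this (a minimal single-wheel sketch; the function name, units, and `ticks_per_inch` parameter are all my own assumptions, not your actual code):

```python
import math

def update_pose(x, y, heading_deg, wheel_angle_deg, delta_ticks, ticks_per_inch):
    """Dead-reckon a new (x, y) on the grid from one encoder delta.

    heading_deg: robot heading relative to the field.
    wheel_angle_deg: wheel angle relative to the robot's center line.
    """
    distance = delta_ticks / ticks_per_inch
    # The wheel's travel direction on the field is the robot's field
    # heading plus the wheel's own steering angle.
    travel = math.radians(heading_deg + wheel_angle_deg)
    return x + distance * math.cos(travel), y + distance * math.sin(travel)
```

A real robot would average this over all wheels (or fuse it with a gyro), but the accumulation idea is the same.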
For autonomous, we've been trying to put in a waypoint system to drive the robot. Basically, we adjust the angle of the wheels to point toward the waypoints. The problem we've been having is that as it approaches a waypoint, it'll sometimes veer off in the opposite direction, off the field; we're not quite sure what's wrong there. Another problem is that it'll sometimes approach a waypoint the "wrong way." For example, here's a diagram illustrating what we're doing: the green path and robot are what we want; the red path is what we don't want, as it leaves an impossible path to the next point.
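One thing worth checking for the "veers off in the opposite direction" symptom: if the angle error isn't wrapped into (-180, 180], the robot can chase a 350-degree turn instead of a -10-degree one. A sketch of the steering computation with wrapping (hypothetical names, not your code):

```python
import math

def wheel_angle_to_waypoint(x, y, heading_deg, wx, wy):
    """Return the wheel steering angle, relative to the robot's
    center line, that points the wheels at waypoint (wx, wy)."""
    field_angle = math.degrees(math.atan2(wy - y, wx - x))
    # Subtract the robot's heading so the result is a wheel command,
    # then wrap into (-180, 180] so we never take the long way around.
    return (field_angle - heading_deg + 180.0) % 360.0 - 180.0
```

If your current code computes `field_angle - heading` without that wrap, a small heading error near the +/-180 boundary becomes a huge steering command, which looks exactly like veering off the field.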

The image sucks, I know.
My current ideas are:
1. Node prediction - use the waypoint ahead of the current one to modify the approach angle.
2. Simple inequality check - test whether the robot is outside or inside the path; outside is fine, inside is not.
3. Increase waypoint resolution - add more waypoints in between.
4. Set an angle goal at specific waypoints - the goal then becomes reaching the point while gradually orienting the wheels in the given direction (probably coupled with one of the above).
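Idea 1 could be sketched like this: as the robot closes on the current waypoint, blend the steering target toward the direction of the next segment, so it arrives already oriented to continue (the green path rather than the red one). Everything here is a hypothetical sketch; `blend_radius` is a tuning distance I made up:

```python
import math

def blended_target_angle(x, y, wp, next_wp, blend_radius):
    """Steer at waypoint `wp`, but within `blend_radius` of it,
    blend toward the direction of the next segment (wp -> next_wp).
    Returns a field-relative target angle in degrees."""
    to_wp = math.atan2(wp[1] - y, wp[0] - x)
    next_seg = math.atan2(next_wp[1] - wp[1], next_wp[0] - wp[0])
    dist = math.hypot(wp[0] - x, wp[1] - y)
    if dist >= blend_radius:
        return math.degrees(to_wp)  # far away: aim straight at the waypoint
    # Weight shifts linearly toward the next segment as distance shrinks.
    w = dist / blend_radius
    # Blend along the shortest angular difference to avoid wrap bugs.
    diff = (next_seg - to_wp + math.pi) % (2.0 * math.pi) - math.pi
    return math.degrees(to_wp + (1.0 - w) * diff)
```

This is essentially a poor man's lookahead; idea 4 (an explicit angle goal at the waypoint) would replace `next_seg` with the stored goal angle.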
Has anyone else done something like this before? My ideas seem OK on paper, but I want some feedback before investing my time and effort.