My team uses (and loves) WPI’s Trajectory Library to drive our robot in autonomous. We have one auto mode where we instead use the camera to drive (aiming at a ball).
During that vision-based drive, I was thinking of capturing each “pose” the robot passes through, so I could then generate a trajectory that the robot can drive in reverse. I already have code that reverses a trajectory; what I’m wondering is how to capture a “Trajectory.State” each loop. Is there a preferred method?
The WPILib Odometry classes would work, but you’d be much better off re-planning a trajectory back to the original pose than reversing the actual recorded robot motion.
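For reference, if you did want to record the path, a minimal sketch of the capture side might look like this. The class and method names here are hypothetical, and it assumes you already update a WPILib odometry object every loop and can read your chassis speed; it uses the current `edu.wpi.first.math` package layout:

```java
import java.util.ArrayList;
import java.util.List;

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.trajectory.Trajectory;

// Hypothetical recorder: call record() once per 20 ms loop during the
// vision-drive mode, feeding it the latest odometry pose.
public class PathRecorder {
  private final List<Trajectory.State> samples = new ArrayList<>();
  private double elapsedSeconds = 0.0;

  // pose: current odometry pose; velocity: measured chassis speed (m/s)
  public void record(Pose2d pose, double velocityMetersPerSecond) {
    Trajectory.State state = new Trajectory.State();
    state.timeSeconds = elapsedSeconds;
    state.poseMeters = pose;
    state.velocityMetersPerSecond = velocityMetersPerSecond;
    samples.add(state);
    elapsedSeconds += 0.02; // assumes the default 20 ms loop period
  }

  // Wrap the recorded states so they can be fed to existing
  // trajectory-reversal code.
  public Trajectory asTrajectory() {
    return new Trajectory(samples);
  }
}
```

But as noted above, a trajectory assembled from raw samples like this carries all the measurement noise of the original run.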
Because your recordings are polluted by measurement noise (e.g. encoder resolution, timing errors, wheel slip) and system noise (e.g. vibration, external forces). Replaying them backwards will force your control algorithms to attempt to replicate the initial motion as it was recorded, including all of this noise.
Past that, there’s usually no reason to want to retrace the path even approximately; the desired goal is to get from point A to point B. Why trust that your initial route was optimal, rather than just re-compute an optimal route?
And if it is wrong, you will probably have issues driving the path regardless. I think it’s safe to assume your code is solid and correct. If you can’t assume that, I’d first get it to a point where you can trust it, then move on.
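To make the re-planning approach concrete, here is a rough sketch. The class name and the velocity/acceleration limits are placeholders you’d tune for your drivetrain; it assumes the current `edu.wpi.first.math` package layout:

```java
import java.util.List;

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.trajectory.Trajectory;
import edu.wpi.first.math.trajectory.TrajectoryConfig;
import edu.wpi.first.math.trajectory.TrajectoryGenerator;

public class ReturnPath {
  // Plan a fresh trajectory from wherever the vision drive ended
  // (currentPose, from odometry) back to the pose where it started.
  // setReversed(true) generates a backwards-driving profile, so the
  // robot returns without spinning around first.
  public static Trajectory planReturn(Pose2d currentPose, Pose2d startPose) {
    TrajectoryConfig config =
        new TrajectoryConfig(2.0, 1.0) // max vel (m/s), max accel (m/s^2): placeholders
            .setReversed(true);
    return TrajectoryGenerator.generateTrajectory(
        currentPose, List.of(), startPose, config); // no interior waypoints
  }
}
```

Only the two endpoint poses matter here, so the generated path is smooth and kinematically feasible regardless of how noisy the outbound drive was.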