Given the following:
- Accurate x/y/theta offset to the peg, with an accurate timestamp, from a camera
- Reasonably accurate data on where the robot was when the image was captured, interpolated back from encoder/gyro data
How do you align and drive to the peg?
Method 1: Rotate to face a position in front of the peg, drive the appropriate distance (using Motion Magic), then rotate to face the peg. This works and is reasonably easy to implement and explain, but seems less than optimal given the “extra” rotation.
Method 2: Generate a trajectory path (we use 254’s library from 2014). This works great, but takes too long on the RIO to be useful. Offloading to a faster processor, reducing the number of points, etc. would seem to make it viable (we are still testing).
Method 3: Idea only (no code yet): if we are far enough away, generate a pseudo-spline by calculating a three-segment move, where the first segment combines the rotation with some forward movement, the middle segment moves straight forward, and the final segment moves forward while rotating. We just need an accurate model of how the robot rotates when given different left/right distance inputs.
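The rotation model Method 3 needs is the standard differential-drive constant-curvature arc: heading change is the difference of the wheel distances divided by the track width. A minimal sketch (class/method names and the straight-line tolerance are made up for illustration):

```java
public class ArcModel {
    // Predict {dx, dy, dTheta} in the robot frame for one constant-curvature
    // arc, given how far each wheel traveled and the effective track width.
    public static double[] predict(double leftDist, double rightDist, double trackWidth) {
        double dTheta = (rightDist - leftDist) / trackWidth; // heading change, radians
        double dist = (leftDist + rightDist) / 2.0;          // arc length of the center
        double dx, dy;
        if (Math.abs(dTheta) < 1e-9) {
            dx = dist;                                       // effectively straight
            dy = 0.0;
        } else {
            double radius = dist / dTheta;                   // signed turn radius
            dx = radius * Math.sin(dTheta);                  // forward displacement
            dy = radius * (1.0 - Math.cos(dTheta));          // lateral displacement
        }
        return new double[] {dx, dy, dTheta};
    }
}
```

Note the effective track width for a skid-steer robot is usually a bit larger than the physical wheel spacing because of wheel scrub; it is worth measuring empirically by spinning the robot in place.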
A common way to do mobile manipulation like this is to first get your robot into a “pre-manipulation” pose relative to the feature you detect (ex. aligned with the lift a foot or so away), then use a preprogrammed sequence to do the actual manipulation task. So the problem becomes a matter of getting from your current pose to the pre-manipulation pose quickly and accurately.
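For concreteness, computing that pre-manipulation pose is just a little 2D geometry: back off from the feature along its outward normal and face it. A minimal sketch, assuming the peg's pose is already expressed in your odometry frame (the standoff distance and all names here are illustrative):

```java
public class GoalPose {
    // pegX/pegY: peg position; pegTheta: direction the peg points outward.
    // Returns {x, y, theta} of a pose 'standoff' meters in front of the peg,
    // facing it (theta is not normalized here, for simplicity).
    public static double[] preManipulationPose(double pegX, double pegY,
                                               double pegTheta, double standoff) {
        double x = pegX + standoff * Math.cos(pegTheta); // back away along the normal
        double y = pegY + standoff * Math.sin(pegTheta);
        double theta = pegTheta + Math.PI;               // face back toward the peg
        return new double[] {x, y, theta};
    }
}
```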
As you note, there are many methods that can do this! As usual, there are tradeoffs.
Rotate-Translate-Rotate (Method 1) will certainly work for any choice of start and end pose. For that matter, so would Translate-Rotate-Translate, or a number of longer sequences of different motions. It has actually been shown that the minimum wheel-rotation path for a differential drive robot is always one of 28 different basic motion sequences. So one approach (let’s call it Method 1a) would be to quickly evaluate a handful of potential motions and choose the shortest one.
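As a sketch of the cost comparison Method 1a needs, here is the total wheel travel of a rotate-translate-rotate sequence under the wheel-rotation metric (sum of both wheels' travel); a point turn of angle theta moves each wheel theta times half the track width. Names are illustrative:

```java
public class MotionCost {
    // Total wheel travel (|left| + |right|) for a rotate-translate-rotate move.
    public static double rtrCost(double turn1, double dist, double turn2,
                                 double trackWidth) {
        // Each point turn moves each wheel |theta| * trackWidth / 2, so both
        // wheels together contribute |theta| * trackWidth.
        double spin = (Math.abs(turn1) + Math.abs(turn2)) * trackWidth;
        double drive = 2.0 * Math.abs(dist); // both wheels travel the distance
        return spin + drive;
    }
}
```

Evaluating a few candidate sequences is then just computing each one's cost and taking the minimum.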
However, unless your robot has infinite acceleration, minimum wheel rotation does not necessarily mean minimum time. This is why splines (Method 2) are nice; they are smooth, so your robot doesn’t need to speed up and slow down so much (though coming up with a true minimum-time spline is a tough problem that requires iteratively optimizing your trajectory). But as you point out, this can be expensive to compute (unless you optimize the heck out of it).
Method 3 is a bit of a middle way…you propose generating arcs and lines to get you to your goal. This is totally reasonable (and a common way to do online path planning in the presence of obstacles). One caution is that any time you have a curvature discontinuity, your robot may not be able to follow the generated path exactly.
Regardless of how you generate the path, you will want a controller to be able to follow it. One approach is to turn the path into left and right wheel profiles and just follow those (you might drift a bit because nothing is correcting for synchronization errors).
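The path-to-wheel-profile conversion is straightforward if your path carries velocity and curvature at each point; a minimal sketch (sign conventions and names are illustrative):

```java
public class WheelProfiles {
    // Convert a center velocity and path curvature into left/right wheel
    // velocities for a differential drive (positive curvature turns left).
    public static double[] toWheelSpeeds(double v, double curvature, double trackWidth) {
        double omega = v * curvature;                 // yaw rate along the path
        double left = v - omega * trackWidth / 2.0;   // inside wheel slows down
        double right = v + omega * trackWidth / 2.0;  // outside wheel speeds up
        return new double[] {left, right};
    }
}
```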
You might want to look into a controller that does not require continuous curvature paths (like a pure pursuit controller). This controller can even handle discontinuities in heading (90 degree turns) smoothly.
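The core of pure pursuit is small: transform a lookahead point on the path into the robot's frame, then steer along the arc that passes through it. The curvature of that arc is 2y/L^2, where y is the point's lateral offset and L its distance. A sketch of just that step (names are illustrative, not any team's API):

```java
public class PurePursuit {
    // Lookahead point in the robot frame: x forward, y to the left.
    // Returns the signed curvature of the arc joining the robot to the point.
    public static double curvatureToPoint(double x, double y) {
        double L2 = x * x + y * y;   // squared distance to the lookahead point
        if (L2 < 1e-9) return 0.0;   // already at the point; go straight
        return 2.0 * y / L2;         // the standard pure pursuit relation
    }
}
```

That curvature, combined with a desired speed, feeds directly into a wheel-speed conversion like the one above; the lookahead distance is the main tuning knob (short = tight tracking but oscillation, long = smooth but corner-cutting).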
Alternatively, there is a Method 4 which does not require a path and simply uses a controller on the error in pose between your current location and the goal. The downside is that the path the controller takes may wind up with you driving through the airship…
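One common form of such a pose controller (the polar-coordinate control law that appears throughout the mobile-robotics literature, not necessarily the exact one in any particular paper) converts the pose error into a distance rho, a bearing alpha, and a final-heading term beta, then commands velocities proportional to each. A hedged sketch, with made-up gains:

```java
public class PoseController {
    // dx, dy: goal position in the robot frame (x forward, y left).
    // thetaErr: goal heading minus robot heading.
    // Stability typically needs kRho > 0, kAlpha > kRho, kBeta < 0.
    public static double[] command(double dx, double dy, double thetaErr,
                                   double kRho, double kAlpha, double kBeta) {
        double rho = Math.hypot(dx, dy);      // distance to the goal
        double alpha = Math.atan2(dy, dx);    // bearing to the goal
        double beta = thetaErr - alpha;       // heading error at arrival
        double v = kRho * rho;                // linear velocity command
        double omega = kAlpha * alpha + kBeta * beta; // angular velocity command
        return new double[] {v, omega};
    }
}
```

Because the closed-loop trajectory is whatever the gains produce, this is where the "driving through the airship" risk comes from: nothing in the controller knows about obstacles.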
Thanks for the great reply. I have been looking at the Poofs’ 2016 code and the pure pursuit controller, and the MathWorks page you linked is a much easier read than the paper linked in the code.
I also really like the look of the “Method 4” paper. We implement these actions as something that only runs while the driver holds down a button, so the driver can apply a reasonable heuristic (“it looks like the robot is doing something crazy”) and just abort.
Looks like I have a lot of reading to do – thanks again for the great links (and the wonderful example code, my kids have a much better robot for having read 254’s code).