I’ll preface this by saying that I’m not an expert in the field, but I can offer some ideas as to how I might approach the problem. What follows is mostly an educated guess.

Sadly, TAMP (task and motion planning) is an *astonishingly* difficult problem. A generalized algorithm for optimal solutions, especially under dynamic constraints, is computationally intractable in most cases. However, we can make a few simplifying assumptions:

- The state space dimensionality is low (typically we assume a second order model).
- The space is not pathologically difficult - typically your intakes are tolerant of reasonable alignment errors, and there are few obstacles around.
- The robot is (hopefully) controllable.
- Our intake is “touch it, own it” - that is, we do not need to remain in a pickup state for any length of time to collect a piece. For example, if we reach a state where we are directly in front of a ball while traveling at 0 ft/sec, we pick it up, even if we were in that state only for an instant.
- There are no constraints on the order in which we pick up the game pieces.
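To make the “touch it, own it” assumption concrete, here’s a minimal sketch of what membership in a pickup set might look like. The `pickup_region` helper and its tolerances are entirely made up for illustration - your real \mathcal{P}_i depends on your intake geometry:

```python
def pickup_region(px, py, pos_tol=0.3, vel_tol=0.1):
    """'Touch it, own it': piece i is collected the instant the robot's
    state enters P_i.  Here P_i is (hypothetically) a small box around
    the piece at (px, py), entered slowly enough for the intake to work."""
    def contains(q):
        x, y, theta, v, omega = q
        return (abs(x - px) <= pos_tol
                and abs(y - py) <= pos_tol
                and abs(v) <= vel_tol)
    return contains

# A piece sitting at (2.0, 3.0) on the field.
in_P1 = pickup_region(2.0, 3.0)
hit = in_P1((2.1, 2.9, 0.0, 0.0, 0.0))   # directly in front, stopped
miss = in_P1((2.1, 2.9, 0.0, 1.5, 0.0))  # right place, moving too fast
```

Note that the instant `contains(q)` becomes true, the piece is owned - no dwell time is modeled.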

**The Approach**

For notation: suppose our configuration space is \mathcal{Q}, and the set of control inputs is U.

We can use these assumptions to construct a solution with the following strategy:

- For each game piece p_1, p_2, \cdots p_P, we can construct the sets \mathcal{P}_1, \mathcal{P}_2 \cdots \mathcal{P}_P \subseteq \mathcal{Q}, where if a robot state q \in \mathcal{Q} is also in \mathcal{P}_i, the robot picks up the i-th game piece.
- We sample N elements \{q_1, q_2, \cdots q_N\} := Q_{\text{sampled}} of \mathcal{Q}.
- With some clever math, we construct a roadmap of our space, where nearby q_i, q_j \in Q_{\text{sampled}} have known controls u_{i \rightarrow j}, u_{j \rightarrow i} : \mathbb{R}^+ \to U to travel between them. We can then use that to create a state transition graph G, whose nodes are Q_{\text{sampled}}, and whose edges are weighted by the time it takes to transition between the sampled configurations.
- Construct a traveling-purchaser problem instance, where a configuration q_i \in Q_\text{sampled} stocks product g_j if q_i \in \mathcal{P}_j. We can assume that all products have a cost of 0 here. Once you’ve found your solution as a series of edges, you can go back to your roadmap to find the controls which realize that path through the space.
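The whole pipeline above can be sketched in miniature. This is a toy, hand-built example: the roadmap and its transit times are invented, and the traveling-purchaser step is brute force over piece orderings (fine for a handful of pieces; real instances need a proper TPP/GTSP solver):

```python
import heapq
from itertools import permutations

def dijkstra(graph, src):
    """Shortest travel times from src to every node (edge weights = seconds)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_pickup_tour(graph, start, piece_nodes):
    """Brute-force traveling-purchaser with zero product costs: visit at
    least one node from each piece's node set, minimizing total time."""
    dist = {u: dijkstra(graph, u) for u in graph}    # all-pairs shortest times
    best = (float("inf"), None)
    for order in permutations(piece_nodes):          # pickup order of the pieces
        # DP over which representative node to use for each piece set
        frontier = {start: (0.0, [start])}
        for node_set in order:
            nxt = {}
            for u, (c, path) in frontier.items():
                for v in node_set:
                    nc = c + dist[u].get(v, float("inf"))
                    if nc < nxt.get(v, (float("inf"), None))[0]:
                        nxt[v] = (nc, path + [v])
            frontier = nxt
        for c, path in frontier.values():
            if c < best[0]:
                best = (c, path)
    return best

# Toy roadmap: nodes are sampled configurations, weights are transit times (s).
graph = {
    "s": [("a", 1.0), ("b", 2.0)],
    "a": [("s", 1.0), ("b", 1.0), ("c", 3.0)],
    "b": [("s", 2.0), ("a", 1.0), ("c", 1.0)],
    "c": [("a", 3.0), ("b", 1.0)],
}
# Piece 1 is collectible from node a; piece 2 from node b or node c.
total_time, path = best_pickup_tour(graph, "s", [{"a"}, {"b", "c"}])
```

The returned `path` is a sequence of roadmap nodes; mapping each hop back to its stored control u_{i \rightarrow j} gives the executable plan.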

**Some Notes:**

For something like a WCD, I would use a second-order differential car model. In it, we typically claim that every q \in \mathcal{Q} is the tuple (x, y, \theta, v, \omega) of position, angle, velocity, and rotational velocity, and our control inputs (u_1, u_2) \in U are the torques on the left and right sides. This gives us the control rule (\dot x, \dot y, \dot \theta, \dot v, \dot \omega) = (v \cos \theta, v \sin \theta, \omega, k_1 (u_1 + u_2), k_2 (u_2 - u_1) ), for some constants k_1 and k_2 determined by the physics of your robot. IIRC, the second-order differential car is controllable (and if you don’t bound your controls, you should be able to make a control which connects to arbitrary states with ease), but I can’t seem to dig up any papers on the topic from a minute or two of searching.
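As a sketch, here’s what forward-integrating that control rule might look like, using simple Euler integration and placeholder constants k_1 = k_2 = 1 that you’d actually measure on your own robot:

```python
import math

def step(q, u, k1=1.0, k2=1.0, dt=0.01):
    """One Euler step of the second-order differential-drive model.
    q = (x, y, theta, v, omega); u = (u1, u2) are left/right torques.
    k1, k2 are placeholder physical constants."""
    x, y, theta, v, omega = q
    u1, u2 = u
    return (
        x + v * math.cos(theta) * dt,
        y + v * math.sin(theta) * dt,
        theta + omega * dt,
        v + k1 * (u1 + u2) * dt,
        omega + k2 * (u2 - u1) * dt,
    )

# Accelerate straight ahead from rest with equal torque on both sides.
q = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):        # simulate 1 second
    q = step(q, (0.5, 0.5))
```

Equal torques keep \omega at zero so the robot drives straight; unequal torques make it turn in place or arc.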

Due to the physical limitations of your system (for instance, you are limited by the power of your motors and the traction of your wheels), it may be impossible to create a control which connects two given states. In this case, simply do not add that edge to your roadmap.
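One crude (but easy) way to decide whether an edge is connectable under bounded torques is shooting: sample random bounded control sequences, forward-simulate each one, and keep the edge only if some sequence lands close enough to the target. A rough sketch, with made-up torque bounds and tolerances - real implementations use smarter steering functions or trajectory optimization:

```python
import math
import random

def simulate(q, controls, k1=1.0, k2=1.0, dt=0.05):
    """Forward-integrate the differential-drive model under a piecewise-
    constant control sequence (one (u1, u2) torque pair per dt interval)."""
    x, y, theta, v, omega = q
    for u1, u2 in controls:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        v += k1 * (u1 + u2) * dt
        omega += k2 * (u2 - u1) * dt
    return (x, y, theta, v, omega)

def try_connect(q_from, q_to, u_max=1.0, horizon=40, tries=2000, tol=0.5, seed=0):
    """Shooting-based steering: sample random torque sequences within the
    motor limits and keep the best endpoint.  Returns a control sequence
    if it lands within `tol` of q_to, else None (so you drop the edge)."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(tries):
        controls = [(rng.uniform(-u_max, u_max), rng.uniform(-u_max, u_max))
                    for _ in range(horizon)]
        q_end = simulate(q_from, controls)
        err = math.dist(q_end[:2], q_to[:2])  # position error only, for simplicity
        if err < best_err:
            best, best_err = controls, err
    return best if best_err < tol else None

coast = simulate((0.0, 0.0, 0.0, 1.0, 0.0), [(0.0, 0.0)] * 10)  # coast straight
plan = try_connect((0.0, 0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0, 0.0))
```

If `try_connect` returns `None`, that pair of states simply doesn’t get an edge in G; the planner routes around it.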

To construct your roadmap, I strongly recommend using the PRM* algorithm (linked in the roadmap paper above). This will result in a pretty well-formed graph, leading to fast paths.
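For reference, PRM* connects each of the n samples to every neighbor within a shrinking radius r(n) = \gamma (\log n / n)^{1/d}, where \gamma must exceed a constant depending on the free-space volume. A purely geometric (kinematics-free) sketch in the unit square, with an illustrative \gamma I have not tuned:

```python
import math
import random

def prm_star_edges(samples, d=2, gamma=2.0):
    """Connect each sample to all others within the PRM* radius
    r(n) = gamma * (log n / n)^(1/d).  gamma = 2.0 is an illustrative
    guess for the unit square, not a derived constant."""
    n = len(samples)
    r = gamma * (math.log(n) / n) ** (1.0 / d)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(samples[i], samples[j]) <= r:
                edges.append((i, j))  # in practice, also require that a
                                      # feasible, collision-free control
                                      # connecting the two states exists
    return edges, r

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(200)]
edges, radius = prm_star_edges(pts)
```

In the kinodynamic setting you’d measure “nearness” by transit time under the dynamics rather than Euclidean distance, but the shrinking-radius idea is the same.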

**Closing Thoughts**

Pretty much every step here is computationally very intensive, and will have lots of gotchas and fiddly details to hack out. To get really good answers, your roadmap would have to be huge (~10,000 or more nodes), leading to huge computational demands to actually solve. However, this is a very interesting problem. If you relax some of the assumptions, you can make the problem as hard as you want, but this is about where the train stops when it comes to applying canonical methods. For more reading, I recommend *Principles of Robot Motion* by Howie Choset et al.

Additionally, look at the Open Motion Planning Library for implementations of motion-planning algorithms. If you have more questions (or interesting ideas!) don’t hesitate to post them!