Finding Optimal Path Through Game Objects

Looking at 604’s Quikplan, which generates a path from a start point through multiple power cells to an ending shooting position, I had the idea of extending this problem further. Here is a problem statement to focus this discussion:

Given a starting position, a number of game objects on the field, and an end position, what is the time-optimal path for a drivetrain to follow?

At first I would imagine it would take an optimization approach, as I’m sure the solution space is quite big and there are a lot of inputs. To take this further than the post I previously mentioned, I have prepared a few questions:

  • What happens when the balls are clumped together? The width of the intake and its position on the chassis would now matter. Do you treat each game object as a position and heading setpoint? Do you treat items clumped together as a bigger ‘superpiece’?
  • How does the solution change if I compute paths before a match vs. on the fly?
  • What differences would there be between a tank path and a holonomic path? How much is one better/worse than the other?
  • What if the game objects are moving?
  • What about coordinating with other subsystems (i.e. game objects are at different heights and you have to also consider an elevator)?
  • What happens when the game objects have different point values?
  • If you add a time constraint, how do you determine which pieces to acquire and then score? I feel like some form of the rod cutting problem would apply.

If anyone has worked on projects that have answered any of these questions, please share with us! I want to discuss algorithms and how they could be implemented in code. These are very exciting questions to me because I love thinking about how computer vision, motion control, and now decision-making (i.e. the last two questions above) can all blend together. If you have any other ‘questions’ like mine, please feel free to add them so we can discuss! I know that this community will have a lot of opinions and ideas on these questions, and I would love to hear from all of you!

EDIT: formatting and added a few more questions.


This and the variations below are versions of the well-studied travelling salesman problem.


Given the relatively few nodes, this is computationally not the “hard” part. Instead, the main issue is that a robot can be in many different states to pick up a single game piece, which makes it impractical to just blindly stitch pickup states together. For example, a path which picks up balls 1, 2, and 3 in order could be drastically different from picking up 3, 2, and 1 (despite being very similar on a simple graph structure).
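To make the "relatively few nodes" point concrete: with the handful of game pieces typical in FRC, the ordering subproblem really can be brute-forced. A minimal sketch, using made-up field positions and a crude straight-line-distance time estimate (which is exactly the simplification that ignores the state/dynamics issues described above):

```python
import itertools
import math

def path_time(order, start, speed=1.0):
    """Estimate total time to visit points in the given order, using
    straight-line distance / constant speed as a crude proxy for drive time."""
    total, pos = 0.0, start
    for p in order:
        total += math.dist(pos, p) / speed
        pos = p
    return total

def best_order(start, pieces):
    """Brute-force every pickup order; fine for the ~5-10 pieces in FRC."""
    return min(itertools.permutations(pieces),
               key=lambda order: path_time(order, start))

# Hypothetical power-cell layout (field coordinates in meters).
start = (0.0, 0.0)
pieces = [(2.0, 1.0), (4.0, 0.5), (3.0, 3.0), (1.0, 2.5)]
order = best_order(start, pieces)
```

Even at 10 pieces this is only 3.6 million permutations; the hard part, as noted, is that the edge costs here are nothing like real drive times.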

The generalized name of these kinds of problems is task and motion planning (TAMP) and it is an active field of research. Additionally, the requirements of optimality and the kinodynamic constraints make planning even harder.


Hi, thanks for replying. Is there a variation of Travelling Salesman that takes into account the ‘geometry’ of the salesman and the cities? As I understand it, TSP is a graph optimization problem that would give the order of game objects to drive through. But would this path not change given an intake? Using 2021 as an example, the path a bot with a 2+ ball wide intake would take is different from the path a bot with a 1-ball wide bumper cutout would take.

At a quick glance, I think Minkowski Sums could help with this, but I’m not too sure how they work. Maybe there’s a way to shift the nodes to account for the geometry of the problem such that the TSP solution for game object order would be time-optimal (i.e. treating really close balls as one node).
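One cheap way to realize the "really close balls become one node" idea, as a sketch: greedily merge any pieces closer together than the intake width into a single ‘superpiece’ node at their centroid. The clustering rule and the intake width here are illustrative assumptions, not a standard method:

```python
import math

def cluster_pieces(pieces, intake_width):
    """Greedily merge game pieces closer together than the intake width
    into one 'superpiece' cluster (single-link style), then return each
    cluster's centroid as the node to feed into TSP."""
    clusters = [[p] for p in pieces]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(math.dist(a, b) < intake_width
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    def centroid(c):
        return (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
    return [centroid(c) for c in clusters]

# Two balls within intake width of each other, one far away.
balls = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0)]
nodes = cluster_pieces(balls, intake_width=0.7)
```

The centroid is a rough choice; a wide intake might actually want the node placed so the sweep covers both balls, which is where the geometry questions above come back in.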

Hi, thanks for replying. You brought up a good point that states do matter: TSP doesn’t care about heading, which is important for a robot (assuming there is only one intake).

I will definitely be looking into TAMP after this! If I were to try to section this problem into broad steps based on what we’ve just discussed (is there an algorithm that would address #2?):

  • TSP finds optimal path in x,y
  • Consider the heading of the robot at the beginning, during, after
  • Consider the geometry of the intake relative to the chassis, game pieces
  • Consider kinodynamic constraints
  • Optimize for time

Clayton, do you have any experience in TAMP you wouldn’t mind sharing with us?

I’ll preface this by saying that I’m not a total expert in the field, but I can give some ideas as to how I might approach the problem. What follows is mostly an “educated guess” as to an approach.

Sadly, TAMP is an astonishingly difficult problem. A generalized algorithm for optimal solutions, especially when we have dynamic constraints, would be computationally intractable for most cases. However, we can make a few simplifying assumptions:

  • The state space dimensionality is low (typically we assume a second order model).
  • The space is not pathologically difficult - typically your intakes are tolerant of reasonable errors in alignment, and there are few obstacles around.
  • The robot is (hopefully) controllable.
  • Our intake is “touch it, own it” - that is, we do not need to stay within a set state for a given amount of time to pick up a piece. For example, if we reach a state where we are directly in front of a ball while traveling at 0 ft/sec, we pick it up, even if we were in that state for a very small amount of time.
  • There are no constraints on the order in which we pick up the game pieces.

The Approach

For notation: suppose our configuration space is \mathcal{Q}, and the set of control inputs is U.

We can use these assumptions to make a solution by the following strategy:

  1. For each game piece p_1, p_2, \cdots p_P, we can construct the sets \mathcal{P}_1, \mathcal{P}_2 \cdots \mathcal{P}_P \subseteq \mathcal{Q}, where if a robot state q \in \mathcal{Q} is also in \mathcal{P}_i, the robot picks up the i-th game piece.
  2. We sample N elements \{q_1, q_2, \cdots q_N\} := Q_{\text{sampled}} of \mathcal{Q}.
  3. With some clever math, we construct a roadmap of our space, where nearby q_i, q_j \in Q_{\text{sampled}} have known controls u_{i \rightarrow j}, u_{j \rightarrow i} : \mathbb{R}^+ \to U to travel between them. We can then use that to create a state transition graph G, whose nodes are Q_{\text{sampled}} and whose edge weights are the times needed to travel between the sampled configurations.
  4. Construct a traveling-purchaser problem, where a configuration q_i \in Q_\text{sampled} has some product g_j if q_i \in \mathcal{P}_j. We can assume that all products have a cost of 0 here. Once you’ve found your solution as a series of edges, you can go back to your roadmap to find the controls which would make the path through the system.
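A toy version of steps 3 and 4, with a hand-made roadmap standing in for the "clever math" and brute force standing in for a real traveling-purchaser solver. All the travel times and the pickup sets are invented for illustration:

```python
import itertools

# Toy roadmap: sampled configurations 0..4 with pairwise travel times
# (seconds). In the real approach these come from the PRM*-built roadmap.
time_between = {
    (0, 1): 1.0, (0, 2): 2.0, (0, 3): 2.5, (0, 4): 4.0,
    (1, 2): 1.2, (1, 3): 2.0, (1, 4): 3.0,
    (2, 3): 1.1, (2, 4): 2.2,
    (3, 4): 1.0,
}

def t(a, b):
    return 0.0 if a == b else time_between[tuple(sorted((a, b)))]

# Which configurations pick up which piece (q_i in the set P_j). Here,
# piece 'A' can be collected from config 1 or 2; piece 'B' only from 3.
collectors = {"A": {1, 2}, "B": {3}}

def best_tour(start):
    """Brute force over pickup orders and, for each piece, the choice of
    collecting configuration -- the traveling-purchaser step, with all
    'product' costs set to 0 as in the assumption above."""
    best = (float("inf"), None)
    for order in itertools.permutations(collectors):
        for picks in itertools.product(*(collectors[p] for p in order)):
            total, here = 0.0, start
            for nxt in picks:
                total += t(here, nxt)
                here = nxt
            best = min(best, (total, picks))
    return best

total_time, visited = best_tour(start=0)
```

At real roadmap sizes you would of course use shortest-path queries over G rather than requiring direct edges, and a proper TPP solver rather than enumeration.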

Some Notes:

For something like a WCD, I would use a second-order differential car model. In it, we typically claim that every q \in \mathcal{Q} is the tuple (x, y, \theta, v, \omega) of position, angle, velocity, and rotational velocity, and our control inputs (u_1, u_2) \in U are the torques on the left and right sides. This gives us the control rule (\dot x, \dot y, \dot \theta, \dot v, \dot \omega) = (v \cos \theta, v \sin \theta, \omega, k_1 (u_1 + u_2), k_2 (u_2 - u_1) ), for some constants k_1 and k_2 determined by the physics of your robot. IIRC, the second-order differential car is controllable (and if you don’t bound your controls, you should be able to make a control which connects to arbitrary states with ease), but I can’t seem to dig up any papers on the topic from a minute or two of searching.
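A quick sketch of this model, forward-integrated with Euler steps. The constants k_1, k_2 and the time step are made up; a real robot would get them from its mass, track width, and gearing:

```python
import math

def step(state, u, k1=1.0, k2=1.0, dt=0.01):
    """One forward-Euler step of the second-order differential-car model
    described above: state = (x, y, theta, v, omega), u = (u1, u2) are the
    left/right torques. k1, k2, dt are illustrative constants."""
    x, y, th, v, w = state
    u1, u2 = u
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt,
            v + k1 * (u1 + u2) * dt,
            w + k2 * (u2 - u1) * dt)

# Equal torques on both sides: the robot accelerates straight ahead.
s = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    s = step(s, (0.5, 0.5))
```

Unequal torques would build up omega and curve the path, which is the coupling that makes connecting arbitrary states with bounded controls nontrivial.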

Due to limitations on the physics of your space (for instance, you are limited by the power of your motors and your traction from your wheels) it may be difficult to create a control which connects two different states. In this case, simply do not add that edge to your roadmap.

To construct your roadmap, I strongly recommend using the PRM* algorithm (linked in the roadmap paper above). This will result in a pretty well-formed graph, leading to fast paths.

Closing Thoughts

Pretty much every step here is computationally very intensive, and will have lots of gotchas and fiddly details to hack out. To get really good answers, your roadmap would have to be huge (~10,000 or more nodes), leading to huge computation demands to actually solve. However, this is a very interesting problem. If you relax some of the constraints on the problem, you can make it as hard as you want it, but this is about where the train stops when it comes to applying canonical methods to solve these things. For more reading, I recommend Principles of Robot Motion by Howie Choset et alii.

Additionally, look at the Open Motion Planning Library for implementations of motion-planning algorithms. If you have more questions (or interesting ideas!) don’t hesitate to post them!


Talked to Clayton a bit about this offline. While this is definitely the “right” approach to be completely generalizable and get the academically time-optimal scoring setup, we agreed it’s pretty much totally infeasible to actually implement, even to generate paths offline, without vastly reducing the search space. My suggestion was a Monte Carlo approach:

  1. Generate a random/best guess set of possible paths through/near the game pieces.
  2. Fit these paths to splines respecting the constraints of how your robot drives and intakes.
  3. Calculate how long each of these paths would take.
  4. Take the best path(s) from (3), generate a new set of paths from modifying these ones slightly, and return to step (2). Repeat until satisfied.
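The loop above can be sketched very compactly. Here straight-line segment times stand in for step (2)'s spline fitting, so the numbers mean nothing physically, but the keep-the-best-and-perturb structure is the same:

```python
import math
import random

def estimate_time(waypoints, speed=3.0):
    """Crude stand-in for step (3): total straight-line distance / speed.
    A real version would fit splines and respect the robot's dynamics."""
    return sum(math.dist(a, b) / speed
               for a, b in zip(waypoints, waypoints[1:]))

def refine(waypoints, rounds=200, jitter=0.2, rng=random.Random(0)):
    """Steps (1)-(4): keep the best candidate path, perturb it slightly,
    repeat until satisfied. Endpoints stay fixed; interior waypoints move."""
    best = list(waypoints)
    for _ in range(rounds):
        cand = [best[0]] + [
            (x + rng.uniform(-jitter, jitter), y + rng.uniform(-jitter, jitter))
            for x, y in best[1:-1]
        ] + [best[-1]]
        if estimate_time(cand) < estimate_time(best):
            best = cand
    return best

# Hypothetical dog-leg path between two points; it should straighten out.
initial = [(0.0, 0.0), (1.0, 2.0), (4.0, 0.0)]
better = refine(initial)
```

Keeping several of the best candidates per round instead of one, and adding occasional large jumps, would make this closer to a proper stochastic search.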

This is probably still not feasible for online use, but may be possible to generate paths offline.


This is a great question in a really fascinating field!

Traveling salesman, and other geometric pathfinders like A*/PRM*, are definitely not the solution you are looking for. This post explains it in a bit more detail, but routes constructed of straight lines and turns are only time-optimal if the robot has infinite acceleration available. When actual FRC robots, 120 lb objects moving at 15 ft/s, are considered, this assumption breaks down quickly.

You also won’t find much useful for FRC by looking into motion planning either, though the term is more applicable to what we are trying to solve.

The most effective solution is trajectory optimization, which is pretty much exactly what you are looking for. The idea here is that the problem of “find the optimal path” can be transformed into a mathematical constrained optimization problem (calc 3 covers a simple form of this with Lagrange multipliers) which the computer can solve numerically.

The process for doing this starts with defining the state of the robot, (x, y, θ, vₗ, vᵣ), and the control inputs of the robot, (uₗ, uᵣ), the accelerations of the left and right motors (this is all for a differential drive; the dynamics will of course change between different types of drivetrains). The derivative of the state with respect to time can then be expressed as (cos(θ) * (vᵣ + vₗ) / 2, sin(θ) * (vᵣ + vₗ) / 2, (vᵣ - vₗ) / wheelbase, uₗ, uᵣ). Next, the trajectory of the robot is represented as a large number of states, each separated by an equal amount of time. Given the initial state of the robot, the robot’s position and velocity can be calculated at all the states by integrating the dynamics over time using the above equations. Now that everything is expressed mathematically, we can apply constraints such as the initial and final position and velocity of the robot, or the max acceleration allowed by the motors. This might all sound complicated, but it’s actually not that bad since you can get a library to do almost all of this for you :slight_smile:
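A sketch of just the integration step, using the state and dynamics described above. In a real trajectory optimizer these states would become decision variables and the integration would be imposed as equality constraints for the solver, but rolling it forward by hand shows the bookkeeping (the wheelbase and time step here are arbitrary):

```python
import math

def rollout(state0, controls, wheelbase=0.6, dt=0.02):
    """Forward-Euler integration of the differential-drive dynamics:
    state = (x, y, theta, v_l, v_r), each control = (u_l, u_r) wheel
    accelerations held for one dt. Returns the full list of states."""
    states = [state0]
    x, y, th, vl, vr = state0
    for ul, ur in controls:
        v = (vl + vr) / 2.0                      # chassis speed
        x += math.cos(th) * v * dt
        y += math.sin(th) * v * dt
        th += (vr - vl) / wheelbase * dt         # turn rate from wheel diff
        vl += ul * dt
        vr += ur * dt
        states.append((x, y, th, vl, vr))
    return states

# Accelerate both sides equally for 50 steps: straight-line motion.
traj = rollout((0.0, 0.0, 0.0, 0.0, 0.0), [(1.0, 1.0)] * 50)
```

A solver like the ones linked below would then search over the (uₗ, uᵣ) sequence, subject to boundary and acceleration constraints, to minimize total time.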

I’ll list some ideas below and link some resources to help you learn more about trajectory optimization.

  • Once you get comfortable with trajectory optimization, you can start to look into multi-phase trajectories. This will allow you to expand the trajectory beyond just moving from point A to B, to Point A to B to C to D etc. The game pieces you are targeting can then be defined as the initial and final states of each sub-trajectory. This is used a lot in the aerospace field for calculating trajectories of rockets as there is an instantaneous change in mass when fuel cells are detached.
  • Obstacle avoidance can be added as a constraint that the distance from each side of the robot to the position of the obstacle must remain greater than some predefined safety distance.
  • If obstacles are moving over time, things get more tricky. This can be dealt with by modeling the position of the obstacle as a function of time (maybe a high-degree polynomial?).
  • Elevated obstacles an elevator might hit can be avoided by repeating the above method, this time constraining the distance between the elevator and the obstacle to stay above a certain threshold.
  • The method described in this post can be expanded to holonomic vehicles as well, which Triple Helix has been developing in the offseason.
  • Trajectory optimization is unlikely to be fast enough to be calculated on the fly for a non-convex problem, but it can easily be precomputed.
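As a sketch of the obstacle-distance idea from the list above: check that every corner of a rectangular robot stays at least a safety distance away from a point obstacle. All dimensions here are invented, and a real optimizer would express this as a smooth inequality constraint rather than a boolean check:

```python
import math

def clear_of_obstacle(pose, obstacle, safety=0.2,
                      half_length=0.45, half_width=0.4):
    """Return True if every corner of the robot (an axis-aligned rectangle
    in the robot frame, rotated by theta into the field frame) is at least
    `safety` meters from the point obstacle. Dimensions are illustrative."""
    x, y, th = pose
    ox, oy = obstacle
    c, s = math.cos(th), math.sin(th)
    for dx, dy in [(half_length, half_width), (half_length, -half_width),
                   (-half_length, half_width), (-half_length, -half_width)]:
        # Corner position in field coordinates.
        cx, cy = x + c * dx - s * dy, y + s * dx + c * dy
        if math.dist((cx, cy), (ox, oy)) < safety:
            return False
    return True

ok = clear_of_obstacle((0.0, 0.0, 0.0), obstacle=(3.0, 0.0))
hit = clear_of_obstacle((0.0, 0.0, 0.0), obstacle=(0.5, 0.4))
```

Checking only corners misses obstacles near an edge midpoint; a fuller version would use the distance from the point to each edge segment, or inflate the obstacle and check the robot's bounding circle.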

Here are some resources that might help you get started (good example of how trajectory optimization can be done on an FRC robot)
Introduction to Trajectory Optimization - YouTube
Ch. 10 - Trajectory Optimization (the videos to this course are also awesome)
CasADi - Blog - Optimal control problems in a nutshell

I’d be happy to answer any more questions as this is something Triple Helix has been working on for a while now!


Thanks for your reply, and especially for the readings. Based on how computationally intensive the project is, I wonder if you could combine something like the Traveling-Purchaser problem (which would figure out the order for the objects) and then merge all the (objX, objY, heading) with a single spline, then follow that with something dynamic like pure pursuit. I know an inherent flaw of pure pursuit is that it keeps the robot on the correct (x,y) but doesn’t do too much about heading. Do you know of an algorithm like pure pursuit which also factors in the heading?

Just a thought I had about the pose for game pieces: to ensure the robot goes through the intake with the right heading, you could set a setpoint a fixed distance in front of the object, so the next point would be the game object and that segment would be a straight line. The problem then is that the setpoint becomes dependent on the point behind it, and that dependency would trickle down into all the other points. This would also eliminate the option of sweeping into a game object.
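The fixed-standoff setpoint is easy to write down as geometry; here is a sketch with an arbitrary standoff distance:

```python
import math

def approach_point(piece, heading, standoff=0.5):
    """Place a setpoint a fixed distance 'behind' the game piece along the
    desired intake heading, so the final segment into the piece is a
    straight line. `standoff` is an arbitrary illustrative distance."""
    px, py = piece
    return (px - standoff * math.cos(heading),
            py - standoff * math.sin(heading))

# Approach a ball at (2, 3) driving in the +x direction (heading 0).
setpoint = approach_point((2.0, 3.0), heading=0.0)
```

The heading itself is a free parameter per piece, which is exactly the dependency problem described above: choosing it well couples each approach point to its neighbors.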

Thanks for the reply. As I understand, Monte Carlo is used because the solution space is way too big, so we need a way to converge towards a solution? This to me looks similar to a genetic algorithm without the random chance of a mutation occurring. So essentially, we might not find the 100% answer (it would take way too long), but we are finding the 95% a whole lot faster? Using the case of power cells for example, the difference between a 95% path and a 100% path is probably very minimal given the robot can only intake five. If there were many more nodes the 5% difference would be more pronounced.

Thank you, I have a new understanding for the problem now!

Could you clarify what you mean by this section? I’m not really sure what the approach you’re looking at is.

For sure! Basically, I was trying to envision having two steps:

  1. INPUT: all game objects, represented by (x,y, point value) → OUTPUT: a list of the game objects, sorted with a Traveling-Purchaser algorithm such that the total path has the “minimum combined cost of purchases and traveling” (in this case I believe it would be distance and time).
  2. With the output from the last step (x and y values of all the objects), you create a path with each of the game objects as a waypoint, and you tweak this path so that it is time-optimal for the robot to follow.
    Sorry about the confusion, I think I misunderstood TPP the first time.

Additionally, could you add a time dimension to TPP? More specifically, trying to maximize your points as early as possible. A simple example would be if you had five objects equidistant from you, but they had point values of 1, 2, 3, 4, and 5 respectively. With my understanding of TPP, any path that passes through all five would be weighed equally because the costs of ‘purchasing and travelling’ are the same. How would you modify it so that it knows to go with the 5-4-3-2-1 path? My first guess would be to add a cost along the lines of (time / points) so that it penalizes high times and low point values.
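One way to sketch the ‘time / points’ idea: score an ordering by summing points divided by time-of-acquisition, so early high-value pickups are rewarded, and brute-force the orderings. Both the metric and all the numbers here are guesses for illustration, not a standard TPP formulation:

```python
import itertools
import math

def score_order(order, start, speed=2.0):
    """Sum of points / time-of-acquisition for each piece in order:
    grabbing high values early scores more. Purely an illustrative metric."""
    total, elapsed, pos = 0.0, 0.0, start
    for (x, y), points in order:
        elapsed += math.dist(pos, (x, y)) / speed
        total += points / elapsed
        pos = (x, y)
    return total

# Five objects equidistant from the start (on a unit circle), worth
# 1..5 points, echoing the example above.
pieces = [((math.cos(a), math.sin(a)), pts)
          for a, pts in zip([0.0, 1.3, 2.6, 3.9, 5.2], [1, 2, 3, 4, 5])]
best = max(itertools.permutations(pieces),
           key=lambda order: score_order(order, start=(0.0, 0.0)))
```

Because travel times between pieces differ, the winner is not automatically the descending-points order, which shows why the time dimension has to live in the objective rather than just in the edge costs.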

I wonder if you can recover some of the power of the combinatorial approaches by averaging over some distribution of meshes?

Thanks for the reply, sorry mine took a while! Per Lagrange multipliers, my understanding is that there is a function f(x,y,z…) and a constraint g(x,y,z…) whose critical points you find. If this is correct, then would g be the kinodynamic constraints of the robot (i.e. length, width, wheel size, wheel friction, etc.)? In this case, could we add something like obstacle distances as an additional constraint (making a function representing the robot’s distance to that obstacle over time if it’s moving)?

This method makes me think you can cascade these ‘constraint-obeying’ controllers together. For example: I generate a path with a bar the robot needs to duck under (for example the wheel of fortune in IR). The robot obeys its physical constraints so that the trajectory is time-optimal and the drive doesn’t slip. Additionally, you have a superstructure constraint running in parallel that must duck under all structures (thinking of a certain Israeli team this year :wink: ) that gets activated when the robot is close to an obstacle (so its constraint would be the robot’s distance to any potential obstacles). This would be a bit more complicated if you have to consider the speed of the superstructure, so probably something like if time to obstacle <= time to move * safety factor, drop the structure. Probably can’t use pure distance because you could be very close but outside the wheel of fortune, so maybe this wasn’t the best example… Basically the idea is instead of explicitly stating when the superstructure should move up or down, you set a constraint along the path so it knows itself when to move. Could be extended to an intake that’s not touch and go as well.

Lastly, would love to hear more about this :slight_smile:

Edit: Forgot to hit the reply button, but this is to you @JN25

I’ll reply more later as school is taking a while, but in the meantime, you should look into the non-linear feedback controller “RAMSETE” as a better solution to trajectory following than pure pursuit. It takes into account the x, y, and heading of the robot and trajectory instead of just the x and y like pure pursuit, which leads to far superior performance. I think WPILib added their own implementation of this last year, which might be worth looking into.
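For reference, a sketch of the RAMSETE control law as I understand the formulation WPILib uses; b and zeta are the usual tuning gains, and the exact form is worth double-checking against the WPILib docs before trusting it:

```python
import math

def ramsete(v_d, w_d, e_x, e_y, e_theta, b=2.0, zeta=0.7):
    """One step of the RAMSETE nonlinear trajectory-tracking law.
    (v_d, w_d) is the reference linear/angular velocity from the
    trajectory; (e_x, e_y, e_theta) is the pose error expressed in the
    robot frame. Returns the (v, omega) chassis commands."""
    # Time-varying gain that grows with the reference speeds.
    k = 2.0 * zeta * math.sqrt(w_d ** 2 + b * v_d ** 2)
    # sin(e)/e, with the e -> 0 limit handled explicitly.
    sinc = math.sin(e_theta) / e_theta if abs(e_theta) > 1e-9 else 1.0
    v = v_d * math.cos(e_theta) + k * e_x
    w = w_d + b * v_d * sinc * e_y + k * e_theta
    return v, w

# With zero error, the controller just passes the reference through.
v, w = ramsete(2.0, 0.5, 0.0, 0.0, 0.0)
```

The heading term in w is what pure pursuit lacks: lateral and angular error both feed the turn-rate command, so the robot converges to the pose, not just the (x, y) position.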

