Auto-scoring with Dynamic Pathfinding and Inverse Kinematics

The Demo

A few things to note before the demo:

This system is controlled by many parameters, most of which are still untuned. Any overshoot, slight clipping, or error on the path you see comes from the PID controllers and the custom path-generation parameters, which will be tuned once this is running on the real bot.

Breakdown

I’ve put a lot of work into getting all of this running. It’s a completely custom implementation of the A* pathfinding algorithm, Pure Pursuit, and inverse kinematics, all coming together with a modified version of AdvantageScope as a dashboard.
This demo features click-to-go pathfinding, where the driver can click a point on the field and the bot will pathfind to it, and auto-scoring, where drivers can pick a scoring node and the bot will pathfind to it and score there.

After the season ends, I may write a paper or at least a more detailed explanation for anyone interested. For now, if you’re curious about anything specific, feel free to ask. Sorry for the length here, but there’s a lot to cover.

The dashboard is a fork of AdvantageScope that we modified. It uses raycasting to get the click position on the 3D field, then publishes it as the pathfinding target over NetworkTables.
The scoring-node display is a custom tab I added to AdvantageScope that supports showing and setting which nodes have been scored. When a node is clicked, that information is sent over NetworkTables.
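
For reference, this is roughly what the robot-side end of that handoff can look like. This is only a minimal sketch of reading a click target from NetworkTables in WPILib Java; the table and entry names below are hypothetical, not the actual keys our fork publishes.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

/** Sketch of the robot-side listener for a click-to-go target.
 *  The table and entry names here are made up for illustration. */
public class ClickTargetListener {
  private final NetworkTable table =
      NetworkTableInstance.getDefault().getTable("Pathfinding");

  /** Returns the most recent field-relative target published by the dashboard,
   *  or null if the dashboard hasn't sent one yet. */
  public double[] getTargetMeters() {
    double x = table.getEntry("targetX").getDouble(Double.NaN);
    double y = table.getEntry("targetY").getDouble(Double.NaN);
    if (Double.isNaN(x) || Double.isNaN(y)) {
      return null;
    }
    return new double[] {x, y};
  }
}
```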

This demo was done using the estimated position from an AprilTag localizer, which had a lot of noise injected into it for the sake of testing. In real life, the localizer will likely be more accurate with less jitter.
In addition, other programming members are currently training and testing vision models (along with some supporting math) to detect and range-find cones, cubes, and robots, which can then be plugged into the pathfinder.

For auto-scoring, pathfinding is used to get to the node, and once the robot is close enough to the scoring nodes a custom inverse kinematics routine I wrote determines the target angles and extension of our arm. Right now, the simulator just shows this as a Mechanism2d and the real motors aren’t being simulated, which is why the arm jumps around.
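
As a rough illustration of the kind of math involved (not our actual routine), here is a minimal IK sketch for a single pivoting, telescoping arm. The class name, geometry constants, and coordinate conventions are all assumptions made for the example.

```java
/** Minimal sketch of inverse kinematics for a pivoting, telescoping arm.
 *  Assumes a single pivot at a fixed height with an extending segment; the
 *  constants and names here are illustrative, not the team's actual values. */
public class ArmIK {
  private static final double PIVOT_HEIGHT_METERS = 0.9; // hypothetical pivot height
  private static final double MIN_LENGTH_METERS = 0.7;   // arm length when fully retracted

  /** Returns {pivot angle (rad), extension (m)} for a robot-relative target point. */
  public static double[] solve(double targetXMeters, double targetZMeters) {
    // Work in the pivot's frame.
    double dx = targetXMeters;
    double dz = targetZMeters - PIVOT_HEIGHT_METERS;

    double pivotAngleRad = Math.atan2(dz, dx);          // rotate the arm toward the target
    double reach = Math.hypot(dx, dz);                  // straight-line distance to the target
    double extensionMeters = reach - MIN_LENGTH_METERS; // how far the telescope must extend

    return new double[] {pivotAngleRad, Math.max(0.0, extensionMeters)};
  }
}
```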

Path generation & following breakdown

  1. The field is stored as a JSON file with a vertex table and an edge table.
  2. Pathfinding runs on a co-processor, but in case it goes down (or I don’t want to start it up for testing) there is a backup that will run on the roboRIO, just slower. Communication is handled via Java-WebSocket.
  3. When the co-processor starts up, the field is loaded from the JSON. Using a custom vector and vertex class, an inflation process grows each real obstacle according to a controllable parameter, which makes it easy to adjust for different robots or if we notice clipping. This inflation is a custom routine I wrote. Additionally, a duplicate set of vertices is created that is slightly more inflated than the first.
  4. Next, the co-processor calculates every line of sight between vertices in the extra-inflated set described above. This is done by creating a vector between each pair of vertices and using the dot product to check for intersection with the vectors between the inflated obstacle points, producing a “visibility graph.” (A sketch of this kind of check appears after this list.)
  5. When it’s time to generate a path, a custom implementation of the A* algorithm is used. I won’t go into detail on it here, but it uses a straight-line distance function derived from the Pythagorean theorem; a minimal sketch appears after this list.
  6. Once the nodes for the shortest path are selected, they are converted into a custom path structure I’ve created. This is done with a custom implementation of Bézier curves (built on my custom vectors) at the corners, plus many points injected along the straight lines (see the Bézier sketch after this list).
  7. The path is then followed by a simplified version of the Pure Pursuit algorithm. Since so many points are injected into the path, there is no need for the circular-intersection function usually used to determine the look-ahead point; instead, I just use the farthest path point inside the look-ahead circle (see the path-following sketch after this list).
  8. A vector is created from the robot to the look-ahead point, which is passed to my low-level velocity controller. This controller takes a vector in world space, rotates it into the robot’s local frame, and calculates the wheel speeds required to move in that direction.
  9. For paths longer than roughly a meter, the rotation needed for the bot to drive aligned with the path is calculated, since driving parallel with our two side motors is faster than driving purely laterally. However, if needed, the bot can follow the path completely independently of rotation; this may be used to allow scoring while moving in the near future.
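
For anyone curious what the line-of-sight test in step 4 can look like: the sketch below uses the common cross-product orientation test for segment intersection rather than the dot-product check I described, purely as an illustration. The names and data layout are made up.

```java
/** Illustrative line-of-sight test for building a visibility graph.
 *  Uses the standard cross-product orientation test for segment intersection;
 *  collinear edge cases are ignored for brevity. */
public class Visibility {
  /** 2D cross product of (b - a) and (c - a); the sign gives the turn direction. */
  private static double cross(double ax, double ay, double bx, double by, double cx, double cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  }

  /** True if segment p1-p2 strictly crosses segment q1-q2. */
  static boolean segmentsIntersect(double[] p1, double[] p2, double[] q1, double[] q2) {
    double d1 = cross(q1[0], q1[1], q2[0], q2[1], p1[0], p1[1]);
    double d2 = cross(q1[0], q1[1], q2[0], q2[1], p2[0], p2[1]);
    double d3 = cross(p1[0], p1[1], p2[0], p2[1], q1[0], q1[1]);
    double d4 = cross(p1[0], p1[1], p2[0], p2[1], q2[0], q2[1]);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
  }

  /** Two vertices have line of sight if the segment between them crosses no obstacle edge. */
  static boolean hasLineOfSight(double[] a, double[] b, double[][][] obstacleEdges) {
    for (double[][] edge : obstacleEdges) {
      if (segmentsIntersect(a, b, edge[0], edge[1])) {
        return false;
      }
    }
    return true;
  }
}
```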
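
Step 5’s A* over the visibility graph, as a minimal sketch. The graph representation, node ids, and method names are assumptions for the example; the heuristic is the straight-line Pythagorean distance mentioned above.

```java
import java.util.*;

/** Minimal A* sketch over a visibility graph; not the team's actual code. */
public class AStar {
  /** positions[i] = {x, y}; neighbors.get(i) = ids of nodes visible from node i. */
  public static List<Integer> findPath(double[][] positions, List<List<Integer>> neighbors,
                                       int start, int goal) {
    double[] gScore = new double[positions.length];
    Arrays.fill(gScore, Double.POSITIVE_INFINITY);
    gScore[start] = 0.0;
    int[] cameFrom = new int[positions.length];
    Arrays.fill(cameFrom, -1);

    // Frontier ordered by f = g + h, where h is the straight-line distance to the goal.
    PriorityQueue<double[]> open =
        new PriorityQueue<>(Comparator.comparingDouble((double[] entry) -> entry[1]));
    open.add(new double[] {start, dist(positions[start], positions[goal])});

    while (!open.isEmpty()) {
      int current = (int) open.poll()[0];
      if (current == goal) {
        return reconstruct(cameFrom, goal);
      }
      for (int next : neighbors.get(current)) {
        double tentative = gScore[current] + dist(positions[current], positions[next]);
        if (tentative < gScore[next]) {
          gScore[next] = tentative;
          cameFrom[next] = current;
          open.add(new double[] {next, tentative + dist(positions[next], positions[goal])});
        }
      }
    }
    return List.of(); // no path found
  }

  private static double dist(double[] a, double[] b) {
    return Math.hypot(a[0] - b[0], a[1] - b[1]); // Pythagorean distance
  }

  private static List<Integer> reconstruct(int[] cameFrom, int goal) {
    LinkedList<Integer> path = new LinkedList<>();
    for (int node = goal; node != -1; node = cameFrom[node]) {
      path.addFirst(node);
    }
    return path;
  }
}
```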
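
Step 6, sketched: a quadratic Bézier sampled at a corner, plus evenly spaced points injected along a straight segment. Control-point placement, sample counts, and spacing are illustrative choices, not our tuned values.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of smoothing a corner with a quadratic Bezier curve and injecting
 *  points along straight segments; parameters are illustrative. */
public class PathConstruction {
  /** Samples a quadratic Bezier from p0, through control point p1, to p2. */
  static List<double[]> bezierPoints(double[] p0, double[] p1, double[] p2, int samples) {
    List<double[]> points = new ArrayList<>();
    for (int i = 0; i <= samples; i++) {
      double t = (double) i / samples;
      double u = 1.0 - t;
      // B(t) = (1 - t)^2 * p0 + 2(1 - t)t * p1 + t^2 * p2
      double x = u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0];
      double y = u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1];
      points.add(new double[] {x, y});
    }
    return points;
  }

  /** Injects evenly spaced points along a straight segment from a to b. */
  static List<double[]> injectPoints(double[] a, double[] b, double spacingMeters) {
    List<double[]> points = new ArrayList<>();
    double length = Math.hypot(b[0] - a[0], b[1] - a[1]);
    int count = Math.max(1, (int) Math.ceil(length / spacingMeters));
    for (int i = 0; i <= count; i++) {
      double t = (double) i / count;
      points.add(new double[] {a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])});
    }
    return points;
  }
}
```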
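
Steps 7 and 8, sketched together: picking the farthest injected point inside the look-ahead circle, then rotating the resulting field-relative vector into the robot frame. Names and structure are assumptions, and the actual wheel-speed math is drivetrain-specific, so it is omitted here.

```java
/** Sketch of the simplified look-ahead selection and the world-to-robot-frame
 *  rotation described in steps 7-8; names are illustrative. */
public class PathFollowing {
  /** With densely injected points, just take the farthest path point still
   *  inside the look-ahead circle (no circle-line intersection needed). */
  static double[] lookaheadPoint(double[][] path, double robotX, double robotY,
                                 double lookaheadMeters) {
    double[] best = path[path.length - 1]; // fall back to the final point
    for (double[] point : path) {
      double distance = Math.hypot(point[0] - robotX, point[1] - robotY);
      if (distance <= lookaheadMeters) {
        best = point; // points are ordered, so the last one inside the circle is farthest along
      }
    }
    return best;
  }

  /** Rotates a field-relative velocity vector into the robot's local frame. */
  static double[] worldToRobot(double vxField, double vyField, double robotHeadingRad) {
    double cos = Math.cos(-robotHeadingRad);
    double sin = Math.sin(-robotHeadingRad);
    return new double[] {
      vxField * cos - vyField * sin,
      vxField * sin + vyField * cos
    };
  }
}
```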

This is further progress on this earlier demo, from before auto-scoring was added.

