Software Update #3: Flailing With Precision
That double-jointed arm looks pretty fancy. I guess we'll need some code or something to control it. /s
We've been working on controls for the arm for a couple of weeks. As the second most important subsystem on the robot (the drivetrain is sort of critical), we want to make the arm controls as fast and reliable as possible before we get time on the real hardware. To review, our arm has four components to manage:
- A static segment centered on the frame and extending ~25 inches off the ground.
- A shoulder segment with a ~180° range of motion that’s ~27 inches long.
- An elbow segment with a ~270° range of motion that wraps around the bottom and is ~33 inches long. Note that the elbow segment is longer than the shoulder.
- A wrist (gripper) with a ~180° range of motion that extends ~10 inches.
Check out @Matthew3’s post above for more information on the arm’s construction.
Kinematics
Kinematics is the logic that converts between joint angles and the XY position of the end effector. Here are the assumptions we’ve made when defining the arm coordinate system:
- All XY coordinates are aligned with the origin of the robot (centered and on the ground), then projected onto the XZ plane of the robot.
- All angles are CCW+. The shoulder angle is measured relative to “flat” along the ground (for example, straight up is 90°) and the elbow/wrist angles are measured relative to the previous joint.
- To avoid angle wrapping on the elbow joint, it only uses angles from 0° to 360°.
All of our preset positions are defined as XY coordinates with a "global" wrist angle. This means that the angle of the wrist relative to the ground, as defined in the preset, doesn't change based on the positions of the other joints.
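In other words (a one-method sketch, with variable names of our own choosing):

```java
/** Converts a preset's global wrist angle to a joint-relative setpoint (radians). */
public static double wristSetpoint(
    double globalWristAngle, double shoulderAngle, double elbowAngle) {
  // The preset stores the wrist angle relative to the ground, so the
  // joint-relative angle subtracts the angles of the joints before it.
  return globalWristAngle - (shoulderAngle + elbowAngle);
}
```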
The forward kinematics of the arm (converting joint angles to the end effector position) are relatively simple:
(x_0,y_0)=\text{shoulder joint},\quad(x,y)=\text{end effector}
\\
\theta_1=\text{shoulder angle},\quad\theta_2=\text{elbow angle}
\\
L_1=\text{shoulder length},\quad L_2=\text{elbow length}
\\
x=x_0+L_1\cos(\theta_1)+L_2\cos(\theta_1+\theta_2)
\\
y=y_0+L_1\sin(\theta_1)+L_2\sin(\theta_1+\theta_2)
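Here's a minimal sketch of those equations in code (the class and field names are our own, using WPILib's Translation2d for the coordinates):

```java
import edu.wpi.first.math.geometry.Translation2d;

public class ArmKinematics {
  private final Translation2d origin; // Shoulder joint position (x_0, y_0)
  private final double shoulderLength; // L_1
  private final double elbowLength; // L_2

  public ArmKinematics(Translation2d origin, double shoulderLength, double elbowLength) {
    this.origin = origin;
    this.shoulderLength = shoulderLength;
    this.elbowLength = elbowLength;
  }

  /** Converts shoulder and elbow angles (radians, CCW+) to the end effector position. */
  public Translation2d forward(double theta1, double theta2) {
    return new Translation2d(
        origin.getX()
            + shoulderLength * Math.cos(theta1)
            + elbowLength * Math.cos(theta1 + theta2),
        origin.getY()
            + shoulderLength * Math.sin(theta1)
            + elbowLength * Math.sin(theta1 + theta2));
  }
}
```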
Our implementation for inverse kinematics (converting the end effector position to joint angles) is based on the math from this video. There are always two solutions for the inverse kinematics, and we select the appropriate one based on the sign of x. For example, we want to choose the joint angles shown on the left instead of the right when x<0:
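A sketch of that selection logic, using the standard law-of-cosines solution for a two-link arm (this method would live in the ArmKinematics class above, plus a java.util.Optional import; which sign of the elbow angle maps to which configuration depends on the angle conventions):

```java
/** Converts an end effector position to joint angles, or empty if unreachable. */
public Optional<double[]> inverse(Translation2d position) {
  double relX = position.getX() - origin.getX();
  double relY = position.getY() - origin.getY();

  // Law of cosines for the elbow angle. An out-of-range value means the
  // target is beyond the arm's reach.
  double cosTheta2 =
      (relX * relX + relY * relY
              - shoulderLength * shoulderLength
              - elbowLength * elbowLength)
          / (2.0 * shoulderLength * elbowLength);
  if (cosTheta2 < -1.0 || cosTheta2 > 1.0) {
    return Optional.empty();
  }

  // The two solutions differ only in the sign of the elbow angle; choose
  // based on the sign of x relative to the center of the robot.
  double theta2 = (relX < 0.0 ? 1.0 : -1.0) * Math.acos(cosTheta2);
  double theta1 =
      Math.atan2(relY, relX)
          - Math.atan2(
              elbowLength * Math.sin(theta2),
              shoulderLength + elbowLength * Math.cos(theta2));
  return Optional.of(new double[] {theta1, theta2});
}
```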
Feedforward & Feedback
For a double-jointed arm, the position of the elbow joint affects the feedforward voltage of the shoulder (and vice versa). We're using an implementation of @rafi's feedforward model to convert between voltages and the joint positions, velocities, and accelerations. In summary, given the mass (m), length (l), moment of inertia (I), and center of gravity (r) of each segment, the required torque (\tau) can be calculated using an equation of the following form:
\tau=M(\theta)\ddot{\theta}+C(\dot{\theta},\theta)\dot{\theta}+\tau_g(\theta)
Note that the positions (\theta), velocities (\dot{\theta}), accelerations (\ddot{\theta}), and torques (\tau) are expressed as vectors with one entry per joint. To convert between torque and voltage, we use WPILib's DCMotor class; this can account for the number, type, and gearing of the motors for each joint.
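For example, the torque-to-voltage conversion for one joint might look like this (the motor type and reduction are placeholders, not our actual gearbox):

```java
import edu.wpi.first.math.system.plant.DCMotor;

// Two NEOs on the shoulder behind a 100:1 reduction (placeholder values).
DCMotor shoulderMotors = DCMotor.getNEO(2).withReduction(100.0);

// "torque" is the shoulder entry of the tau vector (Nm) and "velocity" is
// the joint velocity (rad/s); the result is the voltage to apply.
double volts = shoulderMotors.getVoltage(torque, velocity);
```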
In addition to the feedforward model, the shoulder and elbow joints are running relatively simple PD controllers to reduce error in positioning. Our tests in simulation suggest that this is able to effectively deal with unmodeled disturbances like the mass of a game piece. Ultimately, we hope that the feedback controllers are similarly able to absorb any discrepancies between the theoretical physics of the arm (based on the CAD) and the true physics constants.
Of course, the arm is dealing with a wrist in addition to the shoulder and elbow handled by the feedforward model. Given its short length and low mass relative to the other segments, we assume that the wrist is always at 0° for the dynamics calculations; the code merges the physics constants of the elbow and wrist segments for this use case. To reach its setpoint, the wrist is running a simple trapezoidal profile using WPILib's ProfiledPIDController class.
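Setting that up looks roughly like this (the gains and constraints below are placeholders rather than our tuned values):

```java
import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;
import edu.wpi.first.math.util.Units;

// Trapezoidal profile constraints and PID gains (placeholder values).
ProfiledPIDController wristController =
    new ProfiledPIDController(
        6.0, 0.0, 0.1,
        new TrapezoidProfile.Constraints(
            Units.degreesToRadians(180.0), // Max velocity (rad/s)
            Units.degreesToRadians(360.0))); // Max acceleration (rad/s^2)

// Each cycle, drive the wrist toward the goal along the profile.
double volts = wristController.calculate(wristAngleRads, goalAngleRads);
```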
Simulation & Visualization
Based on the same physics model, we can convert the applied voltage back into torque, and therefore acceleration. This enables us to run a "physically accurate" simulation of the arm where the joints interact realistically. We're using WPILib's numerical integration functions to calculate the joint positions and velocities.
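The update step looks something like this (calculateDynamics, currentState, and appliedVoltages are hypothetical methods; the first inverts the feedforward model to produce accelerations):

```java
import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N2;
import edu.wpi.first.math.numbers.N4;
import edu.wpi.first.math.system.NumericalIntegration;

// The state is [theta1, theta2, omega1, omega2] and the input is the two
// joint voltages. calculateDynamics returns the state derivative
// [omega1, omega2, alpha1, alpha2] from the physics model.
Matrix<N4, N1> state = currentState();
Matrix<N2, N1> input = appliedVoltages();
Matrix<N4, N1> nextState =
    NumericalIntegration.rk4(this::calculateDynamics, state, input, 0.02);
```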
To visualize the movement of the simulated arm, we're using AdvantageScope's new support for 3D mechanisms based on the current version of the robot CAD. The robot code just logs a series of 3D poses indicating the position of each joint, along the lines of the sketch below.
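Assuming an AdvantageKit-style logging call (the key and pose variables here are illustrative):

```java
import edu.wpi.first.math.geometry.Pose3d;
import org.littletonrobotics.junction.Logger;

// Log one 3D pose per moving component; AdvantageScope attaches the CAD
// model of each segment to the corresponding pose.
Logger.getInstance()
    .recordOutput("Arm/Mechanism3d", shoulderPose, elbowPose, wristPose);
```

As an example of simulation in action, we can position the arm above the robot and then drop it. Notice how the movements of each joint affect each other, like the elbow falling both backward and forward.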
This video also demonstrates the limitations of this simulation approach. Notably, there is no simulation of collision with the intake (or the ground). The wrist is also simulated as a single jointed arm with no gravity, so it doesn’t match the motions of the other joints. However, the simulation is still useful as a proof of concept for the control system. For example, we can experiment with adding forces to the simulation that aren’t modeled by the feedforward model (as mentioned previously).
Trajectories & Optimization
A feedforward model is all well and good, but we’re still missing a critical puzzle piece: How do we plan the arm’s motion from point A to B? Manual control is useful as a fallback or for fine adjustments, but software-driven trajectory following will almost always provide better results for bigger movements. There are several approaches to this problem that we considered:
- Trapezoidal profiles on each joint: This is a simplistic approach that’s effective in some cases, but its key disadvantage is that it disregards the intermediate positions of the arm. There are lots of obstacles that we need the arm to avoid, like the robot body, intakes, and nodes. Due to the risk of unplanned self-destruction, we quickly eliminated this approach.
- Pregenerated splines: Instead, we could define smooth splines between all of the important arm presets in the XY coordinate space. This is better, but still has some issues. It requires significant work to manually write and test these paths, and the pregenerated approach makes it difficult to maneuver smoothly from unknown arm positions (for example, if the operator makes a manual adjustment while scoring). More fundamentally, this approach prioritizes smooth paths over fast paths; the fastest route from A to B may not be a direct path in the XY coordinate space.
- A* + splines (or similar): This addresses many of the issues of pregenerated splines, particularly if the path finding is handled in joint coordinate space instead of XY. Obstacle avoidance can be handled on-the-fly and the generated paths are smooth. However, simple splines still ignore the true dynamics of the arm, leaving potential performance on the table.
Ultimately, our goal is to follow paths that will always move from point A to B in the shortest time that the mechanism will allow. We also have a feedforward model that predicts the required voltage for any potential movement, and that voltage can never exceed the 12 volts available from the battery. To solve this problem, we turned to numerical optimization.
Numerical optimization libraries like CasADi attempt to minimize a function given a set of variables and constraints. This is a generic technique that's applicable in lots of situations, including path planning for drivetrains and arms. Our problem formulation for CasADi looks something like this (sketched more formally after the list):
- Solve for 25 discrete sets of joint angles forming a path between the given start and end angles.
- Minimize the total time of the arm’s trajectory.
- Keep the applied voltage at every point under 10 volts (some margin is required for the feedback controllers).
- Keep the end effector position away from defined XY regions (the ground, robot body, intakes, nodes, etc).
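Written out roughly (our paraphrase with n=25 points, where the voltage at each point comes from evaluating the feedforward model with finite-difference velocities and accelerations):

\underset{\theta^{(1)},\dots,\theta^{(n)},\,\Delta t}{\text{minimize}}\quad(n-1)\,\Delta t
\\
\text{subject to}\quad\theta^{(1)}=\theta_{\text{start}},\quad\theta^{(n)}=\theta_{\text{end}}
\\
|V_j^{(i)}|\le10\text{ V for each point }i\text{ and joint }j
\\
(x^{(i)},y^{(i)})\text{ outside every avoidance region}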
Here are a few examples of trajectories CasADi has generated:
Avoiding obstacles

This is an example of an extreme obstacle in the middle of the optimal path, but the trajectory is able to avoid it effectively. On the robot these obstacles will include the robot frame, intakes, and nodes.
Taking advantage of momentum

On this path, the shoulder moves back and forth to keep the elbow centered for as long as possible while building momentum. It can accelerate faster when the arm isn’t cantilevered out towards the final target.
Lifting with the shoulder

When the position constraints allow for it, it’s optimal to lift the arm with the two shoulder motors instead of the single elbow motor. Notice that the elbow is assisted by gravity on the way down such that it “falls” into its final angle, then the more powerful shoulder joint lifts the full mechanism.
Kairos
The downside of numerical optimization for trajectories compared to simpler techniques is that it is far more computationally intensive. While many of the paths can be generated ahead of time and cached, we still need to generate trajectories on the fly from unknown starting positions. To solve this issue, we're taking advantage of the four Orange Pis on the robot that were originally intended to run Northstar (AprilTag tracking).
We’ve written a Python wrapper around CasADi for arm trajectory generation, which we’re calling Kairos. Kairos runs on all of the Orange Pis and communicates with the RIO using NetworkTables. When a new trajectory is required, the RIO sends a request over NT to the Kairos instances. Within ~250 ms they will send the generated paths back to the RIO. Running Kairos on all four Pis is simply for redundancy (though perhaps a little overkill).
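The request/response exchange looks something like this on the RIO side (the topic names and request encoding here are illustrative, not the actual Kairos protocol):

```java
import edu.wpi.first.networktables.DoubleArraySubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.StringPublisher;

NetworkTableInstance nt = NetworkTableInstance.getDefault();
StringPublisher request = nt.getStringTopic("/kairos/request").publish();
DoubleArraySubscriber result =
    nt.getDoubleArrayTopic("/kairos/result").subscribe(new double[] {});

// Send the start and end joint angles; all four Kairos instances solve the
// same problem, so duplicate responses are harmless and the fastest wins.
request.set("{\"start\": [1.571, 3.141], \"end\": [0.524, 4.712]}");

// Later, poll for the generated trajectory.
double[] trajectoryPoints = result.get();
```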

One of our key concerns when running the arm controls across multiple devices is keeping the configuration data (e.g. physics constants) in sync. We don’t want Kairos generating trajectories that are infeasible for the RIO to follow. While we generally prefer to keep constants in source code, we made an exception in this case by moving the arm configuration to a JSON file. The contents of the JSON can be sent directly over NT to Kairos, so there is only ever one source of truth for the arm constants.
Intake Avoidance
Smashing the intakes is bad, apparently. They’re also conveniently positioned directly in the optimal path for the arm:
For now, our intake avoidance scheme is to create avoidance regions around each intake. If the end effector (anywhere from the wrist to the tip of the gripper) enters one of those zones, the appropriate intake is deployed automatically. When following a trajectory, we also "look ahead" by 250 ms and deploy the intake if the arm is about to enter one of the avoidance regions, as sketched below.
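Here's a minimal sketch of that look-ahead check (the trajectory, region, and intake APIs shown are hypothetical stand-ins for our actual classes):

```java
// Sample the joint angles now and 250 ms in the future, then convert both
// to end effector positions with the kinematics class from earlier.
double[] current = trajectory.sample(timer.get());
double[] future = trajectory.sample(timer.get() + 0.25);
Translation2d currentPos = kinematics.forward(current[0], current[1]);
Translation2d futurePos = kinematics.forward(future[0], future[1]);

// Deploy the intake if the end effector is in (or about to enter) its region.
if (intakeRegion.contains(currentPos) || intakeRegion.contains(futurePos)) {
  intake.deploy();
}
```

Here's an example of the arm moving between several preset positions on both sides of the robot. All of the trajectories seen here are generated by Kairos and followed using the full simulation and feedforward models: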
One of our next areas for improvement will be reducing the number of unnecessary multi-side extensions, as these are not legal outside the community or loading zone. However, we expect that these sorts of rapid movements from one side of the robot to the other will be rare in practice as the arm will automatically retract to center when we aren’t actively intaking or scoring.
Code Links
- Arm subsystem (here)
- Arm kinematics class (here)
- Arm dynamics (feedforward & simulation) class (here)
- Arm simulator implementation (here)
- Kairos source code (here)
- Kairos solver implementation with CasADi (here)
Thanks again to @rafi for the feedforward model on which much of this work is based. We hope that this may prove useful to other teams attempting to wrangle their multi-jointed arms. As always, we’re happy to answer any questions.
-Jonah
