Ah, sorry, I misinterpreted. That capability isn’t really needed for us because of the handoff design shown in the prior post. It is true that with an intake like this, if you are too far off to the side, it is more of a “touch it, throw it away” intake. Hopefully we won’t really be using this end effector for the first interaction with cubes in a match, however.
While the second link is longer than the first, it is shorter than the sum of the first link and the height at which the first joint is mounted above the frame. In the GIF below I’ve put together a basic visualization of the possible motion, although in practice the first link doesn’t need to remain vertical throughout.
You can play with it in the “Crayola CAD” part studio to see that adjusting the angle of the “wrist” and the first link allows some level of collision avoidance with intakes that fall in their path.
Here is a screenshot of the “front” view of the arm which may aid in visualizing how that works, though I would note that the widths between each of these stages is still being tinkered with to package each joint while leaving room for the end effector.
I hope this gives a better handle on our thinking and methodology for the arm. We’re happy to answer any other follow-up questions!
The 6328 Scouting Experience:
System Overview
We have been using a custom scouting system since 2019 to collect and analyze data at events, and it has been an integral part of our success, especially this past season. We use a custom app, which we run on Kindle Fire HD tablets, to collect data and upload it to a laptop over Bluetooth. This data gets stored in a SQLite database. We then post-process the data with an Alteryx workflow to sanitize it. Finally, we use Tableau and Google Sheets to visualize the data for match strategy and alliance selection.
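As a rough sketch of what the storage step on the laptop side might look like (the table layout, column names, and values here are purely illustrative, not our actual schema):

```python
import sqlite3

# Hypothetical schema sketch -- the table and columns are invented for
# illustration, not 6328's actual database layout.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS match_scout (
        event TEXT, match_num INTEGER, team INTEGER,
        scout TEXT, data TEXT  -- raw JSON payload from the tablet
    )"""
)
conn.execute(
    "INSERT INTO match_scout VALUES (?, ?, ?, ?, ?)",
    ("2023nhgrs", 12, 6328, "scout_a", '{"mobility": true}'),
)
rows = conn.execute(
    "SELECT team, data FROM match_scout WHERE match_num = 12"
).fetchall()
```

From here, a downstream tool (Alteryx in our case) can read the database directly for sanitization and analysis.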
Switch to Svelte
Our offseason project was to migrate the front end of our app from a plain JavaScript canvas to Svelte in an effort to modernize the app. We also took advantage of Tailwind CSS utility classes and the DaisyUI component library to improve our UI design. This switch has drastically improved our development experience while also cutting down on our development time.
App
UI Design
In past years we used a layout that mimicked the field. However, this year we opted for simplicity and functionality. We realized that with the field-based layout, components moved around based on the alliance color and field orientation, leading to an inconsistent experience for scouts. This year, every element is in the same place no matter which alliance is being scouted.
New UI
Old UI
Here is a walkthrough of the entire app:
The auto section consists of two pages. Scouts begin on the “pre-match” page, where they select the configuration of the starting game pieces and robot location before the match.
The start button takes scouts to the “Auto” page, where scouts enter robot actions in auto, including nodes scored, charge station interactions, and mobility. Note that the engaged button is disabled by default. This is because a robot cannot be engaged without first being docked.
The “Teleop” page is set up similarly to the “Auto” page, but adds options for game piece pickup from the single and double substations in addition to the floor.
The post game section contains various data fields that scouts fill out after the end of the match, including ratings and comments.
Admin Page
Our admin page serves as the “control center” of the scouting system during an event. From the admin page, the scouting systems lead can configure who is present in the stands, assign team preferences to scouts, monitor the status of the Kindles, and monitor the upload status of data for each match.
What data do we collect?
Below is a list of all the data we collect through our app.
Match Scout:
Starting Location
Starting Game Piece
Starting Midline Configuration
Mobility
Scoring Location
- Includes scoring both game pieces at all 3 levels, as well as an option for dropped pieces
Pickup Location
- Includes ground, single substation, and double substation
Charging Station
Disabled/Fell
Ratings
- Driver rating
- Intake ratings for both game pieces
- Defense rating
- Time played defense
- Under defense rating
- Time under defense
Ranking Points
Comments
Penalties
Pit Scout:
What drivetrain they use
Multiple Drive Teams?
Dimensions
Build Quality
- A mostly subjective rating that indicates how well we think the robot is built
Programming Language
Whether they use a gyro for control
Picture of the robot
If you have any questions feel free to post here or shoot me a PM.
Thank you so much!
Wow this is amazing. I’ve been working on a similar form and honestly it can’t compare to the simplicity of the UI on this one
CAD Update:
CAD progress has gotten back on track for the most part. The arm and cube intake are almost finalized and the end-manipulator and cone intake are nearing completion. Packaging and integration of all of the subsystems has continued to be our biggest challenge with the small frame perimeter and our desire to pass the arm through itself. Many changes have been related to motor mounting, strengthening the joints, and accounting for out of stock parts.
The addition of a custom gearbox:
Due to the limited availability of MAXPlanetaries and belly pan space constraints, we have re-evaluated the way we are powering the first joint. The gearbox will be placed low in the belly pan, powering a dead-axle MAXSpline that is chained to the two first joints. This has a few major advantages over our previous approach: better packaging, mechanically linking the two first joints, limited stick-out into the middle of the robot (making it easier to pass through), and eliminating several MAXPlanetary cartridges.
The two bottom NEOs power the MAXSpline that crosses the width of the robot, while the MAXPlanetary NEO powers the joint of the cube intake (more on that later).
The first joint:
Powered by the gearbox above, the first joint is a live MAXSpline driven by chain from below. Our limited supply of MAXSpline shaft collars didn’t constrain this joint, but at this point we have used all six of our shaft collars between the gearbox and the first joint.
The second joint:
Here is another example of where the desire to pass through has greatly influenced the design of the joint. The joint is powered by a MAXPlanetary mounted on the second arm link, pivoting around a stationary MAXSpline. On the other side, the encoder will be mounted on the first link and rotate off of the dark blue MAXSpline.
The wrist pivot (third joint):
Here we are finally using just a dead axle, with the motor mounted down the second link next to the second-joint pivot motor. It was placed there to simplify the electrical routing. There wasn’t much concern about making the wrist able to pass through the gap, since we can just swing underneath.
Cube Intake:
The cube intake is a basic roller, demonstrated with the prototypes, feeding into the end effector. The current plan is to have a polycarb roller wrapped with silicone tape to make it grippy; it will be powered by a Redline on a VersaPlanetary geared down to the roller. It has a front beater bar to prevent the roller from spinning directly into things. It is mounted with polycarb plates to the A-frame of the arm so it is slightly compliant in the event of a collision. It will be pivoted by the MAXPlanetary in the arm gearbox through two belt reductions. Not pictured here, but seen in our previous posts, are the thin polycarb flaps that direct the cube inward to the end effector.
Cone Intake:
We have decided on using the auger intake mentioned in prototyping posts above. It will be on a virtual 4 bar so the intake doesn’t stick into the inside of the robot when in the stowed position. We did this by standing off a sprocket on the mounting plate, and the side plates with the rollers. There will be a virtual 4 bar on each side to provide more stability. The mounting points are on the support tubes for the jointed arm. The cone intake is our last priority for our first event because motor mounting/packaging is going to be difficult and we feel that picking up only standing cones with the end effector would still make us competitive.
Overall, the dual-intake design is creating some very interesting packaging challenges, as we are only using a 25” square frame, but we feel it will likely work well with the double-jointed arm being able to pass through itself. There will definitely need to be some collision avoidance, which we plan to account for in software/sensors.
End Effector:
We have a model of an end effector now, but it will likely go through some further changes today. These include: putting the belts between the compliant wheels instead of on the top and bottom, adjusting the placement of the motor and gears, and lightening the plates. Because of the MAXSpline and dead-axle tube, we unfortunately need a larger plate to fit mounting tubes on the sides.
Here is the link to the document: https://cad.onshape.com/documents/7b17c8664d1313c397a0fcf3/w/61b5c8329f7f5c6023f50c77/e/e97dd74a09392a7894b2cfe3
Feel free to reach out with any questions.
-Matthew and @Advaith
Your arm design looks good, but I have some concerns that I’m interested in hearing if you are going to address or have already addressed. Just based on the design, it feels like the arm is going to end up being very floppy, or at least prone to side to side movement, even with the collars and other devices you have used to lock it down. Have you considered adding any other supports specifically designed to keep the arm rotating in its intended directions? (Maybe two disks on both sides of the joint, meant to prevent the arm from flexing too much to the left or right?)
Software Update #3: Flailing With Precision
That double jointed arm looks pretty fancy. I guess we’ll need some code or something to control it. /s
We’ve been working on controls for the arm for a couple of weeks. As the second most important subsystem on the robot (the drivetrain is sort of critical), we want to make sure to get the arm controls as fast and reliable as possible before needing to test on the real hardware. To review, our arm has four components to manage:
- A static segment centered on the frame and extending ~25 inches off of the ground.
- A shoulder segment with a ~180° range of motion that’s ~27 inches long.
- An elbow segment with a ~270° range of motion that wraps around the bottom and is ~33 inches long. Note that the elbow segment is longer than the shoulder.
- A wrist (gripper) with a ~180° range of motion that extends ~10 inches.
Check out @Matthew3’s post above for more information on the arm’s construction.
Kinematics
Kinematics is the logic that converts between joint angles and the XY position of the end effector. Here are the assumptions we’ve made when defining the arm coordinate system:
- All XY coordinates are aligned with the origin of the robot (centered and on the ground), then projected onto the XZ plane of the robot.
- All angles are CCW+. The shoulder angle is measured relative to “flat” along the ground (for example, straight up is 90°) and the elbow/wrist angles are measured relative to the previous joint.
- To avoid angle wrapping on the elbow joint, it only uses angles from 0° to 360°.
All of our preset positions are defined at XY coordinates with a “global” wrist angle. This means that the angle of the wrist relative to the ground as defined in the preset doesn’t change based on the positions of the other joints.
The forward kinematics of the arm (converting joint angles to the end effector position) are relatively simple:
Our implementation for inverse kinematics (converting the end effector position to joint angles) is based on the math from this video. There are always two solutions for the inverse kinematics, and we select the appropriate one based on the sign of x. For example, we want to choose the joint angles shown on the left instead of the right when x<0:
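As a minimal sketch of the two-link math (the lengths, origin height, function names, and the exact branch-selection convention here are illustrative, not our actual code):

```python
import math

# Two-link planar arm kinematics sketch. Lengths and origin height are
# placeholder values, not the real robot's constants.
L1 = 27 / 39.37  # shoulder segment length, meters (approx)
L2 = 33 / 39.37  # elbow segment length, meters (approx)
ORIGIN = (0.0, 25 / 39.37)  # top of the static segment

def forward(theta1, theta2):
    """Joint angles (rad, CCW+, elbow relative to shoulder) -> end position."""
    x = ORIGIN[0] + L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = ORIGIN[1] + L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """End position -> joint angles, picking a solution branch by sign of x."""
    dx, dy = x - ORIGIN[0], y - ORIGIN[1]
    c2 = (dx * dx + dy * dy - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    if x > 0:
        theta2 = -theta2  # flip to the mirrored solution on the +x side
    theta1 = math.atan2(dy, dx) - math.atan2(
        L2 * math.sin(theta2), L1 + L2 * math.cos(theta2)
    )
    return theta1, theta2
```

Both solution branches land the end effector in the same place, so a forward-then-inverse round trip returns the same XY position.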
Feedforward & Feedback
For a double jointed arm, the position of the elbow joint affects the feedforward voltage of the shoulder (and vice versa). We’re using an implementation of @rafi’s feedforward model to convert between voltages and the joint positions, velocities, and accelerations. In summary, given the mass (m), length (l), moment of inertia (I), and center of gravity (r) of each joint, the required torque (\tau) can be calculated using an equation of the following form:
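For reference, the equation takes the standard manipulator form, where M is the inertia matrix, C captures the Coriolis and centrifugal terms, and G is the gravity torque (this is just the general structure; see the linked post for the full expansion of each matrix):

```latex
\tau = M(\theta)\,\ddot{\theta} + C(\theta, \dot{\theta})\,\dot{\theta} + G(\theta)
```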
Note that the positions (\theta), velocities (\dot{\theta}), accelerations (\ddot{\theta}), and torques (\tau) are expressed as vectors of the two joints. To convert between torque and voltage, we use WPILib’s DCMotor class; this can account for the number, type, and gearing of the motors for each joint.
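The underlying torque-to-voltage bookkeeping is just the DC motor model. A sketch of what the DCMotor class handles for us (the motor constants below are rough NEO-like values and the motor counts/reductions are illustrative, not exact):

```python
# DC motor model constants (illustrative, roughly NEO-like)
NOMINAL_VOLTAGE = 12.0
STALL_TORQUE = 2.6     # N*m per motor at 12 V
STALL_CURRENT = 105.0  # A
FREE_SPEED = 594.0     # rad/s

R = NOMINAL_VOLTAGE / STALL_CURRENT   # winding resistance (ohms)
KT = STALL_TORQUE / STALL_CURRENT     # torque constant (N*m per amp)
KV = FREE_SPEED / NOMINAL_VOLTAGE     # velocity constant (rad/s per volt)

def torque_to_voltage(joint_torque, joint_speed, num_motors=2, gearing=100.0):
    """Voltage to produce joint_torque (N*m) at joint_speed (rad/s)."""
    # Each motor sees the joint torque divided by motor count and reduction
    motor_torque = joint_torque / (num_motors * gearing)
    current = motor_torque / KT
    # Back-EMF is proportional to motor (not joint) speed
    back_emf = (joint_speed * gearing) / KV
    return current * R + back_emf
```

Holding a torque at rest only has to overcome winding resistance; any joint motion adds a back-EMF term on top.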
In addition to the feedforward model, the shoulder and elbow joints are running relatively simple PD controllers to reduce error in positioning. Our tests in simulation suggest that this is able to effectively deal with unmodeled disturbances like the mass of a game piece. Ultimately, we hope that the feedback controllers are similarly able to absorb any discrepancies between the theoretical physics of the arm (based on the CAD) and the true physics constants.
Of course, the arm is dealing with a wrist in addition to the shoulder and elbow handled by the feedforward model. Given its relatively small range of motion compared to the other joints, we assume that the wrist is always at 0° for the dynamics calculations. The physics constants for the elbow and wrist segments are merged by the code for this use case. To reach its setpoint, the wrist is running with a simple trapezoidal profile using WPILib’s ProfiledPIDController class.
Simulation & Visualization
Based on the same physics model, we can convert the applied voltage (torque) back into acceleration. This enables us to run a “physically accurate” simulation of the arm where the joints interact realistically. We’re using WPILib’s numerical integration functions to calculate the joint positions and velocities.
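The idea can be sketched with a fourth-order Runge-Kutta (RK4) step, standing in for WPILib's numerical integration functions. This toy version uses a single-joint gravity-only model; the real code would call the full two-joint dynamics to get acceleration from the applied voltage (all constants here are illustrative):

```python
import math

G, L = 9.81, 0.8  # illustrative gravity and segment length

def pendulum_dynamics(state):
    """Toy single-joint model: state = (theta, omega), angle from horizontal."""
    theta, omega = state
    alpha = -(G / L) * math.cos(theta)  # gravity only, no motor input
    return (omega, alpha)

def rk4_step(f, state, dt):
    """One RK4 integration step of the state under dynamics f."""
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

# One 20 ms step starting at 45 degrees, at rest: the joint begins to fall
state = rk4_step(pendulum_dynamics, (math.pi / 4, 0.0), 0.02)
```

Running this in a loop at the robot's 20 ms period produces the simulated joint states that the visualization consumes.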
To visualize the movement of the simulated arm, we’re using AdvantageScope’s new support for 3D mechanisms based on the current version of the robot CAD. The robot code just logs a series of 3D poses indicating the position of each joint. As an example of simulation in action, we can position the arm above the robot and then drop it. Notice how the movements of each joint affect each other, like the elbow falling both backward and forward.
This video also demonstrates the limitations of this simulation approach. Notably, there is no simulation of collision with the intake (or the ground). The wrist is also simulated as a single jointed arm with no gravity, so it doesn’t match the motions of the other joints. However, the simulation is still useful as a proof of concept for the control system. For example, we can experiment with adding forces to the simulation that aren’t modeled by the feedforward model (as mentioned previously).
Trajectories & Optimization
A feedforward model is all well and good, but we’re still missing a critical puzzle piece: How do we plan the arm’s motion from point A to B? Manual control is useful as a fallback or for fine adjustments, but software-driven trajectory following will almost always provide better results for bigger movements. There are several approaches to this problem that we considered:
- Trapezoidal profiles on each joint: This is a simplistic approach that’s effective in some cases, but its key disadvantage is that it disregards the intermediate positions of the arm. There are lots of obstacles that we need the arm to avoid, like the robot body, intakes, and nodes. Due to the risk of unplanned self-destruction, we quickly eliminated this approach.
- Pregenerated splines: Instead, we could define smooth splines between all of the important arm presets in the XY coordinate space. This is better, but still has some issues. It requires significant work to manually write and test these paths, and the pregenerated approach makes it difficult to maneuver smoothly from unknown arm positions (for example, if the operator makes a manual adjustment while scoring). More fundamentally, this approach prioritizes smooth paths over fast paths; the fastest route from A to B may not be a direct path in the XY coordinate space.
- A* + splines (or similar): This addresses many of the issues of pregenerated splines, particularly if the path finding is handled in joint coordinate space instead of XY. Obstacle avoidance can be handled on-the-fly and the generated paths are smooth. However, simple splines still ignore the true dynamics of the arm, leaving potential performance on the table.
Ultimately, our goal is to follow paths that will always move from point A to B in the shortest time that the mechanism will allow. We also have a feedforward model that predicts the required voltage for any potential movement, which cannot exceed 12 volts. To solve this problem, we turned to numerical optimization.
Numerical optimization libraries like CasADi attempt to minimize a function given a set of variables and constraints. This is a generic technique that’s applicable in lots of situations, including path planning for drivetrains and arms. Our problem formulation for CasADi looks something like this:
- Select 25 discrete sets of joint angles given start and end angles.
- Minimize the total time of the arm’s trajectory.
- Keep the applied voltage at every point under 10 volts (some margin is required for the feedback controllers).
- Keep the end effector position away from defined XY regions (the ground, robot body, intakes, nodes, etc).
Here are a few examples of trajectories CasADi has generated:
Avoiding obstacles
This is an example of an extreme obstacle in the middle of the optimal path, but the trajectory is able to avoid it effectively. On the robot these obstacles will include the robot frame, intakes, and nodes.
Taking advantage of momentum
On this path, the shoulder moves back and forth to keep the elbow centered for as long as possible while building momentum. It can accelerate faster when the arm isn’t cantilevered out towards the final target.
Lifting with the shoulder
When the position constraints allow for it, it’s optimal to lift the arm with the two shoulder motors instead of the single elbow motor. Notice that the elbow is assisted by gravity on the way down such that it “falls” into its final angle, then the more powerful shoulder joint lifts the full mechanism.
Kairos
The downside of numerical optimization for trajectories compared to simpler techniques is that it is far more computationally intensive. While many of the paths can be generated ahead of time and cached, we still need to be able to control from unknown starting positions. To solve this issue, we’re taking advantage of the four Orange Pis on the robot that were intended to run Northstar (AprilTag tracking).
We’ve written a Python wrapper around CasADi for arm trajectory generation, which we’re calling Kairos. Kairos runs on all of the Orange Pis and communicates with the RIO using NetworkTables. When a new trajectory is required, the RIO sends a request over NT to the Kairos instances. Within ~250 ms they will send the generated paths back to the RIO. Running Kairos on all four Pis is simply for redundancy (though perhaps a little overkill).
One of our key concerns when running the arm controls across multiple devices is keeping the configuration data (e.g. physics constants) in sync. We don’t want Kairos generating trajectories that are infeasible for the RIO to follow. While we generally prefer to keep constants in source code, we made an exception in this case by moving the arm configuration to a JSON file. The contents of the JSON can be sent directly over NT to Kairos, so there is only ever one source of truth for the arm constants.
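A sketch of what such a config file might look like (the field names and values here are invented for illustration, not our actual file):

```json
{
  "shoulder": {
    "mass": 2.5, "length": 0.69, "moi": 0.4, "cgRadius": 0.35,
    "motor": { "type": "NEO", "count": 2, "reduction": 100.0 }
  },
  "elbow": {
    "mass": 2.0, "length": 0.84, "moi": 0.5, "cgRadius": 0.4,
    "motor": { "type": "NEO", "count": 1, "reduction": 80.0 }
  }
}
```

Because the RIO reads this file and forwards the raw contents over NT, both sides are guaranteed to be working from identical constants.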
Intake Avoidance
Smashing the intakes is bad, apparently. They’re also conveniently positioned directly in the optimal path for the arm:
For now, our intake avoidance scheme is to create avoidance regions around each intake. If the end effector (anywhere from the wrist to the tip of the gripper) enters one of those zones, the appropriate intake is deployed automatically. When following a trajectory, we also “look ahead” by 250 ms and deploy the intake if the arm is about to enter one of the avoidance regions. Here’s an example of the arm moving between several preset positions on both sides of the robot. All of the trajectories seen here are generated by Kairos and followed using the full simulation and feedforward models:
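The region check itself can be sketched like this (the region coordinates, function names, and the trajectory-sampling callable are illustrative, not our actual code):

```python
LOOKAHEAD_SECS = 0.25  # deploy the intake this far ahead of the arm

def in_region(point, region):
    """region = (min_x, min_y, max_x, max_y) in arm XY coordinates."""
    x, y = point
    min_x, min_y, max_x, max_y = region
    return min_x <= x <= max_x and min_y <= y <= max_y

def should_deploy(region, current_pos, trajectory=None, now=0.0):
    """Deploy if the end effector is in the region, or will be soon.

    trajectory is an optional callable mapping time -> (x, y), standing in
    for sampling the active Kairos trajectory.
    """
    if in_region(current_pos, region):
        return True
    if trajectory is not None:
        # Look ahead along the active trajectory by LOOKAHEAD_SECS
        return in_region(trajectory(now + LOOKAHEAD_SECS), region)
    return False
```

The same check runs for each intake's region independently, so only the intake actually in the arm's path gets deployed.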
One of our next areas for improvement will be reducing the number of unnecessary multi-side extensions, as these are not legal outside the community or loading zone. However, we expect that these sorts of rapid movements from one side of the robot to the other will be rare in practice as the arm will automatically retract to center when we aren’t actively intaking or scoring.
Code Links
- Arm subsystem (here)
- Arm kinematics class (here)
- Arm dynamics (feedforward & simulation) class (here)
- Arm simulator implementation (here)
- Kairos source code (here)
- Kairos solver implementation with CasADi (here)
Thanks again to @rafi for the feedforward model on which much of this work is based. We hope that this may prove useful to other teams attempting to wrangle their multi-jointed arms. As always, we’re happy to answer any questions.
-Jonah
I could watch this a thousand times, awesome work!
Double-jointed arm trajectory optimization on a Pi is a solid approach. I like the chosen architecture a lot, and it’s cool to see more teams using modern controls tools like CasADi.
250 ms is a pretty good total solve time for CasADi too. From a local run of run_local.py (I appreciate the fact this is portable btw):
Total seconds in IPOPT = 0.140
EXIT: Optimal Solution Found.
solver : t_proc (avg) t_wall (avg) n_eval
nlp_f | 71.00us ( 2.09us) 66.66us ( 1.96us) 34
nlp_g | 6.84ms (201.09us) 6.82ms (200.64us) 34
nlp_grad | 462.00us (462.00us) 461.17us (461.17us) 1
nlp_grad_f | 245.00us ( 7.00us) 242.05us ( 6.92us) 35
nlp_hess_l | 49.12ms ( 1.49ms) 49.38ms ( 1.50ms) 33
nlp_jac_g | 26.26ms (750.34us) 26.52ms (757.72us) 35
total | 139.99ms (139.99ms) 140.66ms (140.66ms) 1
Setup time = 0.794 ms
Solve time = 271.116 ms
I’ve been doing optimization R&D for a while now, and usually, CasADi spends a lot of time in problem setup before the CasADi solve() call (might be their graph coloring autodiff, but I haven’t been able to tell).
By the way, prior experimentation I’ve done with CasADi has shown that the Python wrapper has minimal overhead over the C++ version, so sticking with Python is a good choice.
https://youtu.be/IUdA7jxHnhw 6328 Mechanical Advantage has made amazing progress and provides an awesome kinematics simulation that details how their arm will move during a match to not collide with their intake. They also detail their intake prototype and provide a demo of their scouting software.
How did you guys make a toggle on the top of the screen?
You guys are doing amazing work this season, and I am incredibly excited to see your robot at competition. I wonder if you guys have set your bumper height, because I do not see bumpers in your CAD. Also, I don’t understand what the bottom wheels on your cone intake are used for.
Very nice work! It’s neat to see more teams using optimization based solutions in FRC.
Looking through the formulation, I’d expect it would be significantly faster using a fixed dt and cost function of state error instead. This is almost always done in industrial robotics since allowing varying dts makes the problem inherently more non-linear and non-convex (making the solver slower and more unstable, not a good idea). A very cool recent example is https://www.bostondynamics.com/resources/blog/picking-momentum.
Warm starting the solver using the previous solution as the initial guess for the current problem would also likely improve the computation time.
I’m not sure exactly what you mean by ‘toggle’, but I’ll assume you’re referring to the “auto, tele, post game” toggles at the top of the screen. If you were referring to something else, please let me know.
For these toggles, we created three HTML buttons with some JS to switch the game state in our app. When one of these buttons is clicked, the game state updates, and Svelte loads the component for the respective page whenever the game state changes.
The bottom wheels were originally added in the prototyping phase to help sorta “kick” the cone into the flange grabber. We are actively assessing if these are necessary and they may vanish in the next couple days. The tipped cone intake has been somewhat on the back burner as of late. The bumpers are planned to be about 2.25 inches off the ground.
Pardon me if I’m missing something because a lot of this seems over my head, but would you be able to elaborate further on how you converted the position waypoint data output from your pathfinder to velocity and acceleration data? I am trying to write code to calculate these motion profile trajectories, but I am stuck going from position data to velocity information with realistic accelerations while maintaining time synchronization. Thanks!
Now that I am looking at the code itself, I am going to assume that the velocities and accelerations are handled by CasADi. I am trying to run run_local.py on my machine, and it is outputting the graphs, but where can I find the trajectory data? If it is being sent only over NT, where can I find that in the code so I can instead print it out?
CasADi is optimizing the positions and total time (effectively \Delta t), which can be used to calculate the velocities and accelerations using a finite difference. For example, with a fixed time between points any that are closer together will have a lower velocity. Calculating the velocity at a sample requires two points and calculating acceleration requires three points. Here’s the relevant code in Java and Kairos (the latter of which is also calculating jerk with four points).
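A standalone sketch of that finite-difference step (not our actual code): given N position samples and the total time, central differences give velocity from two neighbors and acceleration from three points, with one-sided handling at the endpoints.

```python
def differentiate(positions, total_time):
    """Finite-difference velocities and accelerations from position samples.

    positions: list of N evenly spaced joint positions
    total_time: trajectory duration, so dt = total_time / (N - 1)
    """
    n = len(positions)
    dt = total_time / (n - 1)

    vel = []
    for i in range(n):
        # Central difference where possible, one-sided at the endpoints
        prev_i = max(i - 1, 0)
        next_i = min(i + 1, n - 1)
        vel.append((positions[next_i] - positions[prev_i]) / ((next_i - prev_i) * dt))

    acc = []
    for i in range(1, n - 1):
        # Second central difference needs three points
        acc.append((positions[i - 1] - 2 * positions[i] + positions[i + 1]) / dt**2)
    acc = [acc[0]] + acc + [acc[-1]]  # pad endpoints with nearest interior value

    return vel, acc
```

For sample points that are closer together in position (with the time step fixed), this naturally produces lower velocities, which is the behavior described above.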
“run_local.py” is specifically for testing the solver without sending data over NT. The return value of “solver.solve()” is a tuple of (total time, shoulder position array, elbow position array). You can modify “run_local.py” to print out those values after they’re returned here. Calculating the velocity and acceleration at a given sample point is handled by the Java code (linked above). The “run_client.py” script will receive requests and send the trajectories over NT.
Thanks for the detailed response!