FRC 1466 Webb Robotics – 2023 Build Thread

Code Update!

This week’s update is a general summary of all the wonderful code things we’ve done this season (the previous post didn’t include images or videos).

PathPlanner

As this is our first year using Java, it’s also our first year using PathPlanner. We started out by learning how to use the desktop app to design different routines and simulate them in the app. This really helped our early strategy planning when deciding which autos are feasible within a given timeframe.

Soon we were able to actually test the routines on our robot and tune the auto PID constants. One thing we’re happy to note: if for some reason your rotational velocity is inverted in your teleop command and you want to carry that over to auto, you can simply invert the sign of the rotation P constant.

After experimenting with a simple PPSwerveControllerCommand, we moved on to the SwerveAutoBuilder, event maps, and stop events. This lets us seamlessly integrate arm and gripper actions with the robot’s movement. Overall, PathPlanner has proved to be a great tool and has freed us up to develop and test other things, like running autonomous routines in teleop. A rough sketch of how the event map and auto builder fit together is below.
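
This loosely follows the PathPlanner examples from this season; the subsystem methods, commands, path name, and gains are placeholders rather than our real code, and the exact constructor overload may differ between library versions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.pathplanner.lib.PathConstraints;
import com.pathplanner.lib.PathPlanner;
import com.pathplanner.lib.PathPlannerTrajectory;
import com.pathplanner.lib.auto.PIDConstants;
import com.pathplanner.lib.auto.SwerveAutoBuilder;
import edu.wpi.first.wpilibj2.command.Command;

// Map event marker names (set in the PathPlanner GUI) to commands.
Map<String, Command> eventMap = new HashMap<>();
eventMap.put("armToMid", new DeployArm(arm));   // placeholder command classes
eventMap.put("score", new ScoreCube(gripper));

SwerveAutoBuilder autoBuilder = new SwerveAutoBuilder(
    drive::getPose,                       // pose supplier from our estimator
    drive::resetPose,                     // used to seed the starting pose of the path
    drive.getKinematics(),
    new PIDConstants(4.0, 0.0, 0.0),      // translation gains (illustrative)
    new PIDConstants(1.5, 0.0, 0.0),      // rotation gains (illustrative)
    drive::setModuleStates,
    eventMap,
    true,                                 // mirror paths for the red alliance
    drive);

// Build the whole routine, including stop events, from a path group.
List<PathPlannerTrajectory> group =
    PathPlanner.loadPathGroup("TwoPieceTop", new PathConstraints(3.0, 2.5));
Command fullAuto = autoBuilder.fullAuto(group);
```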

Auto-align in Teleop

When experimenting with PathPlanner and learning the basics of what robot pose is and why it matters, we stumbled upon the PathPlanner on-the-fly trajectory builder (link to docs). Combined with our goal of using vision processing this year, this led us to pursue auto routines in teleop as a form of driver assist.
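
For reference, on-the-fly generation looks roughly like this (following the PathPlanner docs at the time; the target waypoint, constraints, and the `drive.getPose()` call are illustrative assumptions):

```java
import com.pathplanner.lib.PathConstraints;
import com.pathplanner.lib.PathPlanner;
import com.pathplanner.lib.PathPlannerTrajectory;
import com.pathplanner.lib.PathPoint;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

// Generate a path from wherever the robot currently is to a target node.
PathPlannerTrajectory traj = PathPlanner.generatePath(
    new PathConstraints(3.0, 2.0),                  // max velocity (m/s), max accel (m/s^2)
    new PathPoint(
        drive.getPose().getTranslation(),           // start at the current pose
        Rotation2d.fromDegrees(180),                // heading (direction of travel)
        drive.getPose().getRotation()),             // holonomic rotation at the start
    new PathPoint(
        new Translation2d(1.80, 2.75),              // made-up node position
        Rotation2d.fromDegrees(180),
        Rotation2d.fromDegrees(180)));              // face the grid at the end
// The trajectory can then be followed like any other (e.g., a PPSwerveControllerCommand).
```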

We purchased a 3x3 button box, which represents one scoring grid in the community, with each button representing a node. Instead of buying three of them and lugging around 27 buttons’ worth of button boxes (I’m not even sure WPILib can handle that many buttons), we decided to use just one and detect which of three zones the robot is in to determine which grid the buttons correspond to (sketched below).
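
The zone check itself is simple. A minimal sketch, assuming a field-relative pose from our estimator; the Y boundaries are illustrative rather than measured, and alliance mirroring is left out for brevity:

```java
import edu.wpi.first.math.geometry.Pose2d;

/** Returns which scoring grid (0, 1, or 2) the 3x3 button box currently maps to. */
public int getGridIndex(Pose2d robotPose) {
  double y = robotPose.getY();   // field-relative Y in meters
  if (y < 1.90) {
    return 0;                    // grid nearest the field wall
  } else if (y < 3.60) {
    return 1;                    // middle (co-op) grid
  } else {
    return 2;                    // grid nearest the loading zone
  }
}
```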

We currently have the robot successfully aligning according to button presses. One thing to note is that PathPlanner path generation, as of writing this, is only constrained translationally, not rotationally. This means that if you’re really close to the translational pose you want but rotationally far away, maybe facing 180º off, the command finishes almost instantly and the robot wildly turns to compensate. This can be solved by simply getting the robot to a roughly correct rotation at a decent distance away before hitting the button, but we’ve also developed a pseudo-rotational-constraint function that roughly updates the current constraints based on a radians-per-second limit (sketched below).
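
The idea is to cap the translational constraints so the generated path takes at least as long as the rotation would need at the radians-per-second limit. This is a reconstruction of the concept rather than our exact code, and the values passed in are illustrative:

```java
import com.pathplanner.lib.PathConstraints;
import edu.wpi.first.math.geometry.Rotation2d;

/** Caps translational speed so a short path can't finish before the rotation does. */
public PathConstraints constraintsForRotation(
    Rotation2d current, Rotation2d target, double distanceMeters,
    double baseMaxVel, double baseMaxAccel, double maxRadPerSecond) {
  double rotTime = Math.abs(target.minus(current).getRadians()) / maxRadPerSecond;
  if (rotTime < 1e-6 || distanceMeters < 1e-6) {
    return new PathConstraints(baseMaxVel, baseMaxAccel);
  }
  // Average translational speed that makes the drive last about as long as the rotation.
  double cappedVel = Math.min(baseMaxVel, distanceMeters / rotTime);
  return new PathConstraints(cappedVel, baseMaxAccel);
}
```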

In addition to the 3x3 box, we’re planning on reserving two buttons on the drive controller for aligning to the double substation. It remains to be seen if this is the best idea, since it relies on the human players behind the substation being very consistent and on the robot matching that placement.

Pose Accuracy

What you might have noticed by now is that a lot of this requires the estimate of the robot’s pose to be pretty accurate. We’ve tried to combat some of this by making sure encoder measurements were on point, but at some point that isn’t enough, since drift builds up throughout the match (barring some sort of driver zeroing routine, which would eat into cycle time).

This is our first year using vision processing on our robot. Our software of choice is PhotonVision, which we’ll be running on an Orange Pi 5 with an AR0144 global shutter monochrome camera. At this point we’re only going to use one camera, though this could change in the crunch time before competition (we bought an extra for redundancy). We successfully set up PhotonVision on the Orange Pi and accessed the web interface to calibrate the camera. Unfortunately, we haven’t had time to mount and wire the camera on the robot and test pose estimation there.

Another possible source of pose inaccuracy we thought of early in the season was pose estimation while driving over the charge station. The current pose estimator works by taking diffs of module positions and using the current module angles and the gyro angle to apply a twist. The problem is that this is all calculated in 2D, assuming the field is flat. If the robot drives straight for a meter, its true position is quite different between flat ground and the charge station, but the estimator calculates both as the same thing (a quick numeric example of the error is below).
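
Assuming a roughly 11º ramp angle (an approximation, not a measurement):

```java
// Wheel odometry measures distance along the tilted ramp, but the 2D estimator
// treats it all as horizontal travel.
double alongRamp = 1.0;                                   // meters of measured wheel travel
double pitchRad = Math.toRadians(11.0);                   // rough charge station ramp angle
double trueHorizontal = alongRamp * Math.cos(pitchRad);   // ~0.982 m of real horizontal travel
double error = alongRamp - trueHorizontal;                // ~1.8 cm over-counted per meter of ramp
```

A couple of centimeters per meter isn’t huge on its own, but it adds up every time the robot crosses the charge station.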

Initially, we tried to come up with a solution ourselves by creating a custom odometry updater that also uses roll and pitch. This would transform the z = 0 plane according to the pitch and roll using a rotation matrix, then find the projection of the diff vector onto the z axis to work out what the “true” difference should be. This ultimately didn’t work out, which we now think could be due to mistaken assumptions about our gyro’s coordinate frame and how it combines with the robot-relative angles. A picture of our thought process is included below, though.

We later found out about this PR on the WPILib GitHub adding 3D pose estimation. We tested it on our real robot for half a meeting and it seemed to work well (though we didn’t get much testing in). We also hit a problem where the code sometimes crashed when running a PathPlanner auto simulation, with a Rotation3d full of nulls appearing. Luckily, we were able to debug this and submit a PR upstream for the Pose3d class log method. Our plan is to use this code locally to fix the charge station pose issue until the full 3D pose estimation PR is merged, and hopefully switch to the upstream version once it is. With working vision the issue should be minimized, but since this is our first year doing vision, we want to make sure we have contingencies.

Swerve

Since this is our first year doing swerve, it’s also been fun ironing out the errors that pop up from time to time. One characteristic of our swerve robot is that we’re still aligning the modules by hand at boot, since our CANCoders still don’t appear to be working. We tried gluing the magnets recently, but problems remain (only one of the status lights is orange, yet more than one CANCoder is having issues).

Because of the inconsistencies that come with aligning by hand, we simply correct for heading error (when the robot is only translating) with a PID loop that outputs a new ChassisSpeeds (sketched below). Our PID constants (both for the heading correction and for the modules) are also currently hand-tuned, since we haven’t gotten around to systematically characterizing our system with the SysId tool (we’re running into the CAN bug).
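
A minimal sketch of the heading hold, assuming the gyro yaw comes in as a Rotation2d; the class name, gains, and drive inputs are illustrative:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class DriveSubsystem {
  private final PIDController headingController = new PIDController(2.0, 0.0, 0.1);

  public DriveSubsystem() {
    headingController.enableContinuousInput(-Math.PI, Math.PI);  // wrap at +/-180 degrees
  }

  /** Called while the driver is only translating; heldHeading is the yaw we want to keep. */
  public ChassisSpeeds correctHeading(double vx, double vy, Rotation2d heldHeading, Rotation2d yaw) {
    // vx/vy are the driver's field-relative translation commands in m/s.
    double omega = headingController.calculate(yaw.getRadians(), heldHeading.getRadians());
    return ChassisSpeeds.fromFieldRelativeSpeeds(vx, vy, omega, yaw);
  }
}
```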

We were also able to test, with moderate success, the math from the 2nd-order kinematics whitepaper that came out a while back on our swerve robot. Because of physics, the robot would originally skew translationally when rotating while driving in a direction. With the fix, the skew stopped occurring.
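
We won’t reproduce the whitepaper math here, but the closely related trick many teams used for the same skew is to integrate the commanded speeds over one loop and back out the twist that actually reaches that pose. A minimal version of that correction (not necessarily what YAGSL does internally) looks like this:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Twist2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

/** Corrects the translate-while-rotating skew by treating one loop as a pose exponential. */
public ChassisSpeeds correctForSkew(ChassisSpeeds speeds, double dtSeconds) {
  // Pose the robot would reach after one loop if the speeds were held constant.
  Pose2d endOfLoop = new Pose2d(
      speeds.vxMetersPerSecond * dtSeconds,
      speeds.vyMetersPerSecond * dtSeconds,
      new Rotation2d(speeds.omegaRadiansPerSecond * dtSeconds));
  // Twist that actually gets from the origin to that pose along an arc.
  Twist2d twist = new Pose2d().log(endOfLoop);
  return new ChassisSpeeds(twist.dx / dtSeconds, twist.dy / dtSeconds, twist.dtheta / dtSeconds);
}
```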

Since we wanted a strong drive base without issues, we recently decided to switch to an up-and-coming swerve library: YAGSL (Yet Another Generic Swerve Library). It implements the 2nd-order kinematics better and is generally a lot easier to use and better implemented than our previous code. Since a rewrite was happening anyway, we also took the opportunity to contribute support for TalonFX motors (the author had mostly been testing with NEOs). There still appear to be a few possible issues we’re working to figure out (like possible problems with CANCoders, which we unfortunately can’t test since we’re not using them), but we were able to get the library working successfully for us.

Simulation

With the switch to YAGSL also came an exciting tool: simulation. The library comes prepackaged with a basic Field2d and the ability to test drivetrain code, which has sped up our development a lot. We can now write the theoretical stuff outside the lab, test it in simulation to make sure it actually behaves how it’s supposed to, then move it onto the physical robot in the lab.

We’re also using Mechanical Advantage’s AdvantageScope software to visualize the robot in 3D on the field. We haven’t done a lot of experimenting with this tool yet, but it opens up a whole lot of possibilities. We also made a one-jointed-arm Mechanism2d just to learn how to do it, and it works pretty well (sketched below).
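
The Mechanism2d arm only takes a few lines. Dimensions, names, and the armAngle value here are arbitrary:

```java
import edu.wpi.first.wpilibj.smartdashboard.Mechanism2d;
import edu.wpi.first.wpilibj.smartdashboard.MechanismLigament2d;
import edu.wpi.first.wpilibj.smartdashboard.MechanismRoot2d;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

Mechanism2d mech = new Mechanism2d(2.0, 2.0);                // 2 m x 2 m canvas
MechanismRoot2d pivot = mech.getRoot("armPivot", 1.0, 0.5);  // where the arm pivots on the canvas
MechanismLigament2d armLigament =
    pivot.append(new MechanismLigament2d("arm", 0.8, 0.0));  // 0.8 m long, starting at 0 degrees
SmartDashboard.putData("ArmSim", mech);

// In periodic(): point the ligament at the arm's current (or simulated) angle.
armLigament.setAngle(armAngle.getDegrees());
```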

Swerve Balance

Another implementation we were able to develop was swerve balancing on the charge station. We developed some theory for it by assuming the robot sits on a tilted 2D plane, and that this plane is essentially the derivative (tangent plane) of a 3D surface. We then want to find the maximum of that surface, which is where the plane is level. This is essentially a gradient descent problem (ascent, really, since we’re climbing to the maximum), so we just need the slope of the plane and to step along it in the x and y directions (in the robot coordinate system) every loop until we reach the maximum.

Our first approach used a rotation matrix to transform the set of all points on the z = 0 plane to a new plane, then cancelled terms to find our final function. This seemed to work pretty well, but after a closer read of the Pigeon 2 user manual, since roll is defined about a local reference axis rather than a coordinate axis, it could be as simple as taking the tangents of the angles. Regardless, it works fine, and the possible error only shows up when the robot tries to balance at a diagonal (where it just drifts a bit to one side). This is fixed in the latest code.

Videos of it working are attached. We also included a tuning mechanism with a constant and an exponent, which lets you scale the balance to be more or less aggressive. Increasing the scale causes oscillation problems (since this is really just a P loop), but those should be fixed by the exponent, which makes the output drop off more as the robot gets closer to level (think of an exponential curve far from the origin vs. close to it). A good combination of the two should theoretically net a really fast balance. Note that your gyro must be calibrated precisely for this method to work. A rough sketch of the output function is below.
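
This is a reconstruction of the idea rather than our exact code; the sign flips and the drive call depend on how the Pigeon 2 is mounted and on our drive API, so treat them as placeholders:

```java
import edu.wpi.first.math.kinematics.ChassisSpeeds;

/** Scaled, exponent-shaped drive output for one tilt axis. */
public double balanceOutput(double tiltRadians, double kScale, double exponent) {
  // tan(tilt) is roughly the slope we're sitting on; an exponent > 1 keeps the output
  // small near level (less oscillation) while staying aggressive at large tilts.
  double slope = Math.tan(tiltRadians);
  return kScale * Math.copySign(Math.pow(Math.abs(slope), exponent), slope);
}

// In the balance command's execute(), walk "uphill" each loop (robot-relative speeds):
double vx = -balanceOutput(pitchRadians, kScale, exponent);
double vy = -balanceOutput(rollRadians, kScale, exponent);
drive.drive(new ChassisSpeeds(vx, vy, 0.0));
```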

Arm/Gripper

We also recently started coding our arm and gripper. Our arm is just a one-jointed arm, so coding it wasn’t too difficult. Since we didn’t have limit switches for the arm and wanted to make sure that, for any setpoint, the arm wouldn’t run into itself, we created a custom ArmPIDController that uses Rotation2d for the measurement and setpoint. It includes a new setAvoidanceZone method (also using Rotation2d) that marks a small arc of the circle (must be less than 180º) as an avoidance zone. If the setpoint is within the zone, it is pushed out to the closest border (plus or minus a tolerance). If the measurement is within the zone near an edge, the arm moves in the opposite direction to reach the setpoint. If the arm would have to pass through the zone to reach the setpoint, it goes the other way, taking the long way around. Only absolute encoders are currently supported for measurements. A simplified sketch of the setpoint handling is below.
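
This is a reconstruction that assumes the zone doesn’t wrap past 0; the real ArmPIDController also handles the in-zone-measurement and long-way-around cases:

```java
import edu.wpi.first.math.MathUtil;
import edu.wpi.first.math.geometry.Rotation2d;

/** If the setpoint falls inside the avoidance zone, push it out to the nearest border. */
public Rotation2d clampSetpoint(
    Rotation2d setpoint, Rotation2d zoneStart, Rotation2d zoneEnd, Rotation2d tolerance) {
  double sp = MathUtil.inputModulus(setpoint.getRadians(), 0, 2 * Math.PI);
  double start = MathUtil.inputModulus(zoneStart.getRadians(), 0, 2 * Math.PI);
  double end = MathUtil.inputModulus(zoneEnd.getRadians(), 0, 2 * Math.PI);
  if (sp < start || sp > end) {
    return setpoint;                                  // outside the zone: leave it alone
  }
  boolean nearStart = (sp - start) < (end - sp);      // push toward the closer border
  double pushed = nearStart ? start - tolerance.getRadians() : end + tolerance.getRadians();
  return new Rotation2d(pushed);
}
```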

We’ve been able to test the arm and it works well. We have a bug where switching buttons quickly causes unexpected behavior (this should be fixed in the latest code). It’s also working well with auto and with driver assist in teleop.

Our past gripper design (we should be getting a new one soon) was a simple claw, so we just used preset setpoints for gripping. This worked really well for cube pickup, but not so well for the cone (the gripper design meant cone pickup had to be super precise).

Our past gripper design also couldn’t be open when moving between the ground and storage positions, since it would get in the way of the arm and try to destroy itself. We implemented fixes for this in the code, but sadly a bug slipped in: if a button was pressed and the robot was then disabled and re-enabled, the robot would continue following the old command. This was later fixed by adding a Trigger on DriverStation.isTeleopEnabled, but not before this happened.
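
Roughly, the fix is to have a Trigger put the mechanism back into a known state whenever teleop (re-)enables instead of letting the stale command resume; something like the sketch below, where the stow command is a placeholder for our own:

```java
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj2.command.InstantCommand;
import edu.wpi.first.wpilibj2.command.button.Trigger;

// Fires every time teleop becomes enabled, including after a disable mid-match.
new Trigger(DriverStation::isTeleopEnabled)
    .onTrue(new InstantCommand(gripper::stow, gripper));
```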

Future Plans

We’re currently refining controls and the speed of the robot (especially in autonomous). We have some new controllers on the way that should fix an awful deadband issue. When the next gripper is designed and built, we’ll also need to integrate it. We also haven’t tested vision yet, but the cases for the camera and the coprocessor should be finished soon so we can wire and mount everything.
