Team 254 Presents: 2019 Code for Backlash

Team 254 is proud to present the code for our 2019 robot: Backlash. If you have any questions, feel free to ask!

Some highlights from this year’s code:


One thing that I noticed right off the bat was the lack of the Ramsete controller and splines that you all used last year.

Was there any specific reason to switch back to the adaptive pure pursuit controller and straight line + arc paths?


What coding measures were taken to make the drivetrain more controllable and precise?

It’s because they used NEOs


Is there a reason you ended up not using your Drive Characterization?

First and foremost, thank you for releasing your source code. It’s an incredible resource and I always look forward to discovering all the little details and ways it gets better every year.

I would love to know more about your subsystem testing process. Is that system used for testing in the pit? In addition to any automated tests, what manual checks were done for testing and checking overall health?

One part of it was that we were unable to characterize the drivetrain reliably because of the NEOs, and we prioritized the rest of our code (primarily superstructure code) over rewriting our auto stack. Another reason, which we realized early on, is that if we wanted to use vision for actions such as the AutoSteerAndIntakeAction class, we would not necessarily be able to predict the robot’s end velocity after the action, and so would not be able to pre-compute trajectories on startup. One thing we did try in the first couple of weeks of build season was computing trajectories on the fly, but that took too long to be practical. We therefore wanted a follower that we wouldn’t have to spend much time on and that we knew would work within these constraints, which was our adaptive pure pursuit controller.

I’m not sure exactly what you mean. Are you asking about teleop driving or about how our autonomous works?

We did not use our drive characterization because we did not end up using the nonlinear feedback follower from 2018, which was the only thing that required it.

The subsystem tests were not used as much this year. I believe that the ones that are currently in the code were used around Bag & Tag time to test the robot before bagging. In the pit before matches, our systems check consists of checking all sensors (encoders, gyro, Limelights, etc.) for direction/values and then testing all driver controls including driving, superstructure movements, and climbing to make sure that everything behaves as expected.

The other thing we did was that every time we replaced a motor, we used Phoenix Tuner to verify that it spun in the correct direction and was working correctly.

Teleop driving, sorry for not clarifying.

Actually, I meant the answer you gave to Prateek_M. Thanks.


This year, our teleop driving worked a little differently than in previous years. We used what we’re choosing to call Cheesy-ish Drive, which is a little simpler than the CheesyDriveHelper class we’ve used in the past (the link there is to 2018’s code) but felt better to our driver this season.

In terms of aiding the driver in lining up accurately with vision targets, particularly the loading station on the alliance wall, we have an auto steer method that lets the driver control the forward throttle while automatically calculating the desired curvature for the robot to drive, based on a proportional feedback loop aimed at the vision target.
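As a minimal sketch of the idea, assuming a Limelight-style horizontal offset reading (the gain and names here are illustrative, not our exact code):

```java
// Minimal sketch of proportional auto steer: the driver supplies throttle,
// and the code computes curvature from the target's horizontal offset.
// kAutoSteerKp and the method names are illustrative, not Team 254's API.
public class AutoSteerSketch {
    private static final double kAutoSteerKp = 0.05; // proportional gain, tuned on the robot

    /**
     * @param targetOffsetDegrees horizontal angle to the vision target
     *                            (e.g. a Limelight "tx" reading)
     * @return curvature command to combine with the driver's throttle
     */
    public static double autoSteerCurvature(double targetOffsetDegrees) {
        // Proportional feedback: steer harder the further we are off target.
        return kAutoSteerKp * targetOffsetDegrees;
    }
}
```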

The accuracy of the drivebase wasn’t the most important thing this year, though. One of the reasons we decided to go with a turret is that, under heavy defense, our drivers could just aim the superstructure towards the rocket or cargo ship and score without the drivebase placement needing to be precise.

Hope this answers your question.


Was the switch purely because the driver liked the way it felt better? How does this differ from the previous iteration of CheesyDrive?


The primary reason was that it felt better on this particular drivebase (probably something to do with the NEOs, since we’ve successfully used CheesyDriveHelper on CIM and Mini CIM drivebases before).

Cheesy-ish Drive, as we call it, simply scales the turn stick value using a sine function (we always do this because we’ve found it makes the robot feel better to drive), and then drives the robot based on the output of the inverse kinematics method, which converts the desired robot velocity into left and right wheel velocities.
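A rough sketch of that flow (the constants and names here are illustrative, not our exact values):

```java
// Rough sketch of the Cheesy-ish Drive flow described above. The turn stick
// is shaped with a sine function, then the desired (linear, angular) velocity
// is converted to wheel velocities by differential-drive inverse kinematics.
public class CheesyishDriveSketch {
    private static final double kWheelNonLinearity = 0.65; // shaping constant, tuned by feel
    private static final double kTrackWidth = 26.0;        // example track width

    /** @param throttle desired linear velocity; @param wheel turn stick in [-1, 1] */
    public static double[] drive(double throttle, double wheel) {
        // Sine shaping: gentle response near center, full authority at the ends.
        double denominator = Math.sin(Math.PI / 2.0 * kWheelNonLinearity);
        wheel = Math.sin(Math.PI / 2.0 * kWheelNonLinearity * wheel) / denominator;
        wheel = Math.sin(Math.PI / 2.0 * kWheelNonLinearity * wheel) / denominator;

        // Inverse kinematics: each side gets the linear velocity plus or
        // minus half the track width times the angular velocity.
        double deltaV = kTrackWidth * wheel / 2.0;
        return new double[] {throttle - deltaV, throttle + deltaV};
    }
}
```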

CheesyDriveHelper, on the other hand, is much more complex. It scales the turn stick value with a similar sine function, but it also takes into account the robot’s negative inertia, aiding you when you try to increase or decrease the magnitude of your turn, and it has quick stop, which lets you stop on a dime while quick turning rather than continuing to rotate due to momentum. After the turn stick value has been adjusted, the method determines the linear and angular powers we want the robot to drive at, then converts those into left and right wheel powers that are commanded to the motors open loop.
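A heavily condensed sketch of the negative inertia idea (the real helper keeps more state, including quick stop and gear-dependent gains; these names and constants are illustrative):

```java
// Condensed sketch of negative inertia compensation as described above.
public class NegativeInertiaSketch {
    private double oldWheel = 0.0;            // last loop's turn stick value
    private double negInertiaAccumulator = 0.0;

    public double apply(double wheel, double negInertiaScalar) {
        // Boost the turn command in proportion to how quickly the stick is
        // moving, counteracting the chassis's tendency to keep its old turn rate.
        double negInertiaPower = (wheel - oldWheel) * negInertiaScalar;
        oldWheel = wheel;
        negInertiaAccumulator += negInertiaPower;

        double adjustedWheel = wheel + negInertiaAccumulator;

        // Bleed the accumulator back toward zero each loop.
        if (negInertiaAccumulator > 1.0) {
            negInertiaAccumulator -= 1.0;
        } else if (negInertiaAccumulator < -1.0) {
            negInertiaAccumulator += 1.0;
        } else {
            negInertiaAccumulator = 0.0;
        }
        return adjustedWheel;
    }
}
```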


Interesting. We are currently running something similar to the 254 2016 CheesyDrive implementation with NEOs, and it feels comparable to CheesyDrive with CIMs/Mini CIMs.

Was this driver preference or something that the programming team thought was better for this robot?

Why do you prefer using Fused Heading over Yaw?

I see you guys also know the secret to (effectively) doubling the distance allotted for automatic vision alignment - accounting for reverse! Though your acceleration models don’t quite let a full-reversal reach its full (yeet-like) potential.

Both. Cheesy Drive accounts for robot inertia, but the NEOs had way more torque, making that unnecessary. Furthermore, the NEOs coasted a lot more, making the robot harder to control with Cheesy Drive. Using pure curvature drive (Cheesy-ish Drive) with the NEOs in brake mode was the alternative, and our driver ended up liking it a lot better anyway.
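For reference, putting a Spark MAX into brake mode looks roughly like this (the CAN ID is just an example):

```java
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMaxLowLevel.MotorType;

public class BrakeModeExample {
    public static void main(String[] args) {
        // kBrake stops the NEO from coasting when the output is zero.
        CANSparkMax driveMotor = new CANSparkMax(1, MotorType.kBrushless);
        driveMotor.setIdleMode(CANSparkMax.IdleMode.kBrake);
    }
}
```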

The only difference between fused heading and yaw is that, if you calibrate the compass, fused heading uses the Pigeon’s compass to reduce drift in the gyro’s zero position. That makes the measurement a little more accurate, which is why we use it. If you don’t calibrate the compass, fused heading and yaw are exactly the same.
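Reading the two values with the Phoenix API looks like this (the device ID is an example):

```java
import com.ctre.phoenix.sensors.PigeonIMU;

public class PigeonHeadingExample {
    public static void main(String[] args) {
        PigeonIMU pigeon = new PigeonIMU(0);

        // Fused heading: gyro integration corrected by the compass when calibrated.
        double fusedDegrees = pigeon.getFusedHeading();

        // Raw yaw: gyro integration only.
        double[] ypr = new double[3];
        pigeon.getYawPitchRoll(ypr);
        double yawDegrees = ypr[0];

        System.out.println("fused: " + fusedDegrees + ", yaw: " + yawDegrees);
    }
}
```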


This is my first time reading your code, and I’m curious about something rather basic. In your geometry libraries you have a file called Twist2d. I don’t really understand what it looks like physically. Is it an arc along which the robot changes direction? I’m confused about how it is defined. Is there any reference material I can look at?
Thanks

The Twist2d class is intended to represent a delta position, a velocity, or an acceleration, so its physical interpretation changes based on the use.

Two examples of how Twist2d objects are used are in our Kinematics class. The forwardKinematics method converts the encoder delta for each side of our drivebase and the gyro angle delta into a Twist2d object representing the overall delta position of the robot. The inverseKinematics method, on the other hand, converts a Twist2d object representing the overall vehicle velocity into left and right velocities for the drivebase.

A couple of clarifying points. First, our frame of reference is as follows: positive x is forward, positive y is to the left, and positive theta is counterclockwise from straight ahead (per the right-hand rule). Also, most of the time the dy value in any given Twist2d object is 0. Because we use a differential drive, we assume that our robot drives forward (dx) and then turns (dtheta). This assumption is valid because, per differential calculus, if the time period over which you calculate your delta position, velocity, or acceleration is short enough, this method is a good approximation of the actual robot kinematics.
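A condensed sketch of those two conversions (our real Kinematics class also applies a track scrub factor; the track width and class shapes here are simplified for illustration):

```java
// Condensed sketch of forward and inverse kinematics as described above.
public class KinematicsSketch {
    static final double kTrackWidth = 26.0; // inches, example value

    public static class Twist2d {
        public final double dx, dy, dtheta;
        public Twist2d(double dx, double dy, double dtheta) {
            this.dx = dx; this.dy = dy; this.dtheta = dtheta;
        }
    }

    // Encoder deltas + gyro delta -> chassis motion. dy is 0 by the
    // differential-drive assumption discussed above.
    public static Twist2d forwardKinematics(double leftDelta, double rightDelta,
                                            double deltaRotationRads) {
        double dx = (leftDelta + rightDelta) / 2.0;
        return new Twist2d(dx, 0.0, deltaRotationRads);
    }

    // Chassis velocity -> per-side wheel velocities.
    public static double[] inverseKinematics(Twist2d velocity) {
        double deltaV = kTrackWidth * velocity.dtheta / 2.0;
        return new double[] {velocity.dx - deltaV, velocity.dx + deltaV};
    }
}
```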


1. Cheesy-ish Drive
In what situations does the driver use quickTurn mode (other than turning in place)?
Why not switch automatically between quickTurn and the regular mode?

2. Sensor frame update rate
Since your loops seem to run at 10 ms, what is the sensor update rate on the drive (for getting the drive wheels’ velocities from the Spark MAX)?
What sensor update rate would you have chosen for the drivetrain if you were using SRXs?
Is there a reason not to make it the same as the loop time? Does that overload the controller? (I’m asking because it seems that in 2018 the update rate for your drivetrain sensors was 50/100 ms, while the drive loop ran at 10 ms.)

I can speak to number 1, because Jared has mentioned it in the past. The simple way of doing this, you might think, would be to use quick turn whenever you aren’t applying throttle. However, there is generally a large difference in turn rate between the maximum turn while driving and the maximum quick turn, so the transition can cause problems like overshoot when the robot suddenly starts turning much faster.

There are ways to mitigate the difference by smoothing the transition, but I haven’t driven one I was happy with. Or you can use a button.

We actually use the throttle method, and we ramp the transition from turning-while-driving to quick turn to smooth it out. Our driver was happy, so we went with it.
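One simple way to implement that kind of ramp (purely illustrative; not necessarily how their code does it) is to linearly blend the turn-command limit over a fixed number of loops when quick turn engages:

```java
// Illustrative sketch: ramp the allowed turn magnitude when quick turn
// engages so the turn rate doesn't jump. Constants and names are examples.
public class QuickTurnRampSketch {
    private static final double kRampRatePerLoop = 0.1; // full authority in 10 loops
    private double quickTurnAuthority = 0.0;            // 0 = driving limit, 1 = full quick turn

    public double limitTurn(double wheel, boolean quickTurn, double maxDrivingTurn) {
        // Ramp authority up while quick turning, back down otherwise.
        quickTurnAuthority += quickTurn ? kRampRatePerLoop : -kRampRatePerLoop;
        quickTurnAuthority = Math.max(0.0, Math.min(1.0, quickTurnAuthority));

        // Interpolate the turn limit between the in-motion cap and full authority.
        double limit = maxDrivingTurn + (1.0 - maxDrivingTurn) * quickTurnAuthority;
        return Math.max(-limit, Math.min(limit, wheel));
    }
}
```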
