Team 254 Presents: 2019 Code for Backlash

Interesting. We are currently running something similar to the 254 2016 CheesyDrive implementation with NEOs, and it feels comparable to CheesyDrive with CIMs/Mini CIMs.

Was this driver preference or something that the programming team thought was better for this robot?

Why do you prefer using Fused Heading over Yaw?

I see you guys also know the secret to (effectively) doubling the distance allotted for automatic vision alignment - accounting for reverse! Though your acceleration models don’t quite let a full-reversal reach its full (yeet-like) potential.

Both. Cheesy Drive accounts for robot inertia, but the NEOs had way more torque, making that unnecessary. Furthermore, the NEOs coasted a lot more, making it harder to control using Cheesy Drive. Using the pure curvature drive (Cheesy-ish Drive) with the NEOs on brake mode was the alternative, and our driver ended up liking this a lot better anyway.
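
For anyone curious what "pure curvature drive" means in practice, here is a minimal sketch (not our actual implementation; the method and variable names are made up):

    // Curvature drive: the turn command scales with the throttle, so a given stick
    // position traces a constant-radius arc. quickTurn switches to plain arcade-style
    // turning so the robot can rotate in place.
    public static double[] curvatureDrive(double throttle, double wheel, boolean quickTurn) {
        double turn = quickTurn ? wheel : Math.abs(throttle) * wheel;
        double left = throttle + turn;
        double right = throttle - turn;

        // Normalize so neither side exceeds full output.
        double maxMagnitude = Math.max(1.0, Math.max(Math.abs(left), Math.abs(right)));
        return new double[] {left / maxMagnitude, right / maxMagnitude};
    }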

The only difference between fused heading and yaw is that, if you calibrate the compass, fused heading uses the Pigeon's compass to reduce drift in the gyro's zero position, which makes the measurement a little more accurate; that's why we use it. If you don't calibrate the compass, the fused heading and yaw are exactly the same.
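
For reference, reading both values with the CTRE Phoenix API looks roughly like this (the device ID is a placeholder):

    import com.ctre.phoenix.sensors.PigeonIMU;

    PigeonIMU pigeon = new PigeonIMU(0); // placeholder device ID

    // Fused heading: gyro yaw corrected by the compass (only if the compass is calibrated).
    double fusedHeading = pigeon.getFusedHeading();

    // Raw yaw from the gyro alone.
    double[] ypr = new double[3];
    pigeon.getYawPitchRoll(ypr);
    double yaw = ypr[0];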

This is my first time reading your code, and I’m curious about something rather basic. In your geometry libraries you have a file called Twist2d. I don’t really understand what it would look like. Is it an arc where the robot changes direction? I’m confused about how it would be defined. Is there any reference material I can see?
Thanks

The Twist2d class is intended to represent a delta position, a velocity, or an acceleration, so its physical representation changes based on the use.

Two examples of how Twist2d objects are used are in our Kinematics class. The forwardKinematics method converts the delta encoder distance for each side of our drivebase and the delta gyro angle into a Twist2d object representing the overall delta position of our robot. The inverseKinematics method, on the other hand, converts a Twist2d object representing the overall vehicle velocity into a left velocity and a right velocity for the drivebase.

A couple of clarification points. First of all, our frame of reference is as follows: positive x is forward, positive y is to the left, and positive theta is counterclockwise from straight ahead (per the right-hand rule). Also, most of the time the dy value in any given Twist2d object is 0. This is because, since we use a differential drive, we assume that our robot goes forward (dx) and then turns (dtheta). This assumption is valid because, per differential calculus, if the time period over which you calculate your delta position, velocity, or acceleration is short enough, this method provides a relatively good approximation of the actual robot kinematics.
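
As a rough sketch of what those two Kinematics methods do (using plain doubles instead of our actual Twist2d/Kinematics classes, and a made-up kTrackWidth constant):

    // Forward kinematics: wheel deltas + gyro delta -> chassis twist {dx, dy, dtheta}.
    public static double[] forwardKinematics(double leftDelta, double rightDelta, double deltaThetaRad) {
        double dx = (leftDelta + rightDelta) / 2.0; // average forward motion
        double dy = 0.0;                            // a differential drive can't slide sideways
        return new double[] {dx, dy, deltaThetaRad};
    }

    // Inverse kinematics: chassis twist {dx, dtheta} -> {left, right} wheel velocities,
    // where kTrackWidth is the effective distance between the wheel centers.
    public static double[] inverseKinematics(double dx, double dthetaRad, double kTrackWidth) {
        double dv = dthetaRad * kTrackWidth / 2.0; // positive dtheta (counterclockwise) speeds up the right side
        return new double[] {dx - dv, dx + dv};
    }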

1. Cheesy-ish Drive
In what situations does the driver use the quickTurn mode (other than in-place turning)?
Why not make automatic switching between quickTurn and regular mode?

2. Sensor frame rate update
Since your loops seem to run at 10 ms, what is the sensor update rate for the drive (for getting the drive wheels' velocity from the SPARK MAX)?
What sensor update rate would you choose for the drivetrain if you were using a Talon SRX?
Is there a reason not to make it the same as the loop time? Does it overload the controller? (I'm asking because it seems that in 2018 the update rate for your drivetrain sensors was 50/100 ms, while the drive loop ran every 10 ms.)

I can speak to 1, because Jared mentioned it in the past. The simple way of doing this (you might think) would be to use quick turn when you aren't applying throttle. However, there is generally a large difference in turn rate between max turn while driving and max quick turn. As a result, the transition can cause problems like overshoot when the robot suddenly starts turning much faster.

There are ways to mitigate the difference by smoothing the transition, but I haven’t driven one I was happy with. Or you can use a button.

We actually use the throttle method, and we ramp the transition from forward-driving turning into quick turn to smooth it out. Our driver was happy, so we went with it.
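
Roughly something like this (an illustrative sketch with made-up constants, not our actual code; quickTurnBlend is state that persists between loop iterations):

    // Ramp into quick turn over ~0.25 s so the turn rate doesn't jump instantly
    // from throttle-scaled turning to full quick turn.
    final double kRampTime = 0.25; // seconds, assumed tuning value
    final double kDt = 0.01;       // 10 ms loop period

    if (quickTurn) {
        quickTurnBlend = Math.min(1.0, quickTurnBlend + kDt / kRampTime);
    } else {
        quickTurnBlend = 0.0;
    }

    // Blend between the curvature turn command and the quick-turn command.
    double turn = (1.0 - quickTurnBlend) * (Math.abs(throttle) * wheel) + quickTurnBlend * wheel;
    double left = throttle + turn;
    double right = throttle - turn;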

I’m having trouble following a piece of the code, and I’m hoping you can help. It is about robot localization/state estimation.

It seems that Limelight is giving you a target location, but that location is in camera coordinates, right? (I’m not familiar with Limelight.) So, that has to be translated to ground coordinates. I’m not seeing where or how you are doing that. How are you taking the location of the target in the image, and translating that into the distance and orientation from the camera? I assume that is happening in your robot code, and not on the Limelight itself.

I see this piece of code in Limelight.java

        // Normalize pixel coordinates to roughly [-1, 1], assuming a 320x240 image
        // with the origin at the center of the frame.
        double nY = -((y_pixels - 160.0) / 160.0);
        double nZ = -((z_pixels - 120.0) / 120.0);

        // Scale by half the view plane width/height to get view plane coordinates.
        double y = Constants.kVPW / 2 * nY;
        double z = Constants.kVPH / 2 * nZ;

        TargetInfo target = new TargetInfo(y, z);

but I don’t think there’s enough information there to get ground coordinates out of the image points, and I don’t see where it’s later modified.

Or is there an approximation being employed that I’m missing, so that you don’t need complete information?

Anyway, thanks for publishing your code. Any help understanding that calculation, or understanding your general approach to localization would be appreciated.

Our drivers do not use the quick turn mode other than in-place turning, and even that is kept to a minimum. Quick turn mode is controlled by a button on the turn stick, and our drivers know that it’s only pressed when they want to turn in place. As far as I know, we have never attempted to try automatic switching between regular turning and quick turning, because it’s never been really needed.

This year we were kind of iffy on what type of encoder to use for the drivebase, because we weren't initially sure whether we wanted (or would be able) to use a 3-Talon SRX/Mini CIM drive gearbox or a 2-SPARK MAX/NEO drive gearbox. What we ended up using for an encoder was the US Digital S4T encoder on the front drive wheels, because it was compatible with both motor controllers (since it was just wired to the DIO on the roboRIO) and also had comparable resolution to the SRX Mag Encoder.

Since the encoder isn't read over CAN, there isn't a sensor frame rate per se; we just read it over DIO once per loop in our Drive class. If we were to use the Talons with the SRX Mag Encoder, we would probably use the 5 ms feedback status rate, which we used in 2018. That never really gave us issues in terms of overloading the controller.
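
For reference, reading a quadrature encoder over DIO with WPILib looks roughly like this (the channel numbers and scaling constants are placeholders):

    import edu.wpi.first.wpilibj.Encoder;

    // Quadrature encoder wired to roboRIO DIO channels 0 and 1 (placeholders).
    Encoder leftEncoder = new Encoder(0, 1);
    // Convert ticks to distance; kWheelCircumference and kTicksPerRev are made-up constants.
    leftEncoder.setDistancePerPulse(kWheelCircumference / kTicksPerRev);

    // Sampled once per 10 ms loop iteration:
    double distance = leftEncoder.getDistance();
    double velocity = leftEncoder.getRate(); // distance units per second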

The reason it wouldn't be the same as the loop time is that (I'm not an expert on Talon SRXs, so I could be wrong) the Talons might have a control loop that runs even faster than our loopers, so a faster feedback update rate would be more optimal for that. We usually decide numbers like these based on experimentation.

So the code you referenced is just one step in a much larger process that eventually leads to a position of the vision target in the field frame of reference. I'll go through those steps in this post to make it a little bit clearer.

I guess the place to start is with the LimelightManager class. You can read the JavaDoc comments at the top of each of these classes to understand their purpose, so I won't go into that a lot. The main part here is that the enabled loop's onLoop method calls the RobotState singleton's addVisionUpdate method, passing in the current target that the active Limelight sees, along with some other parameters.

The getTarget method in the Limelight class extrapolates the top corners of the target (because sometimes the bottom of the vision target is occluded by a hatch panel, so choosing the top corners is more consistent) and generates a TargetInfo object for each corner, representing its position in view plane coordinates, using the math described here in the Limelight documentation.

In the addVisionUpdate method in RobotState, these target infos are converted into Translation2d objects representing the camera-to-vision-target position, by accounting for the camera's pitch and then scaling the coordinates to real-world coordinates using the known height differential between the camera and the vision target. Then, these two positions are averaged to find the midpoint, which is transformed into the field frame of reference and added to two GoalTracker objects: one tracking the high goal (i.e. the cargo ports on the rocket) and one tracking the low goal (i.e. the hatch panel ports on the rocket, etc.). GoalTracker objects help to stabilize the vision target position over time to reduce variation caused by noise.
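
To make that conversion a bit more concrete, here is a rough sketch of the geometry (plain doubles instead of our actual classes; the parameter names are placeholders):

    // Convert a normalized view-plane point (nY = horizontal, nZ = vertical) into a
    // camera-to-target translation on the floor, given the camera's pitch and the known
    // height difference between the lens and the vision target.
    public static double[] getCameraToTarget(double nY, double nZ, double cameraPitchRad,
                                             double cameraHeight, double targetHeight) {
        // Treat the corner as a unit-depth ray (x = 1, y = nY, z = nZ) in the camera frame,
        // then rotate the x-z components by the camera pitch so z is measured relative to the floor.
        double x = Math.cos(cameraPitchRad) - nZ * Math.sin(cameraPitchRad);
        double y = nY;
        double z = Math.sin(cameraPitchRad) + nZ * Math.cos(cameraPitchRad);

        double differentialHeight = targetHeight - cameraHeight;
        if (z * differentialHeight <= 0.0) {
            return null; // the ray never reaches the target's height
        }
        // Scale the ray so its vertical rise equals the known height difference.
        double scaling = differentialHeight / z;
        return new double[] {x * scaling, y * scaling}; // {forward, left} from the camera
    }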

Just a note on tracking the two heights. When we’re seeing a vision target, we don’t know if it’s the taller target or the lower target, so we calculate the position for both, and store them separately. Then, when we need to use the actual position (i.e. we’re auto aiming the turret or auto scoring), we select which one to use based on the scoring objective and superstructure position.

Thanks. Now I need to work through the math.

It took me a while (darned that real life/job/financial interference) but I finally figured out what this was doing…at least up to the point where it passed off the data to GoalTracker.

The Auto Steer functionality is so awesome. I love the idea of driving the robot as autonomously as possible during teleop play.

How often did the drivers use that functionality? I was watching some of your matches and was trying to determine how much of the navigating is human-controlled, as the driving looks so smooth. When running scoring cycles to the rocket ship, how were the goal trackers utilized?

I obviously cannot answer the question of how often the functionality was used, but I believe I can point to how the goal trackers were used. Anyone, feel free to correct me; I'd love to learn more about the system than what I can get from reading it.

TLDR
The aiming parameters could be used to autonomously steer the drivebase during teleop and autonomous, as well as aim and control the turret during all parts of the match minus endgame.

The GoalTracker objects were storage for GoalTracks; they also handled divvying up updates to the tracks.

From the LimelightManager class, the RobotState was updated with information from the appropriate Limelight (at certain points, one Limelight's view was obstructed by the superstructure).

Within the RobotState singleton, both trackers were updated with the information from the limelight.

Everything is packaged together with the getAimingParameters() method. This is where the desired GoalTracker was selected, and its tracks were sorted based on constants found here.
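
My (possibly inaccurate) mental model of how a track gets scored is something like this, with the weights standing in for the tuned constants:

    // Hypothetical sketch of scoring a goal track; the highest-scoring track wins.
    public static double scoreTrack(double stability, double ageSeconds, boolean isCurrentTrack,
                                    double kStabilityWeight, double kAgeWeight, double kSwitchingWeight) {
        return kStabilityWeight * stability                    // prefer consistently-seen targets
                + kAgeWeight * Math.max(0.0, 1.0 - ageSeconds) // prefer recently-updated tracks
                + (isCurrentTrack ? kSwitchingWeight : 0.0);   // hysteresis: stick with the current track
    }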

Side note for 254
Absolutely beautiful code this year (just like last year, and the year before that :wink: ). It’s really fascinating to be able to see the progression of your code via the structure of things (adding on prismatic linear extension after SFR, multiple different planners for the superstructure after the bottom intake roller was removed). As always, your state machines are very clean, and I particularly enjoyed the addition of the ServoMotorSubsystem. I have made use of the motion profile system within each subsystem as an effective way to simulate mechanisms for my own projects.

A question of my own:
Was the integration of the custom motion profile system for each subsystem going to be used for the prismatic linear extension, and then just left alone after a second speed for the arm during thrust was added?

Also, the ‘tuck’ capability was only used when the drivers pressed the stow button, which seemed to only happen here. Am I correct in both statements?

Reading your code the past few years has most certainly raised my ceiling, as I would expect it has for many others. Thanks again.

3452
GreengineerZ

Right. Haha, I understand how they did it technically; I was more just curious how often they used it. I'm guessing they lose sight of the targets often.

Our drivers used the auto steer functionality pretty much every match. It was used to help line up with the loading station pretty much every time we went to get a hatch panel, including in auto/sandstorm.

The GoalTracker objects weren't really used to keep track of vision targets over long periods of time when we couldn't see them; rather, they were used to stabilize the position of targets while they were in frame (to account for noise), which improves accuracy. @maccopacco's summary is a good general overview of how vision worked on our robot.

When we're writing code that will likely be used in future seasons, such as the ServoMotorSubsystem class, we like to develop it so that we have options in how we implement it later, which was a big reason for adding our custom motion profiling to it. As for the second speed for the arm during thrusting, that was just meant to slow down the arm's Motion Magic movement during our prismatic motion so that we can effectively move in a straight line.

I didn't really see it in that video (I could have missed it), but you are correct that it was only used when the operator pressed the stow button. This is because the tuck motion is slower and is only necessary when we are under heavy defense or have other space constraints that require us to keep superstructure movement within the frame perimeter. I'm not sure whether or not it was ever used in a match.

Post Chezy Champs Update
Here are some of the changes we made to our code for Chezy Champs:

  • We extended our side cargo ship autonomous mode, so we now score our second hatch panel and attempt to return to the loading station. We chose to run this auto off of hab level 1 (rather than hab level 2 like at Champs) to increase our chance of scoring the second hatch panel before teleop, and to start moving towards the loading station so we’re primed to pick up another hatch panel once teleop starts.
  • We also reduced the distance we drive forward when placing the second hatch panel in the aforementioned auto to avoid situations like this.

This commit contains all the changes.

As a side note, we chose to keep our Spark MAX Firmware and API at 1.1.33 and 1.1.9, respectively. This is because, even after using their workaround for this issue, we still encountered weird issues where random devices would blink the codes for either the brushless encoder error or gate driver fault, effectively disabling that device until robot code was restarted.

Hey, I wanted to ask about your LEDs: how did you connect them? Via an Arduino, or straight into the roboRIO?

Our LED strips are wired to one of the two CANifiers on our carriage. They are then controlled in code here.
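
For anyone wanting to do something similar, driving an RGB strip from a CANifier with the Phoenix API looks roughly like this (the device ID and color values are placeholders, and the channel-to-color mapping depends on how the strip is wired):

    import com.ctre.phoenix.CANifier;

    CANifier canifier = new CANifier(0); // placeholder device ID

    // Each LED channel is a PWM output driving one color channel of the strip.
    canifier.setLEDOutput(1.0, CANifier.LEDChannel.LEDChannelA); // e.g. red
    canifier.setLEDOutput(0.2, CANifier.LEDChannel.LEDChannelB); // e.g. green
    canifier.setLEDOutput(0.0, CANifier.LEDChannel.LEDChannelC); // e.g. blue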