Team 1519 - 2018 Code Release w/2019 Beta Test Updates (Java)

It’s the day before kickoff and we on Team 1519 are finally ready to release our 2018 code, including the minor updates we made to support the 2019 beta control system.

Autonomous remains one of our areas of focus; this year we had 17 different autonomous programs, each of which could have multiple variants depending upon field layout. These autonomous programs were made up of smaller building blocks that performed portions of the drive path, placement on the scale or switch from different locations, etc. The more complex autonomous programs were sequences of the autonomous program building blocks, all strung together using the Command-based robot features.
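The “building blocks strung together” approach described above can be sketched without any robot framework at all. This is a hypothetical, standalone model of the Command-based lifecycle (initialize/execute/isFinished), not the actual WPILib API or the 1519 code; all class names here are illustrative.

```java
import java.util.List;

public class CommandSketch {
    /** Minimal stand-in for a Command-based building block. */
    interface Command {
        default void initialize() {}
        void execute();
        boolean isFinished();
    }

    /** Example building block: "finishes" after a fixed number of ticks,
     *  the way a drive command finishes after a measured distance. */
    static class CountingCommand implements Command {
        int ticks = 0;
        final int goal;
        CountingCommand(int goal) { this.goal = goal; }
        public void execute() { ticks++; }
        public boolean isFinished() { return ticks >= goal; }
    }

    /** Runs each building block to completion, in order — the essence of
     *  a sequential command group. */
    static void runSequence(List<Command> commands) {
        for (Command c : commands) {
            c.initialize();
            while (!c.isFinished()) {
                c.execute();
            }
        }
    }
}
```

A complex autonomous program is then just a longer list passed to the sequencer, which is why 17 programs (plus variants) stayed manageable.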

Our mainstay autonomous programs were the “StartRightScaleAndSwitch” program, which placed 2 cubes - one on the scale, and one on the switch, for all 4 combinations of scale and switch - and a multi-cube scale auto, which placed 2+ cubes on the scale (automatically determining which is the “hot” scale from the field data). We also regularly ran a “smart switch” autonomous, and many of the other variants of these.

Our autonomous driving paths are almost exclusively handled by a “DriveStraightOnHeading” command, which has the robot drive an odometry-measured distance on a given heading (specified in degrees), using closed-loop steering to hold that heading. Nearly all of our “turns” in autonomous driving are handled by the closed-loop control of “DriveStraightOnHeading.” In other words, to “go straight and then turn left,” we would drive forward 100 inches on a heading of zero degrees, and then drive forward 100 inches on a heading of 270 degrees. The robot drives forward 100 inches and then does an arcing left turn onto the new heading of 270 degrees. No “turn” command is actually specified. This makes for easy, smooth navigation. (Well, as long as the navX gyro works properly, which it did for us very reliably.)
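The closed-loop steering idea above can be sketched as a simple proportional correction on heading error, applied as a differential between left and right drive power. This is a hypothetical illustration, not the 1519 implementation: the class name, gain value, and power convention are all assumptions, and it follows the navX convention that heading increases clockwise (so a target of 270 degrees from a heading of 0 produces a left arc).

```java
public class HeadingSteerSketch {
    // Assumed proportional gain, in motor power per degree of heading error.
    static final double KP = 0.02;

    /** Wrap an angle difference into [-180, 180) degrees so the robot
     *  always turns the short way toward the target heading. */
    static double wrapDegrees(double error) {
        return ((error + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
    }

    static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }

    /** Returns {leftPower, rightPower} for a forward speed plus a
     *  proportional steering correction toward the target heading. */
    static double[] steer(double forwardSpeed, double currentHeading, double targetHeading) {
        double error = wrapDegrees(targetHeading - currentHeading);
        double correction = KP * error;
        return new double[] {
            clamp(forwardSpeed + correction),
            clamp(forwardSpeed - correction)
        };
    }
}
```

With zero error the two sides get equal power (drive straight); a target of 270 degrees yields a negative wrapped error, reducing left power and raising right power, which produces the arcing left turn described above with no explicit “turn” command.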

If you’d like to see the robot in action, a good match to watch is Qual 44 at the University of New Hampshire District Event. Our robot is on the blue alliance and starts in the near corner of the screen.

We’re doing something different for the code release this year - rather than attaching a zip file to this post with all the code, we’ve made our “2018-Robot” repository on GitHub public so that all can see it: https://github.com/FRC1519/2018-Robot

Feel free to post questions here or to email or PM me if I don’t seem to be paying attention to this thread during the build season…


I like the simplicity; it has the feel of path-following without the headache of figuring that out.


@Ken_Streeter I see you set all the Talons to coast. That makes sense with sequential calls to the DriveStraightOnHeading command: when the first command ends and zeroes the commanded speed, the robot will coast into the next DriveStraightOnHeading. Does the coasted distance get lost, or is it too little to worry about?
Q2: Maybe I haven’t followed it fully, but does the last Drive… command coast, or does it somehow decelerate to position? For example, when it drove to the far scale and dropped the cube.

Yes, we do set all the talons to coast. We do that for a few reasons:

  • It avoids having the robot make a real sudden stop if the driver lets go of the controls. With the robot center of gravity being quite high when there is a cube elevated near maximum height, this helps avoid tipping. The slower “coast to a stop” is preferable.

  • It makes the robot drive more smoothly (less jerkily), particularly when stopping, whether in teleop or autonomous.

In autonomous, if we were trying to drive and stop the robot in a very specific spot, the coasting behavior would be an issue. However, we planned our autonomous paths so that this would be a feature, rather than an impediment. For example, when placing a cube on the scale, the specified distance to drive towards the scale actually gets reached about 4-8 inches before the robot contacts the scale. The robot then coasts up against the scale more smoothly, which ensures that the robot is in the right place (touching the scale lightly). We do the same thing when driving to deliver to the switch.

In regard to the question about some distance being lost in between sequential commands – yes, we do lose a bit of “recorded distance” in between commands. The good news is that this distance is pretty much the same from run to run, at least if the robot is going at approximately the same speed. So, as we are empirically tuning (testing) programs to make them work, the “lost distance” is repeatable for a given autonomous path.

Another way to think about this is that if we command the robot to drive 100 inches straight forward, we’ll get a fair bit of overshoot before the robot comes to a stop: at half speed, maybe 6 inches extra; at full speed, maybe 10-15 inches extra.

If we had instead commanded the robot to drive 100 inches in two commands, with the first command for 50 inches immediately followed by a second command for 50 inches, the “overshoot” from the first command would be only a couple of inches, because the second command starts 20ms after the first one finishes. (At a robot speed of 10 feet per second, that’s 120 inches per second. At that speed, the robot covers 2.4 inches in 20ms.) The second command will then drive the next 50 inches, and the robot will coast to a stop, with probably at least a foot of overshoot from the second command.
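The 20ms arithmetic above is easy to verify directly. This small helper (names are mine, purely illustrative) converts a speed in feet per second and an interval in milliseconds into inches traveled, matching the 2.4-inch figure for 10 ft/s over one 20ms loop period.

```java
public class CoastMath {
    /** Inches traveled at a given speed (ft/s) over a time interval (ms),
     *  e.g. the distance covered during one 20ms robot loop between
     *  sequential drive commands. */
    static double inchesTraveled(double feetPerSecond, double millis) {
        double inchesPerSecond = feetPerSecond * 12.0;  // 12 inches per foot
        return inchesPerSecond * (millis / 1000.0);
    }
}
```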

What appears to be deceleration to position is just the robot coasting the rest of the way to the scale. We actually rely upon the coasting to let the robot drive up gently against the scale, to ensure that the robot is right against the scale, without a gap between the robot and the scale.

Thanks, that’s in line with what I was expecting. I didn’t want to waste a bunch of time looking for something that was not there.