Velocity PID control and setpoint ramping

For the first time, 449 is running a drive with velocity PID control (well, actually P control with a simple integrator on the output).
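For context, the scheme in question - P control with a simple integrator on the output - can be sketched roughly like this (class, variable names, and gains here are made up for illustration, not our actual robot code):

```java
// Minimal sketch of "P control with an integrator on the output" for velocity.
// The P term computes a correction from the velocity error, and that
// correction is accumulated into the motor command rather than applied directly.
public class IntegratedPVelocityController {
    private final double kP;     // proportional gain on velocity error
    private double motorCommand; // accumulated (integrated) output, -1..1

    public IntegratedPVelocityController(double kP) {
        this.kP = kP;
    }

    /** One loop iteration: nudge the motor command toward the setpoint. */
    public double calculate(double setpoint, double measuredVelocity) {
        double error = setpoint - measuredVelocity;
        motorCommand += kP * error; // integrate the P output
        motorCommand = Math.max(-1.0, Math.min(1.0, motorCommand)); // clamp
        return motorCommand;
    }
}
```

Because the correction accumulates every loop, a large sustained velocity error (like releasing the joystick at full speed) ramps the command hard against the direction of motion, which is consistent with the near-instant stops we're seeing.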

On the whole we are quite happy with it, but various people have noticed that driving with it can be a bit “jerky” and violent compared to standard controls. Namely, if the driver lets go of the joystick while the robot is moving at full speed, the control loop will (as it should) stop the robot as fast as possible (which, it seems, is pretty close to instantaneous). This is not something we’re used to, and it has caused some concern, though it has gotten better with additional driver practice.

Additionally, we managed to shear all the teeth off of the 14-tooth gear in one of our WCP SS gearboxes at our last event. We’re replacing these with steel gears, but we’re also concerned that the sudden stops are placing additional stress on the drive, and as we only have steel replacements for the 14-tooth gears (and not for the 60-tooth gears meshing with them) we’d like to err on the side of caution.

So, we’re looking at potentially implementing a setpoint ramp to tame it a bit. I was wondering if other teams have had any experience with this, and if they found setpoint ramping to be a necessary/helpful addition and, if so, what ramp rates they found to work best.

EDIT: We’re handling our PIDs using the PIDSubsystem object in WPILib.

If the problem is just when the robot stops, I would actually recommend using a smaller proportional gain whenever the commanded speed is close to zero. This will let the drivetrain coast more if your drivers want it to. It should also stop ruining your gears; they are probably being worn down by the constant back-and-forth oscillation while trying to stop, whereas when PIDing to a non-zero value they shouldn’t see any abnormal loads.

I am pretty sure there is a voltage ramp function for the Talon SRX, where you can set a maximum ramp rate so that there is less jerkiness.

Is there any convenient way to do this using the standard WPILib PIDSubsystem object?

We’re doing our PID on the roboRIO, not on the motor controllers. Since our PID output is integrated, we can effectively implement a voltage ramp rate by clamping the output with the built-in setOutputRange() function, but I’ve heard that input (setpoint) ramping is a better way of handling this than output ramping.

We use LabVIEW, and this VI is probably what does the trick.
pic.PNG

I am sure that this method is available in Java and C++.


Inside of the execute method of the command running your drivetrain:


// threshold, smallKP, normalKP, etc. need to be tuned for your robot
if (Math.abs(commandedSetpoint) < threshold) {
   yourSubsystem.getPIDController().setPID(smallKP, kI, kD);
} else {
   yourSubsystem.getPIDController().setPID(normalKP, kI, kD);
}

Ah, didn’t know there was a method to change the PID values after instantiating the object. Thanks! We’ll give this a look, too.

Are you using Talon SRXs with their built-in PID control? If so, I highly recommend following the procedure for tuning PID in the Talon user guide. We had been doing things in a way similar to you (mostly P) and had similar results. Then we got the great advice to try things “by the book”: start with zeros all around for PID, increase the F (feed-forward) term until you get the speed you expect based on your input, and then use PID to tune from there. Our drive is much smoother now.

We’re not using the TalonSRX PID, we’re handling it all using code available from the Command-based framework in WPILib.

Since we’re using an integrator on our output, we can’t really use the supported feedforward term. We could implement our own, but I’m not sure the lack of a feedforward term is what’s causing problems.

Consider what would happen if you added feedforward and took the output of the FPID directly to control the motor rather than integrating it.

This would certainly be another way of doing velocity control. I don’t think I can guess exactly what the difference in performance would be without an actual robot to test on - naively, I suspect this could yield faster response times than integrating the output (since the F term provides an immediate approximation of the desired output, while the integrated output has to wind up to it) - but we’re not really worried about response lag.

It’s worth noting that we’re not seeing any problems with oscillations or overshoot, we’re more worried about the taxing effect on the drive components from the (correct) behavior of the motors fighting the robot’s forward momentum to come to a stop quickly.
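For reference, the direct-output approach suggested above might be sketched like this (gains and names are placeholders, not anyone's actual code):

```java
// Sketch of direct F + P velocity control: the feedforward term supplies an
// immediate estimate of the needed output, and the P term trims the residual
// error. The output is used directly as the motor command, not integrated.
public class FeedforwardPVelocityController {
    private final double kF; // output per unit of setpoint (roughly 1 / max speed)
    private final double kP; // proportional gain on velocity error

    public FeedforwardPVelocityController(double kF, double kP) {
        this.kF = kF;
        this.kP = kP;
    }

    public double calculate(double setpoint, double measuredVelocity) {
        double error = setpoint - measuredVelocity;
        double output = kF * setpoint + kP * error;
        return Math.max(-1.0, Math.min(1.0, output)); // clamp to motor range
    }
}
```

With kF near 1/maxSpeed, the feedforward alone puts the command in roughly the right place on the first loop iteration, which is where the faster response would come from.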

I would set up something that limits how quickly your velocity setpoint can change when you are slowing down. You could put this in the part of your code where you’re mapping joystick position to desired velocity.

For example, you could have something like:


//set currentSetpoint here
if(currentSetpoint - lastSetpoint > .1){
    currentSetpoint = lastSetpoint - .1;
}
//set your PID controller here
lastSetpoint = currentSetpoint;

You could replace .1 with whatever you want the maximum change in velocity setpoint per 20 ms loop to be. You’d also need to do something clever for when you’re going in reverse and slowing down, but the idea is the same.

Let’s test that code.

inputs:
currentSetpoint = .1
lastSetpoint = 1

output:
currentSetpoint is not updated

I’m not sure why currentSetpoint wouldn’t be updated.
currentSetpoint is set before the if statement; since the condition isn’t met, the value assigned on that first line is the one that gets used.

It does only work in one direction though - if you wanted it to work in both directions for only decelerations, you’d need to know which way you were going.

The large step in setpoint is allowed through without being ramped.

.1 - 1 = -.9 which is not > .1, so the logic does not prevent a jump in setpoint from 1 to .1.
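For what it’s worth, a version that clamps the change symmetrically in both directions (a simple slew-rate limiter on the setpoint) could look like this - the class name and the per-loop limit are illustrative:

```java
// Limits how fast the velocity setpoint may change per loop, in both
// directions. maxDelta is the allowed change per 20 ms iteration (e.g. 0.1).
public class SetpointRamp {
    private final double maxDelta;
    private double lastSetpoint = 0.0;

    public SetpointRamp(double maxDelta) {
        this.maxDelta = maxDelta;
    }

    public double ramp(double requestedSetpoint) {
        double delta = requestedSetpoint - lastSetpoint;
        if (delta > maxDelta) {
            lastSetpoint += maxDelta;      // step up by at most maxDelta
        } else if (delta < -maxDelta) {
            lastSetpoint -= maxDelta;      // step down by at most maxDelta
        } else {
            lastSetpoint = requestedSetpoint; // small changes pass through
        }
        return lastSetpoint;
    }
}
```

Note this limits accelerations as well as decelerations; to ramp only decelerations you’d additionally check whether the requested speed is closer to zero than the last one before clamping.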

One thing I’m still rather clueless about is how little ramping we could get away with while still reducing the impact on the gears. We haven’t yet timed how fast the robot is decelerating under the current control loop, and we probably won’t be able to between now and our next district.

If we knew both that and by what factor we need to reduce the loading on the drive to reduce the chance of future damage to acceptable levels (I’d say just preventing gear failure would be acceptable), then we could calculate a rate to start with. Since we’re not immediately stripping this out, I don’t think we’re too far above such a threshold, so perhaps halving the acceleration would be a good place to start…

Alternatively, if we knew the max rated load of the gears we could calculate the maximum acceptable acceleration from the robot weight and gear diameter, but I don’t see any such figure on VexPro (and I wouldn’t be surprised if gear failure is more complicated than simply a “maximum load”).

Anyone have any guesses?

EDIT: Looking at an online gear strength calculator (http://www.botlanta.org/converters/dale-calc/gear.html), it seems I should be aiming for under 100 lbs of force on the gear.

The gear that broke (14t) was meshing with the gear on the output shaft (60t). Both of these gears are experiencing the same load. The load on this set of gears is clearly going to be higher than that on the input gear that meshes with the CIMs. So, I reason thus:

The maximum permissible torque on the output shaft is 100 lbf * 3 inches (pitch diameter of 60t gear).

The robot mass is about 130 lbm with bumpers and battery, one half of which is loaded on any given side of the drive. The wheels are 8-inch diameter.

Thus, the maximum permissible acceleration of the robot is (100 lbf * 3 inches) / (130/2 lbm * 8 inches) = 300/520 ≈ 0.58 g ≈ 19 ft/s^2

Our robot moves about 10 feet/second at top speed, so to stay under that limit we should be taking no less than about half a second to decelerate from full speed to zero.
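Spelling the arithmetic out (method and variable names here are just for illustration; the radii appear only as a ratio, so using pitch/wheel diameters consistently, as above, gives the same answer):

```java
// Worked numbers from the calculation above; all inputs are the figures
// quoted in the post.
public class GearLoadEstimate {
    public static double maxAccelFtPerS2() {
        double maxToothForceLbf = 100.0;      // from the gear strength calculator
        double gearPitchRadiusIn = 3.0 / 2.0; // 60t gear, 3 in pitch diameter
        double wheelRadiusIn = 8.0 / 2.0;     // 8 in wheels
        double massPerSideLbm = 130.0 / 2.0;  // half the robot on each side

        double shaftTorqueInLbf = maxToothForceLbf * gearPitchRadiusIn; // 150
        double wheelForceLbf = shaftTorqueInLbf / wheelRadiusIn;        // 37.5
        double accelG = wheelForceLbf / massPerSideLbm;                 // ~0.58 g
        return accelG * 32.2;                                           // ~18.6 ft/s^2
    }

    public static double minStopTimeS(double topSpeedFtPerS) {
        return topSpeedFtPerS / maxAccelFtPerS2(); // ~0.54 s from 10 ft/s
    }
}
```

So the limit works out to roughly 0.58 g, or a minimum stop time of a bit over half a second from top speed.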

Does this seem right to everyone else? Any idea how close to this we should be willing to stray?

So, in LabVIEW…
a linear ramp…
We use this.

Believe it was written by Killer Bees, we have just been dragging it along for years.





We toyed with an input ramp today, and found that the values we had been considering (roughly a half-second from 0 to max) were way too slow for effective driving.

We’re going to move forward with the ramp disabled for now, and hope that switching the 14t gear to a steel version will prevent further catastrophic drive failure (we observed no major damage to the 60t gear meshing with the 14t gear upon taking apart the gearboxes yesterday, so we think we’ll probably be OK).

Thanks for all the help!