Lately, I have been working on our team’s autonomous program. Essentially, we have decided what we want to do and are working toward accomplishing that. However, I have some questions regarding the motion planning aspect of the autonomous program.

The main presentation I have watched on motion planning is 254’s from worlds a couple of years ago. They talk, at one point, about plotting a function that describes, in a mathematical way, the curved path which your robot takes. This makes perfect sense, and I absolutely love this idea. However, there is one major problem which I don’t know how to solve. Whether I missed it in the presentation, it wasn’t addressed as field conditions didn’t necessitate it that year, or I just didn’t understand it, I’m not sure. But my question is this:

What happens when you get knocked off course or off to the side of your pre-plotted path?

The encoders and PID control make sure your wheels spin the right amount to follow that path, but if your entire robot is oriented incorrectly or a foot off to the side, how do you know? I assume that's where the IMU comes in. Some combination of the accelerometer, gyro, and magnetometer will give me orientation and position relative to my starting position, and I use PID or something similar to bring that error to zero?

Sorry if this rambled, or asked questions that have already been answered.

Thanks,
zdwempe

EDIT: Forgot to mention that we’re using the BNO055 IMU and Talon SRX Motor Controllers.

The IMU will give you orientation (heading), but it by itself does not provide position. You could combine some form of odometry with that heading to calculate position, but there are of course some technical challenges. Will your odometry maintain sufficient accuracy over time? Is your odometry effective when you encounter a disturbance (getting knocked off course)?

Simultaneous Localization and Mapping (SLAM) is at the other end of the spectrum of technical complexity. It will give you your position, but it’s not simple.

Many Autonomous routines are quite successful either assuming no disturbance (the GDC tends to keep opponent robots separated during Auto) or using target tracking for mid-course corrections.

In the presentation, we mostly talked about the case where major disturbances are not expected, because this is simpler (still, I think the presentation was a bit too technically deep for most of the audience and we’ll tweak our next one).

There are lots of strategies for how to deal with this, but generally they all boil down to some variation of the following:

1. Estimate where you believe your robot to be (from some combination of sensors, as you describe).

2. Determine where you would like your robot to be right now (based on elapsed time in a trajectory, the nearest point on a path, or something else).

3. Figure out the components of the error between these two poses (with respect to your current pose). There are naturally three parts:

a. Along-track error. Your robot is lagging behind (or leading) the desired pose.
b. Cross-track error. Your robot is deviating to the side of the desired pose.
c. Angular error. Your robot orientation is deviating from the desired pose.
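In code, this decomposition is just a rotation of the field-frame error into the robot's frame. A minimal sketch (the function name and the frame convention of +x forward, +y left are my own choices, not from the presentation):

```python
import math

def pose_error(current, desired):
    """Split the error between two poses (x, y, heading in radians) into
    along-track, cross-track, and angular components expressed in the
    robot's current frame (+x forward, +y left)."""
    cx, cy, ch = current
    dx, dy, dh = desired
    ex, ey = dx - cx, dy - cy          # error vector in the field frame
    # Rotate the error vector into the robot frame
    along = math.cos(ch) * ex + math.sin(ch) * ey
    cross = -math.sin(ch) * ex + math.cos(ch) * ey
    # Wrap the heading error into (-pi, pi]
    ang = math.atan2(math.sin(dh - ch), math.cos(dh - ch))
    return along, cross, ang
```

For a robot at the origin facing +x with the desired pose a meter ahead and half a meter to its left, this returns an along-track error of 1.0 and a cross-track error of 0.5.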

4. Construct a controller to drive these sources of error to zero. PID controllers work just fine. For a holonomic robot, this is easy: put a PID controller on the x velocity (fwd/rev) based on along-track error; a PID controller on the y velocity (strafe) based on cross-track error; and a PID controller on the angular velocity based on the angular error.
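A sketch of the holonomic case with a minimal PID class (the gains below are made-up placeholders, not tuned values):

```python
class PID:
    """Minimal PID controller; gains are illustrative, not tuned."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# One controller per error component (placeholder gains)
along_pid = PID(kp=2.0)
cross_pid = PID(kp=2.0)
angle_pid = PID(kp=3.0)

def holonomic_command(along_err, cross_err, ang_err, dt):
    vx = along_pid.update(along_err, dt)    # forward/reverse velocity
    vy = cross_pid.update(cross_err, dt)    # strafe velocity
    omega = angle_pid.update(ang_err, dt)   # angular velocity
    return vx, vy, omega
```

Each error component gets its own independent loop, which is exactly why the holonomic case is the easy one.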

For a non-holonomic robot, you can't strafe, but you can steer back towards the path. What you actually want is a PID controller that maps from cross-track error to a steering adjustment (if you are to the right, steer a bit left, etc.).

Execute your PID controllers, add in feedforward gains (from your original trajectory, for example), and now you have new desired velocities for your motors.

Note that, as with most things, the devil is in the details. You will need to make sure you choose your desired pose carefully (using the time-indexed trajectory is simplest), and ensure your controllers don't end up fighting each other. In the non-holonomic case, you will have two controllers that both want to affect steering; the easiest fix is to make the output of the cross-track-error controller an input to the angular-error controller.
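A sketch of that cascade for the non-holonomic case, using plain proportional terms for brevity (the gains, the clamp limit, and the sign convention are all illustrative assumptions):

```python
def steering_command(cross_err, ang_err, kp_cross=1.0, kp_ang=2.0, max_offset=0.5):
    """Cascaded sketch: cross-track error produces a desired heading
    offset (clamped so we never command an extreme heading), which is
    folded into the angular-error loop. With +y to the left, being off
    to the right gives a positive cross-track error and a left turn."""
    heading_offset = max(-max_offset, min(max_offset, kp_cross * cross_err))
    return kp_ang * (ang_err + heading_offset)
```

Because the cross-track loop only nudges the heading setpoint, the two controllers cooperate instead of fighting over the same steering output.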

What confuses me is component #1 of your solution. When estimating the current robot position, what method do I use for that? Do I use data from the accelerometer on the IMU? Some combination of that and acquiring a distance from vision tracking? I (at least somewhat) understand how to get from knowing my current position to getting back on track, but where I’m really lost is how I know my current position and thus how much I need to adjust my drive path in order to get back to the planned path.

There are many sensors that can inform you about the current position of your robot. Encoders, gyros/IMUs, vision systems, ultrasonic sensors, LIDARs, contact sensors, etc…

Moreover, there are many, many approaches for taking various combinations of these and producing an overall estimate of robot pose.

One approach that tends to work well in FRC is to combine encoders with a gyro/IMU: estimate your linear velocity from the encoders (on a differential drive, by averaging the left and right wheel velocities) and take your heading from the gyro.

Once you have the velocity and heading, you can integrate the velocities over time to obtain position (taking into account that robot velocity is in the direction of the robot heading).

This assumes your robot moves at a constant velocity in a constant direction over each period of time (dt), which we know is not true; but if dt is small enough, this is a good approximation (it is a Riemann sum).
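As a sketch of that integration for a differential drive (the wheel-averaging model and the names here are my own filling-in of the standard approach, with the heading supplied by the gyro each step):

```python
import math

def update_pose(x, y, heading_rad, left_vel, right_vel, dt):
    """One odometry step: average the wheel velocities for robot speed,
    trust the gyro for heading, and assume both are constant over dt
    (the Riemann-sum approximation described above)."""
    v = (left_vel + right_vel) / 2.0
    x += v * math.cos(heading_rad) * dt
    y += v * math.sin(heading_rad) * dt
    return x, y

# Driving straight along +x at 1 m/s for 1 s, in 50 small steps:
x, y = 0.0, 0.0
for _ in range(50):
    x, y = update_pose(x, y, 0.0, 1.0, 1.0, 0.02)
# x ≈ 1.0, y ≈ 0.0
```

In a real loop you would read fresh encoder velocities and a fresh gyro heading every iteration rather than passing constants.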

If you are interested, there are many other, more sophisticated ways to do this that exploit the fact that there is also overlap in what the sensors can sense (for example, an IMU can tell you something about your linear velocity, and encoders can tell you something about your angular velocity) and "fuse" these measurements intelligently by considering the uncertainty properties of each source. Look up Kalman Filters, Extended Kalman Filters, and Particle Filters. But in general, the formulation above is good enough for FRC.