How to Dynamically Update Trajectory Based on AprilTag Pose

Hello!

We are new to the Command-Based framework, and are trying to figure out how to build a simple autonomous routine that follows an AprilTag around using TrajectoryGenerator and SwerveControllerCommand. Our goal is for the robot to drive to an AprilTag someone is holding, and to keep following it if they move it (we’re not trying to accomplish any particular game task, just to get a basic implementation running).

We were able to get our auto command to run once, but we run into issues when it reaches the end of its trajectory and tries to calculate a new one. Because TrajectoryGenerator takes a Pose2d object for the “start” argument and not a function, we can’t get the trajectory to update after each loop. On the first iteration of the command the robot drives as we expect, but on every iteration afterwards the “start” parameter is still the robot’s original start position, not its current position, so the trajectory it follows is wrong for every subsequent run.

Is there a way for us to:

  1. Dynamically create and run the SwerveControllerCommand at the start of each loop so that we can re-create the Trajectory object?
  2. Dynamically update the Trajectory we pass to the SwerveControllerCommand so that we can modify its start position at the beginning of each loop?
  3. Repeatedly re-create our entire autonomous command and run it in auto so that our getPose() function can be called each time to update the trajectory?

Again, we’re very new to this whole method of programming, so please let me know if we’re going about this all wrong. Thanks!

Autos.java (just the function for getting our auto command):

  public static Command visionAuto(DriveSubsystem driveSubsystem, VisionSubsystem vision) {
    TrajectoryConfig config = new TrajectoryConfig(
      AutoConstants.kMaxSpeedMetersPerSecond,
      AutoConstants.kMaxAccelerationMetersPerSecondSquared)
      .setKinematics(DriveConstants.kDriveKinematics);

    var thetaController = new ProfiledPIDController(AutoConstants.kPThetaController, 0, 0, AutoConstants.kThetaControllerConstraints);
    thetaController.enableContinuousInput(-Math.PI, Math.PI);

    Trajectory trajectory = TrajectoryGenerator.generateTrajectory(
      vision.getPose(), // This is only eval'ed once!! (presumably our problem)
      List.of(),
      new Pose2d(2, 0, new Rotation2d(0)),
      config
    );

    return new SwerveControllerCommand(
      trajectory,
      vision::getPose,
      DriveConstants.kDriveKinematics,
      new PIDController(AutoConstants.kPXController, 0, 0),
      new PIDController(AutoConstants.kPYController, 0, 0),
      thetaController,
      driveSubsystem::setModuleStates,
      driveSubsystem,
      vision
    );
  }

RobotContainer.java (just the auto function)

  public Command getAutonomousCommand() {
    return Autos.visionAuto(m_robotDrive, visionSubsystem).repeatedly();
  }

I don’t think you’re going about this the right way. As noted at the bottom of the Trajectory Generation - Generating the trajectory page:

Generating a typical trajectory takes about 10 ms to 25 ms. This isn’t long, but it’s still highly recommended to generate all trajectories on startup (robotInit).

If you want to chase a moving tag by just driving straight to it, you don’t need a trajectory; you can just use PIDControllers (or ProfiledPIDControllers) for X, Y, and theta.

I’ve had success with this using two different approaches:

  1. Without pose estimation: Create PID controllers and configure your setpoint distance and rotation relative to the tag. When you see the tag, get its x and y distance and rotation and plug them into the calculate() method (rough sketch after this list).
  2. With pose estimation/odometry: Look at the tag and find where it is relative to the camera. Since you know where the robot/camera is on the field, calculate where the tag is on the field. Determine where you want to be on the field based on where the tag is; those are your setpoints. Plug your current x, y, and theta into the calculate() method (second sketch below).
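
Here’s a rough sketch of option 1, in case it helps. The gains and the VisionSubsystem getters (hasTarget(), getTagForwardMeters(), getTagSidewaysMeters(), getTagYawRadians()) are placeholders for whatever your camera actually reports; the drive output just reuses the kinematics and setModuleStates pieces already in your code:

  import edu.wpi.first.math.controller.ProfiledPIDController;
  import edu.wpi.first.math.kinematics.ChassisSpeeds;
  import edu.wpi.first.math.trajectory.TrapezoidProfile;
  import edu.wpi.first.wpilibj2.command.Command;

  // Sketch only: chase a visible tag with plain PID, no trajectory generation.
  // (Extend CommandBase instead of Command on older WPILib versions.)
  public class ChaseTagCommand extends Command {
    private final DriveSubsystem drive;
    private final VisionSubsystem vision;

    // Gains and constraints here are made up; tune them for your robot.
    private final ProfiledPIDController xController =
        new ProfiledPIDController(1.0, 0, 0, new TrapezoidProfile.Constraints(3.0, 2.0));
    private final ProfiledPIDController yController =
        new ProfiledPIDController(1.0, 0, 0, new TrapezoidProfile.Constraints(3.0, 2.0));
    private final ProfiledPIDController thetaController =
        new ProfiledPIDController(2.0, 0, 0, new TrapezoidProfile.Constraints(Math.PI, Math.PI));

    public ChaseTagCommand(DriveSubsystem drive, VisionSubsystem vision) {
      this.drive = drive;
      this.vision = vision;
      thetaController.enableContinuousInput(-Math.PI, Math.PI);
      // Setpoints relative to the tag: sit 1 m back, centered, facing it.
      xController.setGoal(1.0);
      yController.setGoal(0.0);
      thetaController.setGoal(0.0);
      addRequirements(drive, vision);
    }

    @Override
    public void execute() {
      if (!vision.hasTarget()) { // hypothetical getter
        drive.setModuleStates(
            DriveConstants.kDriveKinematics.toSwerveModuleStates(new ChassisSpeeds()));
        return;
      }
      // Measurements are the tag's offset from the robot (hypothetical getters).
      // Outputs are negated because the robot drives toward the tag to shrink the
      // offset; verify the signs with the robot on blocks before trusting them.
      double xSpeed = -xController.calculate(vision.getTagForwardMeters());
      double ySpeed = -yController.calculate(vision.getTagSidewaysMeters());
      double rot = -thetaController.calculate(vision.getTagYawRadians());

      drive.setModuleStates(DriveConstants.kDriveKinematics.toSwerveModuleStates(
          new ChassisSpeeds(xSpeed, ySpeed, rot)));
    }
    // isFinished() is left at the default (never finishes), so the robot keeps
    // following the tag for as long as the command is scheduled.
  }

With something like that, getAutonomousCommand() can just return new ChaseTagCommand(m_robotDrive, visionSubsystem), and the same command works as a teleop button binding too.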
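
For option 2, the core calculation is just a couple of pose transforms. In this sketch, robotToCamera (where the camera sits on the robot) and cameraToTag (the camera’s measurement of the tag) are assumed to be available as Transform2d objects, and drive.getPose() stands in for wherever your odometry/pose estimate lives:

  // Chain the robot's field pose, the camera mount offset, and the camera's
  // measurement of the tag to get the tag's pose on the field.
  Pose2d tagFieldPose = drive.getPose().transformBy(robotToCamera).transformBy(cameraToTag);

  // Goal: stand 1 m in front of the tag face, turned to look at it (this assumes
  // the tag pose's rotation points out of the tag face). The goal pose's x, y,
  // and rotation become the setpoints for the three controllers, and the robot's
  // current odometry pose is the measurement.
  Pose2d goalPose = tagFieldPose.transformBy(
      new Transform2d(new Translation2d(1.0, 0.0), Rotation2d.fromDegrees(180.0)));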

It’s worth pointing out that this would be a lot more difficult without holonomic drive.

With holonomic drive (swerve/mecanum), you can move forward-to-backward, side-to-side, and rotate, all independently.

With non-holonomic (differential/tank), you cannot move side-to-side, and rotation affects the other vectors, so you have to plan a trajectory to “snake” your way to the end pose.
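
Concretely, that independence is why the PID approach maps so cleanly onto swerve: each controller output feeds one component of the chassis speeds. A minimal field-relative sketch, where getPose().getRotation() and kMaxSpeedMetersPerSecond are assumptions about your drive subsystem and constants:

  // Three independent field-relative velocity commands in, swerve module states out.
  // xSpeed, ySpeed, and rot can each come straight from its own PID controller.
  ChassisSpeeds speeds = ChassisSpeeds.fromFieldRelativeSpeeds(
      xSpeed, ySpeed, rot, drive.getPose().getRotation());
  SwerveModuleState[] states = DriveConstants.kDriveKinematics.toSwerveModuleStates(speeds);
  SwerveDriveKinematics.desaturateWheelSpeeds(states, DriveConstants.kMaxSpeedMetersPerSecond);
  drive.setModuleStates(states);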

PathPlanner’s pathfinder takes about that long, and is absolutely intended to run during teleop.

That’s straight from the WPILib docs. By default, the robot loop runs at 50 Hz, so everything in an iteration needs to complete in under 20 ms. You can get by with generating a path here or there; you’ll just get a loop overrun and maybe a momentary stutter.

However, the OP is asking about repeatedly regenerating the path “at the beginning of each loop”. That’s not what trajectories are for. It’s not the right tool for the job, and it won’t perform well.

I’m just making the point that the performance isn’t necessarily a problem.
But yes I agree.

The performance of generating on the fly isn’t a problem, per se, but doing it on every iteration certainly is.

True :+1:

Thanks for the suggestions! This was super helpful. Originally we thought this would be a simple way to test our vision code, but clearly it’s not lol. We’ll move forward with the PID controller method instead.