Rembrandts has a very good post on using coordinates to do this: FRC 4481 Team Rembrandts | 2024 Build Thread | Open Alliance - #373 by Jochem. Essentially, if you know where your robot is and you have an x and y coordinate you want to aim at, you can do some simple translation math to get the heading to that point. I would recommend combining PhotonVision with odometry to get a pose estimate of your robot's location. Have you looked into SwerveDrivePoseEstimator at all? I recommend reading this: Pose Estimators — FIRST Robotics Competition documentation (wpilib.org). Also, take a look at the PhotonVision docs on how to use AprilTags for localization: 3D Tracking - PhotonVision Docs.
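To make that translation math concrete, here is a minimal sketch (just an illustration, not code from our repo; the pose can come from a SwerveDrivePoseEstimator or however you track odometry):

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

// Heading the robot should face to point at a field-relative target.
Rotation2d headingToTarget(Pose2d robotPose, Translation2d target) {
    // Vector from the robot to the target, in field coordinates.
    Translation2d toTarget = target.minus(robotPose.getTranslation());
    // The angle of that vector is the desired field-relative heading.
    return toTarget.getAngle();
}

The difference between that heading and your current pose rotation (or gyro yaw) is the error you feed into a turning controller.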
Here is a code snippet showing how our team handled aligning to the speaker. We used a PID controller on the heading to control the angular speed and let the x and y speeds still be controlled by the driver.
public Command alignWhileDrivingCommand(Supplier<Double> xSpeed, Supplier<Double> ySpeed, Supplier<Translation2d> target) {
    // P-only controller on heading, in degrees, with wraparound at +/-180.
    PIDController pid = new PIDController(0.01, 0, 0);
    pid.setTolerance(1.5); // degrees
    pid.enableContinuousInput(-180, 180);

    return new DeferredCommand(() ->
        new RepeatCommand(
            new FunctionalCommand(
                () -> {
                    // Init
                },
                () -> {
                    Translation2d currentTranslation = this.getPose().getTranslation();
                    // Vector from the target to the robot; its angle points away from the target.
                    Translation2d targetVector = currentTranslation.minus(target.get());
                    Rotation2d targetAngle = targetVector.getAngle();

                    double newSpeed;
                    // Note: as posted, both alliance branches compute the same value; this is where an
                    // alliance-specific heading offset would go if your coordinate conventions need one.
                    if (DriverStation.getAlliance().get() == DriverStation.Alliance.Red)
                        newSpeed = pid.calculate(this.getGyroYawDegrees() + 180, targetAngle.getDegrees());
                    else
                        newSpeed = pid.calculate(this.getGyroYawDegrees() + 180, targetAngle.getDegrees());

                    // Driver keeps x/y control; the PID output drives rotation.
                    this.drive(xSpeed.get(), ySpeed.get(), newSpeed, true, true);
                },
                interrupted -> {
                    pid.close();
                    this.drive(0.0, 0.0, 0.0, true, true);
                },
                () -> pid.atSetpoint(),
                this)),
        Set.of(this));
}
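If it helps, here is roughly how a command like this could be bound to a button with a CommandXboxController; the controller, subsystem, and speaker coordinates below are placeholders, not taken from our repo:

// Hold a button to aim at the speaker while the driver keeps translating.
Translation2d speaker = new Translation2d(0.0, 5.55); // rough blue-speaker location in meters; check the field drawings
driverController.leftBumper().whileTrue(
    drivetrain.alignWhileDrivingCommand(
        () -> -driverController.getLeftY(),  // forward/back from the stick
        () -> -driverController.getLeftX(),  // strafe from the stick
        () -> speaker));                     // point to aim at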
For reference, here is our GitHub repository: FRC1884/season2024 at testing-1884 (github.com).
I highly recommend taking a look at our PoseEstimator.java and Vision.java classes.