Trying to use Limelight to aim and range to the speaker (target) AprilTag. I have successfully updated and configured the Limelight, and I’m able to put tx/ty/ta data on the SmartDashboard. I’m familiar with the steps to mount and configure the Limelight. My question is: how do you integrate this data into the robot’s drivetrain code to auto-adjust angle and distance to the target? If I have the current “distance to target” value, how do I make the robot drive to the desired distance?
The goal is to use an “aim” button pressed by the driver when she is “close” to the correct scoring position/angle. For context, our robot is using a REV MAXSwerve drivetrain and a navX gyro, all functioning well. Code structure is command-based using Java. If possible, I’d also like to use the AprilTag captures to update odometry and correct for gyro/encoder errors during auto.
It seems these base functions are common practice with teams using a limelight…but I can’t seem to find a guide or walkthrough for implementing it with a swerve drive robot. Any help/examples would be greatly appreciated…thanks!
Let’s break down the problem into multiple subproblems.
Updating Odometry
This one is the most straightforward. WPILib has a class called SwerveDrivePoseEstimator which can be used as a drop-in replacement for a regular SwerveDriveOdometry object. When using this class, the addVisionMeasurement() method can be used to add data from your Limelight. In this case, you are looking for BotPose and Latency; both are described in more detail in the Limelight docs.
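As a rough sketch (not drop-in code): assuming WPILib 2023+ and a Limelight using the default “limelight” NetworkTables name, the fusion might live in the drivetrain subsystem’s periodic(). The tv/tl/cl and botpose_wpiblue entries are standard Limelight keys, but the member names (m_poseEstimator, m_gyro, the module objects) are placeholders for your own:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

// Inside the drivetrain subsystem:
@Override
public void periodic() {
  // Normal odometry update from gyro + module encoders, every loop
  m_poseEstimator.update(
      Rotation2d.fromDegrees(-m_gyro.getAngle()),
      new SwerveModulePosition[] {
        m_frontLeft.getPosition(), m_frontRight.getPosition(),
        m_rearLeft.getPosition(), m_rearRight.getPosition()
      });

  // Fuse a vision measurement only when the Limelight actually sees a tag (tv == 1)
  var ll = NetworkTableInstance.getDefault().getTable("limelight");
  if (ll.getEntry("tv").getDouble(0) == 1) {
    // botpose_wpiblue: x (m), y (m), z, roll, pitch, yaw (deg) in the blue-alliance origin
    double[] pose = ll.getEntry("botpose_wpiblue").getDoubleArray(new double[6]);
    Pose2d visionPose = new Pose2d(pose[0], pose[1], Rotation2d.fromDegrees(pose[5]));
    // Compensate for capture + pipeline latency (tl and cl are in milliseconds)
    double latencySec = (ll.getEntry("tl").getDouble(0) + ll.getEntry("cl").getDouble(0)) / 1000.0;
    m_poseEstimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySec);
  }
}
```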
Updating Pose
This one is a little trickier, and it depends a bit on how your robot drives during autonomous. If it already has a way to follow paths or drive to a certain position, you can leverage that.
In the simplest scenario, you can use a p-controller on the offset from the middle of the tag to the center of the screen, and do the same for the distance. Then create ChassisSpeeds based on that.
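To make that concrete, here is a minimal, WPILib-free sketch of the two p-controllers. The gains, caps, and sign conventions are made up for illustration and would need tuning on a real robot:

```java
// Minimal sketch of the p-controller idea, kept independent of WPILib so the math is clear.
// All gains and limits here are invented -- tune on your robot.
class AimRangeMath {
  static final double kAimP = 0.05;      // rotation output per degree of tx error
  static final double kRangeP = 1.2;     // m/s per meter of distance error
  static final double kMaxRot = 0.5;     // cap on the rotation command
  static final double kMaxForward = 1.0; // cap on the forward command (m/s)

  /** Rotation command: drive tx (degrees off-center) toward zero. */
  static double aimOutput(double txDegrees) {
    return clamp(-txDegrees * kAimP, -kMaxRot, kMaxRot);
  }

  /** Forward command: drive the measured distance toward the desired distance. */
  static double rangeOutput(double distanceMeters, double desiredMeters) {
    return clamp((distanceMeters - desiredMeters) * kRangeP, -kMaxForward, kMaxForward);
  }

  static double clamp(double v, double lo, double hi) {
    return Math.max(lo, Math.min(hi, v));
  }
}
// These two outputs would then feed a robot-relative
// ChassisSpeeds(rangeOutput(...), 0.0, aimOutput(...)).
```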
Personally, I would recommend a simpler approach of just aiming at the AprilTag if this is your first real adventure into vision, especially if you don’t have a practice field with tags in the exact right spots, etc. Just something to consider. IMHO the effort-to-outcome ratio is a lot better.
So…I would need to replace all the instances of SwerveDriveOdometry with SwerveDrivePoseEstimator in my drive subsystem? Like this?:
// Odometry class for tracking robot pose
SwerveDriveOdometry m_odometry = new SwerveDriveOdometry(
DriveConstants.kDriveKinematics,
Rotation2d.fromDegrees(-m_gyro.getAngle()),
new SwerveModulePosition[] {
m_frontLeft.getPosition(),
m_frontRight.getPosition(),
m_rearLeft.getPosition(),
m_rearRight.getPosition()
});
// Drop-in replacement for the above odometry class - allows adding vision/limelight data to
// update odometry
SwerveDrivePoseEstimator m_PoseEstimator = new SwerveDrivePoseEstimator(
DriveConstants.kDriveKinematics,
Rotation2d.fromDegrees(-m_gyro.getAngle()),
new SwerveModulePosition[] {
m_frontLeft.getPosition(),
m_frontRight.getPosition(),
m_rearLeft.getPosition(),
m_rearRight.getPosition()
},
new Pose2d());
…Then I’d need to use the addVisionMeasurement() method in this same subsystem?
The second problem: we are currently implementing PathPlanner with some success. I’d like to use this “aim” feature in teleop as well to assist with positioning. I would use ChassisSpeeds for this?
…Simpler does sound good. I mainly want to “aim” at the april tag. If I can figure out the pose updating that would be great, but secondary to aiming at the speaker. Programming is not our team’s greatest strength at the moment (I’m working hard on improving that)…and this is our first attempt at using vision in any way…and our first time with swerve drive, so a lot to onboard.
There isn’t really one place you must put it; it partly comes down to style (everything discussed here is for command-based). Let’s first talk about the different pieces of code:
1. Code that gets information from the Limelight.
2. Code that takes the LL info and decides what to do with it.
3. Code that actually does the selected action. This could be turning the drivetrain to an angle, spinning the shooter to a speed based on how far away you are, or a number of things depending on game and team priorities.
Here is my suggestion:
Have your vision code together. This can be in a subsystem or a POJO. A subsystem is only required if you are switching pipelines and might need to “reserve” the sensor.
Have your drivetrain or shooter code together.
The code in between could be in a command or in either of the 2 previous sets of code. My suggestion in this case is to make “TurnToAngle” part of your drivetrain code as then the same code could be used in other applications and in future years (you may need to retune PID for different weight/wheels etc).
To summarize: your drivetrain calls the LL code to get a setpoint, and then it handles actually turning to the proper angle.
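Here is one way the TurnToAngle math could look, kept WPILib-free for clarity. The gain is invented; on the robot you would likely use WPILib’s PIDController with enableContinuousInput(-180, 180) instead of hand-rolling the wrap-around:

```java
// Sketch of the "TurnToAngle" math: shortest-path heading error plus a p-term.
// kTurnP is a made-up gain -- retune for your robot's weight/wheels.
class TurnToAngleMath {
  static final double kTurnP = 0.01;

  /** Shortest signed error from current to target heading, in (-180, 180]. */
  static double wrapError(double targetDeg, double currentDeg) {
    double err = (targetDeg - currentDeg) % 360.0;
    if (err > 180.0) err -= 360.0;
    if (err <= -180.0) err += 360.0;
    return err;
  }

  /** Proportional rotation command toward the target heading. */
  static double output(double targetDeg, double currentDeg) {
    return kTurnP * wrapError(targetDeg, currentDeg);
  }
}
```

The wrap matters with a swerve drive: turning from -170° to +170° should command a small negative rotation, not a 340° sweep.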
Thank you! This is helpful for structure and concept. I have a vision subsystem, swerve drive subsystem, and shooter subsystem all functioning separately. The vision subsystem can put tx, ty, & ta to SmartDashboard. So, in my drive subsystem I can call LL values to turn and move to the setpoint, right? I can make TurnToAngle and DriveToPoint methods in the drive subsystem?
Something similar to this drive method but using values determined by the LL?
You probably want to handle the inter-subsystem communication in either a separate Command (if it gets too complicated) or in your RobotContainer.
It might look something like this (I just wrote it on the fly); while the A button is held it will continuously run a command that invokes the drivetrain movement with values from the Limelight:

drv.a().whileTrue(drivetrain.run(() -> drivetrain.aimAndRange(shooterCamera.getAngle(), shooterCamera.getDistance())));
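For completeness, a sketch of what the hypothetical aimAndRange() on the drivetrain might do with those two values. The gains, desired distance, and driveRobotRelative() helper are all assumptions to adapt, not an existing API:

```java
// Hypothetical drivetrain method matching the binding above.
// angleErrorDegrees would be something like tx from the Limelight;
// kAimP, kRangeP, kDesiredDistanceMeters, and driveRobotRelative() are placeholders.
public void aimAndRange(double angleErrorDegrees, double distanceMeters) {
  double rot = -kAimP * angleErrorDegrees;                          // turn until tx reads zero
  double fwd = kRangeP * (distanceMeters - kDesiredDistanceMeters); // close the range gap
  driveRobotRelative(new ChassisSpeeds(fwd, 0.0, rot));             // robot-relative: +x is forward
}
```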