How to do field localization

How does field localization work?

I am the only programmer (unless you count underclassmen) and my team doesn’t have a programming mentor, so I’m turning to the Internet for this. Last year, I really wanted to have the robot align to the speaker when it entered a certain part of the field. I immediately realized that the robot would need to know where it was on the field. I have no idea where to start and could use some help. My team has two Limelight 3s from last year that we never got to use, so they are in good condition. Besides that, I don’t know what else I need. What would a field localization pipeline look like?

I know how to:

  • control the yaw of the robot using a controller
  • create a trigger from a Boolean value
  • read values like tx and ty from the limelight

I DON’T know how to:

  • find the robot’s position on the field
  • calculate the direction for the robot to point (the yaw)
  • have some Boolean turn true once the robot enters a part of the field

Any and all advice will be helpful. My team may not want field localization this year (depending on the game and when I get the robot), but I would like to teach the freshmen and sophomores how to do it. Thank you, everyone!

1 Like

The WPILib PoseEstimator class is what you need to get started. It takes information about your chassis, like a kinematics object, and handles the odometry math internally. The class has built-in methods to reset the pose, update it periodically, and get the pose; it also has the built-in ability to add a vision pose estimate. Since you know how to get tx and ty from the Limelight, a pose estimate is also something you can get from the Limelight, either through NetworkTables or LimelightHelpers.
If it’s not there already, add a SwerveDrivePoseEstimator object to your drive class (or the equivalent for whatever type of drivetrain you have); most of the template projects already include one.
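
To give you a rough idea, basic usage looks something like this (a Kotlin sketch; kinematics, gyro, and modulePositions stand in for whatever your drive class already has):

import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator
import edu.wpi.first.math.geometry.Pose2d

// Constructed once in the drive subsystem. kinematics, gyro, and modulePositions
// are placeholders for whatever your drive class already has.
val poseEstimator = SwerveDrivePoseEstimator(
    kinematics,          // SwerveDriveKinematics
    gyro.rotation2d,     // current heading as a Rotation2d
    modulePositions,     // Array<SwerveModulePosition>
    Pose2d()             // initial pose; reset it later if you know where you start
)

// Call this every loop (e.g. in periodic()) so the wheel odometry stays up to date.
poseEstimator.update(gyro.rotation2d, modulePositions)

// Read the fused estimate anywhere you need it.
val robotPose = poseEstimator.estimatedPosition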

It really doesn’t matter whether your team wants it: localization shouldn’t affect anything if it’s just running in the background and not being used to do things autonomously (i.e., it’s a good learning experience to implement it anyway). And the way the meta is going, localization is becoming more and more important now that we have AprilTags instead of retroreflective tape.

3 Likes

This is good advice.

I’d also add: don’t wait to “get the robot.” If you have a spare roboRIO, you can build and test the complete localizer now, even before kickoff.

1 Like

You don’t even need that; you can do everything in the sim.

Write your code, and run the simulator.

If you’re using PhotonLib, it’s easy to even get simulated cameras that produce simulated AprilTag vision pose estimates.
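
Roughly, the setup looks like this (a sketch from memory using PhotonLib’s simulation classes; robotToCamera and simulatedDrivePose are placeholders, and you should double-check the PhotonVision docs for the exact API in your version):

import edu.wpi.first.apriltag.AprilTagFields
import org.photonvision.PhotonCamera
import org.photonvision.simulation.PhotonCameraSim
import org.photonvision.simulation.SimCameraProperties
import org.photonvision.simulation.VisionSystemSim

// Set up once: a simulated vision system with the field's AprilTag layout and one camera.
val visionSim = VisionSystemSim("main")
visionSim.addAprilTags(AprilTagFields.k2024Crescendo.loadAprilTagLayoutField())

val cameraSim = PhotonCameraSim(PhotonCamera("frontCam"), SimCameraProperties())
visionSim.addCamera(cameraSim, robotToCamera)   // robotToCamera: Transform3d of the camera mount

// In simulationPeriodic(), feed it the simulated robot pose so it "sees" the right tags.
visionSim.update(simulatedDrivePose)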

5 Likes

Oh yeah, that’s true. You can also run real cameras with the simulator on your laptop.

If you are using CTRE, they have a very simple example of vision with Limelight and PathPlanner: Phoenix6-Examples/java/SwerveWithPathPlanner/src/main/java/frc/robot/Robot.java at main · CrossTheRoadElec/Phoenix6-Examples · GitHub

OP has Limelights, and if they want to use MegaTag2 they’d have to mount the camera and a gyro together. I’ve seen cameras with a gyro included, but the LL isn’t one of them. Is this combo something that can run in simulation? My team is interested in doing that, too. Otherwise a roboRIO is needed, I presume.

Well, you could learn something with a simulated gyro that always returns zero, but for the real thing, I’m not aware of any COTS gyros that work in simulation.

A useful trick for this kind of simulated work: print the AprilTags at 10% scale, and you can cover the whole field on a tabletop.

1 Like

I meant the simulated cameras that PhotonLib (the PhotonVision vendor library) can provide.

No hardware camera necessary

Here are my suggestions and info on the three topics you don’t know.

Finding Robot Position:

Note: This is very complex; it’s something I have been tuning, rewriting, and updating on my team for about two years. Don’t expect it to work out of the box.

Limelight publishes the estimated robot position to NetworkTables; the format can be seen here. Alternatively, you could use LimelightLib, a file of helper functions you can copy-paste into your robot code. That data can then be fused with your drive odometry using WPILib’s SwerveDrivePoseEstimator, as described in the posts above. You can check the WPILib docs if you want more info on that.

Limelight also has a few tutorials, one of which is “Localization with Limelight”.

Another thing to think about: with Limelight, you can use normal pose estimation, called MegaTag. This requires nothing special, just a Limelight set to AprilTag mode in the web UI. MegaTag2, alternatively, uses the robot’s heading, which you pass in through LimelightLib, and can produce a more accurate position.
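
To give a rough idea of how the MegaTag2 flow plugs into the pose estimator (this is a sketch, assuming you’ve copied LimelightHelpers into your project and your camera is named "limelight"; method names can differ between LimelightLib versions, so check the Limelight docs):

import edu.wpi.first.math.VecBuilder

// MegaTag2 needs the robot heading: give LimelightLib your current yaw every loop.
LimelightHelpers.SetRobotOrientation(
    "limelight",
    poseEstimator.estimatedPosition.rotation.degrees,
    0.0, 0.0, 0.0, 0.0, 0.0
)

// Then pull the MegaTag2 estimate and feed it to the pose estimator if it saw any tags.
val estimate = LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight")
if (estimate != null && estimate.tagCount > 0) {
    // Trust vision a bit less than wheel odometry; these std devs are only a starting point.
    poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(0.7, 0.7, 9999999.0))
    poseEstimator.addVisionMeasurement(estimate.pose, estimate.timestampSeconds)
}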

If you don’t want to mess with position at all, you can just have a button on the driver controller enable an aim mode and use tx with a simple aim controller (e.g., a P controller) to aim at an AprilTag.
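
Something like this (a sketch; kP is a made-up starting value you’d tune, and the drive call is a hypothetical method standing in for whatever your drivetrain exposes):

// tx is the horizontal angle (degrees) from the crosshair to the tag, so drive it toward zero.
val kP = 0.05
val tx = LimelightHelpers.getTX("limelight")      // or read "tx" from NetworkTables yourself
val rotationSpeed = -tx * kP                      // sign depends on your drive convention
drivetrain.drive(xSpeed, ySpeed, rotationSpeed)   // hypothetical drive method on your subsystem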

Calculating Direction:
The simplest way I can think of to do this is to use atan2. This is a function in Java, Kotlin, and Python (as far as I know) that takes a y and an x component and gives you an angle. It may not be the exact angle you want, and you may need to add an offset to it or negate it to match your robot. To get the x and y components, you can simply subtract your robot position from a known location on the field. This is roughly how it was done in our code:

// both of these are translations/vectors (e.g. WPILib Translation2d)
val vector = speakerPos - robotPosition
val angleToSpeaker = atan2(vector.y, vector.x)
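
If you’re using WPILib’s geometry classes anyway, Translation2d and Rotation2d will do the atan2 and angle bookkeeping for you (a sketch; speakerPos is whatever field location you care about, as a Translation2d):

// Same idea with WPILib geometry types; Rotation2d handles the atan2 for you.
val robotPose = poseEstimator.estimatedPosition                  // Pose2d from the estimator
val angleToSpeaker = (speakerPos - robotPose.translation).angle  // returns a Rotation2d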

Detect when in location:
I would map this to a button instead, as it offers more flexibility (e.g., not always wanting to shoot into the speaker in 2024). If you want to do it automatically, you could simply check whether the robot’s y coordinate is greater or less than a specific number (depending on alliance); this worked really well in 2024. If you want to check whether the robot is in a box, you could check that both coordinates of the robot’s position are within two values.
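
Since you already know how to make a Trigger from a Boolean, the automatic version is just a pose check feeding that Trigger (a sketch; the bounds are made-up numbers, the command name is hypothetical, and alliance flipping is left out):

import edu.wpi.first.wpilibj2.command.button.Trigger

// A Trigger that is true while the robot is inside a rectangular region of the field.
// These bounds are made-up example numbers in meters (blue-alliance origin);
// take real ones from the field drawings and mirror them for the red alliance.
val inShootingZone = Trigger {
    val pose = poseEstimator.estimatedPosition
    pose.x < 5.0 && pose.y > 2.0 && pose.y < 6.0
}

inShootingZone.whileTrue(aimAtSpeakerCommand)   // bind whatever command you want to run there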

Just as a closing note: if you still have questions, feel free to ask, but the WPILib and Limelight docs both have really good explanations of certain things, like coordinate systems.

3 Likes

While the code and the colors in the slide deck won’t align exactly with your Limelights, PhotonVision did give a talk at champs last year that was aimed just about exactly at your team’s self-identified skill level and goals. I think you might get something out of it!

2 Likes