What are Kinematics, Odometry, and PoseEstimator?

What are these used for?


Kinematics describes how motor speeds relate to robot speeds. Accounting for gear ratios and unit conversions, it can tell you that spinning the motors at half their max speed will make the robot travel 5 feet per second, etc.

Odometry uses robot sensors to estimate how far the robot has traveled, and from that, its position. It works well, but bumping into something can throw it off.
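To make the "estimate position from sensors" idea concrete, here's a minimal sketch of differential-drive odometry in plain Java: integrate the distance each wheel traveled (from encoders) along the gyro heading. WPILib's `DifferentialDriveOdometry` class does this (and more) for you; this standalone version just shows the core math.

```java
// Simplified dead-reckoning odometry for a differential drive.
public class OdometrySketch {
    double x = 0.0, y = 0.0; // field-relative position, meters

    // Call periodically with the distance each wheel moved since the last
    // call (from encoders) and the current heading from a gyro, in radians.
    public void update(double leftDeltaMeters, double rightDeltaMeters, double headingRad) {
        double distance = (leftDeltaMeters + rightDeltaMeters) / 2.0;
        x += distance * Math.cos(headingRad);
        y += distance * Math.sin(headingRad);
    }

    public static void main(String[] args) {
        OdometrySketch odo = new OdometrySketch();
        // Drive straight for 1 m while pointed down the field (heading 0):
        for (int i = 0; i < 10; i++) {
            odo.update(0.1, 0.1, 0.0);
        }
        System.out.println("x = " + odo.x + ", y = " + odo.y);
    }
}
```

Notice that any error in the wheel deltas (slipping, getting bumped) accumulates forever, which is exactly the drift problem described above.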

A pose estimator combines odometry with vision data, typically from AprilTags. It gives a very accurate location of the robot on the field and can compensate for running into things and for sensor errors, because you use vision as well. If the vision data is bad, it can fall back on odometry, and if the odometry drifts, vision will pull it back to the true position.

Kinematics is a bit of math that turns some desired state for a mechanism (probably a drivetrain here), such as a forward, sideways, and rotational velocity, into motor speeds. Odometry uses wheel speeds to track the robot's position over time, but it is prone to drifting away from your real position because of things like wheel slip. A pose estimator tries to correct that drift by combining odometry with measurements of your position, usually from a vision system.
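The "desired velocities in, wheel speeds out" step can be sketched in a few lines for a differential (tank) drive. This is essentially the math WPILib's `DifferentialDriveKinematics.toWheelSpeeds()` performs, reproduced here in plain Java:

```java
// Inverse kinematics for a differential drive: turn a desired forward
// velocity and rotation rate into left/right wheel speeds.
public class KinematicsSketch {
    // trackWidthMeters is the distance between the left and right wheels.
    public static double[] toWheelSpeeds(double forwardMetersPerSec,
                                         double omegaRadPerSec,
                                         double trackWidthMeters) {
        // Rotating at omega moves the two wheel rows in opposite directions.
        double left = forwardMetersPerSec - omegaRadPerSec * trackWidthMeters / 2.0;
        double right = forwardMetersPerSec + omegaRadPerSec * trackWidthMeters / 2.0;
        return new double[] {left, right};
    }

    public static void main(String[] args) {
        // Drive forward at 2 m/s while turning at 1 rad/s with a 0.6 m track:
        double[] speeds = toWheelSpeeds(2.0, 1.0, 0.6);
        System.out.println(speeds[0] + " " + speeds[1]); // prints 1.7 2.3
    }
}
```

Swerve and mecanum kinematics follow the same idea with more wheels and more math, which is why WPILib ships a kinematics class per drivetrain type.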

Thanks! I have a few quick questions!

  1. Why is it important to get the very exact location of the robot on the field?
  2. What are AprilTags?
  3. Is a pose estimator always used over odometry in practice?
  4. Where are these actually used in the robot code? Like for which subsystems, files, etc.?

Thanks in advance!

  1. So you can drive predefined paths in auto - see the docs: Trajectory Generation and Following with WPILib — FIRST Robotics Competition documentation

  2. The QR-code lookalikes on the field - see What Are AprilTags? — FIRST Robotics Competition documentation

  3. If you have vision data (camera on the robot providing information on current position), you use a pose estimator. If not, you use odometry. It’s that simple.

  4. Kinematics is a general term for describing the motion of anything that moves. That said, most of the inbuilt WPILib kinematics classes have to do with the drive base subsystem, and odometry and pose estimators are highly linked to the drive base systems. See Introduction to Kinematics and The Chassis Speeds Class — FIRST Robotics Competition documentation for an explanation of kinematics and https://github.com/wpilibsuite/allwpilib/tree/main/wpilibjExamples/src/main/java/edu/wpi/first/wpilibj/examples/differentialdriveposeestimator for an actual example of the place in robot code.
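To give a feel for what the linked pose estimator example is doing internally: between camera frames it trusts odometry, and when a vision measurement arrives it nudges the estimate toward it. WPILib's `DifferentialDrivePoseEstimator` does this properly with a Kalman-filter-style correction weighted by measurement confidence; the sketch below only shows the "blend the two sources" concept in one dimension, with a made-up fixed gain.

```java
// Highly simplified pose fusion: pull the odometry estimate part of the
// way toward a vision measurement. Real estimators weight this by how
// much they trust each source.
public class PoseBlendSketch {
    // gain in [0, 1]: 0 ignores vision, 1 snaps fully to the vision pose.
    public static double fuse(double odometryEstimate, double visionMeasurement, double gain) {
        return odometryEstimate + gain * (visionMeasurement - odometryEstimate);
    }

    public static void main(String[] args) {
        // Odometry has drifted to x = 3.4 m; vision says x = 3.0 m:
        System.out.println(fuse(3.4, 3.0, 0.2)); // prints 3.32 — drift pulled back
    }
}
```

Run this correction every time a tag is seen and the drift never gets a chance to accumulate, which is the whole point of the pose estimator.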

Remember to read the docs - many of them are set up to answer precisely these kinds of questions.

Got it, thanks! So odometry and pose estimators are only used for autos?

Not necessarily, but without vision data there’s too much drift to make it useful. Even with vision data it’s often true that training a good driver will serve you better than working on getting the pose estimation perfect.

Oh ok! Wait so during the part of the match where we can use our xbox controllers, what is the purpose of the pose estimation or odometry? Is it also used for elevators and stuff?

Odometry/Pose Estimation isn’t used for anything but robot position, but if you have accurate robot position data you can use that to correctly line yourself up to field elements. As I said though, generally it’s not as useful in teleop.


One application for a pose estimator this year was grabbing from the human player station. It sometimes takes a while to get lined up just right, but with a pose estimator you can have the computer align your robot correctly every time. It saves time and makes it easier on the drivers.

You can use pose estimation in teleop to, e.g., point the robot at a known target position on the field while the driver controls the "rest" of the robot operation. Semi-automation like this is extremely valuable for making scoring actions quick and reliable - it lets the computer handle the bits the human controllers are worse at, letting them instead focus on the things the humans are good at.
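The "point at a known target" trick boils down to one `atan2` call once you have a pose estimate. A minimal sketch, where the target coordinates are made up for illustration and the returned heading would feed a rotation controller while the driver keeps translation control:

```java
// Teleop aim assist: given the robot's estimated position and a known
// target location on the field, compute the heading that faces the target.
public class AimSketch {
    public static double headingToTarget(double robotX, double robotY,
                                         double targetX, double targetY) {
        // Field-relative heading, in radians, from robot toward target.
        return Math.atan2(targetY - robotY, targetX - robotX);
    }

    public static void main(String[] args) {
        // Robot at (2, 3); hypothetical goal opening at (0, 5.5):
        double heading = headingToTarget(2.0, 3.0, 0.0, 5.5);
        System.out.println(Math.toDegrees(heading) + " degrees");
    }
}
```

In a real robot program you would pass this heading as the setpoint of your drivetrain's rotation PID controller each loop, so the robot stays locked on target as it moves.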

I feel like this is semi-inaccurate. There's a lot of use for odometry/pose estimation during teleop. I will admit that if you don't know what you're doing, it's probably not realistic to use these features, but they can be useful. Some things odometry can be used for (at least how we use it): programming canned robot actions, like spinning 180°, spinning around other robots (a pirouette), holding the robot's bearing, and a lot more. Pose estimation can be very useful as well. Our robot could score/pick up on both sides, so we used it so that when we got close to the grid, the robot would position the arm toward that side and snap to an angle perpendicular to the grid. In 2022, when we got close to the goal, the shooter wheel would spin up so we could shoot faster. Another pretty common one is autonomously chasing after game pieces to make acquisition easier during teleop.


Well said, but an HSV color detection or AI object detection pipeline would probably be better for finding objects, because you don't always know their position on the field.


I'm not sure how others use it, but we still use odometry (the biggest part being the gyro) to do it, in addition to color/object detection. We also detect when the robot isn't moving, so that we don't burn the carpet or keep pushing against something in the way. However, I feel those are kind of out of scope for the question.

**Edit: added more thoughts.

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.