Speed Encoders or Gyros?

We are trying to determine whether to use a speed encoder or a gyro. Can someone explain the difference between the two in terms of what they accomplish? Also, any pointers to code samples would be greatly appreciated! Thanks!

Gyros give you a “rate of rotation”. Reading one directly only tells you how fast the robot is turning. If you integrate the reading over time, you can use it to tell how far you have turned.
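For example, here is a minimal Java sketch of that integration. The rate value is assumed to come from whatever rate gyro you have (WPILib’s gyro classes expose a getRate() method in degrees per second), and update() would be called from your periodic robot loop.

```java
// Minimal sketch: integrate a gyro's rate-of-rotation into a heading angle.
// The rate passed in is whatever your rate sensor reports, in degrees/second.
public class HeadingIntegrator {
    private double headingDegrees = 0.0;
    private long lastTimeNanos = System.nanoTime();

    /** Call this periodically (e.g. every robot loop iteration). */
    public void update(double rateDegreesPerSecond) {
        long now = System.nanoTime();
        double dtSeconds = (now - lastTimeNanos) / 1.0e9;
        lastTimeNanos = now;
        headingDegrees += rateDegreesPerSecond * dtSeconds;  // integrate rate over time
    }

    public double getHeadingDegrees() {
        return headingDegrees;
    }
}
```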

Encoders are typically used to determine how far a shaft has turned. You can figure out how fast it is turning by dividing the change in distance by the time interval over which it occurred.
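For example, here is a minimal sketch of that conversion for an encoder on a wheel shaft; the counts-per-revolution and wheel diameter are made-up example values you would replace with your own.

```java
// Minimal sketch: turn raw encoder counts into distance and speed.
// The constants below are example values only; substitute your own hardware's.
public class EncoderMath {
    static final double COUNTS_PER_REV = 360.0;      // depends on your encoder
    static final double WHEEL_DIAMETER_M = 0.1524;   // 6 inch wheel, in meters
    static final double METERS_PER_COUNT =
            (Math.PI * WHEEL_DIAMETER_M) / COUNTS_PER_REV;

    /** Distance traveled for a given number of encoder counts. */
    static double distanceMeters(int counts) {
        return counts * METERS_PER_COUNT;
    }

    /** Speed from the change in distance over a time interval. */
    static double speedMetersPerSecond(int deltaCounts, double dtSeconds) {
        return distanceMeters(deltaCounts) / dtSeconds;
    }
}
```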

Both have advantages and disadvantages.

As for example code, many samples are available, but we will need to know what you are using to code: LabVIEW, Java, or C++.

One hopefully helpful way to think about this is to contrast the driver’s joystick input commands with the robot’s actual pose on the field: its X/Y coordinates (as if you were looking down at a Google Maps view of the field, showing where the robot is currently positioned) and the rotation angle of the robot chassis relative to the “head” of the field.

Assume a holonomic drive system (it’s conceptually easier to understand). When a driver moves the joystick in the X, Y and Z (rotation) directions, these inputs are translated into commands to the wheels.

The drive system translates these inputs into outputs to the individual wheels using an “inverse kinematic” equation.
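For a mecanum (holonomic) chassis, that inverse-kinematic mixing can look like the sketch below. This uses one common sign convention; the correct signs depend on how your wheels and motors are oriented.

```java
// Minimal sketch of inverse kinematics for a mecanum (holonomic) drive:
// joystick forward/strafe/rotate commands are mixed into four wheel outputs.
public class MecanumInverseKinematics {
    /** Returns wheel outputs {frontLeft, frontRight, rearLeft, rearRight}, each in [-1, 1]. */
    public static double[] toWheelOutputs(double forward, double strafe, double rotate) {
        double fl = forward + strafe + rotate;
        double fr = forward - strafe - rotate;
        double rl = forward - strafe + rotate;
        double rr = forward + strafe - rotate;

        // Normalize so no wheel command exceeds full power.
        double max = Math.max(Math.max(Math.abs(fl), Math.abs(fr)),
                              Math.max(Math.abs(rl), Math.abs(rr)));
        if (max > 1.0) {
            fl /= max; fr /= max; rl /= max; rr /= max;
        }
        return new double[] { fl, fr, rl, rr };
    }
}
```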

So in theory, the encoders will read the amount of rotation that actually occurs at each wheel. And in theory, these values can then be fed back through the forward (“kinematic”) equation and used to track how far the robot actually traveled. Sounds pretty great, right?
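As a sketch of what that looks like in code, here is a minimal mecanum odometry class that feeds wheel-distance deltas back through the forward kinematics (using the same wheel ordering as the inverse-kinematics sketch above) and accumulates a field-relative position using a gyro heading. It is a simplified illustration, not a drop-in implementation.

```java
// Minimal sketch: feed mecanum wheel-distance deltas back through the
// forward kinematics to estimate how far the chassis moved, then rotate
// that motion by the current heading to accumulate a field-relative position.
public class MecanumOdometry {
    private double fieldX = 0.0;   // meters, field frame
    private double fieldY = 0.0;   // meters, field frame

    /**
     * dFL, dFR, dRL, dRR: distance each wheel rolled since the last update (meters).
     * headingDeg: robot heading from the gyro (degrees, counterclockwise positive).
     */
    public void update(double dFL, double dFR, double dRL, double dRR,
                       double headingDeg) {
        // Forward kinematics: average the wheel motions back into chassis motion.
        double forward = (dFL + dFR + dRL + dRR) / 4.0;
        double strafe  = (dFL - dFR - dRL + dRR) / 4.0;

        // Rotate the robot-relative motion into the field frame.
        double headingRad = Math.toRadians(headingDeg);
        fieldX += forward * Math.cos(headingRad) - strafe * Math.sin(headingRad);
        fieldY += forward * Math.sin(headingRad) + strafe * Math.cos(headingRad);
    }

    public double getFieldX() { return fieldX; }
    public double getFieldY() { return fieldY; }
}
```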

The problem is that in actual practice, the motion the encoders measure is not the same as the motion you might expect. For example, the wheels can slip against the carpet when the robot moves, or a collision could cause the wheels to lose contact with the floor entirely.

So in actual practice this means:

  • you can’t use “time” and the “motor rates” sent to the drive system to accurately track the position of the robot over time. Some additional feedback (often referred to as “closed loop” processing) is needed; see the sketch after this list.
  • even if you use the encoder readings fed back through the kinematics equation to track position changes, errors will accumulate over time.
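Here is the sketch mentioned above: a bare-bones proportional controller that uses encoder feedback (“closed loop”) to drive toward a target distance instead of dead reckoning on time and motor output. The gain and tolerance are made-up values you would tune on your own robot.

```java
// Minimal sketch of "closed loop" processing: a proportional controller that
// drives toward a target distance using encoder feedback.
public class DriveToDistance {
    private static final double kP = 0.8;            // proportional gain (tune!)
    private static final double TOLERANCE_M = 0.02;  // "close enough" band

    /**
     * targetMeters: how far we want to have traveled.
     * measuredMeters: distance reported by the encoders so far.
     * Returns a motor output in the range [-1, 1].
     */
    public double calculate(double targetMeters, double measuredMeters) {
        double error = targetMeters - measuredMeters;
        if (Math.abs(error) < TOLERANCE_M) {
            return 0.0;                               // at the target: stop
        }
        double output = kP * error;                   // output shrinks as we approach
        return Math.max(-1.0, Math.min(1.0, output)); // clamp to motor range
    }
}
```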

So, now on to the gyro:

Integrating gyros like the navX-MXP will keep track of the angle of rotation. These values will drift over time, on average approximately 1 degree/minute (less if the robot is still, more if the robot is traveling over bumpy terrain).

So you can use this to keep track of your orientation relative to the field.
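As a concrete example of using the heading, a holonomic drive can be made “field oriented” by rotating the driver’s field-relative joystick commands into the robot frame before running the inverse kinematics. This sketch assumes a heading that increases counterclockwise; flip signs if your gyro reports the opposite.

```java
// Minimal sketch: use the gyro heading to make a holonomic drive "field
// oriented" -- the driver's forward/strafe commands are rotated by the robot's
// heading so "push forward" always means "away from the driver", regardless of
// which way the chassis happens to be pointing.
public class FieldOriented {
    /** Returns {forward, strafe} in the robot frame. */
    public static double[] toRobotFrame(double fieldForward, double fieldStrafe,
                                        double gyroHeadingDeg) {
        double headingRad = Math.toRadians(gyroHeadingDeg);
        double robotForward =  fieldForward * Math.cos(headingRad)
                             + fieldStrafe  * Math.sin(headingRad);
        double robotStrafe  = -fieldForward * Math.sin(headingRad)
                             + fieldStrafe  * Math.cos(headingRad);
        return new double[] { robotForward, robotStrafe };
    }
}
```

The resulting robot-frame forward/strafe values would then feed the inverse-kinematics mixing shown earlier.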

And some teams use both together; our team (2465, the Kauaibots) is doing that this year.

However, since each of these can have issues (encoders due to wheel slip and collisions, gyros due to drift), we are augmenting them with a camera.

The big idea is that the camera, with vision processing, can set a target (a relative distance and angle) for the robot to travel to. Then the encoder and gyro data can be used to track the robot’s motion over the short period of time it takes to reach that target.

Cameras update relatively slowly, while wheel encoders and gyros update relatively quickly, so the two complement each other well.
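A minimal sketch of that idea: latch the target whenever the (slow) camera produces a new measurement, and update the remaining distance/angle from the (fast) encoder and gyro data in between. All of the class and method names here are hypothetical placeholders, and the tolerances are example values.

```java
// Minimal sketch of the "slow camera, fast odometry" idea: the camera
// occasionally reports the remaining distance/angle to a target, and the
// encoders + gyro refine that estimate at the fast loop rate in between.
public class TargetTracker {
    private double remainingDistanceM;   // distance still to travel
    private double remainingAngleDeg;    // rotation still to make

    /** Called whenever the (slow) vision pipeline produces a new result. */
    public void onCameraUpdate(double distanceToTargetM, double angleToTargetDeg) {
        remainingDistanceM = distanceToTargetM;
        remainingAngleDeg = angleToTargetDeg;
    }

    /** Called every fast robot loop with odometry deltas since the last loop. */
    public void onOdometryUpdate(double distanceTraveledM, double angleTurnedDeg) {
        remainingDistanceM -= distanceTraveledM;
        remainingAngleDeg -= angleTurnedDeg;
    }

    public boolean atTarget() {
        return Math.abs(remainingDistanceM) < 0.05      // example tolerance (m)
            && Math.abs(remainingAngleDeg) < 2.0;       // example tolerance (deg)
    }
}
```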

If it sounds like a bit of work, it is. But it is not that different from the type of processing that goes on in today’s advanced self-driving vehicles like the Google Car and the Tesla Model S, so it’s a good thing to learn. I’d recommend assessing how much of that work your team can handle this year, and setting anything beyond that as a stretch goal to focus on in the upcoming off-season. Having these skills and this experience under your belt will likely be very important if you pursue a career in robotics or other control-software development.

Thanks, guys! I am using Java, by the way. If you can tell me where to look for sample code, that would be great!

It would help the community help you if you could indicate what kind of drive system you are using (arcade, omniwheel, mecanum), how many motors you are using, etc.

As noted in my previous post, you will likely need to understand the “kinematics equation” and the “inverse kinematics equation” for your drive system if you want to use the encoders to track position. These equations are different for each type of drive system, and they take into account your wheel diameter, the distance between your left and right wheels (track width), and the distance between your front and back wheels (wheelbase).
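As a rough illustration of where those dimensions show up, the sketch below collects the geometry constants a mecanum kinematics calculation typically needs: the wheel circumference converts encoder rotation to distance, and (for rotation in physical units) the wheel contribution of a chassis turn scales with half the sum of the track width and wheelbase. The numbers are placeholders, not your robot’s actual geometry.

```java
// Minimal sketch of how drive geometry enters the kinematics.
// All dimensions below are example values only.
public class DriveGeometry {
    static final double WHEEL_DIAMETER_M = 0.1524;  // 6 in wheels
    static final double TRACK_WIDTH_M    = 0.55;    // left-right wheel spacing
    static final double WHEEL_BASE_M     = 0.60;    // front-back wheel spacing

    /** Meters of travel per full wheel revolution (wheel circumference). */
    static double metersPerWheelRev() {
        return Math.PI * WHEEL_DIAMETER_M;
    }

    /** Wheel speed (m/s) needed per rad/s of chassis rotation, for a mecanum drive. */
    static double wheelSpeedPerRadPerSec() {
        return (TRACK_WIDTH_M + WHEEL_BASE_M) / 2.0;
    }
}
```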

I also highly recommend doing some experimentation on your own (by displaying the values from the encoders on all your wheels on the driver station dashboard). This will help you get a conceptual “feel” for the encoder data and how it relates to the motion of your robot.
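A minimal sketch of that, assuming WPILib’s Encoder and SmartDashboard classes; the DIO channel numbers and distance-per-pulse value are placeholders for your own wiring and gearing.

```java
// Minimal sketch: push encoder values to the dashboard every loop so you can
// watch them while you push the robot around or drive it.
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class EncoderTelemetry {
    private final Encoder leftEncoder  = new Encoder(0, 1);  // DIO channels (placeholders)
    private final Encoder rightEncoder = new Encoder(2, 3);

    public EncoderTelemetry() {
        leftEncoder.setDistancePerPulse(0.00133);   // meters per pulse (example value)
        rightEncoder.setDistancePerPulse(0.00133);
    }

    /** Call from the periodic robot loop (e.g. teleopPeriodic). */
    public void publish() {
        SmartDashboard.putNumber("Left distance (m)",  leftEncoder.getDistance());
        SmartDashboard.putNumber("Right distance (m)", rightEncoder.getDistance());
        SmartDashboard.putNumber("Left rate (m/s)",    leftEncoder.getRate());
        SmartDashboard.putNumber("Right rate (m/s)",   rightEncoder.getRate());
    }
}
```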

All the information presented has been good. Another resource that might help is the set of videos here: http://wp.wpi.edu/wpilib/robotics-videos/

You should look at the sensors playlist, which has a bunch of short videos (about 10 minutes each) that talk about the relative advantages of each of these sensors.

Brad