25-01-2016, 17:52
slibert
Software Mentor
AKA: Scott Libert
FRC #2465 (Kauaibots)
Team Role: Mentor
 
Join Date: Oct 2011
Rookie Year: 2005
Location: Kauai, Hawaii
Posts: 356
Re: Speed Encoders or Gyros?

Quote:
Originally Posted by akavoor13
We are trying to determine whether to use a speed encoder or a gyro. Can someone explain the difference between the two in terms of what they accomplish? Also, any pointers to code samples would be greatly appreciated! Thanks!
One hopefully helpful way to think about this is to compare the driver's joystick input commands with the robot's actual pose on the field: its X/Y position (as if you were looking down at a Google map of the field, showing where the robot currently sits) plus the angle of rotation of the robot chassis relative to the "head" of the field.

Assume a holonomic drive system (it's conceptually easier to understand). When a driver moves the joystick in the X, Y and Z (rotation) directions, these inputs are translated into commands to the wheels.

The drive system translates these inputs into outputs to the individual wheels using an "inverse kinematic" equation.
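To make that concrete, here's a minimal Java sketch of the inverse-kinematic step for a mecanum-style holonomic drive. The wheel ordering, sign conventions, and normalization here are illustrative assumptions, not any particular team's or library's code:

Code:
// Minimal sketch of an inverse-kinematic step for a mecanum (holonomic) drive.
// The wheel ordering and normalization strategy are illustrative assumptions.
public class MecanumInverseKinematics {

    /** Converts driver X (strafe), Y (forward), and Z (rotation) commands
     *  into four wheel speeds, each clamped to the range [-1, 1]. */
    public static double[] toWheelSpeeds(double x, double y, double z) {
        double frontLeft  = y + x + z;
        double frontRight = y - x - z;
        double rearLeft   = y - x + z;
        double rearRight  = y + x - z;

        double[] speeds = { frontLeft, frontRight, rearLeft, rearRight };

        // Scale everything down if any command exceeds full throttle.
        double max = 0.0;
        for (double s : speeds) {
            max = Math.max(max, Math.abs(s));
        }
        if (max > 1.0) {
            for (int i = 0; i < speeds.length; i++) {
                speeds[i] /= max;
            }
        }
        return speeds;
    }
}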

So in theory, the encoders read the amount of rotation that actually occurs at each wheel. And in theory, those measurements can be fed back through the (forward) kinematic equation to track how far the robot actually traveled. Sounds pretty great, right?
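In code, that feed-back step might look like this sketch of the mecanum forward kinematics. The wheel ordering matches the sketch above, and the geometry constant is just an illustrative placeholder:

Code:
// Minimal sketch of running measured wheel motion back through the forward
// kinematics of a mecanum drive to estimate chassis displacement.
// Inputs are per-wheel distance deltas from the encoders (same units each).
public class MecanumForwardKinematics {

    /** Chassis displacement: { dx (strafe), dy (forward), dTheta (rotation) }.
     *  halfTrackPlusHalfBase is a chassis geometry constant (placeholder). */
    public static double[] toChassisDelta(double fl, double fr,
                                          double rl, double rr,
                                          double halfTrackPlusHalfBase) {
        double dy     = (fl + fr + rl + rr) / 4.0;
        double dx     = (fl - fr - rl + rr) / 4.0;
        double dTheta = (fl - fr + rl - rr) / (4.0 * halfTrackPlusHalfBase);
        return new double[] { dx, dy, dTheta };
    }
}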

The problem is that in actual practice, the motion the encoders measure at the wheels is not always the same as the motion the robot actually makes. For example, the wheels can slip against the carpet as the robot moves, or a collision can momentarily lift the wheels off the floor entirely.

So in actual practice this means:

- you can't use "time" and the "motor rates" sent to the drive system to *accurately* track the robot's position over time; some additional feedback (often referred to as "closed loop" processing, sketched just after this list) is needed.
- even if you feed the encoder readings back through the kinematic equation to track position changes, errors will accumulate over time.
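Here's a minimal sketch of what that "closed loop" processing can look like for a single wheel: compare the rate the encoder reports against the rate you commanded, and let the error drive a correction. The gains and units are illustrative assumptions:

Code:
// Minimal sketch of closed-loop wheel velocity control: the encoder's
// measured rate is compared to the commanded rate, and the error drives
// a correction. Gains, units, and loop period are illustrative.
public class WheelVelocityController {
    private final double kP;
    private final double kI;
    private double integral = 0.0;

    public WheelVelocityController(double kP, double kI) {
        this.kP = kP;
        this.kI = kI;
    }

    /** @param setpoint desired wheel speed (e.g. encoder ticks/second)
     *  @param measured speed reported by the encoder, same units
     *  @param dt       loop period in seconds
     *  @return motor output to apply, clamped to [-1, 1] */
    public double calculate(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        double output = kP * error + kI * integral;
        return Math.max(-1.0, Math.min(1.0, output));
    }
}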

So, now on to the gyro:

Integrating gyros like the navX-MXP will keep track of the angle of rotation. These values will drift over time, on average approximately 1 degree/minute (less if the robot is still, more if the robot is traveling over bumpy terrain).

So you can use this to keep track of your orientation relative to the field.
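One common use of that field-relative heading is "field-oriented" driving: rotate the driver's X/Y command from the field frame into the robot frame before it hits the inverse kinematics. A minimal sketch, assuming the heading is in degrees and counterclockwise-positive (adjust the signs to match whatever your gyro reports):

Code:
// Minimal sketch of field-oriented driving: the driver's field-frame command
// is rotated by the robot's current heading into the robot frame so it can
// feed the inverse kinematics. Heading convention is an assumption; flip
// signs as needed for your gyro.
public class FieldOrientedDrive {

    /** Rotates a field-frame driver command (x strafe, y forward) into the
     *  robot frame. headingDegrees is the robot's angle relative to the
     *  field, counterclockwise positive. */
    public static double[] toRobotRelative(double fieldX, double fieldY,
                                           double headingDegrees) {
        double theta = Math.toRadians(headingDegrees);
        double robotX =  fieldX * Math.cos(theta) + fieldY * Math.sin(theta);
        double robotY = -fieldX * Math.sin(theta) + fieldY * Math.cos(theta);
        return new double[] { robotX, robotY };
    }
}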

And some teams (our team 2465 - Kauaibots - is doing this this year) use both together.

However, since each of these can have issues (encoders due to wheel slip and collisions, and gyros due to drift), we are augmenting that with a camera.

The big idea is that the camera with vision processing can set a target (in relative distance and angle) for the robot to travel to. And then based upon that, the encoder data and the gyro data can be used to track the motion of the robot over a short period of time until it reaches the target.
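A minimal sketch of that idea: the vision system hands over a relative angle and distance once, and then each loop the gyro heading and the encoder-derived distance close the loop until the robot arrives. The gains, tolerance, and structure here are illustrative assumptions, not our actual code:

Code:
// Minimal sketch of driving to a camera-supplied target: vision provides a
// relative angle and distance once, then the gyro and encoders close the
// loop. Gains and tolerances are illustrative placeholders.
public class DriveToTarget {
    private static final double kTurnP  = 0.02; // output per degree of heading error
    private static final double kDriveP = 0.5;  // output per meter of remaining distance
    private static final double kDistanceTolerance = 0.05; // meters

    private final double targetHeadingDegrees;  // gyro heading when pointed at the target
    private final double targetDistanceMeters;  // distance reported by vision

    public DriveToTarget(double currentHeadingDegrees, double visionAngleOffsetDegrees,
                         double visionDistanceMeters) {
        this.targetHeadingDegrees = currentHeadingDegrees + visionAngleOffsetDegrees;
        this.targetDistanceMeters = visionDistanceMeters;
    }

    /** True once the encoder-derived distance says we've arrived. */
    public boolean isFinished(double distanceTraveledMeters) {
        return Math.abs(targetDistanceMeters - distanceTraveledMeters) < kDistanceTolerance;
    }

    /** Call each loop with the latest gyro heading and the encoder-derived
     *  distance traveled since the command started.
     *  @return { forward command, turn command }, each roughly in [-1, 1] */
    public double[] calculate(double headingDegrees, double distanceTraveledMeters) {
        double headingError = targetHeadingDegrees - headingDegrees;
        double remaining    = targetDistanceMeters - distanceTraveledMeters;
        double turn    = kTurnP * headingError;
        double forward = Math.max(-1.0, Math.min(1.0, kDriveP * remaining));
        return new double[] { forward, turn };
    }
}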

Cameras update relatively slowly, while wheel encoders and gyros update relatively quickly, so the combination works out pretty well.
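One simple way to combine the two rates (far simpler than a proper filter, and just a sketch with a made-up blend weight): integrate the fast encoder/gyro odometry every loop, and whenever a fresh camera measurement shows up, nudge the estimate toward it.

Code:
// Minimal sketch of blending fast odometry (encoders + gyro, every loop)
// with slower vision measurements (applied only when a new frame arrives).
// The weighted blend and its weight are illustrative assumptions standing
// in for a more rigorous filter.
public class PoseEstimator {
    private double x, y, headingDegrees;
    private static final double kVisionWeight = 0.2; // how strongly to trust each vision fix

    /** Fast path: apply this loop's chassis displacement, assumed to already
     *  be rotated into the field frame using the gyro heading. */
    public void addOdometry(double dx, double dy, double dThetaDegrees) {
        x += dx;
        y += dy;
        headingDegrees += dThetaDegrees;
    }

    /** Slow path: nudge the estimate toward a vision-derived pose. */
    public void addVisionMeasurement(double visionX, double visionY,
                                     double visionHeadingDegrees) {
        x = (1 - kVisionWeight) * x + kVisionWeight * visionX;
        y = (1 - kVisionWeight) * y + kVisionWeight * visionY;
        headingDegrees = (1 - kVisionWeight) * headingDegrees
                       + kVisionWeight * visionHeadingDegrees;
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeadingDegrees() { return headingDegrees; }
}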

If it sounds like a bit of work, it is. But it's not that different from the type of processing that goes on in today's self-driving and driver-assist vehicles like the Google Car and the Tesla Model S, so it's a good thing to learn. I'd recommend assessing how much of that work your team can handle this year, and setting anything beyond that as a stretch goal to focus on in the upcoming off-season. Having these skills and experience under your belt will likely be very important if you pursue a career in robotics or other control software development.