Gyro vs. Encoders for Driving Straight

At the moment, our algorithm for driving straight doesn’t work too well. We just start at a set speed and increment the left and right speeds by small amounts based on encoder readings. We get about 3-5 inches of drift over an 80 inch drive, which isn’t good enough for tasks like this year’s, where we needed to drive to a position to shoot.

We’re looking at implementing PID with either the encoders or the navX MXP, so which works better? I would think the gyro would be better, since it doesn’t drift from wheel slip, and it also seems simpler to implement, but I read on a different thread that encoders are really the right sensor to use.

Additionally, is it worth running PID control with both encoders and gyro and averaging the outputs? Would that improve accuracy enough to justify the extra work?

The simplest place to start is to perform your moves as a series of straight drives and turns in place.

Use the gyro’s heading to rotate to a specified angle, then use your encoders to drive set distances in straight lines.

All you have to develop are two commands/methods: one that drives straight for some arbitrary distance, and another that rotates to some arbitrary heading. Develop both of these and tune them independently. Once they both get you to your destination position/heading within your acceptable error, you can create command groups that combine driving straight and rotating in the right order to perform more complex actions.

Your drive-straight command/method should use the encoder ticks to determine distance traveled: did you go as far as needed? The gyro is used to hold the heading the robot was at when the drive-straight command began. If the heading slips off zero (or whatever it was at the beginning of the move), adjust the relative speed of each side of the drivetrain to get back to zero.

You should be able to get both of these working pretty well with a simple proportional controller.
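For concreteness, here’s a rough Java sketch of those two methods as simple proportional controllers. It isn’t tied to any particular library: getHeadingDegrees(), getAverageDistanceInches(), and setDrive() are placeholders for your own gyro/encoder/motor wrappers, and every gain and tolerance below is made up and will need tuning on your robot.

abstract class StraightAndTurn {
    static final double KP_DRIVE   = 0.03; // output per inch of distance error (tune me)
    static final double KP_HEADING = 0.02; // output per degree of heading error (tune me)

    abstract double getHeadingDegrees();               // accumulated gyro angle
    abstract double getAverageDistanceInches();        // (left encoder + right encoder) / 2
    abstract void setDrive(double left, double right); // motor outputs, -1 to 1

    // Drive straight for distanceInches, holding the heading we started at.
    void driveStraight(double distanceInches) {
        double targetHeading = getHeadingDegrees();
        double start = getAverageDistanceInches();
        double error = distanceInches;
        while (Math.abs(error) > 1.0) {                       // 1 inch tolerance
            error = distanceInches - (getAverageDistanceInches() - start);
            double forward = clamp(KP_DRIVE * error, -0.6, 0.6);
            double turn = KP_HEADING * (targetHeading - getHeadingDegrees());
            setDrive(forward + turn, forward - turn);         // steer back toward heading
        }
        setDrive(0, 0);
    }

    // Rotate in place to an absolute gyro heading.
    void turnToHeading(double targetDegrees) {
        double error = targetDegrees - getHeadingDegrees();
        while (Math.abs(error) > 2.0) {                       // 2 degree tolerance
            error = targetDegrees - getHeadingDegrees();
            double turn = clamp(KP_HEADING * error, -0.5, 0.5);
            setDrive(turn, -turn);
        }
        setDrive(0, 0);
    }

    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}

In a real robot program you’d run one iteration of each loop per control cycle instead of blocking, and a pure P turn may stall just short of the target due to friction, so expect to add a minimum output or a small I term. But the structure really is this simple.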

Gyro for keeping straight.

As to autonomous stuff, it seems like more and more teams are going to a path-planner solution. Before autonomous starts, a set of x/y coordinate waypoints is fed into the algorithm, which outputs the desired velocity for each side of the drivetrain at each point in time over the path traversal.

In this case, each side of the drivetrain has an encoder feeding a PID loop that drives the actual wheel velocity to the desired velocity from the path planner. To augment this, a gyro may be used to provide a “correction factor” that reduces error from wheel scrub or from going over rough defenses.
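As a sketch of that idea (every name here is illustrative, not from any particular planner or library), the per-tick correction might look like this in Java:

class PathCorrection {
    static final double K_HEADING = 0.05; // velocity units per degree of error (tune me)

    // Returns {left, right} velocity setpoints for the wheel-velocity PIDs:
    // the planner's desired velocities, nudged by a gyro-based heading correction.
    static double[] correctedSetpoints(double plannedLeftVel, double plannedRightVel,
                                       double plannedHeadingDeg, double gyroHeadingDeg) {
        double correction = K_HEADING * (plannedHeadingDeg - gyroHeadingDeg);
        return new double[] { plannedLeftVel + correction, plannedRightVel - correction };
    }
}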

Net result in 2016 was a high-goal autonomous routine under the low bar that scored about 50% of the time. Not as good as some of the really slick vision systems, but we’re still pretty proud of it…

I can’t say much about using gyros to stay straight. I know we tried to use them before, but there was too much drift to be useful. However, I was not personally involved in that, so I’m not sure.

On the other hand, we have had a lot of success with encoders and PID; we used them all through the season. Wheel slip was definitely not a problem (we had treads this year): even on slippery tile flooring our robot had no trouble. What we did have was a major problem with our treads becoming misaligned after lots of hard driver practice on our practice bot. The robot would drift to one side even though both encoders were reading the same value, because of the bent tread.

Also, if you do end up using encoders, it is very important to make sure the slop (backlash) in both gearboxes is taken up in the forward direction, especially if your encoders aren’t on the output shaft (they should never be anywhere else).

If you would like our encoder PID code, I have it in both Java and Labview, and I would be happy to share.

On another note, are you trying to shoot a high goal using just encoders? I wouldn’t recommend it; the goal is very small for that. It is best to use vision tracking to find the retroreflective tape on the goals. If you use LabVIEW, I can help you with this. If you don’t have vision, going for a low-goal auto is a better bet: better to get 5 points most of the time than 10 some of the time while losing the ball the rest of the time. Encoders are more than adequate for a low-goal auto.

I hope this helps

I think you are traveling the wrong route, so to speak. Three inches of drift left to right really isn’t bad. The reality of driving an FRC robot is that when you add up the drivetrain backlash, bumps, slips, and other mechanical losses, you will never drive perfectly straight.

That’s why vision or sensors of some type are a necessity; it’s the law of diminishing returns. You can spend weeks or months working to get your robot to drive perfectly straight, but when you build your 2017 robot you will face a whole new set of challenges getting it perfect. Or you can spend that same time learning to use the vision examples, so that when you hit a bump or a wall, or when you start using your new robot for 2017, you’ll be able to hit the shot every time.

If you still want to continue optimizing driving straight, gerthworm nailed it.

The addition of a P term to each side of your drivetrain utilizing the gyro will get you darn close to perfect.

P is the proportional, or error, term. It’s calculated like this:

Proportional_Term = Constant * (Desired_Gyro_Angle - Actual_Gyro_Angle)
Left_Drive = Left_Drive + Proportional_Term
Right_Drive = Right_Drive - Proportional_Term

You may need to swap the +/- in the calculation depending on your drivetrain direction. Start with a very small Constant, like 0.05 or 0.1, and increase it until you start driving straight.

This is the most basic form of PID driving straight.

Edit: On another note, try not to saturate your drivetrain. That means try not to send it a full +1 or -1: once both sides are pinned at full output, the correction term has no headroom left to steer with. There are a number of ways to scale the output so that you can still maintain straight driving at full speed; one is sketched below.
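One simple approach is to rescale rather than clip: if the correction would push either side past full output, divide both sides by the larger magnitude so their ratio (and therefore the arc the robot drives) is preserved. A sketch, not from any library:

// base is the commanded forward speed; correction is the gyro P term.
static double[] desaturate(double base, double correction) {
    double left = base + correction;
    double right = base - correction;
    double max = Math.max(Math.abs(left), Math.abs(right));
    if (max > 1.0) {       // would exceed full output: scale both sides together
        left /= max;
        right /= max;
    }
    return new double[] { left, right };
}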

Thanks for the responses! Seems the general idea is that using encoders and a gyro together is best.

We did try camera vision (didn’t end up finishing it, though). That was just an example to say that we would like higher accuracy.

That Java example would be great, if you don’t mind!

I sent you a PM with the code.

Can you send it to me as well? Thanks a lot!

Use encoders for distance, and use the gyro for driving straight. With a properly tuned PID and a good gyro, you’ll be off by at most 1/2" over 30 ft. Once we redid our autonomous for champs, we didn’t need our camera half the time to score.

That said, with a shooter you want a camera for locking onto the target. You’re typically going to slide laterally in some situations (such as crossing a defense), and only a camera will allow correcting for that error. There’s default image-processing code in the FRC install every year; I’d recommend looking at it. The roboRIO is powerful enough to process 320x240 @10 fps.

This. The NavX functions as a great gyro with very little drift; highly recommend it.

The roboRIO is powerful enough to process 320x240 @10 fps.

Yep. That’s what we used.

Depending on what you want to do, it may be a mistake to process a camera stream real-time.

If you have a turret that aims while you are driving, it makes sense to process a video stream. You can be aiming all the time, so that when you stop you don’t have to move very much.

If you’re using the drivetrain, however, it’s a much less costly proposition in terms of CPU to stop, take a picture, then calculate how much you need to turn. Once you’ve completed your turn, you can take another picture to verify you are aimed correctly, or run the loop again and turn a bit more.

Very few applications in FRC require real-time video processing at any appreciable frame rate. Unless you are 254.
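For what it’s worth, here’s a rough Java sketch of that stop-snap-turn loop. takePicture(), angleToTarget(), and turnDegrees() are hypothetical stand-ins for your camera capture, your vision pipeline, and a gyro-based turn like the ones described earlier in the thread:

abstract class SnapshotAimer {
    static final double TOLERANCE_DEG = 1.0; // close enough to shoot (tune me)
    static final int MAX_TRIES = 5;          // bail out rather than loop forever

    abstract byte[] takePicture();               // grab one frame while stopped
    abstract double angleToTarget(byte[] image); // vision: degrees off center
    abstract void turnDegrees(double degrees);   // gyro-based turn in place

    void aim() {
        for (int i = 0; i < MAX_TRIES; i++) {
            double offset = angleToTarget(takePicture());
            if (Math.abs(offset) <= TOLERANCE_DEG) {
                return;                          // aimed: take the shot
            }
            turnDegrees(offset);                 // turn, settle, then re-check
        }
    }
}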

We used this approach and were good enough to cross the ramparts, rock wall, moat, and rough terrain and still score a low goal about 50% of the time. We were always close to scoring as long as we did not run into another bot.

There are a couple of teams doing this (and yours may be one of them), but I’d like to emphasize that you don’t need a path planner (and all the complexity that comes with one) to do really cool autonomous modes. 971’s autonomous modes do not have a planner involved, and they are plenty complicated. We have hand-tuned trapezoidal motion profiles in distance and theta, and that’s enough to do some pretty neat stuff.

There are places where a real planner with feedback would do better than simple profiles, but you don’t need them in FRC.
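For anyone curious what a trapezoidal profile amounts to, it’s just a velocity-versus-time schedule: ramp up at a fixed acceleration, cruise, then ramp down to stop exactly at the goal. A minimal one-dimensional sketch in Java (usable for either distance or theta; this is an illustration, not 971’s actual code):

class TrapezoidProfile {
    // Desired velocity at time t for a move of length goal (assumes goal > 0).
    static double velocityAt(double goal, double maxVel, double maxAccel, double t) {
        double tAccel = maxVel / maxAccel;                // time to reach cruise speed
        double dAccel = 0.5 * maxAccel * tAccel * tAccel; // distance spent accelerating
        if (2 * dAccel > goal) {                          // short move: triangle profile
            tAccel = Math.sqrt(goal / maxAccel);
            maxVel = maxAccel * tAccel;
            dAccel = goal / 2;
        }
        double tCruise = (goal - 2 * dAccel) / maxVel;
        if (t < tAccel)               return maxAccel * t; // ramping up
        if (t < tAccel + tCruise)     return maxVel;       // cruising
        if (t < 2 * tAccel + tCruise)                      // ramping down
            return maxVel - maxAccel * (t - tAccel - tCruise);
        return 0.0;                                        // move complete
    }
}

Each control tick you’d feed the output to your wheel-velocity loops (or integrate it into a position setpoint).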

Nice!

Until we had vision working, we scored 5+ one-ball low-goal autonomous modes under the low bar with 100% reliability at Davis, using motion profiles in theta and distance, good control loops, and very careful alignment. It’s all about staying away from traction limits and designing nice control loops that track well. We use a gyro for heading and encoders for distance, which works great.

Keep it as simple as possible while still achieving your goals. (I didn’t mean to pick on you in particular, and we are definitely a team that pushes the limits, but I want to emphasize KISS. It’s better to shoot lower and succeed than to bite off too much, especially given the level I’m reading the OP to be at.)

Me too?

Did anyone successfully use the KoP gyro from Analog Devices for driving straight this season? What did your setup look like? Could you send a link to the library you used, or just share your code?


We did, using the example supplied with LabVIEW this year. It was a fantastic gyro. Writing code for a gyro is remarkably easy: it’s really just an accumulator that sums the gyro’s rate output over time.

Here’s some basic code for an Arduino that can easily be rewritten in C++ or LabVIEW:

http://playground.arduino.cc/Main/Gyro
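For comparison, the same accumulator idea expressed in Java (readRateDegPerSec() is a placeholder for however you sample your gyro’s rate output):

abstract class GyroAccumulator {
    private double headingDeg = 0.0;            // accumulated angle, degrees
    private long lastTimeNs = System.nanoTime();

    abstract double readRateDegPerSec();        // raw angular rate from the gyro

    // Call every loop iteration: heading is just the integral of rate.
    void update() {
        long now = System.nanoTime();
        double dt = (now - lastTimeNs) / 1e9;   // seconds since last call
        lastTimeNs = now;
        headingDeg += readRateDegPerSec() * dt; // rate x time = change in angle
    }

    double getHeadingDegrees() {
        return headingDeg;
    }
}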