But I have a follow-up question: when creating motion plans, we can determine our expected orientation, and using a gyroscope we can tell if we are “on point” or not – but should we use this extra information or rely solely on wheel encoders? Do you use a gyro and a similar motion planning process to automate a rotation, or in practice do you find wheel encoders sufficient?
Perhaps to phrase the question a different way: if my robot has both wheel encoders and a gyroscope, and one sensor type disagrees with the other about my calculated trajectory, which do I trust? Or is there a smarter way to integrate all the inputs into the motion plan?
The answers to this question will vary a lot in the field of robotics. In FRC, however, your encoders are often your most reliable sensors on the bot. Proper execution of your motion planning and trajectory following software, with proper tuning, will get you a very consistent autonomous and should be all you need.
The proper answer to your question, however, is sensor fusion. Put simply, sensor fusion is the combination of multiple sensor signals into “one” signal through math and logic. For example, you could use your encoder values along with your robot’s geometry to get your angle of rotation, and you could also integrate your gyro’s angular-rate readings over time to get your angle of rotation. Through logic, you can then determine whether these values are plausible and act on them.
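To make that concrete, here is a minimal sketch of the two heading estimates described above. The track width, tolerances, and function names are all hypothetical, and the sign conventions are an assumption; the point is just that the same quantity comes from two independent sensors and can be cross-checked:

```python
import math

# Hypothetical geometry: distance between left and right wheels, in meters.
TRACK_WIDTH = 0.6

def heading_from_encoders(left_dist, right_dist):
    """Heading change (radians) implied by differential wheel travel."""
    return (right_dist - left_dist) / TRACK_WIDTH

def heading_from_gyro(rate_samples, dt):
    """Integrate gyro angular-rate samples (rad/s) over time step dt (s)."""
    return sum(rate * dt for rate in rate_samples)

# Example: robot pivots in place; right wheel forward, left wheel back.
enc_heading = heading_from_encoders(-0.30, 0.30)    # 1.0 rad from encoders
gyro_heading = heading_from_gyro([1.0] * 50, 0.02)  # 1.0 rad from the gyro

# Simple fusion logic: trust the encoders only if the two estimates agree.
estimates_agree = abs(enc_heading - gyro_heading) < math.radians(5)
```

When the two estimates diverge beyond the tolerance, that disagreement itself is useful information, as the next paragraph describes.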
I would suggest simply relying on encoders in FRC, but if you want to implement sensor fusion, I suggest relying on your encoder values for actual robot movement and using your gyro for checks and balances. (For example, if your robot’s wheels are not on the ground, your encoder values will still show the bot as moving, but your gyro will show no angular rate other than noise. From this, you can determine that your bot probably didn’t move.)
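The wheels-off-the-ground check in the parenthetical could be sketched as follows; the function name and noise tolerance are made up for illustration:

```python
def probably_slipping(encoder_turn_rate, gyro_turn_rate, tol=0.1):
    """True if the encoders report the robot turning (rad/s) while the gyro
    reads only noise - i.e. the wheels are spinning but the robot isn't."""
    return abs(encoder_turn_rate) > tol and abs(gyro_turn_rate) < tol

# Encoders imply 0.8 rad/s of turn, gyro reads near zero: likely not moving.
wheels_free = probably_slipping(0.8, 0.02)
# Both sensors agree the robot is turning: no slip suspected.
normal_turn = probably_slipping(0.8, 0.78)
```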
If you’re doing something similar to what 254 did in 2014, where you plan ahead and create a table of how far each wheel should have traveled and what the robot’s heading should be at every point in time, I would recommend using a gyro.
Our team tried this last year, and I found that even though the encoders stayed within a few percent of their targets during the path and finished within an inch of their final targets, the robot would often be misaligned by more than 10 degrees at the end of an S-curve. It always turned less than it should have.
One workaround was to generate the path for a robot an inch or two wider than the actual robot, so the wheels would travel farther during a turn. However, we found that no single width worked well for every path - we had to guess and check different widths for different paths, which defeated the purpose of motion planning.
Our final solution was to add a PID controller on the gyro to keep the robot on the right heading. The gyro greatly increased our accuracy and also made the robot less prone to oscillating or going off path when disturbed. The gyro never made it onto our final robot because our path was simply a straight line, but if we had done anything more than that, we would have used one.
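A heading-hold correction like the one described can be sketched as a proportional term layered on top of the path follower’s wheel commands. The gain, units, and sign convention (positive turn steers right-side-forward) are assumptions, and a real implementation would likely add the I and D terms and wrap the heading error:

```python
KP_HEADING = 0.02  # hypothetical proportional gain, output per degree of error

def heading_correction(target_heading_deg, gyro_heading_deg, kp=KP_HEADING):
    """P-term on heading error, to be mixed into the wheel outputs."""
    return kp * (target_heading_deg - gyro_heading_deg)

def corrected_outputs(left_cmd, right_cmd, target_heading_deg, gyro_heading_deg):
    """Bias the follower's left/right commands to close the heading error."""
    turn = heading_correction(target_heading_deg, gyro_heading_deg)
    return left_cmd - turn, right_cmd + turn

# Path says drive straight at half power, but the gyro shows the robot
# 10 degrees short of the planned heading: steer toward the target.
left, right = corrected_outputs(0.5, 0.5, 10.0, 0.0)
```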
There has been an influx of very good gyros into FRC. The NavX, specifically, stays accurate over long periods of time and even performs a form of sensor fusion like you described: it can be set up to re-zero against its magnetometer when the robot stops moving. If you can afford one, a good gyro will give you much better performance than encoders alone.
Buy a decent gyro and use that. While relying on the encoders may work most of the time, there’s a small probability that the wheels will slip or that your frame will twist and pull a wheel off the ground, leading to inaccurate measurements. You /should/ only ever have to drive straight and turn in place in a typical FRC autonomous mode.
In practice, a decent gyro will provide a quicker and more accurate estimate of yaw rate and heading for a differentially steered robot than using wheel encoders. Wheel slip is unavoidable, hard to measure, and varies a lot as your robot accelerates, decelerates, and bounces across an imperfect floor.
There are still valid uses of the encoders for measuring your heading, though. If your encoders aren’t moving, you can “hold” the gyro heading to prevent drift (this is most useful before the match starts). And if your encoders are estimating a drastically different yaw rate than the gyro, that’s a good indication that the robot has become stuck and has lost traction.
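The stationary “hold” idea above could look something like this small estimator; the class name and stop threshold are illustrative, not any library’s API:

```python
class HeadingEstimator:
    """Passes the gyro heading through while the robot is moving, but holds
    the last output while the encoders report no motion, masking gyro drift
    (e.g. while sitting on the field before the match starts)."""

    def __init__(self):
        self._held = None  # heading frozen while stationary, else None

    def update(self, gyro_heading, encoder_rate, stop_tol=1e-3):
        if abs(encoder_rate) < stop_tol:      # encoders say we're stopped
            if self._held is None:
                self._held = gyro_heading     # latch the current heading
            return self._held                 # ignore subsequent drift
        self._held = None                     # moving again: release hold
        return gyro_heading

est = HeadingEstimator()
h0 = est.update(0.00, 0.0)   # stopped: latch 0.00
h1 = est.update(0.05, 0.0)   # still stopped: gyro drifted, output held
h2 = est.update(0.30, 1.0)   # moving: gyro heading passes through
```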
So, practically speaking: if I were following a path, would I ideally generate my course correction from the encoders and then apply a second correction using a NavX, or is there a smarter way?