Encoders on mecanum drive?

Can someone explain how encoders work on a mecanum drive? I don’t really get how it’s supposed to work. The only encoders I’ve worked with are one-dimensional - we’ve always used tank drive (until this year), and we just put encoders on each side of the drive and got two values, one for each side, either + or -. We could then convert that to tell how far we’d traveled.

How on earth does this work with mecanum when you can travel at a 45 deg angle, and how do you measure left or right directional movement?

It sounds like you’ve just been using encoders as odometers, rather than as elements in a closed-loop control system.

The typical use on a mecanum drivebase is to maintain highly detailed control over the wheels, to make the robot go in exactly the direction and at exactly the velocity desired. Each wheel is independently driven, with encoder feedback letting the speed controller update the voltage quickly in order to keep the wheel turning precisely as specified by the software.
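In code, each wheel’s loop looks roughly like this - a minimal proportional velocity-control sketch, not any specific speed controller’s implementation; the gain, units, and update scheme are all made-up values you’d tune on a real robot:

```java
public class WheelVelocityLoop {
    // Velocity-form proportional controller: nudge the motor output
    // toward whatever it takes to hold the commanded wheel speed.
    private final double kP;          // proportional gain (tuned on the robot)
    private double motorOutput = 0.0; // last commanded output, -1..1

    public WheelVelocityLoop(double kP) {
        this.kP = kP;
    }

    // Called every control cycle with the target wheel speed and the
    // speed measured from this wheel's encoder (same units for both).
    public double update(double targetSpeed, double measuredSpeed) {
        double error = targetSpeed - measuredSpeed;
        motorOutput += kP * error;  // accumulate the correction
        motorOutput = Math.max(-1.0, Math.min(1.0, motorOutput));
        return motorOutput;
    }
}
```

On a real mecanum drivebase you would run four of these (or use the equivalent closed-loop mode built into a smart speed controller), one per wheel.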

If you couple this information with a gyro sensor, you can achieve very accurate field-centric driving! It also has the side effect of making your autonomous very predictable!
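The field-centric part is just a rotation of the driver’s stick inputs by the gyro heading - similar to what WPILib’s mecanumDrive_Cartesian does with its gyro-angle parameter. A sketch (the sign conventions here are assumptions; check them against your own setup):

```java
public class FieldCentric {
    // Rotate driver x/y commands from the field frame into the robot
    // frame using the gyro heading, so "push stick away" always means
    // "drive downfield" no matter which way the robot is facing.
    // Convention assumed: x = strafe right, y = forward,
    // gyroDegrees = robot heading, positive clockwise.
    public static double[] rotate(double x, double y, double gyroDegrees) {
        double theta = Math.toRadians(gyroDegrees);
        double xRobot = x * Math.cos(theta) - y * Math.sin(theta);
        double yRobot = x * Math.sin(theta) + y * Math.cos(theta);
        return new double[] { xRobot, yRobot };
    }
}
```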

Are there code samples available to illustrate how this might work?

As Alan pointed out, you can use encoders for more accurate speed control of individual wheels (which in itself may have utility). However, I think jtrv is probably really looking for the distance-traveled function / odometer.

That in itself you could do (though not easily), as you would need to mathematically combine the feedback from all four wheel sensors. I’ve been thinking about that the last few days, and in theory it seems like you could do it and understand what is being driven to each wheel. If you mathematically reversed the trig functions from the software motor-drive functions, you could effectively tell what direction the robot is going, and for what amount of time.
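For the standard 45-degree roller geometry, “reversing the trig” works out to a linear combination of the four wheel speeds. A sketch of the idea (my own illustration, not tested on a robot; the sign conventions and the geometry term are assumptions, and it ignores roller slip entirely):

```java
public class MecanumKinematics {
    // Recover chassis motion from the four wheel speeds.
    // Convention assumed: vx = forward, vy = strafe right,
    // omega = clockwise rotation (rad/sec).
    // geom = half the wheelbase plus half the track width.
    public static double[] wheelsToChassis(
            double fl, double fr, double rl, double rr, double geom) {
        double vx    = ( fl + fr + rl + rr) / 4.0;
        double vy    = (-fl + fr + rl - rr) / 4.0;
        double omega = (-fl + fr - rl + rr) / (4.0 * geom);
        return new double[] { vx, vy, omega };
    }

    // The inverse (chassis motion -> wheel speeds), the direction the
    // drive code normally computes; handy for checking consistency.
    public static double[] chassisToWheels(
            double vx, double vy, double omega, double geom) {
        return new double[] {
            vx - vy - omega * geom,  // front left
            vx + vy + omega * geom,  // front right
            vx + vy - omega * geom,  // rear left
            vx - vy + omega * geom   // rear right
        };
    }
}
```

Running the chassis-to-wheels direction and then the wheels-to-chassis direction should return the original motion, which is a quick sanity check on the signs.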

The caveat is that the direction the robot “should” be going is different from the direction it is actually going. You could spin a mecanum wheel if you are stuck against something, and that problem is more prevalent than with a traction wheel, which might just stall. It also wouldn’t account for the designed slip of the mecanum rollers.

I’m curious if anyone has done that and what the utility of that wound up being (if so please send a link or reference).

And then as juchong pointed out, with a gyro/IMU you should also be able to take encoder inputs and couple that with accelerations and gyro spin feedback and predict better where you are on the field.

Has anyone ever seen that done - and again - if so, what was the overall effectiveness of that for positional control? It’s something I’d love to do - but don’t have a good understanding of the resources needed and whether it’s worth it. If anyone has any additional info I’d love to take a look.

I’m curious about this as well.


Since doing this is very difficult, you have a couple other options.

You can put a couple of unpowered casters with encoders on them to measure location. Or you could use distance sensors on the robot to determine position from the walls.

Each has its own issues. The casters may bounce on the bumps and be inaccurate, and distance sensors have their own issues with different wall materials and surface finishes.

Could you please give additional detail what you have in mind? Two casters with two encoders on each one? Something else?

Not sure if this is a thing, because Google searching on my phone is all sorts of awful. Looking at past technology, I know we can measure movements with a captive-ball system (you all remember those funny mice with the little grey ball on them) or, even more recently, high-end gaming mice. Is anyone aware of these sorts of devices being used in the FRC world?

I assume you have one X and one Y caster. If you do that, you still need to track rotation of the robot; as you spin, those will alter their orientation and no longer be accurate to the original absolute X and Y.

I know one team at the Finger Lakes Regional about 5 years ago won the Innovation in Control award for using a mouse for movement sensing. The last time we looked at trying this, modern mice were too precise (6000 pulses per inch) and the cRIO could not handle the pulse rate.

What we did last year and intend to do again this year is to use two smaller omni wheels (VEX 2.75in) with encoders, mounted orthogonally, square to the frame. They directly measure the actual motion, free of wheel slip, rather than the intended motion. The important thing is to ensure the unpowered follower wheels are always in contact with the floor.
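A sketch of how those two follower-wheel readings might be combined with a gyro heading to track field position - my own illustration, not the poster’s actual code; it assumes per-cycle distance deltas from the forward-facing and sideways-facing followers, and a CCW-positive heading in radians:

```java
public class TwoFollowerOdometry {
    private double fieldX = 0.0; // accumulated field position
    private double fieldY = 0.0;

    // dForward / dStrafe: distance deltas from the two follower-wheel
    // encoders since the last update, measured in the robot frame.
    // headingRad: gyro heading in radians, CCW-positive,
    // 0 = robot facing the field +X direction.
    public void update(double dForward, double dStrafe, double headingRad) {
        // Rotate the robot-frame displacement into the field frame
        // and accumulate it.
        fieldX += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
        fieldY += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
    }

    public double getX() { return fieldX; }
    public double getY() { return fieldY; }
}
```

Calling update() every control cycle keeps a running dead-reckoned position; since the followers are unpowered, drive-wheel slip never enters the estimate.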

A caster looks like this. It is neither X nor Y. See post #8 in this thread.


Sorry, poor choice of language. I assume for full field localization, the poster was implying placing 2 individual casters, one oriented in X direction, and one placed orthogonal, in what they assume to be a Y direction.

Then you have one dragging caster at all times, or you go the mouse-ball route. Even with the mouse ball, you still need to track rotation, as the mouse operates on the assumption of keeping the same general yaw position on a person’s desk.

A caster swivels. It’s not clear what you mean by orienting a caster in the X (or Y) direction.

In theory, you could track angular orientation and XY displacement using 3 unpowered omni wheels with one encoder on each.

See attached sketch.

3 unpowered omni wheels L, R, & C, with one encoder on each wheel.

Forward (FWD) is upward in the diagram, strafe right (STR) is to the right in the diagram, rotation (omega) is positive clockwise.

The red dot is the reference point of robot motion (typically, but not necessarily, the center of the robot).

Here’s how to convert the L, R, and C encoder speeds in fps into robot-centric motion FWD, STR, and omega:

FWD = (L+R)/2 fps

STR = C fps

omega = (L-R)/W rad/sec (where W is the spacing between the L and R wheels, per the sketch)
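Those three conversions are a direct transcription into code (a sketch only; the class and method names are mine):

```java
public class ThreeOmniOdometry {
    // Convert the three follower-wheel encoder speeds into
    // robot-centric motion, per the formulas above.
    // l, r: speeds of the L and R forward-facing omnis (fps)
    // c:    speed of the sideways-facing omni C (fps)
    // w:    spacing between the L and R wheels (ft)
    // Returns { FWD (fps), STR (fps), omega (rad/sec, CW-positive) }.
    public static double[] toChassis(double l, double r, double c, double w) {
        double fwd   = (l + r) / 2.0;
        double str   = c;
        double omega = (l - r) / w;
        return new double[] { fwd, str, omega };
    }
}
```

For example, equal L and R speeds give pure forward motion with zero rotation, while opposite L and R speeds give pure rotation.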


3 omni followers.png

Do you or the community have some code sample to illustrate this?

True again, I guess in my mind I had imagined the locked/non-swiveling caster-style wheels.

A swiveling caster with an encoder will only track total distance traveled from the start - unless you track wheel spin and add another encoder for the orientation of the caster. Which may, in fact, be what the poster meant originally and I misunderstood.

/*
 * This is the experimental code for the 2015 robot.  DO NOT USE FOR COMPETITION!
 * All code that is useful will be in a separate project.
 */
package org.usfirst.frc.team4301.robot;

import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.Gyro;

public class Robot extends SampleRobot {
    RobotDrive myRobot;
    Joystick DriverStick;
    Joystick OperatorStick;
    Gyro gyro;

    public Robot() {
        myRobot = new RobotDrive(0, 1, 2, 3);
        DriverStick = new Joystick(0);
        OperatorStick = new Joystick(1);
        gyro = new Gyro(0);
    }

    /*
     * Drive left & right motors for 2 seconds then stop
     */
    public void autonomous() {
    }

    public void operatorControl() {
        while (isOperatorControl() && isEnabled()) {
            // read x, y, and z values from the driver joystick
            double x = DriverStick.getX();
            double y = DriverStick.getY();
            double z = DriverStick.getZ();
            // field-oriented mecanum drive: the gyro angle rotates the
            // joystick inputs into the field frame
            myRobot.mecanumDrive_Cartesian(x, y, z, gyro.getAngle());
            if (DriverStick.getTrigger()) {
                // trigger action (body omitted in the original post)
            }
            Timer.delay(0.005);  // wait for a motor update time
        }
    }

    /*
     * Runs during test mode
     */
    public void test() {
    }
}
Pay attention to the gyro-related lines. Replace “4301” and all team-related items with your own team’s values.

Stay in touch, man - that sounds like an awesome thing to do. It’s like the ABS system on your car, minus the annoying noise it makes. I’m interested in seeing how well that works. Although sensing when your robot is being pushed and getting feedback on that isn’t as helpful this year as it was last year, it would still be a nice off-season project for some member.

Please read post #8 in this thread.