I have been looking around the web and couldn’t find anything. I am trying to do localization, which is finding the location of a robot, on an omni-directional drive such as an X-drive or mecanum drive. I have encoders on all four wheels. Can anyone point me toward the formulas or code I can use to figure it out?
Thanks for all your help.
Finding the correct speed for each of four mecanum (notice there’s no h in mecanum) wheels, given the desired instantaneous vehicle motion (rotation and XY translation), is called inverse kinematics.
The opposite problem – finding the vehicle motion given the speeds of each of the four mecanum wheels – is called forward kinematics. I discuss this briefly starting at the bottom of page 7 of this document.
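For reference, both directions can be sketched in a few lines of Python (illustrative only, not VEX code). This assumes the usual 45° roller convention with x forward, y to the left, and counter-clockwise rotation positive; the half-wheelbase `L`, half-track `W`, and wheel radius `r` are placeholder values you would replace with your robot's measurements:

```python
def mecanum_inverse(vx, vy, omega, L=0.20, W=0.15, r=0.05):
    """Wheel angular speeds (FL, FR, RL, RR) from desired chassis motion.
    vx: forward m/s, vy: leftward m/s, omega: CCW rad/s.
    L, W, r are example geometry constants, not real robot values."""
    k = (L + W) * omega
    return ((vx - vy - k) / r,   # front-left
            (vx + vy + k) / r,   # front-right
            (vx + vy - k) / r,   # rear-left
            (vx - vy + k) / r)   # rear-right

def mecanum_forward(fl, fr, rl, rr, L=0.20, W=0.15, r=0.05):
    """Chassis motion (vx, vy, omega) from measured wheel speeds:
    the (pseudo-)inverse of the equations above."""
    vx = r * (fl + fr + rl + rr) / 4
    vy = r * (-fl + fr + rl - rr) / 4
    omega = r * (-fl + fr - rl + rr) / (4 * (L + W))
    return vx, vy, omega
```

The two functions are exact inverses of each other, so feeding the output of one into the other recovers the original chassis motion.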
But even if all four mec wheels are always turning at kinematically correct speeds (which is a non-trivial problem), in the real world there will be factors such as roller axle free play, roller friction, carpet compliance, weight distribution, acceleration forces, etc., which cause the actual vehicle motion to differ (perhaps substantially) from the predicted motion.
Then, on top of that, you would have to integrate the vehicle motion over time to get vehicle location and orientation… and the errors would accumulate.
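That integration step amounts to repeatedly rotating the body-frame velocity into the field frame and accumulating. A minimal Euler-step sketch in Python (illustrative; a real implementation would run this in the control loop at a fixed rate):

```python
import math

def integrate_pose(x, y, theta, vx, vy, omega, dt):
    """One dead-reckoning step: rotate the body-frame velocity
    (vx forward, vy leftward) into the field frame using the current
    heading theta, then accumulate. Any error in theta leaks into
    x and y and adds up over time."""
    x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    theta += omega * dt
    return x, y, theta
```

Note that driving "forward" at a heading of 90° moves the robot along the field's y axis, which is exactly why a heading error corrupts the position estimate.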
So the question needs to be asked: how accurate do you need the vehicle location and orientation to be, and what distances and times do you have in mind? Depending on the answers to those questions, there may be a better solution than what you currently have in mind.
Thanks A Lot,
I am trying to find a reasonably accurate position (within 6 to 12 inches) on a 12x12-foot VEX field. I have a gyro that I can use for the heading, and the match is 2 minutes long.
Thanks
vrcprogrammer
Correct me if I’m wrong, but since mecanum wheel rollers, when moving forwards or backwards, may not spin predictably due to roller friction, you may not be able to get much precision out of them.
With something like a Killough, kiwi, or X-drive, the rollers will be forced to spin since the wheels are actually angled, providing more deterministic roller movement.
Also, 2 minutes? I doubt you would get that kind of precision over that time period. Any errors in angle measurements have a huge effect on positional measurement.
If your gyro drifts even 2 degrees, your positional error over 5 feet of travel would be sin(2 degrees) * 5 ft, which is just over 2 inches already. I imagine you plan to drive your robot a lot more than that.
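For anyone who wants to play with the numbers, the cross-track error from a heading error over a straight run is just the sine of the drift times the distance (a hypothetical helper, units as noted):

```python
import math

def cross_track_error_in(drift_deg, travel_ft):
    """Cross-track error in inches caused by a constant heading
    error (degrees) over a straight run (feet)."""
    return math.sin(math.radians(drift_deg)) * travel_ft * 12

# 2 degrees of drift over 5 feet:
# cross_track_error_in(2, 5) -> about 2.09 inches
```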
If you really need this over a 2 minute time period you may want to think about having the robot drive itself into a wall or corner to re-zero its measurements at some point.
Thanks. Do you have any ideas? One idea I had was to use two of the four wheels to keep track of X-Y position. Are there any ideas/formulas that could help me implement this?
Thanks.
vrcprogrammer
LIDAR from a Neato vacuum cleaner is another option if you are open to other ideas. If you Google or eBay or Amazon the terms “neato lidar” you will see what I am thinking of. Here is a link to some on eBay. Some vendors even offer code and instructions on how to talk to these devices.
-Hugh
Thanks, but I will have variable objects blocking my robot, so only sensors that are focused on the robot would work.
If anyone has any ideas, that would be great.
vrcprogrammer
Do you want the sensors to be mounted on the robot (and thus sensing the environment - floors, arena boundaries, etc.), or fixtured to the environment (and thus sensing the presence and/or location of the robot)?
I would need to have the sensors mounted on the robot.
Thanks
I don’t know what the capabilities of your controller are (RoboRIO or another controller?), but if you have USB ports you might be able to use this.
I really like this idea. Is there any way to use this with wheels/encoders? Thanks.
I am trying to design/think through the program for a set of wheels for X-Y positioning, and I am stuck on getting the formulas. Can anyone help me, or is a four-wheel design better?
You’ll spend the same amount of money on the vacuum as you will on one of these
Except the RPLidar is built for hobby robotics, already functional, and already has solid ROS support. You wouldn’t get anything done in VEX since it has its own motor and isn’t a VEX part, but it’s a good option for a personal robot or for the RoboRIO. And again, same cost as the Neato, but you don’t need to take a vacuum apart and risk breaking things.
This all sounds great, but I need this to be on a VEX robot, so does anyone have a way to do this with VEX EDR (VEX Robotics Competition) parts?
Thanks
Vrcprogrammer
What about 2 omni wheels placed perpendicular to each other that drag along the floor? As the one on the y axis moves, it gives you your position on the y axis, and likewise for the x. Although I do not know how the code might work, to make this more reliable you could add a gyro to correct for any kind of rotation.
As for the mouse idea, I don’t have any info on it other than what’s in the post. It’s on my to-do list, but that list is a mile long, not to mention schoolwork.
That’s true only if there is no vehicle rotation. If there is going to be vehicle rotation, you need either a) a gyro, or b) a third follower wheel… and you’ll need to integrate (vector sum) the vehicle movements.
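The bookkeeping for two perpendicular follower wheels plus a gyro can be sketched like this (illustrative Python, not a VEX API; the encoder names, the ticks-to-inches constant, and the sign conventions are all assumptions, and it assumes the follower wheels sit at the robot's center of rotation so pure rotation doesn't register as translation):

```python
import math

class TrackerOdometry:
    """Dead reckoning from two perpendicular follower omni wheels
    (one forward-facing, one sideways) plus a gyro heading."""

    def __init__(self, inches_per_tick):
        self.k = inches_per_tick       # encoder scale, an assumed constant
        self.x = self.y = 0.0          # field-frame position, inches
        self.prev_fwd = self.prev_side = 0

    def update(self, fwd_ticks, side_ticks, heading_rad):
        # Body-frame displacement since the last update.
        d_fwd = (fwd_ticks - self.prev_fwd) * self.k
        d_side = (side_ticks - self.prev_side) * self.k
        self.prev_fwd, self.prev_side = fwd_ticks, side_ticks
        # Rotate into the field frame using the gyro heading, then sum.
        self.x += d_fwd * math.cos(heading_rad) - d_side * math.sin(heading_rad)
        self.y += d_fwd * math.sin(heading_rad) + d_side * math.cos(heading_rad)
        return self.x, self.y
```

If the follower wheels are offset from the center of rotation, rotation alone will turn them, which is exactly why the third follower wheel (or an arc correction using the gyro and the known offsets) is needed.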
You might find this post of interest (and also the rest of the posts in that thread).
I corrected my post to include the gyro. I was thinking of it in my head but forgot to add it in. The third follower wheel for rotation is a good idea as well.
This all looks great. How would you recommend using a gyro with the three follower wheels?
Thanks
Well, first of all, what is your issue with simply putting encoders on all four of your wheels? If you are going for the X/Killough/omni/whatever-you-wish-to-call-it drive, you already have TWO sets of “mouse wheels”: they’re your drive wheels! Even without the gyro you can get rotational odometry. (I tried this when we were prototyping swerve code on a VEX base and it wasn’t all too terrible. A gyro is still better, but odometry can determine your angle.)
Ether’s papers on controlling an omnidirectional drive apply in the reverse direction in order to find your movement based on encoders.
The biggest issue with this method is wheel slippage. Say I accelerate too quickly, or I get into a pushing match and my wheels start to spin because they lost traction. All of a sudden my robot thinks it’s 5 feet further forward than it actually is. By getting feedback from unpowered wheels or a laser, you know that whatever movement you detect should, in theory, be the exact actual movement of your robot.