We are working on finding ways to calculate robot pose for a mecanum drive. Many of the papers I have found on this subject use optical flow sensors or cameras to watch changes in the ground. Could we use a computer mouse with an optical sensor to calculate the position of our robot?
What would our limitations be using this method?
Does the roboRIO support Human Interface Devices plugged into the USB ports?
We are a Java team, and I wonder if we could use the JInput library on the roboRIO to get the position of the mouse.
Any thoughts on this are greatly appreciated!
EDIT:
The mouse would only be used for translation. We would use a gyro to track robot turns, and transform our coordinate system to continue tracking the mouse translation.
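To make that concrete, the transform we have in mind is just a 2D rotation of each mouse delta by the current gyro heading before accumulating it. A minimal sketch in Java (all names here are hypothetical, and whatever actually reads the mouse and gyro is assumed to exist elsewhere):

```java
// Hypothetical sketch: accumulate a field-relative position from
// robot-relative mouse deltas, using the gyro heading for the rotation.
public class MouseOdometry {
    private double fieldX = 0.0; // accumulated field-relative X
    private double fieldY = 0.0; // accumulated field-relative Y

    /**
     * Call once per loop with the latest mouse delta and gyro heading.
     *
     * @param dx             robot-relative X delta reported by the mouse
     * @param dy             robot-relative Y delta reported by the mouse
     * @param headingRadians current gyro heading, in radians
     */
    public void update(double dx, double dy, double headingRadians) {
        // Standard 2D rotation from the robot frame into the field frame.
        fieldX += dx * Math.cos(headingRadians) - dy * Math.sin(headingRadians);
        fieldY += dx * Math.sin(headingRadians) + dy * Math.cos(headingRadians);
    }

    public double getFieldX() { return fieldX; }
    public double getFieldY() { return fieldY; }
}
```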
It probably would show up as a device. You'd probably need a gaming mouse to make sure you don't hit its velocity limits. Optical mice can't tell you rotation, so you would need 2 mice to get full information. Reading carpet may be tricky.
There have been multiple previous threads on this subject, and I think the consensus tended to be that it is possible, with the right mouse (mice) and spacing from the floor. In this case, what is keeping you from using a gyro and/or encoders?
We want an accurate X/Y position of our mecanum drive, and my research suggested that encoders are not accurate enough with mecanum wheels due to slipping of the rollers.
In addition, I think that you cannot determine your current position from the encoder positions alone with mecanum, since the final position depends on what combination of wheel velocities you had at each timestep, not just the total wheel rotations.
That said, we are planning to use a gyro for heading. The mouse would only be for translation.
You can determine your position independent of time, as encoders can tell you both the position and velocity of the bot at any given time. That being said, I cannot speak to the accuracy of encoders with mecanum wheels, but I imagine that it could be feasible depending on how it is implemented.
You should be more specific about your positioning requirements. It’s one thing to calculate position during an autonomous routine (a la Ether’s description above), and another thing to know your position 45 seconds into the match.
If you’re talking an autonomous scenario, you can use motion profiling to control acceleration and manage wheel slip.
Correct me if I’m wrong, but I believe that accelerometers are pretty bad at tracking velocity or position, because integrating twice amplifies the sensor noise and bias into rapidly growing drift.
Six years ago I wrote a paper on using optical mice for robot odometry, and as far as I know it’s still the most comprehensive review of this topic. While we built a system that worked fairly well for our senior design project, I don’t recommend mice for FRC. For one thing, acquiring mice that you can easily interface with and building an appropriate lighting and lens system is a bit tricky (although I’m happy to help if you want to try). The mice often have huge errors in their measurements, because the readings are highly dependent on the surface quality, illumination, distance to the surface, and focus of the lens. For example, while the measurements may be repeatable on carpet, you’ll get very different results on the gaffer’s tape delineating the various zones on the field. In our system, we did a least-squares fit with six mouse sensors to estimate the position and orientation of the robot.
I’ve attached my paper if you’d like to read more.
That’s correct. You can typically obtain a position estimate for a wheeled mobile robot much more reliably through other means.
Typically the biggest challenge with using an optical mouse for position is that they are designed to glide over a smooth surface at a very precise height. You will almost certainly need some sort of suspension, and even that may not be enough for guaranteeing you don’t lose track when traversing bumps and seams in the carpet. (EDIT: StephenB’s post goes into more detail).
I wouldn’t give up totally on using encoders on mecanum wheels for positioning. Although the rollers on the perimeters can spin freely, any movement in x, y, or theta will result in proportional rotation on a sensed axle (assuming your rollers may have rotated, but have not lost traction). However, unlike determining the necessary wheel speeds for a mecanum drive to achieve a given x, y, theta velocity (inverse kinematics), taking 4 independent velocity measurements and obtaining an x, y, theta velocity (forward kinematics) has 4 inputs and 3 unknowns; it is an overdetermined system. Due to noise you are virtually guaranteed not to have measured 4 velocities that exactly solve the equations. One common approach to deal with this is to use a least-squares solution, which would find 4 new velocities that are consistent and minimize the mean square “error” between what you measured and what your kinematic model says is possible. (A nice side effect is that you can measure this error - “residual” - and use it as a signal that you might be less certain about the vehicle’s motion and could be colliding with something, etc.).
You don’t even need a ball; two passive omniwheels mounted at right angles and connected to encoders could give you unambiguous X and Y measurements, even if your mecanum wheels have lost traction. As in the optical mouse case, you’ll want some sort of simple suspension to deal with irregularities in the carpet, but omniwheels at least have no trouble reliably rolling over typical imperfections in FRC field surfaces (a sketch of reading such “dead wheels” follows below).
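For what it’s worth, the encoder-reading side of such a setup is simple. A rough sketch (hypothetical names; it ignores the pods’ offset from the robot’s center of rotation, which adds a rotation-coupled term you’d want to correct for with the gyro):

```java
// Rough sketch: two perpendicular "dead wheel" omniwheel pods, each with
// an encoder. Converts tick deltas to robot-frame dx/dy; rotate the result
// into the field frame with the gyro, as in the mouse transform above.
public class DeadWheelOdometry {
    private final double metersPerTick; // wheel circumference / encoder ticks per revolution
    private int lastXTicks;
    private int lastYTicks;

    public DeadWheelOdometry(double metersPerTick) {
        this.metersPerTick = metersPerTick;
    }

    /** Returns {dx, dy} in the robot frame since the previous call. */
    public double[] update(int xTicks, int yTicks) {
        double dx = (xTicks - lastXTicks) * metersPerTick;
        double dy = (yTicks - lastYTicks) * metersPerTick;
        lastXTicks = xTicks;
        lastYTicks = yTicks;
        return new double[] { dx, dy };
    }
}
```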
In 2015 we experimented with using a mouse to track position for our kiwi drive, but we found that even with a gaming mouse with a fast refresh rate, we still couldn’t get accurate distance measurements when driving on the field. We ended up using an encoder on each wheel plus a gyro to track our x,y position on the field, but we had to be careful not to accelerate too fast, or the wheels would slip and we would lose our position. A mecanum drive would have similar wheel-slip problems, so you might want to implement some acceleration control, or, for a quick solution, just slowly ramp up the power when you first start moving (see the sketch below), which is what we did in our 2015 code posted here: https://www.chiefdelphi.com/media/papers/3180?
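Something like this is all the ramping amounts to (a hedged sketch, not our actual 2015 code; the ramp rate is a placeholder):

```java
// Simple slew-rate limiter: caps how much the commanded power may change
// per loop iteration, so the drive ramps up instead of stepping and slipping.
public class SlewRateLimiter {
    private final double maxDeltaPerLoop; // e.g. 0.04 per 20 ms loop = 0 to full power in ~0.5 s
    private double lastOutput = 0.0;

    public SlewRateLimiter(double maxDeltaPerLoop) {
        this.maxDeltaPerLoop = maxDeltaPerLoop;
    }

    /** Returns the requested power, limited to the allowed change per call. */
    public double limit(double requested) {
        double delta = requested - lastOutput;
        if (delta > maxDeltaPerLoop) {
            delta = maxDeltaPerLoop;
        } else if (delta < -maxDeltaPerLoop) {
            delta = -maxDeltaPerLoop;
        }
        lastOutput += delta;
        return lastOutput;
    }
}
```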
The math for the above is discussed here, starting at the bottom of page 7.
If you let A = R/r, b = Ω, and x = V,
then the inverse kinematic equation for b (given A and x) is:
b = Ax
… and the least-squares forward kinematic solution is given by solving the inverse kinematic equation for x (given A and b), which can be done several different ways (one of which is shown in the linked paper).
The residuals are then a straightforward computation:
residuals = b - Ax
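In code, the residual computation is just a matrix-vector multiply and a subtraction. A sketch in Java, assuming you already have the 4x3 kinematic matrix A (wheel convention per the linked paper), the measured wheel speeds b, and a least-squares solution x:

```java
// residuals = b - A*x: how far the measured wheel speeds are from the
// nearest motion the kinematic model says is possible.
static double[] residuals(double[][] A, double[] x, double[] b) {
    double[] res = new double[b.length];
    for (int i = 0; i < b.length; i++) {
        double predicted = 0.0; // i-th entry of A*x
        for (int j = 0; j < x.length; j++) {
            predicted += A[i][j] * x[j];
        }
        res[i] = b[i] - predicted; // large residuals suggest slip or a collision
    }
    return res;
}
```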
If you intend to pursue this further and need help with the math you can start a new thread.
The quick answer is that many have tried to use optical mice for odometry, and the consensus is that it doesn’t work. Commercial robots (Fetch, Savioke) solve SLAM (Simultaneous Localization and Mapping) using a variety of techniques combined with odometry. These include lidar, sonar, optical flow, compasses, accelerometers, etc. Some use WiFi and optical features (fiducials). While odometry is OK for rough distance, it is not so good for determining pose.
It is interesting to consider why this is. Typically, a differential-drive robot is controlled by adjusting the speed of the motors via PID. So getting the robot to go straight in autonomous should be easy (Ha.). Even if you could get the motors to respond perfectly to speed commands, wheel alignment, slipping, and mechanical differences all conspire to mess with you. Turning is worse: to turn in place with a diff drive, the wheels need to rotate in opposite directions at precisely the same speed.
With good odometry you should be able to get an FRC robot to go forward 10 ft in autonomous and end up within about 6 inches. Try to drive a 10 ft square, though, and you’ll be 3 feet off and turned 30 degrees.
Bottom line, after the math is done and the smoke clears:
Mecanum “forward kinematics” (least squares fit of vehicle motion given wheel speeds):
FWD = r*(w1+w2+w3+w4)/4
STR = r*(w1-w2+w3-w4)/4
Wv = (1/k)*(w1+w2-w3-w4)/4
r is the wheel radius
k is |trackwidth/2| + |wheelbase/2|
w1, w2, w3, w4 are the FL, BL, BR, and FR wheel speeds in rad/sec
Wv is the robot’s clockwise rotation rate in rad/sec
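A direct Java transcription of those formulas, in case it helps (the class and method names are mine):

```java
// Mecanum forward kinematics (least-squares fit), transcribed from the
// formulas above. w1..w4 are the FL, BL, BR, FR wheel speeds in rad/sec.
public final class MecanumForwardKinematics {
    private final double r; // wheel radius
    private final double k; // |trackwidth/2| + |wheelbase/2|

    public MecanumForwardKinematics(double wheelRadius, double k) {
        this.r = wheelRadius;
        this.k = k;
    }

    /** Returns {FWD, STR, Wv}: forward speed, strafe speed, and clockwise rotation rate (rad/sec). */
    public double[] solve(double w1, double w2, double w3, double w4) {
        double fwd = r * (w1 + w2 + w3 + w4) / 4.0;
        double str = r * (w1 - w2 + w3 - w4) / 4.0;
        double wv = (1.0 / k) * (w1 + w2 - w3 - w4) / 4.0;
        return new double[] { fwd, str, wv };
    }
}
```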