Optical Mouse Navigation

A little after the 2004 build season ended, I started thinking about new ideas for this year's season, particularly autonomous mode. I was poking around CD when I saw a post by Craig Putnam in which he mentioned he was working on an optical mouse based navigation system. What a great idea, I thought! I thought about it periodically during the off season and eventually settled on how I wanted to do it, with almost everything figured out except how to interface the mouse, or mice, to the RC. So I emailed Craig Putnam and he explained his progress on making such a system, and much to my delight I found that much of what he had done was what I was planning to do, so I couldn't be too far off track. He also mentioned something mind-boggling…

But before I talk about that I'd like to mention something else. When I first got this piece of information from Mr. Putnam a few days ago, I didn't tell anyone and hoped he wouldn't either. This was pretty stupid for a variety of reasons, which don't need to be enumerated. Anyway, in the past few days I've seen several posts here that are dancing around the problem, namely the following:

I realized that a lot of people are now working on Optical Mouse Navigation, and it wouldn’t be fair to not share everything. So then…

There is a chip that converts PS/2 to RS232, made specifically for interfacing PICs to mice! It's from Al Williams and it's called the PAK-XI. It slices, it dices, it makes a mean Crème Brûlée! Mr. Putnam, a mentor from team 42, P.A.R.T.S., tipped me off to it after I emailed him about it. But wait, there's more! It's also cheap! Hey! So what's the catch? Well, it may or may not be legal.

Here are a few snippets from my correspondence with Mr. Putnam:

(I've already ordered some of those chips, but I'm not quite sure if I'm going to go ahead and use them and risk their being ruled illegal; ideally I can find another solution)

Mr. Putnam has also put together a great presentation on the topic which can be found here.

I'm currently building a prototype mouse assembly, and the problem of the moment is sufficient illumination. Here are some more snippets from my correspondence with Mr. Putnam.

So this is where things stand at the moment. The purpose of this thread is to have a central place for everyone working on this to talk about it. Please post anything you find that’s relevant.

(Innumerable thanks to Mr. Putnam, who doesn't seem to post on CD too often. I hope he doesn't mind me quoting him! Note: These are out of chronological order, for the sake of clarity)

I'll admit that I didn't read all of that post, but… I had this idea last year. We ruled out that idea because of one main reason: the maximum speed that the mouse can measure was well under the speed of our robot. That might be something to look into if you're thinking about a mouse for navigation.

If you had bothered to read Mr. Putnam's excellent presentation, you would have noticed that the problem is solved by raising the mouse off the ground and adding some optics. Furthermore, if the PAK-XI chip is allowed, it can supposedly adjust the resolution of the mouse it's connected to. The chip isn't anything special either, and any coprocessor-based solution could do this. (Some?) Mice are programmable.

Well, now that it's out in the open: we've toyed with this concept for some time as a terrain-mapping navigational system, but we never got past the conceptual stages because of the specs on the sensor devices. I was actually pushing for some research into this on my team the past two years, and we did (we believe) figure out how to get two mouse inputs into the controller (we would place one on either side of the robot and judge distance and pitch from those readings), but the main problem with this was purely technical, on the sensor side. Simply put, your robot would have to be going very slowly for the sensor to be able to position you. A robot that blasts off at, say, 10 feet per second in autonomous mode is by a large margin (I don't have the numbers offhand) too fast for such a device. The frame rate is not high enough for the sensor (which is really a camera) to take two frames and compare them. Thus, no movement would be detected. Perhaps we've missed something, but our preliminary research into this led us to that conclusion.

I'm not sure what sensor you are referring to specifically, but I'm fairly confident that a mouse can resolve at that speed accurately with the correct optics. If it's "zoomed out" by a factor of 5, wouldn't it logically follow that it's seeing 1/5 the movement, and hence its normal 12-inch-per-second max becomes a 5-foot-per-second max? I'm not saying it's a solved problem, but it's certainly not a desperate situation by any means.

Mr. Putnam is using an accelerometer to gauge odometry, and the mouse to measure rotation. I was planning on using a mouse for both, each one going through a PAK-XI and then to a serial port. This is possible through Kevin Watson's new serial port driver. This would make debugging hard, as Max Lobovsky pointed out, because you couldn't use printfs to send data back to your laptop anymore, as all the ports would be used. His idea was to multiplex the two streams…
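I haven't built any of this yet, but on the RC side the demultiplexing could look something like the sketch below. The three-byte frame (a source ID followed by the X and Y deltas) and the read_serial_byte() helper are just assumptions for illustration, not anything from Kevin's driver:

```c
/* Sketch of one possible demultiplexing scheme on the RC side.  Assumes
 * a coprocessor merges both mouse streams onto one serial line using a
 * simple three-byte frame: [source ID][dx][dy].  read_serial_byte() is a
 * placeholder for whatever the serial driver actually provides.
 */
extern unsigned char read_serial_byte(void);   /* blocking read, placeholder */

typedef struct { signed char dx, dy; } mouse_delta_t;

static mouse_delta_t mouse[2];                 /* latest deltas from each mouse */

void poll_mouse_stream(void)
{
    unsigned char id = read_serial_byte();     /* which mouse sent this frame   */
    signed char   dx = (signed char)read_serial_byte();
    signed char   dy = (signed char)read_serial_byte();

    if (id < 2)
    {
        mouse[id].dx = dx;
        mouse[id].dy = dy;
    }
}
```

A real version would need some way to resynchronize if a byte gets dropped (for example, reserving a value for the ID byte that the deltas can never take), but that's the general shape of it.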

Questions Questions!

Okay, I think my plan is to go ahead and use the PAK-XIs on the off chance they'll be allowed, while also working on code (in C, INSTEAD OF THIS ASM NONSENSE :ahh: ) to replicate the function of the PAK-XI should it be banned. Does anyone have any idea which PIC would be best suited for simultaneous PS/2 and RS232 communication?
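Meanwhile, here's roughly what the PS/2 receive side might look like in C. This is an untested sketch: the pin-reading helpers are placeholders for whatever inputs you wire the mouse to, and a real version also needs the host-to-device side (to send the "enable data reporting" command) plus timeouts so a dead mouse can't hang the loop:

```c
/* Minimal sketch of reading one byte from a PS/2 mouse by bit-banging --
 * one possible way to do in software what the PAK-XI does in hardware.
 * In PS/2 the *mouse* drives the clock; a frame is a start bit (0),
 * eight data bits LSB first, an odd parity bit, and a stop bit (1),
 * with data valid while the clock is low.
 */
extern unsigned char ps2_clock(void);   /* read the clock line, 0 or 1 (placeholder) */
extern unsigned char ps2_data(void);    /* read the data line,  0 or 1 (placeholder) */

static void wait_clock_low(void)  { while (ps2_clock())  ; }   /* no timeouts here;     */
static void wait_clock_high(void) { while (!ps2_clock()) ; }   /* add them for real use */

unsigned char ps2_read_byte(void)
{
    unsigned char i, byte = 0;

    wait_clock_low();                     /* start bit (should be 0)  */
    wait_clock_high();

    for (i = 0; i < 8; i++)               /* 8 data bits, LSB first   */
    {
        wait_clock_low();
        if (ps2_data())
            byte |= (unsigned char)(1 << i);
        wait_clock_high();
    }

    wait_clock_low(); wait_clock_high();  /* parity bit (not checked) */
    wait_clock_low(); wait_clock_high();  /* stop bit                 */

    return byte;
}
```

In stream mode the mouse then sends three-byte movement packets (a status byte followed by the X and Y deltas), so you'd call this three times per packet.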

Wildstang experimented with optical mice last year as well. I’m having a hard time telling from that long post just how far others have gotten. Has anyone actually communicated with their mouse?

Last year we were able to get an optical mouse connected to our robot controller and read X/Y positioning from the mouse chip. We did not use PS/2, however. Instead we removed the PS/2 driver chip from inside the mouse and connected the RC directly to the optical sensor chip from Agilent (BTW, Agilent is the only company making optical mouse components, so if you open one up that is what you'll find). The Agilent chip speaks a simple synchronous protocol, which is how we communicated with it. We implemented this using the normal input/output pins on the RC and bit-banging the protocol. This allows us to read out the X/Y deltas as well as obtain the actual image captured by the mouse, the surface quality measurement, and a bunch of other goodies. The chip is pretty nice because it will remember how far it's moved since the last time you queried it, so you can query it at your leisure (as long as it's fast enough that the counter inside the sensor doesn't overflow).
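To give a rough idea of what that bit-banging could look like, here is a sketch (not Wildstang's actual code). The helper routines are placeholders for your own pin wiring, and the register addresses and turn-around delay shown are for the ADNS-2610 family, so check them against the datasheet for whichever sensor you actually find in your mouse:

```c
/* Sketch of reading one register over the Agilent sensor's two-wire
 * synchronous serial bus (SCLK driven by the RC, SDIO shared/bidirectional).
 * sclk_set(), sdio_drive(), sdio_release(), sdio_read() and delay_us()
 * are placeholders you would implement for your own wiring.
 */
extern void sclk_set(unsigned char level);   /* drive the clock line      */
extern void sdio_drive(unsigned char bit);   /* drive SDIO as an output   */
extern void sdio_release(void);              /* let the sensor drive SDIO */
extern unsigned char sdio_read(void);        /* read SDIO                 */
extern void delay_us(unsigned int us);

unsigned char adns_read_reg(unsigned char addr)
{
    unsigned char i, data = 0;

    /* Send the 8-bit register address, MSB first (MSB = 0 means "read"). */
    for (i = 0; i < 8; i++)
    {
        sclk_set(0);
        sdio_drive((unsigned char)((addr >> (7 - i)) & 1));
        sclk_set(1);                 /* sensor samples SDIO on the rising edge */
    }

    sdio_release();
    delay_us(100);                   /* address-to-data turn-around (check datasheet) */

    /* Clock out the 8-bit reply, MSB first. */
    for (i = 0; i < 8; i++)
    {
        sclk_set(0);
        sclk_set(1);
        data = (unsigned char)((data << 1) | (sdio_read() & 1));
    }
    return data;
}

/* Example: accumulate motion since the last poll.  0x03 and 0x02 are the
 * Delta_X and Delta_Y registers on the ADNS-2610 -- verify for your chip.
 */
void poll_mouse(int *x, int *y)
{
    *x += (signed char)adns_read_reg(0x03);
    *y += (signed char)adns_read_reg(0x02);
}
```

The deltas come back as signed 8-bit values, which is why polling often enough to avoid overflow matters.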

We also affixed a different lens to the mouse to change its field of vision to accommodate larger speeds. Illumination is a problem, as was already mentioned. We fixed a ring of superbright LEDs around the lens to combat this. However, that is where the bad news starts. With all that out of the way, we mounted the modified mouse to a cart and started doing tests to see if it accurately tracked motion. We found that it did not. When the cart was moved at different speeds over the same distance, the mouse would report different amounts of measured distance. This was disappointing, because before we fitted the new lens we tested the mouse by affixing it to an X-Y table in our model shop with a very accurate readout and found that without the new lens it was good to something like a few thousandths of an inch. At this point it was getting pretty late in the season so with the mouse concept not looking too promising we had to abandon it and concentrate on reusing our old positioning system that we used in 2003.

Agilent’s site is pretty useful, with datasheets for the various mouse sensors. If you dig around there you will find that they give you lots of info on the optics used as well as what wavelengths the sensors are most sensitive to, etc.

Also something to keep in mind is that even if you get to the point where the mouse can track movement accurately, it does not handle rotation. You'll still need a compass or something to know your heading, which needs to be combined with the vector obtained from the mouse. I think you can substitute two mice on opposite sides of the robot for the gyro, but I haven't yet worked through the math to prove it to myself. There are some odd cases if you do a tank-style spin and such that I have to think about a little bit to see if it can still work.
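For reference, here is one way the two-mouse math could work out (an untested sketch, not something any team has confirmed on a real robot). It assumes both mice are mounted level and facing forward, one on each side of the robot's centerline, with their counts already converted to inches:

```c
/* Dead reckoning from two mice.  x is forward, y is left, Theta is the
 * CCW heading in radians; TRACK is the side-to-side distance between the
 * two mice (an assumed value).
 */
#include <math.h>

#define TRACK 24.0                /* inches between the left and right mice */

static double X, Y, Theta;        /* estimated field position and heading  */

void update_pose(double lx, double ly,    /* left mouse delta, inches  */
                 double rx, double ry)    /* right mouse delta, inches */
{
    /* Rotation: in a CCW turn the right-hand mouse travels farther forward. */
    double dtheta = (rx - lx) / TRACK;

    /* Translation of the robot center, still in the robot's own frame. */
    double dx = (lx + rx) / 2.0;
    double dy = (ly + ry) / 2.0;

    /* Rotate into field coordinates using the mid-step heading. */
    double mid = Theta + dtheta / 2.0;
    X     += dx * cos(mid) - dy * sin(mid);
    Y     += dx * sin(mid) + dy * cos(mid);
    Theta += dtheta;
}
```

The tank-style spin case actually falls out of this: during a pure spin the two mice report equal and opposite forward motion, so dx and dy come out zero and all of it lands in dtheta.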

Craig Putnam has indeed communicated with his mouse, but using a component that may or may not be legal. The math is the only part I am sure of, and if I can interface two mice, I'll be golden. That little step is proving harder than originally thought. I'll have to look into this Agilent chip business…

Thanks for the info! :smiley:

UPDATE: Looking at some PDFs on their site, it would seem the particular chip I’m looking at outputs PS/2 directly! Was this the case with yours?

I guess it's time for me to get back into the conversation…

I'll begin by correcting one error in the thread so far. We are using the mouse to tell us how far we have moved, not the rotation (as it is pretty insensitive to rotations about its optical axis). We are using one of the old gyro chips to measure the rotation rate and then merging the inputs from the two sensors to give us a measure of the robot's motion since the last time we looked. All of that is going into a PID feedback loop (the details of which are still being worked out). The intent is to enable the robot to accurately travel along any mathematically defined path: straight line, circular arc, spline curve, etc.
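In code, that kind of merge might look roughly like the following (an untested sketch, with the loop period and the sensor-reading details left as assumptions): integrate the gyro rate to get heading, then rotate each mouse displacement by that heading before accumulating field position.

```c
/* Sketch of merging gyro heading with mouse displacement each loop.
 * The 26.2 ms figure is the RC's slow-loop period; the mouse deltas are
 * assumed to already be converted to inches in the robot's frame.
 */
#include <math.h>

#define LOOP_DT 0.0262               /* seconds per loop                      */

static double X, Y, ThetaDeg;        /* field position (in) and heading (deg) */

void fuse_sensors(double mouse_dx_in, double mouse_dy_in,  /* inches this loop */
                  double gyro_rate_dps)                    /* degrees/second   */
{
    double theta_rad;

    ThetaDeg += gyro_rate_dps * LOOP_DT;          /* heading from the gyro    */
    theta_rad = ThetaDeg * 3.14159265 / 180.0;

    /* Mouse deltas are in the robot frame; rotate them into the field frame. */
    X += mouse_dx_in * cos(theta_rad) - mouse_dy_in * sin(theta_rad);
    Y += mouse_dx_in * sin(theta_rad) + mouse_dy_in * cos(theta_rad);
}
```

The (X, Y, ThetaDeg) estimate is then what the PID path-following loop would compare against the desired path.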

Using two mice (one on each side of the robot) is indeed another way around the rotation problem as is using one mouse at the center of the robot to measure distance traveled and a second mouse mounted at the edge of the frame to measure turning motion.

Re: the mouse not being able to track the robot’s speed. By lifting the mouse board up and inserting the appropriate optics, we have effectively changed the size of the “Mickey”. So instead of (for example) getting 200 Mickeys per inch, we are getting a much smaller number. While our resolution has gone down, the speed that we can track has gone up. We expect to eventually have a resolution of about 1/4 inch and be able to easily track the top speed of a FIRST robot.
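To put rough numbers on that trade-off (illustrative figures only, something like a stock sensor rather than the actual optics described here):

```c
/* Back-of-the-envelope for the optics trade-off.  The native figures are
 * assumptions roughly matching a stock Agilent sensor, not measured values. */
#define NATIVE_CPI       400.0    /* Mickeys per inch before any optics      */
#define NATIVE_MAX_IPS    12.0    /* max trackable speed, inches per second  */
#define DEMAG              5.0    /* optical "zoom out" factor               */

double effective_cpi     = NATIVE_CPI / DEMAG;       /* 80 Mickeys per inch  */
double effective_max_ips = NATIVE_MAX_IPS * DEMAG;   /* 60 in/s, i.e. 5 ft/s */
```

Whatever the exact numbers, the resolution and the speed ceiling scale by the same factor in opposite directions.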

We do indeed speak to the mouse quite well using the PAK-VIa chip. If you go to the Al Williams site you will see that there is now a chip specifically designed for communicating with mice - the PAK-XI. We started with the PAK-VIa chip, however, and have found that it works well enough for our needs at the moment. As has been pointed out, however, the use of any PAK chip may well be illegal. So I am very interested in hearing from anyone who has successfully communicated directly with either a PS/2 or USB based mouse (or directly communicated with the Agilent chip as was noted above).

The optical mice are going to have a problem because one blurry patch of grey FIRST carpet is going to look like the next. I agree that this is a cool idea, but think it’ll be hard to get it to work (but I’m willing to help nonetheless <grin>).

-Kevin

Thinking about this, yes optical mice are cool but what came before them? Ball mice! It may not be practical for all games (like climbing a step) but for a game like 2002’s Zone Zeal, maybe you could just stick a ball and a couple wheels under the robot. Think Technokats 2003 Ball robot, but maybe 1/4-1/3 the size and instead of motors powering the little wheels there would be shaft encoders hooked to the little wheels. Yeah, that would work, at least on a flat field. Make a cradle in the center of your robot and put a ball in there with two little rollers at 90 degrees from each other contacting the ball at all times.

We've seen both types. The ones we mainly used contained an ADNS2610 chip, which only does the sync serial. The other one we saw had two interfaces, either PS/2 or good old quadrature output. If you have trouble with the PS/2 and don't want to try the sync serial to the Agilent chip, you could always use the quadrature output. That's really easy to decode.
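For anyone who hasn't decoded quadrature before, here is a sketch of the usual table-driven approach (pin helpers are placeholders, and each axis of the mouse has its own A/B pair, so you'd run two of these). The one catch on the RC is that you have to sample fast enough, or use interrupts, to catch every transition:

```c
/* Software quadrature decoding with a 16-entry lookup table that maps
 * (previous state, new state) to -1, 0, or +1 counts.  Transitions where
 * both bits change at once are illegal and counted as 0 here.
 */
extern unsigned char quad_a(void);    /* read channel A, 0 or 1 (placeholder) */
extern unsigned char quad_b(void);    /* read channel B, 0 or 1 (placeholder) */

static const signed char quad_table[16] =
{
 /* old state 00:  new = 00  01  10  11 */
                     0, +1, -1,  0,
 /* old state 01 */ -1,  0,  0, +1,
 /* old state 10 */ +1,  0,  0, -1,
 /* old state 11 */  0, -1, +1,  0
};

static unsigned char prev_state;
static long position;                 /* accumulated counts for this axis */

void quad_poll(void)
{
    unsigned char state = (unsigned char)((quad_a() << 1) | quad_b());

    position  += quad_table[(prev_state << 2) | state];
    prev_state = state;
}
```

The same table works for quadrature wheel encoders, which is part of why the two approaches look identical from the code side.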

Max Lobovsky said the exact same thing. But there is one problem with this! Try using a ball mouse with a carpet patch as your mouse pad for an hour and I guarantee your enthusiasm for this idea will be curbed.

By zooming out (demagnifying) with the optics we are going to see more of the surface underneath the robot. This can cut both ways - more surface may give us the chance to see more variation. On the other hand, zooming out means we will see less detail in whatever it is that we do see.

My experience with testing various surfaces under my prototype optical system seemed to support that it will work OK. But I didn’t have any of the FIRST carpet to test with so I can’t say for sure that the modified mouse will work well with that. Time will tell…

The other thing that can really help is the angle of the light. Think about how an optical mouse works. The light is shining not directly down but at a very low angle to the surface. The idea is to cast as many shadows from surface variations as possible in order to give the chip something “interesting” to look at.

Quadrature! Sweeeeeeeeeeeeeeeeeeeeeeet! :smiley: :cool: :smiley:

That’s the geekiest thing I’ve ever said.

Yeah, I was figuring the same thing. Just use a ball mouse. Optical mice are not meant to detect any fast movements. You can prove this by moving your optical mouse very fast on the mouse pad; you will see that the cursor on the screen seems to go in random patterns.

The only problem with ball mice that I can figure is the resistance they add, but that should be negligible.

Okay, let me play devil’s advocate.

Wheel encoders have been done before, and I assume they work. If a robot uses differential (two-wheel) drive and we keep track of each wheel’s motion, and the tires are pneumatic and don’t ever slip on the carpet, can’t we measure not only x and y but rotation as well?

So my question is this: what great advantage does an optical mouse have over wheel encoders that makes us want to make this work? Other than being really cool, that is.

Well, Astronouth and I were actually talking about this last night. The problem is, there is a ton of slippage, all the time. And even if there weren't, it's less precise in general. You can do it, a lot of people have, but optical will be much more precise and accurate. If you use dedicated, non-driven wheels just for measurement, you gain some more accuracy: see "StangPS".

Optical, if you interface to the chip using quadrature, is, from a code standpoint, just as easy as using quadrature wheel encoders. Easier, in fact. The only issue is optics and illumination, both of which I am near solving.

I'm totally psyched, as I believe I'll be the first to have a working, all-optical nav system. Whether all-optical is even a good idea remains to be seen. :rolleyes:

And once this works, I’ve got an even cooler idea to work on, Muhahaha!

The problem is that wheel slippage always occurs, in every turn one makes (not to mention once pushing becomes a factor). Some robots with conventional swerve-style steering have a differential to minimize (but by no means eliminate) this, but therein lies the problem. An encoder or Hall effect sensor is placed somewhere on the drivetrain, which propels the robot but does not reflect its actual movement as accurately as some would like. You are absolutely right that this usually doesn't prove troublesome to robots that steer as you describe, but they represent a minority in FIRST. This problem is especially acute with tank-style steering robots, i.e. most robots.

Enter terrain-following. Instead of looking at what the propulsion device is doing to estimate where the robot is, we are following the movement of the robot. Assuming the camera doesn’t skip a beat and screw up, we get a much more accurate guidance system that opens up possibilities of pinpoint accuracy.

What is terrain following, and how is it different than optical mouse based navigation?