Camera Encoder for Robot

I am trying to write some code that will use a camera to track the texture of the ground and work out how far the robot has moved. The robot is omnidirectional, so the software needs to track both horizontal and vertical movement using only a down-facing camera. Is there any existing code that accomplishes this? Preferably using OpenCV.

I have a feeling that you’re not going to be able to get any reasonable localization out of this, but that’s just my opinion.

What you’re looking for is some kind of optical flow algorithm. It tends to work best when there are sparse, distinctive features to track, which isn’t exactly what a carpet under a robot looks like.
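If you want to experiment anyway, OpenCV’s pyramidal Lucas-Kanade tracker is the usual starting point for sparse flow. Here’s a minimal sketch; the camera index and all the detector/tracker parameters are assumptions you’d tune for a real down-facing camera:

```python
import cv2
import numpy as np

# Minimal sparse optical flow loop using OpenCV's pyramidal
# Lucas-Kanade tracker. Camera index and every parameter here are
# assumptions to be tuned for a real down-facing camera.
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    # Detect corner-like features in the previous frame; a low-contrast
    # carpet may yield very few of these, which is the core problem.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if prev_pts is not None:
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None,
            winSize=(21, 21), maxLevel=3)
        good = status.flatten() == 1
        if good.any():
            # Median per-frame shift in pixels, robust to a few bad tracks.
            dx, dy = np.median((next_pts - prev_pts)[good], axis=0).ravel()
            print(f"pixel shift: dx={dx:.2f} dy={dy:.2f}")

    prev_gray = gray
```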

If I were you, I would use omni follower wheels with encoders to determine your distance traveled.
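For comparison, the math for follower wheels is trivial. A sketch with placeholder numbers (wheel size, encoder resolution, and tick counts are all made up):

```python
import math

# Back-of-the-envelope conversion from follower-wheel encoder ticks to
# robot-relative displacement. Wheel size and encoder resolution below
# are placeholder values, not a recommendation.
COUNTS_PER_REV = 2048            # assumed encoder counts per revolution
WHEEL_DIAMETER_M = 0.05          # assumed 50 mm omni follower wheel
M_PER_COUNT = math.pi * WHEEL_DIAMETER_M / COUNTS_PER_REV

def displacement_m(x_ticks, y_ticks):
    """Tick deltas from two perpendicular follower wheels -> meters."""
    return x_ticks * M_PER_COUNT, y_ticks * M_PER_COUNT

dx, dy = displacement_m(1500, -300)
print(f"strafed {dx:.3f} m, drove {dy:.3f} m")
```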

Thank you!!

I had a similar feeling after setting up a basic algorithm that used optical flow. It did find features, but it failed to give any useful accuracy.
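For anyone curious what “basic” looked like: integrating the average of a dense flow field into a running position is the naive version, and every per-frame error accumulates. A sketch along those lines, using OpenCV’s Farneback flow with assumed parameters:

```python
import cv2
import numpy as np

# Sketch of the naive approach: average the dense flow field each frame
# and integrate it into a running position. Per-frame errors accumulate,
# which is a big part of why the accuracy ends up unusable.
cap = cv2.VideoCapture(0)        # assumed camera index
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pos = np.zeros(2)                # accumulated (x, y) shift in pixels

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    pos += flow.reshape(-1, 2).mean(axis=0)   # integrate mean flow
    print(f"estimated position (px): {pos[0]:.1f}, {pos[1]:.1f}")
    prev = gray
```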

You might want to look into something like this: https://www.seeedstudio.com/Flow-Breakout-Board-p-2949.html

Optical flow is an interesting technology, and I’ve looked into it a bit myself. In case you’re still interested in pursuing it someday, here are the issues I’ve run into, and possible mitigations:

  1. Scale accuracy. The issue here is that in order to calculate displacement (change in x/y position) accurately, the distance from the sensor to the surface (the carpet) needs to be known very precisely; a worked example of this scaling appears in the sketch after this list. If the sensor bounces up or down just a little bit, that changes the distance and introduces error. A possible solution is to include a fast sensor (e.g., time of flight) that can measure distance at close range to an accuracy of less than a centimeter.

  2. Focus. This issue can be overcome with effort: whatever the distance to the surface is, the camera’s focus must be set to match it.

  3. Signal-to-Noise Ratio. A typical sensor performance statistic is “signal-to-noise ratio”, which indicates how much error will be introduced into the “signal” the sensor acquires. In the case of optical flow, this translates to three things: surface texture, framerate, and contrast.

  • Surface Texture. An optical flow sensor won’t detect motion unless the observed surface has texture patterns with enough contrast for the “edge-detection” stage of the optical flow algorithm to pick them up.

  • Framerate. Since the “edge-detection” needs to see edges, a blurry image won’t work well. Beyond focus, the exposure must be short enough that each frame stays crisp at speed, and the framerate must be high enough that features don’t move too far between frames.

  • Contrast. Since cameras detect photons, there need to be photons to see: for a camera in the human-visible spectrum, enough light must reflect off the surface for the camera to see it. Software can increase contrast somewhat (one common approach is sketched after the summary below), but artificial lighting definitely helps.
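To put a number on item 1: with a pinhole-camera model, ground displacement equals pixel displacement times height over focal length, so any height error becomes a proportional distance error. A tiny sketch with assumed numbers:

```python
# Pinhole-camera scale for item 1 above: ground displacement equals
# pixel displacement * height / focal length (in pixels), so a height
# error turns directly into a proportional distance error.
FOCAL_LENGTH_PX = 600.0           # assumed, from camera calibration

def ground_m(pixel_shift, height_m):
    return pixel_shift * height_m / FOCAL_LENGTH_PX

true_h, bounced_h = 0.100, 0.105  # 5 mm of bounce at 100 mm ride height
err = (ground_m(50, bounced_h) - ground_m(50, true_h)) / ground_m(50, true_h)
print(f"{100 * err:.1f}% distance error")  # prints 5.0%
```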

So to address these issues, good focus, good lighting, and a high framerate are important. Those address everything above except the fundamental issue that the observed surface needs enough texture to track.
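On the software side of the contrast point, CLAHE (Contrast Limited Adaptive Histogram Equalization) is a common OpenCV way to stretch local contrast before feature detection. A minimal sketch; the file name and parameters are assumptions:

```python
import cv2

# Sketch: stretch local contrast with CLAHE before running the flow
# algorithm. Clip limit and tile size would be tuned against the actual
# carpet and lighting.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

gray = cv2.imread("carpet.png", cv2.IMREAD_GRAYSCALE)  # assumed test image
enhanced = clahe.apply(gray)
cv2.imwrite("carpet_clahe.png", enhanced)
```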

Putting together prototypes for optical flow has been a hobby of mine; I’m working on a 4th-generation prototype to overcome the issues mentioned above. I think someday it will work well enough for FRC & FTC, but we’re not there yet. It’s complicated!

So in summary, I agree with the advice to use encoders to measure distance for FRC. And if you ever want to chat about it more, please feel free to send me a private message.

  • scott

We tried that board last year, and wrote a white paper on it. Here is the accumulated information.
https://www.chiefdelphi.com/forums/showthread.php?p=1781756&highlight=flow+motion#post1781756