Optical Flow Tracking, Help!

Hello all,
We have a problem. We have an underwater robot with two cameras: one front camera and one downward-facing camera. We want to use the downward-facing camera to track the motion of our AUV against the bottom of the pool at the TRANSDEC facility in Point Loma, San Diego, on the naval base. The pool bottom is patchy and algae covered, so there are patterns that can be seen. Assuming we are at a given height above the pool bottom, we want to use the same type of program that a mouse uses to track a table top: our AUV would basically be the mouse in the pool, and we want to be able to tell what direction and how fast we are traveling.

Does anybody out there know how to help us, or know somebody else who can? Thanks! This is for the RoboSub event for next year, so we have some time to develop this and we really want to. The teams with the big bucks are using Doppler Velocity Logs, which cost around 20k and use acoustic Doppler shift to track the velocity of the AUV. We feel that we can achieve a similar capability using our downward camera and the "mouse code".

I am unclear on exactly what you are asking.
Are you saying this device will be kept at a given fixed height in the pool, or that it will need to use the bottom of the pool to hold itself at a certain height above the bottom?

If your intent is to hold yourself at a fixed height from the bottom of the pool, you would be much better off using sonar, not video, for that purpose.
The depth measurement is much easier to achieve with sonar (acoustic ranging).
In this case, though, you are moving perpendicular to the plane of measurement, so it will be much easier and cheaper.

As far as tracking the pattern on the floor for X/Y movement, the technology used by optical mice is DIC (digital image correlation) / DDIT, and it is derived in part from military technology.

What is the resolution of your camera and what do you have to process the image?
Start with moiré, MATLAB and Mathematica.

We want to use the bottom of the pool to track our movements and then compare them to a virtual map of the pool.

  1. The pool floor has a pattern of markings caused by algae patches on the pool floor.

  2. We are assuming a given height for the time being, because different distances from the pool floor will result in different apparent velocities. We can add that to the formula after we are able to track our velocities. Different heights will require different scaling in the pattern-tracking algorithm, which we can determine by experimentation.

  3. We know a mouse uses this type of algorithm to move the arrow on a computer screen; this is the code we are trying to get our hands on. We think that if we have this, we will be able to use it to accomplish our tracking goal.

  4. Basically it compares one frame of video to the next, measures the offset between the two images to get the direction of motion, and combines the size of that offset with the known frame rate to get speed (a rough OpenCV sketch of this appears after this list).

  5. We will use this movement data to locate where we are in a virtual pool that we will build in our computer. We do know where we start, so we should know where we are if this works.
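
For what it's worth, here is a minimal sketch of step 4 using OpenCV's C++ API. It is only an illustration under a lot of assumptions: cv::phaseCorrelate is just one of several ways to measure the frame-to-frame offset, the camera is assumed to look straight down at a roughly flat bottom, and the focal length, height and frame rate values are made-up placeholders.

```cpp
// Sketch: estimate AUV velocity from two consecutive downward-camera frames.
// Assumes a straight-down camera, a roughly planar bottom, and a known
// height above the bottom. All numeric values below are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat curr = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || curr.empty()) return 1;

    // phaseCorrelate wants floating-point, single-channel images.
    cv::Mat prevF, currF;
    prev.convertTo(prevF, CV_32F);
    curr.convertTo(currF, CV_32F);

    // Dominant translation between the two frames, in pixels.
    cv::Point2d shiftPx = cv::phaseCorrelate(prevF, currF);

    // Placeholder numbers -- calibrate these for the real camera and housing.
    double focalLengthPx = 800.0;       // focal length in pixels
    double heightM       = 1.5;         // height above the pool floor, metres
    double frameDt       = 1.0 / 15.0;  // seconds between frames (15 fps)

    // Pinhole model: metres on the floor = pixels * height / focal length.
    double vx = shiftPx.x * heightM / focalLengthPx / frameDt;
    double vy = shiftPx.y * heightM / focalLengthPx / frameDt;

    std::cout << "velocity ~ (" << vx << ", " << vy << ") m/s" << std::endl;
    return 0;
}
```

This is also where the height assumption from step 2 comes in: the same pixel shift corresponds to a larger real-world distance the farther you are from the bottom, so the height term scales the result.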

Does this help? Now can you help? Or do you know someone who can help us get this code?

Thanks in advance!!!

I believe the technique you describe is used to stabilize the Parrot Drone as well.

I don’t know if it will be helpful, but NI-Vision has an Optical flow function callable from LV, and typically the C entry points (CVI) functions match. If you guys are looking for a new controller, like the myRIO, please let me know via PM.

Greg McKaskle

We do not need a new controller; the AUV uses an Ivy Bridge Mini-ITX computer, and we are currently using C# and the AForge vision libraries but are converting to C++ and the OpenCV vision libraries. Thanks for the info, though.

Found this on the optical mouse; we are basically trying to use our AUV as a mouse in a pool.

Modern surface-independent optical mice work by using an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the cursor and eliminating the need for a special mouse-pad.
The first commercially successful optical computer mice were the Microsoft IntelliMouse with IntelliEye and IntelliMouse Explorer, introduced in 1999 using technology developed by Hewlett-Packard.[9] It worked on almost any surface, and represented a welcome improvement over mechanical mice, which would pick up dirt, track capriciously, invite rough handling, and need to be taken apart and cleaned frequently. Other manufacturers soon followed Microsoft’s lead using components manufactured by the HP spin-off Agilent Technologies, and over the next several years mechanical mice became obsolete.
The technology underlying the modern optical computer mouse is known as digital image correlation, a technology pioneered by the defense industry for tracking military targets. Optical mice use image sensors to image naturally occurring texture in materials such as wood, cloth, mouse pads and Formica. These surfaces, when lit at a grazing angle by a light emitting diode, cast distinct shadows that resemble a hilly terrain lit at sunset. Images of these surfaces are captured in continuous succession and compared with each other to determine how far the mouse has moved.
To understand how optical mice work, imagine two photographs of the same object except slightly offset from each other. Place both photographs on a light table to make them transparent, and slide one across the other until their images line up. The amount that the edges of one photograph overhang the other represents the offset between the images, and in the case of an optical computer mouse the distance it has moved.
Optical mice capture one thousand successive images or more per second. Depending on how fast the mouse is moving, each image will be offset from the previous one by a fraction of a pixel or as many as several pixels. Optical mice mathematically process these images using cross correlation to calculate how much each successive image is offset from the previous one.
An optical mouse might use an image sensor having an 18 × 18 pixel array of monochromatic pixels. Its sensor would normally share the same ASIC as that used for storing and processing the images. One refinement would be accelerating the correlation process by using information from previous motions, and another refinement would be preventing deadbands when moving slowly by adding interpolation or frame-skipping.
The invention of the modern optical mouse at HP was made more likely by a succession of related projects during the 1990s at its central research laboratory. In 1992 John Ertel, William Holland, Kent Vincent, Rueiming Jamp and Richard Baldwin were awarded US Patent 5,149,980 for measuring paper advance in a printer by correlating images of paper fibers. In 1998 Travis N. Blalock, Richard A. Baumgartner, Thomas Hornak, Mark T. Smith, and Barclay J. Tullis were awarded US Patent 5,729,008 for tracking motion in a hand-held scanner by correlating images of paper fibers and document features, a technology commercialized in 1998 with the HP 920 Capshare handheld scanner. In 2002 Gary Gordon, Derek Knee, Rajeev Badyal and Jason Hartlove were awarded US Patent 6,433,780 for the modern optical computer mouse using image correlation.
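
Not actual mouse firmware, but the "slide one photograph over the other until they line up" step described above can be sketched with plain cross-correlation in OpenCV. The file names and patch size below are made up; it simply finds where a small patch from one frame lands in the next frame.

```cpp
// Sketch of the mouse-style correlation step: take a small patch from frame A
// (like the mouse's tiny sensor window) and find where it best matches in
// frame B. The offset of the best match is the motion between the frames.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat a = cv::imread("frameA.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("frameB.png", cv::IMREAD_GRAYSCALE);
    if (a.empty() || b.empty()) return 1;

    // Patch from the centre of frame A.
    int patch = 64;
    cv::Rect roi(a.cols / 2 - patch / 2, a.rows / 2 - patch / 2, patch, patch);
    cv::Mat templ = a(roi);

    // Normalised cross-correlation of that patch against all of frame B.
    cv::Mat scores;
    cv::matchTemplate(b, templ, scores, cv::TM_CCOEFF_NORMED);

    double bestScore;
    cv::Point bestLoc;
    cv::minMaxLoc(scores, nullptr, &bestScore, nullptr, &bestLoc);

    // Offset of the best match relative to where the patch started in frame A.
    cv::Point shift = bestLoc - roi.tl();
    std::cout << "shift: " << shift << "  score: " << bestScore << std::endl;
    return 0;
}
```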

Does anybody have the code for a mouse?

Here is a video that is similar to what we want to do

They are using it with a forward camera on a car; we would be using it with a downward-facing camera on an AUV, but it will still give us direction. Can anybody help us?

I can provide you some help, though it has been a few years since I did this kind of work.

Do you want something that does that, like Greg offered, but that works on a PC out of the box?
Do you want an actual source code example?

If you look back in this topic, I already suggested moiré, MATLAB and Mathematica, all of which have existing work to demonstrate the mathematics.

I really will need more information about your cameras and illumination to give you specific advice; it makes a huge difference in the result.

This should help you quite a bit and quite directly:
http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf

What else can I offer you?

Not meaning to be the LV guy, but it is the hammer in my hand …

If you have LV installed, examples/vision/Motion Estimation shows two similar approaches. They have images in a directory and overlay the vectors of each particle to show how the different HS (Horn–Schunck) and LK (Lucas–Kanade) implementations compare.

If you have images or video to test with, I may be able to validate the approach, even if you later use a different implementation of the algorithm.

The attached screenshot shows the examples. The top one had the particles rotating on a turntable – obvious from the vectors.

Bottom is some sort of spark/flame with particles moving at different speeds and less applicable I’d assume.

You may also want to look at the other vision SW that came in the FIRST kit. I don't remember the name, but it had screen steps documenting how to use it and it is pretty cool.

Greg McKaskle
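
For the OpenCV side of the fence, a rough analogue of that LV Motion Estimation example is dense Farnebäck flow, which gives a vector per pixel that you can decimate and draw as an overlay. This is a sketch with guessed parameters and made-up file names, not something tuned for pool footage:

```cpp
// Sketch: dense optical flow between two frames, with the flow vectors
// drawn on a subsampled grid (similar in spirit to the LV example overlay).
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat curr = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || curr.empty()) return 1;

    // One flow vector per pixel (2-channel float image).
    cv::Mat flow;
    cv::calcOpticalFlowFarneback(prev, curr, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    // Draw every 16th vector on top of the current frame.
    cv::Mat vis;
    cv::cvtColor(curr, vis, cv::COLOR_GRAY2BGR);
    for (int y = 0; y < flow.rows; y += 16) {
        for (int x = 0; x < flow.cols; x += 16) {
            const cv::Point2f& f = flow.at<cv::Point2f>(y, x);
            cv::arrowedLine(vis, cv::Point(x, y),
                            cv::Point(cvRound(x + f.x), cvRound(y + f.y)),
                            cv::Scalar(0, 255, 0));
        }
    }
    cv::imwrite("flow_overlay.png", vis);
    return 0;
}
```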

I will have to get my programmers in on this. When can we set up a conference call?

Sent via PM.

Greg McKaskle

My original intent here was to find people who know how to do this and to see if there was any code that we could use to make this work on our Windows 7 based Ivy Bridge processor running C# and the AForge vision libraries. I am trying to light a fire under my programmers' butts! One is a freshman, but he is sharp; the other is a junior who needs some pushing sometimes. I just want it done!

If we can do this, there are many other AUV teams out there that could benefit. They are being out-teched by the big guns in the competition, like Cornell and Florida, because those teams have DVLs, and I feel that a vision-based tracking system will work in this particular environment. It would give the rest of us a better chance of being in the top three or possibly winning. Here is the event website:
Robosub
http://www.auvsifoundation.org/foundation/competitions/robosub/
We finished 9th out of 32 teams, our best showing yet, but we aren’t finished yet!
Here is our AUV team website
https://sites.google.com/site/falconroboticsauvteam/home

Here’s what you said you’d be using:
“but are converting to C++ and Open CV vision libraries.”

If you read to the end of the link from Stanford I provided, there is working source code that tells you all you really need to know to make this happen.
http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf
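
If the old C API in that PDF is hard to follow, the same pipeline (pick good features, track them with pyramidal Lucas–Kanade, look at the resulting vectors) looks roughly like this in the modern OpenCV C++ API. Treat it as an untested sketch with made-up file names, not a drop-in solution:

```cpp
// Rough modern-OpenCV equivalent of the Stanford tutorial's pipeline:
// good features to track + pyramidal Lucas-Kanade between two frames.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat prev = cv::imread("bottom_prev.png", cv::IMREAD_GRAYSCALE);
    cv::Mat curr = cv::imread("bottom_curr.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || curr.empty()) return 1;

    // 1. Pick corners worth tracking (algae patches should qualify).
    std::vector<cv::Point2f> prevPts;
    cv::goodFeaturesToTrack(prev, prevPts, 200, 0.01, 10.0);
    if (prevPts.empty()) return 1;

    // 2. Track them into the next frame with pyramidal LK.
    std::vector<cv::Point2f> currPts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, curr, prevPts, currPts, status, err);

    // 3. Average the flow of the successfully tracked points into one vector.
    cv::Point2f mean(0.f, 0.f);
    int tracked = 0;
    for (size_t i = 0; i < prevPts.size(); ++i) {
        if (!status[i]) continue;
        mean += currPts[i] - prevPts[i];
        ++tracked;
    }
    if (tracked > 0) mean *= 1.0f / tracked;
    std::cout << "mean flow (pixels/frame): " << mean
              << "  tracked points: " << tracked << std::endl;
    return 0;
}
```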

I note again that image quality is your issue here. Particulate moving in the water will mess with your tracking.

I will also caution you to remove your contact numbers from this topic before something reads it and starts calling you all the time.
I’ve PMed you my contact information.

Thanks, I removed them. I think people don't prank call as much now that cell phones show the caller's number; it's harder to hide. Anyway, I took them off. Thanks!


So now we finally have the FIRST water game we’ve all joked about for so long.

:yikes:

I see from the link that you are using Guppy FireWire cameras.

Hey, we are ready! Bring on the water!

Yup, Guppy FireWire cameras. We have been using HSL values to do object tracking, and we also use flood fill for the forward cameras when the background is uniform. It works great and auto-calibrates with lighting conditions. If we can somehow do the same thing for the bottom, then the light from a sunny or cloudy day will not matter. Right now you calibrate for one lighting condition, and then a cloud comes and you are blind! Auto fill for the bottom first needs to get rid of all the white patches on the floor of the pool by taking the white pixels and making them dark like the dark pixels on the floor, and then do the auto fill. We are working on that too…
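
A rough OpenCV sketch of that "darken the white patches, then flood fill" idea might look like this; the threshold, seed point and tolerances are placeholder guesses to experiment with:

```cpp
// Sketch: suppress bright white patches on the pool floor, then flood fill
// from a seed pixel. The threshold, seed and tolerances are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bottom = cv::imread("bottom.png", cv::IMREAD_GRAYSCALE);
    if (bottom.empty()) return 1;

    // Mark "white" pixels (brighter than a guessed threshold).
    cv::Mat whiteMask;
    cv::threshold(bottom, whiteMask, 200, 255, cv::THRESH_BINARY);

    // Replace them with the typical floor brightness so they blend in.
    cv::Mat notWhite;
    cv::bitwise_not(whiteMask, notWhite);
    double floorLevel = cv::mean(bottom, notWhite)[0];
    bottom.setTo(cv::Scalar(floorLevel), whiteMask);

    // Flood fill outward from the image centre with a loose tolerance,
    // marking the connected "floor" region.
    cv::Point seed(bottom.cols / 2, bottom.rows / 2);
    cv::floodFill(bottom, seed, cv::Scalar(255), nullptr,
                  cv::Scalar(15), cv::Scalar(15));

    cv::imwrite("bottom_filled.png", bottom);
    return 0;
}
```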

If you have a variety of images, cloudy and clear, it would be easy to try an adaptive threshold and see if it is as effective, or I suspect more effective, than the flood fill.

Greg McKaskle
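
In OpenCV terms, that adaptive-threshold experiment could be as small as the following; the block size and offset are guesses to tune against a set of cloudy and clear images:

```cpp
// Sketch: adaptive threshold as a lighting-tolerant alternative to a fixed
// flood-fill calibration. blockSize (51) and the offset (-5) are guesses.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("bottom.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Threshold each pixel against the local mean of a 51x51 neighbourhood,
    // so a passing cloud shifts the local mean and the threshold together.
    cv::Mat mask;
    cv::adaptiveThreshold(gray, mask, 255,
                          cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 51, -5);

    cv::imwrite("bottom_mask.png", mask);
    return 0;
}
```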

You posted

“This should help you quite a bit and quite directly:
http://robots.stanford.edu/cs223b05/...tical_flow.pdf"

Good stuff in here, will show my programmers! Thanks!