#9, 28-10-2005, 00:09
Rickertsen2
Team #1139 (Chamblee Gear Grinders), Team Role: Alumni
 
Join Date: Dec 2002
Rookie Year: 2002
Location: ATL
Posts: 1,421
Re: ceiling navigation

Quote:
Originally Posted by Gdeaver
You want to look up. A cruise missile looks down. You're thinking of terrain-following systems. The US terrain-following cruise missile is an interesting system and could have some civilian applications in machine vision. Too bad the critical parts are tied up under national security. I believe Scientific American had an article on it a few years ago. Current machine vision has gone down the path of capturing more and more digital information from an image and throwing more and more computer horsepower at it. The cruise missile is different. It takes an optical image and, through optical and analog processing and filtering, reduces the image's information to just what is important: the edges and boundaries of objects, which at the last stage are digitized and compared to a processed image database.
Indeed. I have been investigating some of the algorithms used by motion trackers for video compositing, and they are very computationally intensive. I am starting to look for algorithms specifically designed for machine vision. I have a few of my own in mind, but I am no expert on the subject and do not know whether or not they will work.

I envision something that operates in a few steps (roughly sketched in code after the list):
*grab a frame
*generate new images that are in terms of things like change-in-color vs. position, instead of raw color vs. position.
*look for well-defined areas. Record their positions as well as characteristics like their shape, size, contrast range, and location relative to other features. Store these.
*query a database of previously detected features in the general vicinity that correspond to the currently seen features.
*calculate position.
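
For what it's worth, here is a very rough sketch of how those steps might hang together, written in Python with OpenCV just as a placeholder. The particular detector, matcher, and every name in it are assumptions on my part, not a worked-out design.

Code:
# Rough sketch only: assumes OpenCV (cv2) is available; ORB keypoints and a
# brute-force matcher are stand-ins for whatever feature scheme ends up working.
import cv2

def detect_features(frame):
    # Steps 1-2: work in "change in intensity vs. position" space rather than
    # raw color vs. position, via image gradients.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    gradient = cv2.magnitude(gx, gy)

    # Step 3: find well-defined areas and describe them (position, size,
    # response strength); ORB here is just a placeholder detector/descriptor.
    detector = cv2.ORB_create(nfeatures=200)
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    return keypoints, descriptors, gradient

def match_to_stored(descriptors, stored_descriptors):
    # Step 4: query the previously stored features for correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(descriptors, stored_descriptors)

# Step 5: position would then come from the geometry of the matched pairs,
# e.g. estimating a homography between current and stored keypoint locations.

The detection and matching steps are where the heavy computation the motion-tracker algorithms suffer from would show up, so that is probably where an algorithm designed specifically for machine vision would pay off.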
__________________
1139 Alumni

Last edited by Rickertsen2 : 28-10-2005 at 00:22.