16-01-2006, 01:06
JoelP, FRC #1155 (Bronx Science Sciborgs)
Re: Vision algorithms?

I read a WIRED article a week or two ago about Stanley, and it gave me a whole new perspective on programming. After reading it, I had quite a few ideas on how to improve the reliability of the CMUcam we use.

Now to answer your question, drawing on what I've read about Stanley in that article and a Popular Mechanics article: I believe they "trained" the computer to interpret data from its cameras by comparing what its programming instructed it to do against the driving style of a person. They apparently had the program running while a human drove Stanley around, and the program compared the driver's reactions to the surroundings against what the original program would have done in the same situation. Then, believe it or not, the program refined itself to more closely match the reactions of the human driver. In addition, they had LIDAR (LIght Detection And Ranging) laser sensors mounted on the front. The program compared what it saw through the cameras against what the LIDAR identified as clear, drivable roadway ahead of it, and then refined itself again so that the cameras could detect similar road conditions beyond the range of the LIDAR.
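I obviously don't know Stanford's actual code, but here's a rough Python sketch of the kind of LIDAR-to-camera self-training I mean: the near-field pixels the LIDAR labels as drivable keep updating a running model of what "road" looks like, and that model then classifies the far-field pixels the LIDAR can't reach. All the names and numbers here are made up for illustration.

[code]
import numpy as np

def update_road_model(mean, var, road_pixels, rate=0.05):
    # road_pixels: Nx3 array of RGB values the LIDAR just confirmed
    # as drivable ground in the near field. Blend them into a running
    # mean/variance of "road color" so the model adapts over time.
    mean = (1 - rate) * mean + rate * road_pixels.mean(axis=0)
    var = (1 - rate) * var + rate * road_pixels.var(axis=0)
    return mean, var

def classify_far_field(image, mean, var, threshold=3.0):
    # image: HxWx3 array. A pixel counts as road if every channel is
    # within `threshold` standard deviations of the learned road color.
    dist = np.abs(image - mean) / np.sqrt(var + 1e-6)
    return dist.max(axis=-1) < threshold   # HxW boolean road mask
[/code]

Each frame, the pixels inside the LIDAR-confirmed region go into update_road_model, and classify_far_field then extends the road estimate out beyond LIDAR range, which is (very roughly) what the articles describe.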

This new method of programming, in which the program refines itself by comparing data from various inputs rather than following fixed, pre-programmed rules, opens many new possibilities. I believe this is the path to true AI, where the program can change itself and "learn" from experience.
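To make "refines itself" concrete, here's the simplest version I can think of, assuming (purely for illustration) that the program computes its steering as a gain times a sensor reading. Each time the human's reaction differs from what the program would have done, the gain gets nudged so the next prediction is closer to the human's:

[code]
def refine_gain(gain, sensor_value, human_steering, rate=0.01):
    # What the current program would have done in this situation.
    program_steering = gain * sensor_value
    # How far off it was from the human driver's actual reaction.
    error = human_steering - program_steering
    # Nudge the gain to shrink that error next time (an LMS-style update).
    return gain + rate * error * sensor_value
[/code]

Stanley's real system is obviously far more elaborate, but that's the basic loop: predict, compare to the human, adjust.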

Edit: Regarding your last question about a scaled-down version of what Stanley does, I have a few ideas. Last year the main problem with the CMUcam was that it could not track the target under varying light conditions, because the color values would change. So the robot would have to adjust the color values itself until it found values that tracked the target accurately. Then, to confirm it had actually found the target, some simple shape detection could be used. For example, if the target is triangular, like the yellow triangles in last year's goals, the camera could report the number of tracked pixels and the size of the bounding box drawn around them. If the number of tracked pixels is about half the number of pixels within that rectangular bounding box, the robot knows the blob it is tracking is triangular (a triangle covers exactly half the area of its bounding box). As I said before, the robot can then keep varying the color values until it locks onto the correct target.
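Here's a rough Python sketch of that idea. track_color is just a stand-in for whatever call actually commands the CMUcam to track a color window and report back; the channel values and thresholds are made up:

[code]
def looks_triangular(pixel_count, box_w, box_h, tolerance=0.15):
    # A triangle fills about half of its bounding box; a solid
    # rectangle fills nearly all of it.
    if box_w == 0 or box_h == 0:
        return False
    fill_ratio = pixel_count / float(box_w * box_h)
    return abs(fill_ratio - 0.5) < tolerance

def find_target(track_color):
    # Sweep progressively wider color windows (e.g. as the light dims)
    # until one of them tracks a blob that is shaped like the target.
    for lo, hi in [(200, 255), (170, 255), (140, 255)]:
        pixel_count, box_w, box_h = track_color(lo, hi)
        if pixel_count > 20 and looks_triangular(pixel_count, box_w, box_h):
            return (lo, hi)   # these color values work; keep using them
    return None               # nothing triangular found; try again next frame
[/code]

The "half the bounding box" test is crude (a diagonal stripe would also pass it), but it's cheap enough to run on the robot controller every frame.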

This is just one idea; the possibilities are endless.
