#1
16-01-2006, 00:49
Mike
has common ground with Matt Krass
AKA: Mike Sorrenti
FRC #0237 (Sie-H2O-Bots (See-Hoe-Bots) [T.R.I.B.E.])
Team Role: Programmer
 
Join Date: Dec 2004
Rookie Year: 2004
Location: Watertown, CT
Posts: 1,003
Vision algorithms?

I was reading about Stanley (Stanford's winning entry for the DARPA Grand Challenge) today, and it mentioned that Stanley used a video camera to scan the ground and see any obstacles. How does that work? Obviously, it's probably much too complex to explain in a single post, but could anybody explain the basics?

Is it a normal camera connected to a computer?
Or is it a specialized camera (kinda like the CMUCam)?
How do you even begin designing an image analysis algorithm?
Any information on how to do a simple, scaled-down version of the Stanley system?

Thanks
__________________
http://www.mikesorrenti.com/
#2
16-01-2006, 01:06
JoelP
whats the P for? Pazhayampallil
FRC #1155 (Bronx Science Sciborgs)
Team Role: Leadership
 
Join Date: Dec 2004
Rookie Year: 2005
Location: bronx, new york
Posts: 62
Re: Vision algorithms?

I read a WIRED article a week or two ago about Stanley and it gave me a whole new perspective on programming. After reading it, I had a great number of ideas on how to improve the reliability of the CMUcam we use.

Now, to answer your question using what I've read about Stanley from that article and a Popular Mechanics article: I believe they "trained" the computer on how to interpret data from its cameras by comparing what its programming instructed it to do to the driving style of a person. I think they actually had their program running while they drove Stanley around, and the program compared the driver's reactions to the surroundings to what the original program would have done in the same situation. Then, believe it or not, the program refined itself to closely match the reactions of a human driver. In addition, they had LIDAR (LIght Detection And Ranging) laser sensors mounted to the front. The program compared what it saw through the cameras to what the LIDAR identified as clear, drivable roadway ahead of it. Then the program again refined itself so that it could detect similar road conditions beyond the range of the LIDAR with the cameras.
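A very rough sketch of that second idea (letting the LIDAR-verified patch of road teach the camera what "drivable" looks like) might go something like this. Purely illustrative: the image is an HxWx3 NumPy array, and near_road_mask is a hypothetical boolean mask marking the pixels the LIDAR has already confirmed as drivable.

Code:
import numpy as np

def learn_road_color(image, near_road_mask):
    """Build a mean/std color model from the LIDAR-verified road pixels."""
    road_pixels = image[near_road_mask].astype(np.float32)    # N x 3
    return road_pixels.mean(axis=0), road_pixels.std(axis=0) + 1e-6

def classify_drivable(image, mean, std, max_sigma=2.5):
    """Flag every pixel whose color sits within a few std-devs of the model."""
    distance = np.abs(image.astype(np.float32) - mean) / std  # H x W x 3
    return np.all(distance < max_sigma, axis=2)               # H x W boolean map

# mean, std = learn_road_color(frame, near_road_mask)
# far_drivable = classify_drivable(frame, mean, std)

Re-learning the color model from the near field on every frame is what would let a system like this adapt as lighting and road surface change.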

This new method of programming, in which the program refines itself by comparing data from various inputs rather than following pre-programmed rules, opens many new possibilities. I believe this is the path to true AI, where the program can change itself and "learn" from experience.

Edit: In regard to your last question about a scaled-down version of what Stanley does, I have a few ideas. Last year the main problem with the CMUcam was that it could not track the target under varying light conditions, because the color values would change. So the robot would have to be able to change the color values itself until it found the right values to track the target accurately. Then, to actually find the target, some simple shape detection could be used. For example, if the target was triangular like the yellow triangles in the goals last year, the camera could find the number of tracked pixels and the size of the box drawn around them. Then, if the number of tracked pixels was about half the number of pixels within that rectangular bounding box, the robot would know that the target it was tracking was triangular (see the sketch below). As I said before, the robot can then vary the color values until it finds the correct target.

This is just one idea; the possibilities are endless.
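Here is a rough sketch of that fill-ratio test. The pixel count and bounding-box corners are hypothetical stand-ins for whatever the camera actually reports:

Code:
def looks_triangular(tracked_pixels, x1, y1, x2, y2, tolerance=0.15):
    """A solid triangle fills roughly half its bounding box; a solid rectangle
       fills nearly all of it. tracked_pixels = count of pixels matching the
       tracked color; (x1, y1)-(x2, y2) = corners of the box around them."""
    box_area = max(1, (x2 - x1) * (y2 - y1))
    fill_ratio = tracked_pixels / float(box_area)
    return abs(fill_ratio - 0.5) < tolerance

# Example: a 40x30 box containing ~600 tracked pixels gives a ratio of 0.5,
# so the blob is probably triangular.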

Last edited by JoelP : 16-01-2006 at 01:19.
#3
16-01-2006, 02:13
Eldarion
Electrical Engineer / Computer Geek
AKA: Eldarion Telcontar
no team (Teamless Orphan)
Team Role: Alumni
 
Join Date: Nov 2005
Rookie Year: 2005
Location: Númenor
Posts: 558
Re: Vision algorithms?

Yes, it is possible to do things like this. However, a word of caution: vision algorithms are computationally very costly. If you want an example of this, do a search for "generalized Hough transform". That is about the most robust shape recognition system in existence. The only problem is, it would take over a minute to test one frame on most people's desktops!
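To get a feel for where the cost comes from, here is a bare-bones classic (line, not generalized) Hough transform in plain NumPy. Every edge pixel votes in every angle bin, so the work grows with edge_pixels x angle_bins, and the generalized version is far heavier still:

Code:
import numpy as np

def hough_lines(edges, n_theta=180):
    """edges: 2-D boolean array from an edge detector (e.g. Sobel + threshold).
       Returns the (rho, theta) accumulator; its peaks are the strongest lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                 # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices stay non-negative
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(accumulator, (rhos, theta_idx), 1)
    return accumulator, thetas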

Good luck!
__________________
CMUCam not working? Tracks sporadically? Try this instead: http://www.falconir.com!
PM me for more information if you are interested (it's open source!).

Want the FIRST Email blasts? See here: http://www.chiefdelphi.com/forums/sh...ad.php?t=50809

"The harder the conflict, the more glorious the triumph. What we obtain too cheaply, we esteem too lightly; it is dearness only that gives everything its value."
-- Thomas Paine

If it's falling apart it's a mechanical problem. If it's spewing smoke it's an electrical problem.
If it's rampaging around destroying things it's a programming problem.

"All technology is run on 'Magic Smoke' contained within the device. As everyone knows, whenever the magic smoke is released, the device ceases to function."
-- Anonymous

I currently speak: English, some German, Verilog, x86 and 8051 Assembler, C, C++, VB, VB.NET, ASP, PHP, HTML, UNIX and SQL
#4
16-01-2006, 09:31
Mike
has common ground with Matt Krass
AKA: Mike Sorrenti
FRC #0237 (Sie-H2O-Bots (See-Hoe-Bots) [T.R.I.B.E.])
Team Role: Programmer
 
Join Date: Dec 2004
Rookie Year: 2004
Location: Watertown, CT
Posts: 1,003
Re: Vision algorithms?

Quote:
Originally Posted by Eldarion
Yes, it is possible to do things like this. However, a word of caution: vision algorithms are computationally very costly. If you want an example of this, do a search for "generalized Hough transform". That is about the most robust shape recognition system in existence. The only problem is, it would take over a minute to test one frame on most people's desktops!

Good luck!
I'm not necessarily going towards shape recognition. This isn't for FIRST. I am trying to develop an autonomous robot (based on an RC truck chassis) that can find the easiest way to a beacon that will be hidden in the woods. So it's going to have to drive around trees, rocks, etc., but still not run away from a dust cloud and the like.

Should I be looking into technologies other than video?
__________________
http://www.mikesorrenti.com/
#5
16-01-2006, 13:23
Eldarion
Electrical Engineer / Computer Geek
AKA: Eldarion Telcontar
no team (Teamless Orphan)
Team Role: Alumni
 
Join Date: Nov 2005
Rookie Year: 2005
Location: Númenor
Posts: 558
Re: Vision algorithms?

Quote:
Originally Posted by Mike
I'm not necessarily going towards shape recognition. This isn't for FIRST. I am trying to develop an autonomous robot (based on an RC truck chassis) that can find the easiest way to a beacon that will be hidden in the woods. So it's going to have to drive around trees, rocks, etc., but still not run away from a dust cloud and the like.

Should I be looking into technologies other than video?
Yes, probably the best (and most expensive, unfortunately) would be some kind of laser range-finder. The next-best would probably be ultrasonic "pingers" looking for obstacles.

I don't think any of the DARPA cars used vision to do very much (correct me if I'm wrong). They mainly relied on the laser range finders and GPS location being fed into those fancy route-plotting algorithms you mentioned earlier.

What kind of beacon were you thinking of?
#6
16-01-2006, 13:34
6600gt
Registered User
AKA: Lohit
FRC #0226 (Hammerhead)
Team Role: Alumni
 
Join Date: Jan 2006
Rookie Year: 2004
Location: Troy, MI
Posts: 221
Re: Vision algorithms?

Put a range finder on a servo and, at regular increments, record the value to create a map of the area around you (sketched below).
What type of microcontroller are you going to use? The 18F-series PICs inside the Robot Controller would work just fine. You can order a sample pack from the Microchip website for free. These are more work to get started with, but they are far more powerful than the Basic Stamp and cheaper. In fact, you can get the same processing power as the Robot Controller.
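A rough sketch of that sweep-and-record idea, assuming two hypothetical helpers, set_servo_angle(deg) and read_range_cm(), for whatever servo and rangefinder you end up using:

Code:
import math, time

def sweep_scan(step_deg=5, settle_s=0.05):
    """Sweep the rangefinder across 180 degrees and return (x, y) obstacle
       points in the robot frame, in cm; theta = 0 means straight ahead."""
    points = []
    for angle in range(0, 181, step_deg):
        set_servo_angle(angle)          # hypothetical servo helper
        time.sleep(settle_s)            # let the servo settle before reading
        r = read_range_cm()             # hypothetical rangefinder helper
        if r is not None:               # None = no echo / out of range
            theta = math.radians(angle - 90)
            points.append((r * math.sin(theta), r * math.cos(theta)))
    return points

Run a sweep, steer toward the largest gap in the points, drive a little, and repeat.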

If you want to use cameras you need far more powerful controllers, possibly a laptop on the RC car.
I wanted to build something like this too but haven't had time, though I have been playing around with PICs.

Look at the Mars Rovers. They use black-and-white cameras for navigation, which are probably much easier and faster to process.

Last edited by 6600gt : 16-01-2006 at 13:37.
#7
16-01-2006, 13:37
Kevin Watson
La Cañada High School
FRC #2429
Team Role: Mentor
 
Join Date: Jan 2002
Rookie Year: 2001
Location: La Cañada, California
Posts: 1,335
Re: Vision algorithms?

Quote:
Originally Posted by Mike
I'm not necessarily going towards shape recognition. This isn't for FIRST. I am trying to develop an autonomous robot (based on an RC truck chassis) that can find the easiest way to a beacon that will be hidden in the woods. So it's going to have to drive around trees, rocks, etc., but still not run away from a dust cloud and the like.
Machine vision and autonomous navigation are a couple of technologies that I work on in my day gig. If you can ignore the first thirty seconds or so, have a look at this movie which discusses a few of the algorithms we used on the MERs that enable autonomous navigation on Mars. To get started on autonomous path planning, google on "D* path planning" and you should be able to find quite a bit of material. There is also an open source vision library that Intel started.
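Not D* itself, but a minimal grid A* (the static-map planner that D* and D* Lite extend with efficient replanning) gives the flavor of what those path planners do. Sketch only; grid is a 2-D list where 0 = free and 1 = obstacle:

Code:
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (obstacle) cells."""
    rows, cols = len(grid), len(grid[0])
    def h(a, b):                        # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:           # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                # walk the parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float('inf')):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc), node))
    return None                         # no path exists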

-Kevin
__________________
Kevin Watson
Engineer at stealth-mode startup
http://kevin.org

Last edited by Kevin Watson : 04-05-2006 at 10:58.
#8
16-01-2006, 17:53
Avarik
Registered User
#0022
 
Join Date: Jan 2004
Location: Chatsworth, CA
Posts: 75
Re: Vision algorithms?

I was lucky enough to be able to take a close-up look at Stanley and a lot of other DARPA vehicles. I was told that Stanley implemented an algorithm which decided which data was good and which was bad, and that's one of the reasons they were so successful. In addition, I don't recall seeing any camera on that vehicle, but five LIDAR sensors instead, mounted to the top.

In fact, almost all of the DARPA vehicles used LIDAR sensors. Some tried to implement other sensors as well. The only one I can recall on which I didn't see a LIDAR, but only two cameras, was Berkeley's motorcycle.
#9
16-01-2006, 20:51
Mike
has common ground with Matt Krass
AKA: Mike Sorrenti
FRC #0237 (Sie-H2O-Bots (See-Hoe-Bots) [T.R.I.B.E.])
Team Role: Programmer
 
Join Date: Dec 2004
Rookie Year: 2004
Location: Watertown, CT
Posts: 1,003
Re: Vision algorithms?

Thanks for all the help guys
Quote:
Originally Posted by Eldarion
I don't think any of the DARPA cars used vision to do very much (correct me if I'm wrong). They mainly relied on the laser range finders and GPS location being fed into those fancy route-plotting algorithms you mentioned earlier.
Ahh, I was under the assumption that they used mainly vision, with LIDAR for redundancy.

Quote:
Originally Posted by Eldarion
What kind of beacon were you thinking of?
It would have to have a range of a couple hundred feet, without the line of sight that IR requires. Ultrasonic is the next option, but if I choose to use ultrasonic range sensors, that may present an interference problem. Overall, I'm not too worried about it right now. One thing I've learned from FIRST planning is to figure out what you want to do first, and then figure out how to do it.

Quote:
Originally Posted by 6600gt
Put a range finder on a servo and, at regular increments, record the value to create a map of the area around you.
I'd like to have my 'bot be able to run at least 15 mph. This would make it have to go ten feet, stop, take a reading, go ten feet, etc. It is a good idea. If it comes down to spending hundreds on laser rangefinders or going slower, I think I'll take your idea.

Quote:
Originally Posted by 6600gt
What type of microcontroller are you going to use?
Personally, I'm a fan of the AVR ATMega series. That, plus the fact that I have a nifty STK500 sitting in my basement is leading me away from the PICs.

Quote:
Originally Posted by 6600gt
If you want to use cameras you need far more powerful controllers, possibly a laptop on the RC car.
Yeah, I was thinking about this as well. If I had the money I would purchase a Mini-ITX and mount it. Unfortunately, my budget is exponentially smaller than what that would require.

Thanks for the help Kevin and Avarik
__________________
http://www.mikesorrenti.com/
#10
16-01-2006, 21:00
mechanicalbrain
The red haired Dremel gnome!
FRC #0623 (Ohm robotics)
Team Role: Electrical
 
Join Date: Apr 2005
Rookie Year: 2004
Location: Virginia
Posts: 1,221
Re: Vision algorithms?

Here's the Wired article on Stanley. http://www.wired.com/wired/archive/14.01/stanley.html
__________________
"Oh my God! There's an axe in my head."
623's 2006 home page
random mechanicalbrain slogans

#11
19-01-2006, 09:24
ohararp
Registered User
FRC #0042
Team Role: Mentor
 
Join Date: Jan 2006
Rookie Year: 2006
Location: Nashua, NH, USA
Posts: 1
Re: Vision algorithms?

Gentlemen, after playing with the CMUCAM2 I have discovered what a great piece of kit this is. While working at the Air Force Research Laboratory I had attempted to track a laser dot using two webcams and a stereo vision algorithm. Ultimately, I ran into trouble due to automation issues. With the CMUCAM2 and its color tracking, these issues can be solved.

Mainly for Kevin here, but I am open to other input: I am attempting to leverage some of the new PIC 18F microprocessors to perform the stereo triangulation of the two Mx/My values received when tracking a color. Most of the work has already been done at:

http://www.vision.caltech.edu/bougue.../example5.html

concerning camera calibration (accounting for lens distortion). The tough part is actually computing the stereo_triangulation algorithm. There is a lot of floating-point math involved, so I highly recommend using an offboard processor for this. After doing an initial feasibility investigation, I don't see this being a problem, especially given the 50 FPS of the CMUCAM2.
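For the idealized case of two identical, rectified (parallel, already undistorted) cameras, the triangulation reduces to a disparity calculation. A simplified sketch, assuming the calibration from the toolbox above has produced the focal length, baseline, and principal point:

Code:
def triangulate(mx_left, mx_right, my, focal_px, baseline_cm, cx, cy):
    """mx_left / mx_right: tracked-blob x centroids from each camera (pixels);
       my: y centroid; (cx, cy): principal point; returns (X, Y, Z) in cm."""
    disparity = float(mx_left - mx_right)
    if disparity <= 0:
        return None                     # target at infinity or mismatched blobs
    Z = focal_px * baseline_cm / disparity
    X = (mx_left - cx) * Z / focal_px
    Y = (my - cy) * Z / focal_px
    return X, Y, Z

The real stereo_triangulation routine handles non-parallel cameras and lens distortion, which is where the heavier floating-point math comes from.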

Kevin, maybe we could coordinate on adding a stereo vision controller to the CMUCAM2s next year?
#12
19-04-2006, 11:57
Salik Syed
Registered User
FRC #0701 (RoboVikes)
Team Role: Alumni
 
Join Date: Jan 2003
Rookie Year: 2001
Location: Stanford CA.
Posts: 514
Re: Vision algorithms?

Actually, what they did was use the laser range finder to scan nearby objects; they analyzed the 3D data from the range finder to see which areas did not have obstacles. They then used the camera to look for areas with a similar texture up ahead, and thus were able to calculate how much curve or straightaway there was up ahead.
__________________
Team 701
#13
19-04-2006, 17:43
lemoneasy
Registered User
AKA: Evan Crawford
FRC #1334
Team Role: Programmer
 
Join Date: Feb 2006
Rookie Year: 2004
Location: Oakville, Ontario
Posts: 21
Re: Vision algorithms?

Yup, I have the magazine in front of me, and the data from Stanley is as follows:
1. GPS: a rooftop GPS antenna receives data that has actually traveled twice into space - once to receive an initial position that is accurate to within a meter, and a second time for a correction that makes the position accurate to within 1 cm.

2. LIDAR: scans the terrain 30 meters ahead and to either side 5 times a second, and the data builds a map of the terrain. (I've seen the map it makes; it looks great and accurate.)

3. Video camera: scans the road beyond the LIDAR's range. If the lasers have identified drivable ground, the computer searches for the same characteristics in the video data, extending the vision to 80 meters so the car can accelerate safely.

4. Odometry: really just encoders on the wheels, so the computer knows its position if GPS is interrupted (a rough dead-reckoning sketch follows this list).
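The dead-reckoning idea in item 4, sketched for a simple differential-drive robot. read_left_ticks() and read_right_ticks() are hypothetical helpers returning encoder ticks since the last call, and the constants are made-up values:

Code:
import math

TICKS_PER_CM = 20.0      # encoder ticks per cm of wheel travel (made-up value)
WHEEL_BASE_CM = 30.0     # distance between the wheels (made-up value)

x = y = heading = 0.0    # pose estimate, starting at the origin

def update_pose():
    """Integrate encoder ticks into an (x, y, heading) estimate."""
    global x, y, heading
    d_left = read_left_ticks() / TICKS_PER_CM      # hypothetical helper
    d_right = read_right_ticks() / TICKS_PER_CM    # hypothetical helper
    d_center = (d_left + d_right) / 2.0
    heading += (d_right - d_left) / WHEEL_BASE_CM  # change in heading, radians
    x += d_center * math.cos(heading)
    y += d_center * math.sin(heading)

Call update_pose() on a fixed timer; the error drifts over time, which is why odometry is only leaned on when GPS drops out.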

For the most part, though, it is the GPS waypoints that guide the car. I don't think you will have an RC car driving around at 15 mph without a Mini-ITX, since the whole pathfinding algorithm is CPU intensive. As for sonar on a servo, something tells me that data will be inaccurate; a pencil-beam sonar maybe, but a low-end sonar on a servo will not give you a map that is really useful. Using it just for larger obstacles may be better.

As for LIDAR, I have never heard of any suppliers of low-cost or hobby LIDAR; if someone has seen it for sale, I'd be interested to read about it.
__________________
#14
19-04-2006, 23:33
TimCraig
Registered User
AKA: Tim Craig
no team
 
Join Date: Aug 2004
Rookie Year: 2003
Location: San Jose, CA
Posts: 221
Re: Vision algorithms?

Here's a link to the technical papers on most of the Grand Challenge entries, including Stanford's. You may find them interesting. I'm slowly working my way through them.

http://www.darpa.mil/grandchallenge05/techpapers.html
#15
20-04-2006, 08:06
Gdeaver
Registered User
FRC #1640
Team Role: Mentor
 
Join Date: Mar 2004
Rookie Year: 2001
Location: West Chester, Pa.
Posts: 1,370
Re: Vision algorithms?

Digital machine vision is the major field of study now. However, don't forget one of the greatest terrain-following achievements - the cruise missile. The cruise missile system was developed before powerful digital electronics were available. It used optical processing followed by some analog signal processing and filtering; the processed image wasn't digitized until the last stage. The image was optically and analog-filtered down to the most important data in an image: the edges. The edges of objects are all that are needed for navigation.

To see why, try this: go into a dark room where you can just barely see. If the light is just right, your eye will perceive only the edges of objects in the room, yet you can still walk around. Notice that there is no color info in your perceived image. This is basically what the cruise missile system does.

Optical and analog systems are very hard to develop. I understand some real breakthroughs were achieved and the next-generation system was being developed; then GPS came along and the project was scrapped. If any of you kids are going to college and are interested in machine vision, you may want to pay more attention to optical processing. The cruise missile tech is still tied up in NS but may start coming out in pieces over the next few years.
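To make the "edges are all you need" point concrete, here is the rough digital equivalent of that optical filtering: a Sobel edge map in plain NumPy (the missile did the analogous step with lenses and analog electronics). img is a 2-D grayscale array:

Code:
import numpy as np

def sobel_edges(img, threshold=100):
    """Return a boolean edge map: True where the gradient magnitude is large."""
    img = img.astype(np.float32)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T                           # vertical-gradient kernel
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # 3x3 correlation done by shifting and summing, to avoid a SciPy dependency
    for dr in range(3):
        for dc in range(3):
            shifted = img[dr:img.shape[0] - 2 + dr, dc:img.shape[1] - 2 + dc]
            gx[1:-1, 1:-1] += kx[dr, dc] * shifted
            gy[1:-1, 1:-1] += ky[dr, dc] * shifted
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

An edge map like this is also exactly the kind of input the Hough transform sketched earlier in the thread expects.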