Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Autonomous Perception (http://www.chiefdelphi.com/forums/showthread.php?t=85072)

ideasrule 12-04-2010 14:01

Re: Autonomous Perception
 
Quote:

Originally Posted by Radical Pi (Post 952554)
I don't really trust using encoders until you can re-zero at a known position after crossing a bump. Even if the wheels come down with the robot, it looks like most robots slip a fair amount from momentum. If you can get a VERY accurate sonar, that would be okay for re-detection after clearing the bump, but I think the risk of another robot interfering is a bit too high with that.

Also, what would happen if you were to land on another robot after crossing the bump? It's certainly possible with the tunnel bots that are about the same height as the bumps. Is there any way of detecting and preventing this?

Once you cross the bump, you know your exact y-position. Assuming the robot is perpendicular to the bump when it goes across--and I have yet to see a robot capable of going across at an angle--the x-position shouldn't change no matter how much the wheels slip.
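
To make that concrete, here's a rough dead-reckoning sketch: integrate encoder distance along the gyro heading, then re-zero the slip-prone y axis the moment you know you've cleared the bump. BUMP_Y is a made-up field constant, and the heading convention is an assumption.

Code:

    import math

    # Hypothetical field constant: y-coordinate of the far edge of the
    # bump, in feet from the starting wall (illustrative, not measured).
    BUMP_Y = 9.0

    class Odometry:
        def __init__(self):
            self.x = 0.0  # feet, parallel to the bump
            self.y = 0.0  # feet, perpendicular to the bump

        def update(self, dist_delta, heading_rad):
            # Integrate encoder travel along the gyro heading
            # (heading 0 = driving straight at the bump).
            self.x += dist_delta * math.sin(heading_rad)
            self.y += dist_delta * math.cos(heading_rad)

        def on_bump_crossed(self):
            # Landmark reset: y is known exactly after the crossing.
            # x is left alone; a perpendicular crossing doesn't move it,
            # however much the wheels slip.
            self.y = BUMP_Y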

kamocat 12-04-2010 18:29

Re: Autonomous Perception
 
During practice, I've seen many robots come off the bump at an angle. Yes, you have to go up it head-on, but on the way down it doesn't seem to matter so much (at least with mecanum wheels).
This usually only happens with the trailing wheels, however, and not until they've passed over the top of the bump.
(The angle was typically about 15 degrees, enough that only three wheels contacted the ground.)

Tom Line 12-04-2010 19:28

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951639)
Well, when you're ready, Maxbotix makes some great SONAR.
http://www.maxbotix.com/Performance_Data.html

FYI, while a STATIC Maxbotix sensor will give you a nice shot off a wall or other flat surface, you're going to be in a world of hurt if you try to use one in motion or sweep it around.

The sensors have some lag, and any large change in distance creates a large 'overshoot' in the sensor. You can filter that out; however, filtering reduces the reaction time of the sensor.

Here's a little food for thought. If your robot is traveling at 10 feet a second, how much time do you have to 'see' an object? What is the processing time of your CPU, AFTER the sensor has processed it? What is the overshoot of the particular sensor involved?

The end result is that for a Maxbotix sensor, moving at 10 feet per second, you'd better be hitting the brakes when it says an object is at 6 feet. Because of the lag, that object is actually at 2 feet.

Ultrasonic sensors also interfere with each other. You'll have to daisy-chain their pings if you're using multiples. If you're using one, you'll have to hold it steady for it to get a reading, then move it. How long can you hold it steady? How long does it take to move? How long does it take to stabilize before taking another reading? And thus, how long will it take you to make a circuit of whatever angle you want to cover and start over again?

Ultrasonic sensors are also flaky. If they see something at the 'edge' of their zone, they may read it, then drop out, then read it again. The reading you get when the object is 20 feet away may come back as 10 feet.

IR sensors get flaky when the reflectivity of the object you're shooting at changes: flat plates vs. angled surfaces, shiny vs. matte.

I would STRONGLY suggest getting a working knowledge of sensors before you start making decisions about how you think you might want to use them. You may spend a great deal of money only to discover that what you thought would work will not.
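
To make the filtering trade-off concrete, here's a quick sketch: a rolling median knocks down single-sample dropouts, but every added sample delays when a real change shows up in the output. The window size is an assumption you'd tune.

Code:

    from collections import deque
    import statistics

    class MedianFilter:
        """Rolling median over the last n sonar readings. Rejects
        one-sample dropout spikes, but adds roughly (n/2) sample
        periods of lag before a real step change gets through."""
        def __init__(self, n=5):
            self.window = deque(maxlen=n)

        def update(self, reading_ft):
            self.window.append(reading_ft)
            return statistics.median(self.window)

    # At the Maxbotix's 20 Hz rate, a 5-sample window adds on the order
    # of 100-150 ms before a genuine change fully appears in the output.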

davidthefat 12-04-2010 19:37

Re: Autonomous Perception
 
With stereo vision, you can make the traditional red/blue-glasses 3D effect: one side is red only and the other is blue; put the images together and, bam, you have 3D...

davidthefat 12-04-2010 19:48

Re: Autonomous Perception
 
You can take the main idea of the IR range finder and have an array of IR diodes transmit beams at the target; the camera picks up the spots, and the spot positions can be used to find the distance using trigonometry.
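
For a single beam, that trigonometry is classic triangulation. A rough sketch follows; the baseline and focal length here are made-up numbers, not measured values.

Code:

    import math

    # Assumed geometry (illustrative only):
    BASELINE_IN = 4.0      # emitter-to-camera separation, inches
    FOCAL_LEN_PX = 700.0   # camera focal length, pixels

    def spot_distance_in(pixel_offset):
        """Distance to the lit IR spot, from how far the spot lands
        from the camera's optical center. Closer targets push the
        spot farther off-center (larger pixel_offset)."""
        if pixel_offset <= 0:
            return float("inf")  # spot at center: target effectively at infinity
        angle = math.atan2(pixel_offset, FOCAL_LEN_PX)
        return BASELINE_IN / math.tan(angle)

    print(spot_distance_in(25))  # ~112 in with the assumed geometry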

kamocat 12-04-2010 20:48

Re: Autonomous Perception
 
Quote:

Originally Posted by davidthefat (Post 952911)
With stereo vision, you can make the traditional red/blue-glasses 3D effect: one side is red only and the other is blue; put the images together and, bam, you have 3D...

Are you referring to robots or humans here?
If you want 3D perception from two images on the robot, it's called a disparity map, and that WILL load down your processor.

The IR camera idea sounds fun, but you might have to investigate the IR output of the stage lights they use at competition.
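
For reference, here's roughly what a disparity-map computation looks like with modern OpenCV's block matcher (a sketch, not something the cRIO-era hardware would run at frame rate):

Code:

    import cv2

    # Grayscale frames from two horizontally separated cameras.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching: for each patch of the left image, search along the
    # same row of the right image for the best match. Nearby objects
    # shift more between the two views (larger disparity).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    # Depth per pixel ~ focal_length_px * baseline / disparity;
    # doing that search for every pixel is what loads the processor.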

davidthefat 12-04-2010 22:04

Re: Autonomous Perception
 
http://www.seattlerobotics.org/encod...110/vision.htm

This is a very interesting read.

kamocat 12-04-2010 22:07

Re: Autonomous Perception
 
Quote:

Originally Posted by Tom Line (Post 952903)
Here's a little food for thought. If your robot is traveling at 10 feet a second, how much time do you have to 'see' an object? What is the processing time of your CPU, AFTER the sensor has processed it? What is the overshoot of the particular sensor involved?

The end result is that for a Maxbotix sensor, moving at 10 feet per second, you'd better be hitting the brakes when it says an object is at 6 feet. Because of the lag, that object is actually at 2 feet. [...]

I was going to say that your estimate was an overshoot, but then I figured I'd calculate it out.
The SONAR has a sample rate of 20 Hz, with the analog signal updated between 38 and 42 ms into the cycle. If you sample just before it updates the analog output, there can be up to 92 ms of lag (a full 50 ms cycle plus the 42 ms update point) after the signal is measured.
At 10 ft/s, that puts the object about 11 inches closer than the reading says.
Now the processing time:
If it takes you 500 ms to process, then you're in serious trouble.
I think 100 ms is reasonable and achievable, so we'll go with that. You've added another 12 inches to your distance.
How much distance does it take to slow down?
If you have a coefficient of friction of 1, then you can decelerate at 1 g, or 32 ft/s².
10 ft/s ÷ 32 ft/s² ≈ 0.31 s of braking. ½ × 10 ft/s × 0.31 s ≈ 1.6 ft, or about 19 inches.

Combined, it's taken about 42 inches to stop; three and a half feet.
Let's hope the other robot isn't travelling your way.
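
Here's that arithmetic wrapped up so you can re-run it with your own speed and latency numbers (a sketch; same assumptions as above):

Code:

    G_FT_S2 = 32.0  # 1 g in ft/s^2

    def stopping_distance_ft(speed_ft_s, sensor_lag_s, processing_s, mu=1.0):
        """Distance covered between the obstacle appearing and the robot
        stopping: full speed through sensor lag plus processing time,
        then a friction-limited braking ramp."""
        reaction = speed_ft_s * (sensor_lag_s + processing_s)
        braking = speed_ft_s ** 2 / (2 * mu * G_FT_S2)  # v^2 / (2a)
        return reaction + braking

    # The numbers above: 10 ft/s, 92 ms worst-case sonar lag, 100 ms processing.
    print(stopping_distance_ft(10.0, 0.092, 0.100))  # ~3.5 ft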

davidthefat 12-04-2010 22:11

Re: Autonomous Perception
 
http://www.youtube.com/watch?v=SPywg...eature=related

Really useful, too.

reversed_rocker 12-04-2010 22:13

Re: Autonomous Perception
 
Here's my post from a different thread, reposted at kamocat's request.

Well, let's just say we're going to field a fully autonomous robot for this year's game. Many games share the same theme of a ball that needs to be picked up, thrown, tossed, etc., so some of the same ideas are likely to apply.

First thing, find a ball.
You could have four sonic rangers across the front of the robot, spaced just under the diameter of a ball apart. The robot would spin until some object gives approximately the same distance on two, and only two, of the sensors; that object would be a ball.
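
A sketch of that check (the agreement tolerance is a made-up number you'd tune):

Code:

    BALL_TOLERANCE_IN = 2.0  # assumed: how closely two readings must agree

    def looks_like_ball(readings):
        """True if exactly one ADJACENT pair of sonar readings agrees
        within tolerance: a ball is narrow enough to span only two
        sensors, while a wall or robot lights up three or four."""
        agreeing_pairs = [
            i for i in range(len(readings) - 1)
            if abs(readings[i] - readings[i + 1]) < BALL_TOLERANCE_IN
        ]
        return len(agreeing_pairs) == 1

    # Example readings (inches) from the four rangers, left to right:
    print(looks_like_ball([140.0, 52.1, 53.0, 141.5]))  # True
    print(looks_like_ball([50.0, 50.0, 50.0, 50.0]))    # False (a wall)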

Getting the ball
For this robot it would be difficult to steer so that the ball hits one particular point to be picked up by a vacuum or a small ball roller, so I would suggest a double ball roller that runs as far across the robot as possible (I'm thinking of a robot very similar to 1918 or 1986). When the robot finds something it thinks is a ball, it stops spinning and drives forward. On both sides of the robot you could have two phototransistors lined up parallel to the ball roller, about 1.5-2 in inside the frame. This way the robot can tell when it has a ball and approximately where on the robot the ball is held (we use the same sensor to detect when a ball is in our vacuum; it's easy to use and very reliable).

Shooting the ball
Since the phototransistors aren't that accurate, you would have the code split the ball roller into three sections: left side of the robot, middle, and right side. The robot would then spin until the camera sees the goal. The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at. Once the robot sees the target, you line up the shot using the camera and fire. Then you start over with the ball-collection phase of the code.

Special considerations:
This would take some playing around with; you would probably have to throw in some timing aspects so that the robot doesn't get stuck in one part of the code. Things like "if you saw a ball 10 seconds ago and you haven't picked it up, go back to finding balls," or "if you don't have a ball anymore, go back to finding balls," or "if it takes you more than 5 seconds to find the goal, drive forward and try again." The sonic rangers could also be used for basic driving maneuvers: if more than two of them see an object less than 3 feet away, turn around.
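
Those escape conditions are easier to keep straight as an explicit state machine. A rough sketch, with the states and timeouts taken from the description above:

Code:

    import time

    # States for the autonomous loop described above.
    FIND, COLLECT, AIM = "find", "collect", "aim"

    class AutoStateMachine:
        def __init__(self):
            self.state = FIND
            self.entered = time.time()

        def _goto(self, new_state):
            self.state, self.entered = new_state, time.time()

        def step(self, sees_ball, has_ball, sees_goal):
            elapsed = time.time() - self.entered
            if self.state == FIND:
                if sees_ball:
                    self._goto(COLLECT)
            elif self.state == COLLECT:
                if has_ball:
                    self._goto(AIM)
                elif elapsed > 10.0:
                    # Saw a ball 10 s ago and never picked it up.
                    self._goto(FIND)
            elif self.state == AIM:
                if not has_ball:
                    # Lost the ball somewhere; back to searching.
                    self._goto(FIND)
                elif not sees_goal and elapsed > 5.0:
                    # Took more than 5 s to find the goal: drive forward
                    # (drive command issued elsewhere) and restart the timer.
                    self._goto(AIM)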

Radical Pi 12-04-2010 23:01

Re: Autonomous Perception
 
Quote:

Originally Posted by reversed_rocker (Post 953048)
The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at.

I'm still going to lobby for my second image-analysis method of goal detection. Instead of depending on a gyro and keeping everything in memory, if you do color detection below the goal you should be able to determine the alliance easily, and also figure out whether the goal is blocked at the same time.
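
Something like this sketch; the region below the goal and the HSV thresholds are placeholders that would have to be tuned against real field lighting:

Code:

    import cv2

    def alliance_below_goal(frame_bgr, goal_box):
        """Classify the region under the goal as 'red' or 'blue' by
        counting pixels inside rough HSV bands. goal_box is (x, y, w, h)
        of the detected goal; we look at the strip just below it."""
        x, y, w, h = goal_box
        roi = frame_bgr[y + h : y + 2 * h, x : x + w]
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

        # Illustrative thresholds -- tune on the actual field.
        red = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
        blue = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))

        return "red" if cv2.countNonZero(red) > cv2.countNonZero(blue) else "blue"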

reversed_rocker 12-04-2010 23:44

Re: Autonomous Perception
 
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad, because you only need to be accurate to 180 degrees.

davidthefat 12-04-2010 23:46

Re: Autonomous Perception
 
Quote:

Originally Posted by reversed_rocker (Post 953104)
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad, because you only need to be accurate to 180 degrees.

It can be as simple as edge detection.

kamocat 12-04-2010 23:49

Re: Autonomous Perception
 
Well, it's about time we made a testing list.
Here is a list of things that need to be tested before they can be used or discarded.
  Location tracking (general):
  • Using an image of the goal targets. (How accurate is it? How fast is it? Where on the field can you use it?)
  • Drive encoders and gyro. (How much does it drift/slip? Does it get off when you go over the bump?)
  • Non-drive wheel with encoders and gyro. (Same info as previous.)
  • Accelerometer and gyro.
  Bump compensation (low priority):
  • Detect the bump with an accelerometer.
  • Reference against the base of the bump.
  • Reference against the top of the bump.
  • Find latitude w/ SONAR.
  • Find latitude w/ gyro.
  • Encoder/gyro accuracy on the descent from the bump.
  Ball detection:
  • Gyro/IR/camera on a gimbal.
  • Bristles/whiskers?
  Robot detection:
  • Serial color sensor.
  • Light source and colored phototransistors (a.k.a. a home-made color sensor).
  • Camera.

The first few have examples of what metrics would be useful.
Is there interest in testing these? Does quantitative data already exist for any of them?

davidthefat 12-04-2010 23:52

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 953108)
Well, it's about time we made a testing list. [testing list snipped]

http://www.chiefdelphi.com/forums/sh...ad.php?t=85197

Once an object is detected, edge detection can be used to distinguish a wall from a ball from a robot. Then, if it's a robot, the camera can check the bumper color.
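
A sketch of that first step, using OpenCV 4 calls with arbitrary Canny thresholds: the outline's shape hints at what the object is before you ever look at color.

Code:

    import cv2

    def classify_outline(frame_gray):
        """Rough shape-based guess: find edges, take the biggest contour,
        and compare its bounding box to its enclosed area. A circular
        outline fills ~78% of its box; walls and robots look boxier."""
        edges = cv2.Canny(frame_gray, 50, 150)  # thresholds are illustrative
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return "nothing"
        biggest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(biggest)
        fill = cv2.contourArea(biggest) / float(w * h)
        if w > 5 * h:
            return "wall"  # one long, flat edge across the frame
        return "ball" if 0.7 < fill < 0.85 else "robot"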

