View Full Version : Autonomous Perception
With the goal of making a robot fully autonomous (http://www.chiefdelphi.com/forums/showthread.php?t=84797):
What information would be useful to a robot during autonomous?
How can that information be measured or acquired?
Some things a robot might want to know are:
Where am I on the field?
Where are the robots around me? (Which alliance are they on?)
Where are the balls around me? (on the floor, presumably)
Where are the goals? The bumps? The towers? The walls?
Have I flipped over?
What are the other robots doing? Do they need help? (Inter-robot communication)
I've copied this message from http://www.chiefdelphi.com/forums/showthread.php?t=84797&page=3
This is where the GDC may play nice again and bring back something like the two-frequency IR beacons.
But even if they don't, there are simple ways to determine where you are on the field using encoders (assuming the wheels don't slip — a poor assumption last year).
Using kinematic formulas:
S = (delta_left + delta_right) / 2
delta_theta = (delta_left - delta_right) / wheelbase
theta = theta + delta_theta
X = X + S * cos(theta)
Y = Y + S * sin(theta)
This satisfies one item on the list, but only slightly.
As far as knowing where the other robots are: that would need a lot of DSP, an external observer telling the bots where they are, or all the bots communicating with each other.
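The update above can be turned into a small dead-reckoning routine. A minimal sketch in Python (the class and names are my own, not FRC library code):

```python
import math

class Odometry:
    """Dead-reckoning from left/right wheel encoder deltas.
    Distances are in the same unit as the wheelbase; theta in radians."""

    def __init__(self, wheelbase):
        self.wheelbase = wheelbase
        self.x = self.y = self.theta = 0.0

    def update(self, delta_left, delta_right):
        # S = (dL + dR) / 2: distance the robot's center traveled
        s = (delta_left + delta_right) / 2.0
        # dTheta = (dL - dR) / wheelbase: change in heading
        self.theta += (delta_left - delta_right) / self.wheelbase
        self.x += s * math.cos(self.theta)
        self.y += s * math.sin(self.theta)
```

Driving both sides forward 10 units moves x by 10; equal and opposite deltas spin in place without changing position.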
billbo911
08-04-2010, 16:07
Continuing:
Q1) What direction am I heading?
Q2) How far have I traveled?
Q3) How fast am I moving?
Q4) Am I turning?
Q5) How fast am I turning?
Q6) Can I visualize my objective?
To answer some of these questions:
A1) Read from a Gyro or Compass.
A2) Read from an Encoder.
A3) Read rate from an Encoder.
A4) Read from a Gyro or compass and compare to a previous reading.
A5) See answer "A4" and divide by time between measurements.
A6) Use camera and analyze the image.
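A4 and A5 can be sketched in a few lines of Python (the wrap-around handling assumes a compass that reads 0 to 360 degrees; with a gyro that accumulates continuously, you can skip it):

```python
def turn_rate(theta_now, theta_prev, dt):
    """A4/A5: am I turning, and how fast? Compare successive heading
    readings (degrees) and divide by the elapsed time (seconds)."""
    delta = theta_now - theta_prev
    # Wrap into [-180, 180) so a 359 -> 1 degree step reads as +2, not -358
    delta = (delta + 180.0) % 360.0 - 180.0
    return delta / dt
```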
Did I run into something? (limit switches on bumper?)
Am I going to run into something? (sonar? Infrared? Something else?)
Here's a clarified list of things that need to be measured for field-awareness, based on the Autonomous Planning discussion.
What region of the field am I in?
How available are the gamepieces?
What is the current score?
Are there any 'bots blocking me?
Are there any 'bots helping me?
(1) "Region of the field" has been pretty much covered: track your location with the gyro and encoders. However, there's still a high possibility that this would get thrown off when a robot goes over a bump. Is there a way to compensate?
(2) "Where are the game pieces?" could probably be done with a camera. Is there a faster or more efficient way to find soccer balls?
(3) "What's the current score?" The camera has been brought up as a method of reading the scores off of the screen. I think that would be extremely difficult to have working at competition (due to lack of testing opportunities). Would this information have to be supplied by the drivers?
(4-5) "Finding nearby 'bots". This year, the bumpers were the key to identifying a robot and its alliance. I'm still reluctant to use the camera for that, but I wonder about the possibility of a directional color sensor. I2C would probably be ideal, since it won't take up analog channels, and therefore several could be used, one for each side of the 'bot.
Robototes2412
08-04-2010, 22:33
Actually, this would be a good use for an Arduino board communicating with the cRIO over serial.
The Arduino would have a camera hooked up to its analog inputs, running a sketch that looks for either red or blue, depending on what the cRIO passes to it.
davidthefat
08-04-2010, 22:44
I am not trusting any 8-bit microcontrollers on this project; I am going for the robot boards that are pretty much bare-minimum computer boards. 8-bit MCUs do not have enough juice to process all that data for autonomous, IMHO.
Robototes2412
08-04-2010, 22:51
It has the juice, but it's not fast. If you bump the image quality down to QQVGA, it can find the blue/red bumpers in 2 seconds.
davidthefat
08-04-2010, 22:55
It has the juice, but it's not fast. If you bump the image quality down to QQVGA, it can find the blue/red bumpers in 2 seconds.
Then it's useless during the fast-paced matches.
Robototes2412
08-04-2010, 23:05
Oh, yeah.
Isn't there such a thing as a color sensor?
I'd love for there to be an inexpensive I2C color sensor, but all the ones I can find are serial.
http://www.sparkfun.com/commerce/product_info.php?products_id=8924
http://www.sparkfun.com/commerce/product_info.php?products_id=8663
Robototes2412
08-04-2010, 23:57
The cRIO has a serial port, as does an arduino
Wouldn't serial limit us to one color sensor? Or can multiple sensors be on the same serial line?
If we connected an Arduino to a color sensor via serial, what would we use to connect the Arduino to the robot? Analog?
Could we use three phototransistors instead?
Going back to the gamepieces, is the camera really the best way of finding them?
Could a robot have "whiskers" along the sides, so it could tell if it brushed up against a ball?
For now, I'm going to ignore how much IO each sensor (or set of sensors) might require.
davidthefat
10-04-2010, 02:27
Going back to the gamepieces, is the camera really the best way of finding them?
Could a robot have "whiskers" along the sides, so it could tell if it brushed up against a ball?
For now, I'm going to ignore how much IO each sensor (or set of sensors) might require.
I was going with the idea of using technologies that let you see any shape using the light reflected from the object, and also sound waves... i.e., sonar, IR, and a camera.
Well, one of the factors I'm trying to work with is cost. I want to get the most out of what I spend.
If I line my robot with SONAR, that's $250 for only two per side.
If I use IR, it's still $120 for the same thing (though lower range, non-linear, and unknown view angle)
The camera has a view angle of +-15 degrees.
Please show me a diagram that demonstrates that 8 SONAR or 8 IR (plus the camera) are all I need to find a game piece anywhere around the 'bot. Feel free to demonstrate why they're the best solution.
ideasrule
10-04-2010, 11:21
I would suggest implementing autonomous with the minimum possible number of sensors. The camera seems very promising, but it's very hard to use, especially in real time. Does anybody have an idea of how to find a robot using the camera? How about a ball resting against a robot or wall?
davidthefat
10-04-2010, 11:24
Well, one of the factors I'm trying to work with is cost. I want to get the most out of what I spend.
If I line my robot with SONAR, that's $250 for only two per side.
If I use IR, it's still $120 for the same thing (though lower range, non-linear, and unknown view angle)
The camera has a view angle of +-15 degrees.
Please show me a diagram that demonstrates that 8 SONAR or 8 IR (plus the camera) are all I need to find a game piece anywhere around the 'bot. Feel free to demonstrate why they're the best solution.
I think you are overspending here. I was going to put 2 or 3 IR sensors on a servo so it sweeps side to side; the sonar would be the general guideline, and there would only be 1, on a servo that turns 360 degrees. If it detects something, the IRs and camera turn that way to check up on it... Not sure if it's the fastest, but it is a pretty cheap way to do it.
If the sensors are at the same height as the balls, won't the robot be in the way of seeing 360 degrees?
If they're on top of the robot, won't they be in the way of each other? How do you spot a gamepiece if it's right up against your 'bot?
davidthefat
10-04-2010, 12:27
If the sensors are at the same height as the balls, won't the robot be in the way of seeing 360 degrees?
If they're on top of the robot, won't they be in the way of each other? How do you spot a gamepiece if it's right up against your 'bot?
Who says it will be a ball? IDK, I want to make it as generic as possible and add more if I need to; also, the robots will be one of the things I want to track.
Oh, sorry, I misunderstood you.
I thought you were talking specifically about the game piece.
You're saying that these SONAR and IR will be an excellent method of tracking other robots?
davidthefat
10-04-2010, 13:32
Oh, sorry, I misunderstood you.
I thought you were talking specifically about the game piece.
You're saying that these SONAR and IR will be an excellent method of tracking other robots?
I would just use it to get a general sense of direction, and the camera will get the color of the bumper.
A general sense of direction for navigation?
Does that mean you would treat walls, bumps, towers, and gamepieces just as obstacles?
Or would you compare the data to where you are on the field, to determine what is a bump, what is a wall, what is a tower, and what must be something else?
Would these sonar and IR generally be angled down, or would they be straight out?
This sounds like an interesting method, and I'd love to see some working code for it. Would you be willing to do that and post your results?
davidthefat
10-04-2010, 14:41
A general sense of direction for navigation?
Does that mean you would treat walls, bumps, towers, and gamepieces just as obstacles?
Or would you compare the data to where you are on the field, to determine what is a bump, what is a wall, what is a tower, and what must be something else?
Would these sonar and IR generally be angled down, or would they be straight out?
This sounds like an interesting method, and I'd love to see some working code for it. Would you be willing to do that and post your results?
They will be angled down, but not too much. It's just theory right now; I can't start since I'm on spring break and have not yet met with our group to actually get going.
Well, when you're ready, Maxbotix makes some great SONAR.
http://www.maxbotix.com/Performance_Data.html
Range finders are a good idea, but they're kind of limited. Unless you plan to have a whole row of them, you probably need a better way to distinguish between gamepieces, robots, and obstacles.
I imagine a possible "find gamepiece" routine going something like this:
Use the camera to find the nearest blob matching certain parameters (depending on gamepiece). If one exists, compute the angle from the x-position in the image, rotate the bot to that angle using gyro, confirm that the gamepiece is still there, then use a rangefinder on the front of the robot to find the distance and drive up to it. If none are found, rotate 30 degrees or so and try again.
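That routine could be sketched like this (Python; `find_blob`, `rotate_by`, and `range_inches` are hypothetical stand-ins for the camera blob search, a gyro-based turn, and the front rangefinder — not real WPILib calls):

```python
# Hypothetical helpers: find_blob() returns the blob's x pixel column or
# None; rotate_by(deg) turns the robot; range_inches() reads the front
# rangefinder.
CAMERA_FOV_DEG = 30.0   # +/-15 degree view angle, per the thread
IMAGE_WIDTH = 320

def angle_to_blob(blob_x):
    """Bearing (degrees) off image center for a blob at pixel column blob_x."""
    return (blob_x / IMAGE_WIDTH - 0.5) * CAMERA_FOV_DEG

def find_gamepiece(find_blob, rotate_by, range_inches, max_sweeps=12):
    """Sweep until a matching blob is seen; return (bearing, range),
    or None after a full revolution with no sighting."""
    for _ in range(max_sweeps):
        blob_x = find_blob()
        if blob_x is not None:
            bearing = angle_to_blob(blob_x)
            rotate_by(bearing)            # turn toward the blob (gyro)
            if find_blob() is not None:   # confirm it's still there
                return bearing, range_inches()
        else:
            rotate_by(30.0)               # nothing seen: step 30 degrees
    return None
```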
That sounds like a good routine for finding gamepieces.
However, I feel that the gimbals still limit the 'bot to dealing with one gamepiece at a time.
Radical Pi
10-04-2010, 18:01
That sounds like a good routine for finding gamepieces.
However, I feel that the gimbals still limit the 'bot to dealing with one gamepiece at a time.
And this is bad? If the bot is trying to deal with multiple balls at the same time, by the time it takes care of one ball, there's a good chance the other ball it noticed isn't there anymore, and it will have to seek it out again anyway.
It depends whether your goal is to acquire the gamepiece, or to keep another 'bot away from them.
However, I would still argue that if you can keep track of multiple gamepieces around you at the same time, then you immediately know where to go when one is taken, and you don't waste time looking for another. I think gamepiece detection should be passive most of the time.
Also, by not using the camera for finding gamepieces, then the robot could focus on the target to fire, but still be aware of what's happening around it.
EDIT: I replaced all instances of "ball" with "gamepiece". Isn't there PHP for this?
davidthefat
10-04-2010, 19:07
It depends whether your goal is to acquire the ball, or to keep another 'bot away from them.
However, I would still argue that if you can keep track of multiple balls around you at the same time, then you immediately know where to go when one is taken, and you don't waste time looking for another. I think ball detection should be passive most of the time.
Also, by not using the camera for finding balls, then the robot could focus on the target to fire, but still be aware of what's happening around it.
I am kind of disturbed by you saying "ball"; the game will be different next year, and you have to be as open-minded and generic as you can, so don't say ball...
Radical Pi
10-04-2010, 20:15
Well, if you don't want us designing with this year in mind, then what do you want us to do? All decision-making is based on the game, so if we don't know next year's game, we should go with this year's game, learn what we can from it, and hope some of it is reusable. We can't even be sure drive code will still work next year if the GDC throws a 2009-style curveball.
I am kind of disturbed by you saying "ball"; the game will be different next year, and you have to be as open-minded and generic as you can, so don't say ball...
I understand what you're saying.
While the games in the past are our best estimates of what the GDC will do in the future, it's true that we should plan this in a way that it will work with any game. (Perhaps the planning algorithms will differ, but the perception and control should be very similar)
Here's some basic commonalities between games:
robot size and weight
six robots on the field, two alliances
27' by 54' field
Robots must display their number and alliance in some way easily distinguishable by humans
Multiple gamepieces (all similar)
Multiple locations to score
davidthefat
10-04-2010, 21:09
I understand what you're saying.
While the games in the past are our best estimates of what the GDC will do in the future, it's true that we should plan this in a way that it will work with any game. (Perhaps the planning algorithms will differ, but the perception and control should be very similar)
Here's some basic commonalities between games:
robot size and weight
six robots on the field, two alliances
27' by 54' field
Robots must display their number and alliance in some way easily distinguishable by humans
Multiple gamepieces (all similar)
Multiple goals
OK, so the field size has been constant for all these years. Theoretically, you can make a "map" of the field and draw the robot relative to the field position using landmarks like the walls (and, this year, the goals, targets, and bumps). Like in RTS games, there is a constant fog of war, since the robot has a limited sight range with its sensors; the robot can identify an object, send back theoretical coordinates for it, and have them drawn on the screen of the laptop...
This is starting to sound more like a videogame than an autonomous 'bot.
Couldn't you just track your location with the encoders and gyro?
The only problem I see with using the encoders for navigation is that they're almost guaranteed to get "off" when they go over the bump this past year. Is that what you're saying this "fog of war" location detection should be used for?
Ian Curtis
10-04-2010, 21:42
OK, so the field size has been constant for all these years. Theoretically, you can make a "map" of the field and draw the robot relative to the field position using landmarks like the walls (and, this year, the goals, targets, and bumps). Like in RTS games, there is a constant fog of war, since the robot has a limited sight range with its sensors; the robot can identify an object, send back theoretical coordinates for it, and have them drawn on the screen of the laptop...
I think you'd like StangPS... (http://www.wildstang.org/main/stangps.php)
There's no need for a "fog of war": you can get plenty of information from an accelerometer, gyro, and maybe even a compass!
An issue I see with these "mapping" approaches is that they can easily be thrown off by drift and unforeseen situations. What if the wheels slide or lose contact with the ground (e.g. bump, pushing matches)?
davidthefat
10-04-2010, 21:54
This is starting to sound more like a videogame than an autonomous 'bot.
Couldn't you just track your location with the encoders and gyro?
The only problem I see with using the encoders for navigation is that they're almost guaranteed to get "off" when they go over the bump this past year. Is that what you're saying this "fog of war" location detection should be used for?
I think you'd like StangPS... (http://www.wildstang.org/main/stangps.php)
There's no need for a "fog of war": you can get plenty of information from an accelerometer, gyro, and maybe even a compass!
Yeah, but you can't track all the other robots/objects on the field with a gyro, etc., though I guess that would make it a lot easier than tracking with the walls... BTW, before going into robotics, I played around with game programming a lot... I didn't get anything significant done.
Because I haven't had much success with accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.
davidthefat
10-04-2010, 22:18
Because I haven't had much success with accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.
I wish there were some super-accurate, small GPS system... THAT would simplify everything :ahh: I guess we have to go with the method I posted... Or there could be a hybrid: by default it tracks using the gyro and all that, but it resets the coordinates using the IR method every 10 seconds or something.
ideasrule
10-04-2010, 23:55
Another way to determine your location on the field, at least this year, is to look at the goals. Once you know what angle the goals are at, it's very easy to use triangulation to determine the robot's position.
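The triangulation itself is a small intersection-of-bearings computation. A Python sketch (the absolute bearings would come from the camera angle plus the gyro heading; that fusion, and the function names, are assumptions):

```python
import math

def triangulate(p1, a1, p2, a2):
    """Robot position from two landmarks at known field coordinates
    p1, p2 and the absolute bearings a1, a2 (radians, field frame)
    from the robot to each landmark. Returns (x, y), or None when
    the bearings are parallel and the sight lines never cross."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    # The robot lies on both sight lines:  p1 - t1*d1 == p2 - t2*d2.
    # Solve the 2x2 system  t1*d1 - t2*d2 = p1 - p2  for t1 (Cramer).
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    if abs(det) < 1e-9:
        return None
    bx, by = p1[0] - p2[0], p1[1] - p2[1]
    t1 = (bx * (-d2[1]) + d2[0] * by) / det
    return (p1[0] - t1 * d1[0], p1[1] - t1 * d1[1])
```

As noted later in the thread, accuracy falls off as the bearings get close to parallel, which happens when both goals are far away.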
Because I haven't had much success with accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.
Vertically mounted accelerometer?
That's true, you could use the Z accelerometer to tell when you're not on the bump. (You could also use it to tell when you land. Hard.)
I think simply "resetting" your position off another technique defeats the purpose of the technique in the first place.
What about zeroing up *against* the bump after you've gone over? Or against a wall? That should tell you your angle, and it would tell you your location in at least one plane.
Has anyone tried using the line down the middle of the field? I know in FLL, it's very common to have a line-follower. I don't think it'd be hard to have a light and a phototransistor down near the ground so you can tell when you pass by the center line.
I think triangulation off the goals would work pretty well, except that it's an inverse sine function, and so your accuracy decreases drastically as you get further away. I think you may have to look all the way across the field to see two goals at once, though. Perhaps it would require taking a full-res image (and recording the timestamp), processing it a bit later, and then readjusting the last few seconds to coincide with your new data.
The question that goes along with this is, will the robot very often look at goals on the other side of the field?
It's certainly something you can do in disabled mode, if you're already looking that way.
ideasrule
11-04-2010, 13:19
Isn't this year's accelerometer three-axis? That would easily tell you when you're going over the bump. It might also be possible to do accurate inertial navigation with the accelerometer: the position estimate could reset every time it touches a bump and every time the camera sees two goals, and the velocity estimate could reset every time the encoders record a speed of 0.
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse?
What's the communication standard before it's converted to USB? (With a ball mouse, you could actually just rewire it and connect it to the digital sidecar like any other encoder. Perhaps it would need a little mechanical adjustment to have good contact with the floor.)
davidthefat
11-04-2010, 18:16
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse?
What's the communication standard before it's converted to USB? (With a ball mouse, you could actually just rewire it and connect it to the digital sidecar like any other encoder. Perhaps it would need a little mechanical adjustment to have good contact with the floor.)
It's carpet, so IDK if the optical one will work, but the ball definitely will. Another way is using encoders on 2 non-motorized wheels, one facing east and one facing south...
Radical Pi
11-04-2010, 19:33
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse?
What's the communication standard before it's converted to USB? (With a ball mouse, you could actually just rewire it and connect it to the digital sidecar like any other encoder. Perhaps it would need a little mechanical adjustment to have good contact with the floor.)
Probably would want to make a larger-scale version. That ball is tiny compared to the field it's on.
The ball could also solve the crossing-the-bump problem. If we set 4 encoders on the ball at 90-degree angles from each other, then if 2 opposite each other go the same direction for a period of time (hence the larger-scale version, to expand the window of time), the bot would know it's not on the ground anymore. Even if it happens when not crossing the bump, we know that something has just caused the robot to leave the ground, and tracking would need to reset anyway.
Should it also have a horizontal roller to detect changes in angle? With all of this combined we can get most of our position detection done with just one sensor (plus a gyro or compass for sanity checks?)
davidthefat
11-04-2010, 19:48
Probably would want to make a larger-scale version. That ball is tiny compared to the field it's on.
The ball could also solve the crossing-the-bump problem. If we set 4 encoders on the ball at 90-degree angles from each other, then if 2 opposite each other go the same direction for a period of time (hence the larger-scale version, to expand the window of time), the bot would know it's not on the ground anymore. Even if it happens when not crossing the bump, we know that something has just caused the robot to leave the ground, and tracking would need to reset anyway.
Should it also have a horizontal roller to detect changes in angle? With all of this combined we can get most of our position detection done with just one sensor (plus a gyro or compass for sanity checks?)
Only 2 encoders are needed
Alan Anderson
11-04-2010, 20:12
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse?
A ball mouse doesn't get good "traction" on the carpet. It skips. Perhaps a largish rubber ball could be used as an intermediary between the floor and the mouse, to keep things moving well.
A typical optical mouse can't keep up with the speed of the robot. A couple of us tried doing a "telephoto mouse" system a number of years ago, but it turns out that any variation in height above the surface changes the scale of the image enough to mess with the sensed travel distance.
We've been there:
http://www.chiefdelphi.com/media/photos/30154
http://www.chiefdelphi.com/media/photos/30740
It's a billiard ball, about 55 mm in diameter if I remember correctly. It took us a while to get it to work reliably, and even then required some petting (cleaning carpet lint after every match, watching the disks, etc.). It was a very good experience for the team, but I think two omni-wheels perpendicular to each other would work better.
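The two-perpendicular-omni-wheel idea reduces to rotating the body-frame encoder deltas into field coordinates with the gyro heading. A sketch (Python; assumes one follower wheel reads forward travel and the other sideways travel):

```python
import math

def track_position(x, y, d_forward, d_strafe, heading_rad):
    """One odometry step for two non-driven follower wheels mounted
    perpendicular to each other: rotate the body-frame wheel deltas
    into field coordinates using the gyro heading."""
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    x += d_forward * cos_h - d_strafe * sin_h
    y += d_forward * sin_h + d_strafe * cos_h
    return x, y
```

Because the follower wheels aren't driven, they don't spin under acceleration the way drive wheels do, which is the appeal over drivetrain encoders.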
davidthefat
11-04-2010, 20:50
We've been there:
http://www.chiefdelphi.com/media/photos/30154
http://www.chiefdelphi.com/media/photos/30740
It's a billiard ball, about 55 mm in diameter if I remember correctly. It took us a while to get it to work reliably, and even then required some petting (cleaning carpet lint after every match, watching the disks, etc.). It was a very good experience for the team, but I think two omni-wheels perpendicular to each other would work better.
Is the middle wheel required? But I also think the wheels are the best choice.
You mean the spring loaded arm that contacts the northern hemisphere of the ball? If so, yes, it kept the ball from popping out of the socket.
Radical Pi
11-04-2010, 23:04
Only 2 encoders are needed
Why? The rise-fall detection would never work with only 2 unless other sensors are added, and even then you can't detect rotation.
davidthefat
11-04-2010, 23:09
Why? The rise-fall detection would never work with only 2 unless other sensors are added, and even then you can't detect rotation.
The rotation can be checked with the gyro and then only 2 encoders are needed
davidthefat
11-04-2010, 23:18
Great article on 3d sensing http://www.cs.stanford.edu/people/ang/papers/icra09-3dSensingMobileManipulation.pdf
I think I just found the solution for telling where you are when you go over the bump:
Robots usually only slip on the way *up* the bump. On the way down, their back end may get some air, but all the wheels are still moving at the same rate.
A robot can use the Z accelerometer to tell when it's at the top of the bump, and use IR (http://www.sparkfun.com/commerce/product_info.php?products_id=8958) (because it has a very narrow beam) to tell where you are horizontally. (It's assumed you know *which* bump you're on, but if you like, you could use a colored phototransistor to tell the color of the bump.)
Alternately, SONAR could be used once the robot has gotten down off the bump.
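Since the z-axis reading sags below 1 g while the robot is tilted on the bump face, a crude detector might look like this (the threshold and the short averaging window are guesses, not values tested on a real robot):

```python
G = 9.81  # 1 g in m/s^2

def on_bump(az_samples, threshold=0.15 * G):
    """Crude bump detector: while the robot tilts up or down the bump
    face, the averaged z-axis accelerometer reading deviates from 1 g
    by roughly g*(1 - cos(tilt)). Averaging a few samples filters out
    vibration spikes from normal driving."""
    avg = sum(az_samples) / len(az_samples)
    return abs(avg - G) > threshold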
Radical Pi
12-04-2010, 00:24
I don't really trust using encoders until you can zero into a known position after crossing a bump. Even if the wheels are coming down with the robot, it looks like most robots do slip a fair amount from momentum. If you can get a VERY accurate sonar that would be okay for re-detection after clearing the bump, but I think the risk of another robot interfering is a bit too high with that.
Also, what would happen if you were to land on another robot after crossing the bump? It's certainly possible with the tunnel bots that are about the same height as the bumps. Is there any way of detecting and preventing this?
I know the Maxbotix SONAR is accurate to an inch, and it would take +-9mV of jitter to throw it off (if you round the value after you receive it).
This sounds like something that would be easy to test with current robot configurations. I'll make a list of things I need to test.
Perhaps the surest option is to re-square yourself on the bump after you go over.
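For reference, the LV-MaxSonar line scales its analog output at Vcc/512 volts per inch (about 9.8 mV per inch on a 5 V supply), which is where the roughly-9 mV jitter figure comes from. Converting a reading:

```python
def sonar_inches(v_out, vcc=5.0):
    """Convert an LV-MaxSonar analog voltage to a range in inches.
    The datasheet scaling is Vcc/512 volts per inch (~9.8 mV/in at
    5 V); rounding to the nearest inch absorbs small ADC jitter."""
    return round(v_out / (vcc / 512.0))
```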
Late-night thought:
The XL-MaxSonar (http://maxbotix.com/uploads/Small_Industrial_Ultrasonic_Sensors_Pack_a_Big_Punch.pdf) gives you the "real-time envelope" in analog, so you can do your own processing. If you had one with a very narrow beam, you could record the acoustic signature (amplitude over time) of the gamepiece and use that to determine if you're pointed at a gamepiece.
The problem with this is, if the gamepiece is a sphere, this may make the robot think that every sphere is a gamepiece.
davidthefat
12-04-2010, 11:05
http://www.societyofrobots.com/robottheory.shtml
Very Very good articles to read about
ideasrule
12-04-2010, 14:01
I don't really trust using encoders until you can zero into a known position after crossing a bump. Even if the wheels are coming down with the robot, it looks like most robots do slip a fair amount from momentum. If you can get a VERY accurate sonar that would be okay for re-detection after clearing the bump, but I think the risk of another robot interfering is a bit too high with that.
Also, what would happen if you were to land on another robot after crossing the bump? It's certainly possible with the tunnel bots that are about the same height as the bumps. Is there any way of detecting and preventing this?
Once you cross the bump, you know your exact y-position. Assuming the robot is perpendicular to the bump when it goes across (and I have yet to see a robot capable of going across at an angle), the x-position shouldn't change no matter how much the wheels slip.
During practice, I've seen many robots come off the bump at an angle. Yes, you have to go up it head-on, but on the way down it doesn't seem to matter so much (at least with mecanum wheels).
This usually only happens with the trailing wheels, however, and not until they've passed over the top of the bump.
(the angle was typically about 15 degrees – enough that only three wheels contact the ground.)
Tom Line
12-04-2010, 19:28
Well, when you're ready, Maxbotix makes some great SONAR.
http://www.maxbotix.com/Performance_Data.html
FYI, while a STATIC Maxbotix sensor will give you a nice shot off a wall or other flat surface, you're going to be in a world of hurt if you try to use one in motion or sweep it around.
The sensors have some lag, and any large change in distance creates a large 'overshoot' in the sensor. You can filter that out, however you reduce the reaction time of the sensor.
Here's a little food for thought. If your robot is traveling at 10 feet a second, how much time do you have to 'see' an object? What is the processing time of your cpu, AFTER the sensor has processed it? What is the overshoot of the particular sensor involved?
The end result is that for a Maxbotix sensor, moving at 10 feet per second, you better be hitting the brakes when it says an object is at 6 feet. Because of the lag, that object is actually at 2 feet.
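The arithmetic behind that warning is worth writing down (the ~0.4 s total lag is inferred from the post's 6 ft vs. 2 ft numbers, not taken from a datasheet):

```python
def actual_distance(reported_ft, speed_fps, lag_s):
    """Distance remaining when a lagging rangefinder reports an object:
    the robot has already closed speed * lag feet since the echo that
    produced the reading was actually measured."""
    return reported_ft - speed_fps * lag_s

# The post's example: at 10 ft/s with ~0.4 s of sensor + filter lag,
# a reported 6 ft is really about 2 ft.
```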
Ultrasonic sensors also interfere with each other. You'll have to daisy-chain them to ping if you're using multiples. If you're using one, you'll have to hold it steady for it to get a reading, then move it. How long can you hold it steady? How long does it take to move? How long does it take to stabilize before taking another reading? Thus, how long will it take you to actually make a circuit of whatever angle you want and then start over again? Ultrasonic sensors are also flaky. If they see something at the 'edge' of their zone, they may read it, then drop out, then read it again. The reading you get returned when the object is 20 feet away may be 10 feet.
IR sensors get flaky when the reflectivity of the object you're shooting changes. Flat plates vs. angled surfaces vs. shiny vs. matte.
I would STRONGLY suggest getting a working knowledge of sensors before you start making decisions in how you think you might want to use them. You may spend a great deal of money only to discover that what you thought would work will not.
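For the edge-of-zone dropouts in particular, a short median filter rejects single-sample spikes at the cost of a little extra latency. A sketch of the idea; the window size is a guess to tune, not a Maxbotix recommendation:

```python
from collections import deque
from statistics import median

class SonarMedianFilter:
    """Reject single-sample dropouts/spikes from an ultrasonic reading.
    Adds roughly (window // 2) samples of latency, so keep the window
    small; with the lag issues above, latency is the enemy."""
    def __init__(self, window: int = 3):
        self.samples = deque(maxlen=window)

    def update(self, reading_ft: float) -> float:
        self.samples.append(reading_ft)
        return median(self.samples)
```

A lone 100-foot spike in a stream of 10-foot readings never makes it through a 3-sample window, but a real step change is delayed by one sample.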
davidthefat
12-04-2010, 19:37
With stereo vision, you can make the traditional red/blue glasses 3d effect. One side is red only and the other one is blue, put the images together, Bam, you have 3d...
davidthefat
12-04-2010, 19:48
You can take the main idea of the IR range finder and have an array of IR diodes transmit IR beams at the target; the camera picks up the reflections, and trigonometry can be used to find the distance.
With stereo vision, you can make the traditional red/blue glasses 3d effect. One side is red only and the other one is blue, put the images together, Bam, you have 3d...
Are you referring to robots or humans here?
If you want 3D perception from two images on the robot, it's called a disparity map, and that WILL load down your processor.
The IR camera thing sounds fun, but you might have to investigate the IR output of the stage-lights they use for competition.
davidthefat
12-04-2010, 22:04
http://www.seattlerobotics.org/encoder/200110/vision.htm
This is a very interesting read.
I was going to say that your estimation is overshot, but then I figured I'd calculate it out.
The SONAR has a sample rate of 20 Hz, with the analog signal updated between 38 and 42 ms into the cycle. If you sample just before it updates the analog output, there can be up to 92 ms of lag after the signal is measured.
At 10 ft/s, that puts you at about 10 inches.
Now the processing time:
If it takes you 500ms to process, then you're in serious trouble.
I think 100ms is reasonable and achievable, so we'll go with that. You've added 12 inches to your distance.
How far does it take to slow down?
If you have a coefficient of friction of 1, then you can decelerate at 1 g, or 32 ft/s².
10 ft/s ÷ 32 ft/s² ≈ 300 ms. 300 ms × 10 ft/s × ½ = 1.5 feet, or 18 inches.
Combined, it's taken 40 inches to stop; a bit over 3 feet.
Let's hope the other robot isn't travelling your way.
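That arithmetic can be wrapped into one function so different speeds and lags are easy to compare. The 92 ms sensor lag and 100 ms processing budget are the figures from above, and the friction coefficient of 1 is the same assumption; exact math gives just under 42 inches versus the ~40 above because the per-term rounding is dropped:

```python
G_FT_S2 = 32.0  # ft/s^2, rounded as above

def stop_distance_in(speed_fps: float,
                     sensor_lag_s: float = 0.092,
                     cpu_lag_s: float = 0.100,
                     mu: float = 1.0) -> float:
    """Total distance travelled (inches) between where the obstacle was
    when the sensor measured it and where the robot finally stops:
    reaction distance (lag at full speed) plus braking distance."""
    reaction_ft = speed_fps * (sensor_lag_s + cpu_lag_s)
    braking_ft = speed_fps ** 2 / (2 * mu * G_FT_S2)
    return (reaction_ft + braking_ft) * 12.0
```

Halving the speed helps more than halving the lag, since braking distance scales with speed squared.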
davidthefat
12-04-2010, 22:11
http://www.youtube.com/watch?v=SPywgDBjM1Y&feature=related
Really useful, too.
reversed_rocker
12-04-2010, 22:13
here's my message from a different thread for kamocat's request
Well, let's just say that we are going to field a fully autonomous robot for this year's game. Many games have the same theme involving a ball that needs to be picked up, thrown, tossed, etc., so some of the same ideas are likely to apply.
First thing, find a ball.
You could have 4 sonic rangers across the front of the robot, spaced just under the diameter of a ball apart. What you could do is have the robot spin until there is an object that gives approximately the same distance for two, and only two, of the sensors; this would be a ball.
Getting the ball
For this robot it would be difficult to guide the robot so that the ball hits a particular point to be picked up by a vacuum or small ball roller, so I would suggest a double ball roller that runs as far across the robot as possible (I'm thinking a robot very similar to 1918 or 1986). When the robot finds something that it thinks is a ball, it stops spinning and drives forward. On both sides of the robot you could have 2 phototransistors lined up parallel to the ball roller, about 1.5-2 in inside the frame. This way the robot can tell when it has a ball and approximately where on the robot the ball has stuck (we use the same sensor to detect when a ball is in our vacuum; easy to use and very reliable).
Shooting the ball
Since the phototransistors aren't that accurate, you would have the code split the ball roller into 3 sections: left side of the robot, middle of the robot, right side of the robot. The robot would then spin until the camera sees the goal. The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at. Once the robot sees the target, you can line up your shot using the camera again and fire. Then you start over with the ball collection phase of the code.
special considerations:
This would take some playing around with; you would probably have to throw in some timing aspects so that the robot doesn't get stuck on one part of the code. Things like "if you saw a ball 10 seconds ago and you haven't picked it up, go back to finding balls" or "if you don't have a ball anymore, go back to finding balls" or "if it takes you more than 5 seconds to find the goal, drive forward and try again". The sonic rangers could also be used for basic driving maneuvers: if more than 2 of the sonic rangers see an object less than 3 feet away, turn around.
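Those timeout rules are really a small state machine. A bare-bones sketch of the structure, with the state names and timeout values illustrative only (the 10 s and 5 s figures are the rules of thumb above):

```python
import time

FIND_BALL, COLLECT, AIM = "find_ball", "collect", "aim"
TIMEOUTS = {COLLECT: 10.0, AIM: 5.0}  # seconds, per the rules of thumb above

class AutoStateMachine:
    """Cycle find-ball -> collect -> aim, falling back to find-ball
    whenever a phase times out or the ball is lost."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.state = FIND_BALL
        self.entered = clock()

    def _go(self, state):
        self.state = state
        self.entered = self.clock()

    def step(self, sees_ball, has_ball, sees_goal):
        timed_out = (self.clock() - self.entered) > TIMEOUTS.get(self.state, 1e9)
        if self.state == FIND_BALL and sees_ball:
            self._go(COLLECT)
        elif self.state == COLLECT and (has_ball or timed_out):
            self._go(AIM if has_ball else FIND_BALL)
        elif self.state == AIM and (not has_ball or timed_out):
            self._go(FIND_BALL)
        return self.state
```

The injectable clock is just there so the timeout logic can be bench-tested without waiting out real seconds.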
Radical Pi
12-04-2010, 23:01
The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at.
I'm still going to lobby for my 2nd image-analysis method of goal detection. Instead of depending on a gyro and keeping everything in memory, if you do color detection below the goal you should be able to determine alliance easily, and also figure out whether the goal is blocked at the same time.
reversed_rocker
12-04-2010, 23:44
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad because you only need to be accurate to 180 degrees.
davidthefat
12-04-2010, 23:46
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad because you only need to be accurate to 180 degrees.
It can be as simple as edge detection.
Well, it's about time we made a testing list.
Here is a list of things that need to be tested before they can be used or discarded.
Location tracking (general):
Using an image of the goal targets. (How accurate is it? How fast is it? Where on the field can you use it?)
Drive encoders and gyro (How much does it drift/slip? Does it get off when you go over the bump?)
Non-drive wheel with encoders and gyro (same info as previous)
Accelerometer and gyro
Bump compensation: (low priority)
Detect bump with accelerometer
Reference against base of bump
Reference against top of bump
Find latitude w/ SONAR
Find latitude w/ gyro
Encoder/gyro accuracy in the descent from bump
Ball Detection
Gyro/IR/camera on gimbal
Bristles / whiskers?
Robot detection
Serial color sensor
Light source and colored phototransistors (aka home-made color sensor)
Camera
The first few have examples of what metrics would be useful.
Is there interest in testing these? Are there already-existing quantitative data for any of these?
davidthefat
12-04-2010, 23:52
http://www.chiefdelphi.com/forums/showthread.php?t=85197
If an object is identified, edge detection can be used to distinguish among a wall, a ball, and a robot. Then, if it's a robot, the camera can get the bumper color.
Tom Line
13-04-2010, 01:08
No doubt! The one thing you didn't add in is the pure lag in the Maxbotix sensor (there's more than just processing lag). For whatever reason, not only does it take some time for it to see something moving quickly toward you, it ends up overshooting when you stop moving.
We had to work hard to use these until we figured out a few tricks for orienting them. Mainly, in 2008, we had the best results when we bounced the sensor off the inside wall. Our robot would constantly adjust to be within 1.5 feet of that inside wall. When the reading jumped to more than 3 feet, we hung a sharp left because we knew we had passed it.
The trick there is that the inside wall doesn't move in relation to the robot very much or very quickly assuming you're close to driving parallel to it.
In addition, we had a sensor on the front of the robot and were sweeping it with a servo. However, we found that (as you calculated) you simply cannot get enough time to sweep it. You're going to smash into something before you get a full check of what's in front of you (we were sweeping 135 degrees). So we went to a static sensor and tuned when to hit the brakes, based on the idea that in Overdrive it's unlikely anyone will be driving toward you; they will probably simply be stopped. So we adjusted it by putting cardboard boxes in the line of travel.
There are a couple of very neat videos of our robot stopping to let another one go by, then continuing to drive :D. It LOOKS awesome, like our robot is thinking about it, but it really was just luck that it worked so well.
Tom Line
13-04-2010, 01:13
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad because you only need to be accurate to 180 degrees.
The problem with a gyro is that it can easily be knocked 180 degrees out by a hard collision. So what you'll need to do is use a combination of the gyro and the encoders on your drivetrain, and you'll need to correlate the two to wipe out any potential errors.
For instance, if your gyro was showing 170 degrees before a hit, and afterwards your encoders show 210 but your gyro shows 0, you can guess which is more likely.
This is, of course, assuming a very stable very non-slipping drivetrain. Once you add that in you'll need to go elsewhere other than drivetrain encoders.
This will still end up varying over the period of a match, so you may want to go a step further and find one other way to re-zero your system, perhaps correlating it to the camera and a sensor you can check your distance to the wall with.
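The encoder-derived heading falls straight out of the differential-drive kinematics quoted earlier in the thread (Δθ = (ΔL − ΔR) / wheelbase). A sketch of the cross-check Tom describes; the 30-degree disagreement threshold is purely a guess to tune, and the fallback choice (trusting encoders over a possibly knocked gyro) is one policy, not the only one:

```python
def encoder_heading_step(d_left_ft: float, d_right_ft: float,
                         wheelbase_ft: float) -> float:
    """Heading change in radians from the two drive encoder deltas,
    per the kinematic formulas earlier in the thread."""
    return (d_left_ft - d_right_ft) / wheelbase_ft

def reconcile(gyro_deg: float, encoder_deg: float,
              max_disagree_deg: float = 30.0) -> float:
    """If the gyro and the encoder-integrated heading disagree badly
    (e.g. after a hard collision spun the gyro past its rate limit),
    fall back on the encoder estimate; otherwise trust the gyro."""
    # wrap the difference into (-180, 180] before comparing
    diff = (gyro_deg - encoder_deg + 180.0) % 360.0 - 180.0
    return encoder_deg if abs(diff) > max_disagree_deg else gyro_deg
```

As noted above, this only holds for a non-slipping drivetrain; with wheel slip the encoder heading drifts too, and you need an external re-zero.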
AdamHeard
13-04-2010, 01:23
In RoboCup, each team is given access to the feed from an overhead camera. Each robot has a distinct color pattern on top of it, and teams can use this to determine position and other information about all the robots on the field. Some really cool things could be done if FIRST implemented such a system.
Ian Curtis
13-04-2010, 01:30
I don't think there's a need to actively search for balls. They always end up against the walls. Always. And it seems that teams never realize this, and always have trouble picking balls when they are resting against the edge of the field. They ended up against the walls in 2004, 2006, 2008, 2009, and now in 2010. When will we learn? :p
We used the Sharp IR sensors with great success to detect where balls were in our mechanism last year. We put the Maxbotix Sonar sensors on servos to try and find other robots to knock in autonomous in 2006. It even worked once! (http://www.thebluealliance.net/tbatv/match/2006gal_qm92)
The problem with a gyro is that it can easily be knocked 180 degrees out by a hard collision. So what you'll need to do is use a combination of the gyro and the encoders on your drivetrain, and you'll need to correlate the two to wipe out any potential errors.
For instance, if your gyro is showing you at 170 degrees and you get hit, and your encoders are showing 210 afterwards but your gyro is showing 0, you can guess which is more likely.
This is, of course, assuming a very stable very non-slipping drivetrain. Once you add that in you'll need to go elsewhere other than drivetrain encoders.
This will still end up varying over the period of a match, so you may want to go a step further and find one other way to re-zero your system, perhaps correlating it to the camera and a sensor you can check your distance to the wall with.
For zeroing the angle, how about having two ultrasonics bouncing off the walls on either side? You can calculate the distance between the walls by adding the two readings and the width of the bot, and you know that the field is 27' wide, so you can do some sort of trig calculation to find an angle.
This is of course assuming (1) you're bouncing off the right walls, (2) there aren't any obstacles on either side, and (3) the angle is small enough that you get a proper/accurate reading.
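Under those three assumptions the trig is a single arccos: the cross-field span (left reading + right reading + robot width) equals the 27' field width divided by the cosine of the heading error, so it shrinks toward 27' as the robot squares up. A sketch with idealized sensor geometry; real sonar returns off an angled wall will be far noisier, per the lag discussion above:

```python
import math

FIELD_WIDTH_FT = 27.0  # field width given above

def heading_from_walls(left_ft: float, right_ft: float,
                       robot_width_ft: float):
    """Estimate the magnitude of the robot's angle off the field's long
    axis from two side-facing ultrasonic readings. Returns radians
    (0 means square to the walls), or None for an impossible reading."""
    span = left_ft + right_ft + robot_width_ft
    if span < FIELD_WIDTH_FT:   # readings too short: bad echo or obstacle
        return None
    return math.acos(FIELD_WIDTH_FT / span)
```

Note it only gives the magnitude of the angle, not its sign; you'd need the gyro (or which reading grew) to know which way you're skewed.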
The problem with a gyro is that it can easily be knocked 180 degrees out by a hard collision. So what you'll need to do is use a combination of the gyro and the encoders on your drivetrain, and you'll need to correlate the two to wipe out any potential errors.
For instance, if your gyro is showing you at 170 degrees and you get hit, and your encoders are showing 210 afterwards but your gyro is showing 0, you can guess which is more likely.
A magnetic compass might be an option, e.g. NXT CMPS-Nx, either to use directly or to make gyro corrections.
reversed_rocker
13-04-2010, 13:26
Compasses are cool, assuming that they work with all the electronic noise that gets stirred up at the comps. I haven't used one myself, though, so I guess that makes my opinion void.
Anyway, if this is going to be done by any team, it's almost a given that there's going to need to be more than 1 experienced programmer on each participating team. I'm sure there are a couple of people out there who think they could pound this out with 4 Mountain Dews, an extra-large bag of munchies, and a good week's work, but the sheer amount of testing required to get this to work will be impossible without a fairly large and experienced group of programmers.
So my suggestion is to use this whole coopertition hogwash that FIRST teams keep talking about. I don't like the idea of any robot being the same as another, but we could share some basic mechanisms with each other (maybe some robots have a common drivetrain, ball control, or kicker, but hopefully not all 3 in common). Anyone who writes code for a mechanism and shares it online would be privy to mechanical specs and code for mechanisms from other participating teams. Maybe we should make a website completely devoted to teams working on all-autonomous robots.
Tom Line
13-04-2010, 14:59
A magnetic compass might be an option, e.g. NXT CMPS-Nx, either to use directly or to make gyro corrections.
Teams have attempted to use compasses but have had difficulties due to the large amounts of metal we use in our robots, the strong electrical fields our motors and wiring put out, and the large metal field and building that usually surround them.
That is a non-trivial pursuit. I believe Al from 111 has experience with them and has commented on how difficult they were to apply.
Tom Line
13-04-2010, 15:00
For zeroing the angle, how about having two ultrasonics bouncing off the walls on either side? You can calculate the distance between the walls by adding the two readings and the width of the bot, and you know that the field is 27' wide, so you can do some sort of trig calculation to find an angle.
This is of course assuming (1) you're bouncing off the right walls, (2) there aren't any obstacles on either side, and (3) the angle is small enough that you get a proper/accurate reading.
Unfortunately, most ultrasonics become inaccurate (+/- a foot) when you start moving them around due to processing lag and non-perpendicular surfaces. That would be a difficult proposition unless you picked specific times that the robot would do some special action to try to rezero itself.
I think you might be able to use an analog trigger to tell you when your gyro is maxed out, and then you know you have to reorient yourself against a wall or a bump.
Another option is having a high-rate gyro that is only used during impacts, for approximate positioning.
http://www.sparkfun.com/commerce/product_info.php?products_id=9425
(I think 1500º/s is enough, but it does require a 3.3v regulator.)
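Combining the analog-trigger idea with the high-rate part: read both gyros every loop, and take the high-rate reading only while the low-rate one is pinned near its limit. A sketch; the rate limits and margin below are placeholders, not datasheet values, so check your actual gyros:

```python
LOW_RATE_LIMIT_DPS = 80.0   # placeholder: low-rate gyro full scale, deg/s
SATURATION_MARGIN = 0.95    # treat >95% of full scale as pinned

def blended_rate(low_dps: float, high_dps: float) -> float:
    """Prefer the quieter low-rate gyro; switch to the 1500 deg/s part
    only when the low-rate reading saturates (e.g. during a collision),
    accepting its coarser resolution for those brief moments."""
    if abs(low_dps) >= SATURATION_MARGIN * LOW_RATE_LIMIT_DPS:
        return high_dps
    return low_dps
```

Integrating the blended rate gives a heading that survives hits better than either gyro alone, though it still drifts and still needs an occasional external re-zero.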
AFTERTHOUGHT:
Getting a robot to track its location most of the time is pretty easy, but getting it to track its location correctly ALL of the time is quite difficult.