Autonomous Perception
With the goal of making a robot fully autonomous:
What information would be useful to a robot during autonomous? How can that information be measured or acquired?
Re: Autonomous Perception
Some things a robot might want to know are:
Re: Autonomous Perception
Continuing:
Q1) What direction am I heading?
Q2) How far have I traveled?
Q3) How fast am I moving?
Q4) Am I turning?
Q5) How fast am I turning?
Q6) Can I visualize my objective?

To answer some of these questions:
A1) Read from a gyro or compass.
A2) Read from an encoder.
A3) Read the rate from an encoder.
A4) Read from a gyro or compass and compare to a previous reading.
A5) See answer A4, and divide by the time between measurements.
A6) Use the camera and analyze the image.
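A rough sketch of how a few of those could be computed each pass through the control loop (the two read_* functions are hypothetical stand-ins for whatever sensor API the robot actually uses):

Code:
import time

def read_gyro_degrees():
    return 0.0    # hypothetical: accumulated heading from the gyro, in degrees

def read_encoder_inches():
    return 0.0    # hypothetical: accumulated travel from a drive encoder, in inches

TURN_THRESHOLD = 2.0           # deg/s treated as "actually turning" (assumed noise floor)

last_heading = read_gyro_degrees()
last_distance = read_encoder_inches()
last_time = time.time()

while True:
    time.sleep(0.02)                               # ~50 Hz loop
    now = time.time()
    dt = now - last_time

    heading = read_gyro_degrees()                  # A1: which way am I pointing?
    distance = read_encoder_inches()               # A2: how far have I traveled?
    speed = (distance - last_distance) / dt        # A3: inches per second
    turn_rate = (heading - last_heading) / dt      # A5: degrees per second
    turning = abs(turn_rate) > TURN_THRESHOLD      # A4: am I turning?

    last_heading, last_distance, last_time = heading, distance, now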
Re: Autonomous Perception
Did I run into something? (limit switches on bumper?)
Am I going to run into something? (Sonar? Infrared? Something else?)
Re: Autonomous Perception
Here's a clarified list of things that need to be measured for field-awareness, based on the Autonomous Planning discussion.
(1)"region of the field" has been pretty much covered: track your location with the gyro and encoders. However, there's still a high possibility that this would get thrown off when a robot goes over a bump. Is there a way to compensate? (2)"where are the game pieces" could probably be done with a camera. Is there a faster or more efficient way to find soccer balls? (3) "what's the current score?" The camera has been brought up as a method of reading the scores off of the screen. I think that would be extremely difficult (due to lack of testing) to have working at competition. Would this information have to be supplied by the drivers? (4 -5)"Finding nearby 'bots". This year, the bumpers were the key to determining a robot and their alliance. I'm still reluctant to use the camera for that, but I wonder about the possibility of a directional color sensor. I2C would probably be ideal, since it won't take up analog channels, and therefore several could be used for each side of the 'bot. |
Re: Autonomous Perception
Actually, this would be a good use for an arduino board, communicating with the cRIO over serial.
The arduino would have a camera hooked up to its analog inputs, running a sketch that looks for either red or blue, depending on what the cRIO passes to it.
Re: Autonomous Perception
I am not trusting any 8-bit microcontrollers on this project; I am going for the robotics boards that are pretty much bare-minimum computer boards. 8-bit MCUs do not have enough juice to process all that data for autonomous, IMHO.
Re: Autonomous Perception
It has the juice, but it's not fast. If you bump the image quality down to QQVGA, it can find the blue/red bumpers in 2 seconds.
Re: Autonomous Perception
Oh, yeah.
Isn't there such a thing as a color sensor?
Re: Autonomous Perception
Quote:
http://www.sparkfun.com/commerce/pro...oducts_id=8924
http://www.sparkfun.com/commerce/pro...oducts_id=8663
Re: Autonomous Perception
The cRIO has a serial port, as does an arduino.
Re: Autonomous Perception
Wouldn't serial limit us to one color sensor? Or can multiple sensors be on the same serial line?
If we connected an arduino to a color sensor via serial, what would we use to connect the arduino to the robot? Analog? Could we use three phototransistors instead?
Re: Autonomous Perception
Going back to the gamepieces, is the camera really the best way of finding them?
Could a robot have "whiskers" along the sides, so it could tell if it brushed up against a ball? For now, I'm going to ignore how much IO each sensor (or set of sensors) might require.
Re: Autonomous Perception
Quote:
If I line my robot with SONAR, that's $250 for only two per side. If I use IR, it's still $120 for the same thing (though with lower range, non-linear output, and unknown view angle). The camera has a view angle of ±15 degrees. Please show me a diagram that demonstrates that 8 SONAR or 8 IR (plus the camera) are all I need to find a game piece anywhere around the 'bot. Feel free to demonstrate why they're the best solution.
Re: Autonomous Perception
I would suggest implementing autonomous with the minimum possible number of sensors. The camera seems very promising, but it's very hard to use, especially in real time. Does anybody have an idea of how to find a robot using the camera? How about a ball resting against a robot or wall?
Re: Autonomous Perception
If the sensors are at the same height as the balls, won't the robot be in the way of seeing 360 degrees?
If they're on top of the robot, won't they be in the way of each other? How do you spot a gamepiece if it's right up against your 'bot?
Re: Autonomous Perception
Oh, sorry, I misunderstood you.
I thought you were talking specifically about the game piece. You're saying that these SONAR and IR will be an excellent method of tracking other robots?
Re: Autonomous Perception
Quote:
Does that mean you would treat walls, bumps, towers, and gamepieces just as obstacles? Or would you compare the data to where you are on the field, to determine what is a bump, what is a wall, what is a tower, and what must be something else? Would these sonar and IR generally be angled down, or would they be straight out? This sounds like an interesting method, and I'd love to see some working code for it. Would you be willing to do that and post your results?
Re: Autonomous Perception
Well, when you're ready, Maxbotix makes some great SONAR.
http://www.maxbotix.com/Performance_Data.html
Re: Autonomous Perception
Range finders are a good idea, but they're kind of limited. Unless you plan to have a whole row of them, you probably need a better way to distinguish between gamepieces, robots, and obstacles.
I imagine a possible "find gamepiece" routine going something like this: use the camera to find the nearest blob matching certain parameters (depending on the gamepiece). If one exists, compute the angle from its x-position in the image, rotate the 'bot to that angle using the gyro, confirm that the gamepiece is still there, then use a rangefinder on the front of the robot to find the distance and drive up to it. If none are found, rotate 30 degrees or so and try again.
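In rough Python, that routine might look like the sketch below. Everything camera- and drive-related (camera_find_blob, rotate_to, read_rangefinder_inches, drive_forward, read_gyro_degrees) is a hypothetical placeholder for the team's existing code, and the 30-degree field of view is just the figure quoted earlier in the thread.

Code:
CAMERA_FOV_DEG = 30.0      # assumed +/-15 degree view angle
IMAGE_WIDTH_PX = 320

def camera_find_blob():        return None   # hypothetical: nearest matching blob or None
def read_gyro_degrees():       return 0.0    # hypothetical gyro heading
def read_rangefinder_inches(): return 0.0    # hypothetical front rangefinder
def rotate_to(deg):            pass          # hypothetical gyro-based turn
def drive_forward(inches):     pass          # hypothetical drive call

def angle_from_x(blob_x):
    """Convert a blob's x pixel position into an angle off the robot's nose."""
    return (blob_x / IMAGE_WIDTH_PX - 0.5) * CAMERA_FOV_DEG

def find_gamepiece():
    for _ in range(12):                          # at most one full rotation in 30 deg steps
        blob = camera_find_blob()
        if blob is not None:
            rotate_to(read_gyro_degrees() + angle_from_x(blob.x))
            if camera_find_blob() is not None:   # confirm it's still there after the turn
                drive_forward(read_rangefinder_inches())
                return True
        rotate_to(read_gyro_degrees() + 30.0)    # nothing found: step and look again
    return False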
Re: Autonomous Perception
That sounds like a good routine for finding gamepieces.
However, I feel that the gimbals still limit the 'bot to dealing with one gamepiece at a time.
Re: Autonomous Perception
It depends whether your goal is to acquire the gamepiece, or to keep another 'bot away from it.
However, I would still argue that if you can keep track of multiple gamepieces around you at the same time, then you immediately know where to go when one is taken, and you don't waste time looking for another. I think gamepiece detection should be passive most of the time. Also, by not using the camera for finding gamepieces, the robot could focus on the target to fire at, but still be aware of what's happening around it.
EDIT: I replaced all instances of "ball" with "gamepiece". Isn't there PHP for this?
Re: Autonomous Perception
Well, if you don't want us designing with this year in mind, then what do you want us to do? All decision-making is based on the game, so if we don't know next year's game, then we should go with this year's game, learn what we can from it, and hope some of it is reusable. We can't even be sure drive code will still work next year if the GDC throws a 2009-style curveball.
Re: Autonomous Perception
Quote:
While past games are our best estimates of what the GDC will do in the future, it's true that we should plan this in a way that will work with any game. (Perhaps the planning algorithms will differ, but the perception and control should be very similar.) Here are some basic commonalities between games:
Re: Autonomous Perception
This is starting to sound more like a videogame than an autonomous 'bot.
Couldn't you just track your location with the encoders and gyro? The only problem I see with using the encoders for navigation is that they're almost guaranteed to get "off" going over a bump like this past year's. Is that what you're saying this "fog of war" location detection should be used for?
Re: Autonomous Perception
Quote:
There's no need for a "fog of war"; you can get plenty of information from an accelerometer, gyro, and maybe even a compass!
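One common way to blend those is a complementary filter: integrate the gyro for short-term accuracy and let the compass (or any other absolute reference) slowly pull the drift back out. A minimal sketch, ignoring the 0/360 wrap-around, with both read_* functions as hypothetical sensor reads:

Code:
def read_gyro_rate_dps():    return 0.0   # hypothetical rate gyro, degrees per second
def read_compass_degrees():  return 0.0   # hypothetical absolute compass heading

ALPHA = 0.98                               # trust the gyro 98% and the compass 2% each loop
heading = read_compass_degrees()           # start from an absolute reading

def update_heading(dt):
    global heading
    gyro_estimate = heading + read_gyro_rate_dps() * dt
    heading = ALPHA * gyro_estimate + (1.0 - ALPHA) * read_compass_degrees()
    return heading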
Re: Autonomous Perception
An issue I see with these "mapping" approaches is that they can easily be thrown off by drift and unforeseen situations. What if the wheels slide or lose contact with the ground (e.g. bump, pushing matches)?
Re: Autonomous Perception
Because I haven't had much success with accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump. If you're going over a bump, what do you do? How do you detect that you're going over a bump? Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.
Re: Autonomous Perception
Another way to determine your location on the field, at least this year, is to look at the goals. Once you know what angle the goals are at, it's very easy to use triangulation to determine the robot's position.
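A minimal version of that triangulation: if you know the two goals' field coordinates and can get an absolute bearing to each one (camera x-offset plus gyro heading), the robot sits at the intersection of the two sight lines. Sketch only; the coordinates and units are whatever your field map uses.

Code:
import math

def triangulate(goal1, goal2, bearing1_deg, bearing2_deg):
    """Intersect the two sight lines; each goal lies along its bearing from the robot.
    goal1/goal2 are (x, y) field coordinates, bearings are field-relative headings."""
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-6:
        return None                              # bearings nearly parallel: no usable fix
    dx, dy = goal1[0] - goal2[0], goal1[1] - goal2[1]
    t1 = (d2[0] * dy - d2[1] * dx) / det         # distance from the robot to goal1
    return (goal1[0] - t1 * d1[0], goal1[1] - t1 * d1[1])

# e.g. a goal dead ahead (bearing 0) at (120, 0) and one to the left (bearing 90)
# at (0, 120) puts the robot at (0, 0):
print(triangulate((120.0, 0.0), (0.0, 120.0), 0.0, 90.0))

As with any bearing-only fix, accuracy falls apart when the two sight lines are nearly parallel, which is what the det check is guarding against.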
Re: Autonomous Perception
That's true, you could use the Z accelerometer to tell when you're not on the bump. (You could also use it to tell when you land. Hard.)
I think simply "resetting" your position off another technique defeats the purpose of the technique in the first place. What about zeroing up *against* the bump after you've gone over? Or against a wall? That should tell you your angle, and it would tell you your location in at least one plane.
Has anyone tried using the line down the middle of the field? I know in FLL, it's very common to have a line follower. I don't think it'd be hard to have a light and a phototransistor down near the ground so you can tell when you pass over the center line.
I think triangulation off the goals would work pretty well, except that it's an inverse sine function, so your accuracy decreases drastically as you get farther away. I think you may have to look all the way across the field to see two goals at once, though. Perhaps it would require taking a full-res image (and recording the timestamp), processing it a bit later, and then readjusting the last few seconds to coincide with your new data. The question that goes along with this is, will the robot very often look at goals on the other side of the field? It's certainly something you can do in disabled mode, if you're already looking that way.
Re: Autonomous Perception
Isn't this year's accelerometer three-axis? That would easily tell you when you're going over the bump. It might also be possible to do accurate inertial navigation with the accelerometer: the position estimate could reset every time it touches a bump and every time the camera sees two goals, and the velocity estimate could reset every time the encoders record a speed of 0.
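A sketch of that "reset whenever you know something for sure" idea, with the accelerometer doing the dead reckoning between fixes (all of the read_* and *_fix functions below are hypothetical):

Code:
def read_accel_fps2():        return 0.0    # hypothetical forward acceleration, ft/s^2
def read_encoder_speed_fps(): return 0.0    # hypothetical wheel speed, ft/s
def position_fix_from_bump(): return None   # hypothetical: known coordinate when on a bump, else None

velocity = 0.0      # ft/s along the robot's heading
position = 0.0      # ft from the starting point

def inertial_step(dt):
    global velocity, position
    velocity += read_accel_fps2() * dt           # integrate acceleration -> velocity
    position += velocity * dt                    # integrate velocity -> position

    if abs(read_encoder_speed_fps()) < 0.05:     # wheels stopped: zero out velocity drift
        velocity = 0.0
    fix = position_fix_from_bump()               # bump (or two-goal camera fix) in view?
    if fix is not None:
        position = fix                           # snap position to the known landmark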
Re: Autonomous Perception
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse? What's the communication standard before it's converted to USB? (With a ball mouse, you could actually just rewire it and connect it to the digital sidecar like any other encoder. Perhaps it would need a little mechanical adjustment to have good contact with the floor.)
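If the mouse's two quadrature channels really were rewired straight into digital inputs, decoding them is the standard gray-code state table. A sketch, with read_channel_a/read_channel_b as hypothetical digital reads:

Code:
def read_channel_a(): return 0   # hypothetical digital input, 0 or 1
def read_channel_b(): return 0   # hypothetical digital input, 0 or 1

# Each (previous state, new state) pair is one step forward (+1) or backward (-1).
QUAD_TABLE = {
    (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,
    (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1,
}

count = 0
prev_state = 0

def poll_quadrature():
    """Call this faster than the mouse can change state, and accumulate counts."""
    global count, prev_state
    state = (read_channel_a() << 1) | read_channel_b()
    count += QUAD_TABLE.get((prev_state, state), 0)   # 0 on no change or a skipped step
    prev_state = state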
Re: Autonomous Perception
Quote:
The ball could also solve the crossing-the-bump problem. If we set 4 encoders on the ball at 90-degree angles from each other, then if 2 opposite each other go the same direction for a period of time (hence the larger-scale version, to expand the window of time), the 'bot would know it's not on the ground anymore. Even if it happens when not crossing the bump, we know that something has just caused the robot to leave the ground, and tracking would need to reset anyway. Should it also have a horizontal roller to detect changes in angle? With all of this combined, we can get most of our position detection done with just one sensor (plus a gyro or compass for sanity checks?).
Re: Autonomous Perception
Quote:
A typical optical mouse can't keep up with the speed of the robot. A couple of us tried doing a "telephoto mouse" system a number of years ago, but it turns out that any variation in height above the surface changes the scale of the image enough to mess with the sensed travel distance.
Re: Autonomous Perception
Quote:
http://www.chiefdelphi.com/media/photos/30154
http://www.chiefdelphi.com/media/photos/30740
It's a billiard ball, about 55 mm in diameter if I remember correctly. It took us a while to get it to work reliably, and even then it required some petting (cleaning carpet lint after every match, watching the disks, etc.). It was a very good experience for the team, but I think two omni-wheels perpendicular to each other would work better.
Re: Autonomous Perception
Great article on 3D sensing: http://www.cs.stanford.edu/people/an...nipulation.pdf
Re: Autonomous Perception
I think I just found the solution for telling where you are when you go over the bump:
Robots usually only slip on the way *up* the bump. On the way down, their back end may get some air, but all the wheels are still moving at the same rate. A robot can use the Z accelerometer to tell when it's at the top of the bump, and use IR (because it has a very narrow beam) to tell where it is horizontally. (It's assumed you know *which* bump you're on, but if you like, you could use a colored phototransistor to tell the color of the bump.) Alternately, SONAR could be used once the robot has gotten down off the bump.
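Roughly, that bump detector could look like the sketch below. The 0.97 g threshold is a guess that would need tuning, and all three helper functions are hypothetical placeholders:

Code:
def read_accel_z_g():                return 1.0   # hypothetical z-axis accelerometer, in g
def read_ir_inches():                return 0.0   # hypothetical narrow-beam IR toward a known wall
def reset_position_on_bump(lateral): pass         # hypothetical: snap odometry onto the bump line

TILT_THRESHOLD_G = 0.97   # a tilted chassis reads less than a full 1 g on the z axis

def on_top_of_bump():
    return read_accel_z_g() < TILT_THRESHOLD_G

def crossing_update():
    if on_top_of_bump():
        reset_position_on_bump(read_ir_inches())   # we know which bump line we're on; IR fixes the other axis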
Re: Autonomous Perception
I don't really trust using encoders until you can zero to a known position after crossing a bump. Even if the wheels are coming down with the robot, it looks like most robots do slip a fair amount from momentum. If you can get a VERY accurate sonar, that would be okay for re-detection after clearing the bump, but I think the risk of another robot interfering is a bit too high with that.
Also, what would happen if you were to land on another robot after crossing the bump? It's certainly possible with the tunnel bots that are about the same height as the bumps. Is there any way of detecting and preventing this?
Re: Autonomous Perception
I know the Maxbotix SONAR is accurate to an inch, and it would take ±9 mV of jitter to throw it off (if you round the value after you receive it).
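For reference, the conversion is just a scale and a round. This assumes the LV-MaxSonar's analog output of Vcc/512 per inch (roughly 9.8 mV per inch at 5 V), which seems to be where the ±9 mV figure comes from:

Code:
SUPPLY_VOLTS = 5.0
VOLTS_PER_INCH = SUPPLY_VOLTS / 512.0     # ~9.8 mV per inch

def sonar_inches(analog_volts):
    """Round to the nearest inch so ~9 mV of jitter can't flip the reading."""
    return round(analog_volts / VOLTS_PER_INCH)

print(sonar_inches(0.48))   # about 49 inches
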
This sounds like something that would be easy to test with current robot configurations. I'll make a list of things I need to test. Perhaps the surest option is to re-square yourself on the bump after you go over.
Re: Autonomous Perception
Late-night thought:
The XL-MaxSonar gives you the "real-time envelope" in analog, so you can do your own processing. If you had one with a very narrow beam, you could record the acoustic signature (amplitude over time) of the gamepiece, and use that to determine whether you're pointed at a gamepiece. The problem is that if the gamepiece is a sphere, this may make the robot think that every sphere is a gamepiece.
Re: Autonomous Perception
http://www.societyofrobots.com/robottheory.shtml
Very, very good articles to read.
Re: Autonomous Perception
During practice, I've seen many robots come off the bump at an angle. Yes, you have to go up it head-on, but on the way down it doesn't seem to matter so much (at least with mecanum wheels).
This usually only happens with the trailing wheels, however, and not until they've passed over the top of the bump. (The angle was typically about 15 degrees, enough that only three wheels contact the ground.)
Re: Autonomous Perception
Quote:
The sensors have some lag, and any large change in distance creates a large 'overshoot' in the sensor. You can filter that out, but then you reduce the reaction time of the sensor.
Here's a little food for thought. If your robot is traveling at 10 feet a second, how much time do you have to 'see' an object? What is the processing time of your CPU, AFTER the sensor has processed it? What is the overshoot of the particular sensor involved? The end result is that for a Maxbotix sensor, moving at 10 feet per second, you had better be hitting the brakes when it says an object is at 6 feet. Because of the lag, that object is actually at 2 feet.
Ultrasonic sensors also interfere with each other. You'll have to daisy-chain them to ping if you're using multiples. If you're using one, you'll have to hold it steady for it to get a reading, then move it. How long can you hold it steady? How long does it take to move? How long does it take to stabilize before taking another reading? Thus, how long will it take you to actually make a circuit of whatever angle you want and then start over again?
Ultrasonic sensors are also flaky. If they see something at the 'edge' of their zone, they may read it, then drop out, then read it again. The reading you get back when the object is 20 feet away may be 10 feet. IR sensors get flaky when the reflectivity of the object you're shooting changes: flat plates vs. angled surfaces vs. shiny vs. matte.
I would STRONGLY suggest getting a working knowledge of sensors before you start making decisions on how you think you might want to use them. You may spend a great deal of money only to discover that what you thought would work will not.
Re: Autonomous Perception
With stereo vision, you can make the traditional red/blue-glasses 3D effect. One side is red only and the other is blue; put the images together and, bam, you have 3D...
Re: Autonomous Perception
You could take the main idea of the IR rangefinder and have an array of IR diodes transmit beams at the target; the camera picks up the reflections, and trigonometry gives you the distance.
Re: Autonomous Perception
Quote:
If you want 3D perception from two images on the robot, it's called a disparity map, and that WILL load down your processor. The IR camera thing sounds fun, but you might have to investigate the IR output of the stage lights they use for competition.
Re: Autonomous Perception
Quote:
The SONAR has a sample rate of 20 Hz, with the analog signal updated between 38 and 42 ms into the cycle. If you sample just before it updates the analog, there can be up to 92 ms of lag after the signal is measured. At 10 ft/s, that puts you at about 10 inches.
Now the processing time: if it takes you 500 ms to process, then you're in serious trouble. I think 100 ms is reasonable and achievable, so we'll go with that. You've added 12 inches to your distance.
How far does it take to slow down? If you have a coefficient of friction of 1, then you can decelerate at 1 g, or 32 ft/s/s. 10 ft/s ÷ 32 ft/s/s = ~300 ms. 300 ms * 10 ft/s * 1/2 = 1.5 feet, or 18 inches.
Combined, it's taken 40 inches to slow down; a bit over 3 feet. Let's hope the other robot isn't travelling your way.
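The same arithmetic, wrapped up so the numbers are easy to re-run for other speeds, sensors, or processing budgets:

Code:
def stopping_distance_inches(speed_fps, sensor_lag_s=0.092, processing_s=0.100, decel_fps2=32.0):
    """Distance covered between the echo actually happening and the robot stopping."""
    reaction = (sensor_lag_s + processing_s) * speed_fps    # coasting at full speed
    braking = speed_fps ** 2 / (2.0 * decel_fps2)           # v^2 / (2a)
    return (reaction + braking) * 12.0

print(stopping_distance_inches(10.0))   # ~41.8 inches, in line with the ~40" figure above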
Re: Autonomous Perception
Here's my message from a different thread, for kamocat's request:
Well, let's just say that we are going to play a fully autonomous robot for this year's game. Many games have the same theme involving a ball that needs to be picked up, thrown, tossed, etc., so some of the same ideas are likely to apply.
First thing, find a ball. You could have 4 sonic rangers across the front of the robot, spaced apart just under the diameter of a ball. What you could do is have the robot spin until there is an object that gives approximately the same distance for two and only two of the sensors; this would be a ball.
Getting the ball: for this robot it would be difficult to guide the robot so that the ball would hit a particular point to be picked up by a vacuum or small ball roller, so I would suggest a double ball roller that runs as far across the robot as possible (I'm thinking a robot very similar to 1918 or 1986). When the robot finds something that it thinks is a ball, it would stop spinning and drive forward. On both sides of the robot you could have 2 phototransistors lined up parallel to the ball roller, about 1.5-2 in inside the frame. This way the robot could tell when it has a ball and approximately where on the robot the ball has stuck (we use the same sensor to detect when a ball is in our vacuum; easy to use and very reliable).
Shooting the ball: since the phototransistors aren't that accurate, you would have the code split the ball roller into 3 sections: left side of the robot, middle of the robot, right side of the robot. The robot would then spin until the camera sees the goal. The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at. Once the robot sees the target, you can line up your shot using the camera again and fire. Then you start over with the ball-collection phase of the code.
Special considerations: this would take some playing around with; you would probably have to throw in some timing aspects so that the robot doesn't get stuck on one part of the code. Things like "if you saw a ball 10 seconds ago and you haven't picked it up, go back to finding balls", or "if you don't have a ball anymore, go back to finding balls", or "if it takes you more than 5 seconds to find the goal, drive forward and try again". The sonic rangers could also be used for basic driving maneuvers: if more than 2 of them see an object less than 3 feet away, turn around.
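That flow maps naturally onto a little state machine with the timeouts described. Everything sensor- or drive-related below is a hypothetical placeholder for real robot code; this is only a sketch of the structure.

Code:
import time

def two_adjacent_sonars_agree(): return False  # hypothetical: the "two and only two" ball test
def ball_in_roller():            return False  # hypothetical phototransistor check
def camera_sees_goal():          return False  # hypothetical camera check
def spin_slowly():               pass          # hypothetical drive calls
def drive_forward_slowly():      pass
def fire_kicker():               pass

FIND_BALL, GET_BALL, FIND_GOAL, SHOOT = range(4)
state, state_entered = FIND_BALL, time.time()

def change_state(new_state):
    global state, state_entered
    state, state_entered = new_state, time.time()

def autonomous_step():
    """Call repeatedly from the main loop during the autonomous period."""
    elapsed = time.time() - state_entered
    if state == FIND_BALL:
        if two_adjacent_sonars_agree():
            change_state(GET_BALL)
        else:
            spin_slowly()
    elif state == GET_BALL:
        if ball_in_roller():
            change_state(FIND_GOAL)
        elif elapsed > 10.0:          # saw a ball 10 s ago and never got it: give up
            change_state(FIND_BALL)
        else:
            drive_forward_slowly()
    elif state == FIND_GOAL:
        if camera_sees_goal():
            change_state(SHOOT)
        elif elapsed > 5.0:           # can't find the goal: reposition and try again
            drive_forward_slowly()
            change_state(FIND_GOAL)
        else:
            spin_slowly()
    elif state == SHOOT:
        fire_kicker()
        change_state(FIND_BALL)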
Re: Autonomous Perception
Wouldn't the goal be the same color as a team's bumpers? Could that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad, because you only need to be accurate to 180 degrees.
Re: Autonomous Perception
Well, it's about time we made a testing list.
Here is a list of things that need to be tested before they can be used or discarded.
The first few have examples of what metrics would be useful. Is there interest in testing these? Are there already-existing quantitative data for any of these?
Re: Autonomous Perception
Quote:
If the object is identified, edge detection can be used to distinguish between a wall, a ball, and a robot. Then, if it is identified as a robot, the camera can get the bumper color.
Re: Autonomous Perception
Quote:
We had to work hard to use these until we figured out a few tricks for orienting them. Mainly, in 2008, we had the best results when we bounced the sensor off the inside wall. Our robot would constantly adjust to be within 1.5 feet of that inside wall. When the reading jumped to more than 3 feet we hung a sharp left, because we knew we had passed it. The trick there is that the inside wall doesn't move in relation to the robot very much or very quickly, assuming you're close to driving parallel to it.
In addition, we had a sensor on the front of the robot and were sweeping it with a servo. However, we found that (as you calculated) you simply cannot get enough time to sweep it. You're going to smash into something before you get a full check of what's in front of you (we were sweeping 135 degrees). So we went to a static sensor and tuned when to hit the brakes based on the idea that in Overdrive it's unlikely anyone will be driving toward you - they will probably simply be stopped. So we adjusted it by putting cardboard boxes in the line of travel.
There are a couple of very neat videos of our robot stopping to let another one go by, then continuing to drive :D. It LOOKS awesome, like our robot is thinking about it, but it really was just luck that it worked so well.
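A stripped-down version of that "hug the inside wall, turn when it disappears" logic (side_sonar_feet and the drive helpers are hypothetical placeholders):

Code:
def side_sonar_feet():        return 1.5   # hypothetical side-facing sonar, in feet
def turn_left_90():           pass         # hypothetical drive helpers
def steer_toward_wall():      pass
def steer_away_from_wall():   pass

TARGET_FEET = 1.5    # stay about 1.5 ft off the inside wall
CORNER_FEET = 3.0    # a reading past 3 ft means the wall just ended

def wall_follow_step():
    distance = side_sonar_feet()
    if distance > CORNER_FEET:
        turn_left_90()              # passed the corner: hang a sharp left
    elif distance > TARGET_FEET:
        steer_toward_wall()         # drifting away: ease back in
    else:
        steer_away_from_wall()      # too close: ease off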
Re: Autonomous Perception
Quote:
For instance, if your gyro is showing you at 170 degrees and you get hit, and afterwards your encoders are showing 210 but your gyro is showing 0, you can guess which is more likely. This is, of course, assuming a very stable, very non-slipping drivetrain; once you add slip in, you'll need to look somewhere other than the drivetrain encoders. This will still end up varying over the period of a match, so you may want to go a step further and find one other way to re-zero your system, perhaps correlating it to the camera and a sensor you can check your distance to the wall with.
Re: Autonomous Perception
In RoboCup, each team is given access to the feed from an overhead camera. Each robot has a distinct color pattern on top of it, and teams can use this to determine position and other information about all the robots on the field. Some really cool things could be done if FIRST implemented such a system.
Re: Autonomous Perception
I don't think there's a need to actively search for balls. They always end up against the walls. Always. And it seems that teams never realize this, and always have trouble picking up balls when they are resting against the edge of the field. They ended up against the walls in 2004, 2006, 2008, 2009, and now in 2010. When will we learn? :p
We used the Sharp IR sensors with great success to detect where balls were in our mechanism last year. We put the Maxbotix Sonar sensors on servos to try and find other robots to knock in autonomous in 2006. It even worked once!
Re: Autonomous Perception
Quote:
This is of course assuming (1) you're bouncing off the right walls, (2) there aren't any obstacles on either side, and (3) the angle is small enough that you get a proper/accurate reading.
Re: Autonomous Perception
Quote:
A magnetic compass might be an option (e.g. the NXT CMPS-Nx), either to use directly or to make gyro corrections.
Re: Autonomous Perception
Compasses are cool, assuming they work with all the electronic noise that gets stirred up at the comps. I haven't used one myself though, so I guess that makes my opinion void.
Anyway, if this is going to be done by any team, it's almost a given that there will need to be more than 1 experienced programmer on each participating team. I'm sure there are a couple of people out there who think they could pound this out with 4 Mountain Dews, an extra-large bag of munchies, and a good week's work, but the sheer amount of testing required to get this to work will be impossible without a fairly large and experienced group of programmers. So my suggestion is to use this whole coopertition hogwash that FIRST teams keep talking about. I don't like the idea of any robot being the same as another, but we could share some basic mechanisms with each other (maybe some robots have a common drive train, ball control, or kicker, but hopefully not all 3 in common). Anyone who writes code for a mechanism and shares it online would be privy to mechanical specs and code for mechanisms from other participating teams. Maybe we should make a website completely devoted to teams working on fully autonomous robots.
Re: Autonomous Perception
Quote:
That is a non-trivial pursuit. I believe Al from 111 has experience with them and has commented on how difficult they were to apply.
Re: Autonomous Perception
Quote:
Another option is having a high-rate gyro that is only used during impacts, for approximate positioning: http://www.sparkfun.com/commerce/pro...oducts_id=9425 (I think 1500°/s is enough, but it does require a 3.3 V regulator.)
AFTERTHOUGHT: Getting a robot to track its location most of the time is pretty easy, but getting it to track its location correctly ALL of the time is quite difficult.