Re: Autonomous Perception
During practice, I've seen many robots come off the bump at an angle. Yes, you have to go up it head-on, but on the way down it doesn't seem to matter so much (at least with mecanum wheels).
This usually happens only with the trailing wheels, and not until they've passed over the top of the bump. (The angle was typically about 15 degrees, enough that only three wheels contact the ground.)
Re: Autonomous Perception
The sensors have some lag, and any large change in distance creates a large 'overshoot' in the sensor. You can filter that out, but doing so reduces the reaction time of the sensor.

Here's a little food for thought. If your robot is traveling at 10 feet per second, how much time do you have to 'see' an object? What is the processing time of your CPU after the sensor has processed the reading? What is the overshoot of the particular sensor involved? The end result is that with a Maxbotix sensor, moving at 10 feet per second, you had better be hitting the brakes when it says an object is at 6 feet, because with the lag, that object is actually at 2 feet.

Ultrasonic sensors also interfere with each other. You'll have to daisy-chain their pings if you're using multiples. If you're using one, you'll have to hold it steady for it to get a reading, then move it. How long can you hold it steady? How long does it take to move? How long does it take to stabilize before taking another reading? Thus, how long will it take to sweep whatever arc you want and then start over again?

Ultrasonic sensors are also flaky. If they see something at the edge of their cone, they may read it, then drop out, then read it again. The reading returned for an object 20 feet away may be 10 feet. IR sensors get flaky when the reflectivity of the object you're shooting changes: flat plates vs. angled surfaces, shiny vs. matte.

I would STRONGLY suggest getting a working knowledge of sensors before you start making decisions on how you think you might want to use them. You may spend a great deal of money only to discover that what you thought would work will not.
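The 6 ft → 2 ft point can be sketched in a couple of lines. The 400 ms combined lag figure below is an assumption chosen to match that example, not a measured spec for any particular sensor:

```python
def remaining_distance_ft(reported_ft, speed_ftps, total_lag_s):
    """Distance actually left to the object by the time a reading
    reaches your control loop, given sensor + filter + CPU lag."""
    return reported_ft - speed_ftps * total_lag_s

# At 10 ft/s with ~400 ms of combined lag, a reading of 6 ft
# means the object is really about 2 ft away:
print(remaining_distance_ft(6.0, 10.0, 0.4))
```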
Re: Autonomous Perception
With stereo vision, you can make the traditional red/blue-glasses 3D effect: one image is red only and the other is blue. Put the images together and, bam, you have 3D...
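A minimal sketch of that channel merge with NumPy, assuming two already-aligned H×W×3 RGB frames (the function name is mine):

```python
import numpy as np

def red_blue_anaglyph(left_rgb, right_rgb):
    """Build the classic red/blue anaglyph: red channel from the
    left-eye image, blue channel from the right-eye image."""
    out = np.zeros_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]   # red from the left camera
    out[..., 2] = right_rgb[..., 2]  # blue from the right camera
    return out
```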
Re: Autonomous Perception
You can take the main idea of the IR range finder and have an array of IR diodes transmit IR beams at the target; the camera picks up the reflected spots, and the distance can be found using trigonometry.
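The trigonometry here is just similar triangles. A sketch under assumed numbers (a pinhole camera with a known focal length in pixels, and one IR beam projected parallel to the camera axis at a known baseline offset; all names are illustrative):

```python
def ir_spot_range(baseline_m, focal_px, spot_offset_px):
    """Triangulate range to an IR spot: a beam parallel to the camera
    axis, offset by `baseline_m`, appears `spot_offset_px` pixels from
    the image center, and that offset shrinks as the target recedes.
    Similar triangles give Z = baseline * f / offset."""
    return baseline_m * focal_px / spot_offset_px

# e.g. a 10 cm baseline, 500 px focal length, spot 25 px off-center:
print(ir_spot_range(0.10, 500, 25))  # 2.0 m
```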
Re: Autonomous Perception
If you want 3D perception from two images on the robot, what you need is called a disparity map, and that WILL load down your processor. The IR camera idea sounds fun, but you might have to investigate the IR output of the stage lights they use at competition.
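To see why a disparity map loads the processor: even naive block matching on a single scanline costs on the order of width × max_disparity × window absolute-difference operations, repeated for every row of every frame. A toy sketch (window size and disparity range are arbitrary picks, not tuned values):

```python
import numpy as np

def scanline_disparity(left, right, half_win=3, max_disp=16):
    """Naive SAD (sum of absolute differences) matching along one image
    row: for each pixel in the left row, slide a window over the right
    row and keep the shift with the lowest cost. Depth then follows
    from Z = f * B / disparity."""
    w = len(left)
    disp = np.zeros(w, dtype=int)
    for x in range(half_win, w - half_win):
        patch = left[x - half_win:x + half_win + 1]
        best_cost, best_d = None, 0
        for d in range(min(max_disp, x - half_win) + 1):
            cand = right[x - d - half_win:x - d + half_win + 1]
            cost = int(np.abs(patch - cand).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```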
Re: Autonomous Perception
The sonar has a sample rate of 20 Hz, with the analog signal updated between 38 and 42 ms into the cycle. If you sample just before it updates the analog output, there can be up to 92 ms of lag after the signal is measured. At 10 ft/s, that puts you at about 11 inches.

Now the processing time: if it takes you 500 ms to process, you're in serious trouble. I think 100 ms is reasonable and achievable, so we'll go with that. You've added 12 inches to your distance.

How far does it take to slow down? With a coefficient of friction of 1, you can decelerate at 1 g, or 32 ft/s/s. 10 ft/s ÷ 32 ft/s/s ≈ 300 ms. 300 ms × 10 ft/s × 1/2 = 1.5 feet, or 18 inches.

Combined, it's taken about 40 inches to stop; a bit over 3 feet. Let's hope the other robot isn't travelling your way.
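The same numbers in code, so each term is explicit (rounding each term to whole inches gives the 40-inch figure; the unrounded sum comes out just under 42 inches):

```python
def stopping_budget_in(speed_ftps=10.0, sensor_lag_s=0.092,
                       processing_s=0.100, mu=1.0):
    """Total distance covered between 'object appears' and 'robot
    stopped': travel during sensor lag, travel during processing,
    then braking at mu * g. Returns inches."""
    g = 32.0  # ft/s^2, as above
    lag_in = speed_ftps * sensor_lag_s * 12          # ~11 in
    processing_in = speed_ftps * processing_s * 12   # 12 in
    braking_in = speed_ftps**2 / (2 * mu * g) * 12   # ~18.8 in
    return lag_in + processing_in + braking_in

print(stopping_budget_in())  # just under 42 inches
```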
Re: Autonomous Perception
Here's my message from a different thread, for kamocat's request:
Well, let's just say that we're going to play fully autonomously for this year's game. Many games have the same theme involving a ball that needs to be picked up, thrown, tossed, etc., so some of the same ideas are likely to apply.

Finding a ball: You could have 4 sonic rangers across the front of the robot, spaced apart just under the diameter of a ball. The robot spins until there is an object that gives approximately the same distance on two, and only two, of the sensors; that would be a ball.

Getting the ball: It would be difficult to guide the robot so that the ball hits a particular point to be picked up by a vacuum or a small ball roller, so I would suggest a double ball roller that runs as far across the robot as possible (I'm thinking of a robot very similar to 1918 or 1986). When the robot finds something it thinks is a ball, it stops spinning and drives forward. On both sides of the robot you could have 2 phototransistors lined up parallel to the ball roller, about 1.5-2 inches inside the frame. This way the robot can tell when it has a ball and approximately where on the robot the ball has stuck. (We use the same sensor to detect when a ball is in our vacuum; it's easy to use and very reliable.)

Shooting the ball: Since the phototransistors aren't that accurate, the code would split the ball roller into 3 sections: left side of the robot, middle, and right side. The robot then spins until the camera sees the goal. The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at. Once the robot sees the target, you line up your shot using the camera again and fire.

Then you start over with the ball-collection phase of the code.

Special considerations: This would take some playing around with. You'd probably have to throw in some timing aspects so that the robot doesn't get stuck on one part of the code. Things like "if you saw a ball 10 seconds ago and you haven't picked it up, go back to finding balls," or "if you don't have a ball anymore, go back to finding balls," or "if it takes you more than 5 seconds to find the goal, drive forward and try again." The sonic rangers could also be used for basic driving maneuvers: if more than 2 of them see an object less than 3 feet away, turn around.
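The timeout rules above fit naturally into a small state machine. A sketch of that logic (the state names and the pluggable clock are my own choices, and the sensor inputs arrive as plain booleans rather than through any real sensor API):

```python
import time

FIND_BALL, APPROACH, AIM, SHOOT = range(4)

class BallCycle:
    """Ball-collection cycle with the timeouts described above."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.state = FIND_BALL
        self.entered = clock()

    def _goto(self, state):
        self.state = state
        self.entered = self.clock()

    def step(self, saw_ball, has_ball, sees_goal):
        elapsed = self.clock() - self.entered
        if self.state == FIND_BALL:
            if saw_ball:
                self._goto(APPROACH)        # drive toward it
        elif self.state == APPROACH:
            if has_ball:
                self._goto(AIM)
            elif elapsed > 10.0:            # saw it 10 s ago, still empty
                self._goto(FIND_BALL)
        elif self.state == AIM:
            if not has_ball:                # lost the ball somewhere
                self._goto(FIND_BALL)
            elif sees_goal:
                self._goto(SHOOT)
            elif elapsed > 5.0:             # drive forward, try again
                self._goto(AIM)
        elif self.state == SHOOT:
            self._goto(FIND_BALL)           # fire, then start over
        return self.state
```

Passing the clock in makes the timeouts testable without waiting in real time.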
Re: Autonomous Perception
Wouldn't the goal be the same color as a team's bumpers? Couldn't that confuse the color detection? I think it would be much easier to use a gyro for field orientation rather than color detection; that's going a bit overboard on something that should be relatively easy. Gyro drift shouldn't be too bad, because you only need to be accurate to within 180 degrees.
Re: Autonomous Perception
Well, it's about time we made a testing list.
Here is a list of things that need to be tested before they can be used or discarded.
The first few have examples of what metrics would be useful. Is there interest in testing these? Are there already-existing quantitative data for any of these?
Re: Autonomous Perception
Once an object is detected, edge detection can be used to distinguish between a wall, a ball, and a robot. Then, if it's identified as a robot, the camera can check the bumper color.
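One cheap way to act on those edges, sketched with made-up thresholds (this is a rough assumption, not a tested classifier): a ball's outline is nearly circular, a wall fills most of the frame height, and whatever's left is a robot candidate whose bumper color you then sample.

```python
import math

def classify_outline(area_px, perimeter_px, height_px, frame_height_px):
    """Rough shape heuristic from an edge-detected outline.
    Circularity 4*pi*A/P^2 is 1.0 for a perfect circle and drops
    for elongated or boxy shapes (a square scores ~0.785)."""
    circularity = 4 * math.pi * area_px / perimeter_px ** 2
    if circularity > 0.9:
        return "ball"
    if height_px > 0.6 * frame_height_px:
        return "wall"
    return "robot"   # next step: sample bumper color for alliance
```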
Copyright © Chief Delphi