View Full Version : Depth Perception
ianonavy
10-01-2012, 15:50
We have decided on a robot design that requires sensing how far the basketball hoops are from our robot, but we are not sure how to go about doing that. We were thinking about mounting the Kinect on our robot, but that proved to be a lot more complicated than we expected. I've read some posts here about using lasers to detect how far the hoops are. How would our team go about doing that? What is the simplest/easiest way of going about doing this?
tl;dr: What's the easiest way to sense how far away the basketball hoops are from the robot?
Thanks for your input, good luck on your build season!
andreboos
10-01-2012, 15:57
You could use the ultrasonic range sensor in the Kit of Parts.
DoctorWhom93
10-01-2012, 16:00
If you open up the Video Processing LabVIEW example, it has a function defined in there called "Distance". You can open that up and see what FIRST put in there for computing distance using the camera.
Note that the maximum range of the Maxbotix LV-MaxSonar®-EZ1 (http://www.maxbotix.com/documents/MB1010_Datasheet.pdf) sonar range finder is 254 inches (21 ft / 6.5 m).
plnyyanks
10-01-2012, 16:04
You could use the axis camera instead of the kinect and process the image to determine the distance. If you have LabVIEW, one of the examples shows how to do this. Otherwise, these (http://www.chiefdelphi.com/forums/showthread.php?t=99424) threads (http://www.chiefdelphi.com/forums/showthread.php?t=99618) might (http://www.chiefdelphi.com/forums/showthread.php?t=99120) be useful, along with this whitepaper (http://firstforge.wpi.edu/sf/docman/do/downloadDocument/projects.wpilib/docman.root/doc1302/1).
ianonavy
10-01-2012, 16:40
Well, we're going to be programming in either C++ or Java, so LabVIEW is out of the question. I don't understand how you could process an image to determine depth with a single image. Wouldn't it be more accurate to have two cameras to have stereo vision?
plnyyanks
10-01-2012, 17:12
Attached is a screenshot of the LV example VI. If you don't use LV, then just pay attention to the triangles drawn out on the bottom.
We can calculate the distance to the target using a couple of known values and some trigonometry. We know the camera's resolution, its field of view (the full angle at which it can view, or 2Θ), the width of the target in real life, and the target's position in the camera image. Here's a comment from the VI that goes over the math:
Since we know that the target width is 2', we can use its pixel width to determine the width
of the camera field of view in ft at that working distance from the camera. W is half of that.
Divide by the tangent of theta (half the view angle), to determine d.
So we take the width of the target box in pixels and determine the width of the whole image (2/width*xresolution/2). Then, we divide that by the tangent of .5Θ (where Θ = the view angle, as found on the Axis camera datasheet (http://www.axis.com/files/datasheet/ds_206_33168_en_0904_lo.pdf) [about 47˚ for the M1011, and 54˚ for the 206]) to get the distance in feet.
Also, all this is explained in NI's Whitepaper on the subject (http://firstforge.wpi.edu/sf/docman/do/downloadDocument/projects.wpilib/docman.root/doc1302/1)
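The pixel-width trig described above can be sketched in Java (the thread's team uses Java, so that's what's shown here). The class name, the 640-pixel resolution, and the example numbers are mine, not from the whitepaper; the formula itself follows the VI comment: scale the known 2 ft target width up to the full image width, halve it, and divide by tan(Θ/2).

```java
public class CameraDistance {

    /**
     * Estimate distance to the target from its apparent pixel width.
     *
     * @param targetPixelWidth width of the target's bounding box in pixels
     * @param imageWidthPx     horizontal camera resolution (e.g. 640)
     * @param fovDegrees       full horizontal field of view (2Θ), ~47° for the Axis M1011
     * @param targetWidthFt    real-world target width (2 ft for the 2012 vision target)
     * @return estimated distance in feet
     */
    public static double distanceFt(double targetPixelWidth, double imageWidthPx,
                                    double fovDegrees, double targetWidthFt) {
        // Width of the whole camera view, in feet, at the target's distance
        double fieldWidthFt = targetWidthFt * imageWidthPx / targetPixelWidth;
        double halfWidthFt = fieldWidthFt / 2.0;
        // d = W / tan(Θ), where Θ is half the view angle
        double thetaRad = Math.toRadians(fovDegrees / 2.0);
        return halfWidthFt / Math.tan(thetaRad);
    }
}
```

For example, a 2 ft target that appears 160 px wide in a 640 px image with a 47° FOV works out to roughly 9.2 ft away.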
rich2202
13-01-2012, 00:48
It gets a little more complicated when you are looking at the basket from an angle. You have to examine the shape of the square to calculate the viewing angle. Then use the viewing angle to adjust the size of the box before calculating the distance.
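One simple way to estimate that viewing angle, assuming the apparent width shrinks roughly with the cosine of the angle while the height stays the same: compare the observed aspect ratio to the target's true aspect ratio. This is a rough sketch, not the poster's actual method; the 24"×18" target dimensions are the 2012 vision target's nominal outer size, and the class name is made up.

```java
public class AngleCorrection {

    private static final double TRUE_ASPECT = 24.0 / 18.0; // 2012 target: 24" wide x 18" tall

    /** Estimated viewing angle off-center, in degrees, from the squashed aspect ratio. */
    public static double viewingAngleDeg(double pixelWidth, double pixelHeight) {
        double observedAspect = pixelWidth / pixelHeight;
        // Width appears scaled by cos(angle); clamp to 1.0 for head-on noise
        double ratio = Math.min(1.0, observedAspect / TRUE_ASPECT);
        return Math.toDegrees(Math.acos(ratio));
    }

    /** Pixel width corrected back to what it would be head-on, for the distance formula. */
    public static double correctedWidth(double pixelWidth, double pixelHeight) {
        double angleRad = Math.toRadians(viewingAngleDeg(pixelWidth, pixelHeight));
        return pixelWidth / Math.cos(angleRad);
    }
}
```

A head-on 160×120 px box gives 0°, while an 80×120 px box (width halved) gives 60°, and its corrected width comes back to 160 px.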
windtakers
14-01-2012, 22:58
Attached is a screenshot of the LV example VI.
Where is this example in LabVIEW?
Tylernol
04-02-2012, 11:23
We're currently using the axis camera to find the rectangle of the reflective tape. We'll analyze the aspect ratio of the square, then count the pixels to find our distance.
...or at least that's what our programmers told me. I'm not one myself and most of it is very confusing to me.
:edit Oh, and we're using Java. I could probably ask our programmers for the code.
nssheepster
04-02-2012, 12:15
Our team wants to do that, but upon reflection and research, we feel that the camera or "laser" sensing method is not consistently reliable enough. We thought of a filtered gyroscope-accelerometer system, but we don't have the time and manpower to finish it before our first regional. So, we are doing without. But reliability is going to be a problem, no matter what almost-incomprehensible algorithms you use.
Be sure to test how well the ultrasonic sensors work in a noisy environment. What happens when a robot with a chain-driven wheel turning at 5000 rpm is next to your robot, or when your own shooter is running? It may work fine, it may not! We're going to test that today.
DonRotolo
04-02-2012, 19:04
Be sure to test how well the ultrasonic sensors work in a noisy environment.
Excellent idea. Other robots may have similar sensors pinging as well.
In any case, it is possible to reduce much of the potential/actual interference using standard audio techniques.
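One such standard technique is a sliding median filter, which rejects isolated spikes (a stray ping from another robot, a burst of shooter noise) while tracking the true range. This is a generic sketch, not anything from the kit of parts; the class name and window size are arbitrary choices.

```java
import java.util.Arrays;

public class SonarFilter {

    private final double[] window; // circular buffer of recent readings
    private int count = 0;

    public SonarFilter(int windowSize) {
        window = new double[windowSize];
    }

    /** Add a new sonar reading and return the median of the last few readings. */
    public double filter(double reading) {
        window[count % window.length] = reading;
        count++;
        int n = Math.min(count, window.length);
        double[] sorted = Arrays.copyOf(window, n);
        Arrays.sort(sorted);
        return sorted[n / 2]; // middle value; a single spike never wins
    }
}
```

With a window of 5, a one-off spike of 400 inches in a stream of ~100-inch readings is simply discarded.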
Wolfgang
09-02-2012, 19:49
The ultrasonic sensor included in the kit of parts has proven to be harder than expected to use effectively, at least ours was. It appears to get noisy and distorted data beyond about 8 ft of distance. This happens even if it is quiet around the sensor, so we believe it would be tough to use unless you were shooting from very close range.
We had some problems using the camera for distance using just straight math after detection and filtering. However, we switched gears and took the fuzzy-logic route, creating a graph of distance against values calculated from points and fitting a curve to it. The curve made things super accurate and precise, because it absorbs whatever error existed at each distance (e.g., the algorithm finding the outside of the square or such). I'm not sure exactly what was going on that caused those issues, but now we have 2-inch accuracy at 54 feet.
ianonavy
11-02-2012, 12:42
We had some problems using the camera for distance using just straight math after detection and filtering. However, we switched gears and took the fuzzy-logic route, creating a graph of distance against values calculated from points and fitting a curve to it. The curve made things super accurate and precise, because it absorbs whatever error existed at each distance (e.g., the algorithm finding the outside of the square or such). I'm not sure exactly what was going on that caused those issues, but now we have 2-inch accuracy at 54 feet.
How do you calculate distance from points? Do you just look at the size of the target?
Chris is me
11-02-2012, 13:11
Are you attempting to write "general" camera tracking code? With or without human adjustment / compensation? Is this for autonomous mode?
There are many situations you could be working toward that have different constraints. Perhaps you may find that you're best served by a different solution for each mode. One potential example: In autonomous mode, you have a more or less known starting position, and the rules prohibit opponent interference. If you can shoot from there reliably with relatively simple camera aiming, then you have "figured out" autonomous scoring without camera depth perception.
Teams in Aim High often reported shooting from particular "sweet spots", nice little areas on the field that the robot was tuned to score most reliably from. Obviously a different game, from the different size hoop to a very different game piece. But this is the kind of thing one might need to look into if, for whatever reason, camera depth perception isn't reliable enough to work for your desired constraints.
Do keep in mind that this post is coming from the perspective of a member of a team that doesn't have the expertise to do complex software control loops, so my instinct is to look for as many ways to eliminate a need for software as possible. Take this post as a reminder that there may be more than one way to solve the problem of reliably making shots with your particular design - not just camera depth perception. Whether or not you give that up, and when, is one of the many, many challenges of this particular game.
We used the particle analysis function to find the height of the box after filtering it out with an HSL comparison. After collecting data for every foot from 3-23 feet and applying a regression with my TI-84 (the equation came to something like 470/x^(.89)), our camera is accurate at predicting distance from any angle to a little more than an inch. The sweet spot for distance measurement is 8 ft or so. Comparing the same measurement using the width to the measurement from the height gives us what angle we are at compared to the target.
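Using a fitted power curve like that at runtime is a one-liner. The sketch below assumes the poster's reported fit of 470/x^(0.89), where x is the target's pixel height; the constants come from their calibration, so any other camera/resolution would need its own data and regression. The class name is made up.

```java
public class HeightToDistance {

    // Constants from a power regression over calibration data
    // (distance measured every foot from 3 to 23 ft); these are the
    // approximate values the poster reported, not universal numbers.
    private static final double A = 470.0;
    private static final double B = 0.89;

    /** Distance in feet, estimated from the target's pixel height. */
    public static double distanceFt(double pixelHeight) {
        return A / Math.pow(pixelHeight, B);
    }
}
```

A 100 px tall target comes out around 7.8 ft; a smaller (more distant) target gives a larger distance, as expected. Note this is a power regression, not a linear one, which is why it can stay accurate across the whole 3-23 ft range.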
ianonavy
11-02-2012, 14:45
We used the particle analysis function to find the height of the box after filtering it out with an HSL comparison. After collecting data for every foot from 3-23 feet and applying a regression with my TI-84 (the equation came to something like 470/x^(.89)), our camera is accurate at predicting distance from any angle to a little more than an inch. The sweet spot for distance measurement is 8 ft or so. Comparing the same measurement using the width to the measurement from the height gives us what angle we are at compared to the target.
Okay, that makes a lot more sense to me than trying to find distances based on targets. I'll have to try that later today with our test robot. It's a linear regression, right? (as opposed to a polynomial regression).
Thanks a lot!
KennyLives
11-02-2012, 15:27
Note that the maximum range of the Maxbotix LV-MaxSonar®-EZ1 (http://www.maxbotix.com/documents/MB1010_Datasheet.pdf) sonar range finder is 254 inches (21 ft / 6.5 m).
This little guy came in the KOP and works pretty well. I say use it.
xhawaii808
11-02-2012, 22:08
A touch sensor, because then you know it's touching the touch sensor. ::rtm::
If you have the camera mounted at a fixed angle, you can use the screen coordinates of the hoop in the image to determine the angle between the horizontal plane the camera occupies and the line between the camera and the hoop. As you know the height of the hoop from the ground, you can then use trig to calculate the distance.
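That angle-of-elevation method can be sketched as follows, assuming a simple linear degrees-per-pixel approximation for the camera (good enough for narrow FOVs; a proper model would use the focal length). All names and the example numbers are mine.

```java
public class ElevationDistance {

    /**
     * Distance along the floor from camera to hoop, from the hoop's
     * vertical position in the image.
     *
     * @param cameraHeightFt height of the camera off the floor
     * @param hoopHeightFt   known height of the hoop
     * @param cameraTiltDeg  fixed upward tilt of the camera mount
     * @param targetCenterY  y pixel of the hoop's center (0 = top of image)
     * @param imageHeightPx  vertical camera resolution
     * @param fovVertDeg     vertical field of view of the camera
     */
    public static double distanceFt(double cameraHeightFt, double hoopHeightFt,
                                    double cameraTiltDeg, double targetCenterY,
                                    double imageHeightPx, double fovVertDeg) {
        // Pixel offset from image center; positive means above center
        double offsetPx = imageHeightPx / 2.0 - targetCenterY;
        double degPerPx = fovVertDeg / imageHeightPx; // linear approximation
        double elevationDeg = cameraTiltDeg + offsetPx * degPerPx;
        // Right triangle: opposite = height difference, angle = elevation
        return (hoopHeightFt - cameraHeightFt) / Math.tan(Math.toRadians(elevationDeg));
    }
}
```

For instance, a camera 1 ft off the floor tilted up 20°, seeing a 9 ft hoop dead-center in a 240 px image, gives 8/tan(20°) ≈ 22 ft.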
dakaufma
18-02-2012, 08:22
We are using that method and so far it is working quite well for us. Just make sure there is a sufficient height difference between the camera and the target.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.