Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Rules/Strategy (http://www.chiefdelphi.com/forums/forumdisplay.php?f=6)
-   -   Depth Perception (http://www.chiefdelphi.com/forums/showthread.php?t=99726)

ianonavy 10-01-2012 15:50

Depth Perception
 
We have decided on a robot design that requires sensing how far the basketball hoops are from our robot, but we are not sure how to go about doing that. We were thinking about mounting the Kinect on our robot, but that proved to be a lot more complicated than we expected. I've read some posts here about using lasers to detect how far the hoops are. How would our team go about doing that? What is the simplest/easiest way of going about doing this?

tl;dr: What's the easiest way to sense how far away the basketball hoops are from the robot?

Thanks for your input, good luck on your build season!

andreboos 10-01-2012 15:57

Re: Depth Perception
 
You could use the ultrasonic range sensor in the Kit of Parts.

DoctorWhom93 10-01-2012 16:00

Re: Depth Perception
 
If you open up the Video Processing LabVIEW example, it has a function defined in there called "Distance". You can open that up and see what FIRST put in there for computing distance using the camera.

DuaneB 10-01-2012 16:04

Re: Depth Perception
 
Note that the maximum range of the MaxBotix LV-MaxSonar®-EZ1 sonar range finder is 254 inches (21 ft / 6.5 m).
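For reference, the EZ1's analog output scales with its supply voltage, roughly Vcc/512 volts per inch (about 9.8 mV per inch at a 5 V supply, per the MaxBotix datasheet). A minimal Java sketch of that conversion; the class and method names and the 5 V supply in the example are our assumptions, not anything from this thread:

```java
public class SonarRange {
    // MaxBotix LV-MaxSonar-EZ1 analog scaling: Vcc / 512 volts per inch.
    // supplyVolts is whatever you power the sensor with (typically 5.0 V).
    public static double voltsToInches(double measuredVolts, double supplyVolts) {
        double voltsPerInch = supplyVolts / 512.0;
        return measuredVolts / voltsPerInch;
    }

    public static void main(String[] args) {
        // At a 5 V supply, 1.0 V on the analog pin is ~102.4 inches (~8.5 ft).
        System.out.println(voltsToInches(1.0, 5.0));
    }
}
```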

plnyyanks 10-01-2012 16:04

Re: Depth Perception
 
You could use the axis camera instead of the kinect and process the image to determine the distance. If you have LabVIEW, one of the examples shows how to do this. Otherwise, these threads might be useful, along with this whitepaper.

ianonavy 10-01-2012 16:40

Re: Depth Perception
 
Well, we're going to be programming in either C++ or Java, so LabVIEW is out of the question. I don't understand how you could determine depth from a single image. Wouldn't it be more accurate to have two cameras for stereo vision?

plnyyanks 10-01-2012 17:12

Re: Depth Perception
 
1 Attachment(s)
Attached is a screenshot of the LV example VI. If you don't use LV, then just pay attention to the triangles drawn out on the bottom.

We can calculate the distance to the target using a few known values and some trigonometry: the camera's resolution, its field of view (the full angle at which it can see, or 2Θ), the real-life width of the target, and the target's position in the camera image. Here's a comment from the VI that goes over the math:
Quote:

Since we know that the target width is 2', we can use its pixel width to determine the width
of the camera field of view in ft at that working distance from the camera. W is half of that.
Divide by the tangent of theta (half the view angle), to determine d.
So we take the width of the target box in pixels and determine the width of the whole image (2/width*xresolution/2). Then, we can divide that by the tangent of .5Θ (where Θ is the full view angle, as found on the Axis camera datasheet: about 47˚ for the M1011 and 54˚ for the 206) to get the distance in feet.

Attachment 11311

Also, all this is explained in NI's Whitepaper on the subject
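For the C++/Java teams, the same triangle math is easy to port. Here's a minimal Java sketch of it; the 47° FOV and 640-pixel horizontal resolution are the M1011 numbers from above, while the class/method names and the 160 px sample width are made up for illustration:

```java
public class CameraDistance {
    /**
     * Distance to the target in feet, using the VI's triangle math:
     * the target (known to be targetWidthFt wide) spans targetPx of
     * xResolution pixels, so the full image at the target's range is
     * (targetWidthFt / targetPx) * xResolution feet wide. Half that
     * width, divided by tan(half the view angle), is the distance d.
     */
    public static double distanceFeet(double targetPx, double xResolution,
                                      double fovDegrees, double targetWidthFt) {
        double imageWidthFt = targetWidthFt / targetPx * xResolution;
        double halfWidthFt = imageWidthFt / 2.0;
        return halfWidthFt / Math.tan(Math.toRadians(fovDegrees / 2.0));
    }

    public static void main(String[] args) {
        // M1011 (~47 degree FOV, 640 px wide): if the 2 ft target is 160 px
        // wide, the image is 8 ft wide at that range, giving d of about 9.2 ft.
        System.out.println(distanceFeet(160, 640, 47.0, 2.0));
    }
}
```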

rich2202 13-01-2012 00:48

Re: Depth Perception
 
It gets a little more complicated when you are looking at the basket from an angle. You have to examine the shape of the square to calculate the viewing angle. Then use the viewing angle to adjust the size of the box before calculating the distance.
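One common way to do that correction (our reading of it; rich2202 didn't spell out the math) uses the fact that a horizontal viewing angle squeezes the target's apparent width but not its height. Comparing the measured width-to-height ratio against the true aspect ratio gives the cosine of the viewing angle, and dividing the pixel width by that cosine recovers the head-on width. A hedged Java sketch, assuming a 4:3 (e.g. 24 in × 18 in) rectangular target; all names are hypothetical:

```java
public class SkewCorrection {
    /**
     * Viewing a rectangular target from a horizontal angle shrinks its
     * apparent width while leaving its height unchanged. The ratio of the
     * apparent aspect ratio to the true one is cos(viewingAngle); dividing
     * the measured width by it estimates the head-on width in pixels.
     */
    public static double correctedWidthPx(double widthPx, double heightPx,
                                          double trueAspectRatio) {
        double apparentRatio = widthPx / heightPx;
        // Clamp: measurement noise can push the apparent ratio past the true one.
        double cosAngle = Math.min(1.0, apparentRatio / trueAspectRatio);
        return widthPx / cosAngle;
    }

    public static void main(String[] args) {
        // A 4:3 target viewed at an angle that makes it appear square
        // (120x120 px) would really be 160 px wide seen head-on.
        System.out.println(correctedWidthPx(120, 120, 4.0 / 3.0));
    }
}
```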

windtakers 14-01-2012 22:58

Re: Depth Perception
 
Quote:

Originally Posted by plnyyanks (Post 1101785)
Attached is a screenshot of the LV example VI.

Where is this example located in LabVIEW?

landybr 04-02-2012 11:02

Re: Depth Perception
 
::ouch::

Tylernol 04-02-2012 11:23

Re: Depth Perception
 
We're currently using the axis camera to find the rectangle of the reflective tape. We'll analyze the aspect ratio of the square, then count the pixels to find our distance.

...or at least that's what our programmers told me. I'm not one myself and most of it is very confusing to me.

Edit: Oh, and we're using Java. I could probably ask our programmers for the code.

nssheepster 04-02-2012 12:15

Re: Depth Perception
 
Our team wants to do that, but upon reflection and research, we feel that the camera or "laser" sensing method is not consistently reliable. We thought of a filtered gyroscope-accelerometer system, but we don't have the time and manpower to finish it before our first regional, so we are doing without. Reliability is going to be a problem no matter what almost-incomprehensible algorithms you use.

Dale 04-02-2012 12:36

Re: Depth Perception
 
Be sure to test how well the ultrasonic sensors work in a noisy environment. What happens when a robot with a chain-driven wheel turning at 5000 rpm is next to yours, or when your own shooter is running? It may work fine, it may not! We're going to test that today.

DonRotolo 04-02-2012 19:04

Re: Depth Perception
 
Quote:

Originally Posted by Dale (Post 1119751)
Be sure to test how well the ultrasonic sensors work in a noisy environment.

Excellent idea. Other robots may have similar sensors pinging as well.

In any case, it is possible to reduce much of the potential/actual interference using standard audio techniques.
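On the software side, one cheap defense (our suggestion; DonRotolo didn't name a specific technique) is a median filter over the last few readings, which discards the occasional stray echo without the lag an averaging filter introduces. A small, self-contained Java sketch:

```java
import java.util.Arrays;

public class MedianFilter {
    private final double[] window;
    private int index = 0;
    private int filled = 0;

    public MedianFilter(int size) {
        window = new double[size];
    }

    /** Add a reading and return the median of the most recent readings. */
    public double update(double reading) {
        window[index] = reading;
        index = (index + 1) % window.length;
        if (filled < window.length) filled++;
        double[] sorted = Arrays.copyOf(window, filled);
        Arrays.sort(sorted);
        return sorted[filled / 2];
    }

    public static void main(String[] args) {
        MedianFilter filter = new MedianFilter(5);
        // One stray echo (250) among steady ~96-inch readings gets rejected.
        double latest = 0;
        for (double r : new double[] {96, 97, 250, 95, 96}) {
            latest = filter.update(r);
        }
        System.out.println(latest); // median of the window, ignoring the spike
    }
}
```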

Wolfgang 09-02-2012 19:49

Re: Depth Perception
 
The ultrasonic sensor included in the Kit of Parts has proven harder than expected to use effectively; at least ours has. It appears to return noisy and distorted data beyond about 8 ft. This happens even when it is quiet around the sensor, so we believe it would be tough to use unless you were shooting from very close range.



Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi