We have decided on a robot design that requires sensing how far the basketball hoops are from our robot, but we are not sure how to go about doing that. We were thinking about mounting the Kinect on our robot, but that proved to be a lot more complicated than we expected. I've read some posts here about using lasers to detect how far away the hoops are. How would our team go about doing that? What is the simplest/easiest way to do this?
tl;dr; What’s the easiest way to sense how far away the basketball hoops are from the robot?
Thanks for your input, good luck on your build season!
If you open up the Video Processing LabVIEW example, it has a function defined in there called "Distance". You can open that up and see what FIRST put in there for processing distance using the camera.
You could use the Axis camera instead of the Kinect and process the image to determine the distance. If you have LabVIEW, one of the examples shows how to do this. Otherwise, these threads might be useful, along with this whitepaper.
Well, we're going to be programming in either C++ or Java, so LabVIEW is out of the question. I don't understand how you could determine depth by processing a single image. Wouldn't it be more accurate to have two cameras for stereo vision?
Attached is a screenshot of the LV example VI. If you don’t use LV, then just pay attention to the triangles drawn out on the bottom.
We can calculate the distance to the target using a couple of known values and some trigonometry: the camera's resolution, its field of view (the full viewing angle, or 2Θ), the real-life width of the target, and the width of the target in pixels in the camera image. Here's a comment from the VI that goes over the math:
Since we know that the target width is 2’, we can use its pixel width to determine the width
of the camera field of view in ft at that working distance from the camera. W is half of that.
Divide by the tangent of theta (half the view angle), to determine d.
So we take the width of the target box in pixels and use it to work out the width of the whole image in feet at that distance; half of that is W (2/width*xresolution/2). Then we divide W by the tangent of Θ, half the view angle (the full angle is on the Axis camera datasheet: about 47˚ for the M1011 and 54˚ for the 206), to get the distance in feet.
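Here's a rough sketch of that math in Java (the class and method names are just for illustration, and the 320-pixel x-resolution is only an example; use whatever resolution and view angle match your camera):

// Estimate distance to the target from its pixel width, per the math above.
public class DistanceFromWidth {
    static final double TARGET_WIDTH_FT = 2.0;   // real-life width of the target
    static final double X_RESOLUTION = 320.0;    // image width in pixels (example value)
    static final double VIEW_ANGLE_DEG = 47.0;   // full horizontal view angle (M1011; ~54 for the 206)

    // targetPixelWidth = width of the detected target box in pixels
    public static double distanceFt(double targetPixelWidth) {
        // Full image width in feet at the target's distance, then W = half of it
        double imageWidthFt = TARGET_WIDTH_FT / targetPixelWidth * X_RESOLUTION;
        double halfWidthFt = imageWidthFt / 2.0;
        // d = W / tan(theta), where theta is half the view angle
        double halfAngleRad = Math.toRadians(VIEW_ANGLE_DEG / 2.0);
        return halfWidthFt / Math.tan(halfAngleRad);
    }

    public static void main(String[] args) {
        // e.g. a 2 ft target spanning 80 pixels of a 320-pixel-wide image
        System.out.println(distanceFt(80.0) + " ft");
    }
}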
It gets a little more complicated when you are looking at the basket from an angle. You have to examine the shape of the square to calculate the viewing angle. Then use the viewing angle to adjust the size of the box before calculating the distance.
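One rough way to do that correction, as a sketch only: treat the skew as simple cosine foreshortening of the target's width (the height barely changes as you move off to the side), estimate the viewing angle from the width/height aspect ratio, and stretch the width back out before running the distance math above. The 2 ft by 1.5 ft head-on proportions are an assumption here, and this ignores perspective distortion:

// Rough viewing-angle estimate from the target box's aspect ratio.
public class AngleCorrection {
    // Expected width/height ratio when viewed head-on (assumed 2 ft x 1.5 ft target)
    static final double HEAD_ON_RATIO = 2.0 / 1.5;

    // Estimate the horizontal viewing angle in radians from the detected box.
    public static double viewingAngleRad(double pixelWidth, double pixelHeight) {
        double observedRatio = pixelWidth / pixelHeight;
        // cos(angle) is roughly observedRatio / HEAD_ON_RATIO; clamp to [0, 1]
        double c = Math.min(1.0, Math.max(0.0, observedRatio / HEAD_ON_RATIO));
        return Math.acos(c);
    }

    // Undo the foreshortening so the width-based distance math can be reused.
    public static double correctedPixelWidth(double pixelWidth, double pixelHeight) {
        double c = Math.cos(viewingAngleRad(pixelWidth, pixelHeight));
        return pixelWidth / Math.max(c, 0.1);   // avoid blowing up at extreme angles
    }
}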
We’re currently using the axis camera to find the rectangle of the reflective tape. We’ll analyze the aspect ratio of the square, then count the pixels to find our distance.
…or at least that’s what our programmers told me. I’m not one myself and most of it is very confusing to me.
Edit: Oh, and we're using Java. I could probably ask our programmers for the code.
Our team wants to do that, but upon reflection and research, we feel that the camera or "laser" sensing method is not consistently reliable enough. We thought of a filtered gyroscope/accelerometer system, but we don't have the time or manpower to finish it before our first regional, so we are doing without. But reliability is going to be a problem no matter what almost-incomprehensible algorithms you use.
Be sure to test how well the ultrasonic sensors work in a noisy environment. What happens when a robot with a chain-driven wheel turning at 5000 rpm is next to your robot, or when your own shooter is running? It may work fine, it may not! We're going to test that today.
The ultrasonic sensor included in the kit of parts has proven harder than expected to use effectively, at least for us. It appears to give noisy and distorted data beyond about 8 ft. This happens even when it is quiet around the sensor, so we believe it would be tough to use unless you were shooting from very close range.
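If you do keep experimenting with the ultrasonic, one cheap way to knock down the noise spikes is a median filter over the last few readings. This is only a sketch (the sample values in main are made up, and where the readings come from depends on your sensor code):

import java.util.Arrays;

// Median filter: keeps the last N range readings and reports the median,
// which rejects single-sample spikes better than a plain average.
public class MedianFilter {
    private final double[] window;
    private int count = 0;
    private int next = 0;

    public MedianFilter(int size) {
        window = new double[size];
    }

    // Add a new reading (e.g. range in inches) and return the filtered value.
    public double update(double reading) {
        window[next] = reading;
        next = (next + 1) % window.length;
        if (count < window.length) count++;

        double[] sorted = Arrays.copyOf(window, count);
        Arrays.sort(sorted);
        return sorted[count / 2];
    }

    public static void main(String[] args) {
        MedianFilter filter = new MedianFilter(5);
        // Fake readings with one noise spike at 200 inches
        for (double r : new double[] {60, 61, 200, 62, 61, 60}) {
            System.out.println(filter.update(r));
        }
    }
}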
We had some problems using the camera for distance with just straight math after detection and filtering. However, we switched gears and took the fuzzy-logic route: we made a graph of actual distance against the values calculated from the image points and came up with a curve. The curve made things super accurate and precise (it absorbs whatever error existed at each distance, e.g. the algorithm finding the outside of the square). I'm not sure exactly what was causing those issues, but we now have 2-inch accuracy at 54 feet.
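For anyone wanting to try the same thing, the basic idea is a calibration table: record what your vision code reports at several measured distances, then interpolate between those points at run time instead of trusting the raw trig. A minimal sketch (the table values are just placeholders, not our real numbers):

// Calibration-curve lookup: maps a value reported by the vision code
// to a measured distance by interpolating between recorded points.
public class CalibrationCurve {
    // Placeholder table of {reported value, measured distance in ft}, sorted by reported value
    static final double[][] TABLE = {
        {40, 30}, {60, 22}, {90, 15}, {140, 10}, {220, 6}
    };

    public static double distanceFt(double reported) {
        if (reported <= TABLE[0][0]) return TABLE[0][1];
        for (int i = 1; i < TABLE.length; i++) {
            if (reported <= TABLE[i][0]) {
                // Linear interpolation between the two surrounding calibration points
                double t = (reported - TABLE[i - 1][0]) / (TABLE[i][0] - TABLE[i - 1][0]);
                return TABLE[i - 1][1] + t * (TABLE[i][1] - TABLE[i - 1][1]);
            }
        }
        return TABLE[TABLE.length - 1][1];
    }

    public static void main(String[] args) {
        System.out.println(distanceFt(75) + " ft");   // falls between two table entries
    }
}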
Are you attempting to write “general” camera tracking code? With or without human adjustment / compensation? Is this for autonomous mode?
There are many situations you could be working toward that have different constraints. Perhaps you may find that you’re best served by a different solution for each mode. One potential example: In autonomous mode, you have a more or less known starting position, and the rules prohibit opponent interference. If you can shoot from there reliably with relatively simple camera aiming, then you have “figured out” autonomous scoring without camera depth perception.
Teams in Aim High often reported shooting from particular “sweet spots”, nice little areas on the field that the robot was tuned to score most reliably from. Obviously a different game, from the different size hoop to a very different game piece. But this is the kind of thing one might need to look into if, for whatever reason, camera depth perception isn’t reliable enough to work for your desired constraints.
Do keep in mind that this post is coming from the perspective of a member of a team that doesn’t have the expertise to do complex software control loops, so my instinct is to look for as many ways to eliminate a need for software as possible. Take this post as a reminder that there may be more than one way to solve the problem of reliably making shots with your particular design - not just camera depth perception. Whether or not you give that up, and when, is one of the many, many challenges of this particular game.
We used the particle analysis function to find the height of the box after filtering the image with an HSL comparison. After collecting data for every foot from 3 to 23 feet and applying a regression on my TI-84 (the equation came out to something like 470/x^(.89)), our camera predicts distance from any angle to within a little more than an inch. The sweet spot for distance measurement is around 8 ft. Comparing the same measurement made from the width against the one made from the height tells us what angle we are at relative to the target.
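Applying a fit like that in code is just a couple of lines; here's a sketch using the approximate equation above (your own coefficients will differ, so refit with your own camera and target data):

// Apply the fitted power curve: distance (ft) is roughly 470 / pixelHeight^0.89.
// The coefficient and exponent are only the approximate values quoted above.
public class RegressionDistance {
    public static double distanceFt(double pixelHeight) {
        return 470.0 / Math.pow(pixelHeight, 0.89);
    }

    public static void main(String[] args) {
        System.out.println(distanceFt(50) + " ft");   // example: a 50-pixel-tall target box
    }
}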
Okay, that makes a lot more sense to me than trying to find distances based on targets. I’ll have to try that later today with our test robot. It’s a linear regression, right? (as opposed to a polynomial regression).