Knowing the sizes and positions of these reflectors, and assuming the center of the field is (0, 0) (or (0, 0, 0) if the camera's height adds a z-coordinate?), how (mathematically) can I take pictures with a camera and calculate the camera's position on the field? Thanks in advance!
The height of the camera will be constant on most robots.
how (mathematically) can I take pictures from a camera and calculate the camera’s position on the field? Thanks in advance!
Trigonometry. You have a known length and an angle from the image.
Could you give me a link explaining how I’d do it? I don’t have a super strong math background.
distance to target = length of target / tan(angle the target subtends in the image)
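That relation can be sketched in a few lines. This is a minimal pinhole-model sketch, not anyone's actual vision code: the target width, field of view, and pixel counts below are made-up example numbers, and the exact form uses the half-angle (for small angles it reduces to the length/tan(angle) form above).

```python
import math

def angle_from_pixels(pixel_width, image_width_px, horizontal_fov_rad):
    """Rough angular size of the target from its width in pixels,
    assuming a simple pinhole camera (ignores lens distortion)."""
    return pixel_width * horizontal_fov_rad / image_width_px

def distance_to_target(target_length_m, angle_subtended_rad):
    """Range from a target's known physical length and the angle it
    subtends in the image. Exact form uses the half-angle; for small
    angles it reduces to length / tan(angle)."""
    return (target_length_m / 2) / math.tan(angle_subtended_rad / 2)

# Hypothetical numbers: a 0.5 m wide target subtending 5 degrees.
d = distance_to_target(0.5, math.radians(5.0))   # roughly 5.7 m away
```

Note this only constrains range, not bearing, which is exactly the "where on the circle" problem raised below.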
But that only gives me distance; how do I determine where I lie on that circle?
We took a stab at this exact same situation a few different years, and it's actually significantly harder than it sounds. Aside from the actual mathematics of it, the cameras we have access to do not produce a "flat" image. They have spherical distortion that isn't really obvious until you try doing math on different parts of the image. Even high-end cameras used for actual aerial survey have this problem; we see it in "orthophotos" that have been corrected to give a reasonably "flat" picture you can make measurements on. The jagged corrections made to straight-line objects like roads get more and more obvious the farther you get from the center of the frame. Also, parallax and other problems.
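To make the distortion issue concrete: lens distortion is commonly modeled with radial terms (e.g. the Brown-Conrady model, which is what OpenCV's calibration uses). A tiny illustrative sketch, with a made-up first-order coefficient, showing why points near the edge of the frame are displaced far more than points near the center:

```python
def apply_radial_distortion(x, y, k1):
    """First-order Brown-Conrady radial term: a point is scaled by a
    factor that grows with its squared distance from the optical
    center, which is why straight lines bow near the image edges.
    x, y are normalized image coordinates (optical center at 0, 0)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# k1 = -0.2 is an invented barrel-distortion coefficient.
near_center = apply_radial_distortion(0.1, 0.0, -0.2)  # barely moves
near_edge = apply_radial_distortion(1.0, 0.0, -0.2)    # moves a lot
```

Real pipelines calibrate the coefficients once (e.g. with a checkerboard) and undistort the image before doing any of the trigonometry above.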
We can certainly learn things from the camera image, don’t get me wrong.
Definitely rooting you on for trying to figure this out, and I'm definitely not saying it's not objectively possible. It's just really, really hard with the equipment and processing power we have.
How does the camera handle multiple targets (If I’m at an angle where two high goals can be seen by the camera at once)?
OK. Is there a way to accurately track our robot's position on the field? We tried using a 6DOF IMU from SparkFun last year, and due to the compressor's vibrations, the readings were so inaccurate they were unusable.
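One cheap thing worth trying before giving up on the IMU is a low-pass filter to knock down the high-frequency vibration noise. A minimal sketch (the alpha value and the simulated vibration amplitude are invented; real tuning is by experiment, and heavy filtering adds lag):

```python
class LowPassFilter:
    """Exponential moving average: alpha closer to 0 filters harder
    but makes the estimate respond more slowly to real motion."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = None

    def update(self, sample):
        if self.state is None:
            self.state = sample
        else:
            self.state = self.alpha * sample + (1 - self.alpha) * self.state
        return self.state

# Simulated sensor: true value 1.0 plus alternating +/-0.5 vibration.
lpf = LowPassFilter(alpha=0.1)
for i in range(200):
    noisy = 1.0 + (0.5 if i % 2 == 0 else -0.5)
    estimate = lpf.update(noisy)
# estimate ends up close to the true 1.0 despite the vibration
```

Mechanical isolation (mounting the IMU on foam, away from the compressor) usually helps more than any amount of software filtering, though.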
That’s another issue we ran into…the camera is generally not stable enough for detailed calculations…of course, maybe that was just the mecanum wheels…
That’s a very good question. The answer to how to create a true “FRC GPS” is probably going to be a combination of techniques that might include camera data, inertial guidance data, floor features, range-finders, etc. that can all be used to develop a “reasonable” position on the field. But someone’s software team is going to have their work cut out for them, that’s for sure.
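One lightweight way to combine those sources is a complementary-filter-style blend: trust dead-reckoned odometry at high rate, and nudge it toward an absolute camera fix whenever one is available. A 1-D sketch (the weight and the numbers are invented tuning examples, not anyone's tested values):

```python
def fuse(odometry_estimate, camera_fix, camera_weight=0.1):
    """Blend a drifting dead-reckoned position with an absolute fix.
    camera_weight is a made-up knob for how much you trust vision;
    higher values correct drift faster but pass through vision noise."""
    return (1 - camera_weight) * odometry_estimate + camera_weight * camera_fix

# Odometry has drifted to 3.4 m while a camera fix says 3.0 m.
blended = fuse(3.4, 3.0)   # pulled partway back toward the fix
```

A Kalman filter does the same job with principled weights derived from each sensor's noise, but this version is easy to get running on robot hardware.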
We should suggest for next year's game that there be a few devices on the outside of the field that your robot can ping to calculate its position.
Well, that’s certainly where things get interesting… As you said earlier, you know where on a circle you are right now… all you need is one more circle to narrow it down to only two possible locations. I’d be curious to see what kinds of results someone could get with two cameras… one looking at each tower (assuming you can see them). Dunno if the control system infrastructure could support that, but that would get you going in the right direction.
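The two-circle idea is just circle-circle intersection, and the geometry is standard. A sketch with hypothetical tower positions and ranges (picking between the two candidate points is up to field geometry or a heading sensor):

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersect two circles given as (center, radius). Returns zero
    or two points; with two range measurements the robot sits at one
    of the two intersections."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []  # circles don't intersect (or are concentric)
    # Distance from c1 along the center line to the chord of intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half the chord length
    # Point on the center line below/above the intersections
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    # Step perpendicular to the center line in both directions
    ox = h * (y2 - y1) / d
    oy = h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Hypothetical towers at (0, 0) and (10, 0); robot measures 6 m to each.
pts = circle_intersections((0.0, 0.0), 6.0, (10.0, 0.0), 6.0)
# Two candidates, mirrored across the line between the towers.
```

Since the two solutions are mirror images across the tower-to-tower line, even knowing which half of the field you're on is enough to disambiguate.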