Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Rules/Strategy (http://www.chiefdelphi.com/forums/forumdisplay.php?f=6)
-   -   Depth Perception (http://www.chiefdelphi.com/forums/showthread.php?t=99726)

~Cory~ 09-02-2012 23:40

Re: Depth Perception
 
We had some problems using the camera for distance with just straight math after detection and filtering. So we switched gears and took a curve-fitting route: we graphed known distances against values calculated from the detected points and fit a curve to the data. The curve made things very accurate and precise, since it absorbs whatever error existed at each distance (e.g. the algorithm finding the outside of the square). I'm not sure exactly what was causing those issues, but we now have 2-inch accuracy at 54 feet.
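A minimal sketch of the calibration-curve idea described above: record a vision measurement (here, target height in pixels — an assumed choice) at several known distances, then interpolate between the samples at runtime. All the numbers below are invented placeholders; you would collect your own table on the practice field.

```python
import bisect

# (pixel_height, distance_ft) pairs measured at known distances,
# sorted by pixel height (farther targets appear smaller).
# These values are made up for illustration.
CALIBRATION = [
    (40, 54.0),
    (60, 36.0),
    (90, 24.0),
    (140, 15.0),
    (220, 9.0),
]

def distance_from_pixels(pixel_height):
    """Linearly interpolate distance (ft) from target pixel height."""
    heights = [h for h, _ in CALIBRATION]
    # Clamp to the calibrated range rather than extrapolating.
    if pixel_height <= heights[0]:
        return CALIBRATION[0][1]
    if pixel_height >= heights[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_left(heights, pixel_height)
    (h0, d0), (h1, d1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (pixel_height - h0) / (h1 - h0)
    return d0 + t * (d1 - d0)
```

Because the lookup is driven by real measurements, it bakes in whatever systematic error the detection has at each distance — which is exactly why this beat the "straight math" approach for Cory's team.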

ianonavy 11-02-2012 12:42

Re: Depth Perception
 
Quote:

Originally Posted by ~Cory~ (Post 1123534)
We had some problems using the camera for distance with just straight math after detection and filtering. So we switched gears and took a curve-fitting route: we graphed known distances against values calculated from the detected points and fit a curve to the data. The curve made things very accurate and precise, since it absorbs whatever error existed at each distance (e.g. the algorithm finding the outside of the square). I'm not sure exactly what was causing those issues, but we now have 2-inch accuracy at 54 feet.

How do you calculate distance from points? Do you just look at the size of the target?

Chris is me 11-02-2012 13:11

Re: Depth Perception
 
Are you attempting to write "general" camera tracking code? With or without human adjustment / compensation? Is this for autonomous mode?

There are many situations you could be working toward, each with different constraints, and you may find that you're best served by a different solution for each mode. One example: in autonomous mode, you have a more or less known starting position, and the rules prohibit opponent interference. If you can shoot from there reliably with relatively simple camera aiming, then you have "figured out" autonomous scoring without camera depth perception.

Teams in Aim High often reported shooting from particular "sweet spots": small areas on the field the robot was tuned to score from most reliably. Aim High was obviously a different game, from the different-size hoop to a very different game piece, but this is the kind of thing to look into if, for whatever reason, camera depth perception isn't reliable enough for your constraints.

Do keep in mind that this post is coming from the perspective of a member of a team that doesn't have the expertise to do complex software control loops, so my instinct is to look for as many ways to eliminate a need for software as possible. Take this post as a reminder that there may be more than one way to solve the problem of reliably making shots with your particular design - not just camera depth perception. Whether or not you give that up, and when, is one of the many, many challenges of this particular game.

Scimor5 11-02-2012 14:33

Re: Depth Perception
 
We used the particle analysis function to find the height of the box after filtering with an HSL comparison. After collecting data at every foot from 3 to 23 feet and running a regression on my TI-84 (the equation came out to something like 470/x^0.89), our camera predicts distance from any angle to within a little more than an inch. The sweet spot for distance measurement is around 8 ft. Comparing the same measurement using the width against the measurement from the height tells us what angle we're at relative to the target.
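The TI-84 fit above is a power regression, d = a·h^(-b), which you can reproduce with an ordinary least-squares line fit in log-log space. A sketch, using invented sample data (substitute your own measurements; the values here happen to follow roughly d ≈ 450/h):

```python
import math

# (target_height_px, distance_ft) pairs from a hypothetical calibration run.
samples = [(150.0, 3.0), (50.0, 9.0), (30.0, 15.0), (21.0, 21.0)]

# Power regression: d = a * h**(-b)  =>  ln d = ln a - b * ln h,
# so a straight-line fit in log-log space recovers a and b.
xs = [math.log(h) for h, _ in samples]
ys = [math.log(d) for _, d in samples]
n = len(samples)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
a = math.exp(y_mean - slope * x_mean)
b = -slope

def distance(height_px):
    """Predicted distance (ft) for a target of the given pixel height."""
    return a * height_px ** (-b)
```

With an exponent near 1 this is close to the ideal pinhole-camera relation (apparent size inversely proportional to distance); the fitted exponent soaks up lens distortion and detection error, much like Scimor5's 0.89.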

ianonavy 11-02-2012 14:45

Re: Depth Perception
 
Quote:

Originally Posted by Scimor5 (Post 1124360)
We used the particle analysis function to find the height of the box after filtering with an HSL comparison. After collecting data at every foot from 3 to 23 feet and running a regression on my TI-84 (the equation came out to something like 470/x^0.89), our camera predicts distance from any angle to within a little more than an inch. The sweet spot for distance measurement is around 8 ft. Comparing the same measurement using the width against the measurement from the height tells us what angle we're at relative to the target.

Okay, that makes a lot more sense to me than trying to find distances based on targets. I'll have to try that later today with our test robot. It's a linear regression, right? (as opposed to a polynomial regression).

Thanks a lot!

KennyLives 11-02-2012 15:27

Re: Depth Perception
 
Quote:

Originally Posted by DuaneB (Post 1101720)
Note that the maximum range of the Maxbotix LV-MaxSonar®-EZ1 sonar range finder is 254 inches (21 ft / 6.5 m).

This little guy came in the KOP and works pretty well. I say use it.
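For reference, the EZ1's analog output scales at roughly Vcc/512 volts per inch per the MaxBotix datasheet (about 9.8 mV/inch on a 5 V supply), so the conversion is a one-liner. The `read` plumbing is left out; this just shows the scaling, assuming a 5 V supply:

```python
SUPPLY_VOLTS = 5.0
VOLTS_PER_INCH = SUPPLY_VOLTS / 512.0  # EZ1 analog scaling, per datasheet

def sonar_inches(voltage):
    """Convert an EZ1 analog reading (volts) to range in inches."""
    return voltage / VOLTS_PER_INCH
```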

xhawaii808 11-02-2012 22:08

Re: Depth Perception
 
A touch sensor, because you know that it's touching the touch sensor. ::rtm:: ::rtm:: ::rtm:: ::rtm:: ::rtm::

fb39ca4 18-02-2012 00:44

Re: Depth Perception
 
If you have the camera mounted at a fixed angle, you can use the screen coordinates of the hoop in the image to determine the angle between the horizontal plane the camera occupies and the line between the camera and the hoop. As you know the height of the hoop from the ground, you can then use trig to calculate the distance.
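A minimal sketch of that geometry: the target's pixel row gives its angle above the camera's optical axis, add the fixed mount tilt, and the known height difference plus tan gives horizontal distance. Every constant below (mount height, target height, tilt, field of view, resolution) is an assumed placeholder, and the linear pixel-to-angle mapping is an approximation that holds for modest fields of view:

```python
import math

CAMERA_HEIGHT_FT = 1.5   # camera lens height above the floor (assumed)
TARGET_HEIGHT_FT = 8.2   # target center height above the floor (assumed)
CAMERA_PITCH_DEG = 20.0  # fixed upward tilt of the camera mount (assumed)
VERTICAL_FOV_DEG = 36.0  # camera vertical field of view (assumed)
IMAGE_HEIGHT_PX = 240    # vertical image resolution (assumed)

def distance_to_target(target_center_y_px):
    """Horizontal distance (ft) to the target.

    target_center_y_px is the pixel row of the target center,
    with row 0 at the top of the image.
    """
    # Target offset from image center as a fraction of the half-image;
    # positive means the target is above center.
    frac = (IMAGE_HEIGHT_PX / 2 - target_center_y_px) / (IMAGE_HEIGHT_PX / 2)
    # Angle of the target above the camera's optical axis.
    pixel_angle = frac * (VERTICAL_FOV_DEG / 2)
    total_angle = math.radians(CAMERA_PITCH_DEG + pixel_angle)
    # Known height difference / tan(elevation angle) = horizontal distance.
    return (TARGET_HEIGHT_FT - CAMERA_HEIGHT_FT) / math.tan(total_angle)
```

As dakaufma notes in the reply below this post, the method degrades when the camera is nearly level with the target: the elevation angle gets small and tan() amplifies pixel noise.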

dakaufma 18-02-2012 08:22

Re: Depth Perception
 
We are using that method and so far it is working quite well for us. Just make sure there is a sufficient height difference between the camera and the target.



Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi