#16
Re: Depth Perception
We had some problems using the camera for distance with just straight math after detection and filtering. However, we switched gears and took the fuzzy-logic route: we graphed known distances against the values calculated from the detected points and fit a curve through the data. The curve made things super accurate and precise, since it absorbs whatever error existed at each distance (i.e., the algorithm finding the outside of the square and such). I'm not sure exactly what was causing those issues, but now we have 2-inch accuracy at 54 feet.
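A minimal sketch of that calibration-curve idea (not the poster's actual code): record a camera-derived value, such as the target's height in pixels, at several known distances, then interpolate between those samples at runtime. All names and sample numbers below are invented for illustration.

```python
# Sketch of a distance calibration curve: interpolate between measured samples.
# The sample numbers are hypothetical; build a real table by measuring the
# target at known distances with your own camera and detection code.
from bisect import bisect_left

# (target_height_in_pixels, distance_in_feet), sorted by the pixel value
CALIBRATION = [
    (40, 54.0),
    (60, 36.0),
    (90, 24.0),
    (140, 15.0),
    (220, 9.0),
    (360, 5.0),
]

def distance_from_pixels(pixels: float) -> float:
    """Linearly interpolate distance from the calibration table."""
    keys = [p for p, _ in CALIBRATION]
    i = bisect_left(keys, pixels)
    if i <= 0:
        return CALIBRATION[0][1]
    if i >= len(CALIBRATION):
        return CALIBRATION[-1][1]
    (p0, d0), (p1, d1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (pixels - p0) / (p1 - p0)
    return d0 + t * (d1 - d0)

print(distance_from_pixels(75))  # falls between the 36 ft and 24 ft samples
```

Interpolating through measured points bakes in whatever systematic error the detection has at each range, which is presumably why this approach ended up more accurate than the closed-form math.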
#17
Re: Depth Perception
Quote:
#18
Re: Depth Perception
Are you attempting to write "general" camera tracking code? With or without human adjustment / compensation? Is this for autonomous mode?
There are many situations you could be working toward that have different constraints. Perhaps you'll find that you're best served by a different solution for each mode. One potential example: in autonomous mode, you have a more or less known starting position, and the rules prohibit opponent interference. If you can shoot from there reliably with relatively simple camera aiming, then you have "figured out" autonomous scoring without camera depth perception.

Teams in Aim High often reported shooting from particular "sweet spots", nice little areas on the field that the robot was tuned to score most reliably from. Obviously that was a different game, from the different-size hoop to a very different game piece, but this is the kind of thing one might need to look into if, for whatever reason, camera depth perception isn't reliable enough to work within your desired constraints.

Do keep in mind that this post comes from the perspective of a member of a team that doesn't have the expertise to do complex software control loops, so my instinct is to look for as many ways to eliminate a need for software as possible. Take this post as a reminder that there may be more than one way to solve the problem of reliably making shots with your particular design - not just camera depth perception. Whether or not you give that up, and when, is one of the many, many challenges of this particular game.
#19
Re: Depth Perception
We used the particle analysis function to find the height of the box after filtering it out with an HSL comparison. After collecting data at every foot from 3 to 23 feet and running a regression on my TI-84 (the equation came out to something like 470/x^(.89)), our camera predicts distance from any angle to within a little more than an inch. The sweet spot for distance measurement is around 8 ft. Comparing the same measurement made from the width against the measurement from the height tells us what angle we are at relative to the target.
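A sketch of that kind of power-law regression, assuming the fit has the form distance ≈ a / height_px^b, as the "470/x^(.89)" equation suggests. The calibration samples below are invented; a real table would come from measuring the target's pixel height at each known distance, as the post describes.

```python
# Fit distance = a / height_px**b by ordinary least squares in log-log space,
# i.e. log(distance) = log(a) - b*log(height). Sample data is hypothetical.
import math

# (target_height_in_pixels, measured_distance_in_feet)
samples = [(200, 3.0), (95, 6.0), (60, 9.0), (44, 12.0), (30, 17.0), (23, 23.0)]

xs = [math.log(h) for h, _ in samples]
ys = [math.log(d) for _, d in samples]
n = len(samples)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

a, b = math.exp(intercept), -slope  # distance = a / height**b

def distance_ft(height_px: float) -> float:
    return a / height_px ** b

print(round(a, 1), round(b, 2))    # fitted constants
print(round(distance_ft(70), 2))   # predicted distance at a 70 px tall target
```

Running the same fit on the target's width gives a second distance estimate; since the apparent width shrinks faster than the height as the robot moves off-axis, comparing the two estimates is one way to get the viewing angle the post mentions.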
#20
Re: Depth Perception
Quote:
Thanks a lot!
#21
Re: Depth Perception
Quote:
#22
Re: Depth Perception
A touch sensor, because then you know it's touching the touch sensor.
#23
Re: Depth Perception
If you have the camera mounted at a fixed angle, you can use the screen coordinates of the hoop in the image to determine the angle between the horizontal plane the camera occupies and the line between the camera and the hoop. As you know the height of the hoop from the ground, you can then use trig to calculate the distance.
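A minimal sketch of that fixed-camera-angle trig method. The mount height, tilt, field of view, and image size below are hypothetical placeholders; the only thing taken from the post is the geometry: a known height difference plus the elevation angle gives the horizontal distance.

```python
# Fixed-angle camera trig: known height difference / tan(elevation) = distance.
# All constants are placeholders; substitute your camera's actual specs.
import math

CAMERA_HEIGHT_FT = 1.5     # camera lens height above the floor
TARGET_HEIGHT_FT = 8.2     # height of the target center above the floor
CAMERA_TILT_DEG = 20.0     # upward tilt of the camera mount
VERTICAL_FOV_DEG = 37.0    # camera's vertical field of view
IMAGE_HEIGHT_PX = 480

def distance_to_target(target_center_y_px: float) -> float:
    """Estimate horizontal distance (ft) from the target's vertical pixel position.

    Pixel rows start at 0 at the top of the image, so a target above the image
    center yields a positive angle above the camera's optical axis.
    """
    # Angle of the target above the optical axis (linear pixel-to-angle mapping
    # is an approximation; an atan-based mapping is more exact near the edges).
    offset_px = (IMAGE_HEIGHT_PX / 2.0) - target_center_y_px
    pixel_angle_deg = offset_px * (VERTICAL_FOV_DEG / IMAGE_HEIGHT_PX)

    # Total elevation angle measured from the horizontal plane of the camera.
    elevation_rad = math.radians(CAMERA_TILT_DEG + pixel_angle_deg)

    # Known height difference + elevation angle -> horizontal distance.
    return (TARGET_HEIGHT_FT - CAMERA_HEIGHT_FT) / math.tan(elevation_rad)

print(round(distance_to_target(180.0), 2))  # target slightly above image center
```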
#24
Re: Depth Perception
We are using that method and so far it is working quite well for us. Just make sure there is a sufficient height difference between the camera and the target.