04-12-2010, 11:58 PM
davidthefat (David Yoon)
FRC #0589 (Falkons)
Team Role: Alumni
Join Date: Jan 2011
Rookie Year: 2010
Location: California
Posts: 792
Re: IR Scanning Method Of Object Recognition

Quote:
Originally Posted by Radical Pi
Let's see if I have this right (see attached image as well). We know one leg, the relative height of the camera. The IR system sends out the beam across everything in LoS. The camera picks up on this, and determines the angle of the object from the camera (simple y-coordinate conversion). Do a bit of trig, and we end up with the length of the IR beam; the distance to the object from the robot.

Assuming that's your idea, we can expand on this a bit more. We not only have the Y-coordinate of the object in the camera image, but also the X. With the X, we can calculate the horizontal angle of the object from the camera, giving us a 2D vector to the object.

Honestly, I think the entire 3D idea is a bit over the top. There are only a few things to expect on the field at any point in the game: balls, robots, and static field elements. With such a small set of things to expect, it shouldn't be that hard to differentiate between them: static elements will span large areas, balls will be relatively small (short IR reflection length), and robots can easily be detected by the camera because of the GDC's convenient bumper requirements. A 2D field is plenty for the AI to work with, and in some ways easier too. It's not like we're ever going to be able to track balls in the air in real time.

I see a few problems, though. Because of the camera's limited FoV combined with physical obstructions such as the bumpers, there are going to be areas right in front of the robot where the camera cannot see objects, and the system breaks down there. However, one could argue that those areas are right up next to the robot, and objects there could possibly even be detected by the lack of any IR beam in the area they cover. Being so close to the robot, plotting them becomes less useful than simply knowing something is there.

Also, as an object gets farther away, the change in camera angle per unit of distance shrinks toward zero, and the accuracy of the reading follows the same trend. Since the cameras are obviously not perfect in their detection, distant objects will most likely start to return erratic values, diminishing the system's usefulness at long range.
That picture is exactly what I am saying. You are correct that it's only effective out to a certain range, but do you think that's really a disadvantage? Depending on the game next year, we can work out where to go by making a map of the field, tracking the robot's location on it, and making decisions based on that info. Like in StarCraft, there is a Macro part and a Micro part; the Micro is the individual skirmishes that happen, and that same split can be applied to this robot.
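
Here is a rough sketch of the trig in that picture, just to make it concrete: one known leg (the camera height) plus the angle read off the image gives the distance along the floor, and the X pixel gives the bearing. All of the camera numbers (height, tilt, FoV, resolution) are made-up placeholders, and pixelToFloor is just a name I picked; you would measure your own camera.

Code:
// Rough sketch: known leg (camera height) + angle from the image row
// gives floor distance; the image column gives the bearing.
// All camera numbers below are placeholder assumptions.
#include <cmath>
#include <cstdio>

const double PI               = 3.14159265358979;
const double CAMERA_HEIGHT_IN = 30.0;   // lens height above the floor (assumption)
const double MOUNT_PITCH_DEG  = 20.0;   // camera tilted down from horizontal (assumption)
const double VERT_FOV_DEG     = 35.0;   // vertical field of view (assumption)
const double HORIZ_FOV_DEG    = 47.0;   // horizontal field of view (assumption)
const int    IMG_WIDTH        = 320;
const int    IMG_HEIGHT       = 240;

double degToRad(double deg) { return deg * PI / 180.0; }

// Convert the pixel where the IR line hits an object into a robot-relative
// (forward, sideways) floor position in inches. pixelY is measured from the
// top of the image, pixelX from the left. Returns false at/above the horizon.
bool pixelToFloor(int pixelX, int pixelY, double* forward, double* sideways)
{
    // Image row -> angle below horizontal (the second piece of the triangle).
    double rowOffset  = (pixelY - IMG_HEIGHT / 2.0) / IMG_HEIGHT;   // -0.5 .. +0.5
    double depression = degToRad(MOUNT_PITCH_DEG + rowOffset * VERT_FOV_DEG);
    if (depression <= 0.0) return false;

    // Known leg + angle -> distance along the floor.
    double groundDist = CAMERA_HEIGHT_IN / tan(depression);

    // Image column -> bearing left/right of the camera axis.
    double colOffset = (pixelX - IMG_WIDTH / 2.0) / IMG_WIDTH;      // -0.5 .. +0.5
    double bearing   = degToRad(colOffset * HORIZ_FOV_DEG);

    *forward  = groundDist * cos(bearing);
    *sideways = groundDist * sin(bearing);
    return true;
}

int main()
{
    double fwd = 0.0, side = 0.0;
    if (pixelToFloor(200, 180, &fwd, &side))
        printf("object %.1f in ahead, %.1f in to the right\n", fwd, side);
    return 0;
}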
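
And a quick numeric check of the two problems pointed out above (the blind spot up close and the resolution falling off with distance), using the same placeholder camera numbers:

Code:
// Check the near-field blind spot and how many inches of range one pixel
// row covers at different distances. Same placeholder camera assumptions.
#include <cmath>
#include <cstdio>

const double PI               = 3.14159265358979;
const double CAMERA_HEIGHT_IN = 30.0;
const double MOUNT_PITCH_DEG  = 20.0;
const double VERT_FOV_DEG     = 35.0;
const int    IMG_HEIGHT       = 240;

// Floor distance seen by a given image row, or -1 if that row is above the horizon.
double groundDistForRow(int row)
{
    double rowOffset  = (row - IMG_HEIGHT / 2.0) / IMG_HEIGHT;
    double depression = (MOUNT_PITCH_DEG + rowOffset * VERT_FOV_DEG) * PI / 180.0;
    if (depression <= 0.0) return -1.0;
    return CAMERA_HEIGHT_IN / tan(depression);
}

int main()
{
    // Bottom row of the image = closest floor point the camera can see.
    printf("blind spot: floor closer than %.1f in is invisible\n",
           groundDistForRow(IMG_HEIGHT - 1));

    // Inches of range covered by one pixel row, at a few points in the image.
    for (int row = IMG_HEIGHT - 1; row > 0; row -= 40) {
        double dNear = groundDistForRow(row);
        double dFar  = groundDistForRow(row - 1);
        if (dNear < 0.0 || dFar < 0.0) break;
        printf("around %5.1f in out, one pixel row spans %.2f in of range\n",
               dNear, dFar - dNear);
    }
    return 0;
}

With these made-up numbers the camera cannot see the floor closer than about 40 inches out, and by roughly 17 feet a single pixel row already spans several inches of range, so the far readings will get noisy just as described.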
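
For the Macro part of the StarCraft analogy, something as simple as a coarse occupancy grid of the field might be enough to start with; the Micro part (close-in maneuvering) would run off the raw detections. The field size, cell size, and the lane-check rule below are all placeholder assumptions, not anything from the actual game:

Code:
// Toy "macro" layer: a coarse 2D map of the field where cells are marked
// whenever the IR/camera pass reports something. Dimensions are assumptions.
#include <cstdio>
#include <cstring>

const double CELL_IN = 12.0;          // one-foot grid cells (assumption)
const int    COLS    = 54;            // ~54 ft field length in 1 ft cells (assumption)
const int    ROWS    = 27;            // ~27 ft field width in 1 ft cells (assumption)

unsigned char occupied[ROWS][COLS];   // 0 = free, 1 = something detected there

// Called for every detection, after combining the camera/IR reading with the
// robot's own field position to get field coordinates in inches.
void markDetection(double fieldX, double fieldY)
{
    int c = (int)(fieldX / CELL_IN);
    int r = (int)(fieldY / CELL_IN);
    if (r >= 0 && r < ROWS && c >= 0 && c < COLS)
        occupied[r][c] = 1;
}

// Crude "macro" decision: is the lane straight ahead of the robot clear for
// the next few feet? The "micro" layer would handle the close-in moves.
bool laneAheadClear(double robotX, double robotY, double lookaheadIn)
{
    int r = (int)(robotY / CELL_IN);
    if (r < 0 || r >= ROWS) return false;             // off the field, play it safe
    int cEnd = (int)((robotX + lookaheadIn) / CELL_IN);
    for (int c = (int)(robotX / CELL_IN); c <= cEnd && c >= 0 && c < COLS; ++c)
        if (occupied[r][c]) return false;
    return true;
}

int main()
{
    memset(occupied, 0, sizeof(occupied));
    markDetection(100.0, 150.0);                      // pretend the camera saw a ball here
    printf("lane clear: %s\n", laneAheadClear(60.0, 150.0, 72.0) ? "yes" : "no");
    return 0;
}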
__________________
Do not say what can or cannot be done, but, instead, say what must be done, for the task at hand must be accomplished.