Aerial Camera for FIRST matches (http://www.chiefdelphi.com/forums/showthread.php?t=130247)

faust1706 13-08-2014 21:39

Re: Aerial Camera for FIRST matches
 
It isn't pixel coordinates I am transforming, but real-world coordinates of the objects from the Kinect: x is left/right, y is straight-ahead distance. Here is an example: http://www.chiefdelphi.com/media/photos/39138

If you're really interested, here is the code:

Code:

#include <cmath>
#include <utility>

// Maps a Kinect reading (depthx, depthy), measured relative to the robot,
// into field coordinates from the robot's pose. heading is in radians.
std::pair<int, int> Translation(double robotx, double roboty,
                                double depthx, double depthy, double heading)
{
    double x = robotx + depthx;
    double y = roboty - depthy;  // original had robotx here; assuming a typo
    // Rotate by the heading (2D rotation matrix), using fresh variables so
    // the rotated x doesn't feed into the y computation.
    double mapx =  x * std::cos(heading) + y * std::sin(heading);
    double mapy = -x * std::sin(heading) + y * std::cos(heading);
    return std::make_pair(static_cast<int>(mapx), static_cast<int>(mapy));
}

It requires knowing where the robot is on the field, which we get from our vision solution. First we assume the robot has a heading of 0 degrees, facing directly to the left. Then I account for the actual heading by multiplying by the 2D rotation matrix (though the expanded-out code doesn't look like one).
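Just to show the call shape, here's a quick usage sketch with made-up numbers (assuming the Translation above is in scope; this isn't from our actual robot code):

Code:

#include <cstdio>

int main()
{
    // Made-up example: robot at (100, 50) on the map, object seen 30 units
    // ahead and 10 to the side, robot heading of pi/2 radians (90 degrees).
    std::pair<int, int> p = Translation(100.0, 50.0, 30.0, 10.0, 3.14159265 / 2.0);
    std::printf("map position: (%d, %d)\n", p.first, p.second);
    return 0;
}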

You're right, it is localization. It is a little (a lot) more complex than calculating scaling factors. It is called pose estimation (http://docs.opencv.org/modules/calib...struction.html).
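If anyone wants to try it, OpenCV's solvePnP does the heavy lifting. A minimal sketch, with placeholder target corners and intrinsics rather than our real values:

Code:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

int main()
{
    // 3D corners of one vision target in model coordinates (placeholder
    // inches), and the same corners as detected in the image (placeholder px).
    std::vector<cv::Point3f> objectPoints = {
        cv::Point3f(0, 0, 0), cv::Point3f(24, 0, 0),
        cv::Point3f(24, 18, 0), cv::Point3f(0, 18, 0)};
    std::vector<cv::Point2f> imagePoints = {
        cv::Point2f(310, 225), cv::Point2f(402, 228),
        cv::Point2f(399, 298), cv::Point2f(308, 295)};

    // Intrinsics from camera calibration (fx, fy, cx, cy) -- placeholders.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        700, 0, 320,
        0, 700, 240,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);  // assume undistorted

    cv::Mat rvec, tvec;  // rotation (Rodrigues vector) and translation
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                 rvec, tvec);
    // tvec is now the target's position in camera coordinates.
    return 0;
}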

You are also right about being blind when the goals are out of frame. In 2012 we had a really high camera (relative to the other robots' heights) that rotated to always face the goal. In 2013 our camera was rather low, but so were most robots, and all we used was distance; those pesky pyramids were also a problem. This year we used three 120-degree cameras (http://www.geniusnet.com/Genius/wSit...14&ctNode=161), and there were vision tapes in all 4 corners, so we built a custom GPS-style triangulation (intersection of n circles, where 2 < n <= 8). This proved very accurate, but we didn't use it in competition; it was testing for future years and knowledge for knowledge's sake. Code can be found here: https://www.dropbox.com/sh/arj7y11wf...QfPB8v0EZaff5a
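For the circle intersection, the usual trick is to subtract the first circle's equation from the others, which cancels the squared terms and leaves a linear system in (x, y), then solve it least-squares. A sketch of that idea (not our actual code, which is at the Dropbox link above):

Code:

#include <cmath>
#include <cstddef>
#include <vector>

struct Circle { double x, y, r; };  // landmark position and measured range

// Least-squares fix from n range circles (needs n >= 3 and landmarks that
// aren't all collinear). Each circle i > 0 contributes one linear equation
// after circle 0's equation is subtracted from it.
bool Trilaterate(const std::vector<Circle>& c, double& px, double& py)
{
    if (c.size() < 3) return false;
    double a00 = 0, a01 = 0, a11 = 0, b0 = 0, b1 = 0;  // normal equations
    for (std::size_t i = 1; i < c.size(); ++i) {
        double ax = 2.0 * (c[i].x - c[0].x);
        double ay = 2.0 * (c[i].y - c[0].y);
        double b  = c[0].r * c[0].r - c[i].r * c[i].r
                  + c[i].x * c[i].x - c[0].x * c[0].x
                  + c[i].y * c[i].y - c[0].y * c[0].y;
        a00 += ax * ax;  a01 += ax * ay;  a11 += ay * ay;
        b0  += ax * b;   b1  += ay * b;
    }
    double det = a00 * a11 - a01 * a01;
    if (std::fabs(det) < 1e-9) return false;  // degenerate geometry
    px = (a11 * b0 - a01 * b1) / det;
    py = (a00 * b1 - a01 * b0) / det;
    return true;
}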

Skew can be accounted for by calibrating the camera, which we didn't do this year (http://www.chiefdelphi.com/media/photos/39466, and for a read: http://docs.opencv.org/doc/tutorials...libration.html). The pose estimation DOES take into account focal length and so on. In 2013 I said screw it when developing the program and didn't add a correction feature; same in 2014. If you look closely at the 2012 image I sent, there are purple crosshairs near the center of each target. Those are the reprojections of my 3D estimates of where the targets are back onto the screen.
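The crosshairs are just the reprojection step: push the estimated 3D points back through the camera model. In OpenCV terms that's projectPoints, roughly like this (placeholder values again):

Code:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Reproject a known 3D target center back into the image using the pose
// from solvePnP, to see where the crosshair should land. Placeholder model
// coordinates; cameraMatrix/distCoeffs come from calibration.
cv::Point2f ReprojectCenter(const cv::Mat& rvec, const cv::Mat& tvec,
                            const cv::Mat& cameraMatrix,
                            const cv::Mat& distCoeffs)
{
    std::vector<cv::Point3f> center(1, cv::Point3f(12, 9, 0));
    std::vector<cv::Point2f> projected;
    cv::projectPoints(center, rvec, tvec, cameraMatrix, distCoeffs,
                      projected);
    return projected[0];  // pixel where the crosshair gets drawn
}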

As for linearization... I did a (custom) regression on data for the pixels' .y value vs. distance. I don't have the image on this computer; I'll add it in later tonight. I think that is what you mean by linearization.
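Our regression was custom, so I won't try to reproduce it from memory, but the plain least-squares version of fitting distance against pixel y looks like this (a stand-in sketch; in practice the relationship isn't linear, so a nonlinear model would fit better):

Code:

#include <cstddef>
#include <vector>

// Ordinary least-squares line fit: distance ~= m * pixelY + b.
void FitDistance(const std::vector<double>& pixelY,
                 const std::vector<double>& dist, double& m, double& b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const std::size_t n = pixelY.size();
    for (std::size_t i = 0; i < n; ++i) {
        sx  += pixelY[i];
        sy  += dist[i];
        sxx += pixelY[i] * pixelY[i];
        sxy += pixelY[i] * dist[i];
    }
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    b = (sy - m * sx) / n;  // needs at least two distinct pixelY samples
}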

Your method seems more versatile and robust, which is why I am interested in it.

Sorry for the wall of text.

