Quote:
Originally Posted by SoftwareBug2.0
He seems to be assuming that all robots are rectangles with bumper colors in the obvious locations. If that's the case and he goes through with his plan for an autonomous robot, then I'll look forward to seeing what his robot does when going up against a robot that looks like this: http://www.idleloop.com/frctracker/p.../2013/2972.jpg
|
Eh. There are multiple ways to track other robots. One is cascade training, but that would require me, or someone else, going around to every other robot at our regional and taking a multitude of pictures of each of them, which might not be welcomed. Cascade classifiers are also notoriously slow at runtime.
The approach I am going with is a depth camera. The ball should return as a sphere, with the closest point (theoretically) being its center, found via an image-moment calculation. With that known, you can do a couple of things. If the object you are looking at isn't a circle, it isn't a ball; simple enough, right? I think so. Another check is to take the moment of a given contour (or just a segment of the screen, if Canny is chosen), then see whether the centroid really is the closest point within the whole contour. For anything that isn't a ball, I'm going to bet it isn't, which two of my mentors and I agreed was another fair assumption.
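Here's a rough sketch of those two checks in plain numpy (in practice you'd pull the moments from OpenCV's `cv2.moments` on a contour, but the math is the same). The function name, the thresholds, and the synthetic depth image are all my own illustration, not anyone's actual code:

```python
import numpy as np

def looks_like_ball(depth, mask, tol_px=3.0):
    """Heuristic ball test along the lines described above:
    (1) the silhouette should be roughly circular, and
    (2) the closest depth reading inside it should sit near its centroid.
    `depth` is a 2-D array of range readings, `mask` a boolean silhouette.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return False
    # first-order moments / area = centroid of the silhouette
    cx, cy = xs.mean(), ys.mean()
    # second-order central moments -> elongation test: for a disc the
    # two eigenvalues of the covariance matrix are equal, so their
    # ratio is a crude circularity score (1.0 for a perfect disc)
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    lam = np.linalg.eigvalsh([[mu20, mu11], [mu11, mu02]])
    circular = lam[1] > 0 and lam[0] / lam[1] > 0.8
    # find the closest point inside the silhouette and compare it
    # to the centroid
    d = np.where(mask, depth, np.inf)
    my, mx = np.unravel_index(np.argmin(d), d.shape)
    center_is_closest = np.hypot(mx - cx, my - cy) <= tol_px
    return bool(circular and center_is_closest)
```

A flat or tilted surface fails the closest-point test even if its outline happens to be round, which is exactly the discriminating power the second check is after.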
Then you can calculate its velocity by recording its position relative to you in the previous frame, finding it again in the current frame, and doing vector math on the difference, since you know how much time passed between the two frames. The same math applies to calculating the velocity of the ball.
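That vector math is just a finite difference: displacement between the two frames divided by the frame interval. A minimal sketch (function name and the 30 fps interval are my assumptions):

```python
import numpy as np

def velocity_between_frames(prev_pos, curr_pos, dt):
    """Velocity vector of a tracked object, relative to the camera,
    from its positions in two consecutive frames dt seconds apart."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    curr_pos = np.asarray(curr_pos, dtype=float)
    # displacement / elapsed time, component-wise
    return (curr_pos - prev_pos) / dt
```

This is velocity relative to your own robot; if your robot is moving, you'd add your own velocity to get a field-relative value.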