This was an early build season project to detect the game pieces on the ground for easy pick up. It calculates the rotation in the x axis in degrees and runs on a cheap $4 camera, so the fps isn’t anything to brag about. It separately tracks each coloured disk, returning the value of the closest disk for each colour. The other programmer on our team was working on automating pick up, but our team decided that we weren’t going to have a floor pick up.
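For the curious, here is a minimal sketch of how a disk's pixel offset can be mapped to that x-axis rotation. This is not our exact code; the function name and FOV handling are just illustrative, and it assumes a simple pinhole model with a known horizontal field of view for your camera:

#include <cmath>

// Map a disk's pixel x coordinate to a yaw angle in degrees.
// imageWidth and hFovDegrees are whatever your particular camera actually has.
double pixelToYawDegrees(double diskCenterX, double imageWidth, double hFovDegrees)
{
    double halfWidth = imageWidth / 2.0;
    double focalPx = halfWidth / tan((hFovDegrees / 2.0) * M_PI / 180.0); // focal length in pixels
    double offsetPx = diskCenterX - halfWidth;                            // signed offset from image center
    return atan2(offsetPx, focalPx) * 180.0 / M_PI;                       // positive = disk is to the right
}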
We created a program to do this also. The programmers found a little spare time and figured it might be useful. We ended up not using it, but the concept is very cool!
You just can’t post something like this and leave no details, it’s just not nice!
OK, you got my interest piqued. I have just completed coding up a target tracking system using a PCDuino running Ubuntu and OpenCV. I’m getting approx. 20 fps of solid tracking data back to LabView. That is tracking rectangular targets. Total cost for PCDuino, Microsoft WebCam and class 10 uSD card: ~$95. That is far less expensive than a new Network camera.
While working on this project, I realized it is more than likely that I will want to also locate circular targets some time in the near future. (Maybe sooner than I think (2014???) )
So, how did you go about getting these targets located?
When I ran it with a Kinect, the fps was about 20, so it was indeed the camera.
I used Ubuntu 12.10 with the newest version of the OpenCV libraries.
I captured an RGB image, converted it to HSV, then split the HSV image into its three channels: H, S, and V. Then I thresholded to eliminate all but one colour (red, white, or blue). That leaves a binary image, which OpenCV likes. I found the contours of the image using cvFindContours, and found the center of each contour using image moments (tutorial found here: http://www.aishack.in/2010/07/tracking-colored-objects-in-opencv/). Note: this guy, who I admire a lot, used
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(imgYellowThresh, moments, 1);
where I replaced the source from being an entire image to just the contour, so I could track multiple things at once. So my code read:
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(Contour, moments, 1);
That gives sub-pixel accuracy for the center. Moving on. To draw the circle, I used cvApproxPoly and said that if the result has more than 5 sides, I’m going to make the assumption it is a circle; if it has fewer than 5 sides, I don’t like it.
The next step is fitting a box around each contour and drawing the largest ellipse that will fit within that box. That ellipse is what gets coloured on the image.
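To put the whole per-colour pass in one place, here is a rough sketch of it written against the C++ OpenCV interface rather than the C one I actually used. The function name, the red HSV range, and the green outline colour are just examples, not our tuned values:

#include <opencv2/opencv.hpp>
#include <vector>

// Find the centers of disks of one colour in a BGR frame, drawing an ellipse on each hit.
std::vector<cv::Point2d> findDisks(cv::Mat& frame)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Example threshold for a red-ish hue; real values come from tuning on the field.
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2d> centers;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Reject shapes that approximate to fewer than 5 sides (not round enough to be a disk).
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx, 0.01 * cv::arcLength(contours[i], true), true);
        if (approx.size() < 5)
            continue;

        // Image moments give a sub-pixel centroid for the contour.
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 > 0)
            centers.push_back(cv::Point2d(m.m10 / m.m00, m.m01 / m.m00));

        // Fit a box around the contour and draw the largest ellipse that fits inside it.
        cv::Rect box = cv::boundingRect(contours[i]);
        cv::ellipse(frame, cv::Point(box.x + box.width / 2, box.y + box.height / 2),
                    cv::Size(box.width / 2, box.height / 2), 0, 0, 360,
                    cv::Scalar(0, 255, 0), 2);
    }
    return centers;
}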
I’ll post the program up here whenever I get a chance to take it from a computer at school.
Also, I only saved the frisbee that was closest to the camera for each colour, i.e. the one lowest on the screen, which is the one with the largest y value (OpenCV’s coordinate plane is positive x to the right, positive y down). It was a simple algorithm that I actually used a lot in tracking the alliance wall:
CvPoint CurrentFrisbee;          // center of the frisbee being examined
double PrevClosestFrisbee = 0;   // largest y seen so far (closest to the camera)
if (CurrentFrisbee.y > PrevClosestFrisbee)
{
    PrevClosestFrisbee = CurrentFrisbee.y;
}
A very simple yet very powerful conditional statement.
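Wrapped up as a function over the list of detected centers from the sketch above (names are illustrative, not our actual code):

#include <opencv2/opencv.hpp>
#include <vector>

// Return the disk center lowest on the screen (largest y), i.e. the one closest to the robot.
// Returns (-1, -1) if no disks of that colour were found.
cv::Point2d closestDisk(const std::vector<cv::Point2d>& centers)
{
    cv::Point2d closest(-1, -1);
    for (size_t i = 0; i < centers.size(); ++i)
    {
        if (centers[i].y > closest.y)
            closest = centers[i];
    }
    return closest;
}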
If you have any more questions I’d love to answer them.
<3 ratchet rockers
We did a similar thing. Located here: https://www.youtube.com/watch?v=5HTD8F_1ezM
Check out the description in the YouTube link to get information about it.