Help with Locating Disks with Vision

Before I start, I’m not really good at programming vision and I have run into some errors. Vision is something we need this year and I’m stuck right now. I have made a script for the three different colored disks with the LabVIEW Vision Assistant that detects the ‘caliper’ and ‘location’ of each disk. I followed a tutorial, found here, to help me:
https://docs.google.com/viewer?a=v&q=cache:vuYBmZI943gJ:svn.assembla.com/svn/gccrobotics/Image%20Processing%20in%20LabView/Image_Processing_in_LabVIEW_for_FRC[1].pdf+vision+assistant+frc&hl=en&gl=us&pid=bl&srcid=ADGEESgKZqY5rnpEu7oDYfKG-5F1kBKf7DbhmMx5xSgKMTV6rkYKE0wDRlPuqM5wSXnINsOcK-BLKLdqwdGv-ELVLEfmGYKStsw0K5agbVjzEOEBCW19EGMyIXUcCY5kvhzftUdLcXTW&sig=AHIEtbSlXmtgpL8V8ou96Jr3M4FLLswFYQ

I have done exactly what the tutorial says, I believe. But when I test and probe the various outputs in the vision processing VI, such as “matches (locate blue)”, I get absolutely no response when I hold a blue frisbee in front of the camera. The camera is an Axis camera and is working fine: it streams relatively smooth video and has no issues at the moment.
Here is my vision processing code, along with the script I made with the LabVIEW Vision Assistant, and the pictures I took to make the script. If you download the .zip, make sure to unzip it and place it in a new folder.

Thanks. By the way, we are doing off-board image processing.

You might try looking at example code in LabVIEW such as ‘Follow Blob’. These need a lot of tuning, but they are a great starting point for basic IMAQ processing in LabVIEW.

I think what is going on is that the tutorial you are following uses a very accurate, but very unforgiving method of locating colored objects. It assumes that you have very good control over the type and amount of light, distance to the object, etc.

I’m sure that you can write simple color threshold code to locate a frisbee, and you can do a few additional particle measurements to see if it is round. This approach will be more forgiving in varied lighting conditions. Even then, make sure to follow the recommendations in the white paper regarding the camera’s white balance setting, compression, focus, etc.

Greg McKaskle

I have a very basic question: why do you need to detect a blue frisbee with the camera?

Is it possible to simplify what you need it to do? During autonomous, all of the frisbees in play are white. During teleop, you can use your drivers’ eyes to detect the color. Because of this, I can’t see a reason to detect the colored frisbees. Simplify and be relieved.

I second Chris’s suggestion. White discs are available in Autonomous. The driver is available (and more adaptable) in teleop.

This is exactly what we did this year (though we had to scrap the project due to weight issues, which forced us to go FCS-only).

The steps:

Acquire image

Convert it to HSV (hue, saturation, and value)

(Binary) Threshold the image to keep only the desired colour (red, white, or blue)

This is where vision programming has a lot of diversity…

We found the contour of an image, that is, where black meets white.

Approximate a polygon (I promise I will make this better before next build season for teams that want to do pose estimation on the squares)

then I said

    if (result > 5)
    {
        // it's a circle
    }

result being the number of sides of the approximated polygon.

From then, you ideally should have located the circle.

To find the center:

// Calculate the moments to estimate the position of the frisbee
CvMoments* moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(yourcontourhere, moments, 1);

    // The actual moment values
    double moment10 = cvGetSpatialMoment(moments, 1, 0);
    double moment01 = cvGetSpatialMoment(moments, 0, 1);
    double area = cvGetCentralMoment(moments, 0, 0);

    double posX = moment10/area;
    double posY = moment01/area;

This gives sub-pixel accuracy for the center.

Now you have the center of a frisbee of the specified colour. Congrats. But what if there is more than one frisbee of the same colour?

This simple algorithm solves that problem (keep in mind that I used OpenCV and moved the origin from the top left to the center of the screen to make the logic easier):

int prevclosestfrisbee = 300; // arbitrary pixel value that is high on the screen
                              // (note: the greater the y value, the further away the frisbee is)

if (frisbee.y < prevclosestfrisbee)
{
    prevclosestfrisbee = frisbee.y;
}
else
{
    contours = contours->h_next;
}

After you go through all the contours (the frisbees that passed the threshold test and the approximate-polygon test), you’re left with the one with the lowest y value.

**Note: you can do this with all three coloured frisbees; I posted an image on here of a frame while tracking all three disks. It does take a fairly large amount of processing power (this program ran at ~13 fps while the program that tracked the alliance wall ran at ~27), but it is doable!

Hope this helped!