So our team is currently stuck. We were able to convert the demo code into the FRC advanced framework, but the second color to be masked doesn’t show up! We know the image data is being processed correctly because we can debug in autonomous mode, and we know the color ranges are set up correctly because whichever color we choose to track first does get masked, whether it’s pink or green. Is there any reason why the second color does not get processed? (Well, we see a small glimpse every 30 frames or so, but not enough to track.)
PLEASE! WE NEED HELP FAST!
One thing that I have noticed is that the areas of the two colors need to be roughly the same size. For example, if your primary color is pink and the pink is on top, and you move your target down so that the green begins to slide off the bottom of the camera’s field of view, then the image processing may disregard the target because it doesn’t see enough green to match the pink.
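To make the idea concrete, here’s a minimal plain-Python sketch (not the actual LabVIEW code) of how a two-color tracker might pair blobs by comparing areas. The function name and the 50% tolerance are made up for illustration; the real VI’s matching rule may differ.

```python
# Hypothetical sketch: two-color tracking often compares particle
# areas and rejects pairs whose ratio falls outside a tolerance band.

def areas_match(area_primary, area_secondary, tolerance=0.5):
    """Return True if the smaller blob's area is within `tolerance`
    (as a fraction) of the larger blob's area."""
    if area_primary <= 0 or area_secondary <= 0:
        return False
    ratio = min(area_primary, area_secondary) / max(area_primary, area_secondary)
    return ratio >= (1.0 - tolerance)

# Pink fully visible, green half off the bottom of the frame:
print(areas_match(4000, 4000))  # True  - same size, pair accepted
print(areas_match(4000, 1500))  # False - green mostly out of view
```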
However, I am going to guess that this is not your problem. There are a number of different parameters that could cause the camera not to track a target in a particular image and it’s difficult to say without actually seeing your images. If you could post a particular camera image where the target is in view, but the image processing does not work, we should be able to provide you with a better diagnosis.
One way to do this is to attach the camera directly to the host PC (via an Ethernet hub/switch or a crossover cable) and run the example. You can probe image wires directly, and if you pause the program at the point where it should find a target but doesn’t, then right-click on the probe image and save the image to a file. Then you can post the image file.
You can also right-click on the displayed picture itself and save that to a file (i.e., it doesn’t have to be an image probe).
The other thing I’d look at is the FindTwoColor VI. Open up the diagram and probe the images. Just before the loop on the right side, an array of particle info goes in. For each particle, it extracts image pieces, then masks and measures particles there. Probe the area of the upper and lower targets and compare to the input particle area. You can also probe the extracted images to see if there is anything wrong with them.
The other suggestion about posting the images that don’t work is a really good one by the way.
Thanks for the Reply!
Well, we tried your idea about the primary being bigger than the secondary and it did not work, so I’ve posted our two images (the full one and the mask).
I know you said that both colors mask just fine, but I’m really suspicious that your green limits are too narrow. The pink numbers look like the defaults, but a green hue range only 10 wide can easily miss the target. Similarly, I would set the upper limit on both luminance and saturation to 255; I don’t think you want to exclude the image when a bright light shines on it.
I loaded your image into Vision Assistant, and the attached images show a line profile diagonally across your green target and a histogram of the green rectangle. Via the line profile, the green target is roughly a hue of 90, saturation of 100, and luminance of 150. This misses your green threshold on both saturation and luminance. The second photo is a more accurate description of the color, showing the spread of each plane. So at this point I’d encourage you to put your defaults back for green.
It seems like your shop may be a bit dark too, so you may find it useful to up the Brightness parameter on the camera too.
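Since LabVIEW is graphical, here’s the same check sketched in plain Python using the values read off the line profile (hue 90, sat 100, lum 150). The “narrow” limits below are illustrative stand-ins, not your team’s exact settings; the point is that a measured pixel outside any one plane’s range fails the whole threshold.

```python
# Sketch: does a measured HSL pixel pass a per-plane threshold?

def in_threshold(pixel, limits):
    """pixel: (hue, sat, lum); limits: dict mapping plane -> (lo, hi)."""
    h, s, l = pixel
    return (limits["hue"][0] <= h <= limits["hue"][1] and
            limits["sat"][0] <= s <= limits["sat"][1] and
            limits["lum"][0] <= l <= limits["lum"][1])

# Values from the Vision Assistant line profile:
measured_green = (90, 100, 150)

# Illustrative limits (not the team's actual numbers):
narrow = {"hue": (85, 95), "sat": (120, 255), "lum": (160, 255)}
wide   = {"hue": (70, 110), "sat": (50, 255), "lum": (50, 255)}

print(in_threshold(measured_green, narrow))  # False - sat and lum both miss
print(in_threshold(measured_green, wide))    # True
```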
Well, we know the settings are correct, because if we choose to track green first, then green is found perfectly while pink isn’t found at all.
Thanks for the replies, but we’re still in trouble!
Keep in mind that angling your fabric even a small amount will cause it to reflect very different light to the camera. So tracking green works, then the flat fabric is tilted ten degrees and it disappears.
It is your robot, but I’m telling you, and more importantly, the statistical histogram is telling you that your green settings are the problem.
Thanks for all your help so far, but the color settings are fine: notice the checkbox on the left side of the VI that says “load pink first.” If that is unchecked, it finds green just fine, but it doesn’t show pink! Whichever color is loaded first is the only one being found.
And thanks again for all of your help.
Also, make sure that you’re creating a new image and passing it in as Image DST to Color Threshold, etc. If you don’t do this, it will do in-place thresholding, which means that it will replace the source image with the filtered red/black image, so when you go to do the thresholding for the second color, there’s nothing left to find. To fix this, create a new image using IMAQ Create (make sure to give it a different name) and wire the new image into the Image DST input on the first Color Threshold (to be extra safe, create another one for the second Color Threshold as well).
I haven’t actually looked at the inside of the demo vision code to see what it does, so you may have already done this when converting the code, but this is a problem I’ve had before when writing my own vision code. Seeing a glimpse every 30 frames also sounds similar to what I’ve seen before.
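For anyone who wants to see the pitfall outside of LabVIEW, here’s a minimal plain-Python analogy (not IMAQ code; the function and image representation are made up). When both thresholds write into the source buffer, the second threshold runs on an already-binarized image and finds nothing, which matches the “only the first color shows up” symptom.

```python
# Sketch of the in-place thresholding bug:
# binarize to 1 where the pixel matches the target color, else 0.

def color_threshold(src, target, dst=None):
    """If dst is None, overwrite src in place (the bug)."""
    out = src if dst is None else dst
    for i, px in enumerate(src):
        out[i] = 1 if px == target else 0
    return out

image = ["pink", "green", "pink", "grey"]

# Buggy: in-place. After the pink pass, the green pixels are gone.
buf = list(image)
color_threshold(buf, "pink")               # buf is now [1, 0, 1, 0]
bad_green = color_threshold(buf, "green")
print(bad_green)                           # [0, 0, 0, 0] - green never found

# Fixed: separate destination buffers; the source stays intact.
pink_mask  = color_threshold(image, "pink",  dst=[0] * len(image))
green_mask = color_threshold(image, "green", dst=[0] * len(image))
print(pink_mask, green_mask)               # [1, 0, 1, 0] [0, 1, 0, 0]
```

This is the same reason the fix above wires a separately created image into Image DST for each Color Threshold call.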
Since I can’t see the diagram, perhaps something else is going on too, but one more time: the measurements above show that in that picture your green was too narrow on all three values. If you do the threshold by hand using the results of the histogram, you will see that the green will not pass.
If you post a picture of the diagram, perhaps we can see if anything else looks fishy.