Help with GRIP HSL values?

Hi all - I’m trying to follow the 2016 GRIP tutorial for vision processing but can’t seem to nail the HSL values, and as such am not able to target in certain situations. See below:
http://i.imgur.com/h8ichBA.png

Typically you want the HSL values to filter out as much as you can without removing any of the target. In the picture you posted, some of the target is blacked out, so your HSL ranges are too narrow. Try widening them, and then maybe add filters on the minimum width to get rid of the noise that remains.

Do you happen to have an example of a workflow that would correct this? I’ve widened my ranges and it’s helped a little. What do you mean by adding filters to minimum width?

On the “Filter Contours” module there are settings for “Min Width” and “Min Height”. Once you open up your ranges to include all of your target, try increasing the values in those filters to get rid of the remaining noise.
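As a rough sketch of what those two filter settings are doing under the hood: each contour has a bounding box, and anything whose box is narrower or shorter than the minimums gets dropped. The bounding boxes and thresholds below are made-up illustrative values, not GRIP's actual internals.

```python
# Hypothetical sketch of GRIP's "Filter Contours" Min Width / Min Height
# settings: drop any contour whose bounding box is too small.
# Each box is (x, y, width, height); all values here are invented.
contours = [
    (120, 80, 45, 30),   # plausibly the real target
    (10, 200, 3, 2),     # speck of noise
    (300, 50, 6, 40),    # thin glare streak
]

MIN_WIDTH = 10
MIN_HEIGHT = 10

filtered = [c for c in contours if c[2] >= MIN_WIDTH and c[3] >= MIN_HEIGHT]
print(filtered)  # only the 45x30 box survives
```

Tuning those two numbers is the same trade-off as the HSL ranges: too low and noise gets through, too high and you clip the real target at long range.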

Protip

  1. Download Paint.net (PDN)
  2. Take a screenshot of the camera image
  3. Paste it into PDN
  4. Use the dropper tool on the target you want to select
  5. Select “Advanced” on the color palette
  6. The HSV values will be there for you to play with. Plug in the HSV values for that pixel, ±10-15 degrees of hue depending on how much variance the image has.
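Step 6 can be sketched in a few lines of Python using the standard library's colorsys module. The sampled RGB value below is hypothetical - substitute whatever pixel you grabbed with the dropper tool - and note that colorsys works in 0-1 ranges, so the hue has to be scaled to degrees.

```python
import colorsys

# Turn a sampled pixel into a hue threshold range.
# The RGB value is a made-up example of a green target pixel.
r, g, b = 40, 220, 90  # sampled pixel, 0-255 per channel

h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
hue_deg = h * 360  # colorsys returns hue in 0-1, convert to degrees

tolerance = 15  # +/- degrees of hue, per the tip above
hue_lo = (hue_deg - tolerance) % 360
hue_hi = (hue_deg + tolerance) % 360
print(round(hue_deg), round(hue_lo), round(hue_hi))
```

One caveat if you port the numbers into GRIP: GRIP/OpenCV store hue as 0-179 (half degrees), so divide the degree values by two before plugging them in.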

E/ my bad, couldn't see that image… his exposure seems incredibly low anyway

Your vision target is overexposed in this image. Notice how the green washes out to white on the target on the right. This happens because there are blue (and even a small amount of red) components to the light emitted by your green LED ring, and you are saturating the green subpixels on your imager.

Digital cameras work by having an array of subpixels that measure light levels for a particular frequency band of light and location on the imaging plane. Each subpixel on your camera measures red, green, or blue light separately, and then nearby subpixels are combined to give RGB values in the final image. The problem is that each subpixel can only measure so little (e.g. 0) or so much (e.g. 255) light in a given part of the spectrum. You can manipulate the exposure time of the camera to affect how much light hits the sensor in total. Note that brightness/gain is NOT the same thing as exposure time - exposure time affects how much light can hit the sensor, while gain/brightness effectively multiply that amount of light by a constant.
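A toy model makes the exposure-vs-gain distinction concrete. Assume an 8-bit channel that clips at 255, with light integrated over the exposure time and gain applied afterward; all the numbers are illustrative, not from any real sensor.

```python
# Toy model of one color channel on an 8-bit sensor.
# Exposure integrates light before clipping; gain multiplies the
# already-clipped reading, so it cannot recover saturated detail.
def measure(light_per_ms, exposure_ms, gain=1.0):
    raw = min(255, light_per_ms * exposure_ms)  # sensor saturates here
    return min(255, raw * gain)                 # gain can also clip

green_light = 90  # arbitrary units of green light per millisecond

print(measure(green_light, exposure_ms=1))           # 90: unsaturated
print(measure(green_light, exposure_ms=10))          # 255: clipped
print(measure(green_light, exposure_ms=1, gain=10))  # 255: clipped by gain
```

The practical takeaway is that once a channel clips - whether from too much exposure or too much gain - the ratio between channels is destroyed, which is exactly what ruins hue-based thresholding.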

If your LED light emits 90% green light, 9% blue light, and 1% red light (totally made up numbers), imagine what happens at different exposure times. At one exposure time, you might measure R=1, G=90, B=9.

This is decisively green and easy to segment in HSL/HSV (even though it is fairly dark). At another exposure time, R=10, G=900 (but gets capped at 255 by your sensor), B=90.

This is now slightly more blue looking, but still pretty green. At another exposure time, R=100, G=9000 (capped at 255), B=900 (capped at 255).

Now this is azure. At another exposure time, R=1000 (capped at 255), G=90000 (capped at 255), B=9000 (capped at 255).

This is white, and obviously bad for segmentation.
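The progression above can be simulated directly. Using the same made-up 1% red / 90% green / 9% blue light and an 8-bit sensor that clips each channel at 255, you can watch the hue drift as exposure increases:

```python
import colorsys

# Simulate the four exposure examples: light is (1, 90, 9) in R, G, B,
# scaled by exposure, with each channel clipped at 255.
def sensed_rgb(scale):
    return tuple(min(255, c * scale) for c in (1, 90, 9))

results = []
for scale in (1, 10, 100, 1000):
    r, g, b = sensed_rgb(scale)
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    results.append((scale, (r, g, b), round(h * 360)))
    print(scale, (r, g, b), "hue:", round(h * 360))
```

The hue drifts from about 125° (green) while unsaturated, to 140° once green clips, to 180° (cyan/azure) once blue clips too, and finally to 0° with zero saturation - colorless white - when every channel clips. That narrowing-then-collapsing hue is why fixing exposure first makes the threshold ranges so much easier to pick.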

I would try to fix this before worrying about color thresholding (most USB or Ethernet cameras allow you to fix the exposure time one way or another). Once you get a consistent, non-saturated color in your image, thresholding becomes much easier (because the range of hues that you are interested in becomes very narrow).