HSV Value of Green from Reflective Tape

Hi everyone,

We are trying to set up our image analysis so that only the green light bouncing off the reflective tape is shown, so the code (roughly) is:

Core.inRange(hsvImage, new Scalar(hMin, sMin, vMin), new Scalar(hMax, sMax, vMax), output);

So does anyone happen to know what would be good min and max Scalar HSV values to set so that only that particular colour is revealed?

Also, while I'm asking, does anyone know how to do noise removal with OpenCV (Java)?

Thanks,
Ryan

I didn’t personally handle this for our team, but we’ve used GRIP to isolate the color from the video stream.
The color depends on what LED you’re using, so each team’s values might be a little different.

If you could go into the pipeline and see if it shows the HSV value of the color, that would be amazing. We are using a Raspberry Pi, so we are using the OpenCV library.

Your values will depend on multiple things specific to your illumination and camera setup. Using values from someone else’s setup will leave you with less-than-desirable performance.
Your best bet is to gather values from your own system that work.

OK, understood. Any idea of the best way to gather those values? I.e., is there any software where I can have a webcam plugged in and then be told the HSV value of a color I’ve selected? Or, I suppose, any software that could tell me the HSV (or RGB… it’s translatable) values of a color in an image would work too.

You could save a frame as an image file, and then use something like GIMP.

Or, make the threshold values tunable via NetworkTables, and then send the video stream via cscore to your browser. Change the values and see the results in real time (just like GRIP).
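A minimal sketch of the NetworkTables half of that, assuming the WPILib NetworkTables Java API; the table name "vision" and the key names are made up, so use whatever your dashboard writes to:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class TunableThreshold {
    // Hypothetical table/key names; match them to your dashboard widgets
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("vision");

    // Re-read the bounds every frame so edits take effect live
    public void threshold(Mat hsv, Mat mask) {
        Scalar lower = new Scalar(
                table.getEntry("hMin").getDouble(50),
                table.getEntry("sMin").getDouble(100),
                table.getEntry("vMin").getDouble(80));
        Scalar upper = new Scalar(
                table.getEntry("hMax").getDouble(90),
                table.getEntry("sMax").getDouble(255),
                table.getEntry("vMax").getDouble(255));
        Core.inRange(hsv, lower, upper, mask);
    }
}
```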

GRIP is your friend.

Set up grip with your camera as the source, and your LED ring running. Add an HSV color filter, and display the output of those things. Adjust the sliders until you have all of what you want, and none of what you don’t.

Well, you might not get to “all” and “none”, depending on illumination, background, etc., but you should be able to get to a point where you have a big blob where the tape is, and a few miscellaneous patches elsewhere that are not as big as the tape.

ETA: If you can’t use your camera as a GRIP source for some reason (in my case the camera is attached to a Raspberry Pi), capture some still photos and import them into GRIP. There are other ways to do this, too; I just use GRIP because it is very good, very free, and shared by the FIRST community, so there will be lots of advice if you can’t get it working.

GRIP will do this; however, the gotcha is that you’ll need your source to be set to the same exposure your camera will use when running on your RPi.

Team 4050 is new to vision this year, but what we’ve got so far seems to work reasonably well. We’re using Java and OpenCV.

We wrote a basic program to run on a laptop that turns down the camera exposure (we used -10.0 for our Lifecam) and displays the webcam output in a JFrame. We used the Alt-PrtScn hotkey to grab a shot of the window and then pasted it into Paint.NET as a new image. Crop out the window border and save it, and you’ve got a sample image to bring into GRIP. We did this for different distances and orientations to the gear lift.
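Roughly, a viewer like that could look like the sketch below (not our actual program; camera index 0 is an assumption, and the -10.0 exposure is our Lifecam value, so yours may differ):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;
import javax.swing.*;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class ExposureViewer {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture cap = new VideoCapture(0);          // first webcam; adjust index as needed
        cap.set(Videoio.CAP_PROP_EXPOSURE, -10.0);       // Lifecam value; tune for your camera

        JFrame frame = new JFrame("Low-exposure preview");
        JLabel label = new JLabel();
        frame.add(label);
        frame.setSize(660, 520);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        Mat mat = new Mat();
        while (cap.read(mat)) {
            // Copy the BGR Mat into a BufferedImage that Swing can draw
            BufferedImage img = new BufferedImage(
                    mat.cols(), mat.rows(), BufferedImage.TYPE_3BYTE_BGR);
            byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
            mat.get(0, 0, data);
            label.setIcon(new ImageIcon(img));
            label.repaint();
        }
    }
}
```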

Fire up GRIP, select the images as your source, and add the HSV Threshold operation to your pipeline. Adjust the HSV sliders to isolate the reflective tape as much as possible without degrading the tape’s blob too much. At this point, you could simply record the start and end values of each slider to get the HSV ranges you’d want to use in your code.
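For reference, plugging those recorded slider values into OpenCV’s Java API could look like this (the numbers here are placeholders, not tuned values):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HsvThreshold {
    // Slider min/max values copied off GRIP's HSV Threshold operation (placeholders)
    private static final Scalar LOWER = new Scalar(55, 100, 80);
    private static final Scalar UPPER = new Scalar(90, 255, 255);

    public static Mat threshold(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV); // camera frames arrive as BGR
        Mat mask = new Mat();
        Core.inRange(hsv, LOWER, UPPER, mask); // white wherever the pixel is inside the range
        return mask;
    }
}
```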

Of course, you’d want to test in a real-world setting and probably make tweaks to the values, but the process I described should get you pretty close from the outset.

What we did was build a full pipeline to do blurring, thresholding, eroding, and contour finding, and then had GRIP generate the Java class for the pipeline (Tools > Generate Code).
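The generated class is longer than this, but a hand-written sketch of the same shape (threshold values are placeholders) looks roughly like:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class VisionPipeline {
    public List<MatOfPoint> process(Mat source) {
        // 1. Blur to knock down sensor noise
        Mat blurred = new Mat();
        Imgproc.medianBlur(source, blurred, 5);

        // 2. HSV threshold (placeholder bounds; tune in GRIP)
        Mat hsv = new Mat();
        Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(55, 100, 80), new Scalar(90, 255, 255), mask);

        // 3. Erode to strip away small speckles (empty Mat = default 3x3 kernel)
        Mat eroded = new Mat();
        Imgproc.erode(mask, eroded, new Mat());

        // 4. Find the outlines of the remaining blobs
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(eroded, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}
```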

Awesome, I’ll give it a go!

We use NI Vision Assistant to calibrate the thresholds. You can import the pictures into Vision Assistant straight from the LabVIEW Data folder, since the images from the robot are constantly saved there temporarily. (I’m not sure whether non-LabVIEW teams have something similar.) Then adjust the thresholds until you are satisfied! Screenshot.

You mentioned that you wanted to remove noise from the picture. A common way to do this is with a blur, generally either a Gaussian or a median blur.

I prefer a median blur because it keeps hard edges better and, I think, is better at removing noise than a Gaussian; both properties are quite helpful for tracking tape and other objects with hard edges.
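In OpenCV’s Java API, each is a single call; a minimal sketch, with the kernel size as the main knob to tune:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class Denoise {
    public static Mat median(Mat input) {
        Mat out = new Mat();
        Imgproc.medianBlur(input, out, 5); // ksize must be odd; larger = stronger smoothing
        return out;
    }

    public static Mat gaussian(Mat input) {
        Mat out = new Mat();
        Imgproc.GaussianBlur(input, out, new Size(5, 5), 0); // sigma 0 = derived from kernel size
        return out;
    }
}
```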

The Javadocs are here - http://docs.opencv.org/java/3.0.0/org/opencv/imgproc/Imgproc.html#medianBlur(org.opencv.core.Mat,%20org.opencv.core.Mat,%20int) - but you’ll notice they don’t give you much detail.

A much more detailed write-up is available here for Python - http://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html - and here for Java - https://www.tutorialspoint.com/java_dip/applying_gaussian_filter.htm - though the latter uses a Gaussian blur.

I have the Python source code my team used for last year’s vision tracking, which used a median blur, available here: https://github.com/Team4613-BarkerRedbacks/2016-vision - but of course it is in Python.

If you need more info or help, please ask!

Thanks, Tom.

A nice way to get an HSV value is to take a picture with your camera and the light shining on the retro-reflective tape, then open that image in Microsoft Paint (yes, Microsoft Paint actually has a use). From there, take a sample of the green light reflected off the tape to get the RGB values, then use a converter to turn those into HSV. That should give you your HSV value and a range for your HSV thresholds.
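One caveat with online converters: OpenCV’s hue channel runs 0-179 (half the usual 0-359 scale), and S and V are 0-255 rather than percentages. A quick sanity check is to let OpenCV convert the sampled pixel itself via a one-pixel Mat (the RGB values below are made up for illustration):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class SampleToHsv {
    public static double[] rgbToOpenCvHsv(double r, double g, double b) {
        Mat rgb = new Mat(1, 1, CvType.CV_8UC3, new Scalar(r, g, b)); // one-pixel image
        Mat hsv = new Mat();
        Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
        return hsv.get(0, 0); // H in 0-179, S and V in 0-255
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        double[] hsv = rgbToOpenCvHsv(60, 220, 90); // made-up bright green from Paint's picker
        System.out.printf("H=%.0f S=%.0f V=%.0f%n", hsv[0], hsv[1], hsv[2]);
    }
}
```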

So, thanks to the help from everyone, I think we’ve finally got it! :) I am testing this afternoon, but I believe the combination of the GRIP pipeline plus the Raspberry Pi project provided by ScreenSteps will work out.

I know for a fact I am using Python on the Pi next year though… I feel abused by Java.