View Full Version : Values for Image Processing
inkspell4
18-01-2013, 14:45
Hello,
I'm trying to determine what values to replace the ones in the following line with.
BinaryImage thresholdImage = image.thresholdHSV(60, 100, 90, 255, 20, 255);
Any help would be appreciated.
We have never used vision processing before this year as a team and have decided to go with a green ring light.
Right now, this seems to be working pretty well for me:
10, 60, 200, 255, 200, 255
Though these are from our specific test field, and I'm currently working on a system to adjust for lighting.
inkspell4
18-01-2013, 15:17
How did you determine what values to use for your system?
dvanvoorst
18-01-2013, 15:22
You should use the NI Vision Assistant to experiment with the numbers. You can see the results of the changes immediately on the screen. If you've installed the NI FRC tools on your computer, you should have it.
Start it up, choose "Acquire Image", choose "Axis Camera", fill in the IP info, and click the play button and you'll see your camera feed. If you click the single play button it will take a snapshot. Then click the last icon to "store the acquired image in browser". Then go to the Process Images page, choose the Color Threshold tool, select HSV, and then you can tweak the numbers there and see exactly what it would do to your image.
You can, of course, add multiple processing filters along with the final particle report - exactly the same as you would in your program.
Keep in mind that in Java you don't have access to all the different tools that are shown in the Vision Assistant.
Dale
inkspell4
18-01-2013, 17:16
What range of values should I start out with?
We are using a green ring light, as mentioned before.
inkspell4
18-01-2013, 18:36
Is it likely that there will be noise/small stray particles left after the threshold is done?
Joe Ross
18-01-2013, 18:44
Have you read the Vision Processing paper? http://wpilib.screenstepslive.com/s/3120/m/8731
inkspell4
18-01-2013, 19:20
Yes, I have read the linked document, but could not find much info on the threshold values.
Greg McKaskle
18-01-2013, 22:11
thresholdHSV(60, 100, 90, 255, 20, 255)
These are the numbers for minH, maxH, minS, maxS, minV, and maxV. The best way to determine the numbers is to use your LEDs, your camera settings, and either debug using pixel values in your code, or open it in a tool such as NI Vision Assistant and see what the values are.
Without seeing your images, I'd say you should probably start with maxH higher, say at 125 or so. You should also lower minS from 90 to 40 or so. Then work the mins higher and the maxH lower until you eliminate other particles.
Greg McKaskle
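To make the six parameters concrete, an HSV threshold conceptually does a per-pixel range test like the sketch below. This is plain Java illustrating the idea, not the actual NIVision implementation, and the example pixel values are made up:

```java
// Sketch of what an HSV threshold does per pixel (not the NIVision code).
// A pixel passes only if each channel falls inside its [min, max] range;
// passing pixels become nonzero in the binary image, the rest 0.
public class HsvThreshold {
    static boolean inRange(int h, int s, int v,
                           int minH, int maxH, int minS, int maxS,
                           int minV, int maxV) {
        return h >= minH && h <= maxH
            && s >= minS && s <= maxS
            && v >= minV && v <= maxV;
    }

    public static void main(String[] args) {
        // Values from the thread: thresholdHSV(60, 100, 90, 255, 20, 255).
        // A bright, saturated green pixel (hypothetical h=85, s=200, v=220) passes:
        System.out.println(inRange(85, 200, 220, 60, 100, 90, 255, 20, 255)); // true
        // A red pixel (h near 0) is rejected by the hue range:
        System.out.println(inRange(5, 200, 220, 60, 100, 90, 255, 20, 255)); // false
    }
}
```

Widening the hue range (as suggested above) admits more pixels; raising the mins rejects dim or washed-out ones.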
inkspell4
18-01-2013, 22:55
Which tool in Vision Assistant should I use for testing and finding the values?
Patrick Chiang
18-01-2013, 23:37
In NI Vision Assistant, load the sample script they give you.
Acquire an image by connecting to your camera and using the Acquire Image button (enter IP and settings).
In the Color Threshold, adjust the values until your goal is red (and nothing else is). Those are your HSV values that you want to put into your program.
inkspell4
18-01-2013, 23:40
I thought you wanted everything but your goal to be red
Patrick Chiang
18-01-2013, 23:45
Yeah that's what I thought too at first, but apparently not. The red part is what you want.
Ideally you'd see a border of red around some black (goal hole), then everything else is black. Then, if you go down the steps (beyond the color threshold), you should see the goal get entirely filled in with red, and the small particles get eliminated.
Here: http://wpilib.screenstepslive.com/s/3120/m/8731/l/90361-identifying-the-targets
RTFM (Read The FIRST Manual) ;)
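The "small particles get eliminated" step described above is essentially an area filter on connected components of the binary image. A minimal plain-Java sketch of the idea (this is not the NIVision particle filter, just an illustration of what it does):

```java
import java.util.ArrayDeque;

// Sketch of eliminating small particles from a binary image:
// label 4-connected components and zero out any component whose
// pixel count is below minArea. NIVision's particle filter does
// the equivalent internally.
public class ParticleFilter {
    public static void removeSmall(boolean[][] img, int minArea) {
        int rows = img.length, cols = img[0].length;
        boolean[][] seen = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (img[r][c] && !seen[r][c]) {
                    // Flood-fill this component, collecting its pixels.
                    ArrayDeque<int[]> stack = new ArrayDeque<>();
                    ArrayDeque<int[]> pixels = new ArrayDeque<>();
                    stack.push(new int[]{r, c});
                    seen[r][c] = true;
                    while (!stack.isEmpty()) {
                        int[] p = stack.pop();
                        pixels.push(p);
                        int[][] nbrs = {{p[0]-1,p[1]}, {p[0]+1,p[1]},
                                        {p[0],p[1]-1}, {p[0],p[1]+1}};
                        for (int[] n : nbrs) {
                            if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                                    && img[n[0]][n[1]] && !seen[n[0]][n[1]]) {
                                seen[n[0]][n[1]] = true;
                                stack.push(n);
                            }
                        }
                    }
                    // Too small to be the goal: erase it as noise.
                    if (pixels.size() < minArea) {
                        for (int[] p : pixels) img[p[0]][p[1]] = false;
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        boolean[][] img = {
            {true,  true,  false, false},
            {true,  true,  false, true },   // lone pixel at (1,3) is noise
            {false, false, false, false},
        };
        removeSmall(img, 2);                 // keep components of 2+ pixels
        System.out.println(img[1][3]);       // false: the noise pixel is gone
        System.out.println(img[0][0]);       // true: the 4-pixel blob survives
    }
}
```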
inkspell4
18-01-2013, 23:53
Oh, I get what you're saying. I was thinking of it in terms of the preview color when applying the threshold in Vision Assistant, not as the image afterwards.
Patrick Chiang
18-01-2013, 23:55
Yeah it's our first year doing image processing too.
We also had some problems with the provided sample code (java) detecting distance, so we're in the process of writing our own distance computing method.
inkspell4
19-01-2013, 00:03
I haven't even tested the code yet, but I know that the sample seems to filter out all but red, while we need it to filter out all but green.
inkspell4
19-01-2013, 00:06
Would it be better to replace the HSV threshold with RGB?
Greg McKaskle
19-01-2013, 07:52
It is a bit difficult to talk about these with no images.
The Threshold operation compares pixel values and returns a Boolean masked image. The image only has two values, 0 and nonzero. Vision Assistant and other tools will typically display this with two colors. The colors have no relation to any original colors.
If the threshold is masking out the wrong color, change the hue range. I attached an approximate hue wheel from an LV panel. I highly recommend you open up Vision Assistant and experiment with an image and the Color Threshold block.
The block supports RGB, and various HS(IVL) versions.
There was a question about switching to RGB, and that will be a bit faster to process, but I think you'll find it far less accurate for specifying ranges. But the cool thing about Vision Assistant is that you can experiment, discover things, and ask questions.
Greg McKaskle
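As an aside on where "green" lives on the hue scale: NI Vision reports hue on a 0-255 scale covering the full color circle (an assumption here, but consistent with the 60-100 green range used earlier in the thread), while `java.awt.Color` reports hue in [0, 1). Converting a pure green pixel with Java's built-in conversion shows why green thresholds center near 85:

```java
import java.awt.Color;

// Where green and red land on a 0-255 hue scale, using java.awt's
// built-in RGB->HSB conversion (hue in [0,1), rescaled to 0-255).
public class GreenHue {
    public static void main(String[] args) {
        float[] green = Color.RGBtoHSB(0, 255, 0, null); // pure green
        System.out.println(Math.round(green[0] * 255));  // 85
        // Red sits at hue 0, which is why the sample code's red
        // threshold and a green ring light need different hue ranges.
        float[] red = Color.RGBtoHSB(255, 0, 0, null);
        System.out.println(Math.round(red[0] * 255));    // 0
    }
}
```

This also illustrates Greg's point about RGB vs. HSV: in HSV a single narrow hue band captures "green" across a wide range of brightness, which is hard to express as RGB box ranges.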
inkspell4
19-01-2013, 09:33
Thank you for all your help
Chimera0x694
19-01-2013, 10:08
Last year for image processing we put the camera with the LEDs on the robot, lined it up, and used the driver station image. If you look at the bottom of the image you should see RGB values. Put the cursor on the highlighted target and record those values, then plug them into your code. Now, if you are doing this in your work area, these values are going to be different when you take your robot to competitions. It will need to be calibrated each time you move to a place with different lighting. I hope this helps! Good luck with Java! :)
Greg McKaskle
19-01-2013, 10:17
Do you recall what the values were? Do you have your camera set to automatically adjust white balance and exposure?
If you were to set the camera up to where the majority of light you care about comes from the LEDs and the camera isn't adjusting settings, I suspect the values won't need to be calibrated or the calibration will not change by much.
Greg McKaskle
Chimera0x694
19-01-2013, 10:31
//                                    RED      GREEN    BLUE
if (BlueAlliance) {
    // greenStuff = image.thresholdRGB(0,140, 240,255, 0,255);
    greenStuff = image.thresholdRGB(0,140, 200,255, 0,255);
} else {
    greenStuff = image.thresholdRGB(100,200, 225,255, 100,200); // this was sorta OK
}
This was our thresholding code, and the lighting truly did affect the values. We found this out when our auto-targeting would work perfectly at our build facility, and then not work at competitions. We found that at the competitions, the thresholds were drastically different.
Greg McKaskle
19-01-2013, 10:48
I was asking about build versus competition values. Is that the commented code change?
Also, do you know if the exposure and white balance were set to auto or not?
The commented code change is actually not that big a change. It is basically saying that the green lower limit needed to change by 20%.
Do you have any images used to do the calibration?
Greg McKaskle
Chimera0x694
19-01-2013, 10:58
Yes, I think that is what the commented code is. What I meant by the values drastically changing was that the results changed: the robot went from snapping directly to the target at the push of a button to jumping around and not staying still because it couldn't see the target properly. I don't remember changing anything else, so the exposure and white balance must have been set to default. We used a false-color image for the robot to parse, filtering everything but the colors within the threshold.
Greg McKaskle
19-01-2013, 12:54
The default settings of the camera are automatic exposure and automatic white balance. These will give more variability than if you calibrate and keep the settings constant. I'd love to be able to set them from code or configuration, but the camera doesn't have full support for that. I think that you may want to experiment with calibration, as discussed in the white paper and then test the results in the shop, outside, perhaps with some work lights or bright flashlights shining on the targets or the robot. No vision system is immune to lighting changes, not even your eyes, but knowing what will break it and what won't is pretty good stuff.
Greg McKaskle
jesusrambo
19-01-2013, 17:30
Patrick Chiang wrote:
> Yeah it's our first year doing image processing too.
> We also had some problems with the provided sample code (java) detecting distance, so we're in the process of writing our own distance computing method.
What I did last year was just look at the apparent size of a vertical side of the target, and then compare that to the actual size. It's simple to figure out range from that proportion.
I did have to correct the reported values, but I'd recommend just getting the first part of what I said outputting values, then doing some empirical testing, plotting the values in Excel (computed distance vs. actual), and generating a trend line. The equation of that line should correct your calculated distances pretty well.
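A sketch of that two-step approach: a similar-triangles range estimate from the target's apparent pixel height, then a least-squares line (the "trend line in Excel") fit from measured (computed, actual) pairs. The camera constants and calibration data below are made-up illustrative values, not measurements:

```java
// Similar-triangles range estimate plus a linear correction fit.
// TARGET_HEIGHT_FT, FOV_DEG, IMAGE_WIDTH_PX, and the calibration
// numbers in main() are assumptions for illustration only.
public class RangeEstimate {
    static final double TARGET_HEIGHT_FT = 1.5;  // real height of the vertical side
    static final double FOV_DEG = 47.0;          // assumed horizontal field of view
    static final int IMAGE_WIDTH_PX = 640;

    // distance = realSize * focalLength / apparentSize (similar triangles)
    static double distance(double targetHeightPx) {
        double focalPx = IMAGE_WIDTH_PX / (2.0 * Math.tan(Math.toRadians(FOV_DEG) / 2.0));
        return TARGET_HEIGHT_FT * focalPx / targetHeightPx;
    }

    // Least-squares fit of actual = m*computed + b, the trend-line step.
    static double[] fitLine(double[] computed, double[] actual) {
        int n = computed.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += computed[i]; sy += actual[i];
            sxx += computed[i] * computed[i];
            sxy += computed[i] * actual[i];
        }
        double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double b = (sy - m * sx) / n;
        return new double[]{m, b};
    }

    public static void main(String[] args) {
        // Fake calibration data: computed distances read 10% high.
        double[] computed = {11.0, 16.5, 22.0};
        double[] actual   = {10.0, 15.0, 20.0};
        double[] mb = fitLine(computed, actual);
        // Correct a new computed reading of 22.0 ft back toward truth.
        System.out.printf("%.1f%n", mb[0] * 22.0 + mb[1]);
    }
}
```

The correction line absorbs systematic error (lens distortion, a slightly wrong FOV constant) without needing to model its cause.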
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.