You should use the NI Vision Assistant to experiment with the numbers. You can see the results of the changes immediately on the screen. If you’ve installed the NI FRC tools on your computer, you should have it.
Start it up, choose “Acquire Image”, choose “Axis Camera”, fill in the IP info, and click the continuous play button to see your camera feed. Clicking the single play button takes a snapshot. Then click the last icon to “store the acquired image in browser”. Then go to the Process Images page, choose the Color Threshold tool, select HSV, and tweak the numbers there to see exactly what they do to your image.
You can, of course, add multiple processing filters along with the final particle report - exactly the same as you would in your program.
Keep in mind that in Java you don’t have access to all the different tools that are shown in the Vision Assistant.
These are the numbers for minH, maxH, minS, maxS, minV, and maxV. The best way to determine the numbers is to use your LEDs, your camera settings, and either debug using pixel values in your code, or open it in a tool such as NI Vision Assistant and see what the values are.
Without seeing your images, I’d say you should probably start with maxH higher, say at 125 or so. You should also lower minH from 90 to 40 or so. Then tighten the range, working the mins higher and maxH lower, until you eliminate other particles.
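To make the six numbers concrete, here’s a minimal sketch of the per-pixel check a color threshold performs, assuming hue is scaled to 0–255 (the convention NI tools use) rather than 0–359. The saturation and value mins below are placeholders I made up for illustration; tune all six against your own images.

```java
// Sketch of an HSV threshold check. MIN_H/MAX_H follow the suggested
// starting points above; the S and V limits are assumed placeholders.
public class HsvThreshold {
    static final int MIN_H = 40, MAX_H = 125;
    static final int MIN_S = 100, MAX_S = 255;
    static final int MIN_V = 100, MAX_V = 255;

    // Returns true when the pixel passes the threshold (it would show
    // up as "red" in Vision Assistant's binary display).
    static boolean inRange(int h, int s, int v) {
        return h >= MIN_H && h <= MAX_H
            && s >= MIN_S && s <= MAX_S
            && v >= MIN_V && v <= MAX_V;
    }
}
```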
In NI Vision Assistant, load the sample script they give you.
Acquire an image by connecting to your camera and using the Acquire Image button (enter IP and settings).
In the Color Threshold, adjust the values until your goal is red (and nothing else is). Those are your HSV values that you want to put into your program.
Yeah that’s what I thought too at first, but apparently not. The red part is what you want.
Ideally you’d see a border of red around some black (goal hole), then everything else is black. Then, if you go down the steps (beyond the color threshold), you should see the goal get entirely filled in with red, and the small particles get eliminated.
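If you end up doing the small-particle elimination in your own code instead of Vision Assistant, one simple approach is a flood fill over the binary mask that clears any connected blob below a minimum pixel area. This is just a sketch assuming the mask is a `boolean[][]` (true = passed the threshold); the real NI particle filter works on area criteria in a similar spirit.

```java
import java.util.ArrayDeque;

public class ParticleFilter {
    // Clears any 4-connected blob of true pixels smaller than minArea.
    static void removeSmallParticles(boolean[][] mask, int minArea) {
        int rows = mask.length, cols = mask[0].length;
        boolean[][] seen = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (!mask[r][c] || seen[r][c]) continue;
                // Collect this blob's pixels with a BFS flood fill.
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                ArrayDeque<int[]> blob = new ArrayDeque<>();
                queue.add(new int[]{r, c});
                seen[r][c] = true;
                while (!queue.isEmpty()) {
                    int[] p = queue.poll();
                    blob.add(p);
                    int[][] nbrs = {{p[0]+1,p[1]}, {p[0]-1,p[1]},
                                    {p[0],p[1]+1}, {p[0],p[1]-1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                                && mask[n[0]][n[1]] && !seen[n[0]][n[1]]) {
                            seen[n[0]][n[1]] = true;
                            queue.add(n);
                        }
                    }
                }
                if (blob.size() < minArea) {
                    for (int[] p : blob) mask[p[0]][p[1]] = false;
                }
            }
        }
    }
}
```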
Yeah it’s our first year doing image processing too.
We also had some problems with the provided sample code (java) detecting distance, so we’re in the process of writing our own distance computing method.
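For anyone else writing their own distance method: one common approach, assuming a simple pinhole camera model, uses the target’s known real-world width and its measured width in pixels. The field-of-view and target width below are assumptions for illustration; plug in your camera’s spec and your target’s measurements.

```java
public class DistanceEstimator {
    // distance = (realWidth * imageWidthPx) / (2 * pixelWidth * tan(fov/2))
    // Units of the result match the units of targetWidth.
    static double distance(double targetWidth, int targetPixelWidth,
                           int imageWidthPx, double horizontalFovDeg) {
        double fovRad = Math.toRadians(horizontalFovDeg);
        return (targetWidth * imageWidthPx)
                / (2.0 * targetPixelWidth * Math.tan(fovRad / 2.0));
    }
}
```

A sanity check on the model: if the target appears half as wide in pixels, the estimated distance should double.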
It is a bit difficult to talk about these with no images.
The Threshold operation compares pixel values and returns a binary mask image. The image has only two values, zero and non-zero, and Vision Assistant and other tools will typically display it with two colors. Those display colors have no relation to any original colors.
If the threshold is masking out the wrong color, change the hue. I attached an approximate hue wheel from a LV panel. I highly recommend you open up Vision Assistant and experiment with an image and the color threshold block.
The block supports RGB, and various HS(IVL) versions.
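If you want to debug pixel values in your own code, here’s a rough sketch of the standard RGB-to-HSV conversion that a color threshold block does internally, with hue scaled to 0–255 as the NI tools display it. This is my own self-contained version, not the library’s implementation.

```java
public class RgbToHsv {
    // Converts 0-255 RGB to {h, s, v}, each in 0-255.
    static int[] toHsv(int r, int g, int b) {
        int max = Math.max(r, Math.max(g, b));
        int min = Math.min(r, Math.min(g, b));
        int v = max;                                   // value = brightest channel
        int s = (max == 0) ? 0 : 255 * (max - min) / max;
        double h;
        if (max == min) {
            h = 0;                                     // gray: hue undefined, use 0
        } else if (max == r) {
            h = 60.0 * (g - b) / (max - min);
        } else if (max == g) {
            h = 120 + 60.0 * (b - r) / (max - min);
        } else {
            h = 240 + 60.0 * (r - g) / (max - min);
        }
        if (h < 0) h += 360;
        return new int[] { (int) (h * 255 / 360), s, v };
    }
}
```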
There was a question about switching to RGB, and that will be a bit faster to process, but I think you’ll find it far less accurate for specifying ranges. But the cool thing about Vision Assistant is that you can experiment, discover things, and ask questions.
Last year for image processing we put the camera with the LEDs on the robot, lined it up, and used the driver station image. If you look at the bottom of the image you should see RGB values. Put the cursor on the highlighted target and record those values, then plug them into your code. Now, if you are doing this in your work area, these values are going to be different when you take your robot to competitions. It will need to be calibrated each time you move to a place with different lighting. I hope this helps! Good luck with Java!
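If you go this RGB route, the recorded values can be turned into a simple tolerance check like the sketch below. The tolerance of 30 is an assumption to tune; as noted above, you’ll need to re-record the target values whenever the lighting changes.

```java
public class RgbThreshold {
    // Accepts a pixel whose channels are each within tol of the
    // values recorded from the driver station image.
    static boolean matches(int r, int g, int b,
                           int targetR, int targetG, int targetB, int tol) {
        return Math.abs(r - targetR) <= tol
            && Math.abs(g - targetG) <= tol
            && Math.abs(b - targetB) <= tol;
    }
}
```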