In the process of tracking pegs with the camera, my code makes (for obvious reasons) a binary image out of the camera image. The function to do that takes six integer inputs (min/max for three values, depending on whether you’re making an HSL, HSV, HSI, or RGB image). This is all fine, but no matter where I look, I can’t find the domain for these inputs. Anybody have any idea?
We’re programming in Java (though it should be the same for C++), and I’m looking at the threshold functions in the wpilibj here: http://www.wbrobotics.com/javadoc/edu/wpi/first/wpilibj/image/ColorImage.html
I’m making the binary image based on HSL, so I can guess a domain of 0–360 for hue (look up HSL on Wikipedia and you’ll see why), but there’s no intuitive domain for the other two values, especially as they are integers (if they were doubles or floats, maybe I’d guess 0–1).
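For what it’s worth, IMAQ-backed threshold functions commonly treat every channel, hue included, as an 8-bit value, so 0–255 across the board is a reasonable guess. Here’s a self-contained sketch of that assumption: an RGB pixel converted to HSL with each channel rescaled to 0–255, then tested against the six min/max inputs. The class and method names are mine, and the 0–255 hue scaling is an assumption to verify against the docs, not something confirmed from the WPILib Javadoc.

```java
// Sketch: per-pixel HSL threshold assuming every channel uses the 0-255 domain.
// The 0-255 hue scaling is an assumption -- IMAQ-style libraries often pack
// hue into a single byte rather than 0-360 degrees; check the manuals.
public class HslThreshold {
    // Convert one RGB pixel (each component 0-255) to HSL, each scaled to 0-255.
    static int[] rgbToHsl255(int r, int g, int b) {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.max(rf, Math.max(gf, bf));
        double min = Math.min(rf, Math.min(gf, bf));
        double l = (max + min) / 2.0;
        double h = 0, s = 0;
        if (max != min) {
            double d = max - min;
            s = l > 0.5 ? d / (2.0 - max - min) : d / (max + min);
            if (max == rf)      h = ((gf - bf) / d + (gf < bf ? 6 : 0)) / 6.0;
            else if (max == gf) h = ((bf - rf) / d + 2) / 6.0;
            else                h = ((rf - gf) / d + 4) / 6.0;
        }
        return new int[] { (int) Math.round(h * 255),
                           (int) Math.round(s * 255),
                           (int) Math.round(l * 255) };
    }

    // True if the pixel falls inside all three [min, max] bands --
    // the six integers the threshold function wants.
    static boolean inThreshold(int[] hsl, int hMin, int hMax,
                               int sMin, int sMax, int lMin, int lMax) {
        return hsl[0] >= hMin && hsl[0] <= hMax
            && hsl[1] >= sMin && hsl[1] <= sMax
            && hsl[2] >= lMin && hsl[2] <= lMax;
    }
}
```

With this scaling, a saturated green pixel lands around hue 85, which matches the kind of 60–80 hue window people quote for green LED rings.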
On a side note, does anybody know how to make the classmate display a live binary image instead of the normal camera image?
Here’s a video of Team 319’s camera tracker.
and here are the settings:
threshold: 60, 80, 0, 10, 220, 255
Note there is only one color, where the old code was built for two.
Also, the target layout, with a dot on the peg and a strip above and beneath it, caused some issues with false positives for me.
If you want any other info, just ask.
I’m assuming then the domains are 0-255?
Also, I’m entirely new to image processing (I taught myself a bit just today, and I’m working with what I know to write my code), and I was wondering how to get the camera to identify the rectangular strips above and below the peg as part of the peg as well. Right now my code would (hopefully; it’s not yet tested) track just the peg in the middle. I feel like tracking the strips above and below the peg would be useful for zeroing in on it more precisely.
Do I have to separately identify the rectangles above and below and check their x positions in the image relative to the peg reflector? I’m starting to get ideas for how to do this as I type, but any help would be greatly appreciated.
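That idea can be sketched pretty directly: once the binary image has been run through particle analysis and you have a centroid for each particle, a candidate is probably the peg dot if some other particle sits above it and another below it at roughly the same x. Everything here is hypothetical, the class name, the tolerance, and the centroid arrays, since the real centroids would come from the IMAQ/WPILib particle reports, not from plain arrays.

```java
// Sketch: given particle centroids (x, y) from a binary image, decide whether
// a candidate particle looks like the peg dot -- i.e. there is one particle
// above it and one below it at roughly the same x position. The names and the
// tolerance are assumptions; real centroids come from IMAQ particle analysis.
public class PegFinder {
    static final int X_TOLERANCE = 15; // pixels; tune for your camera resolution

    // candidate is an index into xs/ys; returns true if some other particle
    // lies above and another lies below it within X_TOLERANCE horizontally.
    static boolean looksLikePeg(int candidate, int[] xs, int[] ys) {
        boolean above = false, below = false;
        for (int i = 0; i < xs.length; i++) {
            if (i == candidate) continue;
            if (Math.abs(xs[i] - xs[candidate]) > X_TOLERANCE) continue;
            if (ys[i] < ys[candidate]) above = true;      // image y grows downward
            else if (ys[i] > ys[candidate]) below = true;
        }
        return above && below;
    }
}
```

A side benefit of this check is that it doubles as a false-positive filter: a stray reflection with nothing aligned above and below it simply fails the test.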
For the most part, the Java imaging classes are wrappers around the IMAQ libraries. The best manual for getting an overview of vision is the Vision Concepts manual. The best reference manual for you would be the C or CVI function reference manual. I’d assume the manuals are installed under National Instruments/Vision/Documentation or somewhere similar.
Also, you may find it useful to read the tutorial on http://decibel.ni.com/content/community/first/frc?view=all#/?tagSet=1001 about vision targets.
This stuff is awesome. It’s a shame that all the documentation for FRC is scattered across the web. I assume many teams will miss great, time-saving docs like this.