Target tracking in Java

This year’s target tracking is really hard for us (not much documentation either). I get that you grab the image and use a threshold to find the target, but we have no idea how to do all that in Java. So far this is what we have:


ColorImage image = AxisCamera.getInstance().getImage();
image.thresholdHSL(hueLow, hueHigh, saturationLow, saturationHigh, luminanceLow, luminanceHigh);

I have no idea what to do next. I’m assuming there’s a lot more to do still. I did read a couple of white papers from another thread, but all of that just went over my head. I’m not asking for someone to just post their code, but to explain how all this works. I’d really appreciate your help.

Also, will FIRST release sample code for Java this year like they did in 2009?

Here is our Github repo if anyone is interested: https://github.com/Neal/Team1777

The basic workflow of processing an image using the NI functions is like this:

Get image -> Threshold to binary image -> get particle analysis reports -> pick out particles that look like rectangles

So, the next step for you would be to change your code to something like this

ColorImage image = AxisCamera.getInstance().getImage();
BinaryImage binaryImage = image.thresholdHSL(hueLow, hueHigh, saturationLow, saturationHigh, luminanceLow, luminanceHigh);

This will let you store the binary image that the thresholding creates. From there, you can call the getOrderedParticleAnalysisReports function to get the particle analysis reports and go from there. One thing I did to make debugging easier was to write the images to the cRIO’s memory, then FTP in and look at them to make sure my thresholds were good. Each image has a write function, which takes the filename (with extension) as an argument.
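
Something like this, for instance. It’s just a sketch: the threshold constants and filenames are placeholders you’d tune and rename for your own setup.

import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.NIVisionException;

public class ThresholdDebug {
    // Placeholder thresholds: tune these against your own lighting.
    static final int HUE_LOW = 0, HUE_HIGH = 255;
    static final int SAT_LOW = 0, SAT_HIGH = 255;
    static final int LUM_LOW = 200, LUM_HIGH = 255;

    public static void saveDebugImages() throws AxisCameraException, NIVisionException {
        ColorImage image = AxisCamera.getInstance().getImage();
        BinaryImage binaryImage = image.thresholdHSL(HUE_LOW, HUE_HIGH,
                SAT_LOW, SAT_HIGH, LUM_LOW, LUM_HIGH);

        image.write("/original.jpg");        // raw camera frame
        binaryImage.write("/threshold.bmp"); // white pixels passed the threshold

        // The image buffers live in C memory, so free them when you're done.
        binaryImage.free();
        image.free();
    }
}

Then FTP into the cRIO and pull the two files down to see how good the threshold looks.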

That basic workflow kind of makes sense. At least more than what all those white papers did. So this is what I got:


ColorImage colImage = AxisCamera.getInstance().getImage();
BinaryImage binImage = colImage.thresholdHSL(hueLow, hueHigh, saturationLow, saturationHigh, luminanceLow, luminanceHigh);
ParticleAnalysisReport[] report = binImage.getOrderedParticleAnalysisReports();

I don’t get what to do with report, or how to “pick out particles that look like rectangles”.

Any examples?

Thanks for the previous explanation!
Neal

The class ParticleAnalysisReport has several public fields that give you useful information about particles. A “particle”, often referred to in image processing as a “blob”, is a group of contiguous pixels.

A single report details the information about one particle: where it’s located on the screen, its bounding box (the smallest box that contains all the pixels of the particle), its size, and a few other useful things. The particleArea field, for example, is the number of pixels in the particle.
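
For example, here’s a quick sketch that loops over the reports and prints the fields mentioned above. The field names come straight from the ParticleAnalysisReport class; everything else is just illustration.

import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

public class ReportDump {
    // Print the useful public fields for each particle, largest first.
    public static void printReports(BinaryImage binImage) throws NIVisionException {
        ParticleAnalysisReport[] reports = binImage.getOrderedParticleAnalysisReports();
        for (int i = 0; i < reports.length; i++) {
            ParticleAnalysisReport r = reports[i];
            System.out.println("particle " + i
                    + ": center (" + r.center_mass_x + ", " + r.center_mass_y + ")"
                    + ", box " + r.boundingRectWidth + "x" + r.boundingRectHeight
                    + " at (" + r.boundingRectLeft + ", " + r.boundingRectTop + ")"
                    + ", area " + r.particleArea);
        }
    }
}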

Greg’s whitepaper details a simple algorithm for finding rectangles: Threshold image -> apply “convex hull” operation -> find the best “rectangle scores”.

The rectangle score is the percentage of the bounding box’s area that is covered by particle pixels. The “convex hull” operation is a bit more complex to access; if you want to get at it, see JewishDan’s thread about accessing the C/C++ imaging functions from Java. But the particle properties alone should be enough to at least get you started.
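
As a sketch of the scoring idea (the 0.75 cutoff is just an illustrative guess, and note that the score only approaches 100% on a solid particle, i.e. after the convex hull has filled in the hollow target):

import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

public class RectangleScore {
    // Fraction of the bounding box that the particle's pixels actually fill.
    // A solid rectangle scores near 1.0; irregular blobs score lower.
    public static double rectangleScore(ParticleAnalysisReport r) {
        double boxArea = (double) r.boundingRectWidth * r.boundingRectHeight;
        return r.particleArea / boxArea;
    }

    // Keep only particles whose fill ratio looks rectangular.
    // The cutoff is a guess; tune it on your own test images.
    public static boolean looksLikeRectangle(ParticleAnalysisReport r) {
        return rectangleScore(r) > 0.75;
    }
}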

Here is the mentioned thread:

Comparing the bounding box area to the particle area of the convex hull is a good place to start.

Our team was using the Vision Assistant to try to create an algorithm. At first we tried using convex hull, but then realized that wouldn’t work (or at least we didn’t see any way for it to work) in FRC Java, so we rooted around in the API and found this field: particleQuality.

This is a ratio between a particle’s area and the pixels within the particle that aren’t “true” in the binary image. Fooling around with some pictures of a backboard, we found that square targets (like the ones we’re using) usually land within 35%-55%, while other bright things, like reflections and fluorescent lights, come in somewhere around 90%. Obviously these numbers depend somewhat on your thresholds, but we’ve managed to find the targets (and only the targets) pretty consistently, from almost any angle and distance.
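
In code, that check is just a window comparison on the particleQuality field. Something like this, assuming the field is reported as a percentage like the numbers above:

import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

public class QualityFilter {
    // The 35%-55% window is what we measured on our backboard pictures;
    // it depends on your thresholds, so measure it on your own images first.
    static final double QUALITY_LOW = 35.0;
    static final double QUALITY_HIGH = 55.0;

    // Hollow vision targets land in the window; solid bright blobs such as
    // lights and reflections (around 90%) get rejected.
    public static boolean isLikelyTarget(ParticleAnalysisReport r) {
        return r.particleQuality >= QUALITY_LOW && r.particleQuality <= QUALITY_HIGH;
    }
}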

Can anyone tell me why HSL is used? The camera image gives RGB, and I can easily find the RGB values of whatever I want to look at. But I don’t know the best way to get the HSL of a target, or why it’s done with HSL in the first place.

HSL is nice for finding white/bright objects, since you can threshold on the luminance value alone. If you’d rather use RGB, you can filter on that instead.
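
For example (all the numbers here are placeholders), finding bright objects in HSL only pins down the luminance channel, while the RGB version has to demand that all three channels be high at once:

import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.NIVisionException;

public class ThresholdExamples {
    // HSL: accept any hue and saturation, require high luminance.
    public static BinaryImage brightHSL(ColorImage img) throws NIVisionException {
        return img.thresholdHSL(0, 255, 0, 255, 200, 255);
    }

    // RGB: the same idea needs all three channels high at the same time.
    public static BinaryImage brightRGB(ColorImage img) throws NIVisionException {
        return img.thresholdRGB(200, 255, 200, 255, 200, 255);
    }
}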

Sounds reasonable. Thank you.
I may end up using RGB then (if HSL doesn’t work first). Because of possible weirdness, we’re planning on using colored LEDs to light up the retroreflective tape. I can’t see the robot finding green surrounded by black just anywhere. :stuck_out_tongue:

RGB and HSL are equivalent for identifying a color point. They differ when specifying a color volume.

RGB colors represent a color cube, with each axis generally running from 0 to 255. Any range of colors winds up selecting a smaller box inside that cube.

HSL colors represent a double cone (two ice cream cones) one right side up and one upside down with the circles touching. This seems like a very odd shape at first, but it is basically a cylinder where the circular end caps are shrunk to points.

The HSL range can describe circular disks, a circular ribbon and tilted circular shapes sort of like you’d get if you were being creative carving up a melon.

The types of color ranges you often want to specify are less sloppy in HSL, HSV, or HSI than in RGB. As an example, to include the saturated rainbow colors, you can use an HSL range such as (0-255, 225-255, 100-150); the shape that range describes is basically a donut.

To do the same operation in RGB, you no longer have a simple numeric range, but a function that looks at terms such as the average and spread of the RGB components, including the color if the average is between 150 and 200 and the difference is at least 150 or so. Even that may not be accurate enough, as it may include more pastel colors. Pretty soon you are doing the same work as converting the color to HSL using something like the Foley and van Dam conversion algorithm.
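
For reference, the standard RGB to HSL conversion (Foley and van Dam style) looks something like this in Java. Scaling all three results into 0-255 to match the threshold arguments is my assumption about the IMAQ convention, so double-check it against the docs:

public class RgbToHsl {
    // Convert 0-255 RGB to HSL, scaled into 0-255 per channel.
    public static int[] rgbToHsl(int r, int g, int b) {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.max(rf, Math.max(gf, bf));
        double min = Math.min(rf, Math.min(gf, bf));
        double l = (max + min) / 2.0;   // lightness: midpoint of the extremes
        double h = 0.0, s = 0.0;        // gray pixels have no hue/saturation
        if (max != min) {
            double d = max - min;
            s = (l <= 0.5) ? d / (max + min) : d / (2.0 - max - min);
            if (max == rf)      h = (gf - bf) / d;        // between yellow and magenta
            else if (max == gf) h = 2.0 + (bf - rf) / d;  // between cyan and yellow
            else                h = 4.0 + (rf - gf) / d;  // between magenta and cyan
            h *= 60.0;                  // to degrees, 0-360
            if (h < 0.0) h += 360.0;
        }
        return new int[] { (int) (h / 360.0 * 255.0),
                           (int) (s * 255.0),
                           (int) (l * 255.0) };
    }
}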

Speaking of which, I have heard that the HSL and HSV implementations in IMAQ are quite different in terms of performance and accuracy. I haven’t tested in a while to see if that is still the case.

Greg McKaskle