Hello, this year my team is looking into using camera tracking more than last year. We are using Java, and I already understand the process for on-robot tracking. Now I am looking into laptop-based tracking; I already know how to create a WPI camera extension.
Using some code provided by Team 237 I have made some progress. The biggest problem right now is thresholding. In their code they have:
WPIBinaryImage red = image.getRedChannel().getThreshold(64),
               green = image.getGreenChannel().getThreshold(64),
               blue = image.getBlueChannel().getThreshold(64);
WPIBinaryImage threshold = red.getAnd(green).getAnd(blue);
I was under the impression that thresholds required a range of values. How is each channel using only one value?
A location for some sort of API documentation would also be appreciated.
from WPIGrayscaleImage.java (from the WPIJavaCV project):
/**
 * Returns a black and white image where every pixel that is higher (in the 0-255 scale) than the given threshold is <bold>white</bold>,
 * and everything below is <bold>black</bold>.
 * @param threshold a value 0-255. If a pixel has a value below the threshold, it becomes black;
 * if the pixel value is above or equal to the threshold, the pixel becomes white.
 * @return a new {@link WPIBinaryImage} that represents the threshold
 */
public WPIBinaryImage getThreshold(int threshold) {
    validateDisposed();
    IplImage bin = IplImage.create(image.cvSize(), 8, 1);
    cvThreshold(image, bin, threshold, 255, CV_THRESH_BINARY);
    return new WPIBinaryImage(bin);
}
It says that getThreshold() is black for [0,threshold) and white for [threshold, 255] - though, if you look at the OpenCV documentation for cvThreshold, it says that it’s black for [0,threshold] and white for (threshold,255]. Either way, white is somewhere above, black is somewhere below.
The companion function getThresholdInverted() simply reverses the ranges, and if you compare getThreshold() and getThresholdInverted() from the same image with the same threshold, they should be perfect negatives.
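To see the complementary behavior concretely, here is a plain-Java sketch (no WPIJavaCV dependency, not the library's actual implementation) of the per-pixel logic described above, using the javadoc's >= convention. The sample pixel values and the 128 cutoff are made up for illustration:

```java
import java.util.Arrays;

public class ThresholdDemo {
    // White (255) for pixels at or above the threshold, black (0) otherwise,
    // matching the WPIGrayscaleImage javadoc convention.
    static int[] threshold(int[] pixels, int t) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = pixels[i] >= t ? 255 : 0;
        }
        return out;
    }

    // The inverted variant: white for pixels below the threshold.
    static int[] thresholdInverted(int[] pixels, int t) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = pixels[i] >= t ? 0 : 255;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] gray = {0, 64, 127, 128, 200, 255};
        int[] a = threshold(gray, 128);          // {0, 0, 0, 255, 255, 255}
        int[] b = thresholdInverted(gray, 128);  // {255, 255, 255, 0, 0, 0}
        // Every pixel is white in exactly one of the two results,
        // so the two outputs are perfect negatives of each other.
        System.out.println(Arrays.toString(a));
        System.out.println(Arrays.toString(b));
    }
}
```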
Thanks for the info. I think I understand how the thresholds work, but I'm not clear on how to isolate a color. I can see using a combination of inverted and normal thresholds to do this, but in my testing I have had very little success. This is partly because I'm not sure how to combine them. Team 237 used a variant of the code below, but I don't see much logic in how it executes.
There is also a .getOr() function which I might be able to use.
All the functions you’re asking about are in the WPIJavaCV project, which should be in the SmartDashboard repository (I see you found it in another thread). getAnd() applies a logical AND to each pixel in 2 binary images:
on AND on = on
on AND off = off
off AND on = off
off AND off = off
getOr() does the same thing but with an OR operation:
on OR on = on
on OR off = on
off OR on = on
off OR off = off
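The truth tables above translate directly into per-pixel code. Here is a plain-Java sketch (not the WPIJavaCV implementation) of what getAnd() and getOr() do, with 0 = off/black and 255 = on/white; the array contents are made-up examples:

```java
import java.util.Arrays;

public class CombineDemo {
    // Pixel-wise AND: on only where both inputs are on.
    static int[] and(int[] a, int[] b) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (a[i] != 0 && b[i] != 0) ? 255 : 0;
        }
        return out;
    }

    // Pixel-wise OR: on where either input is on.
    static int[] or(int[] a, int[] b) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (a[i] != 0 || b[i] != 0) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {255, 255, 0, 0};
        int[] b = {255, 0, 255, 0};
        System.out.println(Arrays.toString(and(a, b))); // [255, 0, 0, 0]
        System.out.println(Arrays.toString(or(a, b)));  // [255, 255, 255, 0]
    }
}
```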
So to do a threshold of red between [100,150] and green outside [75,100], you’d do:
WPIGrayscaleImage red = image.getRedChannel(),
                  green = image.getGreenChannel();
WPIBinaryImage redThresh = red.getThreshold(100).getAnd(red.getThresholdInverted(150)),
               greenThresh = green.getThresholdInverted(75).getOr(green.getThreshold(100));
WPIBinaryImage threshold = redThresh.getAnd(greenThresh);
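Putting the pieces together: to isolate a single strong color like red, you AND a high threshold on the red channel with inverted (low) thresholds on green and blue. Here is a plain-Java sketch of that per-pixel logic (not WPIJavaCV itself); the cutoffs 150 and 100 are arbitrary illustration values, not tuned numbers:

```java
import java.util.Arrays;

public class RedIsolationDemo {
    // 255 where red >= redMin AND green < otherMax AND blue < otherMax.
    // In WPIJavaCV terms this corresponds to something like:
    //   red.getThreshold(redMin)
    //      .getAnd(green.getThresholdInverted(otherMax))
    //      .getAnd(blue.getThresholdInverted(otherMax))
    static int[] isolateRed(int[] r, int[] g, int[] b, int redMin, int otherMax) {
        int[] out = new int[r.length];
        for (int i = 0; i < r.length; i++) {
            out[i] = (r[i] >= redMin && g[i] < otherMax && b[i] < otherMax) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        // Three sample pixels: bright red, white-ish, dark.
        int[] r = {200, 220, 30};
        int[] g = {40, 210, 20};
        int[] b = {35, 215, 25};
        int[] mask = isolateRed(r, g, b, 150, 100);
        System.out.println(Arrays.toString(mask)); // [255, 0, 0]
    }
}
```

The white-ish pixel is rejected even though its red value is high, because its green and blue values are also high; that is why the low thresholds on the other channels matter.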