I've read several posts about the white papers (which I've read) and other math-based approaches to tracking, which I understand. But how am I supposed to actually measure the targets in my code? I found plenty of material about the images themselves, but how do I analyze them in my code?
What point are you at? Do you have an image from the camera? If you are more specific, we can tell you what the next step is.
So I’ve been messing around with it for a bit and so far all I managed to get is this:
try {
    ColorImage img = camera.getImage();
    // Threshold the image (HSL ranges) to isolate the target color
    BinaryImage bImg = img.thresholdHSL(0, 255, 0, 20, 239, 255);
    // Do convex hull??? (supposed to go here)
    for (int i = 0; i < bImg.getNumberParticles(); i++) {
        ParticleAnalysisReport report = bImg.getParticleAnalysisReport(i);
        // Identify the vision targets in here using the reports
    }
    // Work with the vision targets
} catch (AxisCameraException ex) {
    ex.printStackTrace();
} catch (NIVisionException ex) {
    ex.printStackTrace();
}
But the binary image needs to be modified before taking the particle analysis report. More specifically, a convex hull operation is needed to turn the hollow rectangles into filled ones (and to ensure each target shows up as a single particle instead of potentially several). At least, that's what the Vision Tracking PDF says to do. However, the convex hull operation doesn't appear to be implemented in the Java wrapper; I found this line in NIVision.java, line 521:
//IMAQ_FUNC int IMAQ_STDCALL imaqConvexHull(Image* dest, Image* source, int connectivity8);
That's the function I want, but it's commented out! Why? Does anybody know of a way to do a convex hull operation in Java, or is there a workaround?
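One workaround, while the convex hull call is unavailable from Java, is to skip the hull entirely and reason about each particle's bounding rectangle instead: a hollow rectangle of tape fills only the border of its bounding box, so the ratio of particle area to bounding-box area falls in a predictable band. This is a minimal sketch of that idea on plain numbers; the parameter names mirror fields of WPILib's ParticleAnalysisReport (particleArea, boundingRectWidth, boundingRectHeight), but the helper class itself and the threshold band are hypothetical, so tune them against your own images.

```java
// Sketch of a convex-hull-free "does this look like a hollow rectangle?"
// test. The class and the 0.2-0.7 band are illustrative assumptions,
// not part of WPILib.
public class RectangleScore {

    /** Particle area divided by the area of its bounding box. */
    public static double fillRatio(double particleArea,
                                   int boundingRectWidth,
                                   int boundingRectHeight) {
        double boxArea = (double) boundingRectWidth * boundingRectHeight;
        if (boxArea <= 0) {
            return 0.0;
        }
        return particleArea / boxArea;
    }

    /** A hollow tape rectangle should fill only part of its box. */
    public static boolean looksLikeHollowRectangle(double particleArea,
                                                   int w, int h,
                                                   double minFill,
                                                   double maxFill) {
        double ratio = fillRatio(particleArea, w, h);
        return ratio >= minFill && ratio <= maxFill;
    }

    public static void main(String[] args) {
        // Example: 60x40 px bounding box with an 8 px tape border.
        // Outer area 2400, inner hole 44x24 = 1056, tape area 1344.
        double tapeArea = 60 * 40 - 44 * 24;
        System.out.println(fillRatio(tapeArea, 60, 40));            // prints 0.56
        System.out.println(
            looksLikeHollowRectangle(tapeArea, 60, 40, 0.2, 0.7)); // prints true
    }
}
```

A fully solid particle (ratio near 1.0) or a thin streak (ratio near 0) would fail the band test, which also helps reject lights and reflections that survive the threshold.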
This is the first year our team is attempting camera tracking, by the way.
http://www.wbrobotics.com/javadoc/edu/wpi/first/wpilibj/image/package-summary.html
Yeah, I'm stuck too; this is my first year using vision tracking.
If you don't tell us what's wrong, we can't help. Where are you stuck?
I'm stuck on actually analyzing the images. So far I have a line doing this:
ColorImage image = AxisCamera.getInstance().getImage();
By the way, team 20? We won WPI with you a year ago. Remember team GUS 228?
Haha yes, I remember, it was an awesome regional. I wish we were going back, but we’re doing GSR and CT this year.
So from there, you have an image object. I would recommend printing out the height and width of the image to make sure your camera is hooked up properly. The traditional route is then to threshold the image to get a binary image, which should ideally contain only the rectangle. WPILib has a rectangle detection function, which should work for basic cases. Once you have the rectangle, you get its height and width and can do some trigonometry to figure out how far away it is. I would start by figuring out the thresholding functions; play around with them and see what they do. If you have any questions, please ask!
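The trigonometry step above can be sketched with a simple pinhole-camera model: at distance d, the frame spans 2·d·tan(fov/2) vertically, and the target occupies the same fraction of that span as it does of the image in pixels. Solving for d gives the helper below. The class name, and the field-of-view and target-height numbers in the comments, are illustrative assumptions; use your camera's actual FOV and the target dimensions from the game manual.

```java
// Hypothetical helper: estimate distance to a target of known real-world
// height from how many pixels tall it appears in the image.
public class TargetDistance {

    /**
     * Pinhole-camera distance estimate.
     * The frame at distance d covers 2 * d * tan(fov/2) of real-world
     * height, and the target fills targetPixels / imagePixels of it, so:
     *   d = realHeight / (2 * tan(fov/2) * targetPixels / imagePixels)
     */
    public static double distanceInches(double realHeightInches,
                                        double targetPixels,
                                        double imagePixels,
                                        double verticalFovDegrees) {
        double fovRad = Math.toRadians(verticalFovDegrees);
        double fractionOfFrame = targetPixels / imagePixels;
        return realHeightInches / (2.0 * Math.tan(fovRad / 2.0) * fractionOfFrame);
    }

    public static void main(String[] args) {
        // Assumed example: 18 in tall target, 60 px tall in a 240 px
        // frame, camera with a 47 degree vertical FOV.
        System.out.println(distanceInches(18.0, 60.0, 240.0, 47.0));
    }
}
```

A quick sanity check: if the target appears twice as tall in pixels, the estimate should halve, which is a useful property to verify before trusting the numbers on the field.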
OK, so play with the thresholds. But is there a way I can see it in real time while running it?
And where is the rectangle detector? I could not find it.
Sorry, I'm a real noob at Java; I haven't had much teaching.
I have no experience with it, but the LabVIEW code lets you play with threshold values as you operate the camera. You should check the LabVIEW subforum for this; there are some people from NI who know much, much more about it than I do.
I'm sorry, but the "detect rectangles" code is only in C++ and LabVIEW… I'm relatively sure that if sample code is posted like it is most years, it will contain rectangle detection code (that is, if sample code is posted). Once you find a good threshold, you should look at the particle reports, since those are what you will be analyzing. You can get the bounding rectangle of a particle and, since you know the width of the tape, the expected filled area, then compare that to the particle's actual filled area. But I would concentrate on the thresholding for now.
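The expected-area comparison mentioned above is just geometry: a hollow tape rectangle's filled area is the outer rectangle minus the inner hole left inside the tape border. Here is a small sketch of that calculation; the class name is hypothetical and the 24 in x 18 in / 4 in numbers in the comments are placeholders, so take the real target and tape dimensions from the game manual.

```java
// Hypothetical helper: predict how much of its bounding box a hollow
// rectangular tape target should fill, given target and tape dimensions.
public class ExpectedArea {

    /**
     * Filled area of a hollow rectangle: outer area minus the inner
     * hole. If the tape border is wide enough to meet in the middle,
     * the whole rectangle is filled.
     */
    public static double expectedTapeArea(double outerW, double outerH,
                                          double tapeWidth) {
        double innerW = outerW - 2.0 * tapeWidth;
        double innerH = outerH - 2.0 * tapeWidth;
        if (innerW <= 0 || innerH <= 0) {
            return outerW * outerH;
        }
        return outerW * outerH - innerW * innerH;
    }

    /** Fraction of the bounding box the tape should fill. */
    public static double expectedFillFraction(double outerW, double outerH,
                                              double tapeWidth) {
        return expectedTapeArea(outerW, outerH, tapeWidth)
                / (outerW * outerH);
    }

    public static void main(String[] args) {
        // Placeholder dimensions: 24 x 18 target, 4 in tape border.
        // Outer 432, inner hole 16 x 10 = 160, tape area 272.
        System.out.println(expectedTapeArea(24.0, 18.0, 4.0));     // prints 272.0
        System.out.println(expectedFillFraction(24.0, 18.0, 4.0));
    }
}
```

Because both the expected fraction and the measured particleArea / bounding-box ratio are scale-free, the comparison works at any distance, which is what makes it a useful filter.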
Everyone is a noob sometime; that's what these forums (and FIRST) are for.