View Full Version : Tracking Rectangles with Java/C++
Hi Guys,
So I'm from team 1382 and we are trying to figure out how to track the rectangles. We realized that it is really hard, so instead of writing the particle processing ourselves, we are looking for any kind of library or function, since we have never used the camera for tracking before and this will be the first time, and using LabVIEW isn't in our plans.
I know there are already a lot of threads about rectangle tracking; I'm creating this one specifically for Java and C++ users who are having the same problem, because LabVIEW has a block to manage it.
Thanks ,
Cesar Javaroni
ETEP Team 1382
Brazilian Team
We need a little help too with how to get the images. We are using Java...
Hi Steve,
So getting the image is the easiest thing to do; once you have configured your cam, the WPI camera classes manage it. Take a look at this:
- http://first.wpi.edu/Images/CMS/First/WPI_Robotics_Library_Users_Guide.pdf
Starting at page 53.
I think that's all, but we are really having trouble finding the rectangles.
Greg McKaskle
09-01-2012, 18:59
Even if you intend to do the processing using a different library or a different language, it may still be useful to use the NI Vision Assistant to acquire a variety of images, and then try different algorithms to see what is helpful. You can then review the reference or concept manual, and it will often tell you the algorithm used if you would like to translate to OpenCV or another library.
Greg McKaskle
davidthefat
09-01-2012, 19:00
If you really want, try looking into Hough transformations. The images are encoded as JPEG; you can just decode them and go from there. That is my plan so far, but it looks like we won't need a camera this year.
Perhaps once we get a suitable laptop to code and test with, I'll post an example up next week or whenever.
I am able to write the code in Java and port it to C++, but the C++ port will not be tested because I do not want to risk flashing the cRIO too much.
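For anyone who wants to see what the Hough approach looks like in code before committing to it, here is a minimal plain-Java line-detection sketch. It uses no libraries; the angular resolution (90 bins) and the synthetic one-line test image are purely illustrative, not tuned for real camera frames:

```java
public class HoughLines {
    // Accumulate votes in (theta, rho) space for every "on" pixel of a binary
    // edge image. thetaSteps angular bins cover [0, pi); rho is offset by
    // maxRho so array indices are non-negative.
    public static int[][] accumulate(boolean[][] img, int thetaSteps) {
        int h = img.length, w = img[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[thetaSteps][2 * maxRho + 1];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!img[y][x]) continue;
                for (int t = 0; t < thetaSteps; t++) {
                    double theta = Math.PI * t / thetaSteps;
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    acc[t][rho + maxRho]++;
                }
            }
        return acc;
    }

    // Return {thetaIndex, rhoIndex, votes} of the strongest bin; in a real
    // pipeline you would take the top N peaks instead of just one.
    public static int[] peak(int[][] acc) {
        int[] best = {0, 0, -1};
        for (int t = 0; t < acc.length; t++)
            for (int r = 0; r < acc[t].length; r++)
                if (acc[t][r] > best[2])
                    best = new int[]{t, r, acc[t][r]};
        return best;
    }

    public static void main(String[] args) {
        // Synthetic 20x20 edge image with a vertical line at x = 5.
        boolean[][] img = new boolean[20][20];
        for (int y = 0; y < 20; y++) img[y][5] = true;
        int[] best = peak(accumulate(img, 90));
        System.out.println(best[0] + " " + best[2]); // → 0 20
    }
}
```

The vertical line collects all 20 of its pixels' votes in the theta = 0 bin (line normal along x), which is why the peak has 20 votes. Rectangle sides would show up as four such peaks.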
SpikeyBot293
09-01-2012, 19:03
Yeah, my team is in the same boat. We're using C++ and there aren't any handy examples for tracking like there are in LabVIEW (almost makes me want to switch back :ahh: ). I briefly skimmed some sample code that would get the image, but, as said before, the real trouble is going to be tracking those rectangles...
My team will be testing the camera throughout the next couple weeks so hopefully we'll think of something. But if anyone has any breakthroughs, please post them here! Any help would be greatly appreciated :)
Derek012
09-01-2012, 20:05
If your team isn't 100% against using LabVIEW, you could do the image processing from the Classmate's Dashboard by connecting the camera through the bridge instead of the cRIO.
My team is going to be putting a small Atom powered computer on our robot to do vision processing and other high-level functions. We are going to be using the javacv library to utilize OpenCV in Java. We experimented with vision processing on the cRIO last year, but we found that it was very slow and often lagged the rest of the robot functions. You can easily (I wrote a demo program in ~10 minutes) detect edges/contours with OpenCV and from there decide whether or not the contours make the rectangle you're looking for or not.
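As an illustration of the "decide whether the contours make the rectangle" step mentioned above, here is a plain-Java sketch that checks whether four candidate corner points (e.g. what you would get out of OpenCV's approxPolyDP after finding contours) form an approximate rectangle. The corner ordering and the tolerance value are illustrative assumptions:

```java
public class RectangleCheck {
    // Returns true if the four corners (given in order around the shape) form
    // an approximately right-angled quadrilateral: at each corner, the cosine
    // of the angle between the two adjacent sides must be within cosTol of 0.
    public static boolean isRectangle(double[][] c, double cosTol) {
        for (int i = 0; i < 4; i++) {
            double[] p0 = c[i], p1 = c[(i + 1) % 4], p2 = c[(i + 2) % 4];
            double ax = p0[0] - p1[0], ay = p0[1] - p1[1]; // side p1 -> p0
            double bx = p2[0] - p1[0], by = p2[1] - p1[1]; // side p1 -> p2
            double cos = (ax * bx + ay * by)
                       / (Math.hypot(ax, ay) * Math.hypot(bx, by));
            if (Math.abs(cos) > cosTol) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] square = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};
        double[][] skewed = {{0, 0}, {10, 0}, {14, 10}, {4, 10}}; // parallelogram
        System.out.println(isRectangle(square, 0.1)); // true
        System.out.println(isRectangle(skewed, 0.1)); // false
    }
}
```

In practice you would loosen cosTol to account for perspective skew when the robot views the target at an angle.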
davidthefat
09-01-2012, 20:19
My team is going to be putting a small Atom powered computer on our robot to do vision processing and other high-level functions. We are going to be using the javacv library to utilize OpenCV in Java. We experimented with vision processing on the cRIO last year, but we found that it was very slow and often lagged the rest of the robot functions. You can easily (I wrote a demo program in ~10 minutes) detect edges/contours with OpenCV and from there decide whether or not the contours make the rectangle you're looking for or not.
Good luck doing that.
May I suggest using Linux without an X server and just going with C++ to bypass the JVM.
My question now is:
Will OpenCV work normally on the cRIO, and will it take a lot of processing power from it?
We often write some fairly large programs to avoid errors, and they already take up a bit of the cRIO, but if there is anything we can use to process these images :/
Using any kind of notebook, netbook, whatever is out of the question, since we don't have enough money for it.
Thanks.
abrightwell
10-01-2012, 07:24
Have you read through this whitepaper?
http://firstforge.wpi.edu/sf/docman/do/downloadDocument/projects.wpilib/docman.root/doc1302
While it doesn't talk about specific functions or library calls, it does cover different techniques specifically related to this year's game.
Greg McKaskle
10-01-2012, 08:00
Just so there is no misunderstanding, the vision libraries that LabVIEW uses are equally accessible from C/C++. The NI product for C development is called LabWindows CVI, so the vision documentation with the CVI suffix is all about the C/C++ entry points.
C:\Program Files\National Instruments\Vision\Documentation contains general documentation about vision processing and LV and C/C++ specific documents and others specific to Vision Assistant. It also has a calibration grid file if you need to correct the images for lens distortion.
The libraries are installed on the cRIO and if you have installed Vision Assistant, I believe you have them on the laptop as well.
It has been a while since I've looked at the WPI wrappers, and at least initially, they tended to hide the imaq entry points rather than simplify them. Perhaps the examples for C-based image processing will be helpful. They are located at C:\Program Files\National Instruments\Vision\Examples\MSVC.
NI doesn't have Java wrappers for the C libraries, or for much else. I know some have been added to WPILib, but it is far from complete. If you are more familiar with OpenCV, that is certainly a good option. I wouldn't expect much of a difference in performance or capabilities, but both libraries have their specializations and benefits. If you search online, you can probably find some comparisons.
Once again, I'd also encourage you to take advantage of vision assistant, its code generation features and the vision concept manual.
Greg McKaskle
JamesBrown
10-01-2012, 09:29
My team is going to be putting a small Atom powered computer on our robot to do vision processing and other high-level functions. We are going to be using the javacv library to utilize OpenCV in Java. We experimented with vision processing on the cRIO last year, but we found that it was very slow and often lagged the rest of the robot functions. You can easily (I wrote a demo program in ~10 minutes) detect edges/contours with OpenCV and from there decide whether or not the contours make the rectangle you're looking for or not.
I won't say that this is a bad idea, but I have seen quite a few teams try something like this and have it cause them more trouble than it is worth. The cRIO is powerful enough to handle vision and motor control; there are thousands of industrial manipulators doing both on the same hardware FIRST uses.
Good luck doing that.
May I suggest using Linux without an X server and just going with C++ to bypass the JVM.
While it would reduce the complexity of setting up the board (computer) they use to some degree, it is probably far easier to spend an extra hour configuring their OS of choice than it would be to switch from Java to C++.
My question now is:
Will OpenCV work normally on the cRIO, and will it take a lot of processing power from it?
We often write some fairly large programs to avoid errors, and they already take up a bit of the cRIO, but if there is anything we can use to process these images :/
Using any kind of notebook, netbook, whatever is out of the question, since we don't have enough money for it.
Thanks.
Look at the files mentioned in Greg's posts; they will give you a good idea of what is available for you to use on the cRIO.
Derek mentioned above that you could try to do the vision processing on the Classmate or whatever computer you use for an OI. This may be an option for offloading some of the processing; however, I'd imagine it would not work during Autonomous.
basicxman
11-01-2012, 19:22
Just so there is no misunderstanding, the vision libraries that LabVIEW uses are equally accessible from C/C++. The NI product for C development is called LabWindows CVI, so the vision documentation with the CVI suffix is all about the C/C++ entry points.
So I've spent the morning looking around nivision.h, the "NI Vision for LabWindows/CVI Function Reference Help", and the old ellipse tracking code to figure out tracking rectangles in C++. Here's a snippet of [entirely untested as I don't currently have access to a cRIO] code.
void Camera::FindRectangles() {
    HSLImage *rawImage = camera.GetImage();
    MonoImage *monoImage = rawImage->GetLuminancePlane();
    Image *image = monoImage->GetImaqImage();
    workingImageWidth = monoImage->GetWidth();
    workingImageHeight = monoImage->GetHeight();

    RectangleDescriptor rectangleDescriptor;
    rectangleDescriptor.minWidth = 0;
    rectangleDescriptor.minHeight = 0;
    rectangleDescriptor.maxWidth = workingImageWidth;
    rectangleDescriptor.maxHeight = workingImageHeight;

    int numCurrentMatches;
    RectangleMatch *temp = imaqDetectRectangles(image,
                                                &rectangleDescriptor,
                                                NULL, // Default curve options as per manual.
                                                NULL, // Default shape detection options.
                                                NULL, // (ROI) Whole image should be searched.
                                                &numCurrentMatches);

    // Free the images only after imaqDetectRectangles is done with them;
    // deleting monoImage earlier would invalidate the Image* it owns.
    delete monoImage;
    delete rawImage;

    matches->clear(); // Reuse the existing vector instead of leaking it.
    for (int i = 0; i < numCurrentMatches; i++)
        matches->push_back(temp[i]);
    imaqDispose(temp);
}
EDIT: CVI manual located at C:\Program Files (x86)\National Instruments\Vision\Documentation\VDM_CVI_User_Manual.pdf
scottbot95
12-01-2012, 19:37
Would I be correct in assuming that in your example code, Camera is a class you made?
basicxman
13-01-2012, 09:47
Would I be correct in assuming that in your example code, Camera is a class you made?
Yes, once we have our tracking code working completely and performing as we'd like, I'll likely post the full source code and an accompanying whitepaper.
wireties
13-01-2012, 09:57
I am able to write the code in Java and port it to C++, but the C++ port will not be tested because I do not want to risk flashing the cRIO too much.
Don't worry about it - it will take at least 10,000 erase cycles to wear out any one block.
My team is going to be putting a small Atom powered computer on our robot to do vision processing and other high-level functions. We are going to be using the javacv library to utilize OpenCV in Java. We experimented with vision processing on the cRIO last year, but we found that it was very slow and often lagged the rest of the robot functions. You can easily (I wrote a demo program in ~10 minutes) detect edges/contours with OpenCV and from there decide whether or not the contours make the rectangle you're looking for or not.
How exactly are you guys accomplishing that? Power connection, networking? Not sure how that would be done, as we're also thinking of doing it.
I'm working on this problem in Java. Am I crazy, or do they only make a method available for detecting ellipses and nothing else? Can we at least access the values for individual pixels? That way, if they give us nothing else, we could at least write our own image processing algorithms.
I believe OpenCV and the Axis camera's JPEG stream are your best shot.
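On the pixel-access question: as a sketch of the decode-the-JPEG-yourself route, here is plain Java using javax.imageio. Note the caveat: ImageIO is available on a desktop JVM (e.g. a laptop doing off-board processing), but as far as I know it is not part of the cRIO's Java ME VM, where you would have to go through the WPILib camera classes instead. The demo image is synthetic:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class PixelAccess {
    // Decode JPEG bytes (e.g. one frame fetched from the Axis camera) into a
    // BufferedImage whose individual pixels can then be read.
    public static BufferedImage decode(byte[] jpeg) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(jpeg));
    }

    // Unpack the red/green/blue components of the pixel at (x, y).
    public static int[] rgbAt(BufferedImage img, int x, int y) {
        int p = img.getRGB(x, y);
        return new int[]{(p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF};
    }

    // Demo: round-trip a solid green image through the JPEG codec and check
    // that green still dominates at the centre pixel despite lossy compression.
    public static boolean roundTripIsGreen() {
        try {
            BufferedImage src = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < 32; y++)
                for (int x = 0; x < 32; x++)
                    src.setRGB(x, y, 0x00CC00);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(src, "jpg", out);
            int[] rgb = rgbAt(decode(out.toByteArray()), 16, 16);
            return rgb[1] > 150 && rgb[0] < 100;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripIsGreen());
    }
}
```

Once you have per-pixel RGB like this, you can write any thresholding or shape-finding algorithm you want on top of it.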
scottbot95
15-01-2012, 15:54
basicxman, I tried the code you suggested and found that we were detecting a ridiculous number of rectangles (around 42 million). Do you have any idea why this is happening? Also, is there any documentation for the imaqDetectRectangles function?
RufflesRidge
15-01-2012, 16:02
basicxman, I tried the code you suggested and found that we were detecting a ridiculous number of rectangles (around 42 million). Do you have any idea why this is happening? Also, is there any documentation for the imaqDetectRectangles function?
You probably want to filter by brightness (luminance) or color (probably in HSL or HSV space) before trying to detect rectangles.
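To make the "filter by color first" advice concrete, here is a plain-Java sketch of an RGB-to-HSL conversion and threshold, which is the same idea as imaqThreshold on an HSL image. The hue/saturation/luminance ranges in the demo are made-up illustrative values; you would tune them against real images of the lit retro-reflective tape:

```java
public class HslThreshold {
    // Convert 0-255 RGB to {hue in degrees [0, 360), saturation [0, 1],
    // luminance [0, 1]} using the standard HSL formulas.
    public static double[] rgbToHsl(int r, int g, int b) {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.max(rf, Math.max(gf, bf));
        double min = Math.min(rf, Math.min(gf, bf));
        double l = (max + min) / 2.0, d = max - min;
        double h = 0, s = 0;
        if (d > 0) {
            s = d / (1 - Math.abs(2 * l - 1));
            if (max == rf)      h = 60 * (((gf - bf) / d) % 6);
            else if (max == gf) h = 60 * ((bf - rf) / d + 2);
            else                h = 60 * ((rf - gf) / d + 4);
            if (h < 0) h += 360;
        }
        return new double[]{h, s, l};
    }

    // Build a binary mask of the pixels whose hue, saturation, and luminance
    // all fall inside the given ranges; rectangle detection then runs on the
    // mask instead of the raw frame.
    public static boolean[][] threshold(int[][] rgb, double hLo, double hHi,
                                        double sLo, double lLo, double lHi) {
        boolean[][] mask = new boolean[rgb.length][rgb[0].length];
        for (int y = 0; y < rgb.length; y++)
            for (int x = 0; x < rgb[0].length; x++) {
                int p = rgb[y][x];
                double[] hsl = rgbToHsl((p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF);
                mask[y][x] = hsl[0] >= hLo && hsl[0] <= hHi
                          && hsl[1] >= sLo && hsl[2] >= lLo && hsl[2] <= lHi;
            }
        return mask;
    }

    public static void main(String[] args) {
        // A bright green pixel passes a green-hue filter; a dim grey one does not.
        int[][] img = {{0x00FF00, 0x303030}};
        boolean[][] mask = threshold(img, 90, 150, 0.5, 0.2, 0.9);
        System.out.println(mask[0][0] + " " + mask[0][1]); // → true false
    }
}
```

Running the detector on a mask like this is what collapses "42 million rectangles" down to the handful of bright target outlines.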
scottbot95
15-01-2012, 16:11
I have a sheet of printer paper with the target (with correct proportions) printed on it, except where the retro-reflective tape would be we just printed green. I am then extracting the green plane and going from there.
Ross3098
15-01-2012, 17:37
Our team is also trying to figure out how to track the rectangles. I've been spending a few hours looking over nivision.h as well as a few white papers, and I seem to have gotten this far:
m_ModifiedImage = m_HSLImage->ThresholdHSL(80,125,45,60,115,130);
ImaqImage = m_ModifiedImage->GetImaqImage();
The white paper about the vision targets talks about applying a convex hull operation to really help those rectangles pop out. The main problem I have at the moment is that I have no idea how to apply that operation in C++. I've found the imaqConvexHull() function but have no clue how to start with it. :(
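imaqConvexHull operates on particles in a binary image, and its exact C usage is best taken from nivision.h and the CVI docs, but the underlying operation is easy to illustrate on a plain set of points. Here is a self-contained Java sketch of the classic monotone chain algorithm (this is an illustration of the concept, not the imaq API); conceptually it is what turns the hollow tape outline into one solid quadrilateral:

```java
import java.util.Arrays;

public class ConvexHull {
    // Cross product of OA x OB; > 0 means a counter-clockwise turn at O.
    static long cross(int[] o, int[] a, int[] b) {
        return (long) (a[0] - o[0]) * (b[1] - o[1])
             - (long) (a[1] - o[1]) * (b[0] - o[0]);
    }

    // Andrew's monotone chain: sort by (x, y), build the lower then the upper
    // hull, dropping points that do not make a counter-clockwise turn.
    // Returns the hull vertices in counter-clockwise order.
    public static int[][] hull(int[][] pts) {
        int[][] p = pts.clone();
        Arrays.sort(p, (a, b) -> a[0] != b[0] ? Integer.compare(a[0], b[0])
                                              : Integer.compare(a[1], b[1]));
        int n = p.length;
        if (n < 3) return p;
        int[][] h = new int[2 * n][];
        int k = 0;
        for (int i = 0; i < n; i++) {                 // lower hull
            while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) k--;
            h[k++] = p[i];
        }
        for (int i = n - 2, t = k + 1; i >= 0; i--) { // upper hull
            while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) k--;
            h[k++] = p[i];
        }
        return Arrays.copyOf(h, k - 1);               // last point repeats the first
    }

    public static void main(String[] args) {
        // A square plus an interior point: the hull keeps only the 4 corners.
        int[][] pts = {{0, 0}, {10, 0}, {10, 10}, {0, 10}, {5, 5}};
        System.out.println(hull(pts).length); // → 4
    }
}
```

In the vision pipeline, the points would be the boundary pixels of a thresholded particle, and the resulting hull fills in the gap in the middle of the "U" so rectangle detection sees one solid shape.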
PriyankP
15-01-2012, 17:54
I have no idea how to apply that operation in C++. I've found the imaqConvexHull() function but have no clue how to start with it. :(
This (http://www.chiefdelphi.com/forums/showthread.php?threadid=100176) should help you get started! I'll be more helpful once I see what the code I wrote does when I get it to run on a robot.
basicxman
15-01-2012, 18:06
basicxman, I tried the code you suggested and found that we were detecting a ridiculous number of rectangles (around 42 million). Do you have any idea why this is happening? Also, is there any documentation for the imaqDetectRectangles function?
You probably want to filter by brightness (luminance) or color (probably in HSL or HSV space) before trying to detect rectangles.
Aye, if you generate C code from Vision Assistant it will call imaqThreshold too - something I forgot about in my original snippet.
Would this be how to use the detectRectangles function in Java? We looked at the ellipse detection and I am thinking that they do the same thing, just based on different descriptors.
private static final BlockingFunction imaqDetectRectanglesFn =
    NativeLibrary.getDefaultInstance().getBlockingFunction("imaqDetectRectangles");
static { imaqDetectRectanglesFn.setTaskExecutor(NIVision.taskExecutor); }
private static Pointer numberOfRectanglesDetected = new Pointer(4);

public static RectangleMatch[] detectRectangles(MonoImage image, RectangleDescriptor rectangleDescriptor,
        CurveOptions curveOptions, ShapeDetectionOptions shapeDetectionOptions,
        RegionOfInterest roi) throws NIVisionException {
    int curveOptionsPointer = 0;
    if (curveOptions != null)
        curveOptionsPointer = curveOptions.getPointer().address().toUWord().toPrimitive();
    int shapeDetectionOptionsPointer = 0;
    if (shapeDetectionOptions != null)
        shapeDetectionOptionsPointer = shapeDetectionOptions.getPointer().address().toUWord().toPrimitive();
    int roiPointer = 0;
    if (roi != null)
        roiPointer = roi.getPointer().address().toUWord().toPrimitive();
    int returnedAddress =
        imaqDetectRectanglesFn.call6(
            image.image.address().toUWord().toPrimitive(),
            rectangleDescriptor.getPointer().address().toUWord().toPrimitive(),
            curveOptionsPointer, shapeDetectionOptionsPointer,
            roiPointer,
            numberOfRectanglesDetected.address().toUWord().toPrimitive());
    try {
        NIVision.assertCleanStatus(returnedAddress);
    } catch (NIVisionException ex) {
        if (!ex.getMessage().equals("No error."))
            throw ex;
    }
    RectangleMatch[] matches = RectangleMatch.getMatchesFromMemory(returnedAddress, numberOfRectanglesDetected.getInt(0));
    NIVision.dispose(new Pointer(returnedAddress, 0));
    return matches;
}
Ross3098
15-01-2012, 18:20
This (http://www.chiefdelphi.com/forums/showthread.php?threadid=100176) should help you get started! I'll be more helpful more once I see what the code I wrote does when I get it to run on a robot.
Correct me if I am wrong, but does this mean that the integer imaqConvexHull() returns is the score? Also, I am wondering whether the destination image is actually modified within the operation.
Ross3098
15-01-2012, 21:29
What is imaqConvexHull() returning? I have it returning to an integer, but what value does that integer hold?
Greg McKaskle
15-01-2012, 21:45
From the CVI documentation:
Return Value (int): On success, this function returns a non-zero value. On failure, this function returns 0. To get extended error information, call imaqGetLastError().
Greg McKaskle
pattyb112
16-01-2012, 13:24
Does anybody know how to perform the convex hull operations mentioned in the white paper in Java? It specifically states that C/C++ and LabVIEW can do it, but there is nothing about Java, and we are really scratching our heads over the problem.
If anyone could help us out on this problem it would be greatly appreciated. Thanks!
dvanvoorst
16-01-2012, 20:42
Would this be how to use the detectRectangles function in Java? We looked at the ellipse detection and I am thinking that they do the same thing, just based on different descriptors.
private static final BlockingFunction imaqDetectRectanglesFn =
    NativeLibrary.getDefaultInstance().getBlockingFunction("imaqDetectRectangles");
static { imaqDetectRectanglesFn.setTaskExecutor(NIVision.taskExecutor); }
private static Pointer numberOfRectanglesDetected = new Pointer(4);

public static RectangleMatch[] detectRectangles(MonoImage image, RectangleDescriptor rectangleDescriptor,
        CurveOptions curveOptions, ShapeDetectionOptions shapeDetectionOptions,
        RegionOfInterest roi) throws NIVisionException {
    int curveOptionsPointer = 0;
    if (curveOptions != null)
        curveOptionsPointer = curveOptions.getPointer().address().toUWord().toPrimitive();
    int shapeDetectionOptionsPointer = 0;
    if (shapeDetectionOptions != null)
        shapeDetectionOptionsPointer = shapeDetectionOptions.getPointer().address().toUWord().toPrimitive();
    int roiPointer = 0;
    if (roi != null)
        roiPointer = roi.getPointer().address().toUWord().toPrimitive();
    int returnedAddress =
        imaqDetectRectanglesFn.call6(
            image.image.address().toUWord().toPrimitive(),
            rectangleDescriptor.getPointer().address().toUWord().toPrimitive(),
            curveOptionsPointer, shapeDetectionOptionsPointer,
            roiPointer,
            numberOfRectanglesDetected.address().toUWord().toPrimitive());
    try {
        NIVision.assertCleanStatus(returnedAddress);
    } catch (NIVisionException ex) {
        if (!ex.getMessage().equals("No error."))
            throw ex;
    }
    RectangleMatch[] matches = RectangleMatch.getMatchesFromMemory(returnedAddress, numberOfRectanglesDetected.getInt(0));
    NIVision.dispose(new Pointer(returnedAddress, 0));
    return matches;
}
Where are RectangleMatch and RectangleDescriptor defined? I'd love to use this code to learn more about wrapping the IMAQ functions, but I'm stuck getting it to compile.
basicxman
16-01-2012, 20:43
Where are RectangleMatch and RectangleDescriptor defined? I'd love to use this code to learn more about wrapping the IMAQ functions, but I'm running stuck getting it to compile.
Check out nivision.h and the CVI documentation mentioned earlier.
RectangleMatch: http://mmrambotics.ca/wpilib/struct_rectangle_match__struct.html
RectangleDescriptor:
http://mmrambotics.ca/wpilib/struct_rectangle_descriptor__struct.html
basicxman
21-01-2012, 17:37
Here's a snippet of [entirely untested as I don't currently have access to a cRIO] code.
Do not use this snippet! After I finally got to experiment with the camera for a while today, I've fixed some flaws in that script and found a better way of doing it entirely.