Tracking Rectangles with Java/C++

Hi Guys,

I’m from Team 1382, and we’re trying to figure out how to track the rectangles. We’ve realized it’s really hard, so instead of building the particle-analysis process ourselves, we’re looking for any kind of library or function that can help. We’ve never used the camera for tracking before (this will be our first time), and using LabVIEW isn’t in our plans.

I know there are already a lot of threads about rectangle tracking; I’m creating this one specifically for Java and C++ users who are having the same problem, because LabVIEW has a block to handle it.

Thanks,

Cesar Javaroni
ETEP Team 1382
Brazilian Team
http://a1.twimg.com/profile_images/633580277/logo_etep-zord.jpg

We need a little help too with how to view the images. We are using Java…

Hi Steve,
Getting the image is the easiest part: once you have your camera configured, the WPILib camera classes manage it for you. Take a look at these:

Starting at page 53.

I think that’s all, but we are really having trouble finding the rectangles.
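Roughly, grabbing a frame looks like this in Java. This is an untested sketch written from memory, so check the class and method names against your WPILib version:

import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.NIVisionException;

public class CameraGrab {
    public void grabFrame() {
        try {
            // The camera must already be configured (IP, resolution, etc.).
            AxisCamera camera = AxisCamera.getInstance();
            if (camera.freshImage()) {       // a new frame is available
                ColorImage image = camera.getImage();
                // ... hand the image off to your processing code here ...
                image.free();                // NIVision images must be freed manually
            }
        } catch (AxisCameraException e) {
            e.printStackTrace();
        } catch (NIVisionException e) {
            e.printStackTrace();
        }
    }
}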

Even if you intend to do the processing using a different library or a different language, it may still be useful to use the NI Vision Assistant to acquire a variety of images, and then try different mathematical algorithms to see which are helpful. You can then review the reference or concept manual, which will often tell you the algorithm used, if you would like to translate it to OpenCV or another library.

Greg McKaskle

If you really want to, try looking into Hough transforms. The images are encoded as JPEG; you can just decode them and go from there. That is my plan so far, but it looks like we won’t need a camera this year.

Perhaps once we get a suitable laptop to code and test with, I’ll post an example next week or whenever.

I am able to write the code in Java and port it to C++, but the C++ port will not be tested because I do not want to risk flashing the cRIO too much.
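In the meantime, here’s a rough untested sketch of the decode step on a desktop JVM. It’s all standard-library code: read the JPEG and build the grayscale array you’d feed to an edge detector or Hough transform. (Note this is desktop Java; the cRIO’s Squawk VM doesn’t have ImageIO.)

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JpegDecode {
    public static void main(String[] args) throws IOException {
        // ImageIO handles JPEG natively, so decoding is one call.
        BufferedImage frame = ImageIO.read(new File("frame.jpg"));

        // Convert to a grayscale intensity array, the usual first step
        // before edge detection / a Hough transform.
        int w = frame.getWidth(), h = frame.getHeight();
        int[][] gray = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                gray[y][x] = (r + g + b) / 3;  // simple average is fine to start
            }
        }
        System.out.println("Decoded a " + w + "x" + h + " frame");
    }
}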

Yeah, my team is in the same boat. We’re using C++ and there aren’t any handy examples for tracking like there are in LabVIEW (almost makes me want to switch back :ahh: ). I briefly skimmed some sample code that would get the image, but, as said before, the real trouble is going to be tracking those rectangles…

My team will be testing the camera throughout the next couple of weeks, so hopefully we’ll think of something. But if anyone has any breakthroughs, please post them here! Any help would be greatly appreciated :slight_smile:

If your team isn’t 100% against using LabVIEW, you could do the image processing on the Classmate Dashboard by connecting the camera through the bridge instead of the cRIO.

My team is going to be putting a small Atom-powered computer on our robot to do vision processing and other high-level functions. We are going to be using the javacv library to utilize OpenCV in Java. We experimented with vision processing on the cRIO last year, but we found that it was very slow and often lagged the rest of the robot functions. You can easily (I wrote a demo program in ~10 minutes) detect edges/contours with OpenCV and from there decide whether the contours form the rectangle you’re looking for.
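For what it’s worth, the general shape of that approach looks like the sketch below. I’m writing against the plain OpenCV Java bindings rather than javacv, and it’s untested, so treat it as an illustration of the edges-then-contours idea rather than drop-in code:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class RectangleFinder {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        Mat gray = Imgcodecs.imread("frame.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // Find edges, then trace them into contours.
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50.0, 150.0);
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Approximate each contour with a polygon; four vertices is a
        // good hint that the contour is the rectangle you're looking for.
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double epsilon = 0.02 * Imgproc.arcLength(curve, true);
            Imgproc.approxPolyDP(curve, approx, epsilon, true);
            if (approx.total() == 4) {
                System.out.println("Possible rectangle: " + approx.toList());
            }
        }
    }
}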

Good luck doing that.

May I suggest using Linux without an X server and just going with C++ to bypass the JVM.

My question now is:
Will OpenCV work normally on the cRIO, and will it take a lot of processing power from it?

We often write some huge programs to avoid errors, and they already take a bit of the cRIO’s capacity, but if there’s anything out there we can use to process these images :confused:

Using any kind of notebook, netbook, whatever… is out of the question, since we don’t have enough money for it.

Thanks.

Have you read through this whitepaper?

http://firstforge.wpi.edu/sf/docman/do/downloadDocument/projects.wpilib/docman.root/doc1302

While it doesn’t talk about specific functions or library calls, it does discuss different techniques specifically related to this year’s game.

Just so there is no misunderstanding, the vision libraries that LabVIEW uses are equally accessible from C/C++. The NI product for C development is called LabWindows/CVI, so the vision documentation with the CVI suffix is all about the C/C++ entry points.

C:\Program Files\National Instruments\Vision\Documentation contains general documentation about vision processing, LabVIEW- and C/C++-specific documents, and others specific to Vision Assistant. It also has a calibration grid file if you need to correct the images for lens distortion.

The libraries are installed on the cRIO and if you have installed Vision Assistant, I believe you have them on the laptop as well.

It has been a while since I’ve looked at the WPI wrappers, and at least initially, they tended to hide the imaq entry points rather than simplify them. Perhaps the examples for C-based image processing will be helpful. They are located at C:\Program Files\National Instruments\Vision\Examples\MSVC.

NI doesn’t have Java wrappers for the C libraries or for much else. I know some have been added to WPILib, but it is far from complete. If you are more familiar with OpenCV, that is certainly a good option. I wouldn’t expect it to be much different in performance or capabilities, but both libraries have their specializations and benefits. If you search online, you can probably find some comparisons.

Once again, I’d also encourage you to take advantage of Vision Assistant, its code generation features, and the vision concept manual.

Greg McKaskle

I won’t say that this is a bad idea, but I have seen quite a few teams try something like this and have it cause them more trouble than it is worth. The cRIO is powerful enough to handle vision and motor control; there are thousands of industrial manipulators doing both on the same hardware FIRST uses.

While it will reduce the complexity of setting up the board (computer) they use to some degree, it is probably far easier to spend an extra hour configuring their OS of choice than it would be to switch from Java to C++.

Look at the files mentioned in Greg’s posts; they will give you a good idea of what is available for you to use on the cRIO.

Derek mentioned above that you could try to do the vision processing on the Classmate or whatever computer you use for an OI. This may be an option for offloading some of the processing; however, I’d imagine it would not work during Autonomous.

So I’ve spent the morning looking around nivision.h, the “NI Vision for LabWindows/CVI Function Reference Help”, and the old ellipse tracking code to figure out tracking rectangles in C++. Here’s a snippet of [entirely untested, as I don’t currently have access to a cRIO] code.


void Camera::FindRectangles() {
    HSLImage  *rawImage  = camera.GetImage();
    MonoImage *monoImage = rawImage->GetLuminancePlane();
    Image     *image     = monoImage->GetImaqImage();
    workingImageWidth  = monoImage->GetWidth();
    workingImageHeight = monoImage->GetHeight();

    // Accept rectangles of any size, up to the full image.
    RectangleDescriptor rectangleDescriptor;
    rectangleDescriptor.minWidth  = 0;
    rectangleDescriptor.minHeight = 0;
    rectangleDescriptor.maxWidth  = workingImageWidth;
    rectangleDescriptor.maxHeight = workingImageHeight;

    int numCurrentMatches;
    RectangleMatch *temp;
    temp = imaqDetectRectangles(image,
                                &rectangleDescriptor,
                                NULL, // Default curve options as per manual.
                                NULL, // Default shape detection options.
                                NULL, // (ROI) Whole image should be searched.
                                &numCurrentMatches);

    // Copy the results out of the imaq-allocated array, then dispose of it.
    matches->clear();
    if (temp != NULL) {
        for (int i = 0; i < numCurrentMatches; i++)
            matches->push_back(temp[i]);
        imaqDispose(temp);
    }

    // Free the working images only after detection; deleting monoImage
    // also disposes the underlying imaq Image that `image` points to.
    delete monoImage;
    delete rawImage;
}

EDIT: CVI manual located at C:\Program Files (x86)\National Instruments\Vision\Documentation\VDM_CVI_User_Manual.pdf

Would I be correct in assuming that in your example code, Camera is a class you made?

Yes, once we have our tracking code working completely and performing as we’d like, I’ll likely post the full source code and an accompanying whitepaper.

Don’t worry about it - it will take at least 10,000 erase cycles to wear out any one block.

How exactly are you guys accomplishing that? Power connection? Networking? Not sure how that would be done, as we’re also thinking of doing that.

I’m working on this problem in Java. Am I crazy, or do they only make a method available for detecting ellipses and nothing else? Can we at least access the values of individual pixels? That way, if they give us nothing else, we could at least write our own image processing algorithms.

I believe OpenCV and the Axis camera’s JPEG stream are your best shot.
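To make that concrete, here’s an untested sketch of pulling a single frame from the camera over HTTP and decoding it into an OpenCV Mat, where every pixel is accessible. The snapshot path below is the standard Axis VAPIX one and the IP is a placeholder, so verify both against your own setup. This runs on a laptop or coprocessor, not on the cRIO itself:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class CameraToOpenCV {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) throws IOException {
        // Axis VAPIX snapshot URL; substitute your camera's address.
        URL url = new URL("http://10.0.0.11/axis-cgi/jpg/image.cgi");

        // Pull the raw JPEG bytes over HTTP.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        InputStream in = url.openStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        in.close();

        // Decode into a Mat; individual pixel values are then one call away.
        Mat frame = Imgcodecs.imdecode(new MatOfByte(buffer.toByteArray()),
                                       Imgcodecs.IMREAD_COLOR);
        double[] bgr = frame.get(0, 0);  // B, G, R of the top-left pixel
        System.out.println("Frame " + frame.cols() + "x" + frame.rows()
                + ", pixel(0,0) = [" + bgr[0] + ", " + bgr[1] + ", " + bgr[2] + "]");
    }
}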