FRC 2011 Vision Tracking

Hey everyone! Happy build season! I had a really quick advice-related question about vision tracking this year. First, a bit of background on our programming team: last year it was just me working on the programming, using LabVIEW (and I'm still using it). It was also our team's first year, so I'm hoping to make good progress this year now that we have a bigger team.

Anyway, I was wondering if anyone could give me some basics on vision processing. I have already been to http://decibel.ni.com/content/docs/DOC-8923 for a bit of help, but I'm still confused about how the whole concept of vision tracking works. Could anyone explain the basics, and maybe share some techniques they've used in the past to program vision tracking? Thanks, and happy build days!

P.S. I finally have working servos, so I know I can implement moving the camera automatically.

Are you going to stick with LabVIEW, or are you open to Java? If you are open to Java, I can help you get started with the vision tracking. WPILib has a lot of support for the camera. Basically you calibrate the camera to see the target, and then it returns the size, x, and y of the target. You can use all three of these to approach the target.

You say you have servos, but I think a static camera may be easier to use. You do not need the camera moving around to use it, especially not for this game. It will only complicate things.

Use the y to tell how far you are from the target and the x to center your robot (having the ability to strafe helps a lot here). I would recommend putting your robot at the distance it needs to be to cap, then mounting the camera so the bottom target is at the top of the image.
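Here is a minimal Java sketch of that approach, assuming you already have the target's normalized x and y from a particle report. The class name, field meanings, gains, and setpoints are all illustrative, not from this thread.

    // Rough proportional steering from a vision target, as described above.
    // Assumes x and y are normalized image coordinates in [-1, 1], with x = 0
    // meaning the target is centered and y decreasing as the target rises in
    // the image. Gains and setpoints are placeholders to tune on the robot.
    public class TargetSteering {

        static final double TURN_GAIN = 0.5;    // how hard to turn per unit of horizontal error
        static final double TARGET_Y  = -0.8;   // where the bottom target sits at capping distance

        /** Positive output = turn right; a target left of center gives a negative command. */
        public static double turnCommand(double x) {
            return TURN_GAIN * x;
        }

        /** Positive output = keep driving forward until the target rises to the desired height. */
        public static double forwardCommand(double y) {
            double command = y - TARGET_Y;      // still below the desired spot -> keep driving
            return Math.max(-1.0, Math.min(1.0, command));
        }
    }

In a real loop you would feed these two values into your drive code and stop once both are near zero.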

Look at 2009's sample LabVIEW code; there may be a tracker in there. The code for that year tracked two colors, but it can be adjusted to track one. I know the tracker sample exists in Java; LabVIEW may be different.

The imaging libs, like almost everything else in WPILib, are almost identical in LV, C++, and Java. The library names and a few conventions are different, but otherwise they're the same.

As mentioned, the 2009 examples and other color tracking examples are applicable.

If you’ve read the tutorial, it mentions robot mounted lighting. Have you built one? Do your images look something like the ones in the tutorial? The next step is to use something such as color threshold or subtraction of an unlit image from a lit image. This refers to flashing the light near your own camera. With either of these approaches, you’ll have a binary image. Adjust the threshold or other factors until this isolates the targets. Note that when viewing a binary image, be sure to right-click on the image viewer and choose the palette for binary.
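For reference, here is roughly what that step looks like in Java with the WPILibJ image wrappers listed later in this thread. The class and method names (AxisCamera.getInstance, getImage, thresholdHSL, free) follow the 2011-era WPILibJ API as I recall it, and the HSL limits are placeholders you would tune against your own lit images.

    import edu.wpi.first.wpilibj.camera.AxisCamera;
    import edu.wpi.first.wpilibj.image.BinaryImage;
    import edu.wpi.first.wpilibj.image.ColorImage;

    // Grab a frame and color-threshold it into a binary image.
    public class ThresholdExample {
        public static void process() {
            AxisCamera camera = AxisCamera.getInstance();
            try {
                ColorImage image = camera.getImage();        // one frame from the Axis camera
                // Keep only pixels whose hue/saturation/luminance fall inside these bands.
                // With a light near the camera, the retroreflective tape should survive the cut.
                BinaryImage binary = image.thresholdHSL(0, 255, 0, 100, 200, 255);
                // ... particle analysis goes here (see below) ...
                binary.free();                               // the NI image buffers are not garbage collected
                image.free();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }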

Once you have particles, use the particle analysis report or similar technique for getting location, size, and other measurements from the particles in the image.
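In Java that measurement step might look like the sketch below, using the BinaryImage from the threshold step. The getOrderedParticleAnalysisReports call and the report field names are from the 2011-era WPILibJ as I remember it, so check them against your Javadoc.

    import edu.wpi.first.wpilibj.image.BinaryImage;
    import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

    // Pull location and size out of the thresholded image. The largest particles
    // come first, and after a good threshold those should be the reflective targets.
    public class ParticleExample {
        public static void printTargets(BinaryImage binary) throws Exception {
            ParticleAnalysisReport[] reports = binary.getOrderedParticleAnalysisReports(4);
            for (int i = 0; i < reports.length; i++) {
                ParticleAnalysisReport r = reports[i];
                System.out.println("x=" + r.center_mass_x_normalized   // -1 (left) to +1 (right)
                        + " y=" + r.center_mass_y_normalized           // -1 (top) to +1 (bottom)
                        + " area=" + r.particleArea);                  // bigger generally means closer
            }
        }
    }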

Also, I highly recommend reading portions of the Vision Concepts manual. It is located in Start>>National Instruments>>
and the last part of the path is either Vision>>Documentation, or just Documentation. It reviews the many different tools available, shows example images, and describes the conditions under which each technique works best. It also covers the various filters and processing for binary images, which may be useful for removing unwanted noise, etc.

Finally, feel free to ask more specifics here, including images or partially processed images.

Greg McKaskle

Thanks for this tip, I didn't even think of lit vs. unlit. I think a combination of the two will be the best way to go: the white inner tubes mimic the reflective tape, so you need the color threshold to tell the difference. If you're only doing it in autonomous, though, lit vs. unlit should suffice.

Hi Greg,

How well does the RGB thresholding work for these targets? I am worried different lighting/exposure levels may throw it off. Do you think using an IR light source and the appropriate filter over the CCD could yield better results? It would make the thresholding single channel and may be less susceptible to varying backgrounds and lighting conditions. Or, do you think this would be overkill?

Thanks in advance.

I had a lot of success with just using a standard light bulb and then changing the lighting environment. Luckily the reflective tape reflects back toward the light source, so when I used an HSL filter (just like the '09 code) it worked in several lighting environments. The only light that would affect you is one directly behind you at the level of your robot. So I think competition lighting won't be as much of an issue with this system.

The ambient lighting shouldn’t be that much of an issue.

That is based on the fact that these markers are in roughly the same place and at roughly the same orientation as the circles of last year, and that part of the field doesn’t catch much glare. It is bright, but the lights are coming from the long sides of the field.

The second reason is that the material specs claim 600X brightness for narrow angles. This means it reflects 600 times as much light back to the source as a white painted surface would. I'm sure the measurement specifics are much more technical than that, but it is very bright. Meanwhile, the ambient light or other spots will be returned to their sources. If they aren't right behind you, you should get a pretty pure reflection of your source.

The lights used in the tutorial were cheap Christmas lights, not too bright, and they were acceptable even with windows behind the targets. Depending on your light source color, I would experiment with the white balance and try to set the image exposure similar to how it was for '06. In other words, darken the image to where not much more than the reflected lights and active lights show up in the image. You may be able to do this with the auto exposure and brightness combo, but I believe that it will be best to use a custom exposure that results in markers being high intensity and high saturation. This will also speed the threshold processing since the luminance and saturation will then exclude more pixels before the hue is even calculated.
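As a concrete illustration of that last point, a threshold tuned for a darkened exposure can put high lower bounds on saturation and luminance so most pixels are rejected before the hue test. The thresholdHSL call is the 2011-era WPILibJ method, and every number below is a placeholder to tune for your light color.

    import edu.wpi.first.wpilibj.image.BinaryImage;
    import edu.wpi.first.wpilibj.image.ColorImage;

    // With the exposure turned down, the lit markers should be the only pixels that
    // are both bright and strongly colored, so the saturation and luminance bands
    // can be aggressive and the hue band only has to separate the light's color.
    public class TightThreshold {
        public static BinaryImage isolateMarkers(ColorImage image) throws Exception {
            return image.thresholdHSL(
                    100, 160,   // hue band roughly matching the ring light's color
                    150, 255,   // saturation: keep only strongly colored pixels
                    150, 255);  // luminance: keep only bright pixels
        }
    }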

As for using IR or a single color plane, I’m interested to hear how it works. The sensors should be sensitive there. You may need to replace or modify the lens, and if I remember correctly, the IR source will show up more as a white light and not a colored light.

Greg McKaskle

What I have found to be the best way to develop a camera algorithm is to use the NI Vision Assistant. This can be found in the LabVIEW install included on the FRC LabVIEW DVD. The Vision Assistant allows you to tweak your parameters and see what effect the tweaks have on the result. Once you have the image processed to an acceptable level, you can export the Vision Assistant parameters directly to LabVIEW code. For help with this process, check out decibel.ni.com.

I am trying to help Team 2751 do vision tracking for the first time this year. We program our robot in Java, and I am curious how much of the NI Image Processing library is really available to us. A few questions for anyone kind enough to answer:

  1. Other than getting the settings from the color thresholding operation, how much of the NI Vision Assistant algorithm prototyping can be transferred to Java? For example, I did some template matching in the Vision Assistant that works well. I can, of course, save the script for inclusion in a VI. Is it possible to access that VI from Java?

  2. In the Javadoc reference, the package to access NI's nivision library is edu.wpi.first.wpilibj.image. It only has the following classes referenced in the documentation:
    BinaryImage
    ColorImage
    CurveOptions
    EllipseDescriptor
    EllipseMatch
    HSLImage
    Image
    MonoImage
    ParticleAnalysisReport
    RegionOfInterest
    RGBImage
    ShapeDetectionOptions

These classes appear to be there to support the Java sample machine vision projects from last year. Are other NI image processing functions available through wpilibj, or does it just depend on people writing wrapper classes for the functions that are needed?

  3. According to the WPI Robotics Library Users Guide - which is ostensibly for Java and C++ - there is a reference to the FRC Vision API Specification document, which gets installed with WindRiver. Is it only available in WindRiver? Do I really have to install the C++ IDE that I don't need, since I am using Java in NetBeans?

The snippet of information I see in the Library Users Guide says that the FRC Vision Interface includes high-level calls for color tracking through TrackingAPI.cpp. It also says programmers may call into the low-level library by using nivision.h. Are the TrackingAPI and/or the low-level calls available to Java? The Java VM we have doesn't support JNI, which is the typical way to make calls to C libraries.

In summary, it looks like for image processing, LabVIEW has the most support, followed by WindRiver C++, with Java bringing up the rear. From reading this forum, however, I see that several of you are using Java. Are the issues I raise here really not that big of a deal? How did you overcome them?

Any answers and guidance to getting started would be most appreciated. Thanks for your help and good luck during the build season.

Barry Sudduth
Team 2751

Hi, we are also looking for any sample Java code that would demonstrate how to grab images from the Axis camera and process them for tracking, etc.

I see references to prior year demo code. Where can we find that?

Thanks!

Anyone looking for code for the vision tracking in Java, shoot me an email.

[email protected]

I will add you to my Dropbox and supply you with my working camera code.
I also created a video tutorial that explains how I accomplished the task.

Where are the TrackingAPI.cpp and associated .h file mentioned on page 70 of the 2011 WPI Robotics Library Users Guide?

All we’ve been able to find is the Vision2009 TrackAPI libraries.

Dan

I am trying to integrate vision tracking into my LabVIEW code. Can someone send me some code to work off of? I am just tired of finding stuff for Java and not LabVIEW.

This thread hasn't been active in 8 years; you might want to start a new one.


Ah… how things have changed.