LabView/C++ equally capable?

A while back, there was a thread about autonomous camera tracking.

There seemed to be a consensus forming that teams who had success getting the camera to track well enough used C++ and not LabView. This got me thinking about why this might be and what it means for the future of LabView as a programming option in FRC.

First, was that consensus correct? If so, will we see teams that really want to make the most of their controller abandon LabView or will more effort be put in by the LabView developers to provide comparable capability?

There is a correlation here, but not causation (in my opinion).

C/C++ is a far more widely taught and utilized language than LabView in education and industry (not to mention in FIRST prior to 2009). Thus more programming students and mentors will have a high level of understanding of C++ than of LabView.

The result? More experienced programmers, who are more likely to use C++ as that is where their expertise tends to fall, are more likely to get camera tracking working.

As far as capabilities, be it LabView or C++ (or Java), it all boils down to the same FPGA configuration and assembly language. I don’t buy that any FRC language is more or less capable than any other.

Jared, good points. I had considered that it might be a matter of familiarity as you suggest but I would have expected at least some reported success with LabView. I agree that the actual language should not be the limiting factor.

I’m wondering if perhaps the readability of C++ vs. the “black box” nature of LabView has something to do with it. I tried to dig into the LV to see how the video was processed and got pretty discouraged pretty fast.

The question still stands, though, regardless of cause. Will the correlation drive a change in behavior regarding language selection? I guess my hope is that the video capability provided is adequate in all languages, so that a team might concentrate on how to put the data to work controlling a robot and not on how to get basic functionality working.

Ivan

I think this is part of it, too. There is something about engineers that makes us inherently suspicious of black boxes. Sometimes this is a good thing, but it also means that we can be slow to accept a good off-the-shelf solution to a problem. Case in point: you wouldn’t believe the number of times a subcontractor on a software project has supplied me with their own implementation of a C++ “vector” class. And as much as I trust that my subs are competent, the odds of them writing a bug-free vector class faster than the Standard Template Library’s implementation are very slim.

I believe that in many cases, it already has. Our students learn C++ from experienced programming mentors who know C++. I generally encourage rookie teams to use C++ because if they have a problem, I will know how to help.

The vision kernels are in fact a .out file, and are equally black box to all languages, as is the FPGA. If you’d like to ask questions about the LV or C++ image processing, I’ll be happy to shine some light on them. I think very quickly, we’ll rediscover that there are a few layers of user native language code, and below that you jump into a library. This is the case for all of WPILib.

Greg McKaskle

Our team (604) used LabView this past year, and we were fairly successful at getting our turret to lock in on our target. We had a potentiometer on the turret, so the camera took a frame, figured out how many degrees right/left to turn, and we used the potentiometer to servo to the position.
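
In rough C++ terms (our actual code was LabView, so this is just an illustrative sketch with made-up helper names, not our real implementation), each camera frame updated a setpoint and the potentiometer closed the loop:

```cpp
// Illustrative sketch only -- the helper functions are placeholders,
// not real WPILib or team 604 code.
double getTargetOffsetDegrees();   // camera result: degrees off-center, + = target to the right
double getTurretAngleDegrees();    // turret potentiometer, scaled to degrees
void   setTurretMotor(double cmd); // motor command in the range -1.0 .. +1.0

void servoTurretOnce() {
    // Each camera frame gives the offset to the target; add it to the current
    // turret angle to get a setpoint, then servo to it with the potentiometer.
    double setpoint = getTurretAngleDegrees() + getTargetOffsetDegrees();

    const double kP = 0.03;                           // proportional gain, tuned on the robot
    double error = setpoint - getTurretAngleDegrees();
    double cmd = kP * error;
    if (cmd > 1.0)  cmd = 1.0;                        // clamp to the valid motor range
    if (cmd < -1.0) cmd = -1.0;
    setTurretMotor(cmd);
}
```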

We also can shoot in autonomous mode (http://www.youtube.com/watch?v=SmXBMmEgfbI)

However, we do not have any comparison to how fast other robots’ turrets center on their target, etc., so I don’t know how “successful” we actually were, compared to others.

Here is another video of a test chassis tracking the target with LabView to compare with other teams’ speeds. Note that it was only a couple hours of work, so the tuning could have been better.

Team 1241 used LV to program in Lunacy. We were also able to get the camera to track the target, with a fair bit of success at FLR and GTR.

Thanks to Ben Zimmerman and Team 843, without whose teachings and assistance we would be at a great loss, and also to Greg McKaskle and everyone on CD who has provided us with so much help.

Different people prefer different tools and different methods with the same tool to do the same job. If they produce similar-quality work for similar prices, should we care which method they use, all other things being equal?

Use the tool that is most comfortable for you.

I believe that ALL of the teams that got the camera to work at the Peachtree Regional used Labview.

As a C++ die-hard I was reluctant to use Labview. But when I saw how quickly the kids picked it up, how easy it was to teach, and how little trouble we had getting things up and running, I was sold.

We used to spend a LOT of time debugging missing semicolons, etc. - so much so that the kids would get bored and completely lose the point. Last year the kids were able to program the entire thing themselves with very little adult help. I’ve never had that experience with text-based languages… They enjoyed it so much they even did some side projects.

Very pleased with the whole experience.

We’ll be using LV again this year.

Thanks to all who replied, I wasn’t looking forward to making a switch to C++ but thought it might be wise given what I read in the thread I referenced. I’ll stick with the LV and see if I can get more out of it. Pretty much everything but the camera worked well for us last year so I know where to concentrate my efforts.

Your initial question is a reasonable one. I suspect this will be a perennial topic aimed at identifying what will lead to more success, and a particular technology or approach is always an easy difference to gravitate towards – similar to the wood vs aluminum, six wheels vs four discussions.

To make progress on improving vision, what part of the camera code worked, what didn’t? Any questions that you want to ask?

Greg McKaskle

The problems I had with the vision code were frame rate and repeatable performance. Sometimes one color would be picked up, sometimes both. I tried the various changes in values that were suggested in a couple of threads here on CD but never got anything stable enough to rely on.

To be honest, we ended up bailing on trying to use vision for any automation about halfway through the build. There were other, non-control issues that needed more attention. I had followed a couple of discussions on the camera but couldn’t dedicate the time to try all of the suggestions. Part of the reason for my original question was that I couldn’t find anything in the LV implementation that would have led me to some of the methods people seemed to be talking about, and so I assumed, apparently wrongly, that the C++ code or perhaps its comments were more descriptive of the process, less black-box-like, and that this was part of the success some were having.

I was looking for a frame rate somewhere in the 20 fps range. I don’t think we got out of single digits. The things I remember reading that should have helped are:
More light
Lower resolution
Turning off some things in the 2 color vi.
There was also some LV-specific issue where the code would only process every other frame; I forget the specifics.

The purpose of the thread was to ask the question. If it will be instructive for many of us to change it to a discussion of how to get good camera performance, great. Since many teams are probably starting back up with the new school year, perhaps it is a good time to do this.

So, what really has to change in the 2 color demo to get the frame rate up? Once I can get that working, then I’ll work on the reliability.

Thanks,

Ivan

Well, the main things we changed were setting the resolution to 160x120 (set in Begin.vi if you’re using the advanced framework, I believe) and disabling the “mask display” in the 2 color demo. These two should improve your FPS.

A tip on getting the tracking to work under various lighting conditions:
for the HSL values, define H (hue) as narrowly as possible but keep the S and L ranges relatively wide, and set the “brightness” control to automatic (in Begin.vi). That way, if the lighting changes you will still be able to track. It’s a balance between tracking robustness and picking up extra noise (although the noise shouldn’t be that big of an issue, because it’s unlikely that you will have two large blobs of “noise”, one green and one pink, one on top of the other).
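
If it helps to see the idea outside of the VI, here is a rough C++-style sketch of that kind of threshold. The struct and the numbers are made up for illustration; tune them from your own captured images:

```cpp
// Made-up threshold numbers for illustration; tune from your own captured images.
struct HSL { int h, s, l; };          // assumed 0-255 ranges for hue, saturation, luminance

bool isTargetColor(const HSL& p) {
    const int hMin = 90,  hMax = 110; // narrow hue band  -> stays selective about color
    const int sMin = 60,  sMax = 255; // wide saturation  -> tolerant of lighting changes
    const int lMin = 40,  lMax = 230; // wide luminance   -> tolerant of lighting changes
    return p.h >= hMin && p.h <= hMax &&
           p.s >= sMin && p.s <= sMax &&
           p.l >= lMin && p.l <= lMax;
}
```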

During the season we had it set on a fixed brightness and very closely defined HSL values. At SVR it worked great because of the lighting consistency but at Atlanta, the semi-transparent roof cover screwed us up badly.

Hope this helps!

There are a couple things that will affect frame rate. I’ll cover the ones I remember, and then talk about why frame rate isn’t necessarily that important.

Frame Rate:
One obvious thing that can limit frame rate is the frame rate setting. Setting it to a low number will delay the request for the frame. Setting it too high will request the next as soon as one arrives and will go as fast as other factors allow.

Another issue is the resolution. Each resolution step is a 4x difference in pixels. 640x480 images are nearly 1MB and take 100ms simply to decompress, and all processing will be about four times as expensive as at 320x240. The 320x240 images take about 22ms to decode, and this was the size I used for the examples; that was really just a built-in performance handicap, and it is about 4x slower than the 160x120 image. The small image takes 8ms to decode and the processing will similarly be about four times faster.

The next issue, which affects LV more than C++, is the setup of the camera. If you don’t add the FRC/FRC account on the camera, it takes multiple requests for the cRIO to get an image from the camera. The driver doesn’t know which account will work, so it goes through three of them in sequence. For performance, you’d like it to succeed on the first one, FRC/FRC.

The last issue has to do with various camera settings. The camera will lower the frame rate if it doesn’t have enough light for a good exposure. The settings that affect this are the Exposure, Exposure Priority, and Brightness.

The other things mentioned, such as the width of the hue range, will not have a large effect on performance, but since they will produce more blobs in the mask to analyze, they will have some effect. The Saturation and Luminance ranges will have some effect as well, since any pixels that can be eliminated by Sat or Lum are cheaper than having to do the calculations for Hue. Again, I think these settings are secondary for performance.

Performance isn’t everything:
This may be counterintuitive, but FPS isn’t really super important. More important is the lag, or latency: the time between when something happens in the real world and when the image processing can notice it. It may seem that higher FPS would fix this, but think about how awards shows have a 10 second delay to allow the censors to block things that aren’t supposed to be televised. They don’t change the FPS to do this; instead they buffer the images. The places images can be buffered include the camera TCP stack, the cRIO TCP stack, and the user’s program.

To measure the latency, I used the LED on the front of the cRIO itself, but you can use one off of a digital card if you’d prefer. Turn the LED on, and time how long it takes for vision to receive an image with the LED on. Because the camera exposure and the LED will be unsynchronized, you’ll need to look at numerous measurements and do some statistics to see how things behave.

When I measured this, both the 320x240 and 160x120 sizes had around 60ms of latency with the simplest processing I could have. Clearly this will go up as the processing becomes more complex. What this means is that everything the cRIO senses through the camera is really delayed by some amount based on the settings. For this year’s processing, I think the amount was probably about 80ms. So by the time the cRIO “sees” something, it has already happened by about 80ms.

Why is this important? In order to hit a moving target, you don’t want to shoot where something is. You certainly don’t want to shoot where it used to be. You want to shoot where it will be. If the ball traveled instantaneously, you’d want to estimate relative velocity and aim about 80ms ahead. Of course the orbit balls are anything but instantaneous flyers, and the further away the target is, the longer the flight time. I don’t have any measured numbers, and it probably depends quite a bit on the delivery mechanism.

Anyway, the point is that a higher fps will give you a better estimate of the velocity, but will not allow you to ignore the latency issue.
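
To make that concrete, here is a rough sketch (not code from the examples; the names and the 0.08s figure are just assumptions) of estimating relative velocity from two camera measurements and leading by the latency:

```cpp
// Sketch only: estimate the target's relative angular velocity from two camera
// measurements, then lead the aim by the measured latency.
double leadCompensatedAngle(double prevAngleDeg, double prevTimeSec,
                            double currAngleDeg, double currTimeSec,
                            double latencySec /* e.g. ~0.08 s from measurement */) {
    // Angular velocity of the target relative to the robot, in deg/sec.
    double omega = (currAngleDeg - prevAngleDeg) / (currTimeSec - prevTimeSec);
    // The frame we just processed is already latencySec old, so project forward.
    return currAngleDeg + omega * latencySec;
}
```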

I actually don’t have a measurement for latency using C++. It is possible that the numbers are very different.

None of this performance-related talk has anything to do with it seeing only one color or the other. Those are tuning issues. The camera has many different color settings for white balance, and lighting will change considerably from event to event. Tilting the target toward and away from the light will also affect the saturation quite a bit.

The best way to deal with these is to capture images and take them into Vision Assistant, where you can do a line profile or look at a mask and come to understand how these environmental changes will affect the values the camera will give you. Then you can try different things out to make the camera behave better, mount the camera better, etc. I put some images up on Flickr last year that demonstrate some of the issues.

Greg McKaskle

Greg, thanks for the information. The regular day job got in the way again, but I finally got the camera set up and working again last night. What you say about latency vs. frame rate makes good sense. I’m a little stumped about how to measure it, though. I can see it on the display when I move the camera, but I’m not following what you said about using an LED to get timing. Did you just use a stopwatch?

Ivan

I’ll see if I can find the test code. It was basically pointing the camera at the cRIO LEDs, the ones near the power and ethernet plugs. There is an RT function for controlling the LED, and I figured it was close to instantaneous compared to the camera, so I turned it off and started the camera acquiring.

At some point, I’d turn the LED on, record the time on the cRIO, then loop inspecting images until one showed up with the LED lit. To detect the LED being lit, I used the regular Camera Get, which uncompresses the image, then measured the intensity of a pixel over the LED. At that point, turn the LED back off, wait for things to settle, and do it over again. I decided to wait a random amount with the LED off. This gave me a pretty good statistical picture of the latency. It shows the minimum, the typical, and the maximum time you could expect for a given camera setup.
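
In C++ terms, the structure was roughly the sketch below. This isn’t the actual test code (which was LV), and the LED, clock, and image helpers are placeholders for whatever your setup provides:

```cpp
// Structural sketch only -- setUserLed, nowSeconds, and ledVisibleInLatestFrame
// are placeholders, not real library calls.
#include <cstdlib>

void   setUserLed(bool on);          // drive an LED the camera can see
double nowSeconds();                 // a reasonably fine-grained clock
bool   ledVisibleInLatestFrame();    // grab + decode a frame, check a pixel over the LED

double measureLatencyOnce() {
    setUserLed(false);
    // Wait a random amount so the LED transition isn't synchronized with the exposure.
    double settle = 0.5 + (std::rand() % 1000) / 1000.0;
    double start = nowSeconds();
    while (nowSeconds() - start < settle) { /* idle */ }

    setUserLed(true);
    double tOn = nowSeconds();
    while (!ledVisibleInLatestFrame()) { /* keep grabbing frames */ }
    return nowSeconds() - tOn;       // one latency sample; repeat many times and look at the spread
}
```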

Greg McKaskle

You know you can use Java, right?

Last year, you couldn’t. The only data we have is from last year (other than beta test data from this year). Therefore, we have to make this comparison until such time as Java camera code becomes available through some means.

That makes sense. We have no data from last year because all our programmers graduated and didn’t bother passing on their knowledge, so we’ve got to start from scratch. And we only know how to program in text, not graphically, so my team is going to use Java.

If you’re starting from scratch, don’t dismiss LabVIEW out of hand. For someone new to programming for FRC robots, I think it’s a whole lot faster to learn.