900's 2015 Robot Code

Hey all,

With the championship now over, we at 900 are continuing our yearly tradition of opening our repos to the public. Included are our LabVIEW swerve drive and arm control code, our dashboard, and the vision code we ran this year on our on-board Jetson TK1 to detect the green bins during auton. We will be releasing a few whitepapers over the coming month(s) about the systems our robot used this year.

Feel free to ask us questions about our code here (if you have questions about champs and Einstein, go here).

Links to GitHub:
Robot Code
Dashboard Code
Vision Code
As of April 28th, this code has not been cleaned up.

I think I talked to you Saturday about your system. Really cool work. You said something that was really odd to me: you were getting ~15-20 fps. It didn’t seem right, and looking at your code I now know why (or have a very good guess).

You are using two namespaces, cv and cv::gpu, that have identically named functions such as erode, dilate, and threshold.

I’ll use the function generateThreshold as a running example. You are passing in Mats, not GpuMats, which means that when you call cvtColor on ImageIn, it will use the CPU-based cvtColor. The same goes for threshold, split, erode, and dilate.
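Here’s a minimal sketch of the distinction in the OpenCV 2.4-era API (a simplified stand-in for generateThreshold, with made-up constants, not your actual code). The cv:: overloads run on the CPU whenever a Mat is passed in; to hit the GPU you have to upload into a GpuMat and call the cv::gpu:: versions explicitly:

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    // CPU path: Mat arguments resolve to the cv:: overloads.
    void thresholdCpu(const cv::Mat &imageIn, cv::Mat &maskOut)
    {
        cv::Mat gray;
        cv::cvtColor(imageIn, gray, CV_BGR2GRAY);                 // CPU cvtColor
        cv::threshold(gray, maskOut, 128, 255, CV_THRESH_BINARY); // CPU threshold
    }

    // GPU path: the image must live in a GpuMat and the cv::gpu::
    // functions must be called explicitly -- the compiler won't pick
    // the GPU overloads for you.
    void thresholdGpu(const cv::Mat &imageIn, cv::Mat &maskOut)
    {
        cv::gpu::GpuMat gpuIn, gpuGray, gpuMask;
        gpuIn.upload(imageIn);                                    // host -> device
        cv::gpu::cvtColor(gpuIn, gpuGray, CV_BGR2GRAY);           // GPU cvtColor
        cv::gpu::threshold(gpuGray, gpuMask, 128.0, 255.0, CV_THRESH_BINARY);
        gpuMask.download(maskOut);                                // device -> host
    }

The upload/download copies aren’t free, so the win comes from doing the whole pipeline on the GPU rather than bouncing each step back and forth.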

There isn’t a GPU equivalent of findContours that I am aware of. There are GPU edge detectors such as Sobel and Laplacian, however.
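For reference, those look like this in the 2.4 API (a sketch; the function and variable names here are mine):

    #include <opencv2/gpu/gpu.hpp>

    // Both operate on single-channel images already resident on the GPU.
    void edgesGpu(const cv::gpu::GpuMat &gpuGray, cv::gpu::GpuMat &gpuEdges)
    {
        cv::gpu::Sobel(gpuGray, gpuEdges, CV_8U, 1, 0);   // x-gradient Sobel
        // cv::gpu::Laplacian(gpuGray, gpuEdges, CV_8U);  // or a Laplacian
    }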

Once again, excellent work. This took me a little while to figure out. I’d love to see what the fps is with these changes. Also, if you want to stick with object recognition through training, look into cuDNN.

Thanks for the feedback. I’ll have the students work on this after our break, once they have caught up on last week’s homework.

I looked through your code. I did not see that you guys used vision this year. The code that you had on GitHub was a C++ library. Do you have a way to integrate the C++ library?

Our vision system runs on an NVIDIA Jetson. The Jetson writes values to NetworkTables, and the roboRIO reads them so the robot can align with the bins in our two-bin autonomous, which you can see here: https://www.youtube.com/watch?v=WqHk50xX1_A. This vision system was in the works throughout the entire season but was not used at the NC Regional or the Palmetto Regional. In fact, I think we only used it successfully in 2 or 3 matches at St. Louis.
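For anyone curious about the plumbing, the Jetson-side write looks roughly like this. This is a sketch from memory of the 2015-era NetworkTables C++ client API, and the table name, key names, and IP address are placeholders, not necessarily what our code uses:

    #include "networktables/NetworkTable.h"

    int main()
    {
        // Run as a NetworkTables client and point at the roboRIO.
        NetworkTable::SetClientMode();
        NetworkTable::SetIPAddress("10.9.0.2");   // 10.TE.AM.2 addressing
        NetworkTable::Initialize();

        NetworkTable *table = NetworkTable::GetTable("vision");

        // Publish the detected bin position; the roboRIO autonomous code
        // reads these keys to steer toward the bin.
        table->PutNumber("binX", 0.42);   // placeholder values
        table->PutNumber("binY", -0.10);
        return 0;
    }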

I posted our code (Team 107). If you have time, take a look through it and let me know what you think. We had a very successful year: we have a three-tote auto, and we could pretty much change our auton on very short notice. We were 12 for 12 at worlds and about 98% for the rest of the year.

Here is a link

Sorry for the slow response, just saw this.

The short version of the answer is that you’ve managed to find dead code. The functions you list were written and tested on the CPU, and we found they didn’t improve our detection accuracy, so we scrapped them. They’re still in the code, but they aren’t used.

The detection code we do use is cascadeDetect(). There are GPU and CPU versions: we use the GPU version if a GPU is detected at startup (see the top of main()), and if not we fall back to the CPU version. This lets us run the code on a normal laptop for testing and then only have to debug the GPU-specific stuff on a Jetson.
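The pattern is roughly this (a simplified sketch in the OpenCV 2.4 API, not our literal main(); the classifier filename is made up):

    #include <cstdio>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // Take the GPU path only if a CUDA device is actually present.
        bool useGpu = cv::gpu::getCudaEnabledDeviceCount() > 0;

        cv::CascadeClassifier          cpuCascade;
        cv::gpu::CascadeClassifier_GPU gpuCascade;

        if (useGpu)
            gpuCascade.load("cascade.xml");   // Jetson / CUDA machine
        else
            cpuCascade.load("cascade.xml");   // plain-laptop fallback

        printf("Using %s detector\n", useGpu ? "GPU" : "CPU");
        // ... the frame loop then calls the matching detectMultiScale() ...
        return 0;
    }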

When we switched from Haar to LBP cascades I expected a decent performance bump based on what I’d read. Training speed did increase dramatically, but runtime performance didn’t change much. After some quick testing we saw similar performance from the bare-bones OpenCV cascade demo code, so my guess is that Haar is either more GPU-friendly or the LBP implementation just isn’t as well optimized. I’d be happy to find a simple fix if one is out there, though.

Correct me if I’m wrong, but it seems you only used the GPU for the cascade classifier.

I was looking through your code and saw you have a lot of In Place Element Structures. What are these for?

The conversion to grayscale and the histogram equalization are also done on the GPU. But yeah, running the cascade classifier is all the vision work the code is doing, so there’s not much else to offload to the GPU. There’s minor fiddling with rects in some of the modes we run in, but it probably isn’t worth the upload and download just for that.
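Concretely, the per-frame GPU work amounts to something like this (a sketch in the OpenCV 2.4 API; function and variable names are illustrative, not lifted from our repo):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    // One upload per frame; grayscale, equalization, and the cascade all
    // stay on the GPU, and only the detection results come back.
    int detectFrame(const cv::Mat &frame,
                    cv::gpu::CascadeClassifier_GPU &cascade,
                    cv::gpu::GpuMat &objectsBuf)
    {
        cv::gpu::GpuMat gpuFrame, gpuGray, gpuEq;
        gpuFrame.upload(frame);                            // host -> device
        cv::gpu::cvtColor(gpuFrame, gpuGray, CV_BGR2GRAY); // grayscale on GPU
        cv::gpu::equalizeHist(gpuGray, gpuEq);             // equalize on GPU
        return cascade.detectMultiScale(gpuEq, objectsBuf);
    }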

Mostly just keeping things grouped together so autocleanup doesn’t separate them.

We just use them as an organizational structure to make the code easier to read.

Here is a link to team 900’s vision whitepaper: http://www.chiefdelphi.com/forums/showthread.php?p=1484741