Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   Nvidia posted a video about first (http://www.chiefdelphi.com/forums/showthread.php?t=137178)

marshall 14-05-2015 07:52

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1481947)
That is awesome. Just making sure that I am understanding this correctly, as I am not a programmer. It looks to me, from the notes on GitHub, that this vision code is one that you teach. Am I correct?

Also, from a more hardware perspective, would you be able to run this on other microcontrollers such as a Raspberry Pi, or is the Jetson required?

It is indeed code that you teach, or train: you must provide it with positive and negative images of the items you are seeking to recognize. The white paper we are working on uses the balls from the 2014 game as an example. I've been told the white paper is still in progress and to expect a draft this Friday.

You do not need a Jetson to run this type of code, but you do need one to run this specific code. In fact, a lot of our prototype work was done on PCs. That being said, we're fans of the Jetson. A Raspberry Pi should work as well.

Also, the code uses a technique known as cascade classification. It's pretty clever, but there are even cleverer ways to do this using neural networks; that is going to become an off-season project for us.
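For anyone wondering what "cascade classification" means in practice: the idea is a chain of stages, each a cheap test that can reject a candidate image window early, so only promising windows reach the later, stricter stages. Here's a toy sketch of that idea, in Python for brevity. This is not our actual code (that uses OpenCV's CascadeClassifier, whose stages are learned from the positive/negative training images); the "features" and thresholds here are made up purely for illustration:

```python
# Toy sketch of an attentional cascade: each stage is a cheap test that can
# reject a candidate window early; only windows that pass every stage are
# reported as detections. Real cascades (e.g. OpenCV's CascadeClassifier)
# use boosted feature stages learned from positive/negative images.

def make_stage(feature, threshold):
    """A stage passes a window when its feature score clears the threshold."""
    def stage(window):
        return feature(window) >= threshold
    return stage

def cascade_detect(windows, stages):
    """Run each candidate window through the cascade, rejecting early."""
    detections = []
    for window in windows:
        # all() short-circuits, so a window failing stage 1 never runs stage 2
        if all(stage(window) for stage in stages):
            detections.append(window)
    return detections

# Made-up "features" on (brightness, roundness) tuples, illustration only.
stages = [
    make_stage(lambda w: w[0], 0.3),  # stage 1: cheap brightness check
    make_stage(lambda w: w[1], 0.7),  # stage 2: stricter roundness check
]

windows = [(0.9, 0.8), (0.1, 0.9), (0.8, 0.2)]
print(cascade_detect(windows, stages))  # only (0.9, 0.8) survives both stages
```

The early-reject structure is the whole trick: most windows in a frame contain nothing, so most of the work is one cheap test.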

KJaget 14-05-2015 13:39

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Sperkowsky (Post 1481977)
I'm a hardware guy by nature, and a Raspberry Pi should be OK. I would keep it on a Jetson, but an RPi shouldn't have an issue. I'm pretty sure the code would have to be changed around quite a bit though.

The code changes aren't that bad. It's only a few changes to switch between GPU and CPU detection code using OpenCV - you just switch from a CascadeClassifier object to a CascadeClassifier_GPU object and most everything else just works. There might be slight differences in the parameters passed to the call to actually do the detect - we just wrapped them in classes which hid those differences from the calling code. Our code builds and runs not only on a Jetson but on x86 Linux, Windows and Cygwin and autodetects whether to use CPU or GPU based on the hardware it finds.

The bigger problem is going to be speed. Based on what we saw running on the Jetson CPUs, I'm not sure RPi performance is going to be usable. I don't have any specific tests to prove it, but I'd be surprised if it were fast enough.
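If it helps, the wrapper idea looks roughly like this. This is a simplified sketch in Python rather than our actual C++, and the class and function names here are made up for illustration; the real code wraps OpenCV's CascadeClassifier (CPU) and CascadeClassifier_GPU classes and checks for a CUDA device at startup:

```python
# Sketch of hiding CPU vs. GPU detection behind one interface, so the
# calling code never cares which backend it got. All names here are
# illustrative, not the repo's actual classes.

class CpuDetector:
    name = "cpu"
    def detect(self, frame):
        # real code would call CascadeClassifier.detectMultiScale(frame, ...)
        return [("rect", frame)]

class GpuDetector:
    name = "gpu"
    def detect(self, frame):
        # real code would call the CascadeClassifier_GPU detect instead
        return [("rect", frame)]

def make_detector(gpu_count):
    """Autodetect: use the GPU backend only if a CUDA device is present."""
    # real code checks something like cv::gpu::getCudaEnabledDeviceCount()
    return GpuDetector() if gpu_count > 0 else CpuDetector()

detector = make_detector(gpu_count=0)
print(detector.name)            # no GPU found, falls back to the CPU backend
print(detector.detect("img"))   # same detect() call either way
```

The parameter differences between the two detectMultiScale variants get absorbed inside the two classes, which is why the calling code doesn't change between platforms.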

ToddF 14-05-2015 13:50

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Joe Johnson (Post 1481012)
I like your arm but I like Overclock's "standard 3 joint arm" better still (engineers have the great misfortune of falling in love with their designs).

Our Doc Ock arm was used for cans.

Munchskull 14-05-2015 14:33

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by marshall (Post 1482006)
It is indeed code that you teach, or train: you must provide it with positive and negative images of the items you are seeking to recognize. The white paper we are working on uses the balls from the 2014 game as an example. I've been told the white paper is still in progress and to expect a draft this Friday.

You do not need a Jetson to run this type of code, but you do need one to run this specific code. In fact, a lot of our prototype work was done on PCs. That being said, we're fans of the Jetson. A Raspberry Pi should work as well.

Also, the code uses a technique known as cascade classification. It's pretty clever, but there are even cleverer ways to do this using neural networks; that is going to become an off-season project for us.

Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment I would need, required libraries, required camera, etc.)

marshall 14-05-2015 14:53

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1482083)
Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment I would need, required libraries, required camera, etc.)

I will see what I can do about getting a student to post some examples once the white paper is up. The computer versions were never meant to run beyond proof of concept, from what I recall. I don't think we ever had a complete version built for PC, but I could be wrong. I'll see what I can do, though.

Sperkowsky 14-05-2015 16:02

Re: Nvidia posted a video about first

Quote:

Originally Posted by KJaget (Post 1482063)
The code changes aren't that bad. It's only a few changes to switch between GPU and CPU detection code using OpenCV - you just switch from a CascadeClassifier object to a CascadeClassifier_GPU object and most everything else just works. There might be slight differences in the parameters passed to the call to actually do the detect - we just wrapped them in classes which hid those differences from the calling code. Our code builds and runs not only on a Jetson but on x86 Linux, Windows and Cygwin and autodetects whether to use CPU or GPU based on the hardware it finds.

The bigger problem is going to be speed. Based on what we saw running on the Jetson CPUs, I'm not sure RPi performance is going to be usable. I don't have any specific tests to prove it, but I'd be surprised if it were fast enough.

I went to the Jetson release at Maker Faire and talked to the lead designer. The architecture is quite different, but the speed increase is quite nominal compared to the Pi. That's coming from an Intel designer.

Munchskull 14-05-2015 16:34

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by marshall (Post 1482086)
I will see what I can do about getting a student to post some examples once the white paper is up. The computer versions were never meant to run beyond proof of concept, from what I recall. I don't think we ever had a complete version built for PC, but I could be wrong. I'll see what I can do, though.

May I suggest that you make an independent thread for this discussion? That would allow for a more formal place to talk about this awesome piece of software.

KJaget 14-05-2015 18:51

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1482083)
Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment I would need, required libraries, required camera, etc.)

I'll take a quick shot at this, from memory. Give it a try and ask questions if you run into problems. This will eventually morph into a README in our code, but feedback will help us debug it.

The code in our git repo (https://github.com/FRC900/2015VisionCode) will build and run on Linux or Windows. The training code will require either Linux or Cygwin for some of the scripts. The C++ code will build on everything we've tried, which includes x86 Windows, x86 Linux, ARM Linux (for the Jetson), and so on.

You'll need OpenCV 2.4.x installed. On Linux this is typically an apt-get thing or the equivalent. For windows, the OpenCV page is good - http://docs.opencv.org/doc/tutorials...s_install.html.

For cygwin, we've had luck with the tarball at http://hvrl.ics.keio.ac.jp/kimura/op...cv-2.4.11.html. I think we had to move the files extracted into /lib, /share, and so on for the compiler to find them.

The code works with any camera we've thrown at it. It will also run on still images or on video files; for example, for testing we ran the code against video we downloaded from YouTube. We have special code in place for the Logitech C920 under Linux since that's what we used, but using that particular camera wasn't as critical as we thought.

The detection code itself is in the subdir bindetection. Steps to build :
1. cd bindetection/galib247
2. make
3. cd ..
4. cmake .
5. make

We've hit an odd bug where you occasionally get a link error the first time through. If so, repeat the "cmake ." and "make" steps.

This will produce the creatively named binary "test", which is the recycle bin detector.

Most of the options to the code can be controlled from the command line. One thing you will need to edit is line 25 of classifierio.cpp: change the initial /home/ubuntu to the directory the code was downloaded to. This requires a recompile to take effect.

To run using a camera, run test; this will open the default camera and start detecting. Add a number to the command line to pick another camera.
To run against a video, add the video name to the command line (e.g. "test video.avi").
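That camera-vs-video choice is really just about how the first argument parses: a bare number selects a camera index, anything else is treated as a filename, and no argument means the default camera. Sketched here in Python for illustration (this is the behavior described above, not the repo's actual C++ argument handling):

```python
# Sketch of a detector binary's input selection: no argument opens the
# default camera, a numeric argument picks another camera index, and any
# other argument is treated as a video filename. Illustrative only.

def pick_input(argv):
    if len(argv) < 2:
        return ("camera", 0)            # "test"           -> default camera
    arg = argv[1]
    if arg.isdigit():
        return ("camera", int(arg))     # "test 1"         -> second camera
    return ("video", arg)               # "test video.avi" -> video file

print(pick_input(["test"]))               # ('camera', 0)
print(pick_input(["test", "1"]))          # ('camera', 1)
print(pick_input(["test", "video.avi"]))  # ('video', 'video.avi')
```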

I'm sure I'm missing something but that's a start.

ForeverAlon 28-05-2015 20:22

Re: Nvidia posted a video about first
 
Here is a link to Team 900's vision white paper: http://www.chiefdelphi.com/forums/sh....php?p=1484741


All times are GMT -5.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi