#1
Re: Nvidia posted a video about first
Quote:
Also, from a more hardware perspective, would you be able to run this on other microcontrollers such as a Raspberry Pi, or is the Jetson required?
#2
Re: Nvidia posted a video about first
Quote:
You do not need a Jetson to run this type of code, but you do need one to run this specific code. In fact, a lot of our prototype work was done on PCs. That being said, we're fans of the Jetson. A Raspberry Pi should work as well. Also, the code uses a technique known as cascade classification. It's pretty clever, but there are even cleverer ways to do this using neural networks; that is going to become an off-season project for us.
#3
Re: Nvidia posted a video about first
Quote:
And if you can, would you be able to give directions on how to run it? (E.g., what environment would I need? Required libraries? Required camera? Etc.)

Last edited by Munchskull : 14-05-2015 at 14:46.
#4
Re: Nvidia posted a video about first
I will see what I can do about getting a student to post some examples once the white paper is up. The PC versions were never meant to run beyond proof of concept, from what I recall. I don't think we ever had a complete version built for PC, but I could be wrong. I'll see what I can do, though.
#5
Re: Nvidia posted a video about first
Quote:
#6
Re: Nvidia posted a video about first
Quote:
The code in our git repo (https://github.com/FRC900/2015VisionCode) will build and run on Linux or Windows. The training code requires either Linux or cygwin for some of the scripts. The C++ code builds on everything we've tried, including x86 Windows, x86 Linux, ARM Linux (for the Jetson), and so on.

You'll need OpenCV 2.4.x installed. On Linux this is typically an apt-get thing or the equivalent. For Windows, the OpenCV page is good: http://docs.opencv.org/doc/tutorials...s_install.html. For cygwin, we've had luck with the tarball at http://hvrl.ics.keio.ac.jp/kimura/op...cv-2.4.11.html; I think we had to move the extracted files into /lib, /share, and so on for the compiler to find them.

The code works with any camera we've thrown at it. It will also run on still images or on video files; for example, for testing we ran it against video we downloaded from YouTube. We have special code in place for Logitech C920s under Linux, since that's what we used, but it wasn't as critical as we thought to use that particular camera.

The detection code itself is in the subdir bindetection. Steps to build:

1. cd bindetection/galib247
2. make
3. cd ..
4. cmake .
5. make

We've hit an odd bug where occasionally you get a link error the first time through. If so, repeat the "cmake ." and make. This will produce the creatively named binary "test", which is the recycle bin detector.

Most of the code's options can be controlled from the command line. One thing to edit is line 25 of classifierio.cpp: change the initial /home/ubuntu to the directory the code has been downloaded to. This will require a recompile to take effect.

To run using a camera, run test. This will open the default camera and start detecting. Add a number to the command line to pick another camera. To run against a video, add the video name to the command line (e.g. "test video.avi"). I'm sure I'm missing something, but that's a start.