Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   Nvidia posted a video about first (http://www.chiefdelphi.com/forums/showthread.php?t=137178)

Sperkowsky 08-05-2015 06:37

Nvidia posted a video about first
 
Noticed this on Nvidia's YouTube channel and wanted to share.
https://youtu.be/4KlYdCBdjEg

jman4747 08-05-2015 08:51

Re: Nvidia posted a video about first
 
Kudos to Team 900. I talked to them about this at Championships and can't wait to read the white paper.

Joe Johnson 08-05-2015 10:37

Re: Nvidia posted a video about first
 
Sperkowsky, that was a great video. Thanks for pointing it out.

You want to know something funny? I have literally never seen a picture or video of Team 900's robot prior to the whole Harpoon Bot conversion / Cheesecake Controversy thing (don't pick at that sore, it'll never heal. Seriously. Leave it alone -- we don't need another CD thread on Cheesecaking. GAH! I said it again. STOP IT!).

ANYWAY... ...I don't know what I expected but I know I didn't expect this.

As an old timer who has designed more than my fair share of FIRST robots with beefy arms* (<< a Strongbad reference for the youngins out there), I am impressed. Nice job, Zebracorns.

Dr. Joe J.

P.S. I like your arm but I like Overclock's "standard 3 joint arm" better still (engineers have the great misfortune of falling in love with their designs).

*I just did the accounting: 85% of the FIRST robots I've had a hand in have had something that could fairly be described as a "Beefy Arm." I haven't done the math, but I'd have to guess that this is far higher than in the typical population of FIRST robots, even than in the typical population of "high end" robots (say, those at the IRI). So... I can't argue that I tend to put arms on robots because that is the way to make competitive FIRST robots. No. I am afraid that I just like robots with Beefy Arms.

I should probably get some therapy about that ;-)

marshall 08-05-2015 11:16

Re: Nvidia posted a video about first
 
I was about to come to CD to post this exact video. I'm glad people are getting to see all of the work that our students put into this vision system. It was quite the cool setup in my opinion.

Quote:

Originally Posted by Joe Johnson (Post 1481012)
Sperkowsky, that was a great video. Thanks for pointing it out. [...] As an old timer who has designed more than my fair share of FIRST robots with beefy arms, I am impressed. Nice job, Zebracorns. [...]

Thank you for the compliments, Joe! I'll let you keep the cheesecake. ;)

Seriously, that arm is a beast. It gave us no end of trouble; just ask any of the AndyMark crew about the gearboxes we abused.

We largely borrowed the arm's telescoping design from Team 40's arm (sadly, they aren't around anymore, but if you dig you can find pictures). They had the advantage of just moving inner tubes instead of heavy recycling bins, though.

We enjoyed working with Nvidia in St. Louis and are hoping to collaborate with them more in the future.

EDIT: Flickr gallery of Team 40. Their robot from 2011 was a huge inspiration for our arm design: https://www.flickr.com/photos/trinityrobotics/

IKE 08-05-2015 11:20

Re: Nvidia posted a video about first
 
4 of the 12 robots I have had a hand in had some sort of beefy arm.

Joe Johnson 08-05-2015 15:00

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by IKE (Post 1481021)
4 of the 12 robots I have had a hand in had some sort of beefy arm.

Maybe we should form some sort of Beefy Arm 12 Step Program.

Hi I'm Joe (hi Joe!) and I am a Beefy Arm Designer (audience claps)...

marshall 08-05-2015 15:42

Re: Nvidia posted a video about first
 
And now back to vision...

We're currently threatening to withhold help with some cool off-season projects until our students get the vision white paper published. I suspect a draft will be out soon. They'll be sure to get it posted as soon as they can.

Mastonevich 11-05-2015 11:31

Re: Nvidia posted a video about first
 
I love seeing how things progress in FIRST. Many things today are standard or COTS, but they were great inventions not long ago.

I think vision is one of the things that is still lacking as something "easy" to do. It is great to see progress toward this.

marshall 11-05-2015 13:04

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Mastonevich (Post 1481473)
I love seeing how things progress in FIRST. Many things today are standard or COTS, but they were great inventions not long ago.

I think vision is one of the things that is still lacking as something "easy" to do. It is great to see progress toward this.

I agree completely. We need to move the goal posts for vision. It seems many are content with basic vision tracking using the FIRST-provided vision targets; nothing wrong with that, but it's been the same vision challenge since I was a student.

We've set the bar high for our students working on this project. Our ultimate goal is to remove the driver from certain situations entirely via automation programming and the advancement of this vision system. We think object recognition is a pretty big leap forward and we're hoping to continue to improve this over the years.

Seriously though, our students are working on the white paper for this... or at least that is what they keep telling us anyway.

Jon K. 12-05-2015 21:56

Re: Nvidia posted a video about first
 
In the hours of conversation I have had with team 900 about this arm (and yes, it really has been hours) not once did this vision system get mentioned! That is truly quite the feat! Awesome job!

marshall 12-05-2015 23:37

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Jon K. (Post 1481769)
In the hours of conversation I have had with team 900 about this arm (and yes, it really has been hours) not once did this vision system get mentioned! That is truly quite the feat! Awesome job!

The vision system only really worked when the arm was working so we had to keep our priorities on the arm!

Seriously though, thank you for listening about our gearbox struggles and to everyone at AndyMark for helping us. I don't know that we could build insane crazy arms without you guys. Granted, I'm sure you folks would rather we stop building crazy arms. ;)

Munchskull 13-05-2015 00:57

Re: Nvidia posted a video about first
 
I saw the Nvidia footage of your vision tracking. Are you thinking of making it open source?

marshall 13-05-2015 08:35

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1481799)
I saw the Nvidia footage of your vision tracking. Are you thinking of making it open source?

You're in luck! It already is:

https://github.com/FRC900/2015VisionCode

Contrary to the YouTube comments, it is running on the Jetson using OpenCV and the Nvidia libraries that enable CUDA support... though it is Tegra CUDA and not GeForce CUDA, so take that as you will. Not all CUDA cores are created equal.
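
To give a rough idea of what that looks like, here is a minimal sketch of GPU cascade detection with OpenCV 2.4's gpu module, which is the standard pattern on the Jetson. This is an illustration rather than our actual pipeline; "cascade.xml" is a placeholder for whatever trained classifier you load.

Code:

#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Load a trained cascade (placeholder name) into the GPU detector.
    cv::gpu::CascadeClassifier_GPU classifier("cascade.xml");

    cv::VideoCapture cap(0); // default camera
    cv::Mat frame, gray;
    cv::gpu::GpuMat gpuGray, gpuHits;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        gpuGray.upload(gray); // copy the frame into GPU memory

        // Run the detector on the GPU; it returns the number of hits.
        int n = classifier.detectMultiScale(gpuGray, gpuHits);

        // Pull the detected rectangles back to the CPU and draw them.
        if (n > 0)
        {
            cv::Mat hits;
            gpuHits.colRange(0, n).download(hits);
            const cv::Rect *r = hits.ptr<cv::Rect>();
            for (int i = 0; i < n; ++i)
                cv::rectangle(frame, r[i], cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("detections", frame);
        if (cv::waitKey(1) == 27) // Esc quits
            break;
    }
    return 0;
}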

Munchskull 13-05-2015 18:34

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by marshall (Post 1481823)
You're in luck! It already is: https://github.com/FRC900/2015VisionCode [...]

That is awesome. Just making sure that I am understanding this correctly, as I am not a programmer. It looks to me, from the notes on GitHub, that this vision code is one that you teach. Am I correct?

Also, from a more hardware perspective, would you be able to run this on other boards such as a Raspberry Pi, or is the Jetson required?

Sperkowsky 13-05-2015 22:03

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1481947)
That is awesome. Just making sure that I am understanding this correctly, as I am not a programmer. It looks to me, from the notes on GitHub, that this vision code is one that you teach. Am I correct?

Also, from a more hardware perspective, would you be able to run this on other boards such as a Raspberry Pi, or is the Jetson required?

I'm a hardware guy by nature, and a Raspberry Pi should be OK. I would keep it on a Jetson, but an RPi shouldn't have an issue. I'm pretty sure the code would have to be changed around quite a bit, though.

marshall 14-05-2015 07:52

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1481947)
That is awesome. Just making sure that I am understanding this correctly, as I am not a programmer. It looks to me, from the notes on GitHub, that this vision code is one that you teach. Am I correct?

Also, from a more hardware perspective, would you be able to run this on other boards such as a Raspberry Pi, or is the Jetson required?

It is indeed code that you teach or train. You must provide it with positive and negative images of the items you are seeking to recognize. The white paper we are working on uses the balls from the 2014 game as an example. I've been told that white paper is still being worked on and to expect a draft this Friday.

You do not need a Jetson to run this type of code, but you do need one to run this specific code. In fact, a lot of our prototype work was done on PCs. That being said, we're fans of the Jetson. A Raspberry Pi should work as well.

Also, the code is using a technique known as cascade classification. It's pretty clever, but there are even cleverer ways to do this using neural networks; that is going to become an off-season project for us.
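
To make the teach/train idea concrete, here is a minimal sketch of the CPU side, assuming OpenCV 2.4. The cascade itself is trained offline from those positive and negative images (OpenCV ships an opencv_traincascade tool for this), and the resulting XML file is loaded at runtime. The file names here are placeholders, not files from our repo.

Code:

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    // Load a cascade trained offline with opencv_traincascade
    // (the file name is a placeholder).
    cv::CascadeClassifier classifier;
    if (!classifier.load("cascade.xml"))
        return 1;

    // Placeholder test image; a camera frame works the same way.
    cv::Mat img = cv::imread("frame.png");
    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray); // common preprocessing step

    // Ask the classifier for every region that looks like the
    // trained object, across multiple scales.
    std::vector<cv::Rect> hits;
    classifier.detectMultiScale(gray, hits);

    for (size_t i = 0; i < hits.size(); ++i)
        std::printf("hit at (%d,%d) %dx%d\n",
                    hits[i].x, hits[i].y, hits[i].width, hits[i].height);
    return 0;
}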

KJaget 14-05-2015 13:39

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Sperkowsky (Post 1481977)
I'm a hardware guy by nature, and a Raspberry Pi should be OK. I would keep it on a Jetson, but an RPi shouldn't have an issue. I'm pretty sure the code would have to be changed around quite a bit, though.

The code changes aren't that bad. It's only a few changes to switch between GPU and CPU detection code using OpenCV - you just switch from a CascadeClassifier object to a CascadeClassifier_GPU object and most everything else just works. There might be slight differences in the parameters passed to the call to actually do the detect - we just wrapped them in classes which hid those differences from the calling code. Our code builds and runs not only on a Jetson but on x86 Linux, Windows and Cygwin and autodetects whether to use CPU or GPU based on the hardware it finds.

The bigger problem is going to be speed. Based on what we saw running on the Jetson CPUs, I'm not sure RPi performance is going to be usable. I don't have any specific tests to prove it, but I'd be surprised if it were.
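
To sketch what that wrapper looks like (simplified, with hypothetical class names rather than the ones in our repo), the idea is one interface with a CPU and a GPU implementation behind it, chosen at runtime:

Code:

#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

// Common interface so the calling code doesn't care which detector runs.
class Detector
{
public:
    virtual ~Detector() {}
    virtual void detect(const cv::Mat &gray, std::vector<cv::Rect> &out) = 0;
};

class CpuDetector : public Detector
{
    cv::CascadeClassifier c_;
public:
    CpuDetector(const std::string &xml) { c_.load(xml); }
    void detect(const cv::Mat &gray, std::vector<cv::Rect> &out)
    {
        c_.detectMultiScale(gray, out);
    }
};

class GpuDetector : public Detector
{
    cv::gpu::CascadeClassifier_GPU c_;
public:
    GpuDetector(const std::string &xml) : c_(xml) {}
    void detect(const cv::Mat &gray, std::vector<cv::Rect> &out)
    {
        cv::gpu::GpuMat in(gray), buf; // upload the frame to the GPU
        int n = c_.detectMultiScale(in, buf);
        out.clear();
        if (n > 0)
        {
            cv::Mat host;
            buf.colRange(0, n).download(host);
            const cv::Rect *r = host.ptr<cv::Rect>();
            out.assign(r, r + n);
        }
    }
};

// Autodetect: use the GPU when CUDA hardware is present, else the CPU.
Detector *makeDetector(const std::string &xml)
{
    if (cv::gpu::getCudaEnabledDeviceCount() > 0)
        return new GpuDetector(xml);
    return new CpuDetector(xml);
}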

ToddF 14-05-2015 13:50

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Joe Johnson (Post 1481012)
I like your arm but I like Overclock's "standard 3 joint arm" better still (engineers have the great misfortune of falling in love with their designs).

Our Doc Ock arm was used for cans.

Munchskull 14-05-2015 14:33

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by marshall (Post 1482006)
It is indeed code that you teach or train. You must provide it with positive and negative images of the items you are seeking to recognize. [...]

Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment would I need? Required libraries? Required camera? Etc.)

marshall 14-05-2015 14:53

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1482083)
Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment would I need? Required libraries? Required camera? Etc.)

I will see what I can do about getting a student to post some examples once the white paper is up. The computer versions were never meant to run beyond POC from what I recall. I don't think we ever had a complete version built for PC but I could be wrong. I'll see what I can do though.

Sperkowsky 14-05-2015 16:02

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by KJaget (Post 1482063)
The code changes aren't that bad. [...] The bigger problem is going to be speed. Based on what we saw running on the Jetson CPUs, I'm not sure RPi performance is going to be usable. [...]

I went to the Jetson release at Maker Faire and talked to the lead designer. The architecture is quite different, but the speed increase is quite nominal compared to the Pi. That's coming from an Intel designer.

Munchskull 14-05-2015 16:34

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by marshall (Post 1482086)
I will see what I can do about getting a student to post some examples once the white paper is up. The computer versions were never meant to run beyond POC from what I recall. I don't think we ever had a complete version built for PC but I could be wrong. I'll see what I can do though.

May I suggest that you make an independent thread for this discussion? That would allow for a more formal place to talk about this awesome piece of software.

KJaget 14-05-2015 18:51

Re: Nvidia posted a video about first
 
Quote:

Originally Posted by Munchskull (Post 1482083)
Would you be able to post the computer version of the code?

And if you can, would you be able to give directions on how to run it? (e.g., what environment would I need? Required libraries? Required camera? Etc.)

I'll take a quick shot at this, from memory. Give it a try and ask questions if you run into problems. This will eventually morph into a readme in our code, but feedback will help us debug it.

The code in our git repo (https://github.com/FRC900/2015VisionCode) will build and run on Linux or Windows. The training code will require either Linux or Cygwin for some of the scripts. The C++ code will build on everything we've tried, which includes x86 Windows, x86 Linux, ARM Linux (for the Jetson) and so on.

You'll need OpenCV 2.4.x installed. On Linux this is typically an apt-get thing or the equivalent. For Windows, the OpenCV page is good - http://docs.opencv.org/doc/tutorials...s_install.html.

For Cygwin, we've had luck with the tarball at http://hvrl.ics.keio.ac.jp/kimura/op...cv-2.4.11.html. I think we had to move the extracted files into /lib, /share, and so on for the compiler to find them.

The code works with any camera we've thrown at it. It will also run on still images or on video files. For example, for testing we ran the code against video we downloaded from YouTube. We have special code in place for the Logitech C920 under Linux, since that's what we used, but it wasn't as critical as we thought to use that particular camera.

The detection code itself is in the subdir bindetection. Steps to build:
1. cd bindetection/galib247
2. make
3. cd ..
4. cmake .
5. make

We've hit a weird bug where occasionally you get a link error the first time through. If so, repeat the "cmake ." and make.

This will produce the creatively named binary "test", which is the recycle bin detector.

Most of the options to the code can be controlled from the command line. One thing you do need to edit is line 25 of classifierio.cpp: change the initial /home/ubuntu to the directory the code has been downloaded to. This will require a recompile to take effect.

To run using a camera, run test. This will open the default camera and start detecting. Add a number to the command line to pick another camera.
To run against a video, add the video name to the command line (e.g. "test video.avi").
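
For reference, command-line handling of that sort usually boils down to something like the following sketch (hypothetical, simplified from what our code actually does): no argument opens the default camera, a bare number picks a camera index, and anything else is treated as a video file.

Code:

#include <cstdlib>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    cv::VideoCapture cap;
    if (argc < 2)
    {
        cap.open(0); // no argument: default camera
    }
    else
    {
        char *end;
        long idx = std::strtol(argv[1], &end, 10);
        if (*end == '\0')
            cap.open(static_cast<int>(idx)); // a number: camera index
        else
            cap.open(argv[1]); // anything else: video file path
    }
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // Detection would happen here; just display the input.
        cv::imshow("input", frame);
        if (cv::waitKey(1) == 27) // Esc quits
            break;
    }
    return 0;
}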

I'm sure I'm missing something but that's a start.

ForeverAlon 28-05-2015 20:22

Re: Nvidia posted a video about first
 
Here is a link to Team 900's vision white paper: http://www.chiefdelphi.com/forums/sh....php?p=1484741

