Noticed this on NVIDIA's YouTube and wanted to share.
Kudos to Team 900. I talked to them about this at Championships, and can't wait to read the white paper.
Sperkowsky, that was a great video. Thanks for pointing it out.
You want to know something funny? I have literally never seen a picture or video of Team 900's robot prior to the whole Harpoon Bot conversion / Cheesecake Controversy thing (don't pick at that sore, it'll never heal. Seriously. Leave it alone – we don't need another CD thread on Cheesecaking. GAH! I said it again. STOP IT!).
ANYWAY… …I don’t know what I expected but I know I didn’t expect this.
As an old timer who has designed more than my fair share of FIRST robots with beefy arms* (<< a Strongbad reference for you younguns out there), I am impressed. Nice job, Zebracorns.
Dr. Joe J.
P.S. I like your arm but I like Overclock’s “standard 3 joint arm” better still (engineers have the great misfortune of falling in love with their designs).
*I just did the accounting: 85% of the FIRST robots I've had a hand in have had something that could fairly be described as a "Beefy Arm." I haven't done the math, but I'd have to guess that this is far higher than in the typical population of FIRST robots, even the typical population of "high end" robots (say, those at IRI). So… I can't argue that I tend to put arms on robots because that is the way to make competitive FIRST robots. No. I am afraid that I just like robots with Beefy Arms.
I should probably get some therapy about that.
I was about to come to CD to post this exact video. I’m glad people are getting to see all of the work that our students put into this vision system. It was quite the cool setup in my opinion.
Thank you for the compliments Joe! I’ll let you keep the cheesecake.
Seriously, that arm is a beast. It gave us no end of trouble, just ask any of the AndyMark crew about the gearboxes we abused.
We largely borrowed the arm's telescoping design from Team 40's arm (sadly they aren't around anymore, but if you dig you can find pictures). They had the advantage of moving inner tubes instead of heavy recycling bins, though.
We enjoyed working with Nvidia in St Louis and are hoping to collaborate with them more in the future.
EDIT: Flickr gallery of Team 40. Their robot from 2011 was a huge inspiration for our arm design: https://www.flickr.com/photos/trinityrobotics/
4/12 robots I have had a hand in had some sort of beefy arm.
Maybe we should form some sort of Beefy Arm 12 Step Program.
Hi I’m Joe (hi Joe!) and I am a Beefy Arm Designer (audience claps)…
And now back to vision…
We’re currently threatening some students with not helping them with some cool off season projects until they get the vision white paper published. I suspect a draft will be out soon. They’ll be sure to get it posted as soon as they can.
I love seeing how things progress in FIRST. Many things today are standard or COTS, but they were great inventions not long ago.
I think vision is one of the things that is still lacking as something "easy" to do. It is great to see progress toward this.
I agree completely. We need to move the goal posts for vision. It seems many are content with basic vision tracking using the FIRST-provided vision targets. There's nothing wrong with that, but it's been the same vision challenge since I was a student.
We’ve set the bar high for our students working on this project. Our ultimate goal is to remove the driver from certain situations entirely via automation programming and the advancement of this vision system. We think object recognition is a pretty big leap forward and we’re hoping to continue to improve this over the years.
Seriously though, our students are working on the white paper for this… or at least that is what they keep telling us anyway.
In the hours of conversation I have had with team 900 about this arm (and yes, it really has been hours) not once did this vision system get mentioned! That is truly quite the feat! Awesome job!
The vision system only really worked when the arm was working so we had to keep our priorities on the arm!
Seriously though, thank you for listening about our gearbox struggles and to everyone at AndyMark for helping us. I don’t know that we could build insane crazy arms without you guys. Granted, I’m sure you folks would rather we stop building crazy arms.
I saw the NVIDIA footage of your vision tracking. Are you thinking of making it open source?
You’re in luck! It already is:
Contrary to the YouTube comments, it is running on the Jetson using OpenCV and the NVIDIA libraries that enable CUDA support… though it is Tegra CUDA, not GeForce CUDA, so take that as you will. Not all CUDA cores are created equal.
That is awesome. Just making sure that I am understanding this correctly, as I am not a programmer. It looks to me from the notes on GitHub that this vision code is one that you teach. Am I correct?
Also, from a more hardware perspective, would you be able to run this on other boards such as a Raspberry Pi, or is the Jetson required?
I'm a hardware guy by nature, and a Raspberry Pi should be OK. I would keep it on a Jetson, but an RPi shouldn't have an issue. I'm pretty sure the code would have to be changed around quite a bit, though.
It is indeed code that you teach, or train: you must provide it with positive and negative images of the items you are seeking to recognize. The white paper we are working on uses the balls from the 2014 game as an example. I've been told that the white paper is still being worked on and to expect a draft this Friday.
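The thread doesn't spell out the training workflow, but as a rough sketch of how OpenCV's stock cascade-training tool is typically driven, here is a small helper that assembles an `opencv_traincascade` invocation. The paths, sample counts, and window size below are made-up placeholders, not the team's actual settings:

```python
# Sketch of assembling an OpenCV cascade-training command line.
# All paths and sample counts here are hypothetical placeholders.

def traincascade_cmd(data_dir, vec_file, bg_file,
                     num_pos, num_neg, num_stages, width, height):
    """Build the argument list for OpenCV's opencv_traincascade tool."""
    return [
        "opencv_traincascade",
        "-data", data_dir,        # output directory for stage files + cascade.xml
        "-vec", vec_file,         # positive samples (from opencv_createsamples)
        "-bg", bg_file,           # text file listing negative images
        "-numPos", str(num_pos),
        "-numNeg", str(num_neg),
        "-numStages", str(num_stages),
        "-w", str(width), "-h", str(height),
    ]

cmd = traincascade_cmd("cascade_out", "balls.vec", "negatives.txt",
                       1000, 2000, 15, 24, 24)
print(" ".join(cmd))
```

The positive samples `.vec` file would come from `opencv_createsamples`, and the resulting `cascade.xml` is what gets loaded at runtime for detection.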
You do not need a Jetson to run this type of code but you do need one to run this specific code. In fact, a lot of our prototype work was done on PCs. That being said, we’re fans of the Jetson. A raspberry pi should work as well.
Also, the code is using a technique known as cascade classification. It's pretty clever, but there are even cleverer ways to do this using neural networks; that is going to become an off-season project for us.
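For anyone curious, the essence of cascade classification is a chain of increasingly expensive tests where most candidate windows get rejected by a cheap early stage, so the costly later stages run on only a small fraction of candidates. This toy sketch shows just that control flow; the stage thresholds and scores are invented for illustration, not the real Haar/LBP feature math:

```python
# Toy illustration of a classifier cascade's early-rejection control flow.
# Real cascades (e.g. Viola-Jones) use boosted feature stages; the "stages"
# here are arbitrary score thresholds just to show the idea.

def run_cascade(stage_scores, stage_thresholds):
    """Return True only if the candidate passes every stage in order.

    Most negatives fail an early (cheap) stage, so the expensive
    later stages never run for them.
    """
    for score, threshold in zip(stage_scores, stage_thresholds):
        if score < threshold:
            return False  # rejected early; later stages are skipped
    return True

stages = [0.2, 0.5, 0.8]                 # progressively stricter stages
ball_like = [0.9, 0.7, 0.85]             # passes all three stages
background = [0.1, 0.0, 0.0]             # rejected at the first stage
print(run_cascade(ball_like, stages))    # True
print(run_cascade(background, stages))   # False
```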
The code changes aren’t that bad. It’s only a few changes to switch between GPU and CPU detection code using OpenCV - you just switch from a CascadeClassifier object to a CascadeClassifier_GPU object and most everything else just works. There might be slight differences in the parameters passed to the call to actually do the detect - we just wrapped them in classes which hid those differences from the calling code. Our code builds and runs not only on a Jetson but on x86 Linux, Windows and Cygwin and autodetects whether to use CPU or GPU based on the hardware it finds.
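The wrapper idea described above (one interface, backend picked at startup) can be sketched in a few lines. This Python stub is ours, not the team's code: the class names and the `gpu_device_count` probe are illustrative stand-ins, and the detect bodies are placeholders rather than real OpenCV calls:

```python
# Sketch of hiding a CPU-vs-GPU backend choice behind one detector interface,
# in the spirit of wrapping OpenCV's CascadeClassifier / CascadeClassifier_GPU.
# Class names and the gpu_device_count probe are illustrative, not real APIs.

class CpuCascadeDetector:
    backend = "cpu"
    def detect(self, frame):
        # Real code would call CascadeClassifier::detectMultiScale here.
        return []

class GpuCascadeDetector:
    backend = "gpu"
    def detect(self, frame):
        # Real code would call the GPU cascade's detect path here.
        return []

def make_detector(gpu_device_count):
    """Pick the GPU path when CUDA hardware is found, else fall back to CPU.

    In real code the probe would be an OpenCV CUDA device-count query;
    here it is just an int parameter for illustration.
    """
    if gpu_device_count > 0:
        return GpuCascadeDetector()
    return CpuCascadeDetector()

# Calling code never cares which backend it got:
detector = make_detector(gpu_device_count=0)
print(detector.backend)          # "cpu" when no CUDA device is found
print(detector.detect(frame=None))
```

The payoff is exactly what the post describes: the calling code stays identical across Jetson, x86 Linux, Windows, and Cygwin, and only the factory function knows about the hardware.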
The bigger problem is going to be speed. Based on what we saw running on the Jetson CPUs, I'm not sure RPi performance is going to be usable. I don't have any specific tests to prove it, but I'd be surprised if it were fast enough.
Our Doc Ock arm was used for cans.
Would you be able to post the computer version of the code?
And if you can, would you be able to give directions on how to run it? (e.g. what environment would I need? Required libraries? Required camera? Etc.)
I will see what I can do about getting a student to post some examples once the white paper is up. The computer versions were never meant to run beyond proof of concept, from what I recall. I don't think we ever had a complete version built for PC, but I could be wrong. I'll see what I can do, though.