Re: We Want Pictures and Videos of Your Boulders
Here are our pictures (be warned there are a lot of them). Enjoy!
https://drive.google.com/open?id=0B6...DZmVVlrSWRUeWs
Re: We Want Pictures and Videos of Your Boulders
Thanks again for the pics; they have honestly been quite helpful for testing. Attachment 20019
Re: We Want Pictures and Videos of Your Boulders
Just in case anyone thought we were joking about this: it's finally been merged into our code, and we are aiming to put it on a robot before too long.
The white paper should be coming soon too. :) I'm super proud of my students who have been doing this research. They are some of the smartest people I know.
Re: We Want Pictures and Videos of Your Boulders
We only have 2 of them; those things are expensive (though this is FRC we're talking about, so maybe not comparatively). One is pretty new, and the other got caught on a nail in a wooden prototype, so it's all ripped. Oh, and we also played soccer outside with the ripped boulder.
Re: We Want Pictures and Videos of Your Boulders
Alright... it took us a year to get this working but it finally works:
https://youtu.be/OT5FHyLBjCg
https://youtu.be/eRMo1_hJNa0

That's video of our 2016 robot using a neural-network-based vision system to detect and pick up a game piece by itself (NO DRIVER INTERACTION) from the 2016 game, FIRST Stronghold.

And contrary to what some teams are saying about the Nvidia TX1... we love it. We are using it with a Stereolabs ZED camera to detect the boulder, then sending data back via ZeroMQ to a National Instruments RoboRIO running LabVIEW, which, along with a KauaiLabs NavX MXP IMU board, orients the robot, drives toward the ball, and grabs it.

So to all of you who thought we were joking when we asked for pictures: we weren't. This is what we've done with them.

Also, check out the awesome custom case we made for our TX1. Files for reproducing the case are available here: https://workbench.grabcad.com/workbe...vJsPGiurrAmGW7

EDIT: Code is available on our GitHub... it's been out there for a while, though.
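For anyone curious about the TX1-to-RoboRIO link, here is a minimal sketch of a ZeroMQ publisher of the kind described above, assuming pyzmq on the TX1 side. The port number and message fields are illustrative guesses only, not our actual protocol; the real code is on our GitHub.

```python
# Minimal ZeroMQ publisher sketch (pyzmq). The port and JSON fields are
# placeholders for illustration, not the actual protocol used on the robot.
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5800")  # example port only

def publish_detection(x_px, y_px, distance_m):
    """Publish one boulder detection: image coordinates plus stereo depth."""
    socket.send_json({"x": x_px, "y": y_px, "distance": distance_m})
```

The RoboRIO side would then run a matching subscriber and feed the coordinates into the drive code.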
Re: We Want Pictures and Videos of Your Boulders
FRC272 used this method in 2010 to align with the soccer balls. They wrote about their experience with this method in a PowerPoint they used for a seminar last year here. The video isn't working, but I know this solution worked well for them.
Re: We Want Pictures and Videos of Your Boulders
With all that said, if you're thinking about doing this yourself, understand that a ton of work has gone into it and a ton more will go into it before it is reliable. This isn't something a potential championship-winning team should be relying on to establish dominance in a match. It's untested, unreliable, and fragile... it's still really frickin' cool, though.
Re: We Want Pictures and Videos of Your Boulders
Cool! This is AFAIK the first working NN implemented on an FRC robot.
Have you tried comparing your NN approach to a more traditional model-based vision approach?

With a single camera, I'd suggest trying the Hough circle transform on an intensity or edge image. If you assume that the balls are sitting on the ground plane and your camera is at a fixed height and angle, you can constrain the range of radii that you need to consider.

With a stereo/depth camera rig you can do even better: estimate and remove points near the floor plane, then cluster the remaining points into spheres (using a 3D template matching algorithm or a Hough sphere transform, only now the scale is entirely fixed).

(C)NNs are awesome technology and are quickly becoming ubiquitous in computer vision (and if only for this reason, they are a worthwhile learning exercise for your team!). They excel at solving hard detection and classification problems where humans don't have good intuition about how to specify useful visual features and the relationships between them. But for tracking a roughly monochromatic sphere on a level surface, I'd wager that human intuition is pretty good.
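For reference, here is a rough OpenCV sketch of the single-camera idea. The Hough parameters and radius bounds are placeholders; in practice you would derive the radius range from the known ball diameter and the camera's fixed height and angle.

```python
# Sketch of a Hough-circle boulder finder (OpenCV). Parameter values are
# placeholders; the radius bounds would come from camera geometry.
import cv2

def find_boulders(bgr_frame, min_radius=20, max_radius=120):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # smooth out ball markings and sensor noise
    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=1.2,           # inverse ratio of accumulator resolution
        minDist=50,       # minimum spacing between detected centers
        param1=100,       # Canny high threshold for the internal edge detector
        param2=40,        # accumulator votes; lower finds more (and falser) circles
        minRadius=min_radius,
        maxRadius=max_radius,
    )
    return [] if circles is None else circles[0]  # each entry is (x, y, r)
```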
Re: We Want Pictures and Videos of Your Boulders
Another 900 mentor here (snow's coming so no one cares about work@work today).
More seriously, we do incorporate some of the scale-limiting ideas you mention to speed up our code. It is still CNNs doing the bulk of the work, though.
I like to think the students who worked on the various parts got a lot out of it. Plus it works, so bonus.
We spent a bit of time with meanshift and camshift and didn't have much luck. Some of the non-intuitive things we found about the problem:

1. The balls are reflective, so they pick up the color of field lighting pretty easily. People can easily determine that they're gray despite this; computers have a tougher time.
2. They are sorta monochromatic, except for the random white markings on them depending on the ball's rotation, plus what I mentioned above. Plus shadows and whatever. Typical computer vision problems.
3. There's lots of gray stuff on the field in addition to the balls.

None of these are insurmountable. But we only had so many people to throw at the problem, which goes back to the point that the CNNs worked way better than expected pretty quickly.
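For anyone who wants to see what the meanshift/camshift attempt looks like in outline, here is a rough OpenCV CamShift sketch. The initial ROI, saturation/value thresholds, and histogram settings are made up for illustration, not our actual values; the comment in the middle is the core of the issue, since a gray, reflective ball gives a weak hue histogram.

```python
# Illustrative CamShift tracker (OpenCV). The init window and HSV thresholds
# below are placeholders; a gray, reflective ball makes this approach fragile.
import cv2
import numpy as np

def camshift_track(frames, init_window):
    """Track a region across an iterable of BGR frames; init_window is (x, y, w, h)."""
    x, y, w, h = init_window
    first = next(frames)
    hsv_roi = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # A mostly-gray ball has very low saturation, so the hue histogram built here
    # is weak and easily confused by field lighting and other gray objects.
    mask = cv2.inRange(hsv_roi, np.array((0., 0., 40.)), np.array((180., 60., 220.)))
    hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = init_window
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_rect, window = cv2.CamShift(back_proj, window, term)
        yield rot_rect  # rotated rect around the tracked blob, if it held on
```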