Zebravision 3.0 represents the culmination of many hours of work and research by the programming students of FRC Team 900, also known as the Zebracorns. It encompasses a library and several applications designed to use a Logitech C920 camera mounted on a robot to detect recycling bins in the 2015 FRC game, Recycle Rush. It runs in real time using the OpenCV library onboard an NVIDIA Jetson TK1 DevKit mounted to the robot. Detection is done using the cascade classification method with Local Binary Patterns.
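The white paper covers the full cascade pipeline; as background, the Local Binary Pattern feature it mentions can be sketched for a single pixel in plain Python. This is illustrative only: Zebravision uses OpenCV's trained LBP cascade classifier, not hand-rolled code, and the image values below are made up.

```python
# Sketch of the 8-neighbor Local Binary Pattern code that LBP cascade
# classifiers build their features from (illustrative, not Zebravision code).

# Neighbor offsets, clockwise from the top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(image, row, col):
    """8-bit LBP code: bit for neighbor i is set if neighbor >= center."""
    center = image[row][col]
    code = 0
    for i, (dr, dc) in enumerate(OFFSETS):
        if image[row + dr][col + dc] >= center:
            code |= 1 << (7 - i)
    return code

# Tiny example: three of the eight neighbors (7, 8, 9) are >= the center
# value 6, giving the pattern 0b00001110 = 14.
img = [[5, 4, 3],
       [2, 6, 1],
       [9, 8, 7]]
print(lbp_code(img, 1, 1))  # -> 14
```

A classifier then histograms these codes over image patches, which is what makes LBP cascades fast enough for real-time use on hardware like the Jetson.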
White Paper.pdf
05-28-2015 08:18 PM
ForeverAlon: Zebravision 3.0 is Team 900's 2015 initiative to take robot vision in FRC further. In the 2015 season we successfully integrated cascade classification using feature detection, as well as an automated tracking and navigation system. This paper details what we did and how we did it, and offers a tutorial so that other teams can use this application. If you have any questions, please post here and someone who worked on the paper will respond.
05-29-2015 11:51 AM
faust1706: What cost function did you use in your classifier?
05-29-2015 12:27 PM
Bernini: It seems that you didn't fully utilize the classifier. You classify a bin, for instance, then compute on that. That's extremely inefficient, considering you could simply use a CNN and an SVM to compute anything you want about the object, including the distance and rotation to it.
05-29-2015 04:44 PM
KJaget
05-29-2015 05:22 PM
faust1706: I figured you guys did that, just wanted to make sure though.
Will you be releasing an analysis of your data? Not your training sets, but rather a statistical analysis of the classifier's output.
05-29-2015 06:48 PM
Hjelstrom: Wow, great job! Can't wait to see what you guys do next!
05-29-2015 09:36 PM
marshall:
| I figured you guys did that, just wanted to make sure though.
Will you be releasing an analysis of your data? Not your training sets, but rather a statistical analysis of the classifier's output. |
05-29-2015 09:42 PM
marshall:
| It seems that you didn't fully utilize the classifier. You classify a bin for instance, then compute on that. Extremely inefficient considering you could simply use a CNN and a SVM to compute anything you want about the object, including distance and rotation to it. |
05-29-2015 09:43 PM
marshall
06-01-2015 03:12 AM
faust1706: I do have one more request: could you post the raw data that you analyze?
06-01-2015 01:56 PM
ForeverAlon:
| Will you be releasing an analysis of your data? Not your training sets, but rather a statistical analysis of the classifier's output. |
| I do have one more request, could you post the raw data that you analyze? |
06-01-2015 02:18 PM
faust1706: For starters, when nothing is moving, how much do your output variables change? How much noise does your output data have? Can said noise be classified as Gaussian? What is the exact relationship between resolution and frame rate? How much precision do you lose or gain with different resolutions?
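For anyone wanting to run this kind of stationary-target measurement themselves, a minimal version of the noise summary being asked for can be done with the Python standard library. The distance readings below are invented, not Zebravision output, and the one-sigma check is only a crude proxy for a real Gaussian test.

```python
# Quantify output noise from a stationary target: mean, spread, and a crude
# normality proxy (roughly 68% of Gaussian samples fall within one standard
# deviation of the mean). The readings here are made-up numbers.
from statistics import mean, stdev

def noise_summary(samples):
    """Return (mean, sample stddev, fraction within one stddev)."""
    mu = mean(samples)
    sigma = stdev(samples)
    within = sum(1 for s in samples if abs(s - mu) <= sigma) / len(samples)
    return mu, sigma, within

# Hypothetical distance-to-bin readings (meters) with the robot parked:
readings = [2.51, 2.49, 2.50, 2.52, 2.48, 2.50, 2.51, 2.49]
mu, sigma, frac = noise_summary(readings)
print(f"mean={mu:.3f} m, stddev={sigma:.4f} m, within 1 sigma: {frac:.0%}")
```

With enough samples, a proper normality test (e.g. a Q-Q plot or Shapiro-Wilk via SciPy) would answer the "is it Gaussian" question more rigorously.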
06-03-2015 09:37 AM
KJaget:
| For starters, when nothing is moving, how much do your output variables change? How much noise does your output data have? Can said noise be classified as Gaussian? What is the exact relationship between resolution and frame rate? How much precision do you lose / gain with different resolutions? |
06-12-2015 09:22 AM
faust1706: I am struggling to find the time for this inquiry. Here is a question you may be able to answer for me: how often did you get false positives? False negatives?
I'll eventually find the time to compile all the data from the vision programs in FRC over the past few years, 341's, 1706's, and yours, and do an analysis on each one. But that might be tricky considering I have none of the materials they were all designed for.
Here is what @bernini (if we all start to do this, eventually Chief Delphi will add the feature, one can hope) was talking about with a CNN (convolutional neural network) and an SVM (support vector machine): http://yann.lecun.com/exdb/publis/pd...g-lecun-06.pdf
Your implementation of the same algorithm for FRC would yield better results due to the smaller scale of the network and SVM; I would suspect it to be 100 percent accurate in detecting with so few classes to classify something into (ball, robot, goal, etc.).
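As a toy illustration of the final stage of the CNN+SVM pipeline being suggested, here is a one-vs-rest linear SVM decision over a feature vector. The weights, biases, features, and class names are all invented for the demo; in a real system each class's parameters would be learned from CNN features as in the linked paper.

```python
# Toy one-vs-rest linear SVM decision stage. All numbers are made up;
# in practice each class's (weights, bias) pair comes from training.

def score(features, weights, bias):
    """Linear SVM margin: w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def classify(features, class_params):
    """Pick the class whose linear SVM gives the largest margin."""
    return max(class_params,
               key=lambda name: score(features, *class_params[name]))

# Two made-up features (say, fill ratio and aspect ratio) and three
# hypothetical FRC object classes:
classes = {
    "bin":   ([ 2.0, -1.0], -0.5),
    "robot": ([-1.0,  2.0],  0.0),
    "ball":  ([ 0.5,  0.5], -1.0),
}
print(classify([0.9, 0.2], classes))  # -> "bin"
```

The appeal of this design is exactly what the post says: with only a handful of well-separated classes, even a small network feeding a linear classifier can be very accurate.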
| Thanks! Neither can we. We've got some plans we're working on though. Something about depth perception and neural networks last I heard. |
08-09-2016 07:44 PM
AMendenhall: I know I'm late to the party, but still: that's awesome.
How did you guys get your project to run when the robot started up? Is that done with code on the RoboRIO or on the Jetson?
08-09-2016 09:25 PM
marshall:
| I know I'm late to the party, but still: that's awesome.
How did you guys get your project to run when the robot started up? Is that done with code on the RoboRIO or on the Jetson? |
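marshall's answer didn't survive this archive. For completeness, one common way to launch a program at boot on a systemd-based Linux install (which is what would run on the Jetson side, not the RoboRIO) is a service unit like the following. This is a generic sketch with made-up paths and names, not Team 900's actual configuration, and note that the Jetson TK1's stock Linux for Tegra images of that era shipped with Upstart rather than systemd.

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/zebravision.service.
# Paths and names are illustrative, not from the Zebravision repository.
[Unit]
Description=Start the vision program at boot
After=network.target

[Service]
ExecStart=/home/ubuntu/zebravision/zv
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the file, `systemctl enable zebravision.service` registers it to start on every boot.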