Teach me Vision

Hello,
Is there anyone who can teach me, from scratch, how to do vision processing?
Wait, before you close this thread: I promised my team I would succeed at this, and so far I've failed…

I have two ways to do it (programming in Java):

  • Raspberry Pi
  • GRIP

If you know how to program a vision pipeline with one of these approaches, please comment and teach me.

I'm not an idiot; I've been programming for 5 years - websites and applications.
I've been in the FRC program for 2 years and I know how things work. It's just this topic that's giving me trouble…

Thanks,
Nir.

The biggest thing that helped me learn vision was looking at other teams' code. 548 will post their vision code soon; you can also look at other teams' code from 2012 or 2013, when vision was used a lot.

Also, remember to start with crude vision. Don't try to overcomplicate things before you get a basic form working. As to the actual details, I'm sorry but I can't help you - we work in LabVIEW with NI Vision Assistant.

I can walk you through the steps I used to get started with OpenCV.

Step 1: Compile a sample program - This may seem trivial, but do it. It makes sure your environment is set up right.

Step 2: Load a static image and display it - Again, simple, but you’d be surprised at just how important small successes are in programming.

Step 3: Run a simple filter on your static image (I'm partial to thresholding by a color) and display the result - Not only does this give you an understanding of how to process images, but you're going to do a lot of it anyway.
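
To make Step 3 concrete, here's the per-pixel idea behind a binary threshold, sketched in plain Java so it runs without any OpenCV setup (in OpenCV's Java bindings the real one-liner is Imgproc.threshold, or Core.inRange for a color range; the helper name here is my own):

```java
public class ThresholdDemo {
    // Binary threshold: pixels brighter than t become white (255), the rest black (0).
    // The OpenCV equivalent is one call:
    //   Imgproc.threshold(src, dst, t, 255, Imgproc.THRESH_BINARY);
    static int[][] threshold(int[][] gray, int t) {
        int[][] out = new int[gray.length][];
        for (int r = 0; r < gray.length; r++) {
            out[r] = new int[gray[r].length];
            for (int c = 0; c < gray[r].length; c++) {
                out[r][c] = gray[r][c] > t ? 255 : 0;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { { 10, 200 }, { 128, 255 } };
        int[][] mask = threshold(img, 128);
        System.out.println(java.util.Arrays.deepToString(mask)); // [[0, 255], [0, 255]]
    }
}
```

Displaying the before/after images side by side (HighGui.imshow in OpenCV's Java bindings) is what makes this step satisfying.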

Step 4: Convert between different color spaces - Typically you start with an RGB image, which you should convert to HSV. Even if you don't need to, it's good to know how.
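
To get a feel for what the conversion does, the JDK's own java.awt.Color.RGBtoHSB performs the same kind of mapping for a single pixel. Two caveats if you move this to OpenCV: its Imgproc.cvtColor works on whole images, images load as BGR rather than RGB, and hue is scaled to 0-179 instead of 0-360:

```java
import java.awt.Color;

public class HsvDemo {
    public static void main(String[] args) {
        // Pure red in RGB...
        float[] hsb = Color.RGBtoHSB(255, 0, 0, null);
        // ...has hue 0, full saturation, full brightness in HSV/HSB.
        System.out.printf("h=%.1f s=%.2f v=%.2f%n", hsb[0] * 360, hsb[1], hsb[2]);
        // Whole-image equivalent in OpenCV (hue comes out in 0-179):
        //   Imgproc.cvtColor(src, dst, Imgproc.COLOR_BGR2HSV);
    }
}
```

The reason HSV matters: a colored target's hue stays roughly constant as lighting changes, while its RGB values swing all over the place.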

Step 5: Start writing your filter to find the desired feature - Typically this is a series of steps to isolate the feature. Ours was a simple threshold by hue range this year.
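
The hue-range threshold in Step 5 is essentially this predicate applied to every pixel (in OpenCV the call is Core.inRange on the HSV image; the helper name below is my own). The wraparound handling matters for red, which straddles the ends of OpenCV's 0-179 hue scale:

```java
public class HueRange {
    // True if hue h (0-179, OpenCV convention) falls in [lo, hi], treating the
    // range as circular so a red range like lo=170, hi=10 works correctly.
    static boolean inHueRange(int h, int lo, int hi) {
        if (lo <= hi) return h >= lo && h <= hi;
        return h >= lo || h <= hi; // wrapped range, e.g. lo=170, hi=10
    }

    public static void main(String[] args) {
        System.out.println(inHueRange(60, 50, 70));  // green band -> true
        System.out.println(inHueRange(5, 170, 10));  // red, wrapped -> true
        System.out.println(inHueRange(90, 170, 10)); // cyan-ish -> false
    }
}
```

In practice you threshold on saturation and value too, so dull or dark pixels of the right hue don't sneak through.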

Step 6: Extract your features - For this I typically use the various contour functions.
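
In OpenCV, Imgproc.findContours does the real work here, and Imgproc.boundingRect turns each contour into a rectangle. As a rough, dependency-free stand-in for what you get out of that pair, here's a bounding box computed over a binary mask (over the whole image rather than per blob, to keep it short):

```java
public class BlobBox {
    // Bounding box {minRow, minCol, maxRow, maxCol} of all white (255) pixels
    // in a mask, or null if the mask is empty. OpenCV's findContours +
    // boundingRect give you this per connected blob instead of per image.
    static int[] boundingBox(int[][] mask) {
        int minR = Integer.MAX_VALUE, minC = Integer.MAX_VALUE, maxR = -1, maxC = -1;
        for (int r = 0; r < mask.length; r++)
            for (int c = 0; c < mask[r].length; c++)
                if (mask[r][c] == 255) {
                    minR = Math.min(minR, r); minC = Math.min(minC, c);
                    maxR = Math.max(maxR, r); maxC = Math.max(maxC, c);
                }
        return maxR < 0 ? null : new int[] { minR, minC, maxR, maxC };
    }

    public static void main(String[] args) {
        int[][] mask = {
            { 0,   0,   0,   0 },
            { 0, 255, 255,   0 },
            { 0,   0, 255,   0 },
        };
        System.out.println(java.util.Arrays.toString(boundingBox(mask))); // [1, 1, 2, 2]
    }
}
```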

Step 7: Filter your contours - Maybe you picked up a light, maybe there are multiple targets. This isn't really vision, just simple data cleaning.
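
Step 7 really is just data cleaning, so it needs no OpenCV at all. Here's one hedged sketch: reject candidate bounding boxes that are too small or the wrong shape. The Box class is a stand-in for OpenCV's Rect, and every threshold number is made up - you tune them for your actual target:

```java
import java.util.ArrayList;
import java.util.List;

public class TargetFilter {
    // Minimal stand-in for an OpenCV bounding rect (Imgproc.boundingRect result).
    static class Box {
        final double w, h;
        Box(double w, double h) { this.w = w; this.h = h; }
        double area() { return w * h; }
        double aspect() { return w / h; }
    }

    // Keep only boxes that are big enough and roughly the expected shape.
    static List<Box> filter(List<Box> boxes, double minArea,
                            double minAspect, double maxAspect) {
        List<Box> kept = new ArrayList<>();
        for (Box b : boxes)
            if (b.area() >= minArea && b.aspect() >= minAspect && b.aspect() <= maxAspect)
                kept.add(b);
        return kept;
    }

    public static void main(String[] args) {
        List<Box> boxes = List.of(new Box(40, 20), new Box(3, 3), new Box(10, 40));
        // Expecting a wide target: area >= 100, aspect between 1.5 and 3.0
        // (made-up numbers). Only the 40x20 box survives.
        System.out.println(filter(boxes, 100, 1.5, 3.0).size()); // 1
    }
}
```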

Step 8: Interface with robot - ??? This is the hard part.
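
One common shape for Step 8: reduce the whole vision result to a couple of numbers - say, how far off-center the target is - and publish those to the robot, typically over NetworkTables in FRC. Here's just the geometry part as a pure-Java sketch, assuming a simple linear pixel-to-angle model and made-up camera numbers:

```java
public class Aim {
    // Horizontal angle (degrees) from image center to pixel column x, assuming
    // the camera's field of view maps linearly onto the image width.
    // (A linear model is an approximation, but it's fine to get started.)
    static double pixelToAngleDeg(double x, double imageWidth, double horizontalFovDeg) {
        double halfWidth = imageWidth / 2.0;
        return (x - halfWidth) / halfWidth * (horizontalFovDeg / 2.0);
    }

    public static void main(String[] args) {
        // 320px-wide image, 60-degree FOV (made-up numbers):
        System.out.println(pixelToAngleDeg(160, 320, 60)); // 0.0  (dead center)
        System.out.println(pixelToAngleDeg(320, 320, 60)); // 30.0 (right edge)
        // In FRC you'd then publish this angle over NetworkTables
        // for the robot code to read and feed into a turn controller.
    }
}
```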

To add on to this question: how do you export the video feed into Java running in Eclipse? In other words, I want to use GRIP for the image processing, but do the calculations for a crosshair, shape detection, and alignment on the live video in Eclipse, and I don't want to use the SmartDashboard or anything. If that's at all possible, that is :D

I'm pretty sure that if you use GRIP, you need to send the data through NetworkTables, which necessarily implies the use of a SmartDashboard.

Well, darn. Can you get SmartDashboard on a Mac?

Unfortunately, SmartDashboard is a component of Driver Station, a Windows-only program. If you use Boot Camp (Boot Camp Assistant User Guide for Mac - Apple Support) you can run Windows, and thus Driver Station and SmartDashboard, on your Mac. Boot Camp is significantly better than virtualization since it runs directly on the hardware, so there are no bottlenecks. That's what most of our programming guys with Macs do.

Edit:

Oops, I didn't know that. I've kept my original post in case you want to run Driver Station, but perhaps someone with more knowledge than me can tell you how to get it running on a Mac.

Other than the fact that the Driver Station can launch SmartDashboard, they aren't related at all. SmartDashboard is a Java program that can run on Macs. See here for an example: https://m.youtube.com/watch?v=qnS6O04Yjrc