15-02-2016, 19:37
derekhohos
FRC #2338 (Gear It Forward)
Team Role: Programmer
 
Re: Vision for frc 2016

Quote:
Originally Posted by DGoldDragon28
For vision code, you have two parts: one that analyzes the image and spits out data about its contours, and another that analyzes those contours and gives you a position. For the former, I suggest GRIP, a Java-based image processor with a great GUI that can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only runs on some architectures, so make sure your co-processor is supported). The general pipeline is Image Source -> Filter -> Find Contours -> Publish ContoursReport. You then have a network table at GRIP/<nameyouchoose> that contains several arrays of contour information. Read that on the RIO, perform some trigonometry, and you have the position of the target.

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and threshold for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite washed out).
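
For context, here is roughly how I picture the RIO side of that workflow. The table name GRIP/myContoursReport, the camera constants, and the largest-area selection below are assumptions for illustration, not our actual code:

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripTargetReader {
    // The table name depends on what the Publish ContoursReport step is called in GRIP.
    private final NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

    // Camera constants -- placeholder values, replace with your camera's real numbers.
    private static final double IMAGE_WIDTH_PX = 320.0;
    private static final double HORIZONTAL_FOV_DEG = 60.0;

    /** Horizontal angle (degrees) from the camera axis to the largest published
     *  contour, or null if GRIP has not published any contours. */
    public Double getAngleToTarget() {
        double[] centerX = table.getNumberArray("centerX", new double[0]);
        double[] area    = table.getNumberArray("area",    new double[0]);
        if (centerX.length == 0) {
            return null;
        }

        // Assume the biggest contour is the goal.
        int best = 0;
        for (int i = 1; i < area.length && i < centerX.length; i++) {
            if (area[i] > area[best]) {
                best = i;
            }
        }

        // Linear approximation: pixels off-center scaled by degrees per pixel.
        double offsetPx = centerX[best] - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }
}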
If I may ask, how accurately can your GRIP pipeline locate the retroreflective tape? Does your pipeline ever detect other "objects" (e.g., bright lights)? Finally, what color space are you thresholding in to filter the contours (HSL, HSV, or RGB)? My team can successfully detect the U-shaped retroreflective tape, but the pipeline sometimes picks up bright lights, which can alter the values in the ContoursReport.
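
To make the question concrete, the kind of post-filtering I am wondering about looks something like this sketch. The width/height/area keys come from GRIP's ContoursReport, but the aspect-ratio and area cutoffs are made-up numbers, not tuned values:

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class ContourFilter {
    private final NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

    // Made-up cutoffs for illustration: the 2016 U-shaped target is wider than it is
    // tall, while stray lights tend to show up as small or roughly round blobs.
    private static final double MIN_AREA_PX = 150.0;
    private static final double MIN_ASPECT_RATIO = 1.2;  // width / height
    private static final double MAX_ASPECT_RATIO = 3.5;

    /** Index of the first contour that looks like the U-shaped target,
     *  or -1 if every published contour is rejected. */
    public int findTargetIndex() {
        double[] width  = table.getNumberArray("width",  new double[0]);
        double[] height = table.getNumberArray("height", new double[0]);
        double[] area   = table.getNumberArray("area",   new double[0]);

        for (int i = 0; i < width.length && i < height.length && i < area.length; i++) {
            if (height[i] <= 0) {
                continue;  // skip degenerate contours to avoid dividing by zero
            }
            double aspect = width[i] / height[i];
            if (area[i] >= MIN_AREA_PX && aspect >= MIN_ASPECT_RATIO && aspect <= MAX_ASPECT_RATIO) {
                return i;  // plausible goal; bright lights usually fail one of these checks
            }
        }
        return -1;
    }
}

Shape and size checks like these obviously are not bulletproof on their own, which is why I am curious what threshold and filtering combination you settled on.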