  #9   04-07-2016, 09:56
MaskedBandit1
AKA: Anurag
FRC #2383 (Ninjineers)
Team Role: Programmer
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Florida
Posts: 3
Re: Vision for FRC 2016

Quote:
Originally Posted by DGoldDragon28
For vision code, you have two parts: one that analyzes the image and spits out data about its contours, and another that analyzes those contours and gives you a position. For the former, I suggest using GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only works on some architectures, so make sure your processor has a supported one). The general way to use it is Image source -> Filter -> Find Contours -> Publish Contours. You then have a network table at GRIP/<nameyouchoose> that contains several arrays with contour information. Read that on the RIO and perform some trigonometry, and you have the position of the target.

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and filter for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite washed out).


I'm using GRIP for testing right now, and I was able to find and publish contours for a static image. How do I get to the network table, exactly?
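For reference, once the contour arrays (e.g. centerX) have been read from the GRIP/<nameyouchoose> table on the RIO, the "perform some trigonometry" step from the quoted post might look like this sketch. The class name, resolution, and field-of-view value are illustrative assumptions (roughly LifeCam-class numbers), not anything GRIP or WPILib provides:

```java
public class TargetAngle {
    // Assumed camera parameters -- adjust to your actual camera and capture resolution.
    static final double IMAGE_WIDTH = 320.0;        // pixels
    static final double HORIZONTAL_FOV_DEG = 60.0;  // approximate horizontal field of view

    /**
     * Yaw angle (degrees) from the camera's optical axis to a contour,
     * given the contour's centerX pixel coordinate. In robot code, centerX
     * would come from the array GRIP publishes under GRIP/<nameyouchoose>
     * (e.g. via the NetworkTables getNumberArray call in 2016-era WPILib).
     */
    public static double yawDegrees(double centerX) {
        // Focal length in pixels, derived from the horizontal FOV.
        double focalPx = (IMAGE_WIDTH / 2.0)
                / Math.tan(Math.toRadians(HORIZONTAL_FOV_DEG / 2.0));
        return Math.toDegrees(Math.atan((centerX - IMAGE_WIDTH / 2.0) / focalPx));
    }

    public static void main(String[] args) {
        // A target centered in the image is straight ahead: 0 degrees.
        System.out.println(yawDegrees(160.0));
    }
}
```

A target at the right edge of the image comes out at half the horizontal FOV (+30 degrees with these assumed numbers), which is a quick sanity check for whatever FOV value you plug in.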