Vision for FRC 2016

Hello there, teams. This year our team is thinking of using vision with our robot, but I don’t know how to achieve this. Can you help? Thank you.

Can you be a little more specific? What camera type? What dashboard?

Did you do a Google search at all? WPILib and FRC have good documentation.

http://wpilib.screenstepslive.com/s/4485/m/24194

I did try a Google search and the WPILib docs. We are using a Microsoft LifeCam 3000, but our team wants the robot to see while it’s in autonomous mode, i.e. look at the reflective tape and aim. Thank you, and sorry if I didn’t describe it well.

For vision code, you have two parts: one that analyzes the image and spits out data about its contours, and another that analyzes those contours and gives you a position. For the former, I suggest GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only works on some architectures, so make sure your processor is supported). The general pipeline is image source -> filter -> find contours -> publish contours. You then have a NetworkTable at GRIP/<nameyouchoose> that contains several arrays with contour information. Read that on the RIO and perform some trigonometry, and you have the position of the target.
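As a rough sketch of that last trigonometry step: the 640-pixel image width and ~60° horizontal field of view below are assumptions about the LifeCam (measure your own camera), and `centerX` would come from one of the contour arrays GRIP publishes.

```java
// Sketch: convert a contour's centre x-coordinate (pixels) into a yaw angle
// from the camera centreline to the target. All constants are assumptions.
public class TargetAngle {
    static final double IMAGE_WIDTH = 640.0;        // LifeCam at 640x480
    static final double HORIZONTAL_FOV_DEG = 60.0;  // approximate; measure yours

    // centerX would be read from the GRIP contour report on the RIO, e.g.
    //   double[] xs = table.getNumberArray("centerX", new double[0]);
    public static double yawToTarget(double centerX) {
        double offsetPixels = centerX - IMAGE_WIDTH / 2.0;
        // Linear pixels-to-degrees approximation, fine for small angles.
        return offsetPixels * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH);
    }

    public static void main(String[] args) {
        System.out.println(yawToTarget(320.0)); // dead centre -> 0.0
        System.out.println(yawToTarget(480.0)); // right of centre -> 15.0
    }
}
```

Feed that angle into a turn-to-angle loop and you can aim in autonomous.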

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and filter for that color. You may have to lower your camera’s exposure (our LifeCam’s default was quite washed out).

If I may ask, how accurately can your GRIP pipeline detect the retroreflective tape? Does your pipeline pick up other “objects” (e.g. bright lights)? Finally, what threshold are you using to filter the contours (HSL, HSV, RGB)? My team can successfully detect the U-shaped retroreflective tape, but the pipeline sometimes picks up bright lights, which can alter the values in the ContoursReport.

I use a for loop to throw out all but the largest-area item in the array. That is usually the target if you are pointing the right way. A Filter Contours operation in GRIP may also give you what you want.

EDIT:

public boolean isContours() {
	// Read the contour areas published by GRIP; fall back to an empty array.
	// (The original discarded the return value and tested length > 1,
	// which missed the single-contour case.)
	greenAreasArray = Robot.table.getNumberArray("area", new double[0]);
	return greenAreasArray.length > 0;
}

public void findMaxArea() {
	if (isContours()) {
		maxArea = 0; // reset so a stale value from the last frame can't win
		for (int counter = 0; counter < greenAreasArray.length; counter++) {
			if (greenAreasArray[counter] > maxArea) {
				maxArea = greenAreasArray[counter];
				arrayNum = counter; // index of the largest contour
			}
		}
		System.out.println(maxArea);
	}
}

I’ve found that filtering by solidity in combination with a reasonable minimum area picks up the targets really well. The U-shaped targets have a solidity of about 1/3, regardless of how far away they are, while random blobs usually have a solidity closer to 1.
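A minimal sketch of that filter, assuming you read parallel area and solidity arrays from the contour report (the exact array names depend on your GRIP publish step); the thresholds are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: keep only contours whose area is reasonable and whose solidity is
// near the U-target's ~1/3, rejecting near-solid blobs like bright lights.
public class SolidityFilter {
    // Returns the indices of contours that look like the U target.
    public static List<Integer> filter(double[] areas, double[] solidities,
                                       double minArea,
                                       double minSolidity, double maxSolidity) {
        List<Integer> keep = new ArrayList<>();
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] >= minArea
                    && solidities[i] >= minSolidity
                    && solidities[i] <= maxSolidity) {
                keep.add(i);
            }
        }
        return keep;
    }

    public static void main(String[] args) {
        double[] areas = {1200, 50, 900};
        double[] sol = {0.35, 0.34, 0.95}; // third blob: a bright light
        System.out.println(filter(areas, sol, 100, 0.25, 0.45)); // prints [0]
    }
}
```

The second contour is rejected for being too small and the third for being too solid, leaving only the plausible target.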

Our team found success through looking at four values:

  1. The area as compared to convex area
  2. The perimeter as compared to the convex perimeter
  3. The plenimeter (perimeter squared over area)
  4. The convex area compared to bounding box area

You can calculate what the ideal values should be, or they can be found by looking at our code at https://github.com/FRC1458/turtleshell/blob/master/TurtleBot/src/com/team1458/turtleshell/vision/ScoreAnalyser.java. The first three help ensure that the right shape is being recognised, and the final parameter works to make sure we are looking at roughly a rectangle, so the correct target will be identified.
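A sketch of this style of scoring follows. The ideal ratios below are illustrative placeholders, not the values from the linked ScoreAnalyser; only the area-to-convex-area ratio of ~1/3 comes from the discussion above.

```java
// Sketch: score a contour by how close four shape ratios are to ideal values.
// The s2/s3 ideals are placeholders; compute or look up your own.
public class ShapeScore {
    // 1.0 at a perfect match, falling toward 0 as the ratio deviates.
    static double ratioScore(double measured, double ideal) {
        return Math.max(0.0, 1.0 - Math.abs(measured - ideal) / ideal);
    }

    public static double score(double area, double convexArea,
                               double perimeter, double convexPerimeter,
                               double boundingBoxArea) {
        double s1 = ratioScore(area / convexArea, 1.0 / 3.0);       // U fills ~1/3 of hull
        double s2 = ratioScore(perimeter / convexPerimeter, 2.0);   // placeholder ideal
        double s3 = ratioScore(perimeter * perimeter / area, 80.0); // "plenimeter", placeholder
        double s4 = ratioScore(convexArea / boundingBoxArea, 1.0);  // hull fills bounding box
        return (s1 + s2 + s3 + s4) / 4.0; // average; pick the highest-scoring contour
    }

    public static void main(String[] args) {
        // A contour whose ratios exactly match the placeholder ideals scores ~1.0.
        double p = Math.sqrt(300 * 80);
        System.out.println(score(300, 900, p, p / 2, 900));
    }
}
```

Averaging several independent shape ratios makes the score robust: a bright light might match one ratio by luck, but rarely all four.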

I’m using GRIP for testing right now; I was able to find and publish contours for a static image. How do I get to the NetworkTable exactly?

Wait, I thought GRIP only took an IP camera, not the LifeCam.

Also, instructions for accessing NetworkTables are on ScreenSteps:
https://wpilib.screenstepslive.com/s/4485/m/50711/l/479908-reading-array-values-published-by-networktables

So can I use the Microsoft LifeCam 3000 for vision, or will I have to buy an Axis camera?

Also, would IR light from outdoors affect a GRIP algorithm running on an IR or Microsoft camera, or not?