#1
Vision for FRC 2016
Hello there, teams. This year our team is thinking of using vision on our robot, but I don't know how to achieve this. Can you help? Thank you.
#2
Re: Vision for FRC 2016
Can you be a little more specific? What camera type? What dashboard?
Did you do a Google search at all? WPILib and FRC have good documentation: http://wpilib.screenstepslive.com/s/4485/m/24194
#3
I did try a Google search and the WPILib docs. We are using a Microsoft LifeCam 3000, but our team wants the robot to see while it's in autonomous mode, i.e. look at the reflective tape and aim. Thank you, and sorry if I didn't describe it well.
#4
Re: Vision for FRC 2016
For vision code, you have two parts: one that analyzes the image and spits out data about the contours it finds, and another that analyzes those contours and gives you a position. For the former, I suggest GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only runs on some architectures, so make sure your processor is supported). The general pipeline is Image source -> Filter -> Find Contours -> Publish Contours. You then have a network table at GRIP/<nameyouchoose> that contains several arrays of contour information. Read that on the RIO, perform some trigonometry, and you have the position of the target.
NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and filter for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite washed out).
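The "trigonometry" step above can be sketched in plain Java: turn a contour's center x into a yaw angle to turn toward. The 320-pixel image width and 60-degree horizontal field of view below are illustrative assumptions, not measured LifeCam values; calibrate for your own camera.

```java
// Sketch: convert a contour's center x (from the GRIP-published arrays)
// into a yaw angle toward the target. IMAGE_WIDTH and HORIZONTAL_FOV_DEG
// are assumed values for illustration; measure your own camera's.
public class TargetAngle {
    static final double IMAGE_WIDTH = 320.0;        // pixels (assumed)
    static final double HORIZONTAL_FOV_DEG = 60.0;  // degrees (assumed)

    // Positive result means the target is to the right of image center.
    static double yawToTargetDeg(double centerX) {
        double offsetPixels = centerX - IMAGE_WIDTH / 2.0;
        // Approximate degrees-per-pixel by spreading the FOV over the width.
        return offsetPixels * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH);
    }

    public static void main(String[] args) {
        System.out.println(yawToTargetDeg(240.0)); // 15.0: turn right 15 degrees
    }
}
```

This linear pixels-to-degrees mapping is a small-angle approximation; it is plenty accurate for pointing a drivetrain at a goal near the image center.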
#5
Re: Vision for FRC 2016
Quote:
#6
Re: Vision for FRC 2016
I use a for loop to pick out the largest-area item in the array. That is usually the target if you are pointing the right way. A Filter Contours pipe in GRIP may also give you what you want.
EDIT: Code:
public boolean isContours() {
    // getNumberArray returns the stored array; the second argument is only
    // the default used when the key is missing, so capture the return value.
    greenAreasArray = Robot.table.getNumberArray("area", new double[0]);
    // Any contour at all counts (the original "> 1" would miss a lone target).
    return greenAreasArray.length > 0;
}

public void findMaxArea() {
    if (isContours()) {
        maxArea = 0;
        arrayNum = -1;
        for (int counter = 0; counter < greenAreasArray.length; counter++) {
            if (greenAreasArray[counter] > maxArea) {
                maxArea = greenAreasArray[counter];
                arrayNum = counter; // index of the largest contour
            }
        }
        System.out.println(maxArea);
    }
}
#7
Re: Vision for FRC 2016
Quote:
#8
Re: Vision for FRC 2016
Our team found success by looking at four values:
#9
Re: Vision for FRC 2016
Quote:
I'm using GRIP for testing right now; I was able to find and publish contours for a static image. How do I get to the network table exactly?
#10
Re: Vision for FRC 2016
Wait, I thought GRIP only took an IP camera, not the LifeCam.
Also, instructions for accessing the NetworkTables are on ScreenSteps: https://wpilib.screenstepslive.com/s...-networktables
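One detail worth knowing once you reach the table: GRIP publishes parallel arrays, so area[i], centerX[i], and centerY[i] all describe the same contour i. On the robot the fetch itself goes through WPILib's NetworkTable class (per the ScreenSteps link); the sketch below takes the arrays as plain inputs so the selection logic stays self-contained, and the class name is just an illustration.

```java
// Sketch: pick the center of the largest contour from GRIP's parallel
// arrays. "area", "centerX", "centerY" are the keys a GRIP contours
// report publishes; here they arrive as plain arrays for clarity.
public class GripTarget {
    // Returns {centerX, centerY} of the largest contour, or null if
    // there are no contours in view.
    static double[] largestContourCenter(double[] area, double[] centerX, double[] centerY) {
        int best = -1;
        double bestArea = 0.0;
        for (int i = 0; i < area.length; i++) {
            if (area[i] > bestArea) {
                bestArea = area[i];
                best = i;
            }
        }
        return best < 0 ? null : new double[] { centerX[best], centerY[best] };
    }

    public static void main(String[] args) {
        double[] center = largestContourCenter(
                new double[] { 10.0, 50.0, 20.0 },    // areas
                new double[] { 100.0, 200.0, 300.0 }, // center x values
                new double[] { 5.0, 6.0, 7.0 });      // center y values
        System.out.println(center[0] + ", " + center[1]); // 200.0, 6.0
    }
}
```

Keeping the indices paired like this matters: sorting one array on its own (e.g. just the areas) silently breaks the correspondence with the center coordinates.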
#11
Re: Vision for FRC 2016
Quote:
Also, would IR light from outdoors affect a GRIP pipeline running on an IR or Microsoft camera, or not?