Chief Delphi > Technical > Programming > Java
#1
Unread 11-02-2016, 14:34
Team2895's Avatar
Team2895 Team2895 is offline
Registered User
FRC #2895 (Blazenbotz)
Team Role: Mentor
 
Join Date: Feb 2015
Rookie Year: 2009
Location: Far Rockaway, New York
Posts: 10
Team2895 is an unknown quantity at this point
Vision for FRC 2016

Hello there, teams. This year our team is thinking of using vision with our robot, but I don't know how to achieve this. Can you guys help? Thank you.
#2
Unread 11-02-2016, 20:17
Quantum Byte's Avatar
Quantum Byte Quantum Byte is offline
Lead Programmer
AKA: Domenic
FRC #4776 (S.C.O.T.S. Bots)
Team Role: Programmer
 
Join Date: May 2012
Rookie Year: 2011
Location: Hartland, Michigan
Posts: 16
Quantum Byte is an unknown quantity at this point
Re: Vision for FRC 2016

Can you be a little more specific? What camera type? What dashboard?

Did you do a Google search at all? WPILib and FRC have good documentation.

http://wpilib.screenstepslive.com/s/4485/m/24194
__________________
Hey there! I wrote a multi-robot, multi-auton, easy-to-use library for WPILib Java!
Check it out!
#3
Unread 13-02-2016, 08:24
Team2895's Avatar
Team2895 Team2895 is offline
Registered User
FRC #2895 (Blazenbotz)
Team Role: Mentor
 
Join Date: Feb 2015
Rookie Year: 2009
Location: Far Rockaway, New York
Posts: 10
Team2895 is an unknown quantity at this point
I did try a Google search and the WPILib docs. We are using a Microsoft LifeCam 3000, but our team wants the robot to see while it's in autonomous mode: look at the reflective tape and aim. Thank you, and sorry if I didn't describe it well.
#4
Unread 14-02-2016, 14:39
DGoldDragon28's Avatar
DGoldDragon28 DGoldDragon28 is offline
Programmer
FRC #1719 (The Umbrella Corporation)
Team Role: Programmer
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Baltimore, MD
Posts: 10
DGoldDragon28 is an unknown quantity at this point
Re: Vision for FRC 2016

For vision code, you have two parts: one that analyzes the image and spits out data about the contours it finds, and another that analyzes those contours and gives you a position. For the former, I suggest using GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only works on some architectures, so make sure your processor has a supported one). The general way to use it is Image Source -> Filter -> Find Contours -> Publish Contours. You then have a network table at GRIP/<nameyouchoose> that contains several arrays with contour information. Read that on the RIO and perform some trigonometry, and you have the position of the target.

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and sense for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite whitewashed).
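The "perform some trigonometry" step can be sketched like this. This is a generic example, not 1719's code: the 320-pixel width and 60-degree horizontal field of view are assumed, LifeCam-ish numbers you would replace with your own camera's specs.

Code:
```java
// Estimate the horizontal angle to a target from its contour centerX.
// Assumed values: 320x240 image, ~60 degree horizontal FOV.
public class TargetAngle {
    static final double IMAGE_WIDTH = 320.0;
    static final double HORIZONTAL_FOV_DEG = 60.0;

    // Focal length in pixels, derived from the FOV: f = (w/2) / tan(FOV/2)
    static final double FOCAL_PX =
        (IMAGE_WIDTH / 2.0) / Math.tan(Math.toRadians(HORIZONTAL_FOV_DEG / 2.0));

    /** Degrees the robot must turn to center the target (sign = direction). */
    public static double angleToTarget(double centerX) {
        double offsetPx = centerX - IMAGE_WIDTH / 2.0; // pixels from image center
        return Math.toDegrees(Math.atan(offsetPx / FOCAL_PX));
    }

    public static void main(String[] args) {
        System.out.println(angleToTarget(160.0)); // centered target -> 0 degrees
        System.out.println(angleToTarget(320.0)); // right edge -> ~30 degrees
    }
}
```

Feed it the centerX value GRIP publishes for your chosen contour and the result is the turn angle to aim with.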
#5
Unread 15-02-2016, 19:37
derekhohos's Avatar
derekhohos derekhohos is offline
Registered User
FRC #2338 (Gear It Forward)
Team Role: Programmer
 
Join Date: Feb 2016
Rookie Year: 2014
Location: Oswego, IL
Posts: 2
derekhohos is an unknown quantity at this point
Re: Vision for FRC 2016

Quote:
Originally Posted by DGoldDragon28 View Post
For vision code, you have two parts: one that analyzes the image and spits out data about the contours it finds, and another that analyzes those contours and gives you a position. For the former, I suggest using GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only works on some architectures, so make sure your processor has a supported one). The general way to use it is Image Source -> Filter -> Find Contours -> Publish Contours. You then have a network table at GRIP/<nameyouchoose> that contains several arrays with contour information. Read that on the RIO and perform some trigonometry, and you have the position of the target.

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and sense for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite whitewashed).
If I may ask, how accurately can your GRIP pipeline pick out the retroreflective tape? Does your pipeline detect other "objects" (i.e. bright lights)? Finally, what threshold are you using to filter the contours (HSL, HSV, RGB)? My team can successfully detect the U-shaped retroreflective tape, but the pipeline sometimes picks up bright lights, which can alter the values in the ContoursReport.
#6
Unread 16-02-2016, 14:54
fireXtract fireXtract is offline
MegaHertz_Lux
FRC #2847 (Mega Hertz)
Team Role: Programmer
 
Join Date: Jan 2013
Rookie Year: 2012
Location: fmt
Posts: 42
fireXtract is an unknown quantity at this point
Re: Vision for FRC 2016

I use a for loop to sort out all but the largest-area item in the array. This is usually the target if you are pointing the right way. A Filter Contours pipe in GRIP may also give you what you want.

EDIT:
Code:
public boolean isContours() {
	// getNumberArray returns the stored array; the second argument is only
	// a default. (The original ignored the return value, so
	// greenAreasArray was never actually updated.)
	greenAreasArray = Robot.table.getNumberArray("area", new double[0]);
	return greenAreasArray.length > 0; // even a single contour counts
}

public void findMaxArea() {
	if (isContours()) {
		maxArea = 0;   // reset so a stale value from the last frame can't win
		arrayNum = -1;
		for (int counter = 0; counter < greenAreasArray.length; counter++) {
			if (greenAreasArray[counter] > maxArea) {
				maxArea = greenAreasArray[counter];
				arrayNum = counter;
			}
		}
		System.out.println(maxArea);
	}
}
#7
Unread 16-02-2016, 17:29
ThomasClark's Avatar
ThomasClark ThomasClark is offline
Registered User
FRC #0237
 
Join Date: Dec 2012
Location: Watertown, CT
Posts: 146
ThomasClark has much to be proud of
Re: Vision for FRC 2016

Quote:
Originally Posted by derekhohos View Post
If I may ask, how accurate can your GRIP pipeline analyze the retroreflective tape? Does your pipeline detect other "objects" (i.e. bright lights)? Finally, what threshold are you using to filter the contours (HSL, HSV, RGB)? My team can successfully detect the U-shaped retroreflective tape, but the pipeline sometimes picks up bright lights and could alter the values of the ContoursReport.
I've found that filtering by solidity in combination with a reasonable minimum area picks up the targets really well. The U-shaped targets have a solidity of about 1/3, regardless of how far away they are, while random blobs usually have a solidity closer to 1.
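For anyone wanting to turn that into code: a minimal sketch of a solidity check, assuming your pipeline reports both the contour area and its convex hull area (GRIP's Filter Contours pipe exposes solidity directly, so this is the same idea by hand). The band around 1/3 and the minimum area are illustrative values, not 237's tuned numbers.

Code:
```java
// Filter contours by solidity (contour area / convex hull area).
// The U-shaped 2016 target has solidity near 1/3 at any distance,
// while solid blobs like bright lights sit near 1.0.
public class SolidityFilter {
    static final double MIN_SOLIDITY = 0.20; // assumed band around 1/3
    static final double MAX_SOLIDITY = 0.45;
    static final double MIN_AREA = 100.0;    // assumed minimum, in pixels^2

    public static boolean looksLikeTarget(double area, double convexArea) {
        if (convexArea <= 0 || area < MIN_AREA) return false;
        double solidity = area / convexArea;
        return solidity >= MIN_SOLIDITY && solidity <= MAX_SOLIDITY;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeTarget(400.0, 1200.0));  // U-shape: true
        System.out.println(looksLikeTarget(1100.0, 1200.0)); // solid blob: false
    }
}
```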
__________________
GRIP (Graphically Represented Image Processing) - rapidly develop computer vision algorithms for FRC
#8
Unread 16-02-2016, 18:02
mehnadnerd mehnadnerd is offline
Registered User
AKA: Brendan
FRC #1458 (Red Tie Robotics)
Team Role: Programmer
 
Join Date: Feb 2016
Rookie Year: 2009
Location: Danville
Posts: 5
mehnadnerd is an unknown quantity at this point
Re: Vision for FRC 2016

Our team found success by looking at four values:
  1. The area compared to the convex area
  2. The perimeter compared to the convex perimeter
  3. The "plenimeter" (perimeter squared over area)
  4. The convex area compared to the bounding-box area
You can calculate what the ideal values should be, or they can be found by looking at our code at https://github.com/FRC1458/turtleshe...eAnalyser.java. The first three help ensure that the right shape is being recognised, and the final parameter makes sure we are looking at something roughly rectangular, so the correct target will be identified.
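A rough sketch of what checking those four values might look like. The thresholds below are placeholders made up for illustration; 1458's real numbers are in their linked code.

Code:
```java
// Score a candidate contour against the four shape ratios described above.
// All thresholds here are assumed, illustrative values.
public class ShapeScore {
    /** area / convexArea: the U target fills roughly a third of its hull. */
    public static boolean areaRatioOk(double area, double convexArea) {
        double r = area / convexArea;
        return r > 0.2 && r < 0.45;
    }

    /** perimeter / convexPerimeter: the U outline is much longer than its hull's. */
    public static boolean perimeterRatioOk(double perimeter, double convexPerimeter) {
        return perimeter / convexPerimeter > 1.5;
    }

    /** perimeter^2 / area: scale-invariant; a square scores 16, a U much higher. */
    public static boolean plenimeterOk(double perimeter, double area) {
        double p = perimeter * perimeter / area;
        return p > 40.0 && p < 120.0;
    }

    /** convexArea / boundingBoxArea: a roughly rectangular hull is near 1. */
    public static boolean rectangularityOk(double convexArea, double boxArea) {
        return convexArea / boxArea > 0.8;
    }

    public static void main(String[] args) {
        // a plausible U-shaped candidate passes all four checks
        System.out.println(areaRatioOk(400, 1200) && perimeterRatioOk(200, 100)
            && plenimeterOk(200, 400) && rectangularityOk(1200, 1400));
    }
}
```

A contour only counts as the target when all four checks pass, which is what makes the combination robust against stray bright blobs.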
#9
Unread 04-07-2016, 09:56
MaskedBandit1 MaskedBandit1 is offline
MaskedBandit1
AKA: Anurag
FRC #2383 (Ninjineers)
Team Role: Programmer
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Florida
Posts: 4
MaskedBandit1 is an unknown quantity at this point
Re: Vision for FRC 2016

Quote:
Originally Posted by DGoldDragon28 View Post
For vision code, you have two parts: one that analyzes the image and spits out data about the contours it finds, and another that analyzes those contours and gives you a position. For the former, I suggest using GRIP, a Java-based image processor with a great GUI, which can be found on GitHub. It can be run on the RIO or, as we did, on a co-processor (WARNING: GRIP only works on some architectures, so make sure your processor has a supported one). The general way to use it is Image Source -> Filter -> Find Contours -> Publish Contours. You then have a network table at GRIP/<nameyouchoose> that contains several arrays with contour information. Read that on the RIO and perform some trigonometry, and you have the position of the target.

NOTE: for sensing retroreflective tape, you should ring your camera with LEDs and sense for that color. You may have to adjust your camera's exposure (our LifeCam's default was quite whitewashed).


I'm using GRIP for testing right now; I was able to find and publish contours for a static image. How exactly do I get to the network table?
#10
Unread 06-07-2016, 17:06
Ouroboroz Ouroboroz is offline
Registered User
AKA: Kevin
FRC #2554 (The Warhawks)
Team Role: Programmer
 
Join Date: Apr 2016
Rookie Year: 2014
Location: Edison NJ
Posts: 8
Ouroboroz is an unknown quantity at this point
Re: Vision for FRC 2016

Wait, I thought GRIP only took the IP camera, not the LifeCam.

Also, instructions for accessing NetworkTables are on ScreenSteps.
https://wpilib.screenstepslive.com/s...-networktables
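To make the data layout concrete: GRIP publishes each report as parallel arrays under the table name you chose in the Publish ContoursReport step (e.g. GRIP/myContoursReport, with keys like "area", "centerX", "width"). On the robot you'd read those through WPILib's NetworkTable class; in this sketch a plain Map stands in for the table so it runs without robot hardware, and "myContoursReport" is just a made-up name.

Code:
```java
import java.util.HashMap;
import java.util.Map;

// Shows the shape of a GRIP ContoursReport: parallel arrays, where index i
// in every array describes the same contour. A Map stands in for the
// NetworkTable here so this runs off-robot.
public class GripTableShape {
    /** Index of the widest contour, or -1 if none were published. */
    public static int widestContour(Map<String, double[]> table) {
        double[] widths = table.getOrDefault("width", new double[0]);
        int best = -1;
        double bestWidth = -1;
        for (int i = 0; i < widths.length; i++) {
            if (widths[i] > bestWidth) { bestWidth = widths[i]; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> report = new HashMap<>();
        report.put("centerX", new double[]{100.0, 250.0}); // two contours seen
        report.put("width",   new double[]{30.0, 80.0});
        int i = widestContour(report);
        System.out.println(report.get("centerX")[i]); // centerX of the widest one
    }
}
```

On the RIO, the Map lookups would become table reads against the table returned for your GRIP report; the index-across-parallel-arrays pattern stays the same.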
#11
Unread 09-07-2016, 14:10
MaskedBandit1 MaskedBandit1 is offline
MaskedBandit1
AKA: Anurag
FRC #2383 (Ninjineers)
Team Role: Programmer
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Florida
Posts: 4
MaskedBandit1 is an unknown quantity at this point
Re: Vision for FRC 2016

Quote:
Originally Posted by Ouroboroz View Post
Wait, I thought GRIP only took the IP camera, not the LifeCam.

Also, instructions for accessing NetworkTables are on ScreenSteps.
https://wpilib.screenstepslive.com/s...-networktables
So can I use the Microsoft LifeCam 3000 for vision, or will I have to buy an Axis camera?

Also, would IR light from outdoors affect a GRIP algorithm running on an IR or Microsoft camera?
The Chief Delphi Forums are sponsored by Innovation First International, Inc.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi