Go Back   Chief Delphi > Technical > Programming > Java

 
#1
22-01-2017, 22:43
Lesafian
Registered User
AKA: Jeremy Styma
FRC #6077 (Wiking Kujon)
Team Role: Programmer
 
Join Date: Feb 2016
Rookie Year: 2016
Location: Posen, Michigan
Posts: 25
Will this vision processing code work? / Confusion about NetworkTables

Hi, everyone. I am currently the only programmer for my team, and we have no programming mentors, so bear with me. This is our second year, and we're making an attempt at vision processing. We're going to use GRIP for our vision processing and a LifeCam HD-3000. I've been very confused about how to use the values from NetworkTables, so I decided to tinker around and just start programming. I think I may have come up with something that makes logical sense, and I'm asking for your input.

Code:
package org.usfirst.frc.team6077.robot;

import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.VictorSP;
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class Robot extends SampleRobot {
	NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");
	VictorSP leftFront, leftBack, rightFront, rightBack, shooter, shooterPaddle1, shooterPaddle2;
	ADXRS450_Gyro gyro = new ADXRS450_Gyro();
	RobotDrive drive;
	Joystick logitech = new Joystick(0);
	SendableChooser<String> chooser = new SendableChooser<>();
	static double kp = 0.03;
	static final String highGoal = "High Goal";
	static final String gear = "Gear";

	// Vision Processing
	double[] centerX;
	double[] centerY;
	public String shotReady = "Shot Ready";
	int gripTolerance = 3;
	int camWidth = 640;
	int camHeight = 480;
	int imageCenter = (camWidth / 2) + (camHeight / 2);
	boolean isShotReady;

	@Override
	public void robotInit() {
		shooter = new VictorSP(0);
		leftFront = new VictorSP(1);
		leftBack = new VictorSP(2);
		rightFront = new VictorSP(3);
		rightBack = new VictorSP(4);
		drive = new RobotDrive(leftFront, leftBack, rightFront, rightBack);
		drive.setExpiration(0.1);
		gyro.calibrate();
		SmartDashboard.putData("Auto modes", chooser);
		this.updateDashboard();

	}

	@Override
	public void autonomous() {
		String autoSelected = chooser.getSelected();
		System.out.println("Auto selected: " + autoSelected);

		switch (autoSelected) {
		case (gear):
			// Put gear onto the peg, and cross the line
			break;
		case (highGoal):
			// Cross the line, shoot balls into the High Goal
			break;
		default:
			// Drive Straight
			break;
		}
	}

	@Override
	public void operatorControl() {
		drive.setSafetyEnabled(false);
		gyro.reset();
		// drive.setMaxOutput(0.5);
		while (isOperatorControl() && isEnabled()) {
			SmartDashboard.putNumber("Gyro Angle", gyro.getAngle());
			drive.mecanumDrive_Cartesian(logitech.getY(), logitech.getX(), logitech.getTwist(), 0);
			if (logitech.getRawButton(1)) {
				shooter.set(0.55);
			}
			Timer.delay(0.005);
			
			if (logitech.getRawButton(3)) {
				this.lineUp();
			}
		}
	}

	public void lineUp() {

		this.updateDashboard();

		if (!isShotReady) {
			if (centerX[0] < imageCenter) {
				drive.drive(0.3, 0.8);
			} else if (centerX[0] > imageCenter) {
				drive.drive(0.3, -0.8);
			}

			if (centerY[0] < imageCenter) {
				drive.drive(1, 0);
			} else if (centerY[0] > imageCenter) {
				drive.drive(-1, 0);
			}
		} else {
			SmartDashboard.putString(shotReady, "true");
		}

	}

	public void updateDashboard() {
		double[] gripValuesX = new double[0];
		double[] gripValuesY = new double[0];
		centerX = table.getNumberArray("centerX", gripValuesX);
		centerY = table.getNumberArray("centerY", gripValuesY);
		SmartDashboard.putString("centerX", java.util.Arrays.toString(centerX));
		SmartDashboard.putString("centerY", java.util.Arrays.toString(centerY));
		try {
			if (centerX[0] + gripTolerance > imageCenter && centerX[0] - gripTolerance < imageCenter
					&& centerY[0] + gripTolerance > imageCenter && centerY[0] - gripTolerance < imageCenter) {
				isShotReady = true;
			} else {
				isShotReady = false;
			}
		} catch (Exception e) {
			SmartDashboard.putString(shotReady, "Error: No sight");
		}
	}

	@Override
	public void test() {
	}
}
My GRIP pipeline uses findContours to draw a white outline around the green rectangle (assuming that we're using retroreflective tape).

The way I assume it works: centerX is the center x-coordinate of that contour, and centerY is its center y-coordinate. Therefore I need to match up (with a 3-pixel margin of error) the center of the box with the center of my camera image.

This is just my assumption; clarification would be amazing, because I'm rather confused right now.

Thank you!
#2
23-01-2017, 12:25
wlogeais
Registered User
FRC #2177 (The Robettes)
Team Role: Mentor
 
Join Date: Feb 2016
Rookie Year: 2011
Location: Minnesota
Posts: 18
Re: Will this vision processing code work? / Confusion about NetworkTables

Quote:
Originally Posted by Lesafian View Post
Hi, everyone. I am currently the only programmer for my team, and we have no programming mentors, so bear with me. ... I decided to tinker around and just start programming.
Sorry you have no programming mentor, but your attitude so far is a great asset!
I'm going to break up my response into general tips and examples, and then into a mini code review.

For general vision tips: last year it was a challenge to run GRIP on the roboRIO unless you only needed one camera, and only for a brief time (in autonomous). That is why the Raspberry Pi and Kangaroo coprocessor options were very popular.

This year, look for the "GRIP – Generate Code" instructions in the screensteps-2017 documentation. Read (and use) the whole example, then look specifically for the VisionThread() "->" segment of code; copy it and modify that lambda function. This lets you build and test your pipeline on your programming computer and then take the generated pipeline code into your team's Java project. Among other benefits, this allows for lower refresh rates, and it works if you want to use two cameras (or have other reasons for two pipelines, via VisionRunner's runOnce() rather than VisionThread's start()).
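For reference, the generated-code route described above looks roughly like this. This is a sketch only, modeled on the screensteps-2017 VisionThread pattern: GripPipeline is the class GRIP generates for you ("Generate Code"), and the field names imgLock and centerX are placeholders you would declare in your Robot class.

```java
// Sketch, not a drop-in: assumes a GRIP-generated GripPipeline class
// with a filterContoursOutput() method, plus Robot fields
// "Object imgLock" and "double centerX".
UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(640, 480);

VisionThread visionThread = new VisionThread(camera, new GripPipeline(),
		pipeline -> {
			if (!pipeline.filterContoursOutput().isEmpty()) {
				// Bounding box of the first contour the pipeline kept
				Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
				synchronized (imgLock) {
					centerX = r.x + (r.width / 2);
				}
			}
		});
visionThread.start();
```

The lambda runs on its own thread each time the pipeline processes a frame, which is why the shared centerX field is read and written under a lock.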

That said, if you already have electrical/IP plans for a Kangaroo, that still works, and that approach still requires the NetworkTables code you have.

Lastly, also do a Google search for "frc vision hot or cold". After the Chief Delphi and screensteps-2017 results you should see screensteps-2017, which contains a great section on processing the (GRIP-like) contours to evaluate which contour is the best target, and other code relating to what to do with good/bad target values.

~~~~ part 2. ~~~~ regarding your test-code.
A good start.

Your imageCenter won't be useful as calculated: adding half the width to half the height produces a number that is neither an x- nor a y-coordinate. Logically you need to deal with either area, x ratios/values, or y ratios/values (or, more advanced, the x/y ratio as in target evaluation).

Based on that, the contour's x-center (versus the image's x-center) works well for robot-drive rotation, while x-width or y-height can be used for driving closer or further. Prioritize one, then the other; you might not need both.

So, in the spirit of 2014's is-the-goal-hot-or-not, I think you want to change your updateDashboard() code to focus on // find best target...,
then have the lineUp() code focus on // need to rotate..., else // close enough..., else // ...ready for shot...
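To make that split concrete, here is a minimal, hypothetical sketch of the decision logic lineUp() could delegate to. The class name, the thresholds (the OP's 3-pixel tolerance, an assumed "close enough" target width), and the return strings are all made up for illustration:

```java
// Hypothetical helper: turn the best contour's center-x and width into a
// drive decision. All names and thresholds here are illustrative only.
public class AimHelper {
	static final int CAM_WIDTH = 640;          // camera image width in pixels
	static final double X_TOLERANCE_PX = 3;    // OP's 3-pixel margin of error
	static final double CLOSE_WIDTH_PX = 80;   // assumed "close enough" width

	// Returns which action the drive code should take next.
	public static String decide(double centerX, double targetWidth) {
		double xError = centerX - CAM_WIDTH / 2.0;
		if (xError < -X_TOLERANCE_PX) {
			return "spin left";        // target is left of image center
		} else if (xError > X_TOLERANCE_PX) {
			return "spin right";       // target is right of image center
		} else if (targetWidth < CLOSE_WIDTH_PX) {
			return "forward";          // aimed, but still too far away
		}
		return "shot ready";           // aimed and close enough
	}
}
```

The point is the structure: rotate until the x-error is inside tolerance, then drive until the target is wide enough, and only then report ready.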

Lastly, you may find this tip helpful for troubleshooting. This...

} else {
	SmartDashboard.putString(shotReady, "true");

...and...

SmartDashboard.putString(shotReady, "Error: No sight");

...is something I recommend against. Rather, I recommend using a string literal as the key and a variable as the value:

String direction = "NONE";
if (...) { ...; direction = "forward"; }
else if (...) { ...; direction = "spin right"; }
SmartDashboard.putString("direction", direction);

And use a separate key for each method/purpose, such as in your updateDashboard():

SmartDashboard.putNumber("contour count", centerX.length);
// this way you get a hint of when you are dealing with centerX[0] but centerX[1] might be the better target
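Building on that contour-count idea, a tiny hypothetical helper for picking the "best" contour index might look like this. It simply uses width as the stand-in quality metric; the ScreenSteps target-evaluation material covers better scoring:

```java
// Hypothetical: pick the index of the "best" contour, here simply the
// widest one. Returns -1 when GRIP reported no contours at all, so the
// caller can skip aiming instead of crashing on an empty array.
public class TargetPicker {
	public static int bestIndex(double[] widths) {
		int best = -1;
		double bestWidth = 0;
		for (int i = 0; i < widths.length; i++) {
			if (widths[i] > bestWidth) {
				bestWidth = widths[i];
				best = i;
			}
		}
		return best;
	}
}
```

Checking for -1 before indexing centerX/centerY would also replace the try/catch currently guarding centerX[0].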
#3
23-01-2017, 12:30
wlogeais
Registered User
FRC #2177 (The Robettes)
Team Role: Mentor
 
Join Date: Feb 2016
Rookie Year: 2011
Location: Minnesota
Posts: 18
Re: Will this vision processing code work? / Confusion about NetworkTables

Quote:
Originally Posted by wlogeais View Post
And after screensteps-2017 you should see screensteps-2017 which contains a great section on processing the (grip-like-) contours to evaluate which contour is the best target – and other code relating to what do with good/bad target values.
I meant that you should see screensteps-2014, which contains...