Using GRIP code with C++

Team 2002 is having a lot of trouble with vision processing this year. We tried to use RoboRealm like last year, but it wouldn’t connect to the RoboRIO. Now we’re trying to figure out GRIP, but the only example on FIRST’s website is for Java, so we don’t know how to use the generated code with our robot. Also, looking around on the web, it seems like we should run GRIP on our computer to save the RIO some processing power, but we aren’t sure what that would take either. Any help at all would be much appreciated, we’re completely stumped.

Just saying, I’m quite new to this too, so you may want to take this with a grain of salt.

After making your GRIP program, click on Tools -> Generate Code and select C++. I’m not sure what the “Implement WPILib VisionPipeline” option does. You will get a file with a name like “GripPipeline.cpp” along with “GripPipeline.h” (or whatever you decided to call it).

In order to use the pipeline in your code, put both files into the same folder as the file your program is in. You will need to #include "GripPipeline.h". To actually grab an image, you do

cv::Mat frame;
cv::VideoCapture cap(cameranum);

Then, set the image width and height,

cap.set(CV_CAP_PROP_FRAME_WIDTH, /*image width here*/);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, /*image height here*/);

To return the results of what you made in GRIP:

bool bSuccess = cap.read(frame);
grip::GripPipeline gp;
return gp.process(frame);

I’ve attached an example here. In my GRIP program, I did an HSV threshold, then used find contours and filter contours to try to detect which contours were the vision targets on the high goal and gear peg. Here, I use the fact that the pixel height and distance are inversely proportional in order to calculate the distance from the vision targets. The method “initialOrientation” figures out if the robot starts on the left, middle, or right to determine which auto mode to run.
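Before the full file, here’s the inverse-proportionality math on its own, since it’s the core trick: pixel height shrinks in inverse proportion to distance, so distance = calibration distance * (calibration pixel height / current pixel height). A standalone sketch you can check off the robot (the function name is just for illustration; 39 px at 48 in is my high-goal calibration from the constants below):

```cpp
#include <cassert>

// distance = calibDistInches * calibPixelHeight / currentPixelHeight,
// because pixel height is inversely proportional to distance.
double distanceFromPixelHeight(double pixelHeight,
                               double calibDistInches,
                               double calibPixelHeight) {
    if (pixelHeight <= 0.0) { return -1.0; } // no target detected
    return calibDistInches * calibPixelHeight / pixelHeight;
}
```

At the calibration point (39 px) it returns 48 in, and at half the pixel height (19.5 px) it returns 96 in, i.e. twice as far.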


/*
 * VisionMethods.h
 *
 *  Created on: Feb 3, 2017
 *      Author: matthewacho
 */
#ifndef SRC_VISIONMETHODS_H_
#define SRC_VISIONMETHODS_H_

#include "Commands/GripPipeline.h"
#include "RobotConstants.h" //If you don't know what some variables are, they were probably defined in here
#include <string>
#include <vector>


std::vector<std::vector<cv::Point>> filteredContours(int cameranum) {
	cv::Mat frame;
	cv::VideoCapture cap(cameranum);
	cap.set(CV_CAP_PROP_FRAME_WIDTH, IMG_WIDTH);
	cap.set(CV_CAP_PROP_FRAME_HEIGHT, IMG_HEIGHT);
	bool bSuccess = cap.read(frame);
	if(!bSuccess) {return {};} //frame grab failed, so return no contours
	grip::GripPipeline gp;
	return gp.process(frame);
}

double distFromHighGoal() { //Find distance from the center of the boiler, in inches.
	double maxArea = 0.0; //contourArea returns a double, so don't truncate it to int
	double bestHeight = 0.0;
	std::vector<std::vector<cv::Point>> contours = filteredContours(cameraPortHigh);
	for(size_t c=0;c<contours.size();c++) {
		if(contourArea(contours[c])>maxArea) {
			maxArea=contourArea(contours[c]);
			bestHeight = boundingRect(contours[c]).height;
		}
	}
	if(bestHeight==0.0) {return -1.0;} //no target in view
	//Pixel height and distance are inversely proportional, so scale the
	//48-inch calibration distance by (calibration pixel height / current pixel height)
	return 48.0*targetHeight4FeetFromHighGoal/bestHeight;
}

double distFromGearPeg() { //Find distance from the gear peg, in inches.
	double maxArea = 0.0;
	double bestHeight = 0.0;
	std::vector<std::vector<cv::Point>> contours = filteredContours(cameraPortLow);
	for(size_t c=0;c<contours.size();c++) {
		if(contourArea(contours[c])>maxArea) {
			maxArea=contourArea(contours[c]);
			bestHeight = boundingRect(contours[c]).height;
		}
	}
	if(bestHeight==0.0) {return -1.0;} //no target in view
	//Pixel height and distance are inversely proportional, so scale the
	//48-inch calibration distance by (calibration pixel height / current pixel height)
	return 48.0*targetHeight4FeetFromGearPeg/bestHeight;
}

cv::Point centerOfContour(const std::vector<cv::Point>& contour) {
	int totalx=0; //given a (non-empty) contour, outputs its center
	int totaly=0;
	for(size_t d=0; d<contour.size();d++) {
		totalx+=contour[d].x;
		totaly+=contour[d].y;
	}
	cv::Point pt;
	pt.x=totalx/(int)contour.size();
	pt.y=totaly/(int)contour.size();
	return pt;
}

std::vector<cv::Point> contourCenters(const std::vector<std::vector<cv::Point>>& contours) {
	std::vector<cv::Point> centers; //given a vector of contours, outputs a vector consisting of their centers
	for(size_t c=0;c<contours.size();c++) {
		centers.push_back(centerOfContour(contours[c]));
	}
	return centers;
}

std::string initialOrientation() {
	int score=0; //positive if the targets sit right of center (robot on the left), negative if left of center
	std::vector<cv::Point> centers = contourCenters(filteredContours(cameraPortLow));
	for(size_t c=0;c<centers.size();c++) {
		if(centers[c].x>IMG_WIDTH/2+TOLERANCE) {
			score++;
		}
		else if(centers[c].x<IMG_WIDTH/2-TOLERANCE) {
			score--;
		}
	}
	if(score==0) {return "middle";}
	else if(score<0) {return "right";}
	else {return "left";}
}

By the way, the class cv::Point has two public integers, x and y. The x-coordinate increases as you go right and the y-coordinate increases as you go down. Sorry if that’s common sense to you, but it confused me at first, so I thought I’d include it. A contour is a std::vector of cv::Points.
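In other words, the “center” of a contour is just the average of those x and y values. Here’s the same idea with a plain x/y struct standing in for cv::Point so it runs without OpenCV (Pt and centerOf are names I made up for this sketch):

```cpp
#include <cassert>
#include <vector>

struct Pt { int x; int y; }; // stand-in for cv::Point

// Average all points of a (non-empty) contour to get its center,
// the same computation as centerOfContour in the header above.
Pt centerOf(const std::vector<Pt>& contour) {
    int totalx = 0, totaly = 0;
    for (const Pt& p : contour) {
        totalx += p.x;
        totaly += p.y;
    }
    int n = (int)contour.size();
    return Pt{ totalx / n, totaly / n };
}
```

A square contour with corners (0,0), (10,0), (10,10), and (0,10) averages out to (5,5).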

Let me know if you have any questions. I hope this helped!

-Pay me $2

Feel free to copy the code. I am a programmer for my team, but since I’m a rookie, nothing that I do will make a difference on the real bot. This code will probably be on team 9514’s robot at Calgames this year, though.

Sorry for the long delay. I’ve tried using some of your code, but some of it doesn’t make sense. For one, our GripPipeline::Process returns void, not a vector. Also, do you know how we can get the x-coordinate of a cv::Point? We’re trying to find the position of the gear lift, so we were going to find the spot midway between the two contours we get. Thanks for your help!

Suddenly, we’re getting an error when we #include GripPipeline.cpp. Eclipse yells at us with “first defined here” and “multiple definition of…”. Does anyone know what’s happening? Nothing shows up in a Google search and nothing GRIP-related is working anymore.

You need to include the header GripPipeline.h, not the .cpp implementation file. Including the .cpp compiles its function definitions into every file that includes it, which is what produces the “multiple definition” errors.
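Roughly what’s going on, as a sketch:

```cpp
// OK: the header only *declares* the pipeline class, and the linker
// uses the single set of definitions compiled from GripPipeline.cpp.
#include "GripPipeline.h"

// Not OK: this pastes the full method definitions into this file too,
// so the linker now sees every method defined twice:
// #include "GripPipeline.cpp"   // -> "multiple definition of ..."
```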

Thanks, it’s nice to know everything isn’t broken, just weirdly set up. Our main problem now is that when we try to call a GRIP function on the RIO, it quits out of Tele-Op. Do you know why it would do that?

void Robot::OperatorControl(void) {
    grip::GripPipeline grip;
    cv::Mat frame;
    cv::VideoCapture camera;
    camera.set(CV_CAP_PROP_FRAME_WIDTH, CAMERA_RES_X);
    camera.set(CV_CAP_PROP_FRAME_HEIGHT, CAMERA_RES_Y);
    while(IsOperatorControl() && IsEnabled()) {
        camera.read(frame);
        grip.GripPipeline::Process(frame);
    }
}

Can you post the RobotConstants.h?

Your GripPipeline.cpp, GripPipeline.h, and processing file all have to be together, so unless you’re also doing the vision processing on the roboRIO, that’s the reason.

The better way would be to post all the values to NetworkTables and read them in the robot program.
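For example, if you leave GRIP running on the driver station with an NT Publish ContoursReport step, the robot program can just read the arrays it publishes. A rough sketch, assuming the WPILib NetworkTables C++ API and a table name of "GRIP/myContoursReport" (the actual name is whatever you set in the publish step):

```cpp
#include <vector>
#include "networktables/NetworkTableInstance.h"

// Read the contour report GRIP publishes over NetworkTables.
// One array entry per contour; pick whichever contour you need
// and do the distance math on the roboRIO side.
void readGripReport() {
    auto table = nt::NetworkTableInstance::GetDefault()
                     .GetTable("GRIP/myContoursReport");
    std::vector<double> centerX =
        table->GetNumberArray("centerX", std::vector<double>{});
    std::vector<double> height =
        table->GetNumberArray("height", std::vector<double>{});
    // e.g. midway point between two gear-peg contours:
    // double mid = (centerX[0] + centerX[1]) / 2.0;
}
```

This also sidesteps the file-layout problem above, since the pipeline code never has to live in the robot project at all.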

Just for reference (although the file isn’t too exciting):

/*
 * RobotConstants.h
 *
 *  Created on: Feb 3, 2017
 *      Author: matthewacho
 */

#ifndef SRC_ROBOTCONSTANTS_H_
#define SRC_ROBOTCONSTANTS_H_


//set these to correct values after drivetrain fixed
constexpr int TOLERANCE = 10;
constexpr int IMG_WIDTH = 640;
constexpr int IMG_HEIGHT = 480;
constexpr int targetHeight4FeetFromHighGoal = 39;
constexpr int targetHeight4FeetFromGearPeg = 121;
constexpr double cameraWidth2Feet = 29.9375;
constexpr double pi = 3.1415926535898;
constexpr double radiansPerSec = 7.0; //Just a guess. The number of radians
									  //the robot can turn a second by doing
									  //drivetrain->set(1.0,-1.0) or vice versa
constexpr int bestDistFromHighGoal = 36; //this value may be off
constexpr double robotSpeed = 120; //inches per second, may be off

bool isAiming = false;
bool isGearing = false;

//4 feet = height 121 for gear peg
//4 feet = height 39 for high goal 4 inch high tape

//24 in = 29.9375 in width


// constexpr int RANGE_FINDER_MODULE = 1;


#endif /* SRC_ROBOTCONSTANTS_H_ */

Our robot keeps returning the error “pixel format is unsupported by OpenCV.” Any suggestions?

Would you have to put anything in the operator control function, or does GripPipeline.cpp do its thing automatically?