#1
Using GRIP code with C++
Team 2002 is having a lot of trouble with vision processing this year. We tried to use RoboRealm like last year, but it wouldn't connect to the RoboRIO. Now we're trying to figure out GRIP, but the only example on FIRST's website is for Java, so we don't know how to use the generated code with our robot. Also, looking around on the web, it seems like we should run GRIP on our computer to save the RIO some processing power, but we aren't sure what that would take either. Any help at all would be much appreciated, we're completely stumped.
#2
Re: Using GRIP code with C++
Just saying, I'm quite new to this too, so you may want to take this with a grain of salt.
After making your GRIP program, click on Tools -> Generate Code and select C++. (I'm not sure what the "Implement WPILib VisionPipeline" option does.) You will get a file with a name like "GripPipeline.cpp" along with "GripPipeline.h" (or whatever you decided to call it). To use the pipeline in your robot code, put both files into the same folder as the file your program is in, and add #include "GripPipeline.h".
To grab an image, declare cv::Mat frame; and open the camera with cv::VideoCapture cap(cameranum);. Then set the image width and height: cap.set(CV_CAP_PROP_FRAME_WIDTH, /*image width here*/); and cap.set(CV_CAP_PROP_FRAME_HEIGHT, /*image height here*/);. To run the pipeline you made in GRIP on a frame and return its results: bool bSuccess = cap.read(frame); grip::GripPipeline gp; return gp.process(frame);
I've attached an example here. In my GRIP program, I did an HSV threshold, then used find contours and filter contours to try to detect which contours were the vision targets on the high goal and gear peg. I use the fact that pixel height and distance are inversely proportional to calculate the distance from the vision targets. The method "initialOrientation" figures out whether the robot starts on the left, middle, or right, which determines which auto mode to run. Code:
/*
* VisionMethods.h
*
* Created on: Feb 3, 2017
* Author: matthewacho
*/
#ifndef SRC_VISIONMETHODS_H_
#define SRC_VISIONMETHODS_H_
#include "Commands/GripPipeline.h"
#include "RobotConstants.h" //If you don't know what some variables are, they were probably defined in here
#include <string>
#include <vector>
std::vector<std::vector<cv::Point>> filteredContours(int cameranum) {
    cv::Mat frame;
    cv::VideoCapture cap(cameranum);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, IMG_WIDTH);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, IMG_HEIGHT);
    if (!cap.read(frame)) { //if the grab failed, return no contours instead of processing garbage
        return std::vector<std::vector<cv::Point>>();
    }
    grip::GripPipeline gp;
    return gp.process(frame);
}
double distFromHighGoal() { //Find distance from the center of the boiler, in inches.
    double maxArea = 0.0; //contourArea returns a double
    double bestHeight = 0.0;
    std::vector<std::vector<cv::Point>> contours = filteredContours(cameraPortHigh);
    for (size_t c = 0; c < contours.size(); c++) {
        if (contourArea(contours[c]) > maxArea) {
            maxArea = contourArea(contours[c]);
            bestHeight = boundingRect(contours[c]).height;
        }
    }
    if (bestHeight == 0.0) { return 0.0; } //no target found
    //Pixel height and distance are inversely proportional:
    //distance * pixelHeight = 48 inches * (pixel height measured at 4 feet)
    return 48.0 * targetHeight4FeetFromHighGoal / bestHeight;
}
double distFromGearPeg() { //Find distance from the gear peg, in inches.
    double maxArea = 0.0;
    double bestHeight = 0.0;
    std::vector<std::vector<cv::Point>> contours = filteredContours(cameraPortLow);
    for (size_t c = 0; c < contours.size(); c++) {
        if (contourArea(contours[c]) > maxArea) {
            maxArea = contourArea(contours[c]);
            bestHeight = boundingRect(contours[c]).height;
        }
    }
    if (bestHeight == 0.0) { return 0.0; } //no target found
    //Pixel height and distance are inversely proportional:
    //distance * pixelHeight = 48 inches * (pixel height measured at 4 feet)
    return 48.0 * targetHeight4FeetFromGearPeg / bestHeight;
}
cv::Point centerOfContour(std::vector<cv::Point> contour) { //given a contour, outputs its center
    if (contour.empty()) { return cv::Point(0, 0); } //avoid dividing by zero below
    int totalx = 0;
    int totaly = 0;
    for (size_t d = 0; d < contour.size(); d++) {
        totalx += contour[d].x;
        totaly += contour[d].y;
    }
    cv::Point pt;
    pt.x = totalx / contour.size();
    pt.y = totaly / contour.size();
    return pt;
}
std::vector<cv::Point> contourCenters(std::vector<std::vector<cv::Point>> contours) {
    std::vector<cv::Point> centers; //given a vector of contours, outputs a vector of their centers
    for (size_t c = 0; c < contours.size(); c++) {
        centers.push_back(centerOfContour(contours[c]));
    }
    return centers;
}
std::string initialOrientation() {
    //Positive score: targets appear right of center, so the robot started on the left (and vice versa).
    int score = 0;
    std::vector<cv::Point> centers = contourCenters(filteredContours(cameraPortLow));
    for (size_t c = 0; c < centers.size(); c++) {
        if (centers[c].x > IMG_WIDTH / 2 + TOLERANCE) {
            score++;
        } else if (centers[c].x < IMG_WIDTH / 2 - TOLERANCE) {
            score--;
        }
    }
    if (score == 0) { return "middle"; }
    else if (score < 0) { return "right"; }
    else { return "left"; }
}
#endif /* SRC_VISIONMETHODS_H_ */
Let me know if you have any questions. I hope this helped! -Pay me $2
#3
Re: Using GRIP code with C++
Feel free to copy the code. I am a programmer for my team, but since I'm a rookie, nothing that I do will make a difference on the real bot. This code will probably be on team 9514's robot at Calgames this year, though.