First, I am not an expert. I follow the instructions, then push, shove, dig, and kick through until I get to the end. Hopefully we learn enough along the way, and if not, we come here to the real experts. This is what I've done with FRC Vision Processing on an ARM.
OK, we got the Vision Sample running on a Raspberry Pi. We were able to compile on Windows after downloading Gradle and the latest JDK, neither of which is mentioned in the instructions. Then we got GRIP running on Windows against the Raspberry Pi USB camera via SparrowCam Motion: Add Source > IP Camera, Rpi.ip.adr:8081. I felt we would get better pipeline values from the actual camera's pixels. Again, some missing details, but nothing we couldn't push through. We built a usable pipeline and ran Generate Code for Java.
So here's the question: what do we do with it? The generated code is a class. That makes sense: instantiate a GripPipeline object and call gripPipeline.process(mat0). However, the Vision Sample is built from the command line with gradlew, and I have no idea how to include GripPipeline.java in that build.
So at this time, the only solution I can come up with is to type the OpenCV commands into Main.java and use the numbers from the pipeline; that's the way the sample is set up. It would be nice if the sample were set up to use the Generate Code output: two files, Main and GripPipeline, combined in the Gradle build. Or built inside an IDE. Just saying.
If you don't have the time, then the short answer is that the pipeline class that GRIP renders has some instance variables on it that contain the result of whatever operation you were trying to perform. So for instance, if your last operation was Find Blobs, then there will be an instance variable holding a matrix of key points, like so:
private MatOfKeyPoint findBlobsOutput = new MatOfKeyPoint();
from which you can get a count, as well as the location and radius of each found blob.
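For example, here is a minimal sketch of reading those outputs. It assumes your generated class is named GripPipeline and that the last step really was Find Blobs, so the class exposes a findBlobsOutput() accessor (the same accessor convention as hsvThresholdOutput() below); the BlobReader wrapper and report() method are just names I made up for illustration:

import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;

public class BlobReader {
    public static void report(GripPipeline pipeline, Mat frame) {
        // Run the generated pipeline on one frame.
        pipeline.process(frame);

        // findBlobsOutput() returns the MatOfKeyPoint shown above;
        // toArray() gives one KeyPoint per detected blob.
        KeyPoint[] blobs = pipeline.findBlobsOutput().toArray();
        System.out.println("Found " + blobs.length + " blob(s)");

        for (KeyPoint blob : blobs) {
            double x = blob.pt.x;            // blob center x, pixels
            double y = blob.pt.y;            // blob center y, pixels
            double radius = blob.size / 2.0; // KeyPoint.size is the diameter
            System.out.println("  at (" + x + ", " + y + ") r=" + radius);
        }
    }
}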
Then you would need to interpret what that means for you and put some values into NetworkTables, or otherwise rig up a communication message to the roboRIO with the values.
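If it helps, here is a rough sketch of that last step. It assumes the old edu.wpi.first.wpilibj.networktables client API that the Vision Sample builds against; the "vision/blobs" table name and the key names are made up for illustration, so match them to whatever your robot code reads:

import edu.wpi.first.wpilibj.networktables.NetworkTable;
import org.opencv.core.KeyPoint;

public class BlobPublisher {
    private final NetworkTable table;

    public BlobPublisher(int team) {
        // Run as a NetworkTables client and connect to the roboRIO by team number.
        NetworkTable.setClientMode();
        NetworkTable.setTeam(team);
        table = NetworkTable.getTable("vision/blobs"); // hypothetical table name
    }

    public void publish(KeyPoint[] blobs) {
        double[] x = new double[blobs.length];
        double[] y = new double[blobs.length];
        double[] r = new double[blobs.length];
        for (int i = 0; i < blobs.length; i++) {
            x[i] = blobs[i].pt.x;
            y[i] = blobs[i].pt.y;
            r[i] = blobs[i].size / 2.0;
        }
        // The robot program reads these back with getNumberArray().
        table.putNumberArray("x", x);
        table.putNumberArray("y", y);
        table.putNumberArray("radius", r);
    }
}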
Thanks Chuck. I did watch your vids and found them helpful for getting me through the Vision Sample. Very nice presentations.
SOooo, I copied the Generate Code output to the same folder as Main.java; mine is named GripPipeline.java. I tried several (hundred) times to copy pieces of the pipeline into Main, but kept hitting either compile errors or runtime errors. Finally, one of the errors suggested it needed a reference to GripPipeline.java. Hmmmm, could it be? I deleted all my copy/pastes and cleaned Main back up to the original. Then "instantiate a GripPipeline object and call gripPipeline.process(mat0)" and changed the output reference. Sound familiar?
Here are the changes to Main:
public class Main {
    // Instantiate the GRIP pipeline as gpl.
    static GripPipeline gpl = new GripPipeline();
    …
    // -org- Imgproc.cvtColor(inputImage, hsv, Imgproc.COLOR_BGR2HSV);
    // Run the GRIP pipeline on the grabbed frame. Results land in
    // instance fields exposed through accessors like hsvThresholdOutput().
    gpl.process(inputImage);
    …
    // -org- imageSource.putFrame(hsv);
    // Stream the GRIP output instead of the hand-rolled hsv Mat.
    imageSource.putFrame(gpl.hsvThresholdOutput());
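For anyone following along, here is roughly what the whole grab/process/stream loop looks like with those changes in place. This is a sketch, assuming the cscore CvSink/CvSource objects the Vision Sample already sets up; the runPipeline wrapper is just for illustration:

import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import org.opencv.core.Mat;

public class PipelineLoop {
    // imageSink grabs frames from the camera; imageSource feeds the MJPEG server.
    static void runPipeline(CvSink imageSink, CvSource imageSource) {
        GripPipeline gpl = new GripPipeline();
        Mat inputImage = new Mat();
        while (true) {
            // grabFrame() returns 0 on timeout or error; skip those frames.
            long frameTime = imageSink.grabFrame(inputImage);
            if (frameTime == 0) continue;

            gpl.process(inputImage);                        // run the GRIP steps
            imageSource.putFrame(gpl.hsvThresholdOutput()); // stream the result
        }
    }
}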
gradlew build --info, copied the zip to the RPi, moved it, unzipped it, and ran sh runCameraVision. I have the raw image at port 1185 and the hsvThreshold image at 1186. AHHHHhhhhhhhhhhhhhh!!!
The next issue is that the live HSV image doesn't match the GRIP HSV image. I think it has to do with the camera. I'm using a Logitech 910(?). I'm going to try switching to a Pi Cam; I'd have a little more control.
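If it turns out to be the Logitech's auto exposure or auto white balance drifting away from what GRIP was tuned on, cscore can lock those settings down without switching cameras. A sketch, where the device index and the specific values are guesses you'd tune to match your pipeline:

import edu.wpi.cscore.UsbCamera;

public class CameraSetup {
    public static UsbCamera lockedCamera() {
        UsbCamera camera = new UsbCamera("USB Camera 0", 0); // device index, adjust as needed
        camera.setResolution(640, 480);
        camera.setBrightness(30);           // 0-100; pick to match GRIP tuning
        camera.setExposureManual(20);       // stop auto exposure from shifting HSV values
        camera.setWhiteBalanceManual(4500); // fix the color temperature
        return camera;
    }
}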