James is doing a great job of explaining the mechanical parts of our robot design, and has asked me to share some of the software aspects as they come together.
This year, we have decided to try using a Raspberry Pi 3B as a vision coprocessor. WPI provides a ready-made disk image for exactly this purpose, which made it an easy experiment to start. The instructions from WPI work great, with no surprises.
Along the way we made a discovery that makes developing and testing these algorithms much easier: code meant for the coprocessor runs just fine on Windows. This isn’t too surprising, but it’s nice to see how little effort it takes.
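What makes this portable is that a GRIP-generated pipeline is nothing but OpenCV calls behind a process() method; the Java API is identical on Windows and on the Pi, and only the native library underneath changes. As a toy illustration (this ToyPipeline class is our own invention, not something from the repo), a pipeline boils down to:

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Toy stand-in for a GRIP-generated pipeline (illustrative only; see the
// repo for the real GripPipelineLinesFromTarget). It is plain OpenCV with
// no Pi-specific dependencies, so it runs anywhere the OpenCV native
// library has been loaded (System.loadLibrary(Core.NATIVE_LIBRARY_NAME)).
public class ToyPipeline {
    private final Mat hsvOutput = new Mat();

    public void process(Mat source) {
        // Real GRIP pipelines are longer chains of calls like this one.
        Imgproc.cvtColor(source, hsvOutput, Imgproc.COLOR_BGR2HSV);
    }

    public Mat hsvOutput() {
        return hsvOutput;
    }
}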
You can find Team 95’s bleeding-edge example in our GitHub repo (explanations further below). If you want to run our example, follow these steps:
- In VSCode, choose “Open Folder” and open the “VisionCoprocessor” folder
- Download OpenCV 3.4.4 from https://opencv.org/releases.html (https://sourceforge.net/projects/opencvlibrary/files/3.4.4/opencv-3.4.4-vc14_vc15.exe/download)
- Run that .exe to self-extract
- Copy its opencv\build\java\x64\opencv_java344.dll into C:\Users\Public\frc2019\frccode\ (or, if you’re on an x86 computer, use the x86 version instead; a quick sanity check for this step appears right after this list)
- Click “debug” or “run without debugging” while GripPipelineLinesFromTarget.java is the active file in the editor
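The DLL copy is the step most likely to go wrong, so here’s a quick sanity check (this OpenCvSmokeTest class is our own invention, not part of the repo). Run it the same way as the main example: it prints a version string if the JVM found the DLL, and throws an UnsatisfiedLinkError if it didn’t.

import org.opencv.core.Core;

// Minimal smoke test for the native library setup (our invention, not in
// the repo). Success prints something like "Loaded OpenCV 3.4.4"; failure
// throws UnsatisfiedLinkError from System.loadLibrary.
public class OpenCvSmokeTest {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // "opencv_java344" for 3.4.4
        System.out.println("Loaded OpenCV " + Core.VERSION);
    }
}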
After following those steps, you should see a pile of windows appear, with annotations drawn on top of the test images. This is an algorithm we’ve decided not to use, so you may notice that it doesn’t work especially well. (As of commit 671740ef9de1df072eea18a20af7392841dc0ef2)
The key to making this work is found in GripPipelineLinesFromTarget.main():
// Assumes the usual OpenCV imports: org.opencv.core.Mat, org.opencv.core.Scalar,
// org.opencv.imgcodecs.Imgcodecs, org.opencv.imgproc.Imgproc, org.opencv.highgui.HighGui
public static void main(String[] args) {
    String[] filesToProcess = {
        "test_images/Floor line/CargoAngledLine48in.jpg",
        // ... many files omitted ...
        "test_images/Unoccluded, two targets/RocketPanelAngleDark84in.jpg",
    };
    // Colors are in BGR order: blue for the raw lines, green and red for
    // the two filtered sets.
    Scalar unfilteredLineColor = new Scalar(255, 0, 0);
    Scalar leftLineColor = new Scalar(0, 255, 0);
    Scalar rightLineColor = new Scalar(0, 0, 255);
    int lineWidth = 1;
    GripPipelineLinesFromTarget processor = new GripPipelineLinesFromTarget();
    for (String file : filesToProcess) {
        Mat img = Imgcodecs.imread(file);
        processor.process(img);
        // Draw every detected line first, then overdraw the filtered subsets.
        for (GripPipelineLinesFromTarget.Line line : processor.findLinesOutput()) {
            Imgproc.line(img, line.startPoint(), line.endPoint(), unfilteredLineColor, lineWidth);
        }
        for (GripPipelineLinesFromTarget.Line line : processor.filterLines0Output()) {
            Imgproc.line(img, line.startPoint(), line.endPoint(), leftLineColor, lineWidth);
        }
        for (GripPipelineLinesFromTarget.Line line : processor.filterLines1Output()) {
            Imgproc.line(img, line.startPoint(), line.endPoint(), rightLineColor, lineWidth);
        }
        HighGui.imshow(file, img);
        System.out.println(file + " has " + processor.filterLines0Output().size() + " left side lines: " + processor.filterLines0Output());
        System.out.println(file + " has " + processor.filterLines1Output().size() + " right side lines: " + processor.filterLines1Output());
    }
    HighGui.waitKey(10);
}
This main() method is not run by the code that runs on the Pi (see Main.java for that; we’re using the one from the Java example on the Pi image). It executes only when you say “Run” from VSCode. That also makes it trivial to attach the VSCode debugger, which let me track down some errors in the GRIP-generated pipeline.
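For context, here is a rough sketch of how the Pi side consumes that same pipeline class, modeled on the Java example that ships with the WPILib Pi image. The package and class names below are from the 2019 release as we remember them (VisionThread expects the pipeline to implement its VisionPipeline interface, which GRIP’s WPILib export does); the image’s own Main.java, which also reads camera settings from /boot/frc.json, is the authoritative version.

import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.vision.VisionThread;

public class Main {
    public static void main(String... args) {
        // The real example configures cameras from /boot/frc.json; this
        // one-liner is a simplification for the sketch.
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();

        // The same GRIP-generated pipeline class we debug on Windows gets
        // handed to a VisionThread; the callback runs once per processed frame.
        VisionThread visionThread = new VisionThread(camera,
                new GripPipelineLinesFromTarget(),
                pipeline -> {
                    // Read pipeline.filterLines0Output() and friends here,
                    // and publish the results wherever the robot expects them.
                });
        visionThread.start();

        // Keep the main thread alive while the vision thread does the work.
        for (;;) {
            try {
                Thread.sleep(10000);
            } catch (InterruptedException ex) {
                return;
            }
        }
    }
}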