#1
Using the Raspberry Pi for Vision Processing
So I am slightly lost on how to use the Raspberry Pi 3 for vision tracking. We are using a Logitech USB camera, and GRIP as our vision processing program.
We have Raspbian installed on the Pi. I checked WPILib for processing on an ARM coprocessor: https://wpilib.screenstepslive.com/s...rm-coprocessor I saw the "Building Java" part, and I am completely lost as to the purpose and procedure of this section. (What are Maven artifacts? What is a Gradle build?)

I also tried building Java on the Pi, from this link: https://github.com/wpilibsuite/Visio...mples/releases But I got lost at the part where they ask you to actually build and run the code. Should I be running this directly from the Pi, or from a Windows machine? If anyone could set me on the correct path, that would be great.

UPDATE: Following the instructions from the links above, I managed to get Gradle to build the sample files. I then moved the ZIP file from the output folder to my Pi, but now I have no idea what to do with "CameraVision.zip". The extracted files are a JAR file, a .so file, and a text file.

Last edited by koreamaniac101 : 04-02-2017 at 14:23.
#2
Re: Using the Raspberry Pi for Vision Processing
It depends on how much work you want to do on the Pi. So far, we are trying to do as much vision work as possible on the Pi, and let the RoboRIO do what it does best: operate the robot. This means that all of the vision work is delegated to the Pi, namely acquiring the image and running the GRIP (OpenCV) pipeline.
The path we are taking is:

1) Install (aka "build") OpenCV natively for the Pi.
2) Write the Pi side of the code, which:
- captures the image
- uses the GRIP pipeline code to apply the transforms that may identify the target
- does some math on the output of the pipeline
- puts some info in NetworkTables
3) On the RoboRIO side of things, write code that:
- adds an event listener for NetworkTables changes
- when a NetworkTables change occurs, decides if any action is required, and takes that action if it is needed

So, basically we have two Java programs: one running on the Pi doing the vision processing, and one "traditional" program running on the RoboRIO, using NetworkTables to communicate between them. Ours is still in progress, BTW, but this describes the high-level "architecture".
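The "do some math on the output of the pipeline" step above can be sketched in plain Java. This is only an illustration, not code from this thread: the 640-pixel image width and 60-degree horizontal field of view are hypothetical example values you would replace with your own camera's numbers.

```java
// Minimal sketch of the "math on the pipeline output" step.
// IMAGE_WIDTH_PX and HORIZONTAL_FOV_DEG are hypothetical example values,
// not numbers from this thread; substitute your camera's real figures.
public class TargetMath {
    static final double IMAGE_WIDTH_PX = 640.0;
    static final double HORIZONTAL_FOV_DEG = 60.0;

    // Convert the target's center x coordinate (pixels) into a steering
    // angle (degrees) using a simple linear approximation: 0 is dead ahead,
    // negative is left of center, positive is right of center.
    static double pixelToAngle(double centerX) {
        double offsetPx = centerX - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }

    public static void main(String[] args) {
        System.out.println(pixelToAngle(320.0)); // centered target: prints 0.0
        System.out.println(pixelToAngle(480.0)); // right of center: prints 15.0
    }
}
```

The resulting angle is what you would publish to NetworkTables for the RoboRIO-side listener to act on.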
#3
Re: Using the Raspberry Pi for Vision Processing
Wouldn't happen to have a GitHub link for that? I'm in the same boat as the OP, trying to figure out how to get everything into the Gradle build.
#4
Re: Using the Raspberry Pi for Vision Processing
I've managed to get the contents of the ZIP file working; it's fairly simple.
(On the Pi)
1) Extract the ZIP to a location.
2) Copy the text from the text file.
3) Paste the text into the command line and hit Enter.
4) Voilà! It works.
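For reference, those steps look roughly like this in a Pi terminal. The exact launch command is whatever the bundled text file contains; the commented java invocation below is only a guess at its typical shape (a JAR shipped alongside a native .so usually needs java.library.path pointed at the extraction directory), so always prefer the text file's actual command.

```shell
# Extract the build output somewhere on the Pi (filenames from this thread)
mkdir -p ~/vision && cd ~/vision
unzip ~/CameraVision.zip

# Then run the command from the included text file. It is usually of this
# shape (hypothetical -- check your text file for the real one):
# java -Djava.library.path=. -jar CameraVision.jar
```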
#5
Re: Using the Raspberry Pi for Vision Processing
I tried to run gradlew build from the Windows cmd, but it says that JAVA_HOME isn't set.
#6
Re: Using the Raspberry Pi for Vision Processing
You need to set your JAVA_HOME variable. In general, you cannot build Java programs without specifying what Java distribution you are building them with.
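For example, on Windows you can set JAVA_HOME for the current cmd session before re-running the build. The JDK path below is just an example install location, not one from this thread; point it at wherever your JDK actually lives.

```bat
:: Point JAVA_HOME at the root of a JDK install (example path -- adjust to yours)
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_121

:: Then re-run the build from the project directory
gradlew build
```

Use setx instead of set if you want the variable to persist across cmd sessions.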
#7
Re: Using the Raspberry Pi for Vision Processing
Thank you very much! I got the build to go through; now to just get it onto the Pi and then make the vision JAR run on startup.
#8
Re: Using the Raspberry Pi for Vision Processing
Depending on how things go, I'll try to create an example project.
#9
Re: Using the Raspberry Pi for Vision Processing
I'm using cscore from http://first.wpi.edu/FRC/roborio/mav...m-raspbian.jar. This is my test program:

Code:
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
//import com.ctre.CANTalon;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.vision.VisionRunner;
import edu.wpi.first.wpilibj.vision.VisionThread;

public class Test {
    public static void main(String[] args) {
        VisionThread visionThread;      // not used yet; placeholder for the GRIP pipeline thread
        Object imgLock = new Object();  // lock for sharing results between threads later

        // Load the OpenCV native library before using any OpenCV classes.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Table for publishing vision results to the robot.
        NetworkTable table = NetworkTable.getTable("foo");

        // Sanity check that OpenCV works: print a 3x3 identity matrix.
        Mat mat = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println("mat = " + mat.dump());

        // Start capture from the USB camera via cscore.
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(640, 480);
    }
}
#10
Re: Using the Raspberry Pi for Vision Processing
Alright, I managed to get the build file and script to run on the Raspberry Pi. But when I connect the RoboRIO and Pi through an Ethernet switch, I get this error:
ERROR: select() to roboRIO-5938-FRC.local port 1735 error 113 - No route to host (TCPConnector.cpp:167)

Can anyone help me diagnose this?
#11
Re: Using the Raspberry Pi for Vision Processing
That sounds like a network configuration issue on the Raspberry Pi.
Do you have the mDNS service enabled on the Pi (often called avahi or avahi-daemon on Linux distributions)? You need to have that service enabled in order to resolve names like roboRIO-5938-FRC.local to their respective IP addresses.

Alternatively, if your roboRIO is configured with a static IP address (like 10.59.38.2), you could try using that in your code instead of the local mDNS name.

Running the ping command (like: ping roboRIO-5938-FRC.local) on the Pi is always a quick and easy check to see if your Pi is able to reach the roboRIO. Good luck.
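Those checks can be run from a Pi terminal along these lines. The hostname and static IP come from the error message earlier in this thread; the avahi-daemon package name is the usual one on Raspbian, but your distribution may differ.

```shell
# Quick reachability check by mDNS name
ping -c 3 roboRIO-5938-FRC.local

# If name resolution fails, make sure the mDNS daemon is installed and running
sudo apt-get install avahi-daemon
sudo service avahi-daemon status

# Fall back to the roboRIO's static IP if mDNS still fails
ping -c 3 10.59.38.2
```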