We plan on using the Raspberry Pi to do our vision processing. How would we output Python script results (after processing) and receive them on the roboRIO (Java)? I looked into pynetworktables and am confused about how to receive the data on the roboRIO.
GRIP has some examples. Specifically samples/frc_find_red_areas.
Basically, you just publish your results to NetworkTables with the pi connected to the roboRIO/radio/switch with an Ethernet cable; then you can read them from your robot program.
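To make that concrete, here is a minimal sketch of the pi-side publisher using pynetworktables (pip install pynetworktables). The server address, table name, key names, field of view, and image width below are example values chosen to match the Java snippets in this thread, not anything fixed by the library:

```python
# Pi-side publisher sketch using pynetworktables.
# The server address, table name, and key names are illustrative examples.

def pixel_offset_to_degrees(x_pixel, image_width, horizontal_fov_degrees):
    """Map a target's x pixel coordinate to an angle from image center.

    Simple linear approximation: -fov/2 at the left edge, +fov/2 at the right.
    """
    center = image_width / 2.0
    return (x_pixel - center) / image_width * horizontal_fov_degrees

def publish_result(vision_table, have_target, x_pixel,
                   image_width=640, fov_degrees=60.0):
    """Publish one frame's result; keys must match what the Java side reads."""
    vision_table.putBoolean("Have target", have_target)
    if have_target:
        vision_table.putNumber(
            "Angle to target",
            pixel_offset_to_degrees(x_pixel, image_width, fov_degrees))

if __name__ == "__main__":
    from networktables import NetworkTables

    # Connect as a client to the roboRIO; 10.TE.AM.2 or the mDNS name both work.
    NetworkTables.initialize(server="10.15.12.2")
    vision = NetworkTables.getTable("vision")
    # In the real script this call sits inside the camera-capture loop,
    # fed by whatever your OpenCV pipeline detects each frame.
    publish_result(vision, True, 400)
```

The publishing logic is kept in a plain function so it can be exercised without a robot on the network; only the `__main__` block touches NetworkTables.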
Hi, I am working on the same thing, but in Java. Is there documentation for NetworkTables in Java?
Thanks, could you take a look at this code I found on another thread:
//========================================
// Robot Code

// declare object
NetworkTable cameraTable;

// getTable (table is a singleton; if one does not exist then one will
// be created, otherwise a reference to the existing instance is returned)
cameraTable = NetworkTable.getTable("camera");

// get data elements from the table
// default values are returned if no data has been provided from another source yet
cameraTable.beginTransaction();
Boolean imageFound = cameraTable.getBoolean("found", false);
Double imageOffset = cameraTable.getDouble("offset", -1.0);
Double imageDistance = cameraTable.getDouble("distance", -1.0);
cameraTable.endTransaction();

System.out.println("found = " + imageFound);
System.out.println("offset = " + imageOffset);
System.out.println("distance = " + imageDistance);

//=======================================
// Driver Station PC
// camera image processing is done in a C# application on the DS

// declare it
private NetworkTable table;

// Init NetworkTable
NetworkTable.setIPAddress("10.6.23.2"); // ip of crio
table = NetworkTable.getTable("camera");

// in the image processing loop
table.beginTransaction();
table.putBoolean("found", true);
table.putDouble("distance", distance);
table.putDouble("offset", offset);
table.endTransaction();
Looks like what I need, but is it missing something (i.e., setTeam, etc.)?
That example is 4 years old. All you have to do on the pi:
NetworkTable.setClientMode();
NetworkTable.setTeamNumber(1512);
ITable visionTable = NetworkTable.getTable("vision");
That’ll set up the pi and a table specifically for vision. If you launch OutlineViewer from Eclipse (WPILib > Launch OutlineViewer), you’ll be able to see the values update in realtime.
Then, to publish the results from the vision processing:
visionTable.putBoolean("Have target", haveTarget);
visionTable.putNumber("Angle to target", angleToTarget);
...
In your robot program, you’d also declare a table for vision just like we did on the pi. To read the values the pi is publishing, use the corresponding ‘get’ methods:
haveTarget = visionTable.getBoolean("Have target", false); // default to false
targetAngle = visionTable.getNumber("Angle to target", 0); // default to 0
...
Thank you so much!
Does anyone know of a vision processing tutorial?
Look up PyImageSearch. Extremely useful!
I thought the Pi had issues running GRIP; are those gone now?
We are not using GRIP, but OpenCV directly.
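Since the thread mentions GRIP’s samples/frc_find_red_areas, here is a minimal sketch of the same idea in Python. To keep it dependency-light it thresholds with plain NumPy; in a real OpenCV pipeline, cv2.inRange followed by cv2.findContours or cv2.moments does the equivalent job. The threshold values are illustrative, not tuned:

```python
import numpy as np

def find_red_centroid(image_rgb, min_red=150, margin=50):
    """Find the centroid of 'red enough' pixels in an RGB image.

    A pixel counts as red when its R channel is bright and clearly
    dominates G and B. Returns (x, y) in pixel coordinates, or None
    when no red pixels are found.
    """
    # Widen to int16 so channel differences can't wrap around in uint8.
    r = image_rgb[:, :, 0].astype(np.int16)
    g = image_rgb[:, :, 1].astype(np.int16)
    b = image_rgb[:, :, 2].astype(np.int16)
    mask = (r >= min_red) & (r - g >= margin) & (r - b >= margin)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

The returned x coordinate is exactly the kind of value you would convert to an angle and publish to NetworkTables for the Java side to read.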