WPILib FRCVision Raspberry Pi image 2019.2.1 Update

I’m pleased to announce the availability of the 2019.2.1 update release of FRCVision, an off-the-shelf Raspberry Pi 3 image for FRC use, brought to you by WPILib! This is an update to the kickoff release.

TLDR: the 2019.2.1 release can be downloaded from GitHub.

About FRCVision

This Raspbian-based Raspberry Pi image includes the C++, Java, and Python libraries required for vision coprocessor development for FRC (OpenCV, cscore, ntcore, robotpy-cscore, Java 11, etc.).
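
To give a sense of what these bundled libraries enable, here is a minimal robotpy-cscore sketch that streams a single USB camera; the resolution and the assumption of a camera on /dev/video0 are illustrative:

    import time
    from cscore import CameraServer

    cs = CameraServer.getInstance()
    camera = cs.startAutomaticCapture()  # opens the first USB camera (/dev/video0)
    camera.setResolution(320, 240)

    # cscore serves the MJPEG stream from background threads;
    # just keep the process alive.
    while True:
        time.sleep(10)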

The image has been tested with both the Raspberry Pi 3 Model B and B+.

Features

  • Web dashboard for configuring the rPi (e.g. changing network settings), monitoring the vision program (console, restart), changing CameraServer and NetworkTables settings, and uploading vision processing applications, all without the need for SSH
  • Default application that performs simple streaming of multiple cameras; the image is “plug and play” for FRC dashboard streaming (just set your team number in the rPi web dashboard)
  • Includes example C++, Java, and Python programs to use as a basis for vision processing code
  • Designed for robustness to hard power-offs by defaulting the filesystem to read-only mode; safe to power directly from the VRM without an external battery
  • Boots (from power applied to vision program running) in less than 20 seconds

Getting Started

See http://wpilib.screenstepslive.com/s/currentCS/m/85074/l/1027253-what-you-need-to-get-the-pi-image-running for visual step-by-step installation instructions and additional documentation.

What’s Changed In This Release (since 2019.1.1)

Web dashboard

  • Available USB cameras are now listed on the web dashboard, and cameras can be easily switched between different path options (e.g. by-id or by-path; see the configuration sketch after this list). Camera connection status is now also displayed.
  • Windows EOLs are now properly converted to Unix style when uploading a Python application
  • C++ applications or other applications greater than 128KB (that would be a big Python program!) can now successfully be uploaded
  • Default stream settings can now be set (if a custom application is used, it requires updates to honor them; see the Examples section)
  • Application directory fixed for on-Pi example applications
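
For reference, the camera selection made in the dashboard is stored in /boot/frc.json. A sketch of a single-camera entry using a stable by-path device follows; the team number, path string, and stream parameters are illustrative and vary per Pi and USB port:

    {
      "team": 1234,
      "ntmode": "client",
      "cameras": [
        {
          "name": "rPi Camera 0",
          "path": "/dev/v4l/by-path/platform-3f980000.usb-usb-0:1.2:1.0-video-index0",
          "pixel format": "mjpeg",
          "width": 160,
          "height": 120,
          "fps": 15
        }
      ]
    }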

Image

  • The full WPILib libraries are now installed
  • Extraneous warnings from libjpeg seen with some cameras have been removed
  • The first time the image is used, the Pi now automatically reboots after resizing the root filesystem. This ensures the filesystem is properly read-only at first use
  • Python console output is now unbuffered by default, making Python prints visible in the console window
  • OpenCV headers are now included in the image
  • BLAS has been replaced by OpenBLAS, which should improve numpy performance

Built-in streaming application

  • Cameras are now kept open by default
  • Implements the default stream settings set by the dashboard

Examples

Note: Benefiting from these changes requires downloading the updated example .zip from the dashboard and merging your code changes into it.

  • Several fixes to the C++ example. Required dependency libs and includes are now bundled in the .zip file
  • Java example required dependency .jar files are now put into the correct location in the .zip file
  • Cameras are now kept open by default
  • All examples implement the default stream settings set by the dashboard (see the sketch after this list)
  • Example README.txt files have been updated with instructions for building/deploying from desktop using the web dashboard
  • The full WPILib libraries are now included in the dependencies provided with the example .zips
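
For teams updating a custom application, here is a sketch, in the spirit of the Python example's startCamera function, of the two behaviors above: keeping the camera open and applying the dashboard's default stream settings. Here config stands for one parsed camera entry from /boot/frc.json (an object with name, path, config, and streamConfig fields):

    import json
    from cscore import CameraServer, UsbCamera, VideoSource

    def startCamera(config):
        """Start a camera from a parsed /boot/frc.json entry (sketch)."""
        inst = CameraServer.getInstance()
        camera = UsbCamera(config.name, config.path)
        server = inst.startAutomaticCapture(camera=camera, return_server=True)
        camera.setConfigJson(json.dumps(config.config))

        # Keep the camera open even when no clients are connected
        camera.setConnectionStrategy(VideoSource.ConnectionStrategy.kKeepOpen)

        # Apply the default stream settings set in the web dashboard
        if config.streamConfig is not None:
            server.setConfigJson(json.dumps(config.streamConfig))
        return camera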

Is there a way to use this with GRIP? Sorry if it’s already covered somewhere on ScreenSteps or the GRIP wiki on GitHub; I couldn’t find anything there.

Kinda sorta? You can take the Python file GRIP spits out and massage it to meet your needs. If you look at what 3997 has done (https://github.com/team3997/ChickenVision), you can see how they’ve manipulated HSV values (which you could tune in GRIP) and used the medianBlur filter.
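
As a rough illustration of that approach (not GRIP’s actual generated code), a massaged pipeline running under cscore on the Pi could look like the sketch below; the blur kernel size and HSV bounds are placeholders you would tune in GRIP:

    import cv2
    import numpy as np
    from cscore import CameraServer

    cs = CameraServer.getInstance()
    cs.startAutomaticCapture()
    sink = cs.getVideo()
    output = cs.putVideo("Processed", 320, 240)

    img = np.zeros((240, 320, 3), dtype=np.uint8)
    while True:
        t, img = sink.grabFrame(img)
        if t == 0:
            continue  # frame grab timed out or errored; try again
        # GRIP-style steps: median blur, convert to HSV, threshold
        blurred = cv2.medianBlur(img, 5)  # placeholder kernel size
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))  # placeholder bounds
        output.putFrame(mask)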

Hi Peter,
I was wondering if there is a white paper on how this code works on the Raspberry Pi. I would like to get a better understanding of how the multiCameraServer code functions and how it streams to FRCvision.local.

I plan on using OpenCV to process the frames for line tracking. How would I go about displaying the processed frames that show the results of the line tracking? Do I use Xming or Shuffleboard? Do you have any recommendations? Are there any resources that describe how to do this?

Another concern that I have: is the rPi streaming video once booted, through the runCamera script, or does it need FRCvision.local access to start streaming?

I have more questions, but I will stop here. :)

Thank you and best regards,

-Ed