I’m pleased to announce the availability of the kickoff release (2019.1.1) of FRCVision, an off-the-shelf Raspberry Pi 3 image for FRC use, brought to you by WPILib!
This Raspbian-based image includes the WPILib and RobotPy C++, Java, and Python libraries required for FRC vision coprocessor development (opencv, cscore, ntcore, pynetworktables, robotpy-cscore, Java 11, etc.). It bundles a default application that streams multiple cameras, plus example C++, Java, and Python programs to use as a basis for vision processing code. It ties into NetworkTables for easy camera use from FRC dashboards such as Shuffleboard and the LabVIEW dashboard.
A web dashboard is also included to configure and monitor the Pi (e.g. change network settings), monitor the vision program (console output, restart), change CameraServer and NetworkTables settings, and upload vision processing applications, all without needing SSH. The image is also designed to be robust against hard power-offs by defaulting the filesystem to read-only mode.
Documentation is available on ScreenSteps here: https://wpilib.screenstepslive.com/s/currentCS/m/85074
Download from GitHub directly here: https://github.com/wpilibsuite/FRCVision-pi-gen/releases
Given 2019’s Sandstorm period, I’d like to point out that a Pi with this image is an easy way to add multiple camera streams to your robot: the Pi has 4 USB ports, and the included application supports multiple camera streams out of the box.
Simply go to the web dashboard, click “Add Camera”, give it a name and “/dev/video1” (or “/dev/video2”, etc.) as the device, set the resolution, and you’ll have an additional camera stream for the dashboard.
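Behind the scenes, the web dashboard saves these settings to the camera configuration file on the Pi (on the FRCVision image this is /boot/frc.json). For illustration, a config with a second USB camera added ends up looking roughly like this (the field names and values here are an approximation of the 2019 format, not authoritative; check the file on your own Pi):

```json
{
  "team": 0,
  "ntmode": "client",
  "cameras": [
    { "name": "Front", "path": "/dev/video0", "width": 320, "height": 240, "fps": 30 },
    { "name": "Rear",  "path": "/dev/video1", "width": 320, "height": 240, "fps": 30 }
  ]
}
```

Editing this file by hand (after remounting read-write) and restarting the vision program should have the same effect as clicking through the dashboard.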
Is there any way to change the rate at which NetworkTables sends data?
You can use SetUpdateRate(), but even better is to call Flush() each time you finish processing a frame. See Networking a raspberry pi
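To make the difference concrete, here is a stdlib-only toy model of the batching behavior (an illustration of the concept, not the real NetworkTables implementation; the real API and default update period differ in details):

```python
import time

class BatchedTable:
    """Toy model of NetworkTables batching, for illustration only:
    writes are queued locally and only go out on the wire when the
    update period elapses or flush() is called explicitly."""

    def __init__(self, update_rate=0.1):  # assume ~100 ms batching
        self.update_rate = update_rate
        self.pending = {}
        self.sent = []                    # (key, value) pairs "transmitted"
        self.last_send = time.monotonic()

    def put_number(self, key, value):
        self.pending[key] = value
        # without an explicit flush, the value waits for the next period
        if time.monotonic() - self.last_send >= self.update_rate:
            self.flush()

    def flush(self):
        self.sent.extend(self.pending.items())
        self.pending.clear()
        self.last_send = time.monotonic()

table = BatchedTable()
table.put_number("targetX", 3.2)   # queued; could wait up to a full period
table.put_number("targetY", -1.5)
table.flush()                      # per-frame flush sends both immediately
```

The takeaway: raising the update rate shrinks the worst-case latency, but flushing once per processed frame removes the batching delay entirely for vision data.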
I am looking forward to digging into this toolkit with the team. I ran through the install on a new Pi, and I have one comment that might help others and one question:
- the comment: I noticed that the documentation does not include the login info. It should have been obvious to me to try the Raspbian defaults (username: pi, password: raspberry), but it took me a few minutes, so I offer that info here in case it helps someone.
- the question: I notice that an X server is not included in the distribution. This makes sense for the intended purpose, but I have a team without computers, so I also wanted to use the Pi as a development environment. Does anyone have thoughts on installing X by doing something like “apt-get install xutils” and then installing Visual Studio on the Pi?
I have a topic started here Raspberry Pi Vision Questions but I thought maybe I should come to the source.
I got a successful build of the Java example. I copied the two files over to the Pi, and now when I try ./runInteractive I get this; it just hangs at the last error. Did I miss something? I am connected to the robot: I can deploy code, see everything on the Driver Station, and SSH into the Pi.
Not sure if others have had this issue; when I tried running the Python example code, it threw the error
/home/pi/runCamera: 5: cd: can't cd to python-multiCameraServer
/home/pi/runCamera: 6: exec: ./multiCameraServer.py: not found
All I had to do was SSH into the Pi and type
cd examples
mv python-multiCameraServer/ ~/
to move the Python example folder out of the examples folder, which fixed the error, but I thought I should post the problem here.
Thanks for reporting. I opened a PR to fix that issue last night, it will be in the next release.
It streams using mjpeg? Do you know what the bitrates are like for various combinations of resolution and framerate?
Yes, just like the RoboRIO CameraServer, everything is streamed as MJPEG. I don’t have a table of data rates; you’ll have to do empirical testing yourself (note the data rate also depends on compression quality, so it’s actually a 3-dimensional space). The “System Status” tab of the FRCVision web dashboard shows total network utilization, and most dashboards display the camera data rate for each stream as well.
It also depends on how compressible the image is. The first image will compress much better than the second (by a factor of about 3).
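For a rough starting point before measuring, you can estimate MJPEG bandwidth from resolution, frame rate, and an assumed JPEG compression ratio (the 20:1 ratio below is a made-up placeholder; as noted above, the real ratio depends on compression quality and scene content):

```python
def mjpeg_bitrate_mbps(width, height, fps, compression_ratio):
    """Back-of-envelope MJPEG bandwidth estimate.

    Assumes 3 bytes per pixel of raw image data, divided by a
    hypothetical JPEG compression ratio; real frame sizes vary widely.
    """
    raw_bytes_per_frame = width * height * 3
    compressed_bytes_per_frame = raw_bytes_per_frame / compression_ratio
    return compressed_bytes_per_frame * fps * 8 / 1e6  # megabits/second

# e.g. a 320x240 stream at 30 fps with an assumed 20:1 ratio
rate = mjpeg_bitrate_mbps(320, 240, 30, 20)  # ~2.8 Mbps
```

Note that halving the resolution in each dimension quarters the estimate, which is why dropping to a smaller stream size is usually the first knob to turn when bandwidth-limited.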
I tried it with a Pi Camera v2.1 (IMX219 sensor), and it streams at 120x160, but the “Supported Video Modes” table has no rows, just the headings. Think it’s something on my end?
The Pi Camera doesn’t have discrete designated modes; it supports a giant continuous range (every power of 2 from tiny to huge). The table only lists specific designated modes.