Request Tutorial for RaspberryPi Vision in Python

We have recently been trying to do vision tracking on our RaspberryPi with a Python program, but after 2 days of trying to download simple software to the rpi, we can’t get it to connect to the computer, whether through our own wifi network or through the robot radio.
Another problem is that despite installing FRCVision, we still can’t use our vision program (python, from scratch) because of ‘unmet dependencies.’

This is the code we are trying to put on the rpi: 4681 Vision Code. The code itself works, because we can run it on a Windows laptop and track a tennis ball using the laptop camera.

From WPILib Docs:

Without using the FRC Console to read/write, we can’t install code onto the rpi by connecting it to our computer. Neither apt-get nor pip works.
When using the FRC Console, the rpi is not detected by our laptop. We go to frcvision.local/ and don’t get a camera stream.
There’s also something about setting a static IP that may help, but I don’t understand it.

In the end, what crucial steps are we missing or should be taking? Please ask for clarifying info at will; we are seeking any advice possible. Thanks!

To install missing dependencies, you’ll need to ssh into the Pi (username “pi”, password “raspberry”), and run the command alias “rw” to make the filesystem writable. Then you can run “sudo apt-get” et al.
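As a rough illustration (assuming the default FRCVision hostname and that the Pi is reachable on your network; the package name here is just an example), the session would look something like:

```
ssh pi@frcvision.local          # default password: raspberry
rw                              # alias: remount the filesystem read-write
sudo apt-get update
sudo apt-get install python3-opencv
ro                              # alias: back to read-only when you're done
```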

The wireless card is disabled by default on the image, because it’s not legal to use it for competition. You should be able to connect the Pi to a normal network via Ethernet and get internet access.

I’m not sure what you mean by the rpi not being detected on your laptop when using the FRC Console.

The vision code you linked to isn’t going to work on the Pi as-is, though. You’re using OpenCV imshow, which is a GUI output function. In order to view the stream on the driver station, you’ll need to create a video stream over the network; the standard approach for this is to use the CameraServer functions to provide an MJPEG-over-HTTP stream.
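To give a feel for what that involves (this is *not* the WPILib API — cscore’s CameraServer does all of this for you; the helper name and boundary string below are made up), the multipart framing of an MJPEG stream looks roughly like this:

```python
# Illustrative sketch of MJPEG-over-HTTP framing, NOT the WPILib/cscore API.
# On the Pi you would normally just hand frames to CameraServer and let it
# serve the stream for you.
BOUNDARY = b"frameboundary"  # arbitrary multipart boundary string

# The HTTP response's Content-Type header that tells the browser to expect
# a stream of replacing parts:
CONTENT_TYPE = b"multipart/x-mixed-replace; boundary=" + BOUNDARY

def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG-encoded frame in the multipart framing browsers expect."""
    return (b"--" + BOUNDARY + b"\r\n"
            + b"Content-Type: image/jpeg\r\n"
            + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii")
            + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

# A stream is just the CONTENT_TYPE header above, followed by one
# mjpeg_chunk() per processed frame, written to the HTTP socket.
```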

By using the FRC Console, we thought we would see our code’s console output. We don’t need the camera stream. Our vision code runs entirely on the rpi, and we don’t need to stream the image over to our driver station.

We’ve tried attaching the rpi to our computer over wifi, and through the robot radio, and even directly to our laptop, but we can’t seem to communicate with it.

We don’t expect to get the camera stream on our driver station, but we should at least get it on the screen we attached to the rpi (unless we’re skipping a step there too).

We’re giving it one last shot this afternoon, but if it fails, :disappointed:

This may be off base, but we just had a similar issue with not being able to communicate with the rpi. I’m not the programmer, but I seem to recall him saying there must be some code running on the roboRIO. It was a test bench at the time and the Rio had no code on it. I do know that we currently have our vision stream giving us data back on the high-power ports. We have all the items connected to a switch on the test stand.

Our next step is turning pixel coordinates into real life ones! :smiley:
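For the pixel-to-real-world step, a common starting point is converting a pixel column into a bearing angle with the pinhole camera model. A minimal sketch, assuming a hypothetical 640-px-wide image and a 60° horizontal field of view (substitute your own camera’s numbers):

```python
import math

# Assumed camera parameters -- replace with your camera's actual values.
IMAGE_WIDTH = 640          # pixels
HORIZONTAL_FOV_DEG = 60.0  # degrees; check your camera's datasheet

# Focal length in pixels from the pinhole model: width/2 = f * tan(fov/2)
FOCAL_PX = (IMAGE_WIDTH / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))

def pixel_to_bearing(x_px: float) -> float:
    """Horizontal angle (degrees) from the camera axis to pixel column x_px.

    Negative = target left of center, positive = right of center.
    """
    return math.degrees(math.atan((x_px - IMAGE_WIDTH / 2) / FOCAL_PX))
```

With a known target size you can extend the same model to estimate distance, but the bearing alone is already enough to aim the robot.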

The WiFi adapter is disabled on the Pi in the FRCVision image. The image gets its IP address through DHCP, so going through the robot radio should have worked, as the radio acts as a DHCP server when in AP mode. Alternatively, simply plugging the Pi into a normal Ethernet network (e.g. for internet access) should work.

If none of these work, perhaps the Pi isn’t booting, or there’s something more fundamentally wrong. The best way to debug this is to attach a keyboard and display to the Pi’s HDMI port, log in, and run “ifconfig -a” to see what network interfaces are coming up and what IP addresses are being assigned.

Plugging the Ethernet directly into a computer can work (just like the Rio) but can take a while to resolve a link-local address. No matter what, the computer you’re using will need an mDNS responder to find the “frcvision.local” address; one is installed by the NI tools, but another way to get one is to install iTunes. Another thing to test is running “ping frcvision.local” at a command prompt to see whether it’s an address-resolution issue or potentially a browser or firewall issue.

The FRCVision image only has a text console on the HDMI port, it does not have X-Windows installed for a graphical display. This particular image is designed for headless use, with the console only there for debugging purposes (see above).

There does not need to be code running on the Rio, but the image by default does need a DHCP server to talk to to get an address, whether that be provided by the radio or some other Ethernet network.

One last question, I think.
We managed to get a bunch of stuff working, like installing imutils and OpenCV, but we can’t seem to get imutils into our Python 3.6 folder. That should be simple enough to move.
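If imutils landed under the wrong Python, invoking pip through the interpreter you actually run usually fixes this, rather than moving folders by hand. Something like (assuming “python3.6” is on your PATH on the Pi):

```
rw                                  # make the filesystem writable first
python3.6 -m pip install imutils    # installs into python3.6's site-packages
python3.6 -c "import imutils"       # verify it imports under that interpreter
```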

However, if things fail, why can’t we simply run our vision program on the rpi without FRCVision? Is the image necessary for a competition environment?

The image is by no means required; it’s provided as a convenience to teams. You can run any Pi image you want to. The main things the FRCVision image provides are a read-only filesystem to prevent corruption when the Pi loses power, and preinstalled WPILib, etc.

If you run your own image, please do make sure to turn off the wifi and Bluetooth adapters.
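On recent Raspbian images, one way to do that (assuming a kernel new enough to support these device-tree overlays) is two lines in /boot/config.txt:

```
# /boot/config.txt -- disable the onboard radios at boot
dtoverlay=disable-wifi
dtoverlay=disable-bt
```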

Thanks for all the help!

What do you mean by “corruption?”
And since we’d lose all the WPILib classes, is there anything we need to do to talk with the roboRIO, or is something like setting the IP address doable from the rPi command line?

Corruption = data loss, potentially may not be able to boot the system again without manual intervention.

You’re going to need to send data to the RoboRIO somehow for vision to be useful. The typical way to do this with Python is using pynetworktables or pyntcore.

Yes, the IP address can be set via the rPi command line. You can use the ifconfig command to do it temporarily, or edit the /etc/dhcpcd.conf file to do it permanently.
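For example, a static address entry in /etc/dhcpcd.conf might look like this (the addresses follow the FRC 10.TE.AM.xx convention for team 4681; substitute your own team number and a host address that doesn’t collide with the radio, Rio, or DS):

```
# /etc/dhcpcd.conf -- example static address on the robot network
interface eth0
static ip_address=10.46.81.12/24
static routers=10.46.81.1
```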