Our team is currently struggling to get vision processing working this year. Our goal is to identify the retro reflective tape on the tower, and have our robot turn to shoot into the high goal accurately. We have access to a Raspberry Pi 2B, a Microsoft Kinect, and a USB Webcam. We want to do onboard vision processing using the Pi, because we don’t want to rely on the FMS’s slow connection.
Would you guys have any suggestions for us as to which vision processing software to use on the Pi (GRIP, RoboRealm, or OpenCV), and also how to transmit the data to the roboRIO using NetworkTables? Any help would be greatly appreciated!
We went with OpenCV. We were originally going to go with GRIP, but at that time they hadn’t figured out how to get GRIP running on the Pi. We didn’t want to change frameworks after starting, so we went with OpenCV plus pynetworktables.
It was an easy install; just make sure you follow the steps for your flavor and version of Linux. It was about a four-hour process of downloading packages and compiling. We used Raspbian.
On a side note, I explained the “Adapter Pattern” to the programmers: wrap the pynetworktables interface with an adapter class on both the Pi and the robot code. That way, if we ever decide to replace pynetworktables, the change is confined to the two adapters and doesn’t touch any logic code.
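A rough sketch of what I mean is below. This is just the idea, not our actual code; the class name, table name, and keys are placeholders, and the exact pynetworktables initialization calls may differ between releases.

from networktables import NetworkTable

class VisionTableAdapter:
    """All NetworkTables traffic goes through this class, so swapping
    pynetworktables out later only touches the adapter, not the vision logic."""

    def __init__(self, robot_address="roborio-XXXX-frc.local", table_name="vision"):
        NetworkTable.setIPAddress(robot_address)  # roboRIO hostname or 10.TE.AM.2 address
        NetworkTable.setClientMode()              # the Pi connects as a client
        NetworkTable.initialize()
        self._table = NetworkTable.getTable(table_name)

    def send_target(self, angle_degrees, found):
        # The vision loop only ever calls these methods, never pynetworktables directly.
        self._table.putNumber("target_angle", angle_degrees)
        self._table.putBoolean("target_found", found)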
Another reason for starting out with pynetworktables is that if we decide to move to GRIP later, it’s already speaking the same language.
We are actually using LabVIEW for our main robot code. I’m not sure if pynetworktables would work for us; it seems to only work with Java robot code. Also, how is OpenCV going for you guys? Have you run into any major issues?
If you run your code on the Pi, it will be using a NetworkTables implementation that is compatible with the LabVIEW implementation.
You can also look at the vision example on the getting started page. It shows some basics of camera and vision processing and maps the target info into a pretty useful coordinate space for steering the robot.
If you are looking to steer the robot using the camera, you may also want to consider taking an image, processing it for the angular offset to the target, and then using a gyro to turn to the target. Cameras are pretty slow sensors, and using them to measure how much a robot has turned is not easy. Anyway, if you take this approach, you don’t necessarily need a coprocessor.
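For example, a first pass at getting that angular offset from a single image could look like the snippet below. The 640-pixel width and 60-degree horizontal field of view are placeholder numbers for a typical USB webcam; use your camera’s real specs (an atan-based formula is more accurate near the edges of the frame).

IMAGE_WIDTH_PX = 640        # placeholder: your camera's horizontal resolution
HORIZONTAL_FOV_DEG = 60.0   # placeholder: your camera's horizontal field of view

def angle_to_target(target_center_x):
    # Pixels right of center map to positive degrees, left of center to negative.
    offset_px = target_center_x - IMAGE_WIDTH_PX / 2.0
    return offset_px * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX)

# On the robot: record the current gyro heading, add angle_to_target(...) to it,
# and turn until the gyro reaches that setpoint -- the camera is only sampled once.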
Actually, I’ve been told (and observed with the FRC Dashboard) that LabVIEW’s NT implementation is not backwards compatible with NT2, which is what pynetworktables implements. You would not be able to use the current release version of pynetworktables with LabVIEW.
However, we have a beta version available (version 2016.0.0alpha1) that implements bare NT3 support, which you can install via pip install --pre pynetworktables … it has the same API, so it should work without problems, but it isn’t as well tested.
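As a quick sanity check that the pre-release build is talking to a LabVIEW robot (acting as the NT3 server), something along these lines should work; the hostname and table/key names here are only placeholders.

from networktables import NetworkTable

NetworkTable.setIPAddress("roborio-XXXX-frc.local")   # or the 10.TE.AM.2 address
NetworkTable.setClientMode()                          # the Pi is a client of the robot
NetworkTable.initialize()
NetworkTable.getTable("Pi").putNumber("heartbeat", 1) # visible on the robot side if connected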
Ooh. Sorry about that. I think I knew that at one time. The LV implementation that shipped in 2015 implemented 2.0, and we updated that to 3.0 this year, but didn’t try to merge them and do all of the testing to ensure they interoperated.
If a team needs a device to do 2.0, such as for a demo, use the 2015 code. Hopefully we will all be on 3.0 soon.