I am the new lead programmer for Team 2906 as of the middle of this FRC season. We currently use Java and have just gotten the simplest PID subsystem to mostly work. We don't have much when it comes to hardware or microcontrollers, but we do have two Raspberry Pi Bs. We also have one green LED ring and two Microsoft LifeCams. My team has never even attempted any kind of vision and only got encoders working this year.
First question: with the hardware we currently have, is it possible to do vision processing to target retroreflective tape on a field?
Second question: how would we go about hooking this up to the robot? What do we need, and where does it plug in?
Third: what language do we use for the Pi? I have installed NOOBS on an SD card and booted it once successfully, just to kind of look through it.
Fourth: how do we tell the roboRIO what to do based on what the camera sees and the Pi interprets? I am guessing we use some sort of PID.
Fifth: are there any simple example programs from teams on GitHub or anywhere else that might help us understand what we are doing?
Sixth: should I just go through all the WPI documentation and see what I can find and figure out from that? P.S. I will probably do this anyway.
Thank you in advance for your help; anything helps.
Some advice first: you don't need to start off using the Raspberry Pi. I think you'll have an easier time running your vision processing code on either your driver station laptop or on the roboRIO itself. A co-processor generally gives you plenty of performance with little strain on the roboRIO, but you don't need one to get vision working. GRIP can run on the DS laptop, run on a co-processor, or generate code for the RIO, so it's a great tool that covers most of the possible platforms with relative ease. It also helps significantly to have some way to view the output of your code in real time, even during competition, so you can troubleshoot and make quick changes.
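If you do try the generate-code-for-the-RIO route, the usual pattern (roughly following the WPILib ScreenSteps example) is to run the GRIP-generated pipeline in a VisionThread and pull a target measurement out of its output. This is an untested sketch; GripPipeline is whatever class name GRIP generated for you (using the export option that implements WPILib's VisionPipeline interface), and it assumes your pipeline ends with a Filter Contours step:

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.vision.VisionThread;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class Robot extends IterativeRobot {
    private static final int IMG_WIDTH = 320;  // should match the resolution GRIP was tuned at

    private VisionThread visionThread;
    private final Object imgLock = new Object();
    private double centerX = IMG_WIDTH / 2.0;  // default to "target centered"

    @Override
    public void robotInit() {
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(IMG_WIDTH, 240);

        // Run the GRIP-generated pipeline on every frame in a background thread.
        visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
            if (!pipeline.filterContoursOutput().isEmpty()) {
                Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                synchronized (imgLock) {
                    centerX = r.x + (r.width / 2);  // target's x position in the image
                }
            }
        });
        visionThread.start();
    }

    @Override
    public void autonomousPeriodic() {
        double turn;
        synchronized (imgLock) {
            turn = (centerX - IMG_WIDTH / 2.0) * 0.005;  // simple proportional steering
        }
        // Feed `turn` into your drive code here, e.g. arcadeDrive(0.0, turn).
    }
}
```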
To answer your questions:
Definitely. All you need is a camera and an LED ring (and time).
Just plug a camera into a USB port on the roboRIO, and wire the LED ring to the appropriate connectors on the VRM. Some teams use fancier setups that allow programmatically controlling the state of the LEDs, but that isn't necessary for basic vision processing. If you are using a Raspberry Pi, wire it to the spare radio port over Ethernet and power it via USB from the roboRIO.
I believe Python is a common choice, though there aren't any inherent restrictions that I'm aware of. Some teams use Java so they don't need to learn a new language. GRIP is also an option and would require no programming on the Pi.
This is usually the hardest part with vision. PID is often used to get robot components to a precise location. Team 254 gave a presentation in 2016 about integrating movement with vision, which can be found in this thread.
YES! The ScreenSteps documentation will give you nearly all the information you need to get a vision processing system working with a robot. There have also been many good discussions here on CD, so search around to find more great info.
We hooked the two Microsoft cameras to USB ports on the Raspberry Pi. The USB port of the roboRIO provides 0.9 A, which is barely within spec for the Raspberry Pi 2 B and shouldn't be enough to power the two cameras as well (though we have done it). You should rig up a way to provide +5 V at 2.0 A over a USB cable.
We used Java on the RPi3 and C++ on the roboRIO (but Java would be fine there too).
We put the relative offset of the vision target into NetworkTables from the RPi3 and read it in an autonomous command on the roboRIO. That command used PID to steer toward that relative offset.
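To make that concrete, here's a rough, untested sketch of the roboRIO side using the 2017-style NetworkTables API. I've written it in the simpler IterativeRobot style with a proportional-only loop; the table name "vision", the key "targetOffset", and the PWM channels are all placeholders that need to match your own setup:

```java
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class Robot extends IterativeRobot {
    private static final double KP = 0.02;  // proportional gain -- tune on carpet

    private RobotDrive drive;
    private NetworkTable visionTable;

    @Override
    public void robotInit() {
        drive = new RobotDrive(0, 1);                   // placeholder PWM channels
        visionTable = NetworkTable.getTable("vision");  // must match what the Pi publishes to
    }

    @Override
    public void autonomousPeriodic() {
        // Horizontal offset of the target from image center, written by the Pi.
        double offset = visionTable.getNumber("targetOffset", 0.0);

        // Proportional-only steering: turn harder the further the target is off-center.
        // A full PID controller adds I and D terms, but P alone is a reasonable start.
        drive.arcadeDrive(0.0, KP * offset);
    }
}
```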
Yes. Look through the vision information on ScreenSteps Live. You could click on my name and look through my other posts for sample code and more specific links.
Never a bad idea.
P.S. Now that I see the reply that was posted while I was typing, I realize I didn't mention that much of our Java was generated by GRIP on a PC. GRIP is an excellent way to visualize and tweak your vision processing before putting it on a coprocessor.
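In case it helps to see the shape of the coprocessor side, here's a rough, untested sketch of a standalone Java program on the Pi that runs a GRIP-generated pipeline on camera frames and publishes the target offset to NetworkTables. It assumes the OpenCV and NetworkTables Java libraries are installed on the Pi, that the generated class is called GripPipeline with a Filter Contours step at the end, and that the table and key names match whatever your roboRIO code reads:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class VisionMain {
    private static final int IMG_WIDTH = 320;  // should match the resolution the pipeline was tuned at

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // load the OpenCV native library

        // Connect to the roboRIO's NetworkTables server (old 2017-style API).
        NetworkTable.setClientMode();
        NetworkTable.setTeam(2906);
        NetworkTable.initialize();
        NetworkTable table = NetworkTable.getTable("vision");  // placeholder table name

        VideoCapture camera = new VideoCapture(0);   // first USB camera on the Pi
        GripPipeline pipeline = new GripPipeline();  // the class GRIP generated for you
        Mat frame = new Mat();

        while (camera.read(frame)) {
            pipeline.process(frame);
            if (!pipeline.filterContoursOutput().isEmpty()) {
                Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                // Offset of the target's center from the image center, in pixels.
                double offset = (r.x + r.width / 2.0) - IMG_WIDTH / 2.0;
                table.putNumber("targetOffset", offset);
            }
        }
    }
}
```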
I think I'm in the same situation you are in. I'm the new software lead and our team has never successfully completed vision. I've been working with GRIP and planning on transferring that to the Raspberry Pi, but I don't know where to start. I'm confused about how to get the code to run on the Raspberry Pi on startup. I've gotten GRIP to generate a file that can recognize and find the blobs, but I can't find any documentation online beyond that.
We worked with the Pi for Stronghold, and it works well. We started with GRIP, but at that time it wasn't working on the Pi, so we switched to Python and OpenCV. I would start by looking at http://www.pyimagesearch.com/. There are some great things there. A good first project might be to take a sample OpenCV project and get it running on a Pi: http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/. That blog post walks through the code too.
Absolutely. Check out our GitHub, where we actually run two video streams: one from an Axis camera that's plugged into a switch, and another from a USB camera plugged into the roboRIO that streams to a port we connect to through the same switch. https://github.com/Daltz333/2017-FRC-Dual-Tracker-System
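For reference, the roboRIO side of a setup like that can be as small as a couple of CameraServer calls. This is just a hedged sketch, and the Axis camera hostname is a placeholder:

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;

public class Robot extends IterativeRobot {
    @Override
    public void robotInit() {
        // Stream the USB camera plugged into the roboRIO to the dashboard.
        UsbCamera usbCamera = CameraServer.getInstance().startAutomaticCapture();
        usbCamera.setResolution(320, 240);

        // The Axis camera serves its own MJPEG stream over the switch, but it can
        // also be registered with CameraServer so dashboards can find it by name.
        CameraServer.getInstance().addAxisCamera("axis-camera.local");  // placeholder hostname
    }
}
```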