I am attempting to create my team’s first-ever vision tracking program (before kickoff if possible) based on the 2016 field. In the end, all we hope to accomplish is to have our robot turn to line up with the target.
So far we have wired a Raspberry Pi B rev. 2 (China model) to the roboRIO over USB (later we will wire it to a second VRM), then used a network switch to connect the radio’s extra port and the Pi. We have green LED rings and MS LifeCams on hand and ready to use, as well as some retroreflective tape. We are hoping to use GRIP on the Pi, as it appears easier, but we would be willing to change if there are complications. Currently, we have GRIP installed on a computer and have tested it using an image, but we just aren’t sure what to install where from now on to make the Pi process the image by itself (Raspbian?).
You’ll need an OS for the Pi (typically Raspbian).
You’ll need to find some way of launching your program at startup (typically by editing rc.local).
You’ll need to write the program which processes the images (you can use code generated by GRIP) and sends the results to the RIO somehow (probably NetworkTables); there is a rough sketch of this below.
You’ll probably want to enable SSH access on the Pi so you can log in remotely to make changes.
I suggest you start here and continue with the next page (“Off Board Vision Processing in Java”).
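For the startup and results pieces above, here is a rough sketch of what the Pi-side script might look like. It assumes GRIP’s code export produced a grip.py file containing a GripPipeline class (the output attribute name depends on how you built your pipeline), that pynetworktables and OpenCV are installed on the Pi, and that the roboRIO address placeholder is replaced with your own. A script like this could then be launched from rc.local with a line such as python3 /home/pi/vision.py & placed before the final exit 0.

#!/usr/bin/env python3
# Rough sketch: run a GRIP-generated pipeline on USB camera frames and publish
# the result over NetworkTables so the RIO can turn toward the target.
import cv2
from networktables import NetworkTables
from grip import GripPipeline  # file assumed to come from GRIP's code export

NetworkTables.initialize(server="roborio-XXXX-frc.local")  # XXXX = your team number
table = NetworkTables.getTable("vision")

camera = cv2.VideoCapture(0)   # the USB LifeCam
pipeline = GripPipeline()

while True:
    ok, frame = camera.read()
    if not ok:
        continue
    pipeline.process(frame)
    contours = pipeline.filter_contours_output  # attribute name depends on your pipeline
    if contours:
        # Report the horizontal center of the largest contour, in pixels.
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        table.putNumber("targetCenterX", x + w / 2)
        table.putBoolean("targetFound", True)
    else:
        table.putBoolean("targetFound", False)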
PS: I think you’re cutting it a little close to the kickoff to get vision done right. You’ve got less than 3 weeks left!
I more than agree with the cutting-it-close comment. My team began our robot-vision journey last year. We weren’t even close to being ready to deploy it in last year’s competition, even after three months of dedicated effort by a couple of students. We are now in a good position to deploy it in this year’s competition.
From our experience, I would suggest that teams starting from scratch plan on at least a year’s lead time to come up to speed, especially if you are as inexperienced as we were with Raspberry Pis, networking, and vision processing. And don’t forget about applying your targeting results to your sensor/motor control system.
Here are a few specific things we did, although there are many alternatives:
Learn all about IP addressing to make sure IP addresses are set properly, and how to set them on the Pi and the PC. Learn how to use ping, ifconfig, and netstat for troubleshooting communication issues.
We used a static IP address for Ethernet and a dynamic one for wireless, which was convenient for testing at our school or at home (there is an example of the static setup after this list).
Install PuTTY on the PC for logging in remotely
Install TightVNC on the PC and Pi for working in a GUI remotely
Install WinSCP on the PC for transferring files
Install Python 3 and OpenCV on the Pi (we didn’t use GRIP; we programmed directly in Python using the OpenCV libraries). There is a stripped-down example of that kind of processing after this list.
Learn how to use sockets for transferring data (that was a suggestion from other teams); a bare-bones sketch follows this list.
We gave up quickly on using Java on the Pi; Python is the way to go. Our robot code is still in Java.
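On the IP addressing point, recent Raspbian images typically use dhcpcd, so a static Ethernet address usually goes in /etc/dhcpcd.conf. Something along these lines, where the 10.TE.AM.x numbers are placeholders for FRC’s team-number addressing scheme and should be replaced with your own:

interface eth0
static ip_address=10.TE.AM.12/24
static routers=10.TE.AM.1
static domain_name_servers=10.TE.AM.1

Leaving wlan0 alone keeps it on DHCP, so wireless still works at school or at home.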
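To give a flavor of what “programmed directly in Python using OpenCV” looks like, here is a stripped-down sketch of the usual retroreflective-target steps: convert to HSV, threshold for the green LED ring’s reflection, and find the biggest contour. The HSV bounds are placeholders you would tune against your own camera and lighting.

import cv2
import numpy as np

def find_target_center_x(frame):
    # Threshold for the bright green reflection of the LED ring (tune these bounds).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    # findContours returns different tuples across OpenCV versions; [-2] is the
    # contour list in all of them.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return x + w / 2  # horizontal center of the target, in pixels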
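And for the socket suggestion, a bare-bones UDP sender; the RIO address, port, and field names are placeholders (the 5800-5810 range is the block FRC sets aside for team traffic), and the robot code would need a matching listener on the other end.

import json
import socket

ROBORIO_ADDR = ("10.TE.AM.2", 5800)  # placeholder RIO address and team-use port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_result(center_x, found):
    # Package the targeting result as a small JSON datagram for the robot code.
    message = json.dumps({"centerX": center_x, "found": found})
    sock.sendto(message.encode("utf-8"), ROBORIO_ADDR)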
Thanks for your help! I was just hoping to get as much setup done as possible (I expected it to be an ambitious goal). Currently I have Raspbian installed and have enabled the SSH access you mentioned before. But when I looked at the ScreenSteps documentation, the first step (with the command to view the camera stream) doesn’t specify where to input it. I tried it in the terminal on the Pi, but it didn’t seem to do anything.
I was under the impression you had an MS LifeCam. That command is specifically for those using the Raspberry Pi camera.
If you are running on a Raspberry Pi, it is actually possible to get the Raspberry Pi camera working with CameraServer as well. This has the advantage of being off the USB bus, so you don’t have to worry about running out of USB bandwidth. It requires running the following command at boot to enable it.
sudo modprobe bcm2835-v4l2
Furthermore, that doesn’t allow you to view the camera stream. It just enables the Pi camera.
TL;DR: Skip that step; it’s for hardware you aren’t using, to avoid software issues you won’t have.