Re: Using the Raspberry Pi for Vision Processing
We're doing a bit of that this year: vision on a Pi, using NetworkTables to send the data back to the robot.
Honestly, Python is a pretty good fit. The OpenCV docs for it are great and it just "flows" pretty well... mostly because the API is flat and doesn't have you hopping between namespaces like Imgproc and HighGui for random stuff. Plus there's no compile/deploy loop; you just run the code. I like it... and it's a good excuse to expose the students to another language.
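For flavor, here's roughly the shape of a Pi-side loop. This is a sketch, not our actual code: find_target() is a hypothetical stand-in for the cv2.inRange/findContours work, the 320-pixel width and 60-degree FOV are made-up numbers, and real publishing would go through pynetworktables.

```python
# Sketch of a Pi-side vision loop. Assumptions: find_target() stands in for
# the real OpenCV thresholding/contour work, and the resolution/FOV numbers
# below are illustrative, not measured.

FRAME_WIDTH = 320        # pixels (assumed camera resolution)
HORIZONTAL_FOV = 60.0    # degrees (assumed lens; measure your own)

def pixel_to_yaw(x, width=FRAME_WIDTH, hfov=HORIZONTAL_FOV):
    """Convert a target's x pixel coordinate to a rough yaw in degrees.

    Simple linear approximation: 0 at frame center, +/- hfov/2 at the
    edges. Good enough for aiming; a pinhole model would use atan instead.
    """
    return (x - width / 2) / (width / 2) * (hfov / 2)

def find_target(frame):
    """Placeholder for the OpenCV work: threshold with cv2.inRange, find
    contours with cv2.findContours, pick the best one, and return its
    center x (or None). Stubbed so this sketch stays self-contained."""
    return None

# Main loop shape (commented out so the sketch runs without a camera):
# while True:
#     ok, frame = capture.read()        # cv2.VideoCapture on the Pi
#     x = find_target(frame)
#     if x is not None:
#         table.putNumber("targetYaw", pixel_to_yaw(x))  # NetworkTables
```

Everything you touch lives under the one cv2 module, which is a big part of why it "flows."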
As an added bonus, I got one of our students today to say, "Awww, heck. I might as well learn vi while we're at it." That kid's going places! Whenever I ssh into the Pi to work on anything I use vim out of reflex and sometimes forget that the students have no flippin' clue how to work it.
One other advantage to rolling your own OpenCV code instead of using a GRIP pipeline is that we've replicated the same logic on the roboRIO in Java. So if the vision camera (our third one) is present at robotInit, the roboRIO processes the vision and publishes the data to NetworkTables. If that third camera isn't there, we assume the Pi is doing it. Because everything goes through NetworkTables, the code outside the vision system doesn't know or care which box the data came from. I kind of like it.
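The decision itself is tiny either way. Here's a Python sketch of the same idea (the actual roboRIO version is Java, and the camera_names list here is a hypothetical stand-in for however you enumerate cameras at robotInit):

```python
# Sketch of the "who runs vision?" decision made at robotInit.
# Assumption: camera_names is whatever your camera enumeration hands you;
# "vision-cam" is a made-up name for the third (vision) camera.

def pick_vision_host(camera_names, vision_camera="vision-cam"):
    """Return which box should process vision and publish to NetworkTables.

    If the vision camera showed up on the roboRIO, process there; otherwise
    assume the Pi has it and will publish instead. Consumers just read
    NetworkTables and never care which host produced the numbers.
    """
    return "roborio" if vision_camera in camera_names else "pi"
```

So `pick_vision_host(["front", "rear", "vision-cam"])` picks the roboRIO, and `pick_vision_host(["front", "rear"])` falls back to the Pi. The nice part is that nothing downstream ever calls this; it only reads the table.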
Last edited by Justin Buist : Yesterday at 18:39.
Reason: Posted too quick.