20-01-2014, 15:27
sparkytwd (FRC #3574, Team Role: Mentor, Seattle)
Re: Yet Another Vision Processing Thread

Team 3574 here:

Quote:
Originally Posted by ubeatlenine

1. What co-processor did you use? The only information I have really been able to gather here is that Pi is too slow, but do Arduino/BeagleBone/Pandaboard/ODROID have any significant advantages over each other? Teams that used the DS, why not a co-processor?
2012 - We used an Intel i3 with the hope of leveraging the Kinect. We got this working, and in my opinion it was our most successful CV approach, though we eventually dumped the Kinect due to mounting and power issues. Running a system that requires a stable 12 volts off of a source that can dip to 8 and regularly sits at 10 was the biggest issue.

2013 - Odroid U2. CV wasn't as important for us this year since our autonomous didn't need realignment the way 2012's did. We ran into stability issues with the PS3 Eye camera and USB, which we fixed with an external powered USB hub. A super fast little (I do mean little) box. We hooked it up to an external monitor and programmed directly from the ARM desktop. Hardkernel has announced a new revision of this, the U3, which is only $65.

2014 - Odroid XU. The biggest difference here is no need for an external USB hub, since it has 5 ports. I've tested it with 3 USB cameras running simultaneously (2 PS3 Eyes and 1 Microsoft LifeCam HD) with no issues. Ubuntu doesn't yet support the GPU or running on all 8 cores, but a quad-core A15 at 1.6 GHz is pretty epic. If your team is more cost-conscious, this is pretty pricey at $170; at this point the U3 can probably keep up with it in terms of processing, and adding a powered USB hub is not too expensive.

I've played with both the BeagleBone Black and the PandaBoard, but with the amount of work we're having our vision processor do this year (see ocupus) I think we're addicted to the quad-core systems now.
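
If you're evaluating one of these boards, a quick multi-camera smoke test is a cheap way to check whether the USB subsystem keeps up. Here's a rough sketch in Python with OpenCV (hypothetical, not our actual code; the camera indices are assumptions, so check /dev/video* on your board):

Code:
import cv2

# Try to pull one frame from each USB camera, by V4L index.
for index in (0, 1, 2):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print("camera %d ok: %dx%d" % (index, frame.shape[1], frame.shape[0]))
    else:
        print("camera %d failed to deliver a frame" % index)
    cap.release()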

Quote:
Originally Posted by ubeatlenine

2. What programming language did you use? @yash101's poll seems to indicate that OpenCV is the most popular choice for processing. Our team is using Java, and while OpenCV has Java bindings, I suspect these will be too slow for our purposes. Java teams, how did you deal with this issue?
Python's OpenCV bindings. The language won't make that much of a performance difference: the way OpenCV's various language bindings are built, most of the performance-intensive work happens in the native code layer.
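
To make that concrete, here's a minimal sketch of a typical target-finding pass (hypothetical values; the two-value findContours return assumes the OpenCV 2.4-era bindings). Every heavy step is a single call into native code, so the per-pixel work never runs in Python:

Code:
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # stand-in for a camera grab
# Convert to HSV and threshold for a green-lit retroreflective target.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([60, 100, 100]), np.array([90, 255, 255]))
# Contour extraction also happens in native code; Python only sees the results.
contours, hierarchy = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
print("found %d candidate blobs" % len(contours))

The same argument applies to the Java bindings, so I wouldn't expect Java teams to be bottlenecked by the binding layer either.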
Quote:
Originally Posted by ubeatlenine
3. What camera did you use? I have seen mention of the Logitech C110 camera and the PS3 Eye camera. Why not just use the Axis camera?
PS3 Eye camera. We originally picked it in 2012 mostly because we thought it would be cute to have alongside the Kinect. At one of the regionals, though, we had really bad lighting conditions, which pushed us to switch over to IR for 2013; there are a lot of tutorials online for converting the PS3 Eye to IR.

As for why not the Axis camera: it's heavy, requires separate power, isn't easily convertible to IR, and you pay a latency cost going in and out of the MJPEG format.

Quote:
Originally Posted by ubeatlenine
4. What communication protocols did you use? The FRC manual is pretty clear on communications restrictions:

Is one of these protocols best for sending images and raw data (like numerical and string results of image processing)?
In 2012 we had no restrictions, so we just ran a TCP server in the cRIO's code. In 2013 we used NetworkTables, which is nicely integrated, and we'll use that again this year. In 2013 we did not send the raw camera feed; this year we've put together the ocupus toolkit to support doing that. It uses OpenVPN to tunnel between the DS and the robot over port 1180.
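
For anyone wondering what the NetworkTables side looks like from a co-processor, here's a rough Python sketch using pynetworktables (the import and method names have shifted between pynetworktables releases, and 10.35.74.2 just follows the 10.TE.AM.2 robot-address convention for our team number, so verify all of this against your version):

Code:
from networktables import NetworkTables

# Connect to the robot at the conventional 10.TE.AM.2 address.
NetworkTables.initialize(server="10.35.74.2")
table = NetworkTables.getTable("vision")

# Publish whatever the pipeline computed; the keys here are made up.
table.putNumber("targetCenterX", 320.0)
table.putNumber("targetCenterY", 240.0)
table.putBoolean("targetVisible", True)

The robot-side code reads the same keys out of the "vision" table, which is what makes this so much nicer than hand-rolling a TCP protocol like we did in 2012.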