#1 | 20-01-2014, 14:56
ubeatlenine
AKA: David
FRC #1512 (Metal Vidsters)
Team Role: Programmer
Yet Another Vision Processing Thread

Team 1512 will be giving vision processing its first serious attempt this year. I have been amazed by the wide variety of approaches to vision processing presented on this forum and am having trouble weighing the advantages and disadvantages of each approach. If your team has successfully implemented a vision processing system in the past, I would like to know four things:

1. What co-processor did you use? The only information I have really been able to gather here is that the Raspberry Pi is too slow, but do Arduino/BeagleBone/PandaBoard/ODROID have any significant advantages over each other? Teams that used the DS: why not a co-processor?

2. What programming language did you use? @yash101's poll seems to indicate that OpenCV is the most popular choice for processing. Our team uses Java, and while OpenCV has Java bindings, I suspect these will be too slow for our purposes. Java teams, how did you deal with this issue?

3. What camera did you use? I have seen mention of the Logitech C110 camera and the PS3 Eye camera. Why not just use the Axis camera?

4. What communication protocols did you use? The FRC manual is pretty clear on communications restrictions:
Quote:
Communication between the ROBOT and the OPERATOR CONSOLE is restricted as follows:

Network Ports:
TCP 1180: This port is typically used for camera data from the cRIO to the Driver Station (DS) when the camera is connected to port 2 on the 8-slot cRIO (P/N: cRIO-FRC). This port is bidirectional.
TCP 1735: SmartDashboard, bidirectional
UDP 1130: Dashboard-to-ROBOT control data, directional
UDP 1140: ROBOT-to-Dashboard status data, directional
HTTP 80: Camera connected via switch on the ROBOT, bidirectional
HTTP 443: Camera connected via switch on the ROBOT, bidirectional


Teams may use these ports as they wish if they do not employ them as outlined above (i.e. TCP 1180 can be used to pass data back and forth between the ROBOT and the DS if the Team chooses not to use the camera on port 2).

Bandwidth: no more than 7 Mbits/second.
Is one of these protocols best for sending images and raw data (like numerical and string results of image processing)?
#2 | 20-01-2014, 15:07
bvisness
Programming Mentor, Former Driver
FRC #2175 (The Fighting Calculators)
Team Role: Mentor
Re: Yet Another Vision Processing Thread

1. We plan to use our DS (running RoboRealm) for vision processing this year. We've never tried using a co-processor, but since we don't have very complicated uses for the vision system, we've had good success with doing the processing on the DS (even with the minimal lag it introduces).

2. We've used NI's vision code (running inside the Dashboard program) in the past, but this year we'll be using RoboRealm (as mentioned above). In my tests I've found that it's much easier to make changes on the fly, and the tracking is very fast and robust.

3. We just use the Axis camera. Since we need to get the camera feed remotely, IP cameras are pretty much the best way to go.

4. We just use NetworkTables, since it's efficient and easy and integrates nicely with RoboRealm. It's also easier for new programmers on the team to understand (and we have a lot of them this year...)
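
For Java teams wondering what the publishing side of NetworkTables looks like (RoboRealm does this part for us, so take this as a rough sketch of the idea, not our actual code): a small desktop client that pushes vision results to the robot. The table name, keys, and address are placeholders; 10.21.75.2 just follows the usual 10.TE.AM.2 convention.

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionPublisher {
    public static void main(String[] args) throws InterruptedException {
        // Run as a NetworkTables client pointed at the robot.
        // The address is a placeholder (10.TE.AM.2 convention).
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("10.21.75.2");
        NetworkTable table = NetworkTable.getTable("vision");

        while (true) {
            double targetX = 0.0; // stand-in for a real vision result
            // Publish results; the robot code reads back the same keys.
            table.putNumber("targetX", targetX);
            table.putBoolean("targetFound", false);
            Thread.sleep(50); // roughly 20 updates per second
        }
    }
}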
#3 | 20-01-2014, 15:08
MoosingIn3space
Programming Division Captain
FRC #3334 (Eagle Robotics)
Team 3334 here. We are using a custom-built computer with a dual-core AMD Athlon II and 4 GB of RAM. To power it, we are using a Mini-box picoPSU DC-DC converter.

On the software side, we are using C++ and OpenCV running atop Arch Linux. The capabilities of OpenCV are quite impressive, so read the tutorials!
I'm not sure about JavaCV, but a board like the one we built would definitely be fast enough to run Oracle Java 7.
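
For the curious, here is roughly what the plain OpenCV Java bindings (not JavaCV) look like for a capture-threshold-contour pipeline. This is a sketch we haven't run, assuming the OpenCV 2.4 Java API; the camera index and HSV bounds are placeholders you'd tune yourself.

Code:
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.highgui.VideoCapture;
import org.opencv.imgproc.Imgproc;

public class TargetFinder {
    public static void main(String[] args) {
        // The Java bindings are wrappers over the native library,
        // which has to be loaded before any OpenCV call.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture camera = new VideoCapture(0); // camera index is a placeholder
        Mat frame = new Mat(), hsv = new Mat(), mask = new Mat();

        while (camera.read(frame)) {
            // Threshold for a target lit by a green LED ring; these HSV
            // bounds are made up and would need tuning on real images.
            Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
            Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

            // Keep the largest bright blob and report its center.
            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            Rect biggest = null;
            for (MatOfPoint c : contours) {
                Rect r = Imgproc.boundingRect(c);
                if (biggest == null || r.area() > biggest.area()) biggest = r;
            }
            if (biggest != null) {
                System.out.println("target x = " + (biggest.x + biggest.width / 2));
            }
        }
    }
}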
#4 | 20-01-2014, 15:27
sparkytwd
FRC #3574
Team Role: Mentor
Re: Yet Another Vision Processing Thread

Team 3574 here:

Quote:
Originally Posted by ubeatlenine

1. What co-processor did you use? The only information I have really been able to gather here is that the Raspberry Pi is too slow, but do Arduino/BeagleBone/PandaBoard/ODROID have any significant advantages over each other? Teams that used the DS: why not a co-processor?
2012 - We used an Intel i3 with hopes of leveraging the Kinect. We got it working, and in my opinion it was our most successful CV approach, though we eventually dumped the Kinect due to mounting and power issues. Running a system that requires a stable 12 volts off a source that can hit 8 and regularly hits 10 was the biggest problem.

2013 - ODROID U2. CV wasn't as important for us this year since our autonomous didn't need realignment like in 2012. We ran into stability issues with the PS3 Eye camera and USB, which were fixed with an externally powered USB hub. A super fast little (I do mean little) box. We hooked it up to an external monitor and programmed directly from the ARM desktop. Hardkernel has announced a new revision of this, the U3, which is only $65.

2014 - ODROID XU. The biggest difference here is no need for an external USB hub, since it has 5 ports. I've tested it with 3 USB cameras running (2 PS3 Eyes and 1 Microsoft LifeCam HD) with no issues. Ubuntu doesn't yet support the GPU or running on all 8 cores, but a quad-core A15 running at 1.6 GHz is pretty epic. If your team is more cost-conscious, this one is pricey at $170; at this point the U3 will probably keep up with it in terms of processing, and adding a powered USB hub is not too expensive.

I've played with both the BeagleBone Black and the PandaBoard, but with the amount of work we're having our vision processor do this year (see ocupus), I think we're addicted to quad-core systems now.

Quote:
Originally Posted by ubeatlenine

2. What programming language did you use? @yash101's poll seems to indicate that OpenCV is the most popular choice for processing. Our team uses Java, and while OpenCV has Java bindings, I suspect these will be too slow for our purposes. Java teams, how did you deal with this issue?
Python's OpenCV bindings. Performance won't make that much of a difference: the way OpenCV's various language bindings are built, most of the performance-intensive work happens in the native code layer.
Quote:
Originally Posted by ubeatlenine
3. What camera did you use? I have seen mention of the Logitech C110 camera and the PS3 Eye camera. Why not just use the Axis camera?
PS3 Eye camera. We originally picked it in 2012 mostly because we thought it would be cute to have alongside the Kinect. At one of the regionals, though, we had really bad lighting conditions that made us switch over to IR for 2013; there are a lot of tutorials online for that conversion.

As for why not the Axis camera: it's heavy, requires separate power, isn't easily convertible to IR, and you pay a latency cost going in and out of the MJPEG format.

Quote:
Originally Posted by ubeatlenine
4. What communication protocols did you use? The FRC manual is pretty clear on communications restrictions:

Is one of these protocols best for sending images and raw data (like numerical and string results of image processing)?
In 2012 we had no restrictions, so we just ran a TCP server in the cRIO's code. In 2013 we used NetworkTables, which is nicely integrated, and we'll use that this year. In 2013 we did not send the raw camera feed; this year, we've put together the ocupus toolkit to support doing that. It uses OpenVPN to tunnel between the DS and the robot over port 1180.
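
If you roll your own instead of using NetworkTables, sending numeric results over UDP takes only a few lines. A rough Java sketch (the address, port, and message format are placeholders, and you should check the current rules for which ports are legal):

Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class VisionSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        // Placeholder robot address (10.TE.AM.2) and port from the allowed list.
        InetAddress robot = InetAddress.getByName("10.35.74.2");
        int port = 1130;

        double targetX = 123.4, targetY = 56.7; // stand-ins for real results
        // A simple comma-separated text format keeps the robot-side parser trivial.
        byte[] payload = (targetX + "," + targetY).getBytes("US-ASCII");
        socket.send(new DatagramPacket(payload, payload.length, robot, port));
        socket.close();
    }
}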
#5 | 20-01-2014, 15:46
Jared
no team
Team Role: Programmer
Re: Yet Another Vision Processing Thread

1. We use the driver station laptop, like 341 did in 2012.
2. LabVIEW. We took the vision example meant for the cRIO and copied and pasted it into our LabVIEW dashboard.
3. Axis Camera
4. Network Tables

The main advantage of this setup is ease of use. Getting a more complicated setup working can be difficult, with lots of tricky bugs to find. Using our driver station laptop (Intel Core 2 Duo @ 2.0 GHz, 3 GB RAM) gave us more than enough processing power (we could go up to 30 fps) and was the cheapest and simplest solution. The LabVIEW software is great for debugging because you can see the value of any variable/image at any time, so it's easy to find out what isn't working, and a pretty decent example was provided for us. The Axis camera was another easy solution because we already had one, and the library to communicate with it and change exposure settings was already there.

The NetworkTables approach worked really well too, and we got very little latency with it. We were able to auto line up with the goal (using our drive system, not a turret) in about a second, and we had it working before the end of week one, after two people spent about 2 hours on it. In the end we didn't need it in competition, since we could line up by driving until we hit the tower.

We're taking the same approach this year for detecting the hot goal. Compared to the other solutions, this is the cheapest/quickest/simplest, but you lose the advanced features of OpenCV. NI's vision libraries are pretty good, and the Vision Assistant program works nicely too, but in the end, some people say OpenCV has more. You need to decide whether the extra features are worth the extra work for your team.
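
Our code is LabVIEW, but the auto line-up logic is simple enough to sketch in Java for the teams asking about it: a proportional turn toward the target based on a published offset. The NetworkTables key, gain, deadband, and PWM channels are all made up, so treat this as the shape of the idea rather than our implementation.

Code:
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class AutoAimRobot extends IterativeRobot {
    private RobotDrive drive;
    private NetworkTable table;

    public void robotInit() {
        drive = new RobotDrive(1, 2); // PWM channels are placeholders
        table = NetworkTable.getTable("SmartDashboard");
    }

    public void autonomousPeriodic() {
        // "targetX" = horizontal error in pixels from image center,
        // published by whatever does the vision processing.
        double error = table.getNumber("targetX", 0.0);
        double kP = 0.005;                       // made-up gain; tune on the robot
        double turn = Math.max(-0.5, Math.min(0.5, kP * error));
        if (Math.abs(error) < 5) turn = 0;       // deadband so it settles when aligned
        drive.arcadeDrive(0.0, turn);
    }
}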
#6 | 20-01-2014, 15:47
sparkytwd
FRC #3574
Team Role: Mentor
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by MoosingIn3space
Team 3334 here. We are using a custom-built computer with a dual-core AMD Athlon II and 4 GB of RAM. To power it, we are using a Mini-box picoPSU DC-DC converter.

On the software side, we are using C++ and OpenCV running atop Arch Linux. The capabilities of OpenCV are quite impressive, so read the tutorials!
I'm not sure about JavaCV, but a board like the one we built would definitely be fast enough to run Oracle Java 7.
You might need to upgrade your power supply to the M4. When we went with an onboard x86, we started with the 160W pico. The problem you'll hit is when you run all your motors at top speed and your system voltage drops down to 10 volts; that caused our computer to shut down.
#7 | 20-01-2014, 16:50
MoosingIn3space
Programming Division Captain
FRC #3334 (Eagle Robotics)
Re: Yet Another Vision Processing Thread

In your experience, is the M4 stable? Ahh, thank goodness it's been less than 30 days since I bought that pico.

Thanks!
#8 | 20-01-2014, 17:39
sparkytwd
FRC #3574
Team Role: Mentor
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by MoosingIn3space
In your experience, is the M4 stable? Ahh, thank goodness it's been less than 30 days since I bought that pico.

Thanks!
Yes, the M4 was rock solid. With its 6-24 V input range it should handle any battery condition. The biggest risk is wiring it up backwards; we used a Sharpie paint pen to color the + terminal to mitigate that.
#9 | 20-01-2014, 18:25
yash101
Curiosity | I have too much of it!
no team
Re: Yet Another Vision Processing Thread

The OP suggests that Java would be slower. This is not necessarily true: it would be faster than the Python bindings, though C++ code would still be the fastest. The reasons why I (and many others) prefer to use OpenCV with C/C++ or Python:
- C/C++: fast, stable, and robust; OpenCV is written in C/C++; easy and well-documented.
- Python: easy to program, and easy to put a program together quickly and on short notice; you don't need to recompile every time you change a little code.
- Java: really just for the sake of it. Java is good, but it is so similar to C++ that you'd probably be better off learning the better-documented C/C++ API instead of the Java one. It comes down to personal preference, since you'll be using similar calls either way. Also, with so many C compilers out there, C is actually more portable than Java, which has a JVM for most, but not all, systems. Java really shines for Android, where you need maximum portability without recompiling the code every time it is downloaded. (The Java bindings themselves are thin; see the timing sketch below.)
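
To see how thin the Java wrapper is, time a native-heavy call. A quick, unscientific sketch (matrix size and loop count are arbitrary): nearly all of the time goes to the native pixel work, not the Java layer.

Code:
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class BindingOverheadDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // A 640x480 color frame, like a typical camera image.
        Mat frame = new Mat(480, 640, CvType.CV_8UC3);
        Mat blurred = new Mat();

        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            // Each call crosses into native code once and does all the
            // pixel work there; the per-call JNI cost is tiny by comparison.
            Imgproc.GaussianBlur(frame, blurred, new Size(5, 5), 0);
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("100 blurs took " + elapsedMs + " ms");
    }
}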

I am actually thinking about starting an OpenCV journal that explains what I have done, and what not to do so you don't shoot yourself in the foot! Beware: it will be long.

By the way, today I was working on setting up the Kinect for OpenCV; I will make a thread about it in a few minutes.
#10 | 20-01-2014, 20:02
faust1706
FRC #1706 (Ratchet Rockers)
Team Role: College Student
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by ubeatlenine
1. What co-processor did you use?

2. What programming language did you use?

3. What camera did you use?

4. What communication protocols did you use?
2012:
1. A custom-built computer running Ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. ODROID X2
2. OpenCV in C
3. Microsoft Kinect with an added illuminator
4. UDP
2014:
1. 3 or 4 ODROID XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed, and the ASUS Xtion for depth
4. UDP

Having set a number of people well on their way with computer vision, I can now offer help to more. If you want some sample OpenCV code in C and C++, PM me your email so I can share a Dropbox with you.

Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#11 | 20-01-2014, 20:36
Joe Ross
Unsung FIRST Hero
FRC #0330 (Beachbots)
Team Role: Engineer
Re: Yet Another Vision Processing Thread

2012:
1. Driver Station
2. NI Vision
3. Axis M1011
4. UDP
2013:
1. Driver Station
2. NI Vision
3. Axis M1011
4. Network Tables
2014:
1. Driver Station or cRIO (TBD)
2. NI Vision
3. Axis M1011
4. Network Tables

By using the provided examples and libraries, we were able to get a working solution in a minimal amount of time. The reason we use the driver station rather than an on-board processor is that it significantly simplifies the system: it reduces part count (fewer failure points), and it uses examples and libraries that already exist and have been tested by FIRST/NI.
#12 | 20-01-2014, 21:28
MoosingIn3space
Programming Division Captain
FRC #3334 (Eagle Robotics)
Getting OpenCV to receive images from the Kinect is quite simple with libfreenect's C++ interface and Boost threads. If there is enough interest, I'll post my code.
#13 | 20-01-2014, 21:36
cmwilson13
AKA: Christopher Wilson
no team
Team Role: Mentor
Re: Yet Another Vision Processing Thread

I don't think a co-processor is necessary; you have plenty of power to do the analysis on the cRIO. This was done on the first cRIO released in 2009, which is slower and has a much smaller processor cache than the newer cRIOs, and it worked fine.

You're not even tracking a moving target this year, so it should be even easier.

http://www.youtube.com/watch?v=Jl6MyCSELvM
__________________
"Like the WWF, but for smart people." -George HW Bush

Team Member 1771 2008-2009
Team Mentor 1771 2010-2012 2014-2016
Team Mentor 4509 2013-2014
Team Mentor 3998 2013-2014
#14 | 20-01-2014, 21:49
MoosingIn3space
Programming Division Captain
FRC #3334 (Eagle Robotics)
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by cmwilson13
I don't think a co-processor is necessary; you have plenty of power to do the analysis on the cRIO. This was done on the first cRIO released in 2009, which is slower and has a much smaller processor cache than the newer cRIOs, and it worked fine.

You're not even tracking a moving target this year, so it should be even easier.

http://www.youtube.com/watch?v=Jl6MyCSELvM
There is a very good reason to use a co-processor: any USB camera can be used. That opens up cameras with better framerates, resolutions, sensors, or other desirable characteristics. That's why my team committed to using one this season, after using cRIO-based analysis every season of our existence.
#15 | 20-01-2014, 21:52
cmwilson13
AKA: Christopher Wilson
no team
Team Role: Mentor
Re: Yet Another Vision Processing Thread

Why? You don't need any better cameras if the ones available can do the job, as they have for us every year.