Chief Delphi > Technical > Programming
#16
20-01-2014, 22:04
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by faust1706 View Post
2012:
1. A custom-built computer running Ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. ODROID X2
2. OpenCV in C
3. Microsoft Kinect with added illuminator
4. UDP
2014:
1. 3 or 4 ODROID XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed and the Asus Xtion for depth
4. UDP

Having sent off a number of people well on their way with computer vision, I can now offer help to more people. If you want some sample code in OpenCV in C and C++, PM me your email so I can share a Dropbox with you.

Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
I think we're already blown away by the processing power of one ODROID! Your robot is going to catch fire with that much processing power (not literally).

Before you switch to 3 XUs, try one X2 again, but measure its processor usage, and then place your order after that. I'm pretty sure that 4 XUs will draw as much as one CIM running continuously, especially with the 3 120-degree cameras and the Kinect!
Be careful there, or everyone will wonder why the voltage drops every time you boot the XUs!
#17
20-01-2014, 23:02
MoosingIn3space is offline
Programming Division Captain
FRC #3334 (Eagle Robotics)
 
Join Date: Jul 2013
Rookie Year: 2012
Location: Salt Lake City, UT
Posts: 13
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by faust1706 View Post
2012:
1. A custom-built computer running Ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. ODROID X2
2. OpenCV in C
3. Microsoft Kinect with added illuminator
4. UDP
2014:
1. 3 or 4 ODROID XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed and the Asus Xtion for depth
4. UDP

Having sent off a number of people well on their way with computer vision, I can now offer help to more people. If you want some sample code in OpenCV in C and C++, PM me your email so I can share a Dropbox with you.

Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
My team already has our architecture set, but I'm curious about yours. How do you intend to coordinate the image processing over the multiple nodes? Do you already have a multi-node OpenCV library? As far as I know, OpenCV doesn't have an MPI-enabled version, only a multithreaded one.
#18
21-01-2014, 00:03
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by MoosingIn3space View Post
My team already has our architecture set, but I'm curious about yours. How do you intend to coordinate the image processing over the multiple nodes? Do you already have a multi-node OpenCV library? As far as I know, OpenCV doesn't have an MPI-enabled version, only a multithreaded one.
Well, 1706 loves using the Kinect, and they say they will use 3 cameras. I guess there will be one board for each camera and one for the Kinect. Maybe one of them does the data manipulation, or maybe it is done on the cRIO.

By the way, Hunter, how do you keep your ODROIDs from corrupting on the abrupt power-down? Do you have a special mechanism to shut down each node? Also, which converter are you using to power the Kinect? I don't think it would be wise to connect it directly to the battery/PDB; you'd need some 12V-to-12V converter to smooth out the voltage drops/spikes!

As for your question about multi-node processing, I think you're misreading what he is doing: he's giving each of the 3-4 cameras onboard the bot its own computer. Hunter will probably just use regular UDP sockets, as he said in his post. Either each XU can have its own UDP connection to the cRIO, or there could be a master XU that communicates with each slave XU, processes what they see, and beams the info to the cRIO!

However, I think it is still overkill to have more than 2 onboard computers besides the cRIO!
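For reference, a minimal sketch of the UDP scheme being discussed — a coprocessor packing x, y, and yaw into a small datagram for the robot controller. The IP address, port, and packet layout here are hypothetical, not anything 1706 has published:

```python
import socket
import struct

# Hypothetical cRIO address; a real robot would use its team's static IP scheme.
CRIO_ADDR = ("10.17.6.2", 1130)

def send_pose(sock, x, y, yaw):
    # Pack three floats into a fixed 12-byte datagram (network byte order),
    # so the receiver can parse it without any framing logic.
    packet = struct.pack("!fff", x, y, yaw)
    sock.sendto(packet, CRIO_ADDR)

def recv_pose(sock):
    # Blocking receive; returns (x, y, yaw) from one datagram.
    data, _addr = sock.recvfrom(64)
    return struct.unpack("!fff", data)
```

Since UDP is connectionless, a dropped packet just means the controller reuses the previous pose for one cycle, which is usually acceptable at 20+ packets per second.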
#19
21-01-2014, 01:03
MoosingIn3space is offline
Programming Division Captain
FRC #3334 (Eagle Robotics)
 
Join Date: Jul 2013
Rookie Year: 2012
Location: Salt Lake City, UT
Posts: 13
Okay, I see now. I was under the impression that he was using them like an HPC cluster. Thanks for the clarification!
#20
21-01-2014, 11:10
faust1706 is offline
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Yet Another Vision Processing Thread

We actually aren't using the Kinect this year (sad for me, but my mentor wanted to get away from it). The Genius 120 has a FOV of 117.5x80. Our plan is to have 360-degree vision processing. We have a mentor who is a computer vision professor at a local state university, and a kid's dad, who is the head of the comp sci department at said university, stops by on occasion. Both of them know how to multithread, and Qt apparently has a method for doing it too.

The X2 is slightly slower than the XU according to our tests. A company bought us all these boards and cameras in exchange for us teaching them what we did with them. They are a biomed company and do a decent amount of biomedical imaging.

We're apprehensive about relying entirely on the Asus Xtion for ball and robot tracking because of the amount of IR light that gets put onto the field.

One task at a time. Field location and orientation is almost done. Next up is ball tracking.

We are using 3 or 4 cameras. Though the XU is powerful, I don't want FPS to drop below 20, and I think that would happen. So it's easier for each camera to get its own board (especially when you already have the XUs on hand).

We had a voltage regulator for our X2 and Kinect, which helped. We didn't notice a problem with just shutting off the robot as a means of turning the computer off.

Using so many boards is sort of a proof of concept. We could have an autonomous robot, but it'd be a one-man team, which isn't how we are going to play the game. We are trying as many things as we can, within reason, so we learn more and can do more in the future.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."

Last edited by faust1706 : 21-01-2014 at 11:19.
#21
21-01-2014, 11:29
ArzaanK is offline
Registered User
FRC #1325 (Inverse Paradox)
Team Role: Programmer
 
Join Date: Jan 2013
Rookie Year: 2012
Location: Mississauga, Ontario, Canada
Posts: 40
Re: Yet Another Vision Processing Thread

This was our first year using vision processing on team 1325. We use the following:

1. We plan to use our driver station running RoboRealm. It is quite powerful and easy to use. I managed to track the hot goal and send data to the robot within a few hours with little help from a mentor.

2. We program our robot in C++. Using RoboRealm with it is a breeze.

3. We are currently using an old Axis 206 camera for testing, but will be receiving a new M1013 camera very shortly. (Actually, it's somewhere in the school, we just have to find it.)

4. We use NetworkTables. It integrates quite nicely with RoboRealm and is easy to operate.
__________________
Arzaan Khairulla
Programmer/Driver
2013 Greater Toronto Regional East Winners with 1114 and 2056
2013 Galileo Division
#22
21-01-2014, 11:38
Tom Line is offline
Raptors can't turn doorknobs.
FRC #1718 (The Fighting Pi)
Team Role: Mentor
 
Join Date: Jan 2007
Rookie Year: 1999
Location: Armada, Michigan
Posts: 2,554
Re: Yet Another Vision Processing Thread

In the past, we used LabVIEW and the associated vision libraries with the cRIO to do our vision processing, and it was more than powerful enough.

This year, for reasons completely unrelated to processing power, we have moved our vision processing to the driver station.
#23
21-01-2014, 13:16
sparkytwd is offline
Registered User
FRC #3574
Team Role: Mentor
 
Join Date: Feb 2012
Rookie Year: 2012
Location: Seattle
Posts: 102
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by faust1706 View Post
We are using 3 or 4 cameras. Though the XU is powerful, I don't want FPS to drop below 20, and I think that would happen. So it's easier for each camera to get its own board (especially when you already have the XUs on hand).
I'd recommend testing multiple cameras on a single XU. It's got 4 A15 cores, so it's worth seeing what impact running them side by side has. That'll simplify your wiring as well.
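A sketch of how that side-by-side test could be structured — one capture thread per camera feeding a shared queue. The `grab_frame` stub here stands in for a real `cv2.VideoCapture.read()` call; the camera count and frame count are arbitrary:

```python
import threading
import queue

def grab_frame(cam_id):
    # Stub: on real hardware, replace with cv2.VideoCapture(cam_id).read().
    return "frame-from-cam-%d" % cam_id

def camera_worker(cam_id, out_q, n_frames):
    # Each camera gets its own thread so one slow camera can't stall the rest.
    for _ in range(n_frames):
        out_q.put((cam_id, grab_frame(cam_id)))

def run_cameras(cam_ids, n_frames=5):
    out_q = queue.Queue()
    threads = [threading.Thread(target=camera_worker, args=(c, out_q, n_frames))
               for c in cam_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain the queue into one list of (cam_id, frame) tuples.
    results = []
    while not out_q.empty():
        results.append(out_q.get())
    return results
```

OpenCV's native routines generally release Python's GIL while they work, so with real `cv2` calls these threads can actually spread across the XU's cores.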
#24
21-01-2014, 14:11
apalrd is offline
More Torque!
AKA: Andrew Palardy (Most people call me Palardy)
VRC #3333
Team Role: College Student
 
Join Date: Mar 2009
Rookie Year: 2009
Location: Auburn Hills, MI
Posts: 1,347
Re: Yet Another Vision Processing Thread

Our history with vision:

2009: Could not get it to work with the given targets, no vision on dashboard

2010: Vision from an Axis 206 through the cRIO, for driver display only. Its only use ever was to take pictures at the beginning of auton to verify driver setup.

2011: Tested vision code using the Axis 206 on the cRIO; vision was not useful at all.

2012: Developed robust vision tracking system. Specifics:
Driver Station laptop
Axis 206 with LED ring
UDP from Dashboard to Robot
Framerate was 20fps, although the pipelined design meant ~100ms total latency.

2013: Vision not even attempted. Axis M1013 camera on robot for drivers, who never used it. Eventually removed camera for weight.

2014:
???
Plans include driver station laptop, Axis M1013, and UDP. Similar in design to 2012 code.

IMHO we really only need to process 1 image per match this year.
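The 2012 numbers above illustrate the usual pipelining trade-off: throughput is set by the slowest stage, while latency is the sum of all stages. With two hypothetical 50 ms stages (capture and process) running overlapped, you get the quoted 20 fps at ~100 ms latency:

```python
def pipeline_stats(stage_ms):
    # A new frame leaves the pipeline every `period` ms (the slowest stage),
    # but each individual frame spends `latency` ms passing through all stages.
    period = max(stage_ms)
    latency = sum(stage_ms)
    fps = 1000.0 / period
    return fps, latency

# Hypothetical stage times chosen to reproduce the figures quoted above.
fps, latency = pipeline_stats([50, 50])  # 20.0 fps, 100 ms
```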
__________________
Kettering University - Computer Engineering
Kettering Motorsports
Williams International - Commercial Engines - Controls and Accessories
FRC 33 - The Killer Bees - 2009-2012 Student, 2013-2014 Advisor
VEX IQ 3333 - The Bumble Bees - 2014+ Mentor

"Sometimes, the elegant implementation is a function. Not a method. Not a class. Not a framework. Just a function." ~ John Carmack
#25
21-01-2014, 19:10
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by sparkytwd View Post
I'd recommend testing multiple cameras on a single XU. It's got 4 A15 processors in it, so it's worth seeing what impact side-by-side has. That'll simplify your wiring as well.
Yeah, each XU's got some power! I actually think an XU is more powerful than my laptop with its i3-2367M; I only get ~2400 BMIPS in Ubuntu, however.

Not only will the wiring be hairy, the computers will draw a ton of current, so the battery work will not be fun!

By the way, the D-Link only has 4 Ethernet ports. Are you guys going to have an extra switch on the bot to give you more ports? I think you'll end up with 120 lbs of computer instead of robot, aluminum, and other important robot stuff!

Think wisely about what you will lose by having so many onboard computers!
#26
21-01-2014, 21:57
Dr.Bot
 
Posts: n/a
Re: Yet Another Vision Processing Thread

I am experimenting with the PandaBoard and ROS (Robot Operating System, from Willow Garage). So far I've got Ubuntu 13.04 and ROS Hydro on the board, and loaded the OpenNI stacks. It seems to be working, though not completely, with the Kinect. I've run the ROS code on a Raspberry Pi and a BeagleBone previously. The Kinect code did work on the Pi, but I can't recall if it worked on the BeagleBone.

The advantage of ROS is that you can integrate navigation sensors with vision, and OpenNI support is built in. The disadvantage is the learning curve and integration into the driver station; it's not clear it can be done with our Java code. NI has ROS bindings, so it may work that way.
#27
21-01-2014, 21:58
faust1706 is offline
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by yash101 View Post
Yeah, each XU's got some power! I actually think an XU is more powerful than my laptop with its i3-2367M; I only get ~2400 BMIPS in Ubuntu, however.

Not only will the wiring be hairy, the computers will draw a ton of current, so the battery work will not be fun!

By the way, the D-Link only has 4 Ethernet ports. Are you guys going to have an extra switch on the bot to give you more ports? I think you'll end up with 120 lbs of computer instead of robot, aluminum, and other important robot stuff!

Think wisely about what you will lose by having so many onboard computers!
Considering each XU weighs about 150 grams tops, if we use 4 that is 600 grams, or 1.32 lbs. The Genius 120 weighs 82.0 grams. Multiply that by 3 and we are at 246 g. That puts the total weight of our vision system at 1.86 pounds, not including wire weight. The Kinect weighs at least a pound. So we really aren't that much different than last year. Two years ago, on our custom-built computer, our vision system weighed over 6 pounds. Weight will not be an issue.

I know the XU has power, but when I tested the 3 Genius 120 cameras, the FPS dropped to 25 when all I was doing was grabbing the images and displaying them. Last year the slowest the vision algorithm ran during a match was 27 FPS.

We are sending all of our data into a program that will be running on one of the boards that calculates our xy position and yaw given how far away we are from the targets we see. If I only see one target, I will do a pose calculation to get xy field location and yaw. Then the xy coordinate and yaw get sent to the cRIO. So we only need to send data from 1 XU to the LabVIEW side of things. The XU has an IO connector, so we can have the boards communicate through that.
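A pure-math sketch of that kind of single-target calculation — turning the target's pixel position and apparent size into a bearing and distance, using the Genius 120's 117.5-degree horizontal FOV quoted earlier. The image resolution and target width are made-up values, and a real implementation would use a full pose solver (e.g. OpenCV's solvePnP) rather than this pinhole approximation:

```python
import math

IMG_W = 640                  # assumed capture width in pixels
HFOV_DEG = 117.5             # Genius 120 horizontal FOV (from the post above)
TARGET_WIDTH_M = 0.60        # hypothetical vision-target width in meters

# Pinhole model: focal length in pixels derived from the horizontal FOV.
FX = (IMG_W / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))

def bearing_deg(px):
    # Yaw from the camera centerline to a target centered at pixel column px.
    return math.degrees(math.atan((px - IMG_W / 2.0) / FX))

def distance_m(pixel_width):
    # Similar triangles: real width / distance == pixel width / focal length.
    return TARGET_WIDTH_M * FX / pixel_width
```

A target at the image center reads 0 degrees; one at the right edge reads half the FOV.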
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#28
21-01-2014, 22:23
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

If you have a good onboard switch, I suggest that you attempt to cluster them. That way, if one camera temporarily needs more resources, another XU can aid it, and vice versa. That will keep every XU near full throttle, the framerates from each camera equal, and the load spread out!
I think it might be better to just get an i7 with a CUDA-capable GTX GPU on a mini-ITX board, so you can juice performance!

What is the subtotal of all the cameras and the XUs? I bet it might be hard on the BOM!
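The clustering idea above can be sketched as a least-loaded dispatcher: each incoming frame goes to whichever node currently has the least queued work. The node names and the load metric (queued frame count) are made up for illustration; a real cluster would also have to ship the frames over that onboard switch:

```python
def pick_node(loads):
    # Return the node id with the smallest amount of queued work.
    return min(loads, key=loads.get)

def dispatch(frames, node_ids):
    # Greedy least-loaded assignment: each frame goes to the idlest node,
    # which keeps the per-node backlogs (and thus framerates) balanced.
    loads = {node: 0 for node in node_ids}
    assignment = []
    for frame in frames:
        node = pick_node(loads)
        assignment.append((frame, node))
        loads[node] += 1
    return assignment, loads
```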
#29
23-01-2014, 12:21
Invictus3593 is offline
time you like wasting is not wasted
FRC #3593 (Team Invictus)
Team Role: Leadership
 
Join Date: Jan 2013
Rookie Year: 2010
Location: Tulsa, OK
Posts: 318
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by ubeatlenine View Post
Teams that used the DS, why not a co-processor?
Our team does vision on the Dashboard simply because we don't need a coprocessor. Since the Dashboard gets the image from the robot anyway, we don't see the need to process it somewhere on the robot.

After processing, we just send a few variables back to the cRIO, cutting down on bandwidth usage and keeping all our code simple yet effective, without another $50 spent and countless extra hours coding and setting the thing up.
__________________
Per Audacia Ad Astra
#30
23-01-2014, 20:19
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

This is really just my opinion about vision software (so it is slightly biased):
OpenCV is really the king of vision APIs because it has a very large set of features, all of which it executes exceptionally well. The documentation available for OpenCV is unsurpassable, making it one of the easiest libraries to learn if you know what you want to learn. Not only that, the documentation and resources available attract so many users that the community is large enough that you can Google a question and find working example code! OpenCV is also quite resource-efficient! I believe the most complex program I have written requires only 64MB of RAM, an amount available on even many older computers!

OpenCV is also multithreaded, and supports CUDA, making it possible to max out both your CPU and GPU to accelerate the processing. I actually think the Raspberry Pi could run OpenCV quite well as soon as GPU computing libraries like OpenCL are released for it! That would make it possible to build a very inexpensive system capable of doing much more than you'd ever think possible!

OpenCV's learning curve molds around you! Just pick and choose what you want to learn first, and as you glean knowledge of vision processing, the jigsaw of artificial intelligence/computer vision will come together, allowing you to solve problems you previously thought impossible! You could either start with HighGUI, learning how to draw a snowman in a Mat and display it in a window, and move on to more complex things, or you could start by putting code together to make a powerful application, solving the jigsaw as you become more efficient at coding.

There are even books on OpenCV, something that NI Vision doesn't have. Nothing beats having a hard-copy book to use as a reference when you can't remember what a function does!

The only problems I see are that it would be hard to set up OpenCV to run on a robot, and that it would be somewhat hard to communicate from the computer to the cRIO!