Chief Delphi > Technical > Programming
#31
27-01-2014, 21:28
Dr.Bot
 
Posts: n/a
Re: Yet Another Vision Processing Thread

I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying the on-board status of sensors/electronics and for aligning the robot with the vision targets in autonomous mode.

While I have been successful capturing Kinect depth-camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.
#32
27-01-2014, 21:36
faust1706 is offline
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by Dr.Bot View Post
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying the on-board status of sensors/electronics and for aligning the robot with the vision targets in autonomous mode.

While I have been successful capturing Kinect depth-camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.

Do NOT assume Kinect depth will work on the field. Think about how much lighting is being thrown onto the field: the depth camera will not be able to read the IR pattern it emits. Consider yourself warned.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#33
28-01-2014, 14:00
sparkytwd is offline
Registered User
FRC #3574
Team Role: Mentor
 
Join Date: Feb 2012
Rookie Year: 2012
Location: Seattle
Posts: 102
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by Dr.Bot View Post
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying the on-board status of sensors/electronics and for aligning the robot with the vision targets in autonomous mode.

While I have been successful capturing Kinect depth-camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.
No rules directly prohibit it. Keep in mind the wiring and power distribution rules, though. You might also need a DC-DC boost power supply if the monitor requires 12 volts, as the battery can dip below 10 volts during operation.

One of the features I'm working on for ocupus is using an Android tablet as a tethered VNC client, which could even be removed before the match starts.
#34
02-02-2014, 16:48
charr is offline
Registered User
FRC #3504
 
Join Date: Aug 2013
Location: Pittsburgh
Posts: 20
Re: Yet Another Vision Processing Thread

Could you share some more progress info on using the ODROID-XU? We purchased one, but it hasn't gotten much use, since very little has been written about using it.
- Chris

Quote:
Originally Posted by faust1706 View Post
2012:
1. A custom-built computer running Ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. ODROID-X2
2. OpenCV in C
3. Microsoft Kinect with an added illuminator
4. UDP
2014:
1. 3 or 4 ODROID-XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed, and the Asus Xtion for depth
4. UDP

Having sent a number of people well on their way with computer vision, I can now offer help to more people. If you want some sample code in OpenCV in C and C++, PM me your email so I can share a Dropbox with you.

Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
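To make the pipeline in the tutorial above concrete: most of these setups boil down to thresholding each frame for the bright retroreflective target and then locating the blob. Here is a minimal pure-Python sketch of those two steps (no OpenCV dependency, so the helper names `threshold` and `centroid` are illustrative inventions; a real setup like the ones discussed would use `cv2.inRange` and `cv2.findContours`).

```python
# Sketch of the core of a vision-target pipeline: threshold a grayscale
# frame, then find the centroid of the bright pixels.

def threshold(frame, cutoff):
    """Return a binary mask: 1 where the pixel is at least `cutoff`."""
    return [[1 if px >= cutoff else 0 for px in row] for row in frame]

def centroid(mask):
    """Average (x, y) of the 'on' pixels, or None if the mask is empty."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, on in enumerate(row):
            if on:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny synthetic 4x4 "frame" with a bright 2x2 target in one corner.
frame = [
    [250, 250, 10, 10],
    [250, 250, 10, 10],
    [ 10,  10, 10, 10],
    [ 10,  10, 10, 10],
]
mask = threshold(frame, 200)
print(centroid(mask))  # -> (0.5, 0.5)
```

The centroid's horizontal offset from the image center is what teams typically feed to a turn controller for alignment.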
#35
02-02-2014, 16:51
charr is offline
Registered User
FRC #3504
 
Join Date: Aug 2013
Location: Pittsburgh
Posts: 20
Re: Yet Another Vision Processing Thread

Any suggestions for a 12 V-12 V converter? Does anyone have results on voltage stability issues with ODROIDs or other SBCs?
- Chris


Quote:
Originally Posted by yash101 View Post
Well, 1706 loves using the Kinect, and they say that they will use 3 cameras. I guess that there will be one for each camera and one for the Kinect. Maybe one of them does the data manipulation, or maybe it is done on the cRIO.

By the way, Hunter, how do you prevent your ODROIDs from being corrupted by the rapid power-down? Do you have a special mechanism to shut down each node? Also, which converter are you using to power the Kinect? I don't think it would be wise to connect it directly to the battery/PDB. You'd need some 12 V-12 V converter to eliminate the voltage drops/spikes!

As for your query about multiple nodes, I think you are misunderstanding what he is doing: having a computer for each of the 3-4 cameras on board the bot. Hunter will probably just use regular UDP sockets, as he said in his post. Either one UDP connection per XU to the cRIO, or maybe there can be a master XU that communicates with each slave XU, processes what they see, and beams the info to the cRIO!

However, I think it is still overkill to have more than 2 on-board computers, not counting the cRIO!
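The UDP links mentioned above are simple in practice: the co-processor packs its vision result into a small datagram and the robot controller reads it back out. Here is a hedged sketch over loopback (the helper names and the three-float payload layout are illustrative assumptions; on a real robot the target address would be the cRIO's IP, per the 10.TE.AM.x convention).

```python
import socket
import struct

def make_receiver(port=0):
    """Open a UDP socket bound to loopback (port 0 = pick a free port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    return sock

def send_result(addr, target_x, target_y, distance):
    """Pack three floats (target offset and range) and fire them off."""
    payload = struct.pack("!fff", target_x, target_y, distance)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, addr)
    sock.close()

def recv_result(sock):
    """Block for one datagram and unpack the three floats."""
    payload, _ = sock.recvfrom(1024)
    return struct.unpack("!fff", payload)

receiver = make_receiver()
send_result(receiver.getsockname(), 0.25, -0.5, 3.0)
print(recv_result(receiver))  # -> (0.25, -0.5, 3.0)
receiver.close()
```

UDP suits this job because a stale vision frame is worthless; if a datagram is dropped, the next frame's result replaces it anyway.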
#36
02-02-2014, 17:18
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

Well, for an SBC, you'd want a 12 V-5 V converter. A 12 V-12 V converter before that can stabilize the voltage a bit, too. Typically an SBC will be very stable because the 5 V converters are high quality. However, make sure you properly shut down the SBC after the match!

By the way, I am talking about the same model of voltage regulator that powers the D-Link!

Last edited by yash101 : 02-02-2014 at 17:20.
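A quick way to sanity-check converter sizing in discussions like this one is the basic power-balance arithmetic: input current = output power / (input voltage x efficiency). The numbers below are illustrative assumptions, not measurements of any particular board.

```python
# Back-of-the-envelope DC-DC converter sizing for an SBC.

def input_current(out_volts, out_amps, in_volts, efficiency):
    """Current drawn from the battery by a DC-DC converter."""
    return (out_volts * out_amps) / (in_volts * efficiency)

# An SBC drawing 2 A at 5 V, fed from a 12 V battery through an ~85%
# efficient converter, pulls a bit under 1 A from the battery...
print(round(input_current(5.0, 2.0, 12.0, 0.85), 2))  # -> 0.98

# ...but if the battery sags to 8 V under load, the converter must draw
# more current to hold its output up, which is why brownouts cascade:
print(round(input_current(5.0, 2.0, 8.0, 0.85), 2))  # -> 1.47
```

This is also why the battery-sag concern raised earlier in the thread matters: as the input voltage dips, the converter's input current rises, loading the battery further.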
#37
03-02-2014, 00:35
charr is offline
Registered User
FRC #3504
 
Join Date: Aug 2013
Location: Pittsburgh
Posts: 20
Re: Yet Another Vision Processing Thread

How about the other direction: 12 V to 19 V? We are going to try using an Intel NUC and need to figure out how to give it stable power. Currently we are using a car power adapter.

Quote:
Originally Posted by yash101 View Post
Well, for an SBC, you'd want a 12 V-5 V converter. A 12 V-12 V converter before that can stabilize the voltage a bit, too. Typically an SBC will be very stable because the 5 V converters are high quality. However, make sure you properly shut down the SBC after the match!

By the way, I am talking about the same model of voltage regulator that powers the D-Link!
#38
03-02-2014, 08:37
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they can either damage the NUC or cause it to reset once in a while.

If that's not easy to find, get a 12-to-20 V (or slightly higher) boost converter and use an LDO to bring it down to 19 V. Beware: the LDO will get very hot.

So, for now, just find a very high-quality boost converter, 12 to 19 volts. Also make sure it has a good driver, because a faulty chipset can let stray voltages leak in. The NUC is expensive, so it's not the kind of thing you want to break.

By the way, where'd you purchase the NUC from and how much did it cost? Also, how much time did it take from order to delivery?
#39
03-02-2014, 10:51
charr is offline
Registered User
FRC #3504
 
Join Date: Aug 2013
Location: Pittsburgh
Posts: 20
Re: Yet Another Vision Processing Thread

Newegg and Amazon have them. They are in stock, so the only wait is delivery time (under a week).
Quote:
Originally Posted by yash101 View Post
That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they can either damage the NUC or cause it to reset once in a while.

If that's not easy to find, get a 12-to-20 V (or slightly higher) boost converter and use an LDO to bring it down to 19 V. Beware: the LDO will get very hot.

So, for now, just find a very high-quality boost converter, 12 to 19 volts. Also make sure it has a good driver, because a faulty chipset can let stray voltages leak in. The NUC is expensive, so it's not the kind of thing you want to break.

By the way, where'd you purchase the NUC from and how much did it cost? Also, how much time did it take from order to delivery?
#40
04-02-2014, 00:13
Dr.Bot
 
Posts: n/a
Re: Yet Another Vision Processing Thread

On the Pandaboard, I had trouble using the Kinect and the OpenNI drivers. A friend told me that, in general, OpenCV and OpenNI are optimized for Intel/AMD and don't work well with ARM. I had better luck with the freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth-camera data from this arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) just for range to the wall in autonomous mode.

I think the ODROID is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.
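For anyone following the freenect route above: the raw values the Kinect's depth stream returns are 11-bit disparity readings, not distances, so they need a conversion step. The sketch below uses a calibration formula commonly cited in the OpenKinect community; the constants are empirical, vary somewhat per device, and should be treated as an approximation rather than the official conversion.

```python
import math

def raw_depth_to_meters(raw):
    """Approximate range in meters for an 11-bit raw Kinect disparity
    value, using an OpenKinect-community empirical calibration."""
    if raw >= 2047:  # 2047 marks 'no reading' (shadow, too close/far)
        return None
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)

# A mid-range raw reading works out to a bit under a meter:
print(round(raw_depth_to_meters(700), 2))  # -> 0.89
```

Note that the relationship is strongly nonlinear: depth resolution is good up close and degrades quickly with distance, which is one more reason the Kinect is better suited to short-range tasks than full-field ranging.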
#41
04-02-2014, 01:13
faust1706 is offline
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by Dr.Bot View Post
On the Pandaboard, I had trouble using the Kinect and the OpenNI drivers. A friend told me that, in general, OpenCV and OpenNI are optimized for Intel/AMD and don't work well with ARM. I had better luck with the freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth-camera data from this arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) just for range to the wall in autonomous mode.

I think the ODROID is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.
OpenCV and OpenNI work with ARM just fine in my experience; I've used two different ARM boards (the ODROID-X2 and XU) with both libraries.

I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work.

I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the OpenCV and OpenCV2 libraries.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#42
04-02-2014, 01:19
charr is offline
Registered User
FRC #3504
 
Join Date: Aug 2013
Location: Pittsburgh
Posts: 20
Re: Yet Another Vision Processing Thread

I'd be very interested in samples and info on OpenCV and the Odroid-XU

Quote:
Originally Posted by faust1706 View Post
OpenCV and OpenNI work with ARM just fine in my experience; I've used two different ARM boards (the ODROID-X2 and XU) with both libraries.

I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work.

I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the OpenCV and OpenCV2 libraries.
#43
04-02-2014, 08:04
Jerry Ballard is offline
Registered User
AKA: Jerry Ballard
FRC #0456 (Siege Robotics)
Team Role: Mentor
 
Join Date: Dec 2012
Rookie Year: 2011
Location: Vicksburg, MS, USA
Posts: 13
Re: Yet Another Vision Processing Thread

Quote:
Originally Posted by charr View Post
I'd be very interested in samples and info on OpenCV and the Odroid-XU
The ODROID variants can be found at hardkernel.com (http://www.hardkernel.com/main/main.php), and Pandaboard info can be found at pandaboard.org (http://pandaboard.org/).

This year we are using the ODROID-U3 and have found it to be more on the bleeding edge than the Pandaboard we used last year. Both systems can run Ubuntu Linux and OpenCV, so for a team just starting vision programming I would recommend beginning with the Pandaboard.
#44
04-02-2014, 08:17
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Yet Another Vision Processing Thread

I'd suggest compiling OpenCV yourself if you want to run it on ARM. That way, you can even select the features you want. I think that's also how you get OpenCV to run multi-core: OpenCV on my Ubuntu machine at home only runs on one core because I didn't compile it with TBB support!
#45
09-02-2014, 18:33
Ben Wolsieffer is offline
Dartmouth 2020
AKA: lopsided98
FRC #2084 (Robots by the C)
Team Role: Alumni
 
Join Date: Jan 2011
Rookie Year: 2011
Location: Manchester, MA (Hanover, NH)
Posts: 520
Re: Yet Another Vision Processing Thread

I just wanted to weigh in with our team's setup. This is the first year we have ever attempted vision processing. I am using the new Java interface for OpenCV (not JavaCV) to do processing in a SmartDashboard extension, which communicates back to the robot using NetworkTables. I started out using JavaCV but found it archaic and difficult; the new Java interface is really easy to work with.

I am noticing that a lot of teams are using a second computer on the robot to do vision. It seems like the power supply system would make that a pain to get working correctly. What's the advantage of doing that over doing vision on the driver station?

I can't wait until next year, when we'll have that kind of processing power to do vision (even with a Kinect, via the USB host port) on the roboRIO.