Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Yet Another Vision Processing Thread (http://www.chiefdelphi.com/forums/showthread.php?t=124962)

Dr.Bot 27-01-2014 21:28

Re: Yet Another Vision Processing Thread
 
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying on-board status of sensors/electronics and aligning the robot with vision targets for autonomous mode.

While I have been successful in capturing Kinect depth camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.

faust1706 27-01-2014 21:36

Re: Yet Another Vision Processing Thread
 
Quote:

Originally Posted by Dr.Bot (Post 1333419)
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying on-board status of sensors/electronics and aligning the robot with vision targets for autonomous mode.

While I have been successful in capturing Kinect depth camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.


Do NOT assume Kinect depth will work on the field. Think about how much lighting is being thrown onto the field: the depth camera will not be able to read the IR pattern it emits. Consider yourself warned.
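If a team experiments with the Kinect anyway, a cheap runtime sanity check is to measure what fraction of the depth frame came back invalid before trusting it: the Kinect reports 0 mm for pixels where it could not resolve its IR pattern, which is exactly what arena lighting causes. A minimal sketch in Python (the thread's actual code is C/C++; the 50% threshold is an assumption, not anyone's tested value):

```python
def depth_frame_usable(depth, max_invalid_frac=0.5):
    """Return True if enough of a depth frame has valid readings.

    depth: 2-D list of per-pixel depths in mm. The Kinect reports 0
    for pixels where it could not resolve the projected IR pattern,
    so a frame that is mostly zeros has been flooded out.
    """
    total = invalid = 0
    for row in depth:
        for d in row:
            total += 1
            if d == 0:
                invalid += 1
    return total > 0 and invalid / total <= max_invalid_frac
```

A vision loop could call this each frame and fall back to another sensor (or just the Axis camera) whenever the depth map is mostly holes.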

sparkytwd 28-01-2014 14:00

Re: Yet Another Vision Processing Thread
 
Quote:

Originally Posted by Dr.Bot (Post 1333419)
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying on-board status of sensors/electronics and aligning the robot with vision targets for autonomous mode.

While I have been successful in capturing Kinect depth camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.

No rules directly prohibit it, but keep in mind the wiring and power distribution rules. You might also need a DC-DC boost power supply if the monitor requires 12 volts, as the battery can dip below 10 volts during operation.

One of the features I'm working on for ocupus is using an Android tablet as a tethered VNC client, which could even be removed before the competition starts.

charr 02-02-2014 16:48

Re: Yet Another Vision Processing Thread
 
Could you share some more progress info on using the Odroid-XU? We purchased one, but it hasn't gotten much use since very little has been written about using it.
- Chris

Quote:

Originally Posted by faust1706 (Post 1329955)
2012:
1. A custom build computer running ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. O-Droid X2
2. OpenCV in C
3. Microsoft Kinect with added illuminator
4. UDP
2014:
1. 3 or 4 O-Droid XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed and the Asus Xtion for depth
4. UDP

Having sent a number of people well on their way with computer vision, I can now offer help to more. If you want some sample code in OpenCV in C and C++, PM me your email so I can share a Dropbox with you.

Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
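The "UDP" step in the setups quoted above is simple to sketch: the co-processor packs each vision result into a fixed-size datagram and fires it at the robot controller. A minimal illustration in Python (the thread's actual code is C/C++, and the packet layout, IP, and port here are assumptions, not 1706's real protocol):

```python
import socket
import struct

ROBOT_ADDR = ("10.0.0.2", 5800)  # hypothetical cRIO IP and port

# Hypothetical packet layout, network byte order:
# uint8 target-found flag, float32 bearing (deg), float32 range (m).
PACKET_FMT = "!Bff"

def pack_result(found, bearing_deg, range_m):
    """Pack one vision result into a fixed 9-byte UDP payload."""
    return struct.pack(PACKET_FMT, 1 if found else 0, bearing_deg, range_m)

def unpack_result(payload):
    """Inverse of pack_result; returns (found, bearing_deg, range_m)."""
    found, bearing, rng = struct.unpack(PACKET_FMT, payload)
    return bool(found), bearing, rng

def make_sender():
    """UDP is connectionless: one socket, fire-and-forget datagrams."""
    return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# In the vision loop you would do something like:
#   sock = make_sender()
#   sock.sendto(pack_result(True, 12.5, 3.25), ROBOT_ADDR)
```

Because each datagram is a complete snapshot, the robot side can simply keep the latest packet and ignore drops, which suits a 30 fps vision loop better than a TCP stream that would queue up stale data.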


charr 02-02-2014 16:51

Re: Yet Another Vision Processing Thread
 
Any suggestions for a 12V-12V converter? Does anyone have results on voltage-stability issues with ODROIDs or other SBCs?
- Chris


Quote:

Originally Posted by yash101 (Post 1330096)
Well, 1706 loves using the Kinect, and they say that they will use 3 cameras. I guess there will be one XU for each camera and one for the Kinect. Maybe one of them does the data manipulation, or maybe it is done on the cRIO.

By the way, Hunter, how do you prevent your ODROIDs from being corrupted by the abrupt power-down? Do you have a special mechanism to shut down each node? Also, which converter are you using to power the Kinect? I don't think it would be wise to connect it directly to the battery/PDB, etc. You'd need some 12V-12V converter to eliminate the voltage drops/spikes!

As for your query about multi-node setups, I think you are misunderstanding what he is doing: having a computer for each of the 3-4 cameras onboard the bot. Hunter will probably just use regular UDP sockets, as he said in his post. Either one UDP connection per XU can be used to the cRIO, or maybe there can be a master XU that communicates with each slave XU, processes what they see, and beams the info to the cRIO!

However, I think it is still overkill to have more than 2 onboard computers, not counting the cRIO!


yash101 02-02-2014 17:18

Re: Yet Another Vision Processing Thread
 
Well, for an SBC you'd want a 12V-5V converter. A 12V-12V converter before that can stabilize the voltage a bit too. Typically an SBC will be very stable because the 5V converters are high quality. However, make sure you properly shut down the SBC after the match!

By the way, I am talking about the same model of voltage regulator that powers the D-Link!

charr 03-02-2014 00:35

Re: Yet Another Vision Processing Thread
 
How about the other direction: 12V to 19V? We are going to try using an Intel NUC and need to figure out how to give it stable power. Currently we are using a car power adapter.

Quote:

Originally Posted by yash101 (Post 1336467)
Well, for an SBC you'd want a 12V-5V converter. A 12V-12V converter before that can stabilize the voltage a bit too. Typically an SBC will be very stable because the 5V converters are high quality. However, make sure you properly shut down the SBC after the match!

By the way, I am talking about the same model of voltage regulator that powers the D-Link!


yash101 03-02-2014 08:37

Re: Yet Another Vision Processing Thread
 
That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they will either damage the NUC or cause it to reset once in a while.

If that's not easy to find, get a boost converter that outputs 20V or a bit higher and use an LDO to bring it down to 19V. Beware: that LDO will get VERY hot.

So, for now, just find a very high-quality boost converter, 12V to 19V. Also make sure it has a good driver, because a faulty chipset can let stray voltages leak in. The NUC is expensive, so it's not the type of thing you want to break.
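To see why the boost-plus-LDO route runs hot: a linear regulator burns the entire voltage drop times the load current as heat. A quick back-of-the-envelope, assuming the NUC draws about 3 A (an assumption; check your model's actual draw):

```python
def ldo_dissipation_w(v_in, v_out, load_a):
    """Power wasted in a linear regulator: the full voltage drop
    across the device times the load current comes out as heat."""
    return (v_in - v_out) * load_a

# Boost to 21 V, LDO down to 19 V, NUC drawing ~3 A (assumed):
heat = ldo_dissipation_w(21.0, 19.0, 3.0)  # 6.0 W of waste heat
```

Six watts is far more than a small regulator package can shed without a serious heatsink, which is why a single well-made switching boost converter is the better plan.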

By the way, where'd you purchase the NUC from and how much did it cost? Also, how much time did it take from order to delivery?

charr 03-02-2014 10:51

Re: Yet Another Vision Processing Thread
 
Newegg and Amazon have them. They are in stock, so the only wait is delivery time (under a week).
Quote:

Originally Posted by yash101 (Post 1336718)
That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they will either damage the NUC or cause it to reset once in a while.

If that's not easy to find, get a boost converter that outputs 20V or a bit higher and use an LDO to bring it down to 19V. Beware: that LDO will get VERY hot.

So, for now, just find a very high-quality boost converter, 12V to 19V. Also make sure it has a good driver, because a faulty chipset can let stray voltages leak in. The NUC is expensive, so it's not the type of thing you want to break.

By the way, where'd you purchase the NUC from and how much did it cost? Also, how much time did it take from order to delivery?


Dr.Bot 04-02-2014 00:13

Re: Yet Another Vision Processing Thread
 
On the Pandaboard, I had trouble using the Kinect and OpenNI drivers. A friend told me that in general OpenCV and OpenNI are optimized for Intel/AMD and don't work well on ARM. I had better luck with the Freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth camera data from that arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) just for range to the wall in autonomous mode.
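For a plain range-to-the-wall number, a full point cloud is indeed overkill: a robust statistic over a small window in the middle of the depth frame is usually enough. A pure-Python sketch (the thread's actual code is C/C++; a real Kinect frame is a 640x480 array of millimeter depths, and the window size here is arbitrary):

```python
from statistics import median

def range_to_wall_mm(depth, window=5):
    """Estimate distance to whatever is centered in a depth frame.

    depth: 2-D list of per-pixel depths in mm (0 = no reading).
    Uses the median of a small centre window so single bad pixels
    and dropouts don't skew the estimate.
    """
    if not depth:
        return None
    rows, cols = len(depth), len(depth[0])
    r0 = rows // 2 - window // 2
    c0 = cols // 2 - window // 2
    samples = [depth[r][c]
               for r in range(r0, r0 + window)
               for c in range(c0, c0 + window)
               if depth[r][c] > 0]  # skip invalid (0 mm) pixels
    return median(samples) if samples else None
```

The median makes the estimate tolerant of the speckle and dropouts typical of structured-light depth cameras, with no point-cloud machinery at all.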

I think the Odroid is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.

faust1706 04-02-2014 01:13

Re: Yet Another Vision Processing Thread
 
Quote:

Originally Posted by Dr.Bot (Post 1337287)
On the Pandaboard, I had trouble using the Kinect and OpenNI drivers. A friend told me that in general OpenCV and OpenNI are optimized for Intel/AMD and don't work well on ARM. I had better luck with the Freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth camera data from that arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) just for range to the wall in autonomous mode.

I think the Odroid is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.

OpenCV and OpenNI work with ARM just fine in my experience; I have used two different ARM boards with those libraries (the ODROID X2 and XU).

I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work.

I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the opencv and opencv2 libraries.

charr 04-02-2014 01:19

Re: Yet Another Vision Processing Thread
 
I'd be very interested in samples and info on OpenCV and the Odroid-XU

Quote:

Originally Posted by faust1706 (Post 1337316)
OpenCV and OpenNI work with ARM just fine in my experience; I have used two different ARM boards with those libraries (the ODROID X2 and XU).

I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work.

I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the opencv and opencv2 libraries.


Jerry Ballard 04-02-2014 08:04

Re: Yet Another Vision Processing Thread
 
Quote:

Originally Posted by charr (Post 1337323)
I'd be very interested in samples and info on OpenCV and the Odroid-XU

The ODROID variants can be found at hardkernel.com (http://www.hardkernel.com/main/main.php) and Pandaboard info can be found at pandaboard.org (http://pandaboard.org/).

This year we are using the ODROID U3 and have found it to be more on the bleeding edge than the Pandaboard we used last year. Both systems can run Ubuntu Linux and OpenCV, so for a team just starting vision programming I would recommend beginning with the Pandaboard.

yash101 04-02-2014 08:17

Re: Yet Another Vision Processing Thread
 
I'd suggest compiling OpenCV yourself if you want ARM support. That way, you can even select the features you want. I think that's also how you get OpenCV to run multi-core: OpenCV on my Ubuntu machine at home only runs on one core because I didn't compile it with TBB support!

Ben Wolsieffer 09-02-2014 18:33

Re: Yet Another Vision Processing Thread
 
I just wanted to weigh in with our team's setup. This is the first year we have ever attempted vision processing. I am using the new Java interface for OpenCV (not JavaCV) to do processing in a SmartDashboard extension, which communicates back to the robot using NetworkTables. I started out using JavaCV but found it archaic and difficult; the new Java interface is really easy to work with.

I am noticing that a lot of teams are using a second computer on the robot to do vision. It seems like the power supply system would make it a pain to get working correctly. What's the advantage of that over doing vision on the driver station?

I can't wait until next year, when we will have that kind of processing power to do vision (even with a Kinect via the USB host port) on the roboRIO.



Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi