Re: Yet Another Vision Processing Thread
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini-LCD monitor and keyboard on the robot? This would be for displaying on-board status of sensors/electronics and for aligning the robot with vision targets in autonomous mode.
While I have been successful capturing Kinect depth camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We are having a lot of success with the Axis camera/RoboRealm and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.
Re: Yet Another Vision Processing Thread
Quote:
Do NOT assume Kinect depth will work on the field. Think about how much lighting is being thrown onto the field. The depth camera will not be able to read the IR pattern it emits. Consider yourself warned.
Re: Yet Another Vision Processing Thread
Quote:
One of the features I'm working on for ocupus is using an Android tablet as a tethered VNC client, which could even be removed before the competition starts.
Re: Yet Another Vision Processing Thread
Could you share some more progress info on using the Odroid-XU? We purchased one, but it hasn't gotten much use since very little has been written about it.
- Chris
Re: Yet Another Vision Processing Thread
Any suggestions for a 12V-12V converter? Does anyone have results on voltage-stability issues with Odroids or other SBCs?
- Chris
Re: Yet Another Vision Processing Thread
Well, for an SBC you'd want a 12V-5V converter; a 12V-12V regulator before that can stabilize the voltage a bit too. Typically an SBC will be very stable because the 5V converters are high quality. However, make sure you properly shut down the SBC after the match!
By the way, I am talking about the same model of voltage regulator that powers the D-Link!
Re: Yet Another Vision Processing Thread
How about the other direction - 12V to 19V? We are going to try using an Intel NUC and need to figure out how to give it stable power. Currently we are using a car power adapter.
Re: Yet Another Vision Processing Thread
That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they will either damage the NUC or cause it to reset once in a while.
If that's hard to find, get a 12V-to-20V (or a bit higher) boost converter and use an LDO to bring it down to 19V. Beware: that LDO will get very hot. So, for now, just find a very high quality boost converter, 12V to 19V. Also make sure it has a good driver, because a faulty chipset can let stray voltages leak in. The NUC is expensive, so it's not the type of thing to break. By the way, where did you purchase the NUC, how much did it cost, and how long did it take from order to delivery?
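The reason the LDO gets hot is simple arithmetic: a linear regulator burns the entire voltage drop times the load current as heat. A quick back-of-the-envelope sketch (the 24 V boost output and 3 A load are illustrative assumptions, not measurements):

```python
def ldo_dissipation(v_in, v_out, i_load):
    """Heat dissipated by a linear regulator, in watts:
    the full voltage drop times the load current."""
    return (v_in - v_out) * i_load

# Illustrative numbers: boost to 24 V, LDO down to 19 V, NUC drawing ~3 A.
print(ldo_dissipation(24.0, 19.0, 3.0))  # 15.0 watts of waste heat
```

Fifteen watts in a small linear package needs a serious heatsink, which is why a single good-quality boost converter is the saner route.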
Re: Yet Another Vision Processing Thread
Newegg and Amazon have them. They are in stock, so it's only delivery time (under a week).
Re: Yet Another Vision Processing Thread
On the Pandaboard, I had trouble using the Kinect with the OpenNI drivers. A friend told me that in general OpenCV and OpenNI are optimized for Intel/AMD and don't work well on ARM. I did have better luck with the Freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth camera data from this arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) just for range-to-the-wall in autonomous mode.
I think the Odroid is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got a hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.
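For anyone trying the same Freenect route, a minimal sketch of grabbing one depth frame looks roughly like this. It assumes the libfreenect Python wrapper (`import freenect`) and a connected Kinect; the raw-to-meters formula is Magnenat's published tangent-fit approximation, and sampling the center pixel is just an illustration:

```python
import math

def raw_to_meters(raw):
    """Approximate Kinect 11-bit raw depth -> meters (tangent-fit
    approximation). Good enough for a rough range-to-wall estimate."""
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)

def read_center_range():
    """Grab one depth frame and return the range (m) at the image center.
    Needs a plugged-in Kinect and the libfreenect Python bindings."""
    import freenect  # only importable on the robot/SBC with the driver installed
    depth, _ = freenect.sync_get_depth()  # 480x640 array of raw depth values
    return raw_to_meters(depth[240, 320])
```

The conversion function is pure math, so it can be sanity-checked off-robot even without a Kinect attached.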
Re: Yet Another Vision Processing Thread
Quote:
I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work. I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the opencv and opencv2 libraries.
Re: Yet Another Vision Processing Thread
I'd be very interested in samples and info on OpenCV and the Odroid-XU
Quote:
|
Re: Yet Another Vision Processing Thread
This year we are using the ODROID U3 and have found it to be more on the bleeding edge than the Pandaboard we used last year. Both systems can run Ubuntu Linux and OpenCV, so for a team just beginning vision programming I would recommend starting with the Pandaboard.
Re: Yet Another Vision Processing Thread
I'd suggest compiling OpenCV yourself if you want ARM. That way, you can even select the features you want. I think that's also how you get OpenCV to run multi-core: OpenCV on my Ubuntu box at home only uses one core because I didn't compile it with TBB support!
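For reference, a from-source build with TBB enabled looks roughly like this. The flags are a sketch (build-configuration fragment, not a tested script); check them against your OpenCV release and board:

```shell
# Configure and build OpenCV with TBB parallelism enabled.
# Paths and the -j count are illustrative; adjust for your board.
cd opencv && mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D WITH_TBB=ON \
      -D BUILD_EXAMPLES=OFF ..
make -j4 && sudo make install
```

On a small ARM board the compile can take hours, so many teams cross-compile or build once and image the SD card.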
|
Re: Yet Another Vision Processing Thread
I just wanted to weigh in with our team's setup. This is the first year we have ever attempted vision processing. I am using the new Java interface (not JavaCV) for OpenCV to do processing in a SmartDashboard extension, which communicates back to the robot using NetworkTables. I started out using JavaCV but found it archaic and difficult. The new Java interface is really easy to work with.
I am noticing that a lot of teams are using a second computer on the robot to do vision. It seems like the power supply system would make it a pain to get working correctly. What's the advantage of that over doing vision on the driver station? I can't wait until next year, when we will have the processing power to do vision (even with a Kinect via the USB host port) on the roboRIO.
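As a sketch of the "send results back over NetworkTables" step, here is the same idea in Python using the pynetworktables package (the Java interface the post describes works analogously). The table name, key, and server IP are placeholders, and the offset formula is just one common way to express aim error:

```python
def center_offset(x, w, frame_w):
    """Normalized horizontal offset of a target box from the image center,
    in [-1, 1]; 0 means centered. Pure math, testable off-robot."""
    return ((x + w / 2.0) - frame_w / 2.0) / (frame_w / 2.0)

def publish_target(x, w, frame_w, server="10.0.0.2"):
    """Push the aim error to the robot over NetworkTables.
    Assumes the 'pynetworktables' package; server IP is a placeholder."""
    from networktables import NetworkTables
    NetworkTables.initialize(server=server)
    NetworkTables.getTable("vision").putNumber(
        "offset", center_offset(x, w, frame_w))
```

Keeping the math in a separate pure function means the dashboard-side logic can be unit-tested without a robot on the network.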
Copyright © Chief Delphi