Re: Yet Another Vision Processing Thread
Quote:
Before you switch to 3 XUs, try one X2 again, but measure its processor usage, and then place your order after that. I'm pretty sure that 4 XUs will draw about as much current as one CIM running continuously, especially with the three 120-degree cameras and the Kinect! Be careful there, or everyone will wonder why the voltage drops every time you boot the XUs!
Re: Yet Another Vision Processing Thread
Quote:
How do you intend to coordinate the image processing across the multiple nodes? Do you already have a multi-node OpenCV library? As far as I know, OpenCV doesn't have an MPI-enabled version, only a multithreaded one.
Re: Yet Another Vision Processing Thread
Quote:
By the way, Hunter, how do you prevent your ODROIDs from corrupting when power is cut abruptly? Do you have a special mechanism to shut down each node? Also, which converter are you using to power the Kinect? I don't think it would be wise to connect it directly to the battery/PDB, etc. You'd need some 12V-to-12V converter to eliminate the voltage drops/spikes! As for your question about multi-node processing, I think you are misunderstanding what he is doing: he has a computer for each of the 3-4 cameras onboard the bot. Hunter will probably just use regular UDP sockets, as he said in his post. Either one UDP connection per XU can go to the cRIO, or maybe there can be a master XU that communicates with each slave XU, processes what they see, and beams the info to the cRIO! However, I think it is still overkill to have more than two onboard computers besides the cRIO!
Okay, I see now. I was under the impression that he was using them like an HPC cluster. Thanks for the clarification!
Re: Yet Another Vision Processing Thread
We actually aren't using the Kinect this year (sad for me, but my mentor wanted to get away from it). The Genius 120 has a FOV of 117.5x80 degrees. Our plan is to have 360-degree vision processing. We have a mentor who is a computer vision professor at a local state university, and a kid's dad, who is the head of the comp sci department at the same university, stops by on occasion. Both of them know how to multithread, and Qt apparently has a way of doing it too.
The X2 is slightly slower than the XU according to our tests. A company bought us all these boards and cameras in exchange for us teaching them what we did with them; they are a biomed company and do a decent amount of biomedical imaging. We're apprehensive about relying entirely on the ASUS Xtion for ball and robot tracking because of the amount of IR light that gets put onto the field. One task at a time: field location and orientation is almost done, and ball tracking is next. :D We are using 3 or 4 cameras. Though the XU is powerful, I don't want the fps to drop below 20, and I think that would happen, so it's easier to give each camera its own board (especially when you already have the XUs on hand). We had a voltage regulator for our X2 and Kinect, which helped, and we didn't notice a problem with just shutting off the robot as a means of turning the computers off. Using so many boards is sort of a proof of concept. We could have an autonomous robot, but it'd be a one-man team, which isn't how we are going to play the game. We are trying as many things as we can, within reason, so we learn more and can do more in the future.
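For what it's worth, here is a minimal sketch of the one-thread-per-camera idea using OpenCV and C++11 threads; the camera indices and the per-frame work are placeholders, not the actual pipeline described above.
Code:
// Minimal sketch: one capture/processing thread per camera (placeholder processing).
#include <opencv2/opencv.hpp>
#include <thread>
#include <vector>

static void cameraLoop(int index)
{
    cv::VideoCapture cap(index);                      // one camera per thread
    if (!cap.isOpened()) return;
    cv::Mat frame, hsv;
    while (true) {
        if (!cap.read(frame)) break;
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);  // placeholder per-frame work
        // ... threshold, find contours, compute target info here ...
    }
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i)                       // assumes the cameras enumerate as 0..2
        workers.emplace_back(cameraLoop, i);
    for (auto &t : workers) t.join();
    return 0;
}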
Re: Yet Another Vision Processing Thread
This was our first year using vision processing at Team 1325. We use the following:
1. We plan to use our driver station running RoboRealm. It is quite powerful and easy-to-use software. I managed to track the hot goal and send data to the robot within a few hours with little help from a mentor.
2. We program our robot in C++. Using RoboRealm with it is a breeze.
3. We are currently using an old Axis 206 camera for testing, but will be receiving a new M1013 camera very shortly. (Actually, it's somewhere in the school, we just have to find it.)
4. We use NetworkTables. It integrates quite nicely with RoboRealm and is easy to operate (a rough sketch of reading it is below).
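As a minimal sketch, assuming the 2014-era WPILib C++ NetworkTable API (GetTable/GetNumber): the table name and keys are whatever you configure in RoboRealm's Network Tables module, so the ones shown here are invented for illustration.
Code:
// Hypothetical fragment: reading RoboRealm results over NetworkTables from C++ robot code.
// "RoboRealm", "COG_X", and "HOT" are made-up names; use whatever keys your pipeline publishes.
#include "networktables/NetworkTable.h"

void ReadVision()
{
    NetworkTable *vision = NetworkTable::GetTable("RoboRealm");
    // The keys may not exist until RoboRealm has published at least once, so check first.
    if (vision->ContainsKey("COG_X")) {
        double cogX = vision->GetNumber("COG_X");   // target centroid x, in pixels
        bool hot = vision->GetNumber("HOT") > 0.5;  // 1.0 when the hot goal is detected
        // ... steer / decide when to shoot based on cogX and hot ...
        (void)cogX; (void)hot;
    }
}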
Re: Yet Another Vision Processing Thread
In the past, we used LabVIEW and the associated vision libraries with the cRIO to do our vision processing, and it was more than powerful enough.
This year, for reasons completely unrelated to processing power, we have moved our vision processing to the driver station.
Re: Yet Another Vision Processing Thread
Our history with vision:
2009: Could not get it to work with the given targets; no vision on dashboard.
2010: Vision through Axis 206 through cRIO for driver display only. Only use ever was to take pictures at the beginning of auton to verify driver setup.
2011: Tested vision code using Axis 206 on cRIO; vision was not useful at all.
2012: Developed robust vision tracking system. Specifics: Driver Station laptop, Axis 206 with LED ring, UDP from Dashboard to Robot. Framerate was 20fps, although the pipelined design meant ~100ms total latency.
2013: Vision not even attempted. Axis M1013 camera on robot for drivers, who never used it. Eventually removed camera for weight.
2014: ??? Plans include driver station laptop, Axis M1013, and UDP. Similar in design to 2012 code. IMHO we really only need to process 1 image per match this year.
Re: Yet Another Vision Processing Thread
Quote:
Not only will the wiring be hairy, the computers will draw a ton of current, so battery work will not be fun! By the way, the D-Link only has 4 Ethernet ports. Are you guys going to have an extra switch on the bot to give you more ports? I think you'll have 120 lbs of computer, not of robot aluminum and other important robot stuff! Think wisely about what you will lose by having so many onboard computers!
Re: Yet Another Vision Processing Thread
I am experimenting with the PandaBoard and ROS (Robot Operating System) from Willow Garage. So far I've got Ubuntu 13.04 and ROS Hydro on the board and have loaded the OpenNI stacks. It seems to be working, though not completely, with the Kinect. I've run the ROS code on a Raspberry Pi and a BeagleBone previously. The Kinect code did work on the Pi, but I can't recall if it worked on the BeagleBone.
The advantage of ROS is that you can integrate navigation sensors with vision, and OpenNI support is built in. The disadvantage is the learning curve and integration into the driver station; it's not clear it can be done with our Java code. NI has ROS bindings, so it may work that way.
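In case it helps anyone trying the same stack, here is a minimal roscpp subscriber for the Kinect's color stream; the topic name assumes the usual openni_launch defaults and may differ on your install, and the node name is made up.
Code:
// Minimal roscpp sketch: subscribe to the Kinect color stream published by openni_launch.
// The topic name "/camera/rgb/image_color" is the common default, not guaranteed.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // Just report the frame size; real code would hand the image to cv_bridge/OpenCV.
    ROS_INFO("Got a %ux%u frame (%s)", msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "kinect_listener");   // hypothetical node name
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/rgb/image_color", 1, imageCallback);
    ros::spin();
    return 0;
}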
Re: Yet Another Vision Processing Thread
Quote:
I know the XU has power, but when I tested with the 3 Genius 120 cameras, the fps dropped to 25 when all I was doing was grabbing the image and displaying it. Last year the slowest the vision algorithm ran during a match was 27 fps. We are sending all of our data into a program that will be running on one of the boards, which calculates our x/y position and yaw given how far away we are from the targets that we see. If I only see one target, I will do a pose calculation to get x/y field location and yaw (a rough sketch of that is below). Then the x/y coordinate and yaw get sent to the cRIO, so we only need to send data from one XU to the LabVIEW side of things. The XU has an IO connector, so we can have the boards communicate through that.
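For reference, a single-target pose calculation like the one described above can be done with OpenCV's solvePnP. This is only a minimal sketch; the target corner coordinates, pixel locations, and camera intrinsics are placeholder values, not real calibration data.
Code:
// Minimal sketch of a single-target pose estimate with OpenCV's solvePnP.
// Target size, detected pixels, camera matrix, and distortion values are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Known corner positions of the vision target in its own frame (inches).
    std::vector<cv::Point3f> objectPoints = {
        {0, 0, 0}, {23.5f, 0, 0}, {23.5f, 4.0f, 0}, {0, 4.0f, 0}
    };
    // Corresponding corner pixels found by the detection stage (made-up numbers).
    std::vector<cv::Point2f> imagePoints = {
        {310, 180}, {420, 182}, {421, 205}, {309, 203}
    };
    // Intrinsics from a one-time camera calibration (placeholder values).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        600, 0, 320,
        0, 600, 240,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    // tvec is the target position in the camera frame; rvec (via cv::Rodrigues)
    // gives the rotation, from which yaw and the field x/y can be derived.
    cv::Mat tview = tvec.t();
    std::cout << "tvec: " << tview << std::endl;
    return 0;
}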
Re: Yet Another Vision Processing Thread
If you have a good onboard switch, I suggest that you attempt to cluster them. That way, if one camera temporarily needs more resources, another XU can aid it, and vice versa! That would help keep every XU fully utilized, the framerates from each camera equal, and the load spread out!
I think it might be better to just get an i7 with a GTX GPU capable of CUDA on a mini-ITX board, so you can boost performance! What is the subtotal of all the cameras and the XUs? I bet it might be hard on the BOM!
Re: Yet Another Vision Processing Thread
Quote:
After processing, we just send a few variables back to the cRIO, cutting down on bandwidth usage and keeping all our code simple yet effective, without the need for another $50 spent and countless extra hours coding and setting the thing up.
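As an illustration of the "send a few variables back" approach, here is a minimal UDP sender sketch using POSIX sockets; the cRIO address, port, and comma-separated packet format are invented for the example.
Code:
// Minimal sketch: sending a handful of vision results to the cRIO over UDP.
// The address, port, and packet format here are hypothetical, not any team's protocol.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(1130);                       // hypothetical port
    inet_pton(AF_INET, "10.11.30.2", &dest.sin_addr);  // hypothetical cRIO address

    double x = 3.2, y = 1.7, yaw = 12.5;               // values from the vision pipeline
    char packet[64];
    int len = std::snprintf(packet, sizeof(packet), "%.2f,%.2f,%.2f", x, y, yaw);

    sendto(sock, packet, len, 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
    return 0;
}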
Re: Yet Another Vision Processing Thread
This is really just my opinion about vision software (because it is slightly biased):
OpenCV is really the king of vision APIs because it has a very large set of features, and it executes them exceptionally well. The documentation available for OpenCV is unsurpassable, making it one of the easiest libraries to learn if you know what you want to learn. Not only that, the documentation and resources available attract so many users that the community is large enough that you can Google a question and find working example code!
OpenCV is also quite resource efficient: I believe the most complex program I have written requires only 64MB of RAM, a resource available on even many older computers. OpenCV is also multithreaded and supports CUDA, making it possible to use both your CPU and GPU to accelerate the processing. I actually think the Raspberry Pi could run OpenCV quite well as soon as GPU coding libraries like OpenCL are released for it! That makes it possible to build a very inexpensive system capable of doing much more than you'd ever think possible.
OpenCV's learning curve molds around you: just pick and choose what you want to learn first, and as you glean knowledge of vision processing, the jigsaw of artificial intelligence/computer vision will come together, allowing you to solve problems you previously thought impossible. You could start with HighGUI, learning how to draw a snowman in a Matrix and display it in a window, and move on to more complex things, or you could start by putting code together to make a powerful application, solving the jigsaw as you become more efficient at coding. There are even books on OpenCV, something that NI Vision doesn't have; nothing beats having a hard-copy book to use as a reference when you can't remember what a function does.
The only problems I see are that it would be hard to set up OpenCV to run on a robot, and that it would be somewhat hard to communicate from the computer to the cRIO!
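As a taste of the HighGUI starting point mentioned above, here is a minimal sketch that draws a snowman on a cv::Mat and shows it in a window; nothing here is anyone's team code, just the standard OpenCV drawing calls.
Code:
// Minimal HighGUI sketch: draw a "snowman" on a Mat and display it in a window.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat canvas(480, 640, CV_8UC3, cv::Scalar(30, 30, 30)); // dark background

    // Three stacked circles for the body, drawn filled in white.
    cv::circle(canvas, cv::Point(320, 360), 90, cv::Scalar(255, 255, 255), -1);
    cv::circle(canvas, cv::Point(320, 230), 65, cv::Scalar(255, 255, 255), -1);
    cv::circle(canvas, cv::Point(320, 130), 45, cv::Scalar(255, 255, 255), -1);

    // Eyes and a row of buttons in black.
    cv::circle(canvas, cv::Point(305, 120), 5, cv::Scalar(0, 0, 0), -1);
    cv::circle(canvas, cv::Point(335, 120), 5, cv::Scalar(0, 0, 0), -1);
    for (int y = 200; y <= 260; y += 30)
        cv::circle(canvas, cv::Point(320, y), 6, cv::Scalar(0, 0, 0), -1);

    cv::imshow("Snowman", canvas);
    cv::waitKey(0);   // wait for a key press before exiting
    return 0;
}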