Yet Another Vision Processing Thread
ubeatlenine
20-01-2014, 14:56
Team 1512 will be giving vision processing its first serious attempt this year. I have been amazed by the wide variety of approaches to vision processing presented on this forum and am having trouble weighing the advantages and disadvantages of each approach. If your team has successfully implemented a vision processing system in the past, I would like to know four things:
1. What co-processor did you use? The only information I have really been able to gather here is that Pi is too slow, but do Arduino/BeagleBone/Pandaboard/ODROID have any significant advantages over each other? Teams that used the DS, why not a co-processor?
2. What programming language did you use? @yash101's poll (http://www.chiefdelphi.com/forums/showthread.php?t=124283) seems to indicate that OpenCV is the most popular choice for processing. Our team is using Java, and while OpenCV has Java bindings, I suspect these will be too slow for our purposes. Java teams, how did you deal with this issue?
3. What camera did you use? I have seen mention of the Logitech C110 camera and the PS3 Eye camera. Why not just use the Axis camera?
4. What communication protocols did you use? The FRC manual is pretty clear on communications restrictions:
Communication between the ROBOT and the OPERATOR CONSOLE is restricted as follows:
Network Ports:
TCP 1180: This port is typically used for camera data from the cRIO to the Driver Station (DS) when the camera is connected to port 2 on the 8-slot cRIO (P/N: cRIO-FRC). This port is bidirectional.
TCP 1735: SmartDashboard, bidirectional
UDP 1130: Dashboard-to-ROBOT control data, directional
UDP 1140: ROBOT-to-Dashboard status data, directional
HTTP 80: Camera connected via switch on the ROBOT, bidirectional
HTTP 443: Camera connected via switch on the ROBOT, bidirectional
Teams may use these ports as they wish if they do not employ them as outlined above (i.e. TCP 1180 can be used to pass data back and forth between the ROBOT and the DS if the Team chooses not to use the camera on port 2).
Bandwidth: no more than 7 Mbits/second.
Is one of these protocols best for sending images and raw data (like numerical and string results of image processing)?
bvisness
20-01-2014, 15:07
1. We plan to use our DS (running RoboRealm) for vision processing this year. We've never tried using a co-processor, but since we don't have very complicated uses for the vision system, we've had good success with doing the processing on the DS (even with the minimal lag it introduces.)
2. We've used NI's vision code (running inside the Dashboard program) in the past, but this year we'll be using RoboRealm (as mentioned above.) In my tests I've found that it's much easier to make changes on the fly and the tracking is very fast and robust.
3. We just use the Axis camera. Since we need to get the camera feed remotely, IP cameras are pretty much the best way to go.
4. We just use NetworkTables, since it's efficient and easy and integrates nicely with RoboRealm. It's also easier for new programmers on the team to understand (and we have a lot of them this year...)
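For reference, reading the published values takes only a few lines from any NetworkTables client; the robot-side call in Java/C++/LabVIEW looks much the same. Here is a rough sketch using the pynetworktables library. The server address, table name, and keys are placeholders that depend entirely on what your RoboRealm pipeline is configured to publish.
# Rough sketch, not tied to any team's code. Assumes the pynetworktables package;
# the address, table name, and keys below must match whatever your RoboRealm
# Network Tables module actually publishes.
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.34.2")   # hypothetical robot address
vision = NetworkTables.getTable("vision")       # hypothetical table name

def read_target():
    # Defaults are returned if the key has not been published yet.
    found = vision.getBoolean("target_found", False)
    offset = vision.getNumber("target_x", 0.0)  # e.g. pixels from image center
    return found, offset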
MoosingIn3space
20-01-2014, 15:08
Team 3334 here. We are using a custom-built computer using a dual-core AMD Athlon II with 4 GB of RAM. In order to power it, we are using a Mini-box picoPSU DC-DC converter.
On the software side, we are using C++ and OpenCV running atop Arch Linux. The capabilities of OpenCV are quite impressive, so read the tutorials!
I'm not sure about JavaCV, but a board like the one we built would definitely be fast enough to run Oracle Java 7.
sparkytwd
20-01-2014, 15:27
Team 3574 here:
1. What co-processor did you use? The only information I have really been able to gather here is that Pi is too slow, but do Arduino/BeagleBone/Pandaboard/ODROID have any significant advantages over each other? Teams that used the DS, why not a co-processor?
2012 - We used an Intel i3 with the hope of leveraging the Kinect. We got this working, and in my opinion it was our most successful CV approach, though we eventually dumped the Kinect. There were mounting and power issues; running a system that requires a stable 12 volts off a source that can hit 8 and regularly hits 10 was the biggest one.
2013 - Odroid U2. CV wasn't as important for us this year, as our autonomous didn't need realignment like 2012's. We ran into stability issues with the PS3 Eye camera and USB, which were fixed with an external powered USB hub. A super fast little (I do mean little) box. We hooked it up to an external monitor and programmed directly from the ARM desktop. Hardkernel has announced a new revision of this, the U3, which is only $65.
2014 - Odroid XU. The biggest difference here is no need for an external USB hub, since it has 5 ports. I've tested it with 3 USB cameras running (2 PS3 Eyes and 1 Microsoft LifeCam HD) with no issues. Ubuntu doesn't yet support the GPU or running on all 8 cores, but a quad-core A15 running at 1.6 GHz is pretty epic. If your team is more cost-conscious, this is pretty pricey at $170; at this point the U3 can probably keep up with it in terms of processing, and adding a powered USB hub is not too expensive.
I've played with both the beaglebone black and the pandaboard, but with the amount of work we're having our vision processor do this year (see ocupus (https://github.com/hackcasual/ocupus)) I think we're addicted to the quadcore systems now.
2. What programming language did you use? @yash101's poll (http://www.chiefdelphi.com/forums/showthread.php?t=124283) seems to indicate that OpenCV is the most popular choice for processing. Our team is using Java, and while OpenCV has Java bindings, I suspect these will be too slow for our purposes. Java teams, how did you deal with this issue?
Python's OpenCV bindings. Performance won't make that much of a difference: the way OpenCV's various language bindings are built, most of the performance-intensive work happens in the native code layer.
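To give a feel for it, here is a minimal per-frame sketch with the cv2 module. Every heavy call (cvtColor, inRange, findContours) runs in the native layer; the HSV thresholds and camera index are placeholders you would tune for your own lighting and hardware.
# Minimal per-frame sketch with the Python bindings. cvtColor, inRange and
# findContours all execute in OpenCV's native layer; Python only shuttles
# numpy arrays between them. HSV limits are placeholders for a green LED ring.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                        # first USB camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        target = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(target)
        center_x = x + w / 2.0                   # horizontal center to report to the robot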
3. What camera did you use? I have seen mention of the Logitech C110 camera and the PS3 Eye camera. Why not just use the Axis camera?
PS3 Eye camera. We originally picked it in 2012 mostly because we thought it would be cute to have alongside the Kinect. At one of the regionals, though, we had really bad lighting conditions, which had us switch over to IR for 2013; there are a lot of tutorials online for that conversion.
As for not using the Axis camera: it's heavy, requires separate power, isn't easily convertible to IR, and you pay a latency cost going in and out of the MJPEG format.
4. What communication protocols did you use? The FRC manual is pretty clear on communications restrictions:
Is one of these protocols best for sending images and raw data (like numerical and string results of image processing)?
In 2012 we had no restrictions, so we just ran a TCP server in the cRIO's code. In 2013 we used NetworkTables, which is nicely integrated, and we'll use that this year. In 2013 we did not send the raw camera feed; this year, we've put together the ocupus (https://github.com/hackcasual/ocupus) toolkit to support doing that. It uses OpenVPN to tunnel between the DS and robot over port 1180.
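For teams weighing the raw-socket route, a minimal sketch of the idea is below (UDP here, since several teams in this thread use it): the co-processor sends only a few processed numbers per frame, never images. The robot address, port, and message format are placeholders to match whatever listener your robot-side code runs.
# Rough sketch of the plain-socket idea: send only processed numbers, never
# images. The robot address, port, and message format are placeholders.
import socket

ROBOT_IP = "10.12.34.2"   # hypothetical cRIO address
PORT = 5801               # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP, fire-and-forget

def send_result(distance_inches, angle_degrees, target_found):
    # A comma-separated line of text is trivial to parse on the robot.
    msg = "%d,%.2f,%.2f" % (1 if target_found else 0, distance_inches, angle_degrees)
    sock.sendto(msg.encode("ascii"), (ROBOT_IP, PORT))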
1. We use the driver station laptop, like 341 did in 2012
2. LabVIEW. We took the vision example meant for the cRIO and copied and pasted it into our LabVIEW dashboard.
3. Axis Camera
4. Network Tables
The main advantage of this setup is ease of use. Getting a more complicated setup working can be difficult, with lots of tricky bugs to find. Using our driver station laptop (Intel Core 2 Duo @ 2.0 GHz, 3 GB RAM) gave us more than enough processing power (we could go up to 30 fps) and was the cheapest and simplest solution. The LabVIEW software is great for debugging because you can see the value of any variable/image at any time, so it's easy to find out what isn't working, and a pretty decent example was provided for us. The Axis camera was another easy choice because we already had one and the library to communicate with it and change exposure settings was already there.
The NetworkTables approach worked really well too, and we got very little latency with it. We were able to auto line up with the goal (using our drive system, not a turret) in about a second, and we had it working before the end of week one after two people spent about 2 hours on it. In the end we didn't need it in competition; we could line up by hitting the tower.
We're doing the same approach this year for the hot/not-hot goal. Compared to the other solutions, this is the cheapest/quickest/simplest, but you lose the advanced features of OpenCV. NI's vision libraries are pretty good, and the Vision Assistant program works nicely too, but in the end some people say OpenCV has more. You need to decide whether the extra features are worth the extra work for your team.
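For anyone curious, that kind of auto line-up boils down to a proportional controller on the camera's angle error. The sketch below is a generic illustration (not our LabVIEW code); the gain and deadband are made-up values you would tune on the robot, and target_angle_deg would come from the vision result in NetworkTables.
# Generic illustration: one step of a proportional alignment loop.
# KP and DEADBAND_DEG are made-up values to tune on the real robot.
KP = 0.02             # turn command per degree of error
DEADBAND_DEG = 1.0    # stop turning when the goal is roughly centered

def aim_step(target_angle_deg):
    """Return a turn command in [-1, 1] for one loop iteration."""
    if abs(target_angle_deg) < DEADBAND_DEG:
        return 0.0
    turn = KP * target_angle_deg
    return max(-1.0, min(1.0, turn))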
sparkytwd
20-01-2014, 15:47
Team 3334 here. We are using a custom-built computer using a dual-core AMD Athlon II with 4 GB of RAM. In order to power it, we are using a Mini-box picoPSU DC-DC converter.
On the software side, we are using C++ and OpenCV running atop Arch Linux. The capabilities of OpenCV are quite impressive, so read the tutorials!
I'm not sure about JavaCV, but a board like the one we built would definitely be fast enough to run Oracle Java 7.
You might need to upgrade your power supply to the M4. When we went with an onboard x86 we started with the 160W picoPSU, and the problem you'll hit is that when you run all your motors at top speed the system voltage drops to 10 volts, which caused our computer to shut down.
MoosingIn3space
20-01-2014, 16:50
In your experience, the M4 is stable? Ahh, thank goodness it's been less than 30 days since I bought that pico :D
Thanks!
sparkytwd
20-01-2014, 17:39
Yes, the M4 was rock solid. With a 6-24 V input range it should handle any battery condition. The biggest risk is wiring it up backwards; we used a Sharpie paint pen to color the + terminal to mitigate that.
As the OP suggests, "Java would be slower," but this is not necessarily true. It would likely be faster than the Python bindings, though C++ code would still be fastest. The reasons I (and many others) prefer to use OpenCV from C/C++ or Python:
- C/C++: fast, stable, robust, easy, and well-documented; OpenCV itself is written in C/C++.
- Python: easy to program, and easy to put a program together quickly and on short notice. You don't need to recompile the program every time you change a little code.
- Java: really just for the sake of it. Java is good, but it is so similar to C++ that you'd probably be better off learning the better-documented C/C++ API instead of the Java one. It comes down to personal preference, since you'll use similar calls either way. Also, with so many C compilers available, C is arguably more portable than Java, which has a JVM for most, but not all, systems. Java is really only compelling if you are programming for Android, where you need portability without recompiling the code for every device.
I am actually thinking about starting an OpenCV journal that explains what I have done and what not to do, so you don't shoot yourself in the foot! Beware, it will be long :D
By the way, today, I was working on setting up the Kinect for OpenCV, which I will make a thread about in a few minutes.
faust1706
20-01-2014, 20:02
1. What co-processor did you use?
2. What programming language did you use?
3. What camera did you use?
4. What communication protocols did you use?
2012:
1. A custom-built computer running Ubuntu
2. OpenCV in C
3. Microsoft Kinect
4. UDP
2013:
1. ODROID X2
2. OpenCV in C
3. Microsoft Kinect with an added illuminator
4. UDP
2014:
1. 3 or 4 ODROID XUs
2. OpenCV in C++ and OpenNI
3. Genius 120 with the IR filter removed, plus the ASUS Xtion for depth
4. UDP
Having sent a number of people well on their way with computer vision, I can now offer help to more. If you want some sample code for OpenCV in C and C++, PM me your email so I can share a Dropbox with you.
Tutorial on how to set up vision like we did: http://ratchetrockers1706.org/vision-setup/
Joe Ross
20-01-2014, 20:36
2012:
1. Driver Station
2. NI Vision
3. Axis M1011
4. UDP
2013:
1. Driver Station
2. NI Vision
3. Axis M1011
4. Network Tables
2014:
1. Driver Station or cRIO (TBD)
2. NI Vision
3. Axis M1011
4. Network Tables
By using the provided examples and libraries, we were able to get a working solution in a minimal amount of time. The reason we use the driver station rather than an on-board processor is that it significantly simplifies the system: it reduces the part count (fewer failure points) and uses examples and libraries that already exist and have been tested by FIRST/NI.
MoosingIn3space
20-01-2014, 21:28
Getting OpenCV to receive images from the Kinect is quite simple with libfreenect's C++ interface and boost threads. If there is enough interest, I'll post my code.
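In the meantime, here is a rough sketch of the same idea using libfreenect's Python sync wrapper (assuming the freenect Python module is installed) rather than the C++ interface and threads. It is handy for a quick sanity check that Kinect frames are actually reaching OpenCV.
# Rough sketch (not the poster's C++/Boost code): grab one RGB and one depth
# frame through libfreenect's Python sync wrapper, assuming the freenect
# module is installed, and hand them to OpenCV for display.
import cv2
import numpy as np
import freenect

rgb, _ = freenect.sync_get_video()      # (H, W, 3) RGB frame from the Kinect
depth, _ = freenect.sync_get_depth()    # (H, W) 11-bit depth values

bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)                      # OpenCV wants BGR
depth_8u = np.uint8(depth.astype(np.float32) * 255.0 / 2047.0)  # scale for display

cv2.imshow("kinect rgb", bgr)
cv2.imshow("kinect depth", depth_8u)
cv2.waitKey(0)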
cmwilson13
20-01-2014, 21:36
I don't think a co-processor is necessary; you have plenty of power to do the analysis on the cRIO. This was done on the first cRIO released in 2009, which is slower and has a much smaller processor cache than the new cRIOs, and it worked fine.
You're not even tracking a moving target this year, so it should be even easier.
http://www.youtube.com/watch?v=Jl6MyCSELvM
MoosingIn3space
20-01-2014, 21:49
There is a very good reason to use a co-processor: any USB camera can be used. Other cameras with better framerates, resolutions, sensors, or other desirable characteristics can be used through a co-processor. That's why my team committed to using one this season, after using cRIO-based analysis every season of our existence.
cmwilson13
20-01-2014, 21:52
Why? You don't need any better cameras if the ones available can do the job, as they have for us every year.
I think we are already scared by the processing power of one ODROID! Your robot is going to catch fire with so much processing power (not literally).
Before you switch to 3 XUs, try one X2 again, but measure its processor usage, and then place your order after that. I'm pretty sure 4 XUs will draw as much as one CIM running continuously, especially with the 3 120-degree cameras and the Kinect!
Be careful there, or everyone will wonder why the voltage drops every time you boot the XUs!
MoosingIn3space
20-01-2014, 23:02
My team already has our architecture set, but I'm curious about yours. How do you intend to coordinate the image processing over the multiple nodes? Do you already have a multi-node OpenCV library? As far as I know, OpenCV doesn't have an MPI-enabled version, only multithreaded.
Well, 1706 loves using the Kinect, and they say they will use 3 cameras. I guess there will be one board for each camera and one for the Kinect. Maybe one of them does the data manipulation, or maybe it is done on the cRIO.
By the way, Hunter, how do you prevent your ODROIDs from corrupting during a rapid power-down? Do you have a special mechanism to shut down each node? Also, which converter are you using to power the Kinect? I don't think it would be wise to connect it directly to the battery/PDB; you'd need some 12V-12V converter to eliminate the voltage drops/spikes!
As for your multi-node question, I think you are misunderstanding what he is doing: there will be a computer for each of the 3-4 cameras onboard the bot. Hunter will probably just use regular UDP sockets, as he said in his post. Either each XU gets its own UDP connection to the cRIO, or maybe there is a master XU that communicates with each slave XU, processes what they see, and beams the info to the cRIO!
However, I think it is still overkill to have more than 2 onboard computers besides the cRIO!
MoosingIn3space
21-01-2014, 01:03
Okay I see now. I was under the impression that he was using them like an HPC cluster. Thanks for clarification!
faust1706
21-01-2014, 11:10
We actually aren't using the Kinect this year (sad for me, but my mentor wanted to get away from it). The Genius 120 has a FOV of 117.5x80 degrees. Our plan is to have 360-degree vision processing. We have a mentor who is a computer vision professor at a local state university, and a kid's dad, who is the head of the comp sci department at said university, stops by on occasion. Both of them know how to multithread, and Qt apparently has a way of doing it too.
The X2 is slightly slower than the XU according to our tests. A company bought us all these boards and cameras in exchange for us teaching them what we did with them; they are a biomed company and do a decent amount of biomedical imaging.
We're apprehensive about relying entirely on the ASUS Xtion for ball and robot tracking because of the amount of IR light that gets put onto the field.
One task at a time. Field location and orientation is almost done; next up is ball tracking :D
We are using 3 or 4 cameras. Though the XU is powerful, I don't want the FPS to drop below 20, and I think that would happen. So it's easier for each camera to get its own board (especially when you already have the XUs on hand).
We had a voltage regulator for our X2 and Kinect, which helped. We didn't notice a problem with just shutting off the robot as a means of turning the computer off.
Using so many boards is sort of a proof of concept. We could have an autonomous robot, but it'd be a one-man team, which isn't how we are going to play the game. We are trying as many things as we can, within reason, so we learn more and can do more in the future.
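For anyone wondering what the multithreading looks like in practice, the usual pattern is one grab thread per camera so slow processing never blocks capture. The sketch below is a generic illustration of that pattern (not 1706's code); the camera indices and the downstream processing step are placeholders.
# Generic illustration of the one-grab-thread-per-camera pattern.
# Camera indices and the downstream processing are placeholders.
import threading
import cv2

class CameraWorker(threading.Thread):
    def __init__(self, index):
        threading.Thread.__init__(self)
        self.daemon = True
        self.cap = cv2.VideoCapture(index)
        self.latest = None
        self.lock = threading.Lock()

    def run(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.latest = frame   # always keep only the newest frame

workers = [CameraWorker(i) for i in range(3)]
for w in workers:
    w.start()

# The processing loop can now take each worker's .latest frame (under its lock)
# and run the vision pipeline without ever stalling the capture threads.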
This was our first year using vision processing at Team 1325. We use the following:
1. We plan to use our driver station running RoboRealm. It is quite powerful and easy-to-use software; I managed to track the hot goal and send data to the robot within a few hours with little help from a mentor.
2. We program our robot in C++. Using RoboRealm with it is a breeze.
3. We are currently using an old Axis 206 camera for testing, but will be receiving a new M1013 camera very shortly. (Actually, it's somewhere in the school; we just have to find it.)
4. We use NetworkTables. They integrate quite nicely with RoboRealm, and they are easy to operate.
Tom Line
21-01-2014, 11:38
In the past, we used LabVIEW and the associated vision libraries with the cRIO to do our vision processing, and it was more than powerful enough.
This year, for reasons completely unrelated to processing power, we have moved our vision processing to the driver station.
sparkytwd
21-01-2014, 13:16
We are using 3 or 4 cameras. Though the XU is powerful, I don't want the FPS to drop below 20, and I think that would happen. So it's easier for each camera to get its own board (especially when you already have the XUs on hand).
I'd recommend testing multiple cameras on a single XU. It's got 4 A15 processors in it, so it's worth seeing what impact side-by-side has. That'll simplify your wiring as well.
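A quick way to measure that impact is to open the cameras and time a trivial grab-and-threshold loop. The sketch below is just a benchmark idea; the camera indices and test duration are arbitrary placeholders.
# Benchmark sketch only: open a few cameras, run a trivial grab + threshold on
# each, and report the combined frame rate. Indices and duration are arbitrary.
import time
import cv2

caps = [cv2.VideoCapture(i) for i in range(3)]
frames = 0
start = time.time()

while time.time() - start < 10.0:
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        frames += 1

elapsed = time.time() - start
print("%.1f frames/sec total across %d cameras" % (frames / elapsed, len(caps)))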
Our history with vision:
2009: Could not get it to work with the given targets, no vision on dashboard
2010: Vision through the Axis 206 and the cRIO for driver display only. The only use it ever got was to take pictures at the beginning of auton to verify driver setup.
2011: Tested vision code using the Axis 206 on the cRIO; vision was not useful at all.
2012: Developed robust vision tracking system. Specifics:
Driver Station laptop
Axis 206 with LED ring
UDP from Dashboard to Robot
Framerate was 20fps, although the pipelined design meant ~100ms total latency.
2013: Vision not even attempted. Axis M1013 camera on robot for drivers, who never used it. Eventually removed camera for weight.
2014:
???
Plans include driver station laptop, Axis M1013, and UDP. Similar in design to 2012 code.
IMHO we really only need to process 1 image per match this year.
I'd recommend testing multiple cameras on a single XU. It's got 4 A15 processors in it, so it's worth seeing what impact side-by-side has. That'll simplify your wiring as well.
Yeah, each XU's got some power! I actually think an XU is more powerful than my laptop with its i3-2367M; I only get ~2400 BMIPS of processing speed in Ubuntu :(
Not only will the wiring be hairy, the computers will draw a ton of current, so battery work will not be fun!
By the way, the D-Link only has 4 Ethernet ports. Are you guys going to have an extra switch on the bot to give you more ports? I think you'll have 120 lbs of computer, not of robot, aluminum, and other important robot stuff!
Think wisely about what you will lose by having so many onboard computers!
I am experimenting with the Pandaboard and ROS (Robot Operating System from Willow Garage). So far I've got Ubuntu 13.04 and ROS Hydro on the board and have loaded the OpenNI stacks. It seems to be working, though not completely, with the Kinect. I've run the ROS code on a Raspberry Pi and a BeagleBone previously; the Kinect code did work on the Pi, but I can't recall if it worked on the BeagleBone.
The advantage of ROS is that you can integrate navigation sensors with vision, and OpenNI support is built in. The disadvantage is the learning curve and integration with the driver station; it's not clear it can be done with our Java code. NI has ROS bindings, so it may work that way.
faust1706
21-01-2014, 21:58
Considering each XU weighs about 150 grams tops, if we use 4 that is 600 grams, or 1.32 lbs. The Genius 120 weighs 82.0 grams; times 3, that's 246 g. That puts the total weight of our vision system at 1.86 pounds, not including wire weight. The Kinect weighs at least a pound, so we really aren't that much different from last year. Two years ago, with our custom-built computer, our vision system weighed over 6 pounds. Weight will not be an issue.
I know the XU has power, but when I tested with the 3 Genius 120 cameras, the FPS dropped to 25 when all I was doing was grabbing the images and displaying them. Last year the slowest the vision algorithm ran during a match was 27 fps.
We are sending all of our data into a program running on one of the boards that calculates our x-y position and yaw given how far away we are from the targets we see. If I only see one target, I will do a pose calculation to get the x-y field location and yaw. Then the x-y coordinate and yaw get sent to the cRIO, so we only need to send data from one XU to the LabVIEW side of things. The XU has an IO connector, so we can have the boards communicate through that.
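For anyone curious, the single-target pose calculation described here is what OpenCV's solvePnP provides: given the target's real-world corner coordinates and their pixel locations, it returns the camera's rotation and translation relative to the target. The sketch below is illustrative only; the target dimensions, camera matrix, and detected corners are all placeholders, and you should verify the yaw sign convention for your own camera mounting.
# Illustrative only: cv2.solvePnP recovers the camera pose from one target.
# Target size, camera intrinsics, and the detected pixel corners are placeholders.
import math
import cv2
import numpy as np

# Real-world corners of a hypothetical 24in x 16in target (inches, Z = 0 plane).
obj_pts = np.array([[0, 0, 0], [24, 0, 0], [24, 16, 0], [0, 16, 0]], dtype=np.float32)
# Matching pixel corners from the contour/corner detection step (placeholder values).
img_pts = np.array([[310, 200], [420, 205], [418, 280], [308, 275]], dtype=np.float32)
# Camera intrinsics from a calibration run (placeholder focal length and center).
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros((4, 1), dtype=np.float32)   # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
t = tvec.flatten()

cam_pos = -np.dot(R.T, tvec)                            # camera position in target coordinates
bearing = math.degrees(math.atan2(t[0], t[2]))          # horizontal angle to target off camera axis
yaw = math.degrees(math.atan2(-R[2, 0], R[2, 2]))       # approximate camera yaw vs. target face; check signs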
If you have a good onboard switch, I suggest that you attempt to cluster them. That way, if one camera temporarily needs more resources, another XU can aid it, and vice versa. That would keep every XU near full throttle, the framerates from each camera equal, and the load spread out!
I think it might be better to just get an i7 with a CUDA-capable GTX GPU on a mini-ITX board, so you can juice performance!
What is the subtotal of all the cameras and the XUs? I bet it might be hard on the BOM!
Invictus3593
23-01-2014, 12:21
Teams that used the DS, why not a co-processor?
Our team does vision on the Dashboard simply because we don't need a co-processor. Since the Dashboard gets the image from the robot anyway, we don't see the need to process it somewhere on the robot.
After processing, we just send a few different variables back to the cRIO, cutting down on bandwidth usage and keeping all our code simple yet effective, without another $50 spent and countless extra hours coding and setting the thing up.
This is really just my opinion about vision software (so it is slightly biased):
OpenCV is really the king of vision APIs because it has a very large set of features, and it executes them exceptionally well. The documentation available for OpenCV is hard to beat, making it one of the easiest libraries to learn if you know what you want to learn. Not only that, the documentation and resources attract so many users that the community is large enough that you can Google a question and find working example code. OpenCV is also quite resource-efficient; I believe the most complex program I have written requires only 64 MB of RAM, which is available on even many older computers!
OpenCV is also multithreaded and supports CUDA, making it possible to use both your CPU and GPU to accelerate processing. I actually think the Raspberry Pi could run OpenCV quite well once GPU coding libraries like OpenCL are released for it! That would make it possible to build a very inexpensive system capable of doing much more than you'd ever think possible!
OpenCV's learning curve morphs around you: just pick and choose what you want to learn first, and as you gain knowledge of vision processing, the jigsaw of artificial intelligence/computer vision will come together, letting you solve problems you previously thought impossible. You could start with HighGUI, learning how to draw a snowman in a matrix and display it in a window, and move on to more complex things, or you could start by putting code together to make a powerful application, solving the jigsaw as you become more efficient at coding.
There are even books on OpenCV, something that NI Vision doesn't have. Nothing beats having a hard-copy book to use as a reference when you can't remember what a function does!
The only problems I really see are that it can be hard to set up OpenCV to run on a robot, and that it is semi-hard to communicate from the computer to the cRIO!
I am continuing to experiment with the Pandaboard as a possible on-board co-processor. Would there be any rule prohibiting the use of an on-board mini LCD monitor and keyboard on the robot? This would be for displaying on-board status of sensors/electronics and aligning the robot with vision targets for autonomous mode.
While I have been successful capturing Kinect depth camera data on the Pandaboard, it is unlikely that we will be using the Kinect this year. We're having a lot of success with the Axis camera, RoboRealm, and the vision targets on the DS. We have replaced our Classmate with a $300 Asus netbook.
faust1706
27-01-2014, 21:36
Do NOT assume kinect depth will work on the field. Think about how much lighting is being thrown onto the field. The depth camera will not be able to read the IR pattern emitted. Consider yourself warned.
sparkytwd
28-01-2014, 14:00
No rules directly prohibit it. Keep in mind the wiring and power distribution rules. You might also need a DC-DC boost power supply if the monitor requires 12 volts, as the battery can dip below 10 volts during operation.
One of the features I'm working on for ocupus is using an Android tablet as a tethered VNC client, which could even be removed before the match starts.
Could you share some more progress info on using the Odroid-XU? We purchased one, but it hasn't gotten much use since very little has been written about using it.
- Chris
Any suggestions for a 12V-12V converter? Does anyone have results on voltage stability issues with Odroids or other SBCs?
- Chris
Well, for an SBC you'd want a 12V-5V converter; a 12V-12V converter before that can stabilize the voltage a bit too. Typically an SBC will be very stable because the 5V converters are high quality. However, try to make sure you properly shut down the SBC after the match!
By the way, I am talking about the same model of voltage regulator that powers the D-Link!
How about the other direction, 12V to 19V? We are going to try using an Intel NUC and need to figure out how to give it stable power. Currently we are using a car power adapter.
That will be good. However, spend at least $50 and buy from a very reputable company. Voltage spikes are common, and they will either damage the NUC or cause it to reset once in a while.
If that's not easy to find, get a 12-to-20V (or a bit higher) boost converter and use an LDO to bring it down to 19V. BEWARE: THAT WILL GET VERY HOT.
So, for now, just find a very high-quality boost converter, 12 to 19 volts. Also make sure it has a good driver, because a faulty chipset can cause stray voltages to leak in. The NUC is expensive, so it's not the type of thing to break.
By the way, where did you purchase the NUC and how much did it cost? Also, how much time did it take from order to delivery?
Newegg and Amazon have them. They are in stock, so it's only delivery time (less than a week).
On the Pandaboard, I had trouble using the Kinect and the OpenNI drivers. A friend told me that in general OpenCV and OpenNI are optimized for Intel/AMD and don't work well on ARM. I did have better luck with the Freenect drivers in my Pandaboard/ROS/Kinect experiment, and I did get depth camera data from that arrangement. I didn't think it was worth the effort to convert to PCL (Point Cloud Library) for range to the wall in autonomous mode.
I think the Odroid is ARM-based? So I don't know. Besides, I think the Axis camera will do just fine, and I don't see any compelling reason to use the Kinect for this year's game. I just got hold of a Radxa Rock (ARM) board. It looks pretty powerful, but there is no time to get it on this year's bot.
faust1706
04-02-2014, 01:13
OpenCV and OpenNI work with ARM just fine in my experience; I've used both libraries on 2 different ARM boards (the Odroid X2 and XU).
I have said this before, but I will reiterate it: do not rely on the Kinect giving accurate depth measurements at a competition. There are so many stage lights saturating the field with IR light that the IR pattern the Kinect emits will be flooded out, and the depth map most likely won't work.
I have a quick install of OpenCV and libfreenect if you're interested, and a bunch of demo programs I wrote that explore a lot of the opencv and opencv2 libraries.
I'd be very interested in samples and info on OpenCV and the Odroid-XU
Jerry Ballard
04-02-2014, 08:04
The ODROID variants can be found at hardkernel.com (http://www.hardkernel.com/main/main.php) and Pandaboard info can be found at pandaboard.org (http://pandaboard.org/).
This year we are using the ODROID U3 and have found it to be more on the bleeding edge than the Pandaboard we used last year. Both systems can run Ubuntu Linux and OpenCV, so for a team just starting vision programming I would recommend starting with the Pandaboard.
I'd suggest compiling OpenCV yourself if you want to run on ARM. That way you can even select the features you want, and I think that's also how you get OpenCV to run multi-core: OpenCV on my Ubuntu machine at home only runs on one core because I didn't compile it with TBB support (the WITH_TBB CMake option).
Ben Wolsieffer
09-02-2014, 18:33
I just wanted to weigh in with our team's setup. This is the first year ever we have attempted to do vision processing. I am using the new Java interface (not javacv) for OpenCV to do processing in a SmartDashboard extension which communicates back to the robot using Network Tables. I started out using javacv but found it archaic and difficult. The new Java interface is really easy to work with.
I am noticing that a lot of teams are using a second computer on the robot to do vision. It seems like the power supply system would make it a pain to get working correctly. What's the advantage of that over doing vision on the driver station?
I can't wait until next year where we can have that type of processing power to do vision (even with a Kinect via the USB host port) on the RoboRio.
The advantage, as I understand it, is to eliminate sending video over the wireless connection.
Ben Wolsieffer
09-02-2014, 19:23
That seems like it could be a valid reason for not using dashboard vision, but we have never had much trouble with bandwidth, even though we used two cameras to help the driver climb last year.
Alan Anderson
09-02-2014, 22:16
If you use a separate computer, you can use a (cheaper) USB camera instead of the (expensive) Axis one. That might be an advantage.
That makes sense, especially if you want to have a couple of cameras just for driver assist.
If you use the SmartDashboard software, can you tell it to get its feed from a USB camera?
sparkytwd
10-02-2014, 00:38
Power supply isn't an issue for the SBCs that run on 5V. You actually already do that with the D-Link (though you can't hook the SBC into that power supply). Stability usually isn't an issue unless you get a super cheap supply; last year we used an RC plane BEC, which I'd consider pretty low quality, and it worked fine.
For us the important thing this year is being able to stream 640x480 at 30 fps with a custom lens assembly. That image quality is only possible with an onboard computer to handle the more expensive WebM encoding.