Vision Software comparison
OpenCV vs. RoboRealm vs. SimpleCV... what's the main difference? Also, what do your teams (the ones doing vision) actually use? I would like to know which is better performance-wise, feature-wise, and simplicity-wise. I currently have OpenCV and RoboRealm.
Re: Vision Software comparison
An option that isn't listed is the NI Vision libraries. This is what the example/tutorial in LV uses. It runs on Windows and cRIO, and has LV and C entry points for its functions.
Greg McKaskle
Re: Vision Software comparison
We use NI Vision on a LabVIEW dashboard, communicating to a Java program using NetworkTables. We've been much more successful with this approach than anything else we've tried (and we've been playing with vision since 2005).
I think much of the success is attributable to the great examples that NI puts together for FIRST, rather than something inherent in the library. In addition, doing vision processing on the dashboard is much easier than trying to do it on the cRIO or on a robot-based co-processor.
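For reference, the robot-side read in a setup like this is only a few lines of Java. Here is a minimal sketch, assuming the dashboard publishes to a table named "vision" with keys "hotGoal" and "distance" (those names are invented for illustration), written against the current-style WPILib NetworkTables classes rather than the 2014-era ones:

Code:
// Hypothetical robot-side reader for values a dashboard vision routine
// publishes over NetworkTables. Table and key names are placeholders.
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionReader {
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("vision");

    /** True if the dashboard last reported the goal as hot. */
    public boolean isHotGoal() {
        return table.getEntry("hotGoal").getBoolean(false);
    }

    /** Distance to the target in inches, or -1.0 if nothing has been published yet. */
    public double getDistanceInches() {
        return table.getEntry("distance").getDouble(-1.0);
    }
}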
Re: Vision Software comparison
This just dawned on me: in order for a team to be competitive this year, they are required to do vision processing, not just send a feed to the driver station. The reason is that there is no way to tell which goal is hot unless you are analyzing the hot goal and/or the dynamic target. It will be interesting. A lot of teams get away with not using vision and just indexing on the field, but they can't do that this year, and computer vision has a (very) steep learning curve.
Re: Vision Software comparison
NI vision library on the cRIO.
Re: Vision Software comparison
NI library for Java, on the cRIO for the last several years. Debating using OpenCV on a BeagleBone Black coprocessor arrangement this year.
Re: Vision Software comparison
I'll throw in a plug for RoboRealm. In the past I've used the NI Vision libraries on the robot and OpenCV on the driver station. Yesterday I decided to try making a program with RoboRealm running on the driver station, sending data back to the robot with NetworkTables.
In about an hour I had a program that was detecting the hot target, and in another half hour I had the data back to the robot using the built-in NetworkTables implementation in RoboRealm. It's not yet doing distance or anything complicated, but it does make my little desktop robot open and close the claw based on the target being hot. I'll leave the hard parts for the students; I just wanted to see how well it worked. I was able to test the program using the sample images that are included with the sample programs that come with all the languages. No field construction required, and I didn't leave my desk.
Brad
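For the robot side of what Brad describes, the check can be as small as reading one boolean each loop and driving the claw from it. A hypothetical sketch, using current WPILib classes (the 2014 libraries were different), with the RoboRealm table name, key, and solenoid channel all made up:

Code:
// Hypothetical: read a boolean published from RoboRealm over NetworkTables
// and open/close a claw solenoid based on it. Names and ports are placeholders.
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.PneumaticsModuleType;
import edu.wpi.first.wpilibj.Solenoid;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
    private final Solenoid claw = new Solenoid(PneumaticsModuleType.CTREPCM, 0);

    @Override
    public void teleopPeriodic() {
        boolean hot = NetworkTableInstance.getDefault()
                .getTable("RoboRealm")
                .getEntry("HOT_TARGET")
                .getBoolean(false);
        claw.set(hot); // open the claw only while the target is reported hot
    }
}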
Re: Vision Software comparison
So, I should ask why your teams picked these different technologies. They all work well. I myself prefer OpenCV because it is so powerful. It is also well documented, so it can be learned within a few weeks!
Also, how do you communicate with the robot? Serial (RS-232/UART), I2C, SPI, NetworkTables, Winsock? By the way, would you suggest using Winsock? It seems easy enough to use for building a socket server.
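If you do go the raw-socket route, the server side really is only a few lines. Here is a rough sketch in Java rather than Winsock/C++; the port number and the comma-separated message format are invented for illustration:

Code:
// Hypothetical: a tiny TCP server that accepts a connection and sends one
// comma-separated vision result. Port and message format are placeholders.
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class VisionSocketServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5800)) {
            while (true) {
                try (Socket client = server.accept();
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    // In a real pipeline these numbers would come from the vision code.
                    out.println("hot=1,distance=110.5");
                }
            }
        }
    }
}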
Re: Vision Software comparison
Last year we used SimpleCV (Python) running on an ODROID-U2 with NetworkTables from here:
https://github.com/robotpy/pynetworktables so we could keep everything in Python, and it worked fine.
The year before we did OpenCV in C++, using Visual Studio to compile an app on the driver station that talked to the Java NetworkTables implementation on the driver side. We got great distances and it ran very fast, except we ran C++ on the robot side and could not get NetworkTables working in C++ that year. We were going to convert the robot code to Java but ran out of time. The next year NetworkTables worked great in C++ on the robot.
We played with the code that comes with WPILib this year, and after adjusting the HSV settings we got decent distances with that. I am still thinking we might stick with offboard processing. It depends on how much we have to do in autonomous and whether we want it for other uses.
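The HSV-thresholding step mentioned above looks roughly like the sketch below. It is shown with the OpenCV Java bindings rather than the SimpleCV/C++ code the post describes, and the color bounds are placeholder numbers that would need tuning to your camera and lighting:

Code:
// Hypothetical HSV threshold for a retroreflective target using OpenCV's
// Java bindings. The hue/saturation/value bounds are placeholder values.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class HsvThreshold {
    public static Mat threshold(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        // Keep only pixels whose HSV values fall inside the tuned green band.
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);
        return mask;
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat frame = Imgcodecs.imread("sample.jpg"); // any sample target image
        Mat mask = threshold(frame);
        System.out.println("Bright pixels: " + Core.countNonZero(mask));
    }
}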
Re: Vision Software comparison
We ran everything on a PandaBoard last year, but we will be going with an ODROID board this year, most likely. We used OpenCV and wrote our own networking libraries. The networking stuff is pretty simple in C++, so it was easy enough to write our own and it gave us more flexibility.
The kids want to use Java this year, so that will require some adjustments in the networking code, but all of that stuff is fairly universal.
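A home-grown setup like that usually boils down to a small client on the coprocessor pushing results to the robot. A hypothetical Java counterpart to the server sketch earlier in the thread, with the address, port, and message format invented:

Code:
// Hypothetical coprocessor-side client: connect to the robot and send one
// line of vision results. Address, port, and format are placeholders.
import java.io.PrintWriter;
import java.net.Socket;

public class VisionSocketClient {
    public static void main(String[] args) throws Exception {
        // Placeholder robot address (FRC robots use 10.TE.AM.2 addressing).
        try (Socket socket = new Socket("10.0.0.2", 5800);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // A real loop would send fresh numbers from the vision pipeline.
            out.println("angle=3.2,distance=96.0");
        }
    }
}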
Re: Vision Software comparison
Our vision isn't very heavy on any processor except the cRIO's, so we use the dashboard. :D