OpenCV for the RoboRIO - Release

A few hours before kickoff and we're finally able to release this. We have been working very hard to provide a working OpenCV
solution for everyone, one that supports not only the Axis IP camera we all have, but also the Microsoft HD 3000 USB webcam.

Our GitHub page has everything to help you get started using OpenCV, including our 2014 vision code, which we used throughout
the 2014 season for hot target detection. The example supports reading images from a USB camera, an Axis IP camera, or
a file, simply by changing command line arguments.
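
To give a rough idea of how that works (the argument names and the camera URL below are placeholders, not necessarily what 2168_Vision_Example actually uses), source selection in OpenCV boils down to something like this:

// Minimal sketch only -- argument names and the Axis URL are placeholders.
#include <opencv2/opencv.hpp>
#include <string>

int main(int argc, char** argv)
{
    std::string source = (argc > 1) ? argv[1] : "usb";   // hypothetical argument
    cv::Mat frame;
    cv::VideoCapture cap;

    if (source == "usb")
        cap.open(0);                                     // first USB camera (e.g. the HD 3000)
    else if (source == "ip")
        cap.open("http://10.xx.yy.11/mjpg/video.mjpg");  // placeholder Axis MJPEG URL
    else
        frame = cv::imread(source);                      // treat the argument as a file path

    if (frame.empty() && cap.isOpened())
        cap.read(frame);

    if (frame.empty())
        return 1;                                        // nothing to process

    // ... run the target detection on 'frame' here ...
    return 0;
}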

We have successfully run this code on the RoboRIO and on a BeagleBone Black, so you can pick your poison. We also have it running
on the Tegra TK1 and will release a how-to for that very soon.

The OpenCV version we are providing is 2.4.10, which we patched to support the various camera settings of the
MS HD 3000 camera. It should also support other USB cameras, although we haven't tested them. And of course it supports the Axis
IP cameras.

The OpenCV libraries have been cross-compiled using GCC 4.6.3 with VFPv3 and NEON support.

Some of the options it was compiled with:
TBB, FFmpeg, GStreamer, Java bindings, JPEG, V4L2, GTK

The Java bindings mean you can use this from Java as well, although we have never tested it, so please report back if you do use it in Java.

Our GitHub page has everything to help you get set up using C++ (even though we program our robot in Java, we program our vision in C++).

Please let me know if you have any questions; we are here to help.

Link to Release: https://github.com/Team2168/2168_Vision_Example

The program will take an input image like input.jpg (provided by the camera) and produce an output like output.jpg.
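
For anyone curious what that processing involves, a typical pipeline for this kind of target detection looks roughly like the sketch below. This is not our exact 2014 algorithm, and the HSV threshold values are placeholders, but it shows the general shape of it:

// Rough illustration only -- threshold values are placeholders, not the 2168 values.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat input = cv::imread("input.jpg");
    if (input.empty())
        return 1;

    // Isolate the retroreflective tape by color, then outline what is found.
    cv::Mat hsv, mask;
    cv::cvtColor(input, hsv, CV_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(60, 100, 100), cv::Scalar(90, 255, 255), mask);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    cv::drawContours(input, contours, -1, cv::Scalar(0, 0, 255), 2);

    cv::imwrite("output.jpg", input);
    return 0;
}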

Regards,
Kevin

Happy New Year, and Happy New Season

And as always, if this is what you were waiting for, please provide your thoughts or any feedback you may have.

input.jpg

Looks very cool, we’ll definitely try the Java bindings out. How’s the performance on the RoboRIO?

Not too bad actually… we started doing more controlled vision performance tests a few weeks back, but then that got pushed behind some other priorities.

Our original tests were documented here: http://khengineering.github.io/RoboRio/vision/cameratest/

and still hold some valid data; however, the new OpenCV libraries we just produced are a bit more streamlined and should run a little faster.

It is too early to give a good recommendation on the Rio’s performance, but we have been able to get 320x240 at 15 fps without any lag using the Axis camera under 2014 game scenarios, and that was more than we needed for last year’s game.

Everything we have done so far shows that the IP camera outperforms the USB HD 3000 camera on the Rio, but I don’t want to officially state that yet until we do more controlled observations. We should have a new set of comparative data by the end of week 1 for both IP and USB cameras.

The main page where that information will be held is at http://khengineering.github.io/RoboRio/faq/vision/

Awesome work. I’ve been planning to attempt to compile OpenCV on the roboRIO itself once we finally get one, but this seems like a better way.

Wow. Looks really well done and documented. We’ll definitely be using this as a resource this season. I’m assuming there will be some vision challenge this year given the Microsoft camera beta testing.

Good work.

Good to know it works! We’ve got numpy packaged for the RoboRIO in opkg format, so OpenCV will probably be next on the list of things to package.

Thanks Kevin! This is a really valuable resource you have created.

While our whole team is brainstorming and prototyping, I’ve been “secretly” toying around with the control system. What I’ve achieved so far: I cloned the 2168_Vision_Example project from GitHub and successfully built it in Eclipse with the arm-frc-linux-gnueabi toolchain. Then I transferred the libraries and the binary onto the roboRIO and successfully executed it there. All the how-to steps are clearly documented in the project’s README.md file. Thank you very much, Kevin!

Next will be to modify 2168’s code until it breaks and then start prototyping something of my own :smiley:

Any way to do some processing with OpenCV and then display the results on the dashboard? (Processed image stream)

PS. We are using java.

I just tried this out today using the Java libraries and I got it to work.

After installing OpenCV on the roboRIO via the instructions on GitHub, I had to do some more work to get it working in Java.

First I had to add:

<var name="classpath" value="${classpath}:/path/to/opencv-2410.jar" />

to the end of the build.xml in the robot code Eclipse project (add it just before the closing tag). This includes the OpenCV classes in the jar file that is uploaded to the roboRIO.

To load the native library in the robot code I had to add this line:

System.load("/usr/local/lib/lib_OpenCV/java/opencv_java2410.so");

Put this somewhere where it will get executed before any OpenCV code (i.e., a static initializer). The path will obviously be different if you installed OpenCV somewhere else on the roboRIO.

I was able to capture and stream an image from the Microsoft Lifecam HD3000, but it is at a very low resolution and I can’t seem to change it. The same code works fine in Linux on my laptop. I’ll see if I can figure more out tomorrow.

I’m currently running two versions of OpenCV in Eclipse, the official 2.4.10 version and this one, and I’ve installed the Java version (for both) as described here. For some reason I’m getting an error with your version that I’m not getting with the 2.4.10 version from opencv.

It appears that this library only runs in 32-bit Java and not 64-bit. Is there any way to fix this without having to reinstall 32-bit Java?

This is awesome news. I haven’t had the time to test out Java myself (it is one item on a long to-do list of mine), but glad to hear it is working for you.

If you are having trouble changing the camera settings, try using the patch we added to OpenCV; we modified some of the v4l calls and exposed a new constructor on the VideoCapture class.

You can work off of the C++ example here, which will allow you to set FPS, image size, and some quality settings.
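
If you just want the general idea with the stock 2.4.x API, a minimal C++ sketch looks like the following (our patched constructor presumably takes these settings as arguments instead; check the example in the repo for its exact signature):

// Sketch using the stock OpenCV 2.4.x property API -- not our patched constructor.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                       // /dev/video0, e.g. the HD 3000
    if (!cap.isOpened())
        return 1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH,  320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS,          15);

    cv::Mat frame;
    while (cap.read(frame))
    {
        // ... process 'frame' ...
    }
    return 0;
}

Keep in mind that on some cameras and drivers the stock set() calls are silently ignored, which is exactly the sort of thing the patch is meant to address.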

Let me know if this works for you or not.

Are you trying to run this on your desktop or on an ARM-based processor?

The OpenCV libraries we provide are for ARM-based processors only, so they will only run on a RoboRIO, BeagleBone, etc., or anything running the ARMv7 soft EABI. They will not run on an Intel-based processor.

Please provide more details on where you are trying to run the program (where the error occurs) and what you have done up to that point, so that we can try to help figure it out.

Thanks for trying this out.

So, this happened. Not sure what went wrong. :I

We followed the instructions, and the Rio_Beagle directory was successfully created. However, the main.o that was inside was corrupted to kingdom-come.

I don’t think main.o is necessarily corrupted; it is a binary file, so you shouldn’t be trying to open it with a text editor.

Team 2481 is trying to get this working with our C++ project. Our project settings are what was generated by the FRC C++ plugin.

I’ll admit that we don’t have a lot of expertise when it comes to modifying the build/compiler/linker configuration.

We followed the instructions in the readme. The sample project compiles, but when we try to add the libraries to our project we aren’t able to compile. I’m including the build console and our .cproject file.

Any assistance would be greatly appreciated.

Thanks,

Team 2481

opencv_build_log.txt (28.2 KB)
cproject.txt (15.9 KB)

Nothing went wrong, actually. It looks like you successfully compiled the code. The two warnings you get are known warnings which everyone on Windows will receive; they do not affect the code output.

Main.o is an object file and is not human readable… it is not corrupt.

The file named 2168_Vision in that folder is the binary that you can transfer to the ARMv7 embedded device and run.

I don't see any obvious problems in your posted images.

If you have any problems, please report back.

Regards,
Kevin

The WPILib project is not set up properly. The linker is not directed to the OpenCV libraries supplied in the sample project's _Libraries folder. You will need to add that manually if you are starting from a WPI template robot project.

Take a look at the build settings in the sample project we supplied and modify the WPI build settings to include the OpenCV-specific changes, such as adding the OpenCV _Libraries path to your WPILib project and adding the -rpath option to the linker's miscellaneous settings.

The build settings in the Sample Project should be all that you need to get this up and running in your own project.

We double-checked all the settings again and did not find any problems there. We did, however, find the actual problem, and I’m nearly certain other people will run into this.

In the WPILib.h file on line 8 you will find the following.

#define REAL

In the core.hpp file on line 4132 you will find the following enum definition.

enum
    {
        NONE=0, //!< empty node
        INT=1, //!< an integer
        REAL=2, //!< floating-point number
        FLOAT=REAL, //!< synonym or REAL
        STR=3, //!< text string in UTF-8 encoding
        STRING=STR, //!< synonym for STR
        REF=4, //!< integer of size size_t. Typically used for storing complex dynamic structures where some elements reference the others
        SEQ=5, //!< sequence
        MAP=6, //!< mapping
        TYPE_MASK=7,
        FLOW=8, //!< compact representation of a sequence or mapping. Used only by YAML writer
        USER=16, //!< a registered object (e.g. a matrix)
        EMPTY=32, //!< empty structure (sequence or mapping)
        NAMED=64 //!< the node has a name (i.e. it is element of a mapping)
    };

The two definitions of REAL conflict. We removed the #define in WPILib.h and were able to compile.

As best I can tell, the #define REAL in WPILib is only used in examples provided for the simulator to distinguish between simulation and real mode, so it shouldn’t cause a problem to remove it.
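
If you would rather not edit WPILib.h itself, an #undef between the two includes should also work (we haven’t tested this; it’s just standard preprocessor behavior):

#include "WPILib.h"

// WPILib.h defines REAL as an empty macro, which collides with the
// REAL enumerator in OpenCV's core.hpp. Drop the macro before pulling
// in the OpenCV headers.
#undef REAL

#include <opencv2/opencv.hpp>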

Thanks,

Team 2481

Currently I’m running this on my laptop (desktop), but I have plans to migrate over to an ODROID (ARM processor) for competition.

As for what I’ve done thus far, I’ve already stated it: I installed it like the official version and ran some test code on it that works fine with the main version. I believe the version that’s up for download just hasn’t been properly packaged.

I wonder the same thing, but in C++. What would be the best way to do something similar to imshow(), but on the driver station’s dashboard?