Getting an ODROID Up and Running

After some research, we recently purchased an Odroid U2 for the purpose of on-robot vision processing. Our intention is to connect a USB webcam to this board and have it continuously monitor the playing field and send relevant information to the cRIO for targeting purposes (similar to what many teams have done with this and other boards).

From what I understand, the basic steps are:

  1. Install an operating system
  2. Install OpenCV
  3. Write some vision analysis code
  4. Set up “network tables” and somehow get this information to the cRIO

There are a ton of questions that I could ask at this point, so maybe I’ll just start at #1. What is the recommended operating system for an ARM board like this? It sounds like standard Ubuntu is out because it is x86 based. The people who make the Odroid are actively developing their own version of Ubuntu based on Linaro, but that whole situation sounds like a bit of a mess. There is also an Android OS developed by the same people, and Cyanogen officially supports the Odroid U2. Arch Linux ARM has a version, and I’m sure there are others.

I was hoping there would be a straightforward solution but it appears there are just a lot of options, each with pros and cons. The Ethernet and webcam requirements make me think a desktop OS is better suited for this task, but the ARM chips and lightweight environment make me think something based on Android would be better.

Anyhow, I have lots to learn here. Can someone with experience with these boards recommend a good direction to begin fumbling around in?

  1. We used the X2 board this year, booted with the latest version of Ubuntu. We did not have an IDE on it and instead did everything through the terminal. Ubuntu is used in nearly every computer science research group, compile times on it are fast, and best of all, it is free.

  2. Downloading the OpenCV libraries is pretty straightforward. You can easily figure it out from the website.

  3. I would start with simply grabbing an image from the webcam and going from there. I have a paper posted on here describing 2012’s vision program that I wrote. Although I don’t expect you to write a program that mathematically intensive anytime soon (linear algebra, eigenvalues and eigenvectors), it is a good start to the thought process of how to track a target.

  4. I give credit for this idea to my mentor; the environment we used is Qt. We sent a UDP message from the board, to the router on the robot, to the cRIO. I could send you an example portion of that if you’d like. I do not know how the cRIO programmer read the UDP message, but I’m sure it wasn’t terribly complicated and I could figure it out if you’d like.

I hope this helped. Our team has won two engineering excellence awards for our computer vision work and how we applied it.

If you have any questions, please PM me.

An update:

I have successfully installed Arch Linux ARM on the Odroid U2. They officially support the platform and provide all the packages necessary for what we’re doing here, which made things nice and simple. With Arch Linux there is no GUI so all interaction with the Odroid has to happen over Ethernet via ssh. I see this as a feature, as it means less overhead. The more CPU cycles available for vision processing the better.

Installing OpenCV was relatively simple. Arch provides a package for this.

I purchased a Logitech C615 webcam. This model was chosen for its price, availability, attempted future-proofness (hopefully it will remain available for a little while), and because the internet told me that it works with V4L (Video4Linux) without any issues. So far I’ve found this to be accurate.

I have successfully connected to the Odroid on my home network, written a basic C++ program, compiled it, and executed it.
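The program itself was nothing special; something as small as the following is enough to prove that g++ and the Arch opencv package are wired up correctly (an example along those lines, not the exact code I ran):

#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // CV_VERSION proves the headers are found; building a Mat proves the library links
    cv::Mat m = cv::Mat::zeros(4, 4, CV_8UC1);
    std::cout << "OpenCV " << CV_VERSION << ", created a "
              << m.rows << "x" << m.cols << " matrix" << std::endl;
    return 0;
}

Building it is a one-liner with something like g++ test.cpp -o test `pkg-config --cflags --libs opencv`.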

Next steps:

  • Capture a frame from the camera
  • Add a placeholder for vision analysis code. Students can deal with this later. :)
  • Figure out how to send this information to the cRIO

I like the idea above of using the Qt framework to deal with all the low-level details of sending data. Based on some research (this thread), the position information should be put in a datagram and transmitted to the cRIO as a UDP packet. I have no idea how to do this yet, but it sounds like a solid theory.

It looks like there are packages in Arch for Qt, so I will begin exploring and experimenting. Any tips or tricks are appreciated.

Setting everything up is the hard part (well, more boring than hard, in my mind), so congrats!

(I realize that I sent you a nearly identical message as a PM in response to yours, but if it stays a PM, then only you can learn.)

So, assuming everything is up and running properly, this simple C code should be able to display what the webcam is seeing; when you hit the escape key, it will exit and save the last frame wherever the executable is. It will also show you the FPS.

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
#include <time.h>
#include "timer.h"   // custom timing macros (DECLARE_TIMING, etc.), not part of OpenCV
#include <math.h>
#include <assert.h>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <sys/stat.h>
#include <sys/types.h>
#include <QtNetwork>  // only needed once the UDP sending code below is added

using namespace std;

#define IMAGE_HEIGHT 480
#define IMAGE_WIDTH 640

IplImage* img = NULL;  // filled in by cvQueryFrame(); the capture owns this buffer

char c;
char str[50];

CvFont font;

int main()
{
    cvNamedWindow("Image", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("Image", 750, 10);

    cvInitFont(&font, CV_FONT_HERSHEY_COMPLEX_SMALL, 0.75, 0.75, 0, 1, CV_AA);

    double frame_time_ms;
    double ave_frame_time_ms;

    DECLARE_TIMING(RGB_Timer);
    START_TIMING(RGB_Timer);

    CvCapture* capture0 = cvCreateCameraCapture( 0 );
    assert( capture0 );

    // ask the camera for the 640x480 resolution the code assumes
    cvSetCaptureProperty( capture0, CV_CAP_PROP_FRAME_WIDTH, IMAGE_WIDTH );
    cvSetCaptureProperty( capture0, CV_CAP_PROP_FRAME_HEIGHT, IMAGE_HEIGHT );

    while(1)
    {
        img = cvQueryFrame( capture0 );
        if (!img)
            break;  // camera unplugged or the read failed

        // measure this frame's time and write the FPS on the output image
        STOP_TIMING(RGB_Timer);
        frame_time_ms = GET_TIMING(RGB_Timer);
        ave_frame_time_ms = GET_AVERAGE_TIMING(RGB_Timer);
        if (frame_time_ms > 0 && ave_frame_time_ms > 0)
        {
            sprintf(str, "Current FPS = %.1f", 1000 / frame_time_ms);
            cvPutText(img, str, cvPoint(10, 40), &font, cvScalar(255, 0, 255, 0));
            sprintf(str, "Average FPS = %.1f", 1000 / ave_frame_time_ms);
            cvPutText(img, str, cvPoint(10, 20), &font, cvScalar(255, 0, 255, 0));
        }
        START_TIMING(RGB_Timer);

        cvShowImage("Image", img);

        c = cvWaitKey(10);
        if (c == 27)  // escape key pressed
        {
            break;
        }
    }

    if (img)
        cvSaveImage("./raw.png", img);  // save the last frame next to the executable

    cvReleaseCapture(&capture0);
    cvDestroyAllWindows();

    return 0;
}

For a UDP packet (with Qt):

First, include the header #include <QtNetwork> and declare a socket:

QUdpSocket udpSocket;

Then, once you have calculated a variable you want to send to the cRIO (distance, x rotation, etc.):

        QByteArray datagram = QByteArray::number(distance) + " "
                + QByteArray::number(Xrot) + " "
                + QByteArray::number((double)TargetType) + " ";

        // 0x0A110602 is 10.17.6.2, the cRIO's address on the robot network
        udpSocket.writeDatagram(datagram.data(), datagram.size(), QHostAddress(0x0A110602), 80);

Note: make sure both ends agree on the destination IP address and port. Port 80 is what we used, because the other ports we tried did not work.
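On the receiving end, the same pattern in Qt looks roughly like this (a sketch I have not run; the cRIO side would use whatever socket API its environment provides, but the bind/read/split idea is the same):

#include <QtNetwork>
#include <cstdio>

int main()
{
    QUdpSocket udpSocket;
    udpSocket.bind(80);  // same port the sender writes to; ports below 1024 usually need root on Linux

    while (true)  // run forever, like the robot code would
    {
        if (!udpSocket.waitForReadyRead(1000))  // block up to one second for a datagram
            continue;

        while (udpSocket.hasPendingDatagrams())
        {
            QByteArray datagram;
            datagram.resize(udpSocket.pendingDatagramSize());
            udpSocket.readDatagram(datagram.data(), datagram.size());

            // the sender packed "distance Xrot TargetType" separated by spaces
            QList<QByteArray> fields = datagram.split(' ');
            if (fields.size() >= 3)
            {
                double distance = fields[0].toDouble();
                double Xrot = fields[1].toDouble();
                double targetType = fields[2].toDouble();
                printf("dist=%f xrot=%f type=%f\n", distance, Xrot, targetType);
            }
        }
    }
}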

Hope this helped!

Side note: not being able to see the thread to which you are replying is rather annoying when you can’t remember exactly what was addressed.

Please do not use the old IplImage. The C++ interface has a newer cv::Mat that is more efficient and easier to work with.
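For example, with the C++ API the capture loop above shrinks to something like this (a quick sketch, not tested on the Odroid):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture capture(0);  // replaces CvCapture*; opens the first V4L camera
    if (!capture.isOpened())
    {
        printf("Could not open camera\n");
        return 1;
    }

    cv::Mat frame;  // replaces IplImage*; reference counted, freed automatically
    while (true)
    {
        capture >> frame;  // grab and decode one frame
        if (frame.empty())
            break;

        cv::imshow("Image", frame);
        if (cv::waitKey(10) == 27)  // escape key pressed
            break;
    }

    cv::imwrite("./raw.png", frame);  // save the last frame, as before
    return 0;
}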

I was going to ask about this. I read the same thing in the OpenCV documentation.

That code is from 2012, I apologize. It is also in C, not C++.

I’m attempting to use the new(ish) C++ interface for OpenCV and the Qt 5 framework. It seems that these require the use of CMake, a utility that does a lot, but in basic terms it tells the compiler and linker where to find the OpenCV and Qt libraries.

OpenCV has a good tutorial here. I used their code and was able to get the project up and running. Adding in Qt support has been more difficult: all attempts so far have resulted in compiler errors. I’m unsure of what #include statements to make and how to modify the CMakeLists.txt file to provide the correct references.
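For concreteness, from what I have pieced together the CMakeLists.txt is supposed to look something like this (untested on my end; the project and file names are placeholders, and Qt5Network is my guess for the UDP piece):

cmake_minimum_required(VERSION 2.8.11)
project(OdroidVision)

# find_package() hands the include paths and linker flags
# for each library to the compiler and linker
find_package(OpenCV REQUIRED)
find_package(Qt5Network REQUIRED)

add_executable(vision main.cpp)
target_link_libraries(vision ${OpenCV_LIBS} Qt5::Network)

In theory the Qt5::Network target carries its own include paths, so #include <QtNetwork> should resolve without extra flags.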

Any suggestions?

Yikes. I don’t use CMake (wow, after looking at the link, the word “make” looks very strange), so I can’t help you there.

What I do is go into the .pro file and add libraries and headers as needed, such as libfreenect for the Kinect.
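To give a rough idea, the relevant .pro lines look something like this (a sketch; the libfreenect paths are just the example above and will vary by system):

QT += core network    # pulls in the Qt modules you use, e.g. QtNetwork for QUdpSocket

# extra headers and libraries get appended like this:
INCLUDEPATH += /usr/include/libfreenect
LIBS += -lfreenect

Rerunning qmake then regenerates the Makefile with those flags the next time you build.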