Intended for a coprocessor (ODROID / Raspberry Pi)
Uses the OpenCV library
Written on Ubuntu 14.04 Linux
Designed for the Xbox 360 Kinect (could be adapted to any color, IR, or depth-map camera)
The program can track the field totes in three different modes: depth map, infrared, and color. This is the first time the team has used a depth map to aid tracking in competition; it allows more features to be extracted from the scene.
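To make "more features" concrete, here is a minimal sketch (not code from this repo; the function name and depth band are made up) of how a depth map lets you segment totes by distance alone, independent of lighting:

    // Hypothetical example: segment a tote by depth band instead of color.
    #include <opencv2/opencv.hpp>

    cv::Mat segmentByDepth(const cv::Mat &depth16) // CV_16UC1 Kinect depth map
    {
        cv::Mat mask;
        // Keep only pixels whose depth falls in a band where a tote is
        // expected to sit (values here are placeholders, not calibrated).
        cv::inRange(depth16, cv::Scalar(600), cv::Scalar(900), mask);
        // Remove speckle noise before extracting contours.
        cv::erode(mask, mask, cv::Mat());
        cv::dilate(mask, mask, cv::Mat());
        return mask; // CV_8UC1: 255 inside the band, 0 outside
    }

Because the mask depends only on distance, it keeps working when arena lighting changes, which is what makes the depth mode useful alongside color and IR.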
A few items are still being worked on: better color thresholds, an edge detector to replace the calibration image, transmitting camera images, multithreading, and so on.
To get the program working, you will need:
Ubuntu 14.04 Linux
Xbox 360 Kinect
C++ knowledge
Steps to get the program working:
Install the dependencies: sudo apt-get install build-essential libfreenect-dev qtcreator libopencv-dev
Clone the repository to your computer.
Open the vision2015.pro file with Qt Creator.
Configure Qt Creator to build the project for Desktop.
Open the source file demo.cpp in the Qt Creator sidebar to change your basic options.
The main function near the bottom of the file selects which mode to use. Call the color function for basic color tracking, or change that line to call any of the other tracking functions in the program; check tracker.hpp for the classes that contain the various vision trackers. You may need to change the thresholding values for your current lighting conditions (your environment or the day/night cycle). You can use my program multithresh (https://github.com/rr1706/multithresh) to select proper threshold values (stored in tracker.hpp).
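As a rough illustration of what those threshold values do (a hedged sketch, not the actual tracker code; the HSV bounds and function name are placeholders you would replace with values from multithresh):

    #include <opencv2/opencv.hpp>

    // Hypothetical color-thresholding step similar to what the trackers do.
    cv::Mat thresholdTote(const cv::Mat &bgr)
    {
        cv::Mat hsv, mask;
        cv::cvtColor(bgr, hsv, CV_BGR2HSV); // HSV separates hue from brightness
        // Placeholder bounds for a yellow tote; tune these per venue.
        cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), mask);
        return mask;
    }

If the lights at your venue differ from your shop, only the Scalar bounds should need to change.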
I haven’t tried to get it running on anything but my laptop and an ODROID-XU. We were thinking of looking into getting one (a Jetson) to potentially improve performance compared to the ODROID, but we haven’t focused on it yet. We may.
This looks exceptional. I have access to a Jetson, and once I get some time, I’ll see if I can get it working on the system. I’ll definitely reply back with performance stats.
Hello,
I have been trying to get this program running on an NVIDIA Jetson TK1 for the past few hours, but I can’t seem to get some of the dependencies to install. apt-get comes back with an error saying that every package except for build-essential cannot be located: “Unable to locate package x.” I have looked around on the internet for a solution and tried a few things, but none of them have worked. This may be more of a Linux question, but does anyone see an easy solution? I’m thinking about reflashing Linux and trying again.
apt-get gives those errors all the time when the repos aren’t up to date. What command are you running?
Also, I suggest that you compile OpenCV manually for the Jetson because it supports a crapload more optimizations than other boards. I don’t have a Jetson, but I’m sure it uses the same repositories as the other boards, like the ODROID.
The Jetson has CUDA support, one thing that should be used to its max! It’ll make your code run a lot faster!
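To show what that means in practice (only a sketch, not part of vision2015): if OpenCV is built with WITH_CUDA=ON, the gpu module in OpenCV 2.4 offers drop-in GPU versions of many routines:

    #include <opencv2/opencv.hpp>
    #include <opencv2/gpu/gpu.hpp>

    cv::Mat gpuThreshold(const cv::Mat &bgr)
    {
        cv::gpu::GpuMat d_bgr(bgr), d_gray, d_mask; // upload the frame to the GPU
        cv::gpu::cvtColor(d_bgr, d_gray, CV_BGR2GRAY);
        cv::gpu::threshold(d_gray, d_mask, 128.0, 255.0, cv::THRESH_BINARY);
        cv::Mat mask;
        d_mask.download(mask); // copy the result back to host memory
        return mask;
    }

You can call cv::gpu::getCudaEnabledDeviceCount() at startup to verify the CUDA build actually sees the Jetson’s GPU.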
I would love to see the performance of our code on the Jetson. Please, someone deliver. I’m willing to thread it and optimize it if someone will benchmark it for me.
I’ve tried apt-get update and apt-get upgrade; both completed just fine, and I still get the same error. I get the error from running:
sudo apt-get install libavcodec-dev
I get the error for a couple more packages too; that is just an example. I think all the problems I’m having are with OpenCV, so I will reflash Ubuntu and try again, this time compiling OpenCV myself.
Please try running apt-cache search libavcodec > out.txt and upload out.txt to this forum. That’ll search through the repo and possibly output a ton of text! If the package is found, you’re golden.
That returns:
“libavcodec-dev - Development files for libavcodec”
as well as several more lines like it describing other packages. I’m really confused now, because this means it found the package, right? But when I use apt-get to try to install it, it says it can’t be found.
It might be because I’m on the school network, now that I think of it. I’ll keep trying.
On the robot, or the driver station? Because if driver station, keep in mind rule G21:
During AUTO, DRIVE TEAMS must not directly or indirectly interact with ROBOTS or OPERATOR CONSOLES.
VIOLATION: FOUL and YELLOW CARD
FIRST salutes the creative and innovative ways in which Teams have interacted with their ROBOTS during AUTO in previous seasons, making the AUTO period more of a hybrid period due to indirect interaction with the OPERATOR CONSOLE. The RECYCLE RUSH AUTO Period, however, is meant to be truly autonomous and ROBOT or OPERATOR CONSOLE interaction (such as through webcam or Kinect™) are prohibited.
You have to use the libfreenect library to interface with the Kinect. If you look at our code, cmastudios made grabbing the RGB, IR, and depth map from the Kinect into a single line of code.
The RGB image is obviously in color, the IR image is grayscale, and the depth map is an interesting type of image: it is grayscale, but each pixel value represents depth rather than brightness.
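For anyone curious, a grab like that boils down to something along these lines using libfreenect’s synchronous wrapper (a sketch; the actual helper in our repo may be organized differently):

    #include <stdint.h>
    #include <opencv2/opencv.hpp>
    #include <libfreenect/libfreenect_sync.h>

    // Fetch one RGB frame and one depth frame from Kinect #0 as cv::Mats.
    bool grabFrames(cv::Mat &rgb, cv::Mat &depth)
    {
        void *videoBuf = 0, *depthBuf = 0;
        uint32_t ts;
        if (freenect_sync_get_video(&videoBuf, &ts, 0, FREENECT_VIDEO_RGB) != 0)
            return false; // 640x480, 3 bytes per pixel
        if (freenect_sync_get_depth(&depthBuf, &ts, 0, FREENECT_DEPTH_11BIT) != 0)
            return false; // 640x480, one 11-bit depth value per uint16 pixel
        // Wrap the driver's buffers, then copy into Mats we own.
        cv::Mat(480, 640, CV_8UC3, videoBuf).copyTo(rgb);
        cv::cvtColor(rgb, rgb, CV_RGB2BGR); // OpenCV expects BGR ordering
        cv::Mat(480, 640, CV_16UC1, depthBuf).copyTo(depth);
        return true;
    }

Requesting FREENECT_VIDEO_IR_8BIT instead of FREENECT_VIDEO_RGB gives you the grayscale IR image that the IR mode works from.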