1706 Vision Solution: Tracking totes in depth-map, color, and infrared
The Ratchet Rockers team 1706 has been hard at work this build season to bring you the public release of our vision solution.
INTRODUCTORY VIDEO: http://youtu.be/HYWgS2M8Zy4
Source code: https://rr1706.github.io/vision2015/

Specifications:
- C++ code
- Intended for a coprocessor (ODROID / Raspberry Pi)
- Uses the OpenCV library
- Written on Ubuntu 14.04 Linux
- Designed for the Xbox 360 Kinect (could be adapted to any color, IR, or depth-map camera)

The program can track the field totes in three different modes: depth map, infrared light, and color. This is the first time the team has used a depth map to aid tracking in a competition; it allows the retrieval of more features from the scene.

Explanation of depth logic: https://github.com/rr1706/vision2015...th-Explanation
Explanation of IR logic: https://github.com/rr1706/vision2015...IR-Explanation

A few items are still being worked on: better color thresholds, an edge detector to replace the calibration image, transmitting camera images, multithreading, and so on.

To get the program working, you will need:
- Ubuntu 14.04 Linux
- An Xbox 360 Kinect
- C++ knowledge

Steps to get the program working:
1. Install the dependencies: sudo apt-get install build-essential libfreenect-dev qtcreator libopencv-dev
2. Clone the repository to your computer.
3. Open the vision2015.pro file with Qt Creator.
4. Configure Qt Creator to build the project for Desktop.
5. Open demo.cpp in the Qt Creator sidebar to change your basic options. The main function near the bottom of the file selects which mode to use: call the color function for basic color tracking, or any other function available in the program. Check tracker.hpp for the classes that contain the various vision trackers.

You may need to change the thresholding values for your current lighting conditions (your environment or day/night cycle). You can use my program multithresh (https://github.com/rr1706/multithresh) to select proper threshold values (stored in tracker.hpp).
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Fantastic work. We'll be sure to give this a try.
Have you tried it on a Jetson by chance?
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Have you thought about using I2C communication to the roboRIO from the ODROID-C1 instead of sending a UDP message?
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
This looks exceptional. I have access to a Jetson, and once I get some time, I'll see if I can get it working on the system. I'll definitely reply back with performance stats.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
If I get a chance I'll see if I can get a GPU optimized version working to take full advantage of the Jetson's GPU.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Hello,
I have been trying to get this program running on an NVIDIA Jetson TK1 for the past few hours, but I can't seem to get some of the dependencies to install. apt comes back with "Unable to locate package x" for every package except build-essential. I have looked around on the internet for a solution and tried a few fixes, but none of them have worked. This may be more of a Linux question, but does anyone see an easy solution? I'm thinking about reflashing Linux and trying again.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Did you try sudo apt-get install libopencv-dev?
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Code:
sudo apt-get update

Also, I suggest that you compile OpenCV manually for the Jetson, because it supports far more optimizations than other boards. I don't have a Jetson, but I'm sure it uses the same repository as the other boards, like the ODROID. The Jetson has CUDA support, which should be used to its fullest! It'll make your code run a lot faster!
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
I would love to see the performance of our code on the Jetson. Please, someone deliver! I'm willing to thread it and optimize it if someone wants to benchmark it for me.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Quote:
sudo apt-get install libavcodec-dev

I get the error for a couple more packages too; that is just an example. I think all the problems I'm having are with OpenCV, so I will reflash Ubuntu and try again, this time compiling OpenCV myself.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
I get "libavcodec-dev - Development files for libavcodec" as well as several more lines like it describing other packages. I'm really confused now, because this means that it found the package, right? But when I use apt-get to try to install it, it says it can't be found. It might be because I'm on the school network, now that I think of it. I'll keep trying.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Now it found every one of the dependencies, no problem. No idea what was wrong, but thank you all for the help.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Hey everyone,
We have a question about using the Kinect in auto, but we can't figure out if it is supported by the new WPI code (C++).
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
You have to utilize the libfreenect library to interface with the Kinect. If you go to our code, cmastudios turned grabbing the RGB, IR, and depth map from the Kinect into one line of code.
https://github.com/rr1706/vision2015/tree/master/lib — the files are free.cpp and free.hpp. The RGB image is obviously in color, the IR image is grayscale, and the depth map is an interesting type of image: it is grayscale, where the pixel values are a representation of depth.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Thanks for sharing this. I did get your solution working on a Nvidia Jetson board yesterday.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
:D I have been stuck on a time-consuming project in the lab, so I haven't had time to configure it for the Jetson board lying on the table next to me. I just stare at it with desire...
How'd it do? On the ODROID we get a manageable fps, but it is rather laggy compared to our vision programs in the past, which were hitting 30 fps.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Adding the profile times in robot.log gives me an average frame rate of 3-4 fps. This is with X running and both the color and IR maps displaying on screen. There is a slight lag when putting your hand quickly in front of the camera; it is probably way less than a quarter second. Only two cores are being used.
It is much less laggy than anything I've been able to do with libpcl on the Jetson. With a little work, I think it could work for autonomous navigation.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
I didn't understand the profile times in the post above. With image display now turned off, I'm now seeing 20 fps.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
Image display is a rather computationally intensive task, to many people's surprise. I expected a decent jump in fps, but not that much. That is really encouraging to see, actually. Thank you so much for doing this.
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared
If you want to, go for it. @cmastudios informed me that they are now using vision in autonomous, which is exciting.

I wrote MATLAB code that is a basic implementation of A* in 2D. cmastudios converted it to C++, and then I changed his C++ code into a custom path-finding algorithm that takes robot width into consideration. That custom algorithm is currently being used on a robotics team at MST.

There is a step missing between the vision output and the path-finding input: converting the data structure of the vision output into one that A* can operate on. Usually that is a list of points in a finite, discrete grid that are deemed untraversable (obstacles). You cannot simply pass the centers of all detected objects to A*, because the objects (in this case totes) have a decent amount of width and length.

A big problem with converting from vision to path finding is precision. Yes, you can return every pixel that is an obstacle, but then your grid is extremely fine-grained, and path finding is O(n log n) if I remember correctly. cma utilized the GNU optimizer when we were toying with the idea of A* this past summer, and he got a 900x900 grid solved in about 1 ms (I forget the exact time) on a decent laptop.
Copyright © Chief Delphi