Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   1884 Vision Processing (http://www.chiefdelphi.com/forums/showthread.php?t=135653)

kylesusername 10-03-2015 16:06

1884 Vision Processing
 
Hi, I'm Kyle from the Griffins 1884 in London.

Here's what I've been working on for the past six weeks: it's a vision processing program, written in C, that performs blob detection on frames from any webcam supported by the Linux uvcvideo driver.

The goals of this project are to
  • Implement blob tracking
  • Send only useful data to the roboRIO
  • Be easily modifiable for future games and alternative strategies

It sends the blobs as UDP packets to the roboRIO, using the IP address and port defined in the header file (it's best to keep the port at 1884 ;) ).

We're running this software on a BeagleBone Black, with a 5 V regulator so that we can use the 2 A supply from the VRM. The webcam is a Logitech C270, and the operating system is the latest armhf Debian GNU/Linux image.

It works by capturing a frame and traversing it with a depth-first search, grouping together neighboring pixels whose colors are similar as determined by a color difference algorithm. Once grouped, the blobs are placed into an array to await filtering; for each blob, the size, the average X, the average Y and the average color are stored.

There are two levels of filtering. The first level removes all blobs smaller than the minimum size defined in the header, which is currently 1000. The second categorizes the remaining blobs by whether their average color fits within the thresholds defined in the header; categorization consists of setting a blob's type to a unique character and adding it to a separate array.

When it serializes the data before sending, it converts the bytes to network order, and on receiving the data it converts them back to host order. This avoids problems with endianness, so it can work on any system with a processor that GCC supports.

To compile, you need all of the libraries included in the header. The only tricky one to get is libv4l2, which you can clone from here:
http://git.linuxtv.org/v4l-utils.git
and then build by running:
./bootstrap.sh
./configure
make -j 4
make install

Example packet receiver code is provided in both C(++) and Java in the repository.

The coordinates are sent in a Cartesian plane with x = 0, y = 0 at the middle of the frame. You can define types of blobs as characters, so you can process multiple kinds of objects. This year, our strategy only involves tracking one thing, containers, hence currently it only tracks containers. However, it's very easy to modify; there's a tutorial in the documentation.

We are currently planning to use this during autonomous to move from the step containers area to the containers at the staging zone and get a container set.

Here's the repository:
https://gitlab.com/kylesusername/1884-vision-proc

I would love any feedback, and feel free to PM me if you need help setting it up. Feel free to use and modify it; it's free software under the LGPL v3.

See you on the field!
Kyle

