987 Kinect Implementation


By: Hjelstrom
New: 05-24-2012 10:59 AM
Updated: 05-24-2012 10:59 AM
Total downloads: 1825


This paper describes how team 987 integrated the Kinect into their robot.


Attached Files

  • How987UsedTheKinect.pdf

    uploaded: 05-24-2012 10:59 AM
    filetype: pdf
    filesize: 1.13MB
    downloads: 1823




Discussion


05-24-2012 11:10 AM

Hjelstrom


Re: paper: 987 Kinect Implementation

Here is a paper describing how we used the Kinect in our FRC robot. Feel free to ask questions!



05-24-2012 11:45 AM

Tom Bottiglieri


Re: paper: 987 Kinect Implementation

Thanks for sharing this. It's great you were able to get this all working with fairly little Linux experience.



05-24-2012 12:25 PM

stundt1


Re: paper: 987 Kinect Implementation

Awesome, I will read it a little later. Here are some questions I have.

When in the build season did you decide that you were going to use the Kinect to aid in vision tracking?

By when did you have the vision code fully functional?

-Steve



05-24-2012 12:52 PM

Jared Russell


Re: paper: 987 Kinect Implementation

Thanks for sharing! I know that we thought about doing something similar this year, but were scared off by the added complexity of having to power and interface with a second computing device. If it's a viable and strategically valuable option next year, we will definitely put some of the lessons learned in this whitepaper to good use!



05-24-2012 02:09 PM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by stundt1
Awesome, I will read it a little later. Here are some questions I have.

When in the build season did you decide that you were going to use the Kinect to aid in vision tracking?

By when did you have the vision code fully functional?

-Steve
We decided a couple of days in, and it took until the LA regional (week 3, I believe) to have the code fully functional. One reason we tried this is that we've done vision systems many times in the past, so we wanted to try something different.

It turned out to be a fantastic sensor and worked better than we could have hoped. It really feels like a generational leap in the amount and quality of information available to the robot and we only barely scratched the surface.



05-24-2012 03:00 PM

JesseK


Re: paper: 987 Kinect Implementation

Did you have any issues with the O/S not booting in a reasonable amount of time?



05-24-2012 03:10 PM

stundt1


Re: paper: 987 Kinect Implementation

We didn't have enough time to get our vision shooting fully working. If only we had a few more days with the robot!

Great job, 987. I was a fan of your robot and its accurate shooter.



05-24-2012 03:58 PM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by Jared341
Thanks for sharing! I know that we thought about doing something similar this year, but were scared off by the added complexity of having to power and interface with a second computing device. If it's a viable and strategically valuable option next year, we will definitely put some of the lessons learned in this whitepaper to good use!
Yeah, that is the hard part about this. If you read the "algorithms" we used, you can see that part turned out to be way easier than vision! We did have one match in Las Vegas where we sat dead for 30s while our cRIO tried to connect to the Pandaboard, so the extra complexity is definitely a risk. In an ideal world, NI would come out with a USB module and a port of OpenKinect!

Your team's vision system really inspired us to take another look at vision too, though. Using the dashboard to do the processing helps in so many ways. The biggest, I think, is that you can "see" what the algorithm is doing at all times. When we wanted to see what our Kinect code was doing, we had to drag a monitor, keyboard, mouse, and power inverter onto the field. It was kind of a nightmare.

If anyone can point us in the direction of a way to stream video (stream the frames that the kinect code renders) from the Pandaboard/Ubuntu to the SmartDashboard, that would be a huge improvement for this kind of control system. That would be a good offseason project.



05-24-2012 04:11 PM

JesseK


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by Hjelstrom
If anyone can point us in the direction of a way to stream video (stream the frames that the kinect code renders) from the Pandaboard/Ubuntu to the SmartDashboard, that would be a huge improvement for this kind of control system. That would be a good offseason project.
If the stream is a true stream, then this may be of help ... if you can't write code/scripting to make a stream of your own, then see below. ffmpeg (on Linux) may help create a stream; it's pretty versatile, but I've only used it to convert YouTube videos into local files.
www.videolan.org

Otherwise, you may wind up wrapping the images and then coming up with a simple Java display that shows the latest image from the socket. We did this in 2009 and 2012. It's another layer of complexity, so I'd recommend trying to get a video stream going first.
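A minimal sketch of the image-wrapping approach described above, assuming a plain TCP connection between the Pandaboard and the dashboard laptop. The names and the 4-byte big-endian length prefix are illustrative framing choices, not anything from the paper:

```python
# Sketch of the "wrap the images" idea: length-prefix each JPEG frame so a
# dashboard-side reader can split the TCP byte stream back into frames.
# (TCP itself has no message boundaries, so the reader must accumulate
# bytes and carry any incomplete tail over to the next recv() call.)
import struct

def pack_frame(jpeg_bytes):
    """Prefix one JPEG frame with its 4-byte big-endian length."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(buffer):
    """Split a receive buffer into complete frames; return (frames, leftover)."""
    frames = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # incomplete frame; wait for more bytes
        frames.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return frames, buffer
```

The display side would call `unpack_frames` on its growing buffer after each `recv()` and draw only the newest frame; dropping stale frames keeps latency down.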



05-24-2012 05:24 PM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by JesseK
Did you have any issues with the O/S not booting in a reasonable amount of time?
The Pandaboard and our program were always up and running before the cRIO. The only case where they weren't was when (we believe) the board was doing the equivalent of a "checkdisk" because power sometimes gets cut while it's running (someone just turns the robot off without shutting the Pandaboard down first). We put a surprising amount of work into just coming up with a way to safely and quickly shut the Pandaboard down.



05-24-2012 05:34 PM

connor.worley


Re: paper: 987 Kinect Implementation

Have you thought of using a ramdisk to solve the shutdown problem? This article looks like it could be useful.
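A related partial measure for surviving sudden power cuts, short of a full ramdisk root: keep the most write-heavy directories on tmpfs so they live in RAM and nothing on the SD card is mid-write when power drops. These fstab lines are illustrative (sizes and mount points are assumptions, not from any of the linked material):

```
# /etc/fstab -- keep volatile, write-heavy paths in RAM (contents are
# lost on power-off, which is fine for logs and scratch files)
tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
```

Anything the program must persist still needs to be written to the SD card deliberately (and synced), so this reduces rather than eliminates the corruption risk.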



05-24-2012 06:14 PM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by JesseK
If the stream is a true stream, then this may be of help ... if you can't write code/scripting to make a stream of your own, then see below. ffmpeg (on Linux) may help create a stream; it's pretty versatile, but I've only used it to convert YouTube videos into local files.
www.videolan.org

Otherwise, you may wind up wrapping the images and then coming up with a simple Java display that shows the latest image from the socket. We did this in 2009 and 2012. It's another layer of complexity, so I'd recommend trying to get a video stream going first.
Thanks for the pointers! Ideally we'd like to be able to take the frames that we would normally render to the viewport and send them to the dashboard. A C or C++ API that lets you just feed frames into it would be ideal. I am afraid of the CPU overhead of this, but it could be a good debugging tool that you just turn off in matches. Or maybe the second CPU core on the Pandaboard could handle it.



05-24-2012 06:35 PM

Joe Ross


Re: paper: 987 Kinect Implementation

You could show the Kinect display through SSH X11 forwarding. You'd have to play with port numbers to get something that will work on the field, however.
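A hedged sketch of that approach (the address, username, and program name are hypothetical, and which ports are usable depends on that season's field rules):

```shell
# Run the Kinect viewer on the Pandaboard but display its window on the
# laptop. Requires X11Forwarding enabled in the Pandaboard's sshd_config
# and an X server running on the laptop.
ssh -X ubuntu@10.9.87.12 ./kinect_viewer

# If sshd's default port 22 is blocked on the field, run sshd on an
# allowed port (1180 here is purely an example) and connect with:
ssh -X -p 1180 ubuntu@10.9.87.12 ./kinect_viewer
```

X11 forwarding sends uncompressed drawing traffic, so over the field radio it may be choppy; it's best treated as a pit/practice debugging tool.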



05-24-2012 08:26 PM

slijin


Re: paper: 987 Kinect Implementation

One of the things that I recall very vividly during Curie finals was that your turret was constantly in motion, and would turn to remain aimed at the region around the center backboard (although if you drove around long enough, it would eventually drift away).

Were you constantly auto-aiming your turret in the main control loop with the Pandaboard-processed data, or was that something else entirely? I ask because in your paper you say

Quote:
during tele-op the driver has a button he can hold to auto-aim



05-25-2012 07:54 AM

JesseK


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by Hjelstrom
We put a surprising amount of work into just coming up with a way to safely and quickly shut the Pandaboard down.
If you ever go into an IT-related industry or work on large multi-system software projects, graceful shutdown procedures will become the norm. At work we can get some nice graceful startup/shutdown times, but it takes many hours of tweaking.

As for using a ramdisk, that's not as straightforward as one might think. For one, most default Linux installs take 1-2 GB of total disk space, which would then have to fit into RAM (unsure about this Ubuntu image, though). Then, any changes made to the O/S or program settings would have to be re-compressed and re-deployed as the O/S image for the ramdisk to open at runtime. Usually the data directories (such as /home) and, in this case, FRC-related application directories (such as /opt) are NFS (network file system) mounted and actually located on another computer -- yet I wouldn't recommend that for a live FRC field environment. Ergo, the data directories would still have to go somewhere -- presumably on the SD card that's potentially causing the root issue anyway.

Interestingly, this type of platform (real-time processing on a dynamic system that has weight/space/power constraints) is perfect for a Net-booted disk-less architecture (assuming you're running with enough memory). It's even better when one considers scaling up to over 30,000 individual sensors. Unfortunately, the network has to be very, very (maybe even another very...) reliable for it to work.



05-25-2012 09:32 AM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by slijin
One of the things that I recall very vividly during Curie finals was that your turret was constantly in motion, and would turn to remain aimed at the region around the center backboard (although if you drove around long enough, it would eventually drift away).

Were you constantly auto-aiming your turret in the main control loop with the Pandaboard-processed data, or was that something else entirely? I ask because in your paper you say
Well, it's hard for me to say. The turret has several control modes: it can hold its current orientation, it can respond to a point-click command from the operator, it can be manually turned with the joystick, and it can auto-aim. Brandon, who is the operator and programmer, could tell you for sure, but I suspect you were seeing a combination of point-click commands with the turret "holding" in between (as the robot drives, the turret compensates).

We did use the turret very liberally. It could shoot in any direction with equivalent accuracy, so the driver often didn't worry about what orientation the robot was in; he'd just get within range and stop.



05-25-2012 10:58 AM

sebflippers


Re: paper: 987 Kinect Implementation

Thanks for the read. Some things that I am considering for next year (with the Pandaboard):
1. Use Arch Linux instead of Ubuntu. It is officially supported on OMAP chips. Also, you don't need to install any X server, so you get fast boot times (but only a command line).

Obviously, programming on the command line isn't very fun, so...
2. Install the Cloud9 IDE. You can program over the network, so there is no need to connect to the Pandaboard directly.

Well, anyway... just my 2 cents.



05-25-2012 11:09 AM

JesseK


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by sebflippers
Also, you don't need to install any X server, so you get fast boot times (but only a command line).
Start Linux in run level 3 instead of run level 5 -- this prevents the X server from starting, taking several seconds off the boot time. Development could then happen on demand by typing 'init 5' after bootup.
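For reference, a sketch of both variants. Note that Ubuntu of this era used Upstart, where runlevels 2-5 are identical by default, so there you disable the display manager job instead (the `lightdm` job name assumes a stock desktop image):

```shell
# Classic sysvinit: switch runlevels at runtime
sudo init 3   # stop X, text console only
sudo init 5   # bring the GUI back for development

# Ubuntu/Upstart equivalent: keep the display manager from starting at boot
echo manual | sudo tee /etc/init/lightdm.override
sudo start lightdm   # start the GUI on demand later
```

Deleting the override file restores the normal graphical boot.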



05-25-2012 12:04 PM

Hjelstrom


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by sebflippers
Thanks for the read. Some things that I am considering for next year (with the Pandaboard):
1. Use Arch Linux instead of Ubuntu. It is officially supported on OMAP chips. Also, you don't need to install any X server, so you get fast boot times (but only a command line).

Obviously, programming on the command line isn't very fun, so...
2. Install the Cloud9 IDE. You can program over the network, so there is no need to connect to the Pandaboard directly.

Well, anyway... just my 2 cents.
We really didn't have any trouble with the boot time for Ubuntu on the Pandaboard. Our program is running way before the cRIO is ready for it. Thanks for the tip on the Cloud9 thing; we'll definitely check that out. Code::Blocks was a pleasant surprise on Linux; I figured we'd be programming with gcc, a makefile, and some form of notepad.



05-25-2012 03:21 PM

connor.worley


Re: paper: 987 Kinect Implementation

Quote:
Originally Posted by JesseK
As for using a ramdisk, that's not as straightforward as one might think. For one, most default Linux installs take 1-2 GB of total disk space, which would then have to fit into RAM (unsure about this Ubuntu image, though). Then, any changes made to the O/S or program settings would have to be re-compressed and re-deployed as the O/S image for the ramdisk to open at runtime. Usually the data directories (such as /home) and, in this case, FRC-related application directories (such as /opt) are NFS (network file system) mounted and actually located on another computer -- yet I wouldn't recommend that for a live FRC field environment. Ergo, the data directories would still have to go somewhere -- presumably on the SD card that's potentially causing the root issue anyway.
I will have to play with VxWorks' nfsdLib. I've never seen the cRIO or switch brown out, but if they did, would the Pandaboard be able to recover?



05-25-2012 04:48 PM

sebflippers


Re: paper: 987 Kinect Implementation

To get an idea of what Cloud9 can do once you set it up, check this out:

http://youtu.be/z6b4zlh0IrE?t=9m30s



09-03-2013 10:44 PM

Chadfrom308


Re: paper: 987 Kinect Implementation

How hard is it to do the auto-aiming? Also, what sensors and calculations do you use for holding a target? I visited the Las Vegas competition and saw you guys aim while hanging, and I also heard you can aim while moving. How hard is it to do this, and well... how? They are great features to put in! Anyway, I also wanted to say that I am very impressed with your robot and especially your vision!






