
Hjelstrom 24-05-2012 12:07

paper: 987 Kinect Implementation
 
Thread created automatically to discuss a document in CD-Media.

987 Kinect Implementation by Hjelstrom

Hjelstrom 24-05-2012 12:10

Re: paper: 987 Kinect Implementation
 
Here is a paper describing how we used the Kinect in our FRC robot. Feel free to ask questions!

Tom Bottiglieri 24-05-2012 12:45

Re: paper: 987 Kinect Implementation
 
Thanks for sharing this. It's great you were able to get this all working with fairly little Linux experience.

stundt1 24-05-2012 13:25

Re: paper: 987 Kinect Implementation
 
Awesome, will read a little later. Here are some questions I have.

When in build season did you decide that you were going to use the Kinect to aid in vision tracking?

When did you have the vision code fully functional?

-Steve

Jared Russell 24-05-2012 13:52

Re: paper: 987 Kinect Implementation
 
Thanks for sharing! I know that we thought about doing something similar this year, but were scared off by the added complexity of having to power and interface with a second computing device. If it's a viable and strategically valuable option next year, we will definitely put some of the lessons learned in this whitepaper to good use!

Hjelstrom 24-05-2012 15:09

Re: paper: 987 Kinect Implementation
 
Quote:

Originally Posted by stundt1 (Post 1171358)
Awesome, will read a little later. Here are some questions I have.

When in build season did you decide that you were going to use the Kinect to aid in vision tracking?

When did you have the vision code fully functional?

-Steve

We decided a couple of days into build season, and it took until the LA regional (week 3, I believe) to have the code fully functional. One reason we tried this is that we've done vision systems many times in the past, so we wanted to try something different.

It turned out to be a fantastic sensor and worked better than we could have hoped. It really feels like a generational leap in the amount and quality of information available to the robot and we only barely scratched the surface.

JesseK 24-05-2012 16:00

Re: paper: 987 Kinect Implementation
 
Did you have any issues with the O/S not booting in a reasonable amount of time?

stundt1 24-05-2012 16:10

Re: paper: 987 Kinect Implementation
 
We didn't have enough time to get our vision-assisted shooting fully working. If only we had a few more days with the robot :P

Great job, 987. I was a fan of your robot and its accurate shooter.

Hjelstrom 24-05-2012 16:58

Re: paper: 987 Kinect Implementation
 
Quote:

Originally Posted by Jared341 (Post 1171371)
Thanks for sharing! I know that we thought about doing something similar this year, but were scared off by the added complexity of having to power and interface with a second computing device. If it's a viable and strategically valuable option next year, we will definitely put some of the lessons learned in this whitepaper to good use!

Yeah, that is the hard part about this. If you read the "algorithms" we used, you can see that part turned out to be way easier than vision! We did have one match in Las Vegas where we sat dead for 30 seconds while our cRIO tried to connect to the Pandaboard, so the extra complexity is definitely a risk. In an ideal world, NI would come out with a USB module and a port of OpenKinect!
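
In hindsight, a hard connect timeout on the robot side would have avoided sitting dead like that. A minimal sketch of the pattern (hypothetical host/port, and not the code we actually ran):

Code:

import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: reach the coprocessor with a hard timeout so the robot
// code never blocks a match waiting for it.
public class BoardLink {
    public static Socket tryConnect(String host, int port) {
        try {
            Socket sock = new Socket();
            // Give up after 2 seconds and run Kinect-less instead.
            sock.connect(new InetSocketAddress(host, port), 2000);
            return sock;
        } catch (Exception e) {
            return null;  // caller falls back to manual aiming
        }
    }
}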

Your team's vision system really inspired us to take another look at vision too, though. Using the dashboard to do the processing helps in so many ways. The biggest, I think, is that you can "see" what the algorithm is doing at all times. When we wanted to see what our Kinect code was doing, we had to drag a monitor, keyboard, mouse, and power inverter onto the field. It was kind of a nightmare.

If anyone can point us in the direction of a way to stream video (stream the frames that the kinect code renders) from the Pandaboard/Ubuntu to the SmartDashboard, that would be a huge improvement for this kind of control system. That would be a good offseason project.

JesseK 24-05-2012 17:11

Re: paper: 987 Kinect Implementation
 
Quote:

Originally Posted by Hjelstrom (Post 1171407)
If anyone can point us in the direction of a way to stream video (stream the frames that the kinect code renders) from the Pandaboard/Ubuntu to the SmartDashboard, that would be a huge improvement for this kind of control system. That would be a good offseason project.

If the stream is a true stream, then VLC (see the link below) may be of help. If you can't write code/scripting to make a stream of your own, ffmpeg (on Linux) may help create one; it's pretty versatile, but I've only used it to convert YouTube videos into local files.
www.videolan.org

Otherwise, you may wind up wrapping the images and then coming up with a simple Java display that shows the latest image from the socket. We did this in 2009 & 2012. It's another layer of complexity, so I'd recommend trying to get a video stream going first.
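
As a rough illustration of that last approach (a sketch only; the length-prefixed JPEG protocol, IP, and port are made up for illustration, not what we ran), a minimal Java viewer could look like:

Code:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.net.Socket;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;

// Minimal viewer: connect to the board, read length-prefixed JPEG
// frames, and repaint the latest one.
public class FrameViewer extends JPanel {
    private volatile BufferedImage latest;

    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (latest != null) {
            g.drawImage(latest, 0, 0, getWidth(), getHeight(), null);
        }
    }

    public static void main(String[] args) throws Exception {
        FrameViewer viewer = new FrameViewer();
        JFrame frame = new JFrame("Kinect frames");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(640, 480);
        frame.add(viewer);
        frame.setVisible(true);

        // Placeholder address and port for the Pandaboard.
        Socket sock = new Socket("10.9.87.11", 1180);
        DataInputStream in = new DataInputStream(sock.getInputStream());
        while (true) {
            int len = in.readInt();        // 4-byte big-endian frame size
            byte[] jpeg = new byte[len];
            in.readFully(jpeg);            // one complete JPEG frame
            viewer.latest = ImageIO.read(new ByteArrayInputStream(jpeg));
            viewer.repaint();
        }
    }
}

The read loop blocks on the socket in the main thread, separate from Swing's event thread, and just repaints as frames arrive; good enough for a debugging tool.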

Hjelstrom 24-05-2012 18:24

Re: paper: 987 Kinect Implementation
 
Quote:

Originally Posted by JesseK (Post 1171399)
Did you have any issues with the O/S not booting in a reasonable amount of time?

The Pandaboard and our program were always up and running before the cRIO. The only case where they weren't was when (we believe) the board was doing the equivalent of a "checkdisk," due to the way power sometimes gets cut while it's running (someone just turns the robot off without shutting the Pandaboard down first). We put a surprising amount of work into just coming up with a way to safely and quickly shut the Pandaboard down.

connor.worley 24-05-2012 18:34

Re: paper: 987 Kinect Implementation
 
Have you thought of using a ramdisk to solve the shutdown problem? This article looks like it could be useful.

Hjelstrom 24-05-2012 19:14

Re: paper: 987 Kinect Implementation
 
Quote:

Originally Posted by JesseK (Post 1171408)
If the stream is a true stream, then this may be of help ... if you can't write code/scripting to make a stream of your own, then see below. ffmpeg (on linux) may help create a stream; it's pretty versatile, but I've only used it to convert youtube videos into local files.
www.videolan.org

Otherwise, you may wind up wrapping the images and then coming up with a simple Java display that displays the latest image from the socket. We did this in 2009 & 2012. It's another layer of complexity, so I'd recommend trying to get a video stream going first.

Thanks for the pointers! Ideally we'd like to be able to take the frames that we would normally render to the viewport and send them to the dashboard. A C or C++ API that lets you just feed frames into it would be ideal. I am afraid of the CPU overhead of this, but it could be a good debugging tool that you just turn off in matches. Or maybe the second CPU on the Pandaboard could handle it.
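
To pin the idea down, the sending side could be as simple as the sketch below. It's in Java only to match a hypothetical Java viewer on the dashboard side (on the Pandaboard we'd really want this in C or C++), and the port and length-prefixed JPEG protocol are placeholders, not an existing API:

Code:

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import javax.imageio.ImageIO;

// Sketch of the sending side: accept one dashboard connection, then
// push each rendered frame as a length-prefixed JPEG.
public class FrameSender {
    private final DataOutputStream out;

    public FrameSender(int port) throws Exception {
        ServerSocket server = new ServerSocket(port);
        Socket client = server.accept();   // wait for the dashboard
        server.close();                    // only one client needed
        out = new DataOutputStream(client.getOutputStream());
    }

    // Call once per rendered frame. JPEG encoding keeps radio traffic
    // manageable at the cost of some CPU, so turn it off in matches.
    public void send(BufferedImage frame) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ImageIO.write(frame, "jpg", buf);
        out.writeInt(buf.size());          // 4-byte length prefix
        buf.writeTo(out);
        out.flush();
    }
}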

Joe Ross 24-05-2012 19:35

Re: paper: 987 Kinect Implementation
 
You could show the Kinect display through SSH X11 forwarding. You'd have to play with port numbers to get something that works on the field, however.

slijin 24-05-2012 21:26

Re: paper: 987 Kinect Implementation
 
One of the things that I recall very vividly during Curie finals was that your turret was constantly in motion, and would turn to remain aimed at the region around the center backboard (although if you drove around long enough, it would eventually drift away).

Were you constantly auto-aiming your turret in the main control loop with the Pandaboard-processed data, or was that something else entirely? I ask because in your paper you say:
Quote:

during tele-op the driver has a button he can hold to auto-aim

