#76
31-01-2012, 23:00
catacon
Registered User
FRC #1444 (Lightning Lancers)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2006
Location: St. Louis
Posts: 154
Re: Running the Kinect on the Robot.

Framerate isn't the best, but I think that mostly has to do with me displaying both video feeds onto a 1080p monitor. Obviously this won't be done on the robot. When I don't display the video feeds, the "framerate" or rather, the output, is much better.

I am using the IR and depth feeds.
#77
01-02-2012, 07:57
mwtidd
Registered User
AKA: mike
FRC #0319 (Big Bad Bob)
Team Role: Mentor
 
Join Date: Feb 2005
Rookie Year: 2003
Location: Boston, MA
Posts: 714
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by catacon View Post
Framerate isn't the best, but I think that mostly has to do with me displaying both video feeds onto a 1080p monitor. Obviously this won't be done on the robot. When I don't display the video feeds, the "framerate" or rather, the output, is much better.

I am using the IR and depth feeds.
I had the same results with the MS SDK. Disabling the video feed dropped CPU usage by 5%, which on an i7 quad core is a significant drop.

Thanks for the insight on Linux vs. Windows with OpenKinect. Unfortunately, I don't have a Linux box at my disposal right now.

It seems using the reflective tape is definitely better for finding the center of the target, and I think I'm probably going to use the same strategy. I use the RGB and depth feeds to find distance, because I believe the carpenter's tape is more reliable for the depth measurements.

I'm curious: have you tried your vision tracking with other shiny aluminum objects in the field of view? That's what killed me last year, forgetting about the reflections on the real field. Also, are you using a clear poly or smoked poly backboard? I'm trying to find someone who has taken a shot of the 1/2" smoked poly backboard with the Kinect. I have a feeling it will look closer to wood than clear poly.
__________________
"Never let your schooling interfere with your education" -Mark Twain
#78
01-02-2012, 11:25
catacon
Registered User
FRC #1444 (Lightning Lancers)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2006
Location: St. Louis
Posts: 154
Re: Running the Kinect on the Robot.

I am currently just using clear poly.

Since I am using the IR feed, many "shiny" things are of no concern, since they are only reflecting (humanly) visible light. The biggest issue comes from light sources that produce IR (e.g. incandescent bulbs). However, that is not hard to deal with, since you can easily filter out small objects and set up the algorithm to only look for rectangles.
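
For anyone wanting to do the same filtering in OpenCV, here is a minimal C++ sketch of that step. It assumes you already have an 8-bit binary image thresholded from the IR feed; the 500-pixel area cutoff and the approxPolyDP tolerance are placeholder numbers you would tune on your own images.

Code:
// Reject small blobs and keep only roughly rectangular contours.
// `binary` is an 8-bit single-channel image already thresholded from the IR feed.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point> > findTargetRectangles(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point> > contours, targets;
    cv::Mat work = binary.clone();               // findContours modifies its input
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i) {
        if (cv::contourArea(contours[i]) < 500.0)          // filter out small objects
            continue;
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contours[i], poly,
                         0.02 * cv::arcLength(contours[i], true), true);
        if (poly.size() == 4 && cv::isContourConvex(poly)) // keep only rectangles
            targets.push_back(poly);
    }
    return targets;
}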

I am using the retroreflective tape to find the target and then I look at the gaffers tape for the depth (the black stuff on the inside). It's not perfect yet, but I think I can sharpen it up a bit.
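
Here is roughly what that depth lookup can look like once you have the bounding box of the reflective rectangle, assuming the depth frame is raw 11-bit Kinect data in a 16-bit Mat. Sampling just inside the top edge of the box is one way to land on the tape rather than the open center of the hoop, and the raw-to-metres formula is the commonly quoted approximation from the OpenKinect wiki, not a calibrated model, so treat the result as a rough estimate.

Code:
// Estimate range to a found target from the raw depth frame.
// `depth` is a CV_16UC1 Mat of raw 11-bit Kinect values; `target` is the
// bounding box of the reflective rectangle found earlier.
#include <opencv2/opencv.hpp>

double estimateRangeMetres(const cv::Mat& depth, const cv::Rect& target)
{
    // Sample just inside the top edge of the box (on the tape/backboard),
    // not the open centre of the hoop.
    cv::Point sample(target.x + target.width / 2, target.y + 3);
    unsigned short raw = depth.at<unsigned short>(sample);
    if (raw >= 2047)                // 2047 means "no reading" in 11-bit data
        return -1.0;
    // Community approximation for raw 11-bit value -> metres (OpenKinect wiki).
    return 1.0 / (raw * -0.0030711016 + 3.3309495161);
}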

I did get OpenKinect to work on Windows, but it took some doing. After I used CMake to generate a Visual Studio solution, I had to go through and build each project individually (skipping some I didn't care about). There were also some silly errors, like it trying to build a C++ project as a C project, so I had to set those projects to C++ manually. But... it did finally work.
#79
28-11-2012, 23:42
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Running the Kinect on the Robot.

It's simple: just buy a small ARM computer. They're cheap. I use the Raspberry Pi: tons of developer resources, and it's just $35. You can edit a text file to overclock the CPU, GPU, or RAM. Get the Code::Blocks IDE and the freenect library.
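
For the overclock, the text file in question is /boot/config.txt. The values below are only an example, roughly the "Medium" preset that raspi-config offers; pick whatever your board runs stably.

Code:
# /boot/config.txt - example overclock (approximately raspi-config's "Medium" preset)
arm_freq=900
core_freq=250
sdram_freq=450
over_voltage=2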

Just open up a terminal and type:
sudo apt-get install codeblocks freenect openssh-server

Install openssh-server so that you can log in over the network and shut the Pi down cleanly with the command:

shutdown -h now

The Raspberry Pi should run on a "wide" range of voltages. To power the Kinect, get a step-up converter to around 24 volts, then use a switching or LDO linear regulator to bring that down to the clean 12 volts the Kinect expects. Just note that the Kinect requires about 1 A of current: someone posted that it requires 12 watts at 12 volts, and 12 W / 12 V = 1 A. That is all I know about this setup. I might use this, but because of my PHP knowledge, I am going to create a web point-and-click-to-attack service so that we can do things more accurately than any other team, even if they have the best drivers!
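
If it helps anyone get started, here is a minimal sketch of a first test with the freenect library once it is installed: grab one depth frame with the synchronous wrapper (link with -lfreenect_sync; the header path may differ depending on how the library was installed).

Code:
// Grab a single raw depth frame from the first Kinect using libfreenect's sync API.
// Build with something like: g++ grab_depth.cpp -o grab_depth -lfreenect_sync
#include <libfreenect/libfreenect_sync.h>
#include <stdint.h>
#include <cstdio>

int main()
{
    void*    frame = 0;
    uint32_t timestamp = 0;

    // Device index 0, raw 11-bit depth format.
    if (freenect_sync_get_depth(&frame, &timestamp, 0, FREENECT_DEPTH_11BIT) < 0) {
        std::fprintf(stderr, "No Kinect found or USB error\n");
        return 1;
    }

    const uint16_t* depth = static_cast<const uint16_t*>(frame);
    // The frame is 640x480; print the raw value at the image centre as a sanity check.
    std::printf("raw depth at centre: %d\n", (int)depth[240 * 640 + 320]);

    freenect_sync_stop();   // release the device
    return 0;
}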
Thank You!
#80
30-11-2012, 16:06
Golto
Registered User
AKA: Pat Plude
FRC #4572 (BArlow RobAutics)
Team Role: Mentor
 
Join Date: Oct 2006
Rookie Year: 2007
Location: Bethel, CT
Posts: 91
Re: Running the Kinect on the Robot.

One thing I can think of:

Possibly using another small computer, such as a Raspberry Pi board. As far as I recall, that is legal under the co-processor rules so long as it doesn't interface with the robot directly. The two boards could then communicate via I2C: the Pi would allow for some VERY high-level tracking and analysis, and I2C could send some of the tracking info back to the cRIO.

Just a thought.
#81
30-11-2012, 19:28
sebflippers
Registered User
FRC #2914
 
Join Date: Jan 2012
Location: dc
Posts: 56
Re: Running the Kinect on the Robot.

Our team has been working on this for a while, and we have gotten a Kinect feed on our PandaBoard. Creating the 640x480 depthImage uses ~60% CPU at 30 fps, but doing anything with the data (like OpenCV filtering) brought us down to <5 fps. 987 did this before, and they skipped 5 pixels at a time to achieve a reasonable framerate. We were thinking of just sending the raw depthImage (using OpenNI) to the driver station dashboard, processing it there with OpenCV, and sending the results back to the cRIO, all over Ethernet.
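
For reference, the pixel-skipping trick is easy to sketch: sample every 5th pixel of the 640x480 depth frame into a much smaller Mat before running any OpenCV filtering, so the later stages only touch 1/25th of the data. The names and the step of 5 here are just illustrative.

Code:
// Subsample a raw depth frame by taking every `step`-th pixel.
// `full` is assumed to be a CV_16UC1 Mat straight from the Kinect.
#include <opencv2/opencv.hpp>

cv::Mat subsampleDepth(const cv::Mat& full, int step = 5)
{
    cv::Mat sub(full.rows / step, full.cols / step, CV_16UC1);
    for (int y = 0; y < sub.rows; ++y)
        for (int x = 0; x < sub.cols; ++x)
            sub.at<unsigned short>(y, x) =
                full.at<unsigned short>(y * step, x * step);
    return sub;
}

cv::resize with INTER_NEAREST would do the same job in one call; the loop just makes the idea explicit.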

SPI and I2C are unnecessary. Just use Ethernet.
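
A minimal sketch of what "just use Ethernet" can look like from the coprocessor side: one small UDP packet per frame with the tracking result. The address, port, and text format below are made up for illustration; whatever listens on the robot or dashboard side has to agree on them.

Code:
// Send a tiny "bearing,range" UDP packet from the coprocessor over Ethernet.
// The IP (10.29.14.2 would be a cRIO under the usual 10.TE.AM.2 convention),
// the port, and the message format are all illustrative, not a standard.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

void sendTarget(double bearingDeg, double rangeM)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return;

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(1130);                      // made-up port number
    inet_pton(AF_INET, "10.29.14.2", &dest.sin_addr);   // hypothetical robot address

    char msg[64];
    int len = std::snprintf(msg, sizeof(msg), "%.1f,%.2f", bearingDeg, rangeM);
    sendto(sock, msg, len, 0, reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
}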

rPi is waaay too slow.
#82
01-12-2012, 13:37
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Running the Kinect on the Robot.

The Raspberry Pi should work for the plan I came up with: an HTTP/SSH/telnet server. The Pi contacts the controller computer, which validates the data and forwards it to the cRIO. If the terminal is running Windows 8, it is easily possible to create a JavaScript app with some sort of point-and-click-to-attack mechanism. In a competition that uses beanbags or balls, you could point and click with a mouse or touchscreen, and the robot would automatically go for the ball or beanbag and execute whatever needs to be done with it, for example shooting it, placing it, etc.
#83
05-12-2012, 17:52
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: Running the Kinect on the Robot.

Doing that will damage the Kinect to the point where you would have to send it back to Microsoft to have them fix it with their robots, or buy a new one. Before jumping to the conclusion that it's USB, so 5 volts will work, shouldn't you read the AC adapter? It says "12 Volt, 1.08 A". Powering it off 5 volts should do a nice amount of damage to your Kinect.