#16
05-01-2012, 17:45
davidthefat
Alumni
AKA: David Yoon
FRC #0589 (Falkons)
Team Role: Alumni
 
Join Date: Jan 2011
Rookie Year: 2010
Location: California
Posts: 792
Re: Running the Kinect on the Robot.

As far as I know, most of the depth processing is done on the Kinect itself; it just transfers the data and images to the PC or 360. Keep in mind, though, that you would have to find a way to power the laptop, and additional batteries are not allowed.
__________________
Do not say what can or cannot be done; instead, say what must be done for the task at hand to be accomplished.
#17
05-01-2012, 17:59
DjMaddius
Registered User
AKA: Matt Smith
FRC #2620 (Southgate Titans)
Team Role: Programmer
 
Join Date: Jan 2010
Rookie Year: 2009
Location: Southgate, Mi
Posts: 161
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by davidthefat View Post
As far as I know, most of the depth processing is done on the Kinect itself; it just transfers the data and images to the PC or 360. Keep in mind, though, that you would have to find a way to power the laptop, and additional batteries are not allowed.
5 volts from the Power Distribution Board shouldn't be a problem, as far as I know. Though I'm not the guy wiring it all, so I don't know the rules on that sort of thing.
#18
05-01-2012, 18:19
apalrd
More Torque!
AKA: Andrew Palardy (Most people call me Palardy)
VRC #3333
Team Role: College Student
 
Join Date: Mar 2009
Rookie Year: 2009
Location: Auburn Hills, MI
Posts: 1,347
Re: Running the Kinect on the Robot.

-The Kinect we are getting is a standard Kinect, including the AC adapter and cable thingy to connect directly to USB (you would probably need a 12v regulator for the robot)

-I would go with a single-board computer running Linux, and send the data to the cRIO via IP. You could send debug data to the driver station while you're at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor, and feed coordinates or other numerical data back to the robot at the highest rate possible (rough sketch at the end of this post).

-A laptop running Win7 will have high system requirements compared to an embedded single-board Linux machine, where you aren't running a GUI at all and can trim the background processes to just what you need.

-A laptop is very heavy. Just throwing that out there.

-As to powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12v regulator, since the robot batteries can sag fairly low. Laptop chargers usually run above 12v anyway (the one in front of me is 18.5), so you need a boost converter regardless.

-The FRC Kinect stuff wraps around the Microsoft Kinect SDK (which only runs on Win7), and feeds some stuff to the robot via the Driver Station, including all 20 skeletal coordinates that are tracked. To use the Kinect on the DS, you do not have to write ANY driver station end code; the data is all passed to the robot.
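
-To make the coordinates-over-IP bullet concrete, here's a rough, untested sketch of the co-processor side in Python. The cRIO address, port, and packet layout are placeholders; you'd match them to whatever your cRIO-side code expects.

Code:
# Co-processor side: push digested target data to the cRIO over UDP.
import socket
import struct

CRIO_IP = "10.5.89.2"   # 10.TE.AM.2 convention (team 589 shown as an example)
CRIO_PORT = 5800        # arbitrary port chosen for this sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(x, y, z):
    # Three big-endian floats per packet; the cRIO side unpacks the same 12 bytes
    sock.sendto(struct.pack(">fff", x, y, z), (CRIO_IP, CRIO_PORT))

send_target(1.2, 0.0, 3.4)   # e.g. ball at (1.2, 0.0, 3.4) in whatever units you pick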
__________________
Kettering University - Computer Engineering
Kettering Motorsports
Williams International - Commercial Engines - Controls and Accessories
FRC 33 - The Killer Bees - 2009-2012 Student, 2013-2014 Advisor
VEX IQ 3333 - The Bumble Bees - 2014+ Mentor

"Sometimes, the elegant implementation is a function. Not a method. Not a class. Not a framework. Just a function." ~ John Carmack
#19
05-01-2012, 19:34
innoying
Registered User
FRC #2264 (inceptus)
Team Role: Leadership
 
Join Date: Jan 2011
Rookie Year: 2010
Location: Plymouth, MN
Posts: 18
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by apalrd View Post
-I would go with a single-board computer running Linux, and send the data to the cRIO via IP. You could send debug data to the driver station while you're at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor, and feed coordinates or other numerical data back to the robot at the highest rate possible.
I was thinking about this as an option. We have some sponsors we could probably get custom hardware from to do this if we intend to run Linux. I was just suggesting the laptop because of the simplicity of setup. Though I agree the GUI and Windows in general are memory and CPU hogs. Linux would be best, but could prove to have issues since we would be using a non-official SDK.

Quote:
Originally Posted by apalrd View Post
As to powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12v regulator, since the robot batteries can sag fairly low. Laptop chargers usually run above 12v anyway (the one in front of me is 18.5), so you need a boost converter regardless.
Does anybody know what the Classmate's power supply is? (Even running Ubuntu on it would be an improvement.)
#20
05-01-2012, 20:09
bhaidet
Registered User
FRC #0435
Team Role: Mentor
 
Join Date: Dec 2010
Rookie Year: 2009
Location: Raleigh, NC
Posts: 33
Re: Running the Kinect on the Robot.

I was only wiring for half a summer, but I believe there is a DC->DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24v out. I do not know how much current you could pull from this thing and I do not remember what it's actually used for, but you should be able to solder up a step-down circuit to take this 24v to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and outboard pass transistor (maybe the MJ2955 if you want overkill safety without heavily heatsinking).
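
For reference, and assuming the usual LM317 resistor divider: Vout is about 1.25V * (1 + R2/R1), ignoring the small Iadj*R2 term. So for ~17.5v out of that 24v rail, R1 = 240 ohms gives R2 = 240 * (17.5/1.25 - 1), about 3.1k. Keep in mind the bare LM317 only sources around 1.5A, which is why you'd want the pass transistor for a laptop-sized load.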

On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of "color" (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field.

As to whether having your robot autonomously see other robots/tall game objects will be useful this year... that's still up for grabs until Saturday.
#21
05-01-2012, 20:21
apalrd
More Torque!
AKA: Andrew Palardy (Most people call me Palardy)
VRC #3333
Team Role: College Student
 
Join Date: Mar 2009
Rookie Year: 2009
Location: Auburn Hills, MI
Posts: 1,347
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by bhaidet View Post
I was only wiring for half a summer, but I believe there is a DC->DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24v out. I do not know how much current you could pull from this thing and I do not remember what it's actually used for, but you should be able to solder up a step-down circuit to take this 24v to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and outboard pass transistor (maybe the MJ2955 if you want overkill safety without heavily heatsinking).

On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of "color" (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field.

As to whether having your robot autonomously see other robots/tall game objects will be useful this year... that's still up for grabs until Saturday.
-The FRC PD board has regulated 12v and 24v supplies which are guaranteed down to 4.5v, but those are restricted to the cRIO and bridge only.
-There's also a 5v regulator; I don't think that one has any guarantee on it.

-The heat sink device of which you speak (which happens to weigh a whole 1/4lb, I weighed ours last season) reduced the (regulated) 12v down to 5v for the new radio. Confused yet?

-I would probably just find a single-board computer with either a 12v input or a car power supply, then a boost converter to 12v like the one on the PD board for the radio (guaranteed down to 4.5v or so)


-As for image data, the Kinect returns depth data as an image, so you could effectively process it for blobs like a normal 11-bit greyscale image. OpenCV has commonly been used for image processing, although I honestly haven't used it myself.
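
-If you want to try the blob idea, here's a rough, untested sketch with OpenCV's Python bindings, treating the depth map as a greyscale image. The depth band and area cutoff are made-up numbers you'd have to tune.

Code:
import cv2
import numpy as np

def find_blobs(depth_mm, near=500, far=3000):
    # Keep only pixels inside the depth band of interest, as a 0/255 mask
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # Note: some OpenCV versions return (image, contours, hierarchy) here
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        if cv2.contourArea(c) > 200:          # ignore specks
            x, y, w, h = cv2.boundingRect(c)
            blobs.append((x + w // 2, y + h // 2, w, h))
    return blobs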
__________________
Kettering University - Computer Engineering
Kettering Motorsports
Williams International - Commercial Engines - Controls and Accessories
FRC 33 - The Killer Bees - 2009-2012 Student, 2013-2014 Advisor
VEX IQ 3333 - The Bumble Bees - 2014+ Mentor

"Sometimes, the elegant implementation is a function. Not a method. Not a class. Not a framework. Just a function." ~ John Carmack
#22
05-01-2012, 22:35
bhaidet
Registered User
FRC #0435
Team Role: Mentor
 
Join Date: Dec 2010
Rookie Year: 2009
Location: Raleigh, NC
Posts: 33
Re: Running the Kinect on the Robot.

When I was on the programming team in the past, we were always limited to telling it a color and having it tell us the blob. Does the software you are talking about detect the color if you tell it how big of a blob you would like to find? (Assuming we want to know how far away the robot-sized blob is, not just find out how tall robots exactly 10 ft from the camera are.)

So the brick is a step down to 5v? Does that mean the step-up is embedded in the PDB? I remember looking up their inductive step-up circuit once and being very confused, but that was a long time ago. Do you know of any circuits that are a simple step-up? If we need less than double the battery voltage, we should be able to get away with a simple charge pump with a 555 or similar running the switching.
#23
06-01-2012, 08:05
Joe Johnson (Unsung FIRST Hero)
Engineer at Medrobotics
AKA: Dr. Joe
FRC #0088 (TJ2)
Team Role: Engineer
 
Join Date: May 2001
Rookie Year: 1996
Location: Raynham, MA
Posts: 2,648
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by apalrd View Post
<snip>

-As for image data, the Kinect returns depth data as an image, so you could effectively process it for blobs like a normal 11-bit greyscale image. OpenCV has commonly been used for image processing, although I honestly haven't used it myself.

No, not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved.

Which isn't to say that it can't be done; it can, but there is bit shifting and such involved. It is far from simply "get a distance image, ship it to an OpenCV routine, ..., here are all the interesting geometric shapes in the field of view".

By the way, I have been noodling on how I would find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y.

I think I would first use a very rough color filter (say, everything "near enough" to the color of interest, where "near enough" is a very wide tolerance). Second, I think I would pass a best-fit sphere through the 3D points for each of these candidate points (providing a center point and radius). Third, I would filter by radius (only looking for balls of radius X +/- tol). Finally, I would group and average the centers into logical individual balls (e.g. you can't have 2 red balls closer to each other than 2 radii).

It sounds like a lot but this is all integer math stuff for the most part. I think we could get a reasonable frame rate out of a board like the Panda Board.
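
For the curious, the sphere fit in step two boils down to a single linear least-squares solve, since x^2 + y^2 + z^2 = 2*c.p + (r^2 - |c|^2) is linear in the unknowns. A rough sketch (untested, and in floating point rather than the integer math I'd actually want on the board):

Code:
import numpy as np

def fit_sphere(pts):
    # pts: Nx3 array of candidate 3D points from the depth data
    A = np.c_[2.0 * pts, np.ones(len(pts))]   # rows of [2x, 2y, 2z, 1]
    b = (pts ** 2).sum(axis=1)                # x^2 + y^2 + z^2
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    center, k = sol[:3], sol[3]               # k = r^2 - |center|^2
    radius = np.sqrt(k + center.dot(center))
    return center, radius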

Cool stuff... ...there just are not enough hours in a day...

Joe J.
__________________
Joseph M. Johnson, Ph.D., P.E.
Mentor
Team #88, TJ2
#24
06-01-2012, 08:08
Gdeaver
Registered User
FRC #1640
Team Role: Mentor
 
Join Date: Mar 2004
Rookie Year: 2001
Location: West Chester, Pa.
Posts: 1,370
Re: Running the Kinect on the Robot.

Remember, you have 6 weeks to complete the programming projects. Do you really want to take on a low-level programming project during build?
#25
06-01-2012, 08:26
Jared Russell
Taking a year (mostly) off
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Team Role: Engineer
 
Join Date: Nov 2002
Rookie Year: 2001
Location: San Francisco, CA
Posts: 3,080
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by Joe Johnson View Post
No, not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved.
The OpenNI Linux driver and C/C++ wrappers can do this for you pretty painlessly.

Quote:
Originally Posted by Joe Johnson View Post
By the way, I have been noodling on how would I find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y.
As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk). For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - using this method than with 3D point cloud fitting.
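
Something like this rough, untested sketch with OpenCV's Python bindings; all the parameters are placeholders you'd tune on real footage (and the HOUGH_GRADIENT constant lives under cv2.cv in older versions):

Code:
import cv2

def find_balls(bgr, min_r=20, max_r=80):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)   # Hough is sensitive to noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    return [] if circles is None else circles[0]   # rows of (x, y, r)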
#26
06-01-2012, 08:48
Joe Johnson (Unsung FIRST Hero)
Engineer at Medrobotics
AKA: Dr. Joe
FRC #0088 (TJ2)
Team Role: Engineer
 
Join Date: May 2001
Rookie Year: 1996
Location: Raynham, MA
Posts: 2,648
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by Jared341 View Post
<snip>

As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk).

For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - using this method than with 3D point cloud fitting.
First, regarding "as long as Y = 'a distinct color not found/illegal on robots'": this is a pretty significant "as long as".

Second, regarding using standard image processing, my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy.

An FRC Robotics field is a pretty lousy lighting environment -- may be bright, may be dim, may be spots, may be colored lighting, ...

There were teams in the GA dome whose image processing algorithm ran fine during the day, but had fits after dark (and vice versa). Are you willing to live with the possibility that your algorithm runs fine on your division field but goes whacky on Einstein? Maybe but maybe not...

So... ...I think that the 3D points from the PrimeSense distance data are going to be more robust to ambient lighting conditions.

Joe J.
__________________
Joseph M. Johnson, Ph.D., P.E.
Mentor
Team #88, TJ2
#27
06-01-2012, 08:56
zaphodp.jensen
Alumni and Mentor of 3130
AKA: Pierce Jensen
FRC #3130 (East Ridge Robotics Ominous Raptors (E.R.R.O.R.'s))
Team Role: Mentor
 
Join Date: Oct 2009
Rookie Year: 2009
Location: Minnesota
Posts: 76
Re: Running the Kinect on the Robot.

I have a feeling that if you keep trying to fit more and more information through the TCP/IP port, you will start having lag. If you have a second USB port, I would use a USB-to-serial converter to pass filtered data directly to the cRIO at a high baud rate. This would be easier to set up than a TCP/IP port, imho.
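
A rough, untested sketch of the sending side with pyserial; the port name, baud rate, and packet format are placeholders for whatever the cRIO-side code expects:

Code:
import serial   # pyserial
import struct

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.05)

def send_target(x, y, z):
    # One header byte so the cRIO can resync if a byte gets dropped
    ser.write(b"\xaa" + struct.pack(">fff", x, y, z))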
#28
06-01-2012, 09:15
Joe Johnson (Unsung FIRST Hero)
Engineer at Medrobotics
AKA: Dr. Joe
FRC #0088 (TJ2)
Team Role: Engineer
 
Join Date: May 2001
Rookie Year: 1996
Location: Raynham, MA
Posts: 2,648
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by zaphodp.jensen View Post
I have a feeling that if you keep trying to fit more and more information through the TCP/IP port, you will start having lag. If you have a second USB port, I would use a USB-to-serial converter to pass filtered data directly to the cRIO at a high baud rate. This would be easier to set up than a TCP/IP port, imho.
I have heard mixed reviews on this topic and I don't know who to believe.

In each case, usually reliable sources tell me that 640x480 data (image and distance) CAN and CANNOT be reliably sent at 20-30 fps via the wireless router during a robot competition. Both sides are equally adamant that they are correct.

My problem is that if I guess wrong, I potentially don't find out until the first regional. Yikes!

So... ...my plan is that if we use it at all (and I am leaning toward not using it, at least this year), I want to do all the processing on the USB host (e.g. a Panda Board running an embedded-friendly distro of Linux), so we'd only be sending digested data via the TCP/IP link (e.g. the red ball is at coords X1,Y1,Z1; the blue ball is at coords X2,Y2,Z2; the floor is at Distance, Theta, Psi; a wall is at ...). It is hard to imagine that this would tax the link very much.
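
Back-of-the-envelope, assuming the depth map ships as 16 bits/pixel: 640 x 480 x 2 bytes x 30 fps is about 18 MB/s, roughly 147 Mbit/s before RGB or any protocol overhead, which is asking a lot of the field wireless. A handful of ball/wall coordinates per frame, by contrast, is a few hundred bytes per second.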

Joe J.
__________________
Joseph M. Johnson, Ph.D., P.E.
Mentor
Team #88, TJ2

#29
06-01-2012, 09:26
Jared Russell
Taking a year (mostly) off
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Team Role: Engineer
 
Join Date: Nov 2002
Rookie Year: 2001
Location: San Francisco, CA
Posts: 3,080
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by Joe Johnson View Post
First, regarding "as long as Y = 'a distinct color not found/illegal on robots'": this is a pretty significant "as long as".

Second, regarding using standard image processing, my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy.
The color threshold in this case would be used only to speed things up (throwing away pixels that are not conceivably part of the ball) and/or to differentiate between balls/spheres of different colors. As long as you can detect the color discontinuity at the edges of the ball with an edge detector (the first step of the Hough circle transform), you will still be able to detect the ball. The detection of color/intensity discontinuities is fairly robust to illumination (which is why it is under the hood of SIFT, SURF, and similar features).

There's no question that using the depth sensor would be even more robust, but I have performed reliable shape recognition using only RGB techniques in far less constrained environments than an FRC field (albeit with far more engineering time than I would be willing/able to devote to FRC programming).

#30
07-01-2012, 15:27
sjspry
Registered User
FRC #1984
Team Role: Programmer
 
Join Date: Jan 2011
Rookie Year: 2010
Location: Kansas
Posts: 125
Re: Running the Kinect on the Robot.

Firstly, it needs to be decided whether placing the Kinect on the robot will in some way enhance the robot more than using it during the hybrid period. Seeing as I can't really think of a reason why it would help to give feedback to your robot during hybrid, we'll assume putting it on the robot is the better idea.

But so far, most of the discussion focuses on interfacing the kinect to the cRIO on the robot, directly or indirectly. Here's why this is not a good idea:
  1. It will likely not be possible to interface the Kinect directly to the cRIO (reimplementing USB would not be possible without access to the FPGA, which we do not have; USB-serial communication would be too slow, and serial covers only a subset of the USB protocol, so it is not compatible anyway).
  2. The cRIO is slow (300MHz); performing image processing on it is probably a bad idea in the first place (at least in my team's experience).
  3. The additional co-computers (>1GHz; less with an uncompressed stream) have a chance of working, but are fairly expensive (at least for some teams; I know we don't have a spare $100-200 or more).

The problem is that I don't have any counterpoints. The fact that the Kinect uses a USB interface is a huge issue. Last year our team worked out a system to have an application on the driver station grab images from the Ethernet camera, do the processing on the laptop, and send back commands, but this only worked because we were able to bypass the cRIO entirely when doing our image transmission. To do something similar this season with the Kinect, you would need to convert the USB image stream to Ethernet... and at this point (due to the hardware required to do this), you might as well put a computer directly on the robot, which is list item #3.

So this turns into an argument of smart cRIO vs. dumb cRIO (in the dumb/smart terminal sense). Last year, our team had a dumb cRIO with a command framework that worked pretty well, interpreting commands sent back from the computer. This year, a similar system would be doable, but only by shelling out for an integrated system and using that to do the image processing.

The deciding factor becomes cost. While you might be able to go cheaper than a Panda Board, someone already mentioned Beagle Boards and similarly specced boards being too slow. It really depends on how worthwhile you think the depth data from the Kinect's IR camera will be. Personally, I don't think it will be that game-changing, seeing as you should know your distance from the basket based on where you start.

As for using it in hybrid mode...? Still seems rather useless, seeing as anything you might want to tell it would be static, and could be accomplished through more orthodox means (like switches on the robot or something). Our team will probably forgo the Kinect entirely, and might end up trying to sell it if we can't find an off-season project to put it in.
