19-01-2014, 21:27
faust1706 (FRC #1706, Ratchet Rockers)
Re: Autonomous and kinect doubt

Quote:
Originally Posted by rhp3794
I want to control the robot with kinect and maybe put it in the robot for ball and obstacle detection.
You can still use skeleton tracking to control the robot (even in autonomous, though you have to stay behind the line), but I personally feel a human would be better at driving with joysticks or a controller than by moving their body. That's just my opinion.

The Kinect has a depth camera, an IR emitter for that depth camera, an RGB camera, and a not-so-well-known IR camera. In my team's experience, the lighting on the field proved too much for the depth camera. If you still want to use it, be my guest, but you have been warned.

That leaves you with the IR and RGB cameras for tracking things. IR only works on things that either emit IR light or reflect it well; a ball falls in neither category, and a robot might not be as reflective as you'd like.

That leaves you with the RGB camera, and at that point you're getting no benefit from using the Kinect except for show. The Kinect is heavy compared to a webcam or the PlayStation Eye, and it needs to be powered somehow. Don't get me wrong, though, I love the Kinect. I've used it for vision the past two years, in IR, for the reflective tape.

Tracking a ball in RGB isn't too big a challenge, but it is a challenge nonetheless. A fundamental problem in computer vision is making your program work across different lighting environments. You'll find that the values you tune now become invalid within half an hour, and the values you get at school will not work on the field with all those spotlights. Keep that in mind.

You also have to consider where you want to do the vision processing: on the cRIO, on the driver station (sending the images over), or on a computer riding on the robot that relays the results to the cRIO. All of these are possible, and all of them are used by top-level teams.

You need to decide how you want to do vision before you can do anything else, really.

Something simple would be to point a camera (a Kinect or otherwise) at the floor where a ball needs to sit to be picked up and send that stream to your driver station, or point it where you shoot and send that stream over to help your drivers line up.

Reiteration: decide how you want to do computer vision before anything else.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."