Quote:
Originally Posted by daniel_dsouza
or should we use freenect and openCV with linux...
I cannot recommend this method enough. It is the method I use, with Ubuntu 12.10, and it has proven very effective. I have a paper on here that explains my program from last year; I simply used the Kinect as a camera, though. In the future (as in, I'm just starting now), I'm going to incorporate the depth camera to detect whether there is a blocker bot in our way and whether it is impairing my vision code. If it is, I'll send the distance it is away from us (if it is less than 5 feet or so), and the LabVIEW programmer will have a button that tells the robot to automatically go, say, 7 feet forward and 2 feet left.
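For anyone wondering what that depth check might look like, here's a minimal sketch using the libfreenect Python wrapper and numpy. The raw-depth-to-meters constants are a commonly cited approximation (not calibrated values), and the OBSTACLE_WINDOW region and 5-foot cutoff are placeholders I made up for illustration, not my actual code:

```python
import freenect
import numpy as np

# Assumed region of the 640x480 depth frame to watch for a blocker bot
# (a strip in the middle of the image) -- tune for your camera mounting.
OBSTACLE_WINDOW = (slice(200, 280), slice(280, 360))  # rows, cols

def raw_depth_to_meters(raw):
    """Approximate conversion of raw 11-bit Kinect depth to meters.
    The constants are a widely used approximation, not a calibration."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def blocker_distance_feet():
    """Return the distance (in feet) to the nearest object in the watched
    window, or None if nothing is closer than about 5 feet."""
    depth, _ = freenect.sync_get_depth()   # raw 11-bit depth frame
    window = depth[OBSTACLE_WINDOW]
    valid = window[window < 2047]          # 2047 means "no reading"
    if valid.size == 0:
        return None
    meters = raw_depth_to_meters(float(valid.min()))
    feet = meters * 3.281
    return feet if feet < 5.0 else None

if __name__ == "__main__":
    d = blocker_distance_feet()
    if d is not None:
        print("Blocker bot about %.1f feet away" % d)
```

The returned distance is what would get passed over to the LabVIEW side so the driver-assist button knows how far to offset the robot.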
Not robotics related: I really want to work on object reconstruction based on the depth-map readings after circling an object with the Kinect, then be able to move the reconstructed model around on the computer, spin it, and so on.
What I have done with the Kinect is use it as a mouse. I had it detect my hand, then defined a region in the frame; the center of my hand within that region corresponded to where the mouse was on the screen. Then, if the area of my hand decreased below a certain threshold (i.e., I made a fist), the mouse left-clicked. Pretty simple stuff.
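In case it helps anyone, here's a rough sketch of that idea with OpenCV, treating the Kinect's RGB stream as an ordinary camera. The skin-color HSV range, the area thresholds, the control region, and the use of pyautogui to drive the cursor are all my own assumptions for the sake of a self-contained example, not necessarily how my actual program did it:

```python
import cv2
import numpy as np
import pyautogui  # one of several ways to drive the cursor; an assumption here

# Placeholder thresholds -- these would need tuning for real lighting.
SKIN_LOW, SKIN_HIGH = (0, 40, 60), (25, 180, 255)   # assumed HSV skin range
FIST_AREA = 6000        # below this contour area, treat the hand as a fist
MIN_AREA = 1500         # below this, assume it's just noise, not a hand
REGION = (100, 60, 400, 300)                         # x, y, w, h of control region

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)   # Kinect RGB used as a plain camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = REGION
    roi = frame[y:y + h, x:x + w]

    # Crude skin segmentation to find the hand in the control region.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(SKIN_LOW), np.array(SKIN_HIGH))
    # Note: this return signature is for OpenCV 4.x.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        hand = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(hand)
        m = cv2.moments(hand)
        if m["m00"] > 0 and area > MIN_AREA:
            # Map the hand's center in the region to a point on the screen.
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            pyautogui.moveTo(cx / w * screen_w, cy / h * screen_h)
            if area < FIST_AREA:
                pyautogui.click()   # hand closed into a fist -> left click
                                    # (a real version would debounce this)

    cv2.imshow("hand mask", mask)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The key idea is just the mapping step: the hand centroid within REGION is scaled to screen coordinates, and a drop in contour area stands in for a click.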