Has anyone tried using a Kinect on the Raspberry Pi with OpenCV for vision tracking? If so, any good resources or places to start?
A simple Google search is a good place to start.
From there you can use OpenCV's kernel/filtering functions and send the results over a TCP connection to the Pi.
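To make the "TCP connection" part concrete, here is a minimal sketch of one way to do it: JSON over a plain socket, with a loopback server standing in for the Pi side and a client standing in for whatever reads the tracking data. The port, field names, and values are all made up for illustration.

```python
import json
import socket
import threading

HOST = "127.0.0.1"  # loopback stands in for the Pi's address on the robot network

def serve_one(sock):
    """Accept one client and send it a single JSON-encoded tracking result."""
    conn, _ = sock.accept()
    with conn:
        result = {"target_x": 12.5, "target_y": -3.0}  # made-up tracking data
        conn.sendall((json.dumps(result) + "\n").encode())

# "Pi" side: bind to an OS-assigned free port and serve in the background.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, 0))
port = server.getsockname()[1]
server.listen(1)
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

# Consumer side: connect, read one newline-terminated JSON message.
with socket.create_connection((HOST, port)) as client:
    received = json.loads(client.makefile().readline())
server.close()
print(received)
```

In a real setup the Pi would loop, sending one message per processed frame, and the consumer would reconnect on failure; this just shows the wire format round-tripping once.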
I tried downloading that software, but I ran into an issue when trying to include it in Python. I can also access the camera without using it. I was hoping to get access to the infrared stream and the camera for distance calculations.
I would recommend using libfreenect or OpenNI, both of which have Python wrappers that can easily be used with OpenCV. Personally, I would rather use one of these over the library Arhowk linked because they have a much larger user base and have been well tested.
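For the libfreenect route, a rough sketch of what the Python wrapper looks like is below. The `freenect.sync_get_depth()` call is from libfreenect's Python bindings and needs the hardware attached; the raw-to-metres formula is a commonly cited community approximation, not a per-device calibration, so treat the numbers as ballpark.

```python
def raw11_to_metres(raw):
    """Approximate distance in metres from an 11-bit Kinect disparity value.

    Uses a widely circulated libfreenect community approximation; accuracy
    varies per device. Raw values of 2047 mean "no reading".
    """
    if raw >= 1084:  # at/beyond this point the approximation is invalid
        return None
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def grab_depth_frame():
    """Fetch one raw depth frame via the libfreenect Python wrapper (needs a Kinect)."""
    import freenect  # part of libfreenect's python wrapper
    depth, _timestamp = freenect.sync_get_depth()  # array of raw 11-bit values
    return depth

# With a Kinect attached you could then do, e.g.:
#   depth = grab_depth_frame()
#   print(raw11_to_metres(int(depth[240, 320])))  # distance at the image centre
```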
I have personally used OpenNI on a Raspberry Pi 3 (though not from Python) and it has worked well, apart from some initial issues setting it up.
You used OpenNI to access most, if not all, of the Kinect's features on the Pi 3?
I have used it to get the RGB and depth images from the Kinect, and I assume the skeleton tracking works, but I haven’t tried it. OpenNI doesn’t let me control the tilt angle or the LEDs, but I didn’t need those features.
How well did the depth camera work? Was it easy to get the data back?
AFAIK, the quality of the depth image will be the same no matter what library is used. My use case was as part of ROS, so I didn’t use the same system to retrieve the data as you would. Some quick research suggests it should be easy to get the depth data to OpenCV, as OpenNI is directly supported by the VideoCapture class.
Thanks for the help, I’ll have to look into it when I get a chance.
Just curious, what were you planning on using the Kinect for?
We had a working Kinect system in LabVIEW, but then the rules made it illegal to control a robot with a Kinect in auton, so we haven’t pursued it in Python.