Hi, today I pulled out my old Kinect 360 and messed around with it a bit. I know the Kinect was used on robots back in 2012, but I'm curious whether it still has any use today. I imagine having a depth map would be hugely useful for image processing and such, and I've even seen full SLAM implementations using the Kinect. Is this still practical or worth doing?
Kinect on robots in 2012? Not that year, to my knowledge (though I could be wrong). It WAS on robots in a few later years, and on at least one driver station in 2014.
2012 had one Kinect off to the side of the field, which wasn’t used much at all. (Except by 254… anybody have that video link or do I have to go looking for it?)
Given the current state of vision etc… I’m not entirely sure it’s worth doing.
987 had the Kinect on their robot IIRC. But that was more an exception rather than the intended use that season.
Here’s the 254 video you’re looking for. Pure gold https://youtu.be/ZaOiaC0I8pY
I don’t think it is practical. The Kinect has been replaced by a wide variety of better sensors and better (less hacky) SDKs. I say this as a maintainer on libfreenect (and the author of a book on the Kinect). You’d be best served by using a RealSense D435 if you want something that will deliver depth, IR and RGB.
1706 used a Kinect for vision in both 2012 and 2013.
Using the IR camera was the main purpose (it doesn't blind drivers, isn't affected by colors in the background, etc.). Some depth-tracking code was developed post-2013 through 2015 (using the ASUS Xtion Pro, which has the same sensor as the Kinect), but it wasn't used much in practice due to the limited use case. Nowadays, if we want to use a camera, a decent webcam with its IR filter removed plus an array of IR LEDs seems to work well.
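For anyone curious what that approach looks like in code, here's a minimal sketch (my own, not 1706's actual code): with the IR filter removed and the target lit by IR LEDs, the target shows up as a bright blob on a dark background, so a plain threshold plus a centroid is often enough. The frame here is synthetic NumPy data; a real setup would grab grayscale frames from the webcam (e.g. with OpenCV's `VideoCapture`).

```python
import numpy as np

def find_target_centroid(gray_frame, threshold=200):
    """Return the (row, col) centroid of pixels brighter than threshold, or None."""
    mask = gray_frame >= threshold
    if not mask.any():
        return None  # no bright blob in view
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 120x160 "IR frame": dark background with one bright blob
# standing in for the IR-LED reflection off the target.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:50, 70:80] = 255

print(find_target_centroid(frame))  # -> (44.5, 74.5), the middle of the blob
```

The centroid of the bright region is what you'd feed into your aiming/steering logic; for multiple targets you'd swap the single centroid for connected-component labeling.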
This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.