Exactly, it's wasted money. It only costs about $57 to actually manufacture one, which includes the motors, cameras, mics, the casing, etc., most of which is unneeded. That's what I meant about it being overpriced. All you need is the cameras, so I am just making my own system.
The BOM number being thrown around from TechInsights is an estimate based only on the cost of components. It does not include manufacturing costs, R&D recoupment, retail margins, or other overhead. Those costs will be baked into any COTS solution.
The Kinect works very differently from a pair of cameras set up for stereo vision. It uses an IR projector to illuminate the scene with a dot pattern; the reflections are then observed by an IR camera. The data from the sensor is then fed into a chip made by a company called PrimeSense that processes it into the depth image.
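For contrast, the stereo-camera approach mentioned above recovers depth by triangulation from disparity. Here is a minimal sketch of that relationship; the focal length, baseline, and disparity values are illustrative assumptions, not numbers from any specific camera:

```python
# Depth from disparity for a rectified stereo camera pair: Z = f * B / d.
# All numeric values below are made-up examples for illustration.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in meters for a feature seen in both cameras.

    disparity_px: horizontal pixel offset of the feature between the
                  left and right images
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A feature shifted 40 px between views, with a 700 px focal
    # length and a 10 cm baseline, works out to 1.75 m away.
    print(depth_from_disparity(40, 700.0, 0.10))
```

The practical catch, and part of why the Kinect's projected pattern helps, is that stereo needs matching features in both images; the dot pattern guarantees texture even on blank walls.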
I also wanted to address something you posted in the other thread about the Kinect, suggesting that similar things have been around in robotics for a while. I am not aware of any prior solutions that provided depth information with no processing at this resolution (in all 3 dimensions) for anywhere near the $150 price point. If you do know of such a device I would love a link to further info.
I do not want to argue. I said the technology is nothing new; I never said it was cheap. The fundamental ideas have been used for quite a long time; in fact, they're covered in this book: http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10138. An image is projected onto the object, which the camera picks up and processes. Nothing new.
Ok, that is a valid point. The basics of the approach are nothing new.
Most prior implementations of this approach either used visible light, which has the downside of being, well, visible (among other drawbacks), or used lasers, which are expensive.
Now, I had the idea of using an IR laser when I first investigated 3D imaging, but I found out that they are EXPENSIVE. But I read that the laser in a CD/DVD reader is an IR laser; I would have to find out. Using a champagne glass's "neck" works fine as a prism. IDK, I might consider it again.
Yes.
Thanks for the link. There is a comment at the end of the article that had the same doubts I did so I dug into the answer a little further.
From the FAQ section (http://www.primesense.com/?p=535) of the PrimeSense website (core technology behind the Kinect):
“The PrimeSensor™ technology is based on PrimeSense’s patent pending Light Coding™ technology. PrimeSense™ is not infringing any depth measurement patents such as patents related to Time-of-Flight or Structured Light – these are different technologies, which have not yet proven viable for mass consumer market.”
Looks like my theory about structured light was wrong as well.
As it relates to FRC rules on lasers, this was also on the PrimeSense website:
“The PrimeSensor™ Reference Design incorporates a class 1 laser device and has been certified by independent laboratories for class 1 compliancy in accordance with IEC 60825-1.”
Although it is an eye-safe laser, it is still a laser whose light is emitted outside the device (unlike in a ring laser gyro, where the laser light never leaves the device). This would make it illegal per last season's rules.
While I don't have the rule in front of me, I remember there being a rule in 2009 prohibiting IR transmission on robots. I believe one of my team members was looking into using a setup much like the Kinect for goal tracking.
As we speak, someone has already mounted a Kinect on a robot and made it work: http://www.engadget.com/2010/11/17/kinect-sensor-bolted-to-an-irobot-create-starts-looking-for-tro/