Quote:
Originally Posted by Tom Bottiglieri
I agree, doing the processing with a local coprocessor is the way to go. You need to have additional electronics no matter what to deal with pulling the data, so why not spend a bit more and throw a whole Linux at it?
There are a bunch of low cost ARM based boards out there that can act as USB hosts. The panda board, beagle board, and beagle bone are all TI OMAP (TI's mobile device system on chip offering) dev boards. I assume they have enough horsepower to do the necessary CV on the depth maps, but I wouldn't use it without doing a bit more research.
Our team was looking into the Panda Board. I know some folks who are using the Panda Board with the PrimeSense sensor in the Kinect. To say the Panda Board has "enough horsepower" is a judgment call. From what I hear, yes, you can get the Kinect driver to work, and you can (of course) get OpenCV to run under a Linux OS, but you have to be smart. It is easy to use up all of the processor's horsepower.
Rumor has it that a board with a slightly less powerful CPU, the Beagle Board, managed only single-digit frame rates with the Kinect. That is only a report from the interwebs, but it does back up the claim that you have to be careful.
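To give a feel for what "being smart" might look like, here is a rough sketch of the capture loop I have in mind. It is not something I have run on a Panda Board; it assumes OpenCV was built with its OpenNI backend so the Kinect shows up as a VideoCapture, and the downsampling is just one illustrative way to keep the load down:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Assumes OpenCV was built with OpenNI support and the Kinect
    // driver is working on the board -- both are assumptions here.
    cv::VideoCapture capture(cv::CAP_OPENNI);
    if (!capture.isOpened()) {
        std::fprintf(stderr, "Could not open the Kinect via OpenNI\n");
        return 1;
    }

    cv::Mat depth, small;
    double start = (double)cv::getTickCount();
    int frames = 0;

    for (;;) {
        if (!capture.grab())
            continue;
        // The depth map comes back as a 16-bit image in millimeters.
        capture.retrieve(depth, cv::CAP_OPENNI_DEPTH_MAP);

        // Work at quarter resolution so the ARM core does not get
        // buried; full 640x480 processing is the kind of thing that
        // drags these boards down to single-digit frame rates.
        cv::pyrDown(depth, small);
        cv::pyrDown(small, small);

        // ... vision work on 'small' goes here ...

        if (++frames % 30 == 0) {
            double sec = ((double)cv::getTickCount() - start) / cv::getTickFrequency();
            std::printf("%.1f fps\n", frames / sec);
        }
    }
    return 0;
}

The point is not the specific numbers, just that you budget the pixels you process and keep an eye on the actual frame rate instead of assuming the board can keep up.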
That said, I think that if it can be managed, the Kinect could be an awesome sensor on a FIRST robot (find a ball, find the floor, find a wall, find the corner... ...get ball, put into corner...). It is going to happen. I am just not sure it will be this year (or, if it is, only a handful of teams will manage it, imho).
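For the "find a ball" kind of task, something as crude as thresholding the depth map and grabbing the biggest nearby blob may be enough to get started. Again, just a sketch; the depth window and area cutoff are numbers I made up:

#include <opencv2/opencv.hpp>
#include <vector>

// A crude "find the nearest blob" pass on a Kinect depth map
// (CV_16UC1, millimeters). The 600-1500 mm window and the area
// cutoff are made-up numbers -- tune them for the real target.
bool findNearestBlob(const cv::Mat& depthMM, cv::Point2d& center)
{
    cv::Mat mask;
    cv::inRange(depthMM, cv::Scalar(600), cv::Scalar(1500), mask);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int best = -1;
    double bestArea = 200.0;                 // ignore tiny specks
    for (size_t i = 0; i < contours.size(); ++i) {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = (int)i; }
    }
    if (best < 0)
        return false;

    cv::Moments m = cv::moments(contours[best]);
    center = cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);
    return true;
}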
Joe J.