Quote:
Originally Posted by Invictus3593
I had heard that you could only do 2 camera feeds in c++, but obviously that's not true.
I had one other question. I had heard that the Kinect has an accelerometer in it and I was wondering if anyone has tried to use an accelerometer to compute where their robot is on the field. It may be a fun idea to collaborate on, if no one has done it yet! If there's enough interest, I'll create another thread for this idea, just let me know!
It is very possible to show 3 camera feeds in C++ (2 from the Kinect, 1 from a webcam), and in C for that matter. I could send you a really good demo program that I wrote for our team meeting last week to teach our new programmers about OpenCV/computer vision. PM me and I'll send it to you. It's kind of long (~100 lines), so pasting it here would really stretch out this post.
You're right about the Kinect having an accelerometer, but I honestly have no idea how to read it. I emailed my mentor who helps me with vision programming and he sent me this link:
http://www.youtube.com/watch?v=c9bWpE4tm-o. It doesn't give any info in the description, but the fact that it has been done is encouraging.

I don't know much about the Kinect SDK, so I can't help you there.
Instead of using the accelerometer, we use a gyro for orientation. What we have been working on is using our vision solution in place of the gyro, or at least as a check on it. To get all 6 desired values (x, y, and z displacement, plus pitch, roll, and yaw), I'd suggest using pose estimation. I digress, however. I would love to work on a project like this (using the accelerometer readings from the Kinect) with anyone interested.