Re: Running the Kinect on the Robot.
The framerate isn't the best, but I think that mostly has to do with me displaying both video feeds on a 1080p monitor. Obviously that won't be done on the robot. When I don't display the video feeds, the framerate (or rather, the processing output rate) is much better.
I am using the IR and depth feeds. |
Re: Running the Kinect on the Robot.
Quote:
Thanks for the insight on Linux vs. Windows with OpenKinect. Unfortunately, I don't have a Linux box at my disposal right now. It seems using the reflective tape is definitely better for finding the center of the target, and I'm probably going to use the same strategy. I use the RGB and depth feeds to find distance, because I believe the carpenter's tape is more reliable for the depth measurements. I'm curious: have you tried your vision tracking with other shiny aluminum objects in the field of view? That's what killed me last year, forgetting about the reflections on the legit field. Also, are you using a clear poly or smoked poly backboard? I'm trying to find someone who has taken a shot of the 1/2" smoked poly backboard with the Kinect. I have a feeling it will look closer to wood than clear poly. |
Re: Running the Kinect on the Robot.
I am currently just using clear poly.
Since I am using the IR feed, many "shiny" things are of no concern, since they are reflecting (humanly) visible light. The biggest issue comes from light sources that produce IR light (e.g. incandescent bulbs). However, this is not hard to deal with, since you can easily filter out small objects and set up the algorithm to only look for rectangles. I am using the retroreflective tape to find the target, and then I look at the gaffer's tape for the depth (the black stuff on the inside). It's not perfect yet, but I think I can sharpen it up a bit.

I did get OpenKinect to work on Windows, but it took some doing. After I used CMake to generate a Visual Studio solution, I had to go through and build each project individually (skipping some that I didn't care about). There were also some silly errors, like it trying to build a C++ project as a C project, so I had to set those projects to C++ manually. But... it did finally work. |
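The "filter out small objects and look for rectangles" step above can be sketched roughly as follows. This is only a hypothetical illustration, not the poster's actual code: it assumes candidate blobs have already been extracted from the thresholded IR image as (x, y, w, h) bounding boxes, and the area/aspect-ratio cutoffs are made-up values you would tune on real frames.

```python
# Hypothetical sketch of the "filter small objects, keep rectangles" step.
# Candidate blobs are assumed to arrive as (x, y, w, h) bounding boxes from
# an earlier IR-threshold/contour pass (not shown here).

MIN_AREA = 400          # assumed cutoff: drop specks from stray IR sources
MIN_ASPECT = 1.1        # assumed: the vision target is wider than it is tall
MAX_ASPECT = 2.0

def looks_like_target(box):
    """Return True if a bounding box is big enough and roughly target-shaped."""
    x, y, w, h = box
    if w * h < MIN_AREA:
        return False                    # too small: IR noise, bulb glints
    aspect = w / h
    return MIN_ASPECT <= aspect <= MAX_ASPECT

def find_targets(boxes):
    return [b for b in boxes if looks_like_target(b)]

# Example: one plausible target, one tiny glint, one tall vertical strip.
candidates = [(100, 80, 60, 40), (5, 5, 8, 8), (300, 20, 10, 120)]
print(find_targets(candidates))         # only the 60x40 box survives
```

The same idea is what rules out incandescent bulbs: they show up as small, roughly round blobs in IR, so an area-plus-aspect test discards them cheaply.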
Re: Running the Kinect on the Robot.
It's simple. Just buy a small ARM computer. They're cheap. I use the Raspberry Pi. Tons of developer resources. It's just $35. Edit a text file to overclock the CPU, GPU or RAM. Get the CodeBlocks IDE and the FreeNect Library.
Just open up a terminal and type in: sudo apt-get install codeblocks freenect openssh-server (include openssh-server so that you can shut it down remotely with the command: shutdown -h now). The Raspberry Pi should run on a 'wide' range of voltages. To power the Kinect, get a step-up converter/transformer to get 24 volts, then use a switching or LDO linear voltage regulator. Just note that the Kinect requires 1 A of current. Someone posted that it requires 12 watts at 12 volts; Ohm's law will solve this for you. That is all I know about this setup. I might use this, but because of my PHP knowledge, I am going to create a web point-and-click-to-attack protocol service so that we can do things more accurately than any other team, even if they have the best drivers! Thank you! |
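For reference, the arithmetic the post alludes to is the power law P = V × I rather than Ohm's law proper. Assuming the 12 W / 12 V figures quoted in the post are accurate, the current works out as:

```python
# Back-of-the-envelope check of the Kinect supply figures quoted above,
# assuming the post's 12 W / 12 V numbers are accurate.

power_watts = 12.0      # quoted Kinect power draw
voltage = 12.0          # quoted external supply voltage

current_amps = power_watts / voltage   # P = V * I  ->  I = P / V
print(current_amps)                    # -> 1.0, matching the "1 A" claim
```

So a regulator feeding the Kinect's 12 V rail should be rated for at least 1 A continuous, with some margin.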
Re: Running the Kinect on the Robot.
One thing that comes to mind:
Possibly using another small computer, such as a Raspberry Pi board. As far as I recall, this is legal under the co-processor rules, so long as it doesn't interface with the robot directly. The two boards could then communicate via I2C: the Pi would allow for some VERY high-level tracking and analysis, and I2C could send some of the tracking info back to the cRIO. Just a thought. |
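One way to sketch the "send tracking info back to the cRIO" idea is a small fixed-size message the co-processor fills in and writes over the bus. The field layout below is an assumption for illustration, not an FRC-defined protocol, and the actual I2C write (e.g. via the Pi's smbus interface) is hardware-specific and omitted:

```python
# Sketch of packing tracking results into a fixed-size message a co-processor
# could push over I2C to the cRIO. The field layout is an assumption, not an
# FRC-defined protocol; the actual bus write on the Pi is omitted.
import struct

def pack_tracking(target_found, x_offset_px, distance_mm):
    """Pack as: 1-byte found flag, signed 16-bit pixel offset, unsigned 16-bit distance."""
    return struct.pack(">bhH", 1 if target_found else 0, x_offset_px, distance_mm)

def unpack_tracking(msg):
    found, x_offset, distance = struct.unpack(">bhH", msg)
    return bool(found), x_offset, distance

msg = pack_tracking(True, -42, 1500)   # target 42 px left of center, 1.5 m away
print(len(msg))                        # -> 5 bytes per update
print(unpack_tracking(msg))            # -> (True, -42, 1500)
```

Keeping the message to a few bytes matters here, since I2C is far too slow for images but plenty fast for a handful of tracking numbers per frame.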
Re: Running the Kinect on the Robot.
Our team has been working on this for a while, and we have gotten a Kinect feed on our PandaBoard. Creating the 640x480 depth image uses ~60% (at 30 fps), but doing anything with the data (like OpenCV filtering) brought us down to <5 fps. 987 did this before, and they skipped 5 pixels at a time to achieve a reasonable framerate. We were thinking of just sending the raw depth image (using OpenNI) to the driver station dashboard (with OpenCV), and then back to the cRIO, all over Ethernet.
SPI and I2C are unnecessary. Just use Ethernet. The rPi is waaay too slow. |
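The "skipped 5 pixels at a time" trick mentioned above can be sketched with NumPy slicing, assuming the depth frame is exposed as a 480x640 array (as the OpenNI/OpenKinect Python bindings typically provide). The stand-in zero array here is just for illustration:

```python
# Sketch of the "skip 5 pixels at a time" downsampling trick, assuming the
# depth frame arrives as a 480x640 uint16 NumPy array.
import numpy as np

depth = np.zeros((480, 640), dtype=np.uint16)   # stand-in for a real frame

# Keep every 5th pixel in both dimensions: 25x less data to filter.
small = depth[::5, ::5]
print(small.shape)   # -> (96, 128)
```

A 25x reduction in pixels is what turns an unusable <5 fps filtering pipeline into something closer to real time, at the cost of coarser target edges.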
Re: Running the Kinect on the Robot.
The Raspberry Pi should work. The plan I came up with is an HTTPd/SSH/Telnetd server: the Pi contacts the controller computer, which validates the data and forwards it to the cRIO. If the terminal is running Windows 8, it is easily possible to create a JavaScript app with some sort of point-and-click-to-attack mechanism. In a competition that uses beanbags or balls, you could point and click with a mouse or touchscreen, and it would automatically make the robot go for the ball or beanbag and execute whatever needs to be done with it, for example shooting it, placing it, or tearing it. :] |
Re: Running the Kinect on the Robot.
Doing that will damage the Kinect to the point where you would have to go to Microsoft and have them fix it with their robots, or buy a new one. Before jumping to the conclusion that it's USB so 5 volts will work, shouldn't you read the AC adapter? It says '12 Volt 1.08A'. Powering it off 5 volts should do some nice amounts of damage to your Kinect. |