Team 3142: Week 5 preview

Here is our robot as of today (2/9). We still have a ton of work to do, but things are beginning to come together! And yes, the black box has a Kinect inside…

http://chrismarion.net/robotics/robot_2-9.jpg

Wow this is close to how we have ours :slight_smile:

But still not the same :slight_smile: Great job!!

Does the Kinect on the robot actually work?
And if so, what role does it play?

Like Josh was saying, how does the Kinect actually work on the robot?
And if you will, how did you get it to work?

Yep, the Kinect works - if you look closely, you can see how we put wax paper over the infrared laser projector, effectively blurring the light into a homogeneous field and taking advantage of the Kinect’s infrared camera. When coupled with the retroreflective tape on the targets, it gives us a tracking system completely immune to any changes in visible light. The Kinect is connected to an onboard computer, which does a huge amount of image processing to send a distance value (accurate to the inch) and information on how to move the turret (preliminary testing shows <1 degree accuracy) to the cRIO.
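The turret-aiming half of this boils down to converting the target blob's horizontal pixel offset into a pan angle using the camera's field of view. A minimal sketch of that idea - the FOV constant and function names here are illustrative assumptions, not the team's actual code:

```python
# Sketch (not 3142's actual code): turn a tracked blob's pixel position
# into a turret pan angle. The ~57 degree horizontal FOV is the commonly
# cited figure for the Kinect's camera; treat it as an assumption.
KINECT_HFOV_DEG = 57.0   # assumed horizontal field of view
IMAGE_WIDTH = 640        # pixels

def pan_angle(blob_center_x: float) -> float:
    """Degrees to rotate the turret so the blob sits on the image's center line."""
    offset_px = blob_center_x - IMAGE_WIDTH / 2
    degrees_per_px = KINECT_HFOV_DEG / IMAGE_WIDTH
    return offset_px * degrees_per_px

# A blob 40 px right of center needs roughly 3.6 degrees of pan.
```

A linear pixels-to-degrees mapping like this is only an approximation of the true projection, but at <1 degree of required accuracy it is usually close enough near the image center.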

In addition to running the Kinect, the onboard computer processes a feed from a second webcam which is pointed down at the field in front of the robot (not attached in this picture) and sends an augmented-reality video feed back to the driver station, highlighting the closest ball in green (or any other color) and overlaying information to help the driver line the robot up with the ball to pick it up.
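With a camera angled down at the floor, picking the "closest ball" can be as simple as taking the blob lowest in the frame (largest y coordinate, since image y grows downward). A toy sketch under that assumption - the blob tuples and helper name are hypothetical:

```python
# Sketch: assume upstream blob detection produced (x, y, radius) tuples
# in image coordinates. Lowest blob in the frame = nearest ball, for a
# downward-angled camera.
def closest_ball(balls):
    """Return the blob nearest the robot, or None if no balls were found."""
    if not balls:
        return None
    return max(balls, key=lambda b: b[1])  # largest y = lowest in the image

balls = [(100, 120, 14), (310, 400, 22), (500, 260, 17)]
# closest_ball(balls) picks (310, 400, 22) - the one to highlight in green
```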

Oh my- how does your team have enough time for this!?! Our 5 programmers (including me) have barely gotten tracking (of the backboard) working! Great job!

And what did you use for an onboard computer?

We have a mini-ITX computer running onboard; it has an Atom dual-core 1.8GHz processor, 2GB RAM, a 4GB SSD, and power regulating and supply equipment to allow it to run on anywhere from 6-34vDC (this circuitry also powers the Kinect). We’re all in love with it - it’s small, around six inches square, and draws maybe 30 watts and gets slightly warm while doing all its image processing.

Right now we’re running into framerate issues while processing both feeds - getting only around 6fps from the Kinect and 12 at best from the other camera… although I attribute this to the fact that we like our nice high resolutions too much - today we’ll try moving down from 640x480, and we should see a huge gain in performance. The lower the Kinect’s resolution, though, the coarser our distance readings will be. Right now we have a reliable resolution of around 2-3 inches, which is acceptable.
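The trade-off described above is easy to quantify: angular resolution per pixel scales inversely with image width, while per-frame processing cost scales roughly with pixel count. A quick illustrative calculation - the FOV figure is an assumed approximation, not a measured value:

```python
# Back-of-the-envelope sketch of the resolution/framerate trade-off.
KINECT_HFOV_DEG = 57.0  # assumed horizontal FOV

def degrees_per_pixel(image_width: int) -> float:
    """Angular width covered by one pixel at a given image width."""
    return KINECT_HFOV_DEG / image_width

# Halving the width from 640 to 320 doubles degrees-per-pixel (so
# target-width-based distance estimates coarsen proportionally), while
# the pixel count - and hence processing load - drops by about 4x.
for width in (640, 320):
    print(f"{width}px wide: {degrees_per_pixel(width):.4f} deg/px")
```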

I should also note that we’re running the excellent RoboRealm software on the computer to do our vision processing - it’s an amazingly powerful, GUI-based machine vision platform, and RoboRealm is giving free copies to any FIRST team who is interested (it’s normally $50) - if nothing else, your team should grab a copy to play around with in the off-season.

I’ve found that it’s a great tool for teaching the basics of machine vision, and it’s easy enough to learn the basics that it gets students who otherwise wouldn’t set foot near C++ excited about programming and computers. I worked with two freshmen on our team to develop the Kinect targeting software, and neither had any previous programming experience. There is also plenty of room for the experienced programmer - RoboRealm has a full-featured API, built-in HTTP and FTP servers, and you can write custom image processing modules and plugins in C, Python, or Visual Basic. Check it out - http://www.roborealm.com/

WOW. That’s awesome. Any way we could get a writeup of this after the season?

Holy!!! How much does that BEAST weigh?!

Looks great except for the weight distribution. Your CG is probably going to be a foot off the ground :confused:

I’m also very interested - as many probably are - in how you implemented the Kinect well enough to put it on your robot! :O I give props to your programmer; he’s got some real talent if he could do that.

Our programmers have been working on it as well - we use a little Linux computer called a BeagleBoard. What are you guys using?

Impressive-looking robot!

Have you calculated your center of gravity? It does look rather top-heavy, just judging from the picture.

Good luck!

We (1318) are doing a very similar thing, but with AXIS cameras instead of a Kinect, because we did not want to run Windows on our onboard computer and Microsoft has rules about using the Kinect with non-Windows devices.
We are also using a PICO-ITX P830.


I should also note that we were about 8 pounds overweight as it appeared in the original photo; the kids have since redesigned the robot to weigh less while simultaneously taking 8 inches off its height to help with the center of gravity. I personally feel that there were much better ways to go about reducing the weight and CG, because now our robot can only hold two balls (three in a pinch). The students feel that we can make up for this limitation with our shooting accuracy, however.

We are using a BeagleBone with the Kinect, communicating with the cRIO over Ethernet. It finds the center of the backboard(s), as well as our distance from the wall and the target’s aspect ratio. We are hoping to get a whitepaper together on it at some point this spring, as well as answering questions at the Queen City Regional.
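Distance-from-the-wall estimates like this typically come from a pinhole-camera model once the target's real width is known. A hedged sketch - the focal length is an assumed one-time calibration value and the 24-inch target width is an illustrative figure, not anyone's confirmed numbers:

```python
# Sketch: pinhole-model range estimate from the detected target's pixel width.
# Calibrate once at a known range: f_px = (pixel_width * distance) / real_width.
FOCAL_LENGTH_PX = 550.0   # assumed calibration constant
TARGET_WIDTH_IN = 24.0    # assumed real-world target width, inches

def distance_inches(target_pixel_width: float) -> float:
    """Estimated camera-to-target distance; shrinking pixel width = farther away."""
    return FOCAL_LENGTH_PX * TARGET_WIDTH_IN / target_pixel_width
```

The aspect ratio mentioned above falls out of the same detection step: a target viewed off-axis appears horizontally compressed, so the width/height ratio hints at how far off-center the robot sits.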

Thought I’d give this thread a bump.

I was wondering how this robot performed in competition.

Did the kinect work well?

Peyton

We tried that for Rebound Rumble, but our other main programmer and I couldn’t get it running in time. We got the hardware working - a PandaBoard and the Kinect, all powered by the robot - but we didn’t have the image processing experience to get the software working in time.

From what I know, the system never really worked properly. The targeting didn’t work, so the only reason for the Kinect to remain on the robot was to act as a camera for the driver station. 3142 did make it to the quarterfinals at one of their competitions, and they also made it to the finals at their other regional.

Source: http://www.thebluealliance.com/team/3142/2012