Kinect or no Kinect?

What are the benefits of the Kinect?

There probably aren't many.

I don’t see many benefits.
If you can get the Kinect working well, though, that's 15 more seconds of having a chance to shoot.
But if your team is typically good with the good old autonomous, I’d stick with that.

I don't think there are many benefits; it takes too long to figure out how to make it work.

Making it work should be no more difficult than installing the software, running the Driver Station, and, on the robot, using a Kinect joystick during auto and making calls that are the same as or similar to your teleop code. Remember that only one team is able to do this, and the others are fully auto. I hope all teams are able to do both.
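As a rough illustration of the "same as your teleop code" idea, here is a minimal, self-contained Java sketch. It does not use the real WPILib API; the class and method names are hypothetical stand-ins, and the Kinect arm positions are modeled as plain values in [-1, 1], the way a joystick axis would report them.

```java
// Hypothetical sketch: a hybrid-mode drive that reuses the same tank-drive
// math the teleop code would use, fed by Kinect "arm" values instead of
// joystick axes. Not the real WPILib API; names here are illustrative.
public class HybridDriveSketch {

    // Clamp a raw axis/arm value into the legal motor range [-1, 1].
    static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }

    // Identical to a teleop tankDrive(left, right) call: left arm drives
    // the left side, right arm drives the right side.
    static double[] tankDrive(double leftArm, double rightArm) {
        return new double[] { clamp(leftArm), clamp(rightArm) };
    }

    public static void main(String[] args) {
        // Left arm fully raised, right arm fully lowered -> spin in place.
        double[] out = tankDrive(1.0, -1.0);
        System.out.println(out[0] + " " + out[1]);
    }
}
```

The point of the sketch is that hybrid mode need not be a separate code path: if the Kinect values are exposed joystick-style, the autonomous loop can call the exact drive routine teleop already uses.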

I think the advantage of using the Kinect is to take advantage of driver input. Robot autos typically don’t know about other robots and it isn’t uncommon for them to interfere with one another or not respond well to unexpected field conditions such as a ball in the way. Drivers tend to correct for those conditions.

Greg McKaskle

The Kinect at regionals won't have an impact like it will at championship. My team was practicing with it: http://www.youtube.com/watch?v=W1O23R9wN-k&list=PL4AC8D593495DE25E&index=3&feature=plpp_video or http://www.youtube.com/watch?v=Vb7PVAy-a-o&list=PL4AC8D593495DE25E&index=1&feature=plpp_video

I can’t see it being much more effective than the old-fashioned autonomous.

A few members on my team (me included) think that the Kinect is unnecessary because, as we see it, if you start with two basketballs loaded on the robot and you start on the key, all you should have to do is point the robot toward a designated hoop during pre-match, and both shots should be made every time during autonomous.

Would anybody mind telling us why this is a bad idea (if it is)?

TYVM

I don't think that's a bad idea if you are able to develop a reliable shooting mechanism. However, I agree with Greg McKaskle that using the Kinect is a great way to avoid unforeseen problems like robots in the way or a ball on the ground. Also, with Kinect control you have a) the adaptability to make a last-minute strategy change based on other teams' successes or failures, and b) the potential to pick up a ball and shoot it if one of your alliance members misses…

Someone on my team asked this; my response:

You have Kinect enabled capabilities so that you can respond to whatever your opponent does with their Kinect. Maybe you can’t anticipate it now, but you will sure be glad you can counter it when you get to competition.

How long does it take to shoot? You could spend the rest of your time getting the balls from your bridge onto your side, which would be easier with the Kinect.

Benefits? For starters, if you learn to use it right, it's essentially an extra teleoperated mode. This can be very useful if you want to change things up a bit, or don't want a repetitive autonomous.

That’s a pretty huge advantage imo.

Also, is anyone else having huge lag issues with the Kinect? We got it driving today, but it was near impossible with its response time.

The lag is affected by the speed of the laptop it is connected to. You may be able to improve it by trimming down what is running on the laptop, making sure the Kinect is not in a USB hub, etc. But it takes a lot of processing to estimate the skeleton. The readme for the Kinect gives the system requirements in case you are interested.

Greg McKaskle

Do we know how much processing power we'll have at the Kinect station? Or will it be the team's OC (operator console) running the Kinect? (In which case, the laptop with the best processor should always be running the Kinect station.)

Per the kickoff video, the Kinect gets plugged into the team's computer.

The Kinect Station is a display/feedback device. The Kinect is not plugged into it, and team code doesn’t run on it. The Kinect Station is relayed the same Kinect Server data as the robot, and it draws the standard skeleton so that the driver knows if the Kinect sees their hands, feet, and other body parts.

The Kinect USB cable is plugged into the OC laptop and the Kinect Server on that computer does the work. Other OC computers have a Kinect Server running on them too, but without a Kinect plugged in, they will mostly snooze.

This configuration will also allow teams to know ahead of time how the Kinect will respond when you get to the field – it is your code, your computer, and your robot, you are just borrowing the field Kinect and using the display for driver feedback.

Greg McKaskle

We've decided to push it to our "would be cool" list, but only if we get our more important stuff done first. With little experience among our programmers, we are leery of trying lots of new things during build.

Wetzel

Just wondering: does anyone have any working hybrid code that tests the Kinect?
:frowning:

There are examples built into the updates for C++ and Java and in a ZIP file right next to the Kinect Server download for LabVIEW. I’ve been having the students on the teams I work with start with these to help understand the Kinect APIs provided by WPILib.