
View Full Version : How are they going to use Kinect?


GlassPrison142
10-10-2011, 18:19
The last question on http://www.usfirst.org/roboticsprograms/frc/kinect

Q: Can I put the Kinect on my robot to detect other robots or field elements?

While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot.

I don't know why, but this makes me a little worried. While the Kinect already has enough trouble as it is, will this affect the game this year for better or worse? Will it be a determining factor in the success of teams this year? How do you think they'll incorporate it into the game this year? My guess is something with the human players, but other than that they've got me stumped :confused:

Grim Tuesday
10-10-2011, 18:23
I think it's just a cool option for teams to use. Nothing required, and if people don't want to do it, then it won't ruin the game for them. Fingers crossed.

Chris is me
10-10-2011, 18:38
Kinect on the robot is not going to be easy to use. Interference is a concern.

plnyyanks
10-10-2011, 18:49
Kinect on the robot is not going to be easy to use. Interference is a concern.

Not to mention interfacing it with the cRIO... That could prove to be difficult.

ratdude747
10-10-2011, 18:53
I think it may be a challenge making the Kinect talk with the bot... even the new cRIO II doesn't have USB, IIRC (some of the non-FRC cRIOs have USB, though). Sure, you could route the two data wires to existing I/O, but even that could get hairy.

Tom Line
10-10-2011, 19:05
The kinect, for now, is an interface with the driver station. The software and hardware will be tested during Beta, and we will give you all the information we discover.

For now, it has specific requirements - USB port, Windows 7, etc. The Killer Bees' programmer posted fairly extensive information in the discussion thread about it.

I would like to politely suggest that the conversation about it should remain in that thread rather than starting another one. The thread is here:
http://www.chiefdelphi.com/forums/showthread.php?threadid=97684

Or, you can ask questions on the FIRST Robotics Beta Test Forum. It may take a while to get direct answers, as software is just being received by the teams and hardware receipt is still TBD.

Mr. Mike
10-10-2011, 20:36
It really needs to be a human player device. Do you really think a sensor designed to sit on a shelf can handle the shocks our FRC bots are subjected to?

connor.worley
10-10-2011, 22:12
No, the game itself won't involve it. Like they said, they're focusing on operator control. The idea of using the camera on the robot seems like an afterthought.

Jared Russell
10-10-2011, 23:21
(Assuming everything works as it is supposed to...)

Team 341 will be demonstrating our 2011 robot controlled via Kinect during the lunch break at Ramp Riot on November 12. Expect to see something that looks a little like an air traffic controller trying to land a tube onto the scoring rack...

We will record as well as webcast the results.

More details to come.

Andrew Lawrence
10-10-2011, 23:26
So is it for human player control, or driver control? Also, it'll be optional, if it's for the drivers, right?

Jared Russell
10-10-2011, 23:33
So is it for human player control, or driver control? Also, it'll be optional, if it's for the drivers, right?

Neither I nor any other beta tester has any better an idea of what will be done with the Kinect in 2012 than you do at this point. I speculate that this will be used for a hybrid/auto mode, but that is not substantiated by anything other than intuition.

Peter Matteson
11-10-2011, 09:54
I speculate that this will be used for a hybrid/auto mode, but that is not substantiated by anything other than intuition.

That sounds reasonable, rather than trying to control the bot for the whole match. I had originally figured it would be a human player thing when I saw the news.

gblake
11-10-2011, 10:30
I'm curious if the Kinect itself, or external software processing what the Kinect exposes at its APIs, will give more accurate results if the person/objects in its field of view have special shapes or colors.

Would gluing a white roll of toilet paper onto a trash can lid painted black create an easily discerned (better than a random human in front of a random background) target to track (depth and color)?

Would pink tennis balls stuck onto a person, who is standing in front of a flat green bedsheet, improve ones ability to track the person's motions?

Blake
PS: I'm sure that the answers to my questions exist somewhere out in the Internet information stew. My hunch is the CD folks reading this thread will do the research for me and supply a nice summary answer (plus a few red herrings of misinformation that I'll have to detect and filter out).

However, instead of thinking of myself as lazy right now, I'll choose to consider my questions good mentoring that inspires students to do research. ;)

Tom Bottiglieri
11-10-2011, 11:07
I'm curious if the Kinect itself, or external software processing what the Kinect exposes at its APIs, will give more accurate results if the person/objects in its field of view have special shapes or colors.

Would gluing a white roll of toilet paper onto a trash can lid painted black create an easily discerned (better than a random human in front of a random background) target to track (depth and color)?

Would pink tennis balls stuck onto a person, who is standing in front of a flat green bedsheet, improve ones ability to track the person's motions?

Blake
PS: I'm sure that the answers to my questions exist somewhere out in the Internet information stew. My hunch is the CD folks reading this thread will do the research for me and supply a nice summary answer (plus a few red herrings of misinformation that I'll have to detect and filter out).

However, instead of thinking of myself as lazy right now, I'll choose to consider my questions good mentoring that inspires students to do research. ;)
There are two pieces of software you can modify that bring the Kinect data to your bot. There will be a "server" running on the driver station PC. The server talks to the Microsoft Kinect SDK (http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/) for access to the sensor data. There will be a default build of this which pumps back some softball type of data to the bot (right now, it uses your arms as virtual joysticks). All indicators point to the source code being bundled with this, so you can modify it as you wish. I haven't had a chance to look at the source yet, so I can't comment on how much of it is custom vs. how much leverages MS's APIs (it's bundled in an MSI and seriously, who owns a Windows machine these days? :confused: )

You can also process parts of this on the cRIO. Once again, there will be some kind of basic out-of-the-box experience, but you are free to flip the switches and turn the knobs. You can receive all the skeletal data and build all of your detectors locally on the bot, if you wish.

So to answer your question: if you invested the time, you could probably increase your rate of positive detection by doing something wacky. The SDK provided by Microsoft is free to download.
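As a purely illustrative sketch of the default behavior described above (arms as virtual joysticks), here is roughly what mapping a skeleton joint to a joystick-style axis could look like. This is plain Python, not the actual KinectServer code; the upward-y convention and the 0.6 m arm length are assumptions for the sketch.

```python
def arm_to_axis(shoulder_y, hand_y, arm_length=0.6):
    """Map the hand's height relative to the shoulder onto a virtual
    joystick axis in [-1, 1]: hand at shoulder height -> 0, fully
    raised -> +1, fully lowered -> -1. Assumes y increases upward
    and positions are in meters (both assumptions for this sketch)."""
    if arm_length <= 0:
        raise ValueError("arm_length must be positive")
    raw = (hand_y - shoulder_y) / arm_length
    return max(-1.0, min(1.0, raw))  # clamp like a real joystick axis
```

The server would run something like this once per skeleton frame for each arm, then ship the two values down to the robot.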

apalrd
11-10-2011, 11:58
I ran the MSI installer under WINE on Ubuntu, and the source code is included in the install directory. It looks like it creates two C# projects (KinectServer and UDPDump). I can't really tell what UDPDump does (it's a single CS file, 36 lines long), and KinectServer looks like it wraps some MS code, does unit conversion, and sends the data over a socket.

Tom Bottiglieri
11-10-2011, 14:08
I ran the MSI installer under WINE on Ubuntu, and the source code is included in the install directory. It looks like it creates two C# projects (KinectServer and UDPDump). I can't really tell what UDPDump does (it's a single CS file, 36 lines long), and KinectServer looks like it wraps some MS code, does unit conversion, and sends the data over a socket.
UDPDump just prints everything it sees on port 1155.
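For anyone curious, a rough Python equivalent of that would be to bind the port and print whatever arrives. Port 1155 comes from the post above; the buffer size is an arbitrary choice, not taken from the real tool.

```python
import socket

KINECT_PORT = 1155  # the port UDPDump watches, per the post above

def open_dump_socket(port=KINECT_PORT):
    """Bind a UDP socket to the KinectServer data port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    return sock

def dump_next(sock):
    """Block until one datagram arrives; return (payload, sender)."""
    data, addr = sock.recvfrom(4096)
    return data, addr

if __name__ == "__main__":
    sock = open_dump_socket()
    while True:  # print everything seen on the port, UDPDump-style
        data, addr = dump_next(sock)
        print(addr, data)
```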

ChrisH
11-10-2011, 15:29
It really needs to be a human player device. Do you really think a sensor designed to sit on a shelf can handle the shocks our FRC bots are subjected to?

Why not? That is what we do with the bridges now: take hardware designed to sit on a shelf and put it in the very demanding environment of an FRC robot.





We all know how well that works :rolleyes:

Chexposito
11-10-2011, 15:57
Won't this make the rule about the coach not being able to control the robot a borderline call? I mean, they would have to be in the field of view in most setups.

Dancin103
11-10-2011, 16:55
Neither I nor any other beta tester has any better an idea of what will be done with the Kinect in 2012 than you do at this point. I speculate that this will be used for a hybrid/auto mode, but that is not substantiated by anything other than intuition.

That's what I was thinking: kind of like how auto mode ran in 2008, with more human interaction.

Josh Drake
11-10-2011, 17:03
http://www.usfirst.org/aboutus/pressroom/first-adds-kinect-for-2012
;)

DjMaddius
11-10-2011, 20:56
A good benefit of the Kinect on the driver station side might be creating a map of the field and knowing where all the other bots are during the competition. To do this you'd have to track each bot as it moves. It would be a daunting task, but it could be done. I believe the field is the main obstacle, though: the terrain will greatly affect what the Kinect sees, as will the height of the Kinect. I'd like to see this done. If I had the resources right now, I'd do it myself based on last year's game, but sadly I don't own my own field, haha.

Tetraman
11-10-2011, 21:39
I'd like to think that it's a simple answer: They are giving you a Kinect, and what you do with it will be up to you, worth an award and probably special recognition.

The quote "While the focus for Kinect in 2012 is at the operator level" is very important, as I'm sure it has more to do about the Kinect's slogan "You are the controller" than it is about making a robot mechanism. However, as it doesn't say use of the Kinect on the robot is prohibited, I will venture a guess that if you only get one Kinect from the Kit of Parts and you can use it on your robot rather than the 'focus', then the whole use of the Kinect is optional.

If the Kinect is optional, it will most likely be for the "Human player", or "Human Dancer", or "Actor", however it can be just as simple as you are given a Kinect, and whatever you want to do with it is up to you.

Josh Drake
12-10-2011, 07:09
From the above mentioned article:

"In the 2012 FIRST Robotics Competition, teams will be able to control robots via Kinect. They will be able to either program their robots to respond to their own custom gestures made by their human teammates, or use default code and gestures. The added ability for teams to customize the application of the Kinect sensor data is a valuable enhancement to the FRC experience."

Just to spell it out further.
;)

Gdeaver
12-10-2011, 07:50
The Kinect is a lot of tech rolled up into a $150 consumer device. One of the side effects of bringing it to that price point is that the camera is marginal. Teams should explore the camera's tolerance of bad lighting. The Miss Daisy crew should have good tracking at Ramp Riot. However, there are other venues that have terrible lighting in the driver area.

Bongle
12-10-2011, 10:16
I think the interference point is a good one (and probably a showstopping one as far as competition goes), but if you want to run the Kinect on a robot, you could certainly stick a laptop on the robot and attach the Kinect to it for some fun in-lab demos. Network communication over sockets with the cRIO is actually very easy in C++.

So it'd be something like this:
Kinect -> USB -> laptop that can talk to it
Laptop -> Ethernet -> cRIO

The laptop could do continuous transmission of the depth and picture images to the cRIO, or could do processing and just send the results.
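A sketch of the "process and just send the results" option. The wire format here (target offset, distance, seen flag) is entirely made up for illustration; the real format would be whatever you define on both ends of the link.

```python
import socket
import struct

# Hypothetical wire format: x offset (float), distance (float), seen flag.
RESULT_FMT = "!ffB"  # network byte order

def pack_result(x_offset, distance, target_seen):
    """Encode one detection result into a compact 9-byte payload."""
    return struct.pack(RESULT_FMT, x_offset, distance, 1 if target_seen else 0)

def unpack_result(payload):
    """Decode a payload produced by pack_result (robot side)."""
    x_offset, distance, seen = struct.unpack(RESULT_FMT, payload)
    return x_offset, distance, bool(seen)

def send_result(sock, crio_addr, x_offset, distance, target_seen):
    """Laptop side: ship one processed result to the cRIO over UDP."""
    sock.sendto(pack_result(x_offset, distance, target_seen), crio_addr)
```

Sending a handful of bytes per frame instead of full depth images keeps the robot network link comfortably within its bandwidth.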

Greg McKaskle
12-10-2011, 20:10
The Kinect has several features. It is a USB color camera, directional microphones, a servo for adjusting its orientation, and finally, a patterned-light IR transmitter and IR camera. The skeleton tracking is accomplished by heavy processing of the depth map calculated from the patterned light. Since it brings its own light source, it even works in the dark. The Kinect can be configured to not even return the color image, and the skeleton will still work. Oddly colored clothes aren't required, but they shouldn't hurt either.

Does that answer a few questions?

Greg McKaskle

Gdeaver
12-10-2011, 21:35
I was late to the meeting tonight and did not see it, but one of our programmers brought his Kinect in tonight with some software off the net. The general consensus was "Yeah, we can do something with this". Tracking was very good, even with marginal warehouse lighting. The team is excited about the newly allowed toy.

Robby Unruh
13-10-2011, 11:35
Am I the only one who hopes that they add C# support to their current list of programming languages? :P

527's_Spy
14-10-2011, 15:40
Maybe Microsoft is giving us a special type of Kinect. Maybe it's one that is going to plug into the USB on the computer and have a separate sensor box to go on the robot that receives a signal put out from the Kinect. Or I could just be shouting nonsense. ::rtm::

Cyberphil
14-10-2011, 20:26
Maybe Microsoft is giving us a special type of Kinect. Maybe it's one that is going to plug into the USB on the computer and have a separate sensor box to go on the robot that receives a signal put out from the Kinect. Or I could just be shouting nonsense. ::rtm::

Ask one of the beta teams that received them. They already have them.

Chexposito
15-10-2011, 02:42
There is one slot on the cRIO whose purpose hasn't been determined yet...

matessim
16-10-2011, 09:20
It's not going to be robot-mounted; it's going to be part of the driver station. The driver station has a USB port, and the whole point of it is to control the robot with NUI.

Source: http://www.usfirst.org/aboutus/pressroom/first-adds-kinect-for-2012
Snippet:
The addition of Kinect for Xbox 360 will allow the competitors to “be the robot,” using a natural user interface to control and interact with their robots with gestures, without the need to use a joystick, game controller, or other input device.

slijin
16-10-2011, 11:15
It's not going to be robot-mounted; it's going to be part of the driver station. The driver station has a USB port, and the whole point of it is to control the robot with NUI.

Source: http://www.usfirst.org/aboutus/pressroom/first-adds-kinect-for-2012
Snippet:
The addition of Kinect for Xbox 360 will allow the competitors to “be the robot,” using a natural user interface to control and interact with their robots with gestures, without the need to use a joystick, game controller, or other input device.

Please read the thread before posting.

The last question on http://www.usfirst.org/roboticsprograms/frc/kinect

Q: Can I put the Kinect on my robot to detect other robots or field elements?

While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot.

FrankJ
25-10-2011, 22:15
Including the Kinect in the kit of parts is actually an evil plan. The programmers are going to spend all their time playing with the Kinect. In competition, the robot will just sit there because it won't have a program. :ahh:

JosephC
26-10-2011, 11:23
Including the Kinect in the kit of parts is actually an evil plan. The programmers are going to spend all their time playing with the Kinect. In competition, the robot will just sit there because it won't have a program. :ahh:

And the three teams that actually figure it out automatically win. It makes so much sense now! :ahh:

527's_Spy
28-10-2011, 16:16
http://dvice.com/archives/2011/10/microsoft-demos.php
An amazing invention by Microsoft with the Kinect, but it's a special Kinect, so it most likely won't be what we're using. But you never know.

Jared Russell
28-10-2011, 17:36
There is one slot on the cRIO whose purpose hasn't been determined yet...

Actually, it has been. Slots 1-3 on the cRIO FRC II are Analog, Digital, Solenoid, in that order. Slot 4 on the cRIO FRC II will be a "wildcard" - any of the Analog, Digital, or Solenoid modules may be inserted there. So you may have two of one type of module, and only one of the others.

For the 8-slot cRIO, slots 4 and 8 are now unused. Slots 1-3 are exactly the same as on the cRIO FRC II (Analog, Digital, Solenoid in that order), and Slots 5-7 are in the same order again (5: Analog, 6: Digital, 7: Solenoid). So you may have two of each type of module.
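The slot rules above fit in a small lookup table. This is just a restatement of the post in code form, useful for sanity-checking a planned module arrangement:

```python
ANY = {"Analog", "Digital", "Solenoid"}

# cRIO FRC II: slots 1-3 are fixed, slot 4 is the wildcard.
CRIO_FRC_II = {1: {"Analog"}, 2: {"Digital"}, 3: {"Solenoid"}, 4: ANY}

# 8-slot cRIO: the same pattern twice; slots 4 and 8 are unused.
CRIO_8SLOT = {1: {"Analog"}, 2: {"Digital"}, 3: {"Solenoid"},
              5: {"Analog"}, 6: {"Digital"}, 7: {"Solenoid"}}

def placement_ok(layout, slot, module):
    """True if the given module type may go in the given slot."""
    return module in layout.get(slot, set())
```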

Jash_J
31-10-2011, 00:54
Hmm, what if there is a "kinect mode" at the end of the game similar to the autonomous mode at the beginning?

gwlund
01-11-2011, 11:34
This device may be an interesting way to connect a Kinect to the robot wireless network and control it via the Driver Station laptop.

http://www.bb-elec.com/product.asp?SKU=UE204&s=20110088&utm_source=productnews&utm_medium=email&utm_content=UE204&utm_campaign=20110088

Anupam Goli
01-11-2011, 12:26
Hmm, what if there is a "kinect mode" at the end of the game similar to the autonomous mode at the beginning?

I'm thinking the same thing... maybe the return of hybrid mode, but with a twist: instead of signaling the robot some other way, you use the Kinect to give it signals. I would not be able to stand driving the robot with a Kinect. From the videos of Miss Daisy and 1786, it seems like there is a bit of lag between your motions and the robot's.

alfredtwo
01-11-2011, 12:32
Have you all seen the "Kinect Effect" video that came out this week?

http://www.youtube.com/watch?feature=player_embedded&v=T_QLguHvACs

A bit of a "what would you do" sort of thing. Personally I can't wait to see what interesting uses FIRST teams come up with.

nssheepster
14-11-2011, 09:55
While Kinect already has enough trouble as it is

It has trouble on an Xbox because of software and intended use. The camera itself is amazing. See Popular Science if you want proof; they've got a few good articles on the topic. Personally, I'm not worried about the device itself. I'm worried about what we'll have to do with it. My team can do a lot of things, but not all at once. I'm more concerned with that than with the device.
:ahh:

kinganu123
14-11-2011, 10:40
OK, I think I have it figured out. The Kinect is really our GAME HINT!!! If the Kinect allows us to be the robot, then maybe the robot we build really will BE like us. I'm thinking this year's game is Simon Says.

Robert Cawthon
14-11-2011, 14:00
I find myself stifling a laugh every time I think of what it's going to look like if three people on either end of the playing field start acting like marionettes trying to control their bots. :ahh:

pandamonium
15-11-2011, 14:57
I kind of think this isn't much of a secret anymore

http://blogs.msdn.com/b/alfredth/archive/2011/10/07/be-the-robot.aspx

"During the autonomous period team members will be able to provide some guidance to one (of the three on a team) robot by moving their bodies."

Jared Russell
15-11-2011, 16:16
Actually, that blog post sort of suggests that only one of the three robots on the alliance will be able to use the Kinect during auto mode. That would be very interesting from a pre-match planning perspective...

Who knows whether that is insider information, or whether the poster is simply reading into the press release (possibly incorrectly).

PAR_WIG1350
15-11-2011, 18:56
Actually, that blog post sort of suggests that only one of the three robots on the alliance will be able to use the Kinect during auto mode. That would be very interesting from a pre-match planning perspective...

Who knows whether that is insider information, or whether the poster is simply reading into the press release (possibly incorrectly).


It's highly unlikely, but maybe this year each team has to build three robots: two will run off a simpler system, but one will have the cRIO, and that is the one that will be controlled. That would mean each alliance would have nine robots.

nssheepster
18-11-2011, 09:38
That would mean each alliance would have 9 robots.
No way. How would you fit 18 robots on a field? Way too many.

pandamonium
18-11-2011, 11:34
There were at times 12 robots on the field this year...

emekablue
22-11-2011, 20:25
Has anyone seen the new Kinect commercial?
http://www.engadget.com/2011/11/01/kinect-commercial-sdk-coming-in-2012-video/
(:52 to see a robot)

PAR_WIG1350
24-11-2011, 01:39
No way. How would you fit 18 robots on a field? Way too many.

They wouldn't be 28x38x60; they would probably be more like 20x20x48, maybe less. Bag and tag would be a pain with three full-sized robots. It would be like how some combat robots are really just a collection of smaller robots that, together, equal the size and weight of a full robot: http://www.robotcombat.com/theswarm.html.

bhaidet
28-11-2011, 00:12
Actually, that blog post sort of suggests that only one of the three robots on the alliance will be able to use the Kinect during auto mode. That would be very interesting from a pre-match planning perspective...

Who knows whether that is insider information, or whether the poster is simply reading into the press release (possibly incorrectly).

I looked up the author of that article and could not find a link between him and the Kinect project other than outreach stuff, so if he is involved with the FIRST project he probably knows about the whole thing but may not have been directly involved in organizing it (and thus may not know that it is top-secret because we are all crazy)

Four theories:

1. He read too much into the press release.
But the fact that he specified "of the three" indicates that he actually knows what he is talking about and was not just typing carelessly; he knew he was only talking about one robot per team: "provide some guidance to one (of the three on a team) robot by moving their bodies".

2. He made something up.
But if you know enough about FIRST games to make that up, then you know not to make things up about FIRST games.

3. It is an intentional misprint to mess with us.
But they would not do this on an unrelated blog. A game hint could lie, easily, but not some random third party.

4. It was an unintentional gaffe and we should take advantage of it.
Bingo! He would not have been that specific if he did not actually know.


As Jared341 mentioned earlier, this will have huge consequences on pre-match collaboration. FIRST always wants teams to work together more, and this sounds like a great way to do that.
Sounds fun!


By the way, does anyone think that we take this WAY too seriously sometimes? :D

Robert Cawthon
28-11-2011, 10:43
By the way, does anyone think that we take this WAY too seriously sometimes? :D

Of course we do! That's part of the fun of FIRST! :p

staplemonx
28-11-2011, 16:13
Kinect 2: So Accurate It Can Lip Read? [Rumors]
http://gizmodo.com/5862968/kinect-2-so-accurate-it-can-lip-read


Plus, there will be two versions of the Kinect 1 on the market in the spring:
http://www.atomicrobotics.com/2011/11/kinects-2012-frc-robots/

Which will we be given and which will we be allowed to use?

Jared Russell
28-11-2011, 17:04
Which will we be given and which will we be allowed to use?

Since all of the beta testing thus far has been with the "original" Kinect, I would expect that version to be the one donated to teams.

bhaidet
28-11-2011, 19:46
Based on the video from Ramp Riot (and what he said about the prepackaged code), it looks like tank drive by flailing arms should be a piece of cake. Turning would look like you were trying to fly.

Has anyone tried this?

Mapping intuitive controls is hard enough with buttons (unless the programmers drive), let alone trying to map actual human movement to something with more appendages.
The demonstration made it look extremely difficult to control because the only "easy" command was moving an arm to raise the robot's claw. The driver had to consciously remember every direction to move to make the robot drive.
If the claw was controlled by nodding the head up and down to "look" at where you wanted it to go (I'm not sure this is possible) and this left the arms open for pretend-wings tank drive, it may actually be easier to control than with a joystick!

DavisC
28-11-2011, 21:47
I happened to be at gamestop the other day and I noticed something called the "Zoom for Kinect"

Seamlessly blends in with the look of the Kinect Sensor
Plug and play - no tuning, software or adjustment needed
Play with 1 or 2 players in a smaller space
Up to 40 percent reduction in space needed to play
Compatible with all Kinect software
Easily clips over the Kinect Sensor - no modification or complex installation needed

http://www.amazon.com/Zoom-Kinect-Xbox-360/dp/B0050SYS5A

This is what it claims to be capable of. Might this be implemented with the Kinect for the space problem?

Jared Russell
29-11-2011, 08:17
Based on the video from Ramp Riot (and what he said about the prepackaged code), it looks like tank drive by flailing arms should be a piece of cake. Turning would look like you were trying to fly.

Has anyone tried this?


Yes, a "two stick drive" using the Kinect was the first thing we tried. We found it a bit unintuitive, and difficult to drive in exact straight lines (hence the controls arrangement we used at Ramp Riot - when we weren't commanding a turn, a control loop using the gyro held the bot straight).

The Kinect SDK makes it very easy to define new types of gestures ("buttons" and "axes"). As far as I know, the "Kinect Server" code that runs on the Driver Station to send Kinect "buttons" and "axes" to the bot will be released as open source (at least it has been to the beta test teams), so you will be able to modify it to incorporate your new custom gestures.

You can actually start experimenting with this now. All you need is a Kinect, a Windows 7 PC with USB, and the Kinect SDK/Visual C# download from Microsoft (free). You can use the sample skeleton tracker app to get a feel for what gestures will be easy to track, and what will be more difficult.
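To make the "define new buttons and axes" idea concrete, here is a toy gesture detector and a packing step in plain Python (not the actual C# Kinect Server code; the gesture, the threshold, and the wire format are all invented for illustration):

```python
def hands_together_button(left_hand, right_hand, threshold=0.15):
    """A custom 'button' gesture: pressed when the hands are within
    `threshold` meters of each other. Hand positions are (x, y, z)."""
    dx, dy, dz = (l - r for l, r in zip(left_hand, right_hand))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 < threshold

def encode_frame(axes, buttons):
    """Pack axis values (floats in [-1, 1]) and button states (bools)
    into a simple comma-separated payload for the robot link. The real
    wire format is whatever Kinect Server uses; this one is made up."""
    fields = ["%.3f" % a for a in axes] + ["1" if b else "0" for b in buttons]
    return ",".join(fields).encode()
```

The sample skeleton tracker app mentioned above is a good way to check which joint distances and angles are stable enough to use as gestures before wiring anything to the robot.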

Matt C
29-11-2011, 15:31
http://www.microsoft.com/bizspark/kinectaccelerator/

Could this be the real reason for introducing Kinect to the FIRST community??:ahh:

pandamonium
29-11-2011, 16:05
I don't think that is the reason. Dave Lavery gave a speech about the Kinect at the DC regional last year... also, will.i.am came out with a video game for the Kinect...

bhaidet
01-12-2011, 21:51
Ah. It's great that you tried it, but disappointing that it didn't work. I guess I didn't think about how hard dead zones would be to input. Sadly, our arms are not spring-loaded like the joysticks... :(
Maybe writing a sort of mobile dead zone (averaging the forward power and zeroing the twist value if the two arm values are close enough to each other) would help. If it is easy to interpret, maybe you could use body tilt for twist and only one arm for power. That would leave the other arm open to control a mech, which worked excellently in the video.

I can't wait to have fun flailing for the camera, but I think we're about to break until kickoff, so maybe we'll know WHY we are flailing by the time we try it out. :D
The anticipation is killing me!
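The "mobile dead zone" averaging idea above is easy to sketch. The axis convention and the 0.15 deadband are assumptions for illustration, not values from anyone's actual code:

```python
def arm_tank_to_arcade(left, right, twist_deadband=0.15):
    """Convert two arm 'tank' axes in [-1, 1] into (forward, twist).
    If the arm values are close enough together, snap the twist to
    zero so driving straight doesn't need perfectly matched arms."""
    forward = (left + right) / 2.0
    twist = (left - right) / 2.0
    if abs(left - right) < twist_deadband:
        twist = 0.0  # the mobile dead zone: treat it as pure forward
    return forward, twist
```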

garr255
05-12-2011, 19:34
I think the Kinect will be used in the finale, that is the only place for it that would be practical.

Robert Cawthon
06-12-2011, 09:16
I think the Kinect will be used in the finale, that is the only place for it that would be practical.

I think it was 2008 that had the infrared "supplement" available for autonomous at the beginning. With it you could send up to four different signals to the bot to override the autonomous programming. I see something similar for Kinect.

innoying
05-01-2012, 01:46
Sorry to bring back an old(er) thread, but if any of the people here who are think about the programming side of the Kinect on a robot want to contribute at http://www.chiefdelphi.com/forums/showthread.php?t=99275 it would be greatly appreciated.

nssheepster
05-01-2012, 11:08
I think if we are using Kinect, it would almost certainly be possible to regulate how much you use it. Instead of forcing us to use it full-on, we might have to use it for at least one function. The rules may be more specific. But if we can just drive, and, say, nod to close a claw, or something similar, it could work very well. Full drive is probably too complex to do well, but a partial command might just be a helpful aid. Anybody tried head based gestures? Could be handy!

Daniel_LaFleur
05-01-2012, 11:27
I think if we are using Kinect, it would almost certainly be possible to regulate how much you use it. Instead of forcing us to use it full-on, we might have to use it for at least one function. The rules may be more specific. But if we can just drive, and, say, nod to close a claw, or something similar, it could work very well. Full drive is probably too complex to do well, but a partial command might just be a helpful aid. Anybody tried head based gestures? Could be handy!

I'm wondering if maybe they'll use it to allow teams to control a movable part of the field ... or maybe control a placebo bot.

nssheepster
05-01-2012, 12:32
I'm wondering if maybe they'll use it to allow teams to control a movable part of the field ... or maybe control a placebo bot.

I doubt a placebo bot. They'd have to provide them at competition, and to be fair, let us practice with them somehow. Too much money. Unless, of course, Microsoft is paying! :) The field seems likelier, but with the field problems I've seen in past years, I shudder to imagine the time-consuming errors that could occur if the field were to move. But, an elevator or something, would be very cool, no? I wonder, could the visual just be fed into the driver station? That might be it. We could just be vastly overthinking it. It seems to be a habit in FRC teams, so...

Daniel_LaFleur
05-01-2012, 18:41
I doubt a placebo bot. They'd have to provide them at competition, and to be fair, let us practice with them somehow. Too much money. Unless, of course, Microsoft is paying! :) The field seems likelier, but with the field problems I've seen in past years, I shudder to imagine the time-consuming errors that could occur if the field were to move. But, an elevator or something, would be very cool, no? I wonder, could the visual just be fed into the driver station? That might be it. We could just be vastly overthinking it. It seems to be a habit in FRC teams, so...

I agree a placebo bot is probably too much (but one could hope :rolleyes: )

Moving a field component (like a door) or a goal (on a track system) could be easily achieved though.

I doubt they would use it to feed camera data to the driver stations, as they could use an inexpensive camera for that, not the Kinect sensor ($$$).

mwtidd
05-01-2012, 23:24
I really hope they make kids do up-downs or jumping jacks to raise a meter. It would be cool to see FIRST take a crack at an alternative approach to the fight against obesity, in a way different from generic sports.

Not to mention, seeing a bunch of kids doing up-downs next to robots would be pretty entertaining.

nssheepster
06-01-2012, 07:29
I doubt they would use it to feed camera data to the driver stations, as they could use an inexpensive camera for that, not the Kinect sensor ($$$).

Yeah, but then Microsoft doesn't get advertised to all the smart, technologically knowledgeable young consumers.....

nssheepster
06-01-2012, 07:33
Not to mention, seeing a bunch of kids doing up-downs next to robots would be pretty entertaining.

Well, I don't know about you guys, but just because I'm skinny doesn't mean I'm healthy enough for that. It'll be a lot less entertaining if we all just end up wheezing behind the camera as those long hours in front of Chief Delphi start showing.
:p

Joe Ross
06-01-2012, 11:48
I'm wondering if maybe they'll use it to allow teams to control a movable part of the field ... or maybe control a placebo bot.

This is the year that Calum Pearson gets his wish. The entire field will be on a 3-axis moveable platform, à la Cirque du Soleil's Kà (http://en.wikipedia.org/wiki/K%C3%A0#Set_and_technical_information), all controlled by Kinect.

LMD3130
06-01-2012, 20:09
I believe they may have us use it with the human player — just a hunch. Maybe it's FIRST's way of getting us some exercise, because so many of us have weight issues. They may think it looks good and can obtain money from the government (or another health-focused group) for promoting healthy activity in kids, or they may want to provide us with a new challenge because of the Kinect's awkward interface.


Or just mount it on the robot as a replacement for the webcam for better targeting (I would prefer a ping sensor over that, but at least it's a step up), which, as an example from last year, could have been used to guide the minibot directly to the center of the pole with 3D-like imaging.

Remember, just a hunch.