http://frcdirector.blogspot.com/2011/10/something-to-end-your-week.html
Good Evening Teams,
I hope you enjoy this peek behind the curtain.
93 days until Kickoff
See you then!
Something new in the kit…
I predict a big game of Robot Dance Dance Revolution
I am thinking human player. If you look at the forums, the people who have developed code to get the Kinect working in LabVIEW say the one issue is finding a way to connect it to the cRIO, and the driver station already has USB ports. In addition, the object-ID code (the hardest technical part) is already written to detect human movements.
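If the Kinect does end up plugged into the driver station as a human-player input, the simplest use would be mapping a tracked joint position to a joystick-like axis. A minimal sketch of that idea; the joint names, coordinates, and ranges here are all hypothetical, not anything from the actual kit software:

```python
def hand_height_to_throttle(hand_y, hip_y, shoulder_y):
    """Map a tracked hand's vertical position to a -1..1 throttle value.

    hand_y, hip_y, shoulder_y are vertical coordinates (meters, up is
    positive) as a skeleton tracker might report them.  Hand at hip
    height -> 0, hand at shoulder height -> +1, hand the same distance
    below the hip -> -1.
    """
    span = shoulder_y - hip_y
    if span <= 0:
        return 0.0  # bad tracking data; fail safe to neutral
    throttle = (hand_y - hip_y) / span
    return max(-1.0, min(1.0, throttle))  # clamp like a joystick axis

# Hand at shoulder height -> full forward; hand above the shoulder
# still clamps to 1.0, like pushing a joystick past its stop.
print(hand_height_to_throttle(1.4, 1.0, 1.4))
```

The clamp and the fail-safe return are the important parts: gesture data is noisy, so anything feeding a drive loop should saturate and default to neutral rather than pass raw tracker output through.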
All I have to say is, that’s a pretty big curtain if there is a mountain hiding behind it.
This blog did peak[sic] my interest . . .
I’m no XBOX expert, but I think there is way too much going on in the background to have the Kinect pointed at a human player… unless there is an isolation booth! Is the 2012 season sponsored by Real Steel?
I was thinking that difficult terrain would be an interesting feature in a game. Perhaps the Kinect could be used to read the landscape as the robot is driven, and adjust automatically.
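One way the terrain idea could work: throttle the drive speed by the nearest obstacle seen in the depth stream. A toy sketch of that feedback rule; the function name, thresholds, and the idea of feeding it a single scan line of depth readings are all made up for illustration:

```python
def terrain_speed(depth_row_mm, max_speed=1.0, stop_mm=400, free_mm=2000):
    """Scale drive speed by the nearest obstacle in a depth scan line.

    depth_row_mm: depth readings in millimeters across the robot's path,
    with 0 meaning 'no reading'.  Speed ramps linearly from 0 at stop_mm
    up to max_speed at free_mm and beyond.
    """
    valid = [d for d in depth_row_mm if d > 0]
    if not valid:
        return 0.0  # no depth data at all: don't move
    nearest = min(valid)
    frac = (nearest - stop_mm) / (free_mm - stop_mm)
    return max(0.0, min(max_speed, max_speed * frac))

# Obstacle halfway between the stop and free distances -> half speed.
print(terrain_speed([1200, 2500, 3000]))
```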
They said it was going to be used on the operator side. If so, I hope it’s more in ’08 style, and not a replacement for the option/ability to drive with joysticks (that would be an AWFUL decision).
2008-like hybrid mode would get interesting with 6 HP’s/Robocoaches waving their arms around at the same time.
I just hope that there’s enough testing and documentation on this thing to make it relatively easy to use at some sort of level…
Think this is what Dean had in mind? http://www.youtube.com/watch?v=nUf-YPJC2F8
Reference: http://www.youtube.com/watch?v=mKx5_3HUAho
Thank you Tristan Lall for the original link, but a little on how the kinect works:
http://www.anandtech.com/show/4057/microsoft-kinect-the-anandtech-review/2
The laser is actually really helpful in creating a sense of depth. There are two video streams, an RGB one and a depth one. They designed it so you can cut out your buddy walking behind the couch; for us, it would be able to limit the depth it is interested in to what is right in front. (It also raises the bar from the Axis camera to 60 fps, so it is less jumpy.)
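The "cut out your buddy behind the couch" trick described above is just a threshold on the depth channel. A hedged sketch of that segmentation, using a toy 2D list of millimeter readings rather than a real Kinect frame (the function name and thresholds are invented):

```python
def foreground_mask(depth_frame, near_mm=500, far_mm=1500):
    """Keep only pixels whose depth falls inside [near_mm, far_mm].

    depth_frame is a 2D list of depth readings in millimeters, the rough
    shape of the Kinect's depth stream; 0 means 'no reading'.  Returns a
    2D list of booleans marking in-range (foreground) pixels.
    """
    return [[near_mm <= d <= far_mm for d in row] for row in depth_frame]

frame = [
    [0,    600, 2500],
    [1200, 800, 3000],
]
mask = foreground_mask(frame)
# The 600, 1200, and 800 mm pixels survive; 0 (no reading), 2500, and
# 3000 mm are rejected as background.
```

Anything behind the far plane simply disappears from the mask, which is exactly why a depth camera copes with a cluttered background better than an RGB-only camera doing color tracking.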
Probably the funniest robotics-related videos I’ve seen. Great job, Tweedles!
I’m thinking it’s going to be semi-autonomous like ’08, or possibly be used as a human player interface?
No matter what, it’ll be interesting to see how teams use it.
Well, as one of the hardware beta test teams, I’m about to find out. FIRST shipped one to my house today…
I talked with Bryan (BJC) and Nick (dsm33) directly, and Jim Zondag via email about this today. As we are a beta test team, we only have to wait until Monday to see our Kinect.
We talked about possibilities of the Kinect on the robot and DS end.
We all agree that using the Kinect on the driver station is a very, very bad idea, as joysticks are much more direct to use, and the driver can command specific operations easily.
Some of us think that it could be useful on the robot end, however:
-It is not a light sensor (weight-wise). It has two cameras, an IR laser, four microphones, and a motorized base (yikes!)
-It cannot interface directly to the cRIO, requiring at least a single-board computer in between. For weight reasons, we obviously want a processor that doesn’t need a large heat sink, and the lowest-power processors aren’t likely to run Windows 7. Luckily, there are Linux drivers for the Kinect, and its embedded nature means we can run without a local interface (no GUI or graphics-processor requirement).
-This brings our total requirements to:
*cRio (in kit)
*Kinect (in kit)
*Single-board computer plus embedded Linux programming skills (???) - OR - a larger single-board computer capable of running Windows 7, which requires much more power and cooling, plus .NET programming skills (???)
*A challenge which can’t be solved any other way to make this all reasonable (this really scares me)
I think it’s much more reasonable to assume that the Kinect is provided because Microsoft is trying to promote it (and probably donated it), and that its use will be negligible. Plus, how do they expect it to work on the field with multiple robots and Kinects? The Kinect would likely be confused by the IR patterns from the other Kinects, especially six operating at once.
Do you know who shipped it? Was it FedEx?
I’m thinking the Kinect will be used in conjunction with the human players, very much like '08 was with the IR remotes used to guide the robots in autonomous. I can’t see it going on the robots and serving as anything more than simply a regular camera. Look at what the Kinect was made for… it was made to interact with people, bodies. It only makes sense if it’s a human player input. Obviously the drivers won’t be controlling the robots by dancing out the Y-M-C-A (although that would be priceless to watch! :eek: ). I would also venture the guess that it’s not going to be used on the robots, as that brings into question issues of power, weight, and interaction with the cRio, as well as the complications of having multiple cameras with multiple sets of IR lasers on the field. If I had to make a prediction, I’d say it’ll be used during autonomous.
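If the '08-style hybrid guess above is right, the human player would only need a small vocabulary of coarse gestures. A minimal sketch of what classifying those might look like; the gesture set, joint format, and command names are entirely hypothetical:

```python
def classify_gesture(left_hand, right_hand, head):
    """Classify a coarse guidance gesture from three tracked joints.

    Each joint is an (x, y) pair in meters (x toward the player's right,
    y up), as a skeleton tracker might report.  Invented vocabulary:
    both hands above the head -> 'forward', one hand raised -> turn
    toward that side, neither raised -> 'stop'.
    """
    left_up = left_hand[1] > head[1]
    right_up = right_hand[1] > head[1]
    if left_up and right_up:
        return "forward"
    if left_up:
        return "turn_left"
    if right_up:
        return "turn_right"
    return "stop"

# Both hands raised above the head: signal the robot to drive forward.
print(classify_gesture((-0.4, 2.0), (0.4, 2.0), (0.0, 1.7)))
```

A small discrete command set like this sidesteps the noise problem that makes continuous gesture-driving so dicey: the robot only ever receives one of four tokens, not raw joint coordinates.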
That being said, I think the Tweedles were on to something…
There will be some code made available at some point to help teams get started. I don’t know what the schedule on that is. I expect that the beta teams will try some interesting things as well. We’ll all just have to wait and see how it all fits into the game (if at all), of course. From my point of view, I’m just excited that smart, innovative FIRST students will have a chance to get creative with Kinect.
BTW I have some links on my blog with Kinect information as well as links to FIRST’s press release about this donation. See http://blogs.msdn.com/b/alfredth/archive/2011/10/07/be-the-robot.aspx if interested.
Here’s a press release from FIRST on the subject, just posted on their facebook:
Quotes specifically mentioning how it will be used:
The addition of Kinect for Xbox 360 will allow the competitors to “be the robot,” using a natural user interface to control and interact with their robots with gestures, without the need to use a joystick, game controller, or other input device.
Sounds like a cool Hybrid mode again.
The Microsoft blog post also hinted at the possible usage…
During the autonomous period team members will be able to provide some guidance to one (of the three on a team) robot by moving their bodies.
(emphasis mine)
So we may see a 2008-like hybrid mode period again…