In response to:
"Speaking of input, Microsoft is looking for information about if and how teams used Kinect this season. If you’re going to the Championship, have used Kinect this season, and would like to share your feedback, please contact Alfred from Microsoft at
Alfred.Thompson@microsoft.com. "
from Bill's latest blog, we decided to send in a letter based on our Beta Testing and competition experience with the Kinect. The body of the letter is pasted below:
Dear Mr. Alfred Thompson,
Hello, I am Rachel Holladay of FIRST Robotics Team 1912 Combustion from Slidell, Louisiana, United States. On my team I serve as the Webmaster, Beta Testing Lead and Controls Captain, responsible for all electrical work and programming on the robot. Through Beta Testing and build season our team spent a considerable amount of time prototyping the Kinect for FIRST as well as using it on this year’s competition robot. It was definitely a learning experience for everyone involved, and through it I have developed fairly strong opinions and conclusions on Kinect usage in FIRST Robotics.
In the fall of 2011, our team was lucky enough to be chosen as Beta Testers for the Kinect with LabVIEW. Initially we experienced a lot of trouble and therefore frustration. As we were one of the first FIRST teams to be experimenting with the new hardware, our trouble was not wholly unexpected. Oftentimes there was a lack of explanation or sufficient examples. Some testing days we left feeling more confused than ever. However, the FRC and NI support was incredible, resourceful and very understanding. We are very proud to say that we were able to work closely with both groups to finally get our Kinect functional. The resources for the Kinect definitely needed to be developed, and we were glad to be part of the process. All of our Beta Testing documents can be found on our website at
http://www.team1912.com/beta_testing.html. In one of our last formal reports on the Kinect (under task 4) we presented our initial reaction to the Kinect:
“While we do find that it is a very cool feature, we see it more as a novelty. We still feel safer and more comfortable driving our robot with joysticks. It really goes back to the fact that controlling a robot with human movements tends to be a bit more unstable. While the robot can be controlled with changes in a joint in degrees, humans do not move their joints in 10-degree increments. While the field of gesture controlled robotics is fascinating, within this scenario the human body is not exact enough to provide the kind of control you might want within a FIRST competition. We do, however, enjoy showing it to people as an example of the possibilities of robotics.”
We demonstrated our Kinect controlled robot at both our Beta Testing Presentation and our local FRC kickoff. At both, our robotics peers were very interested, but also cautious. At these events we had to go to great lengths to maintain safety. Each Kinect player must be trained, and many students were unwilling to try. Within a non-field environment, it was often difficult to maintain safe clearance around the Kinect player and the robot while giving the player a sufficient view of the robot. When we had first started with the Kinect we thought it would be great for demonstration because everyone could get involved. However, due to the safety factor and the intermittent problems we often experienced, I doubt the Kinect will become part of our travelling robot kit. Our first success with controlling the robot with the Kinect can be seen in our YouTube video:
http://www.youtube.com/watch?v=IYhxqsj70VY
When Kickoff rolled around we were very anxious to see the Kinect’s place in the 2012 FIRST game. We were pleased to see that it was an optional part of autonomous. As more and more people watched our YouTube video and wanted to learn more, I wrote an unofficial Kinect Manual (
http://team1912.com/docs/kinectmanual.pdf) that included information we had learned through Beta Testing, along with tidbits and examples I wish I had known earlier. Throughout the build season we continued to answer questions through email about the Kinect from teams across the globe. Most were basic setup questions that could be easily debugged.
As for our own Kinect usage, we created a dual autonomous mode that could be toggled with a switch on the robot. One version of autonomous was controlled through timing, while the other used the Kinect. A similar strategy was used for both: fire two shots. However, with the Kinect version we could decide when to fire, and after we fired the shots we had the freedom to drive around and possibly collect more balls. During competition we bragged to the judges that the Kinect control allowed us to bypass the normal barrier of autonomous: the fact that there is no human control. Our Kinect controlled autonomous was, in a sense, nothing truly special. We used the pre-given axis and joystick button control because it was easier, safer and more reliable. At the time, we had no real need or desire to do overly fancy Kinect processing.
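For anyone curious what this looks like in code, here is a rough Java-style sketch of the idea. Our actual implementation is in LabVIEW, so this is only an illustration: the class names follow the 2012 WPILib Java release as we understand it, and the channel numbers, gesture-to-button mapping and firing routine are placeholders, not our real code.

```java
// Illustrative sketch only: a dual autonomous selected by a physical switch.
// Our real code is LabVIEW; channel numbers and the firing routine are made up.
import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.KinectStick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

public class DualAutonomousRobot extends SimpleRobot {
    private final DigitalInput autoSelect = new DigitalInput(1); // switch mounted on the robot
    private final RobotDrive drive = new RobotDrive(1, 2);
    private final KinectStick leftArm = new KinectStick(1);   // Kinect gestures exposed as
    private final KinectStick rightArm = new KinectStick(2);  // virtual joystick axes/buttons

    public void autonomous() {
        if (autoSelect.get()) {
            // Timed version: fire two shots on a fixed schedule, no human input.
            fireTwoShots();
        } else {
            // Kinect version: the player drives with arm gestures and chooses when to fire,
            // using the pre-given axis and button mapping like a normal joystick.
            while (isAutonomous() && isEnabled()) {
                drive.tankDrive(leftArm.getY(), rightArm.getY());
                if (rightArm.getRawButton(1)) { // hypothetical gesture-mapped "fire" button
                    fireTwoShots();
                }
                Timer.delay(0.02);
            }
        }
    }

    private void fireTwoShots() {
        // Shooter control omitted; stand-in for our two-shot routine.
    }
}
```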
During the course of build season, we were mildly disappointed, although not too surprised, that most teams elected not to use or even touch the Kinect. Within the autonomous mode, no extra bonuses were given to Kinect controlled robots. Most teams could easily and effectively accomplish their autonomous tasks through timing or sensors, as they had done in the past. From their point of view, there was probably no need to fool around with this odd new technology, especially with the limited time. We used the Kinect because we had put so much time into it that we wanted to put that experience to good use. Also, since we were given the honor of being a Beta Test Team, we felt obligated to use the Kinect. From what we saw, it was rare for more than one team at a Regional to be using the Kinect, and even then it was not used consistently. At our competition, the Bayou Regional, we were the only ones to raise our hands for Kinect usage, although we did do so rather proudly.
During competition we had to spend several practice matches working with the field to get the Kinect operational, because the first few times we experienced immense lag, to the point where it was unusable. It was very awkward that to get our camera image we needed the “Video Enable” button pressed on the dashboard, but to reduce Kinect lag we had to unclick it. Therefore, in the match one of our drivers had to stop and toggle the setting. During matches we elect to use the classmate clamshell because it is small and easily connects to the field. Unfortunately, due to the clamshell’s limited processing capacity, the Kinect image cannot be processed quickly, leading to lag. In theory we could have used a regular laptop during matches, but we feel much safer using the clamshell. The placement of the Kinect station was also inconvenient for our human player because he was trying to control a robot that was across the field from him. It would have been much easier if the Kinect station were in front of, or at least closer to, each team’s robot. At the end of the day we only used our Kinect controlled autonomous in one of our thirteen matches, and that one match was mainly to show off our hard work. Our drive team and strategists felt much safer with the timed non-Kinect autonomous because they saw it as being more reliable.
I believe the Kinect can become part of FRC. However, as with any new product, it will take time to integrate. It is not that people are opposed to anything new (although they might be), but that learning how to use new technology requires time and effort. It also takes time to develop enough documentation and examples to give people the confidence to experiment. Most importantly, in order for usage to become more widespread, teams must see the Kinect as a necessary advantage. Once people think that using the Kinect will give them an upper hand, or that using it is critical to success, the population of users will explode. This year the Kinect was not a game changer; it was more of a novelty.