Hmm... Well, there is certainly some controversy over what constitutes a "robot". The Robotics Institute of America seems to believe that a robot's "common characteristic is that by its appearance or movements, a robot often conveys a sense that it has intent or agency of its own." I think that's where our autonomous mode comes into play. In fact, many robotics "experts" would not even refer to a teleoperated device as a robot in the first place.
That being said, I do feel there is a significant difference between the "move 2 seconds straight ahead, kick, two more seconds, kick, and then strafe to get out of the way" kind of behavior and the "I'm in a green field; find a light-colored, spherical object and acquire it; find something elliptical; kick the spherical thing toward the elliptical thing" kind of behavior. Which one represents true autonomous behavior? The answer depends on who you talk to.
On the one hand, even the former behavior is more interesting than simply sitting there for 20 seconds waiting for the operators to "drive" the robots. And, there are certainly frameworks that can help implement the former type of behavior. So, at one level, we need to make sure that more teams are capable of at least handling the simple movements. How we achieve this as mentors depends on the makeup of the team in any given year.
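To make the "simple movements" case concrete, here is a minimal sketch of that open-loop "drive, kick, drive, kick, strafe" routine. The `Robot` class, its methods, and the step durations are all hypothetical stand-ins; a real implementation would call your motor controllers through WPILib or the LabVIEW VIs rather than these stubs.

```python
import time

class Robot:
    """Hypothetical drive/kicker interface (illustrative stub).
    A real robot would wrap actual motor and pneumatics code here."""
    def __init__(self):
        self.log = []  # record of commands, handy for testing

    def drive(self, forward, strafe):
        self.log.append(("drive", forward, strafe))

    def kick(self):
        self.log.append(("kick",))

def timed_autonomous(robot, sleep=time.sleep):
    """Open-loop autonomous: no sensors, every step runs for a
    fixed duration. This is the 'move 2 seconds, kick' style."""
    robot.drive(1.0, 0.0)   # straight ahead...
    sleep(2.0)              # ...for 2 seconds
    robot.kick()
    robot.drive(1.0, 0.0)   # two more seconds forward
    sleep(2.0)
    robot.kick()
    robot.drive(0.0, 1.0)   # strafe to get out of the way
    sleep(1.0)
    robot.drive(0.0, 0.0)   # stop
```

The `sleep` parameter is injected only so the script can be exercised without real delays; the point is that nothing here reacts to the field at all.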
What differentiates the latter behavior is the ability to use sensors. Understanding the concept of a state machine, the gozintas and gozouttas on the robot, what the voltages from the sensors actually mean, etc. enables not only autonomous behaviors but also operator assists in teleop mode. Think of not being able to see a ball in the middle zone from the driver's station because of the bump. You're stuck strafing along until you get lucky enough to bounce a ball into your line of sight. But, what about being able to punch a "find ball and shoot it at the target" button? Now, you're able to seriously extend the capabilities of our robots. The KOP components are there to enable this kind of operation. We just need to know how to use them.
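A "find ball and shoot" assist like that is a natural fit for the state-machine concept mentioned above. The following is only a sketch: the state names, the commands, and the sensor inputs (`sees_ball`, `ball_distance_m`, a 0.3 m shooting range) are all made up for illustration, not taken from any real robot code.

```python
# Illustrative states for a hypothetical "find ball and shoot" assist.
SEARCH, APPROACH, SHOOT, DONE = range(4)

def step(state, sees_ball, ball_distance_m):
    """One update of the assist button's state machine.

    Takes the current state plus the latest sensor readings and
    returns (new_state, command). Called once per control loop."""
    if state == SEARCH:
        # Spin in place until a ball shows up in the sensor's view.
        return (APPROACH, "stop_turning") if sees_ball else (SEARCH, "turn")
    if state == APPROACH:
        if not sees_ball:
            return SEARCH, "turn"        # lost the ball; resume searching
        if ball_distance_m < 0.3:        # close enough to kick (assumed range)
            return SHOOT, "stop"
        return APPROACH, "drive_forward"
    if state == SHOOT:
        return DONE, "kick"
    return DONE, "idle"
```

The payoff of this structure is that each state has one small, explainable job, which is exactly what makes it teachable to students.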
How can you create such behavior? Enable the students with knowledge and let them use their imaginations. What we really need is a detailed set of materials that describes sensor concepts, state machines, drive-train concepts, scripting concepts, etc., so mentors have the source material to help teach students what these concepts are and how they're applied. Here in the D.C. region, many of us (mentors) have been getting together to talk about what that kind of material should look like and how best to present it.
I don't feel that there's a silver bullet that makes autonomous easy. But, we can go a long way toward demystifying it for the students and ourselves. There is a lot of collective wisdom here in the FIRST community. We need to take steps to actually collect it, write it down, and enhance it with some exercises (that use things found in the KOP) that can be easily reproduced.
We may not have to explain the concept of infrared radiation, but we should be able to explain how an infrared sensor works to determine distance. And, with the proper enhancements to WPILib or the LabVIEW VIs, enable students to use the infrared sensor. What they do with that knowledge is up to them (with our guidance, of course). But, I believe the net effect will be more interesting autonomous play.
So, where can we start collecting this information? What form should it be in? How do you teach mentors to teach information that they themselves may not understand? All good questions. Anyone up for a mentor meeting at IRI or other events to discuss?