In 2012 (and possibly earlier, before my time) you were allowed to control your robot during autonomous using a Kinect provided to teams. IIRC it gave the robot a list of coordinates corresponding to the "driver's" joints, which teams could process so the robot could recognize hand signals. It wasn't very popular, but a few teams used it. It also led to some funky hijinks.
In 2014, there was a significant advantage to controlling the robot with simple hand gestures during autonomous: the hot goals. If you shot the ball while the goal was "hot" (i.e., the lights around it were lit), you got a bonus. That led to CheesyVision, which used the driver station webcam to send hand signals to the robot. Unfortunately this hack was short-lived: the next year the rule language changed from "no touching your controls during auto" to "no communicating with your robot during auto".
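The core idea behind a CheesyVision-style hack is simple: calibrate the brightness of a couple of regions in the webcam frame, then treat "driver's hand covering a region" as a binary signal. Here is a minimal, hypothetical sketch of that technique (not the actual CheesyVision code); the frame is modeled as a plain 2D list of grayscale pixels, and the threshold value is an arbitrary assumption:

```python
# Hypothetical sketch of a CheesyVision-style hand-signal detector.
# A webcam frame is modeled as a 2D list of grayscale pixels (0-255).
# Covering a region with a hand darkens it relative to a calibrated baseline.

def region_mean(frame, x0, x1):
    """Mean brightness of columns x0..x1 across all rows."""
    total = count = 0
    for row in frame:
        for px in row[x0:x1]:
            total += px
            count += 1
    return total / count

def detect_signals(frame, baseline, threshold=40):
    """Return (left_covered, right_covered) by comparing each half
    of the frame to its calibrated baseline brightness."""
    w = len(frame[0])
    left = region_mean(frame, 0, w // 2)
    right = region_mean(frame, w // 2, w)
    return (baseline[0] - left > threshold,
            baseline[1] - right > threshold)

# Calibration frame: uniformly bright (no hands in view).
calib = [[200] * 8 for _ in range(4)]
baseline = (region_mean(calib, 0, 4), region_mean(calib, 4, 8))

# Driver covers the left half of the frame (dark pixels on the left).
frame = [[30] * 4 + [200] * 4 for _ in range(4)]
print(detect_signals(frame, baseline))  # → (True, False)
```

The resulting booleans would then be packed into the packets the driver station already sends, which is what made it "not touching your controls" under the 2014 wording.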