This might just be a wild idea, but seeing AI learn to play games with reinforcement learning, will we ever see teams use machine learning to fully replace the driver, even during the teleoperated period?
One might train the AI in a simulation with accurate models of the robot's controls, then migrate it to the real world for further training.
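To make the train-in-sim idea concrete, here's a toy sketch of tabular Q-learning on a made-up one-dimensional "field" where the simulated robot just has to drive to a goal cell. The environment, reward, and state space are all stand-ins I invented for illustration; a real robot simulator would be vastly more complex, but the training loop has the same shape.

```python
import random

GOAL = 9           # goal cell on a 10-cell strip (invented toy environment)
ACTIONS = [-1, 1]  # drive left / drive right

def step(state, action):
    """Advance the simulated robot one cell; reward 1.0 only at the goal."""
    new_state = max(0, min(GOAL, state + action))
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward, new_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Standard epsilon-greedy Q-learning over the toy simulator."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            new_state, reward, done = step(state, action)
            best_next = max(q[(new_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = new_state
    return q

q = train()
# Greedy policy after training: which way to drive from each non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

The "migrate to the real world" step is exactly where this gets hard: a table like `q` learned on a clean simulator says nothing about carpet friction, battery sag, or defense from another robot.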
We continue to see software innovation each year, so I'm sure drivers will have more and more software-assisted maneuvers, but full autonomy in teleop likely won't happen soon (and it raises the question of whether it's even legal – I have no idea).
Taking a common example, Tesla required highly extensive testing for self-driving, and the resources to accomplish it were undoubtedly large. While a good deal of the time spent pushing self-driving was probably due to government regulation, the gap in complexity between a high school team and Tesla is extreme. You would have to intake and process information quickly, which would be difficult with a computer installed on the robot. I suppose you could send the data to the driver station and compute it there, but then I presume bandwidth would cause issues.
Assuming teams had the knowledge and manpower to automate robot driving with ML (such as reinforcement learning as you described), I imagine there are still many other deterrents that you’d come across.
How much sample data can the robot receive and process during the match? Probably a lot less than the eyes of one or more humans. This limits the robot's ability to process potentially useful information about the field.
Getting sample data to learn a policy will be tough. You would rarely have real competitive matches to train on, so you're relying on simulations (which hardly align with real-world data) or practice matches (which are also unlikely to align, and are hard to get in large quantities). Robots would likely not be able to learn complex policies.
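For what it's worth, one common technique for narrowing that sim-to-real gap is domain randomization: every training episode draws slightly different physics parameters, so the learned policy can't overfit to one exact simulator. Here's a minimal sketch; the parameter names and ranges are all invented for illustration, not measured from any real robot.

```python
import random

def randomized_sim_params(rng):
    """Draw a fresh set of simulator parameters for one training episode.
    All ranges below are made-up examples of real-world variation."""
    return {
        "wheel_friction": rng.uniform(0.7, 1.1),   # carpet wear varies field to field
        "motor_strength": rng.uniform(0.9, 1.05),  # battery sag, motor-to-motor variance
        "sensor_noise":   rng.uniform(0.0, 0.03),  # encoder / camera noise level
    }

rng = random.Random(42)
# Each episode would be run with its own randomized physics:
episode_params = [randomized_sim_params(rng) for _ in range(3)]
```

Even with this, the fundamental point stands: a policy robust to randomized physics is still not a policy that has ever seen a real defender pinning you against the wall.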
Due to the black-box nature of ML, during a match there could be many edge cases to be exploited by opponents (worst case) or encountered accidentally (best case). Teams would need a manual override, and using the override and course-correcting wastes time compared to having the whole thing driver-operated.
Driving is fun! I bet a good number of drivers play video games or do other activities that hone driving skills, and their ability to communicate with the drive team and react quickly to a diverse set of scenarios (AND use sophisticated, fine-tuned strategies) is much higher than an ML-powered robot's would be.
I agree with others on the thread that some of these limitations may be mitigated in the future. But I don't foresee ML-powered robot routines becoming a useful tool for generic robot control in the near future.
That said, you will see ML-powered routines used in smaller components of robots where the factors above can be mitigated, e.g. ball detection or goal detection/shooting. I wouldn't be surprised if there was a robot in 2011 that raised its arm to the right height based on the color of the game piece it held, for example.
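That kind of component-level task really is tractable, because it often doesn't even need ML. Here's a hedged sketch of the arm-height idea: classify the held game piece by majority vote over a few sampled pixels, then look up a preset height. The color thresholds, piece colors, and heights are all invented for illustration.

```python
def classify_piece(pixels):
    """pixels: list of (r, g, b) samples from the held game piece.
    Returns 'red', 'blue', or 'unknown' by majority vote. Thresholds are made up."""
    votes = {"red": 0, "blue": 0}
    for r, g, b in pixels:
        if r > 150 and b < 100:
            votes["red"] += 1
        elif b > 150 and r < 100:
            votes["blue"] += 1
    winner = max(votes, key=votes.get)
    return winner if votes[winner] > len(pixels) // 2 else "unknown"

# Hypothetical preset arm heights in meters, one per classification result.
ARM_HEIGHT = {"red": 0.8, "blue": 1.2, "unknown": 0.5}
```

A small, bounded task like this sidesteps every deterrent listed above: the input space is tiny, failures are obvious, and the driver can always override by moving the arm manually.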
Step 3 is to have an algorithm learn against what are effectively other already-coded, fully autonomous robots, just in simulations where they don't have to detect real objects. I feel like people are really trivializing that step! This isn't chess or Go; you can't just play against a large number of semi-random fixed moves. I'm not saying this isn't doable, but machine learning can't hand-wave it away.
Put all the above information together and write the actual AI to tell the robot what to do during teleop
This is a vast simplification of what it would take. Most importantly, you'd need your mechanical/CAD/design team to build a robot fast enough to give you time to test all this fancy software (I heard 1678's good at building robots in 9 days).
There is no doubt in my mind that one day a team will create a fully automatic robot to play an FRC game. The tools to do so are already here, and as time goes on the technology to do so will become more accessible.