Will We See a Full Auto Robot With ML That Replaces the Human Driver in Teleop?

This might just be a crazy idea, but looking at AI learning to play games with reinforcement learning, will we ever see teams use machine learning to fully replace the driver, even during the teleoperated period?

One might train the AI in a simulation with an accurate model of the robot's controls, and then migrate it to the real world for further training.
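
For the curious, here is a rough sketch of what that sim-first pipeline could look like. It assumes a Gymnasium-style custom environment trained with Stable-Baselines3 PPO; `DriveSimEnv` and all of the physics numbers (max speed, track width, field size) are made up for illustration, not taken from any real robot.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class DriveSimEnv(gym.Env):
    """Hypothetical 2D sim of a differential-drive FRC robot driving to a goal point."""

    def __init__(self):
        # Observation: robot x, y, heading plus goal x, y (meters / radians).
        self.observation_space = spaces.Box(low=-20.0, high=20.0, shape=(5,), dtype=np.float32)
        # Action: normalized left/right wheel velocities in [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.dt = 0.02  # 50 Hz, roughly a roboRIO loop rate

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pose = np.zeros(3, dtype=np.float32)  # x, y, heading
        self.goal = self.np_random.uniform(-5.0, 5.0, size=2).astype(np.float32)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        left, right = np.clip(action, -1.0, 1.0) * 3.0         # assumed 3 m/s max wheel speed
        v, omega = (left + right) / 2.0, (right - left) / 0.6  # assumed 0.6 m track width
        self.pose += self.dt * np.array(
            [v * np.cos(self.pose[2]), v * np.sin(self.pose[2]), omega], dtype=np.float32)
        self.steps += 1
        dist = float(np.linalg.norm(self.pose[:2] - self.goal))
        terminated = dist < 0.1
        reward = -dist + (10.0 if terminated else 0.0)         # shaped: get closer, bonus on arrival
        return self._obs(), reward, terminated, self.steps >= 600, {}

    def _obs(self):
        return np.concatenate([self.pose, self.goal])


if __name__ == "__main__":
    from stable_baselines3 import PPO

    model = PPO("MlpPolicy", DriveSimEnv(), verbose=1)
    model.learn(total_timesteps=200_000)  # train entirely in simulation first...
    model.save("drive_policy")            # ...then validate / fine-tune on the real robot
```

Transferring the saved policy onto a real robot (and closing the sim-to-real gap) is where the actual work would start.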

Here is a video by Two Minute Papers on an AI model playing a physics-simulated game: https://youtu.be/SsJ_AusntiU

6 Likes

Boss, I’m just trying to get my students to learn the machines

63 Likes


We continue to see software innovation each year, so I'm sure drivers will have more and more software-assisted maneuvers, but full auto in teleop likely won't happen soon (and it raises the question of whether it's even legal; I have no idea).

Taking a common example, Teslas required highly extensive testing for self-driving, and the resources needed to accomplish it were undoubtedly large. While a good deal of the time it took to ship self-driving was probably due to regulation, the gap in complexity between a high school team and Tesla is extreme. You would have to take in and process information quickly, which would be difficult if you installed the computer on the robot. I suppose you could send the data to the driver station and compute it there, but then I presume bandwidth would become an issue.
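
On the bandwidth point, a quick back-of-envelope. The numbers here are assumptions (a roughly 4 Mbps per-robot cap as in recent game manuals, a single compressed 640x480 stream), so check the current rules, but the conclusion is hard to escape:

```python
# Back-of-envelope for streaming camera frames to the driver station for off-board compute.
# All numbers are assumptions: a ~4 Mbps per-robot field bandwidth cap (recent seasons --
# check the current game manual), 640x480 MJPEG at ~30 kB per compressed frame, 30 fps.
BANDWIDTH_CAP_BPS = 4_000_000
FRAME_BYTES = 30_000
FPS = 30

stream_bps = FRAME_BYTES * 8 * FPS
print(f"One camera stream: {stream_bps / 1e6:.1f} Mbps "
      f"({100 * stream_bps / BANDWIDTH_CAP_BPS:.0f}% of the assumed cap)")
# ~7.2 Mbps -- already over the cap before any control traffic, which is why
# inference usually has to live on the robot (e.g. on a coprocessor) instead.
```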

I’d love to see myself proved wrong, though. :slight_smile:

2 Likes

There is no current prohibition on using auto in teleop.

Matter of fact, WAY back long ago, I recall hearing rumors of at least one team running auto before there was an auto period. If you want stories… catch me at L.A.

8 Likes

I’m under the impression that a few 2015 stack-in-place robots/minibots were nearly full auto except for edge cases.

10 Likes

I keep our programming simple: very few PIDs, control loops, and sensors. Humans are quite good at doing most of the things a robot could do automatically.

If a system has a control loop, there is almost always an override.
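
For what it's worth, that "control loop with an override" pattern fits in a few lines. A generic, framework-agnostic sketch (placeholder gains and deadband, not any team's actual code):

```python
class PIDWithOverride:
    """Closed-loop control that yields to the operator the moment they touch the stick."""

    def __init__(self, kp, ki, kd, deadband=0.1):
        # Gains and deadband are placeholders; tune per mechanism.
        self.kp, self.ki, self.kd = kp, ki, kd
        self.deadband = deadband
        self.integral = 0.0
        self.prev_error = 0.0

    def calculate(self, setpoint, measurement, operator_input, dt=0.02):
        # Override: any operator input past the deadband passes straight through
        # and resets the loop state so the PID doesn't fight them afterwards.
        if abs(operator_input) > self.deadband:
            self.integral = 0.0
            self.prev_error = setpoint - measurement
            return operator_input

        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```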

Makes programming a lot easier :grin:

Now Zebracorns on the other hand…

10 Likes

Maybe 10 years after self-driving cars actually work at all.

There are other drivers, strategy minutiae, etc. that may forever get in the way of this. 2015 was our best shot at this yet and we didn’t see too much of that.

7 Likes

Hell, our full robot in 2015 was almost completely autonomous. I believe fine positioning was just about the only driver input.

6 Likes

Assuming teams had the knowledge and manpower to automate robot driving with ML (such as reinforcement learning as you described), I imagine there are still many other deterrents that you’d come across.

  • How much sample data can the robot receive and process during the match? Probably a lot less than the eyes of one or more humans can. This limits the robot's ability to process potentially useful information about the field.
  • Getting sample data to learn a policy will be tough. You would rarely have real competitive matches to train on, so you're relying on simulations (which hardly align with real-world data) or practice matches (which are also unlikely to align well and are not easy to get in large quantities). Robots would likely not be able to learn complex policies.
  • Due to the black-box nature of ML, during a match there could be many edge cases waiting to be exploited by opponents (worst case) or stumbled into accidentally (best case). Teams would need a manual override, and hitting the override and course correcting would waste time compared to having the whole thing driver-operated (a small sketch of such an override arbiter follows this list).
  • Driving is fun! I bet a good number of drivers play video games or do other activities that hone driving skills, and their ability to communicate with the drive team, react quickly to a diverse set of scenarios, AND use sophisticated, fine-tuned strategies is much higher than an ML-powered robot's would be.
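
To make the override point concrete, here is a toy sketch of what that arbitration might look like. The `TeleopArbiter` name, the deadband, and the policy interface are all hypothetical:

```python
class TeleopArbiter:
    """Runs a learned policy, but latches to full manual the moment the driver intervenes."""

    def __init__(self, policy, stick_deadband=0.15):
        self.policy = policy              # hypothetical callable: observation -> (forward, turn)
        self.stick_deadband = stick_deadband
        self.manual_latched = False
        self.override_time_s = 0.0        # rough measure of match time spent course correcting

    def output(self, observation, driver_forward, driver_turn, dt=0.02):
        driver_active = max(abs(driver_forward), abs(driver_turn)) > self.stick_deadband
        if driver_active:
            self.manual_latched = True    # once burned by an edge case, stay in manual
        if self.manual_latched:
            self.override_time_s += dt
            return driver_forward, driver_turn
        return self.policy(observation)
```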

I agree with others on the thread that some of these limitations may be mitigated in the future, but I don't foresee ML-powered routines becoming a useful tool for generic robot control any time soon.

That said, you will see ML-powered routines used in smaller components of robots where the factors above can be mitigated, e.g., ball detection or goal detection/shooting. I wouldn't be surprised if there was a robot in 2011 that raised its arm to the right height based on the color of the game piece it was holding, for example.
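
That 2011-style example wouldn't even need ML; a plain color threshold gets you there. A hypothetical sketch with OpenCV, where the HSV ranges and arm heights are placeholders rather than real game numbers:

```python
import cv2
import numpy as np

# Placeholder HSV ranges and arm heights -- not real 2011 numbers.
HSV_RANGES = {
    "red":  ((0, 120, 80), (10, 255, 255)),
    "blue": ((100, 120, 80), (130, 255, 255)),
}
ARM_HEIGHT_M = {"red": 0.76, "blue": 1.12, "unknown": 0.0}
MIN_PIXELS = 500  # minimum matching pixels before we trust a detection


def classify_game_piece(bgr_frame):
    """Return the dominant game-piece color in the frame, or 'unknown'."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    best_color, best_count = "unknown", MIN_PIXELS
    for color, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8), np.array(hi, dtype=np.uint8))
        count = cv2.countNonZero(mask)
        if count > best_count:
            best_color, best_count = color, count
    return best_color


def arm_setpoint(bgr_frame):
    """Pick the arm height for whatever piece the camera currently sees."""
    return ARM_HEIGHT_M[classify_game_piece(bgr_frame)]
```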

3 Likes

Our 2019 programmer was insanely good. He left this in the readme…

18 Likes

971, I am looking directly at you guys.
If someone wanted to do it, they could if they tried.

9 Likes

Step 3 is to have an algorithm learn against what are effectively other already-coded, fully autonomous robots, just in simulations where they don't have to detect real objects. I feel like people are really trivializing that step! This isn't chess or Go; you can't just play against a large number of semi-random fixed moves. I'm not saying it isn't doable, but machine learning can't hand-wave that step away.
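
To show how much is hiding in that step, here is a deliberately toy skeleton of "learn against scripted opponents in sim." Every class and number is a placeholder, and the hard parts (a faithful match simulator, the actual learner) are exactly the bits stubbed out:

```python
import random


class ScriptedOpponent:
    """Stand-in for an already-coded autonomous opponent (a defender, a cycler, ...)."""

    def __init__(self, aggression):
        self.aggression = aggression

    def act(self, state):
        # Hand-coded behavior; here just a placeholder heuristic.
        return "defend" if random.random() < self.aggression else "cycle"


class ToyMatchSim:
    """Minimal placeholder for a full-match simulation where no perception is needed."""

    def __init__(self, opponent, length=150):
        self.opponent, self.length = opponent, length
        self.t = self.score = 0

    def reset(self):
        self.t = self.score = 0
        return (self.t, self.score)

    def step(self, action):
        contested = self.opponent.act((self.t, self.score)) == "defend"
        reward = 1 if (action == "cycle" and not contested) else 0
        self.score += reward
        self.t += 1
        return (self.t, self.score), reward, self.t >= self.length


def train(learner_act, learner_update, episodes=1000):
    """Outer loop: play the learner against a pool of scripted opponents."""
    pool = [ScriptedOpponent(0.3), ScriptedOpponent(0.7)]
    for _ in range(episodes):
        sim = ToyMatchSim(random.choice(pool))
        obs, done, history = sim.reset(), False, []
        while not done:
            action = learner_act(obs)
            obs, reward, done = sim.step(action)
            history.append((obs, action, reward))
        learner_update(history)  # e.g. a policy-gradient step on the finished episode


if __name__ == "__main__":
    train(lambda obs: "cycle", lambda history: None)  # dummy "learner" just to exercise the loop
```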

7 Likes

For a fully autonomous robot in FRC that could compete with a human driver, you'd probably need to check off the following prerequisites:

  • Know the robot's position on the field for the entire match (mostly a solved problem, as far as I know)
  • Accurate autonomous driving (a solved problem)
  • Detect game pieces on the field (also mostly a solved problem)
  • Detect other robots on the field (see 2898's work on this back in 2017; it could probably use more work / be updated to use more modern technologies)
  • Put all of the above information together and write the actual AI that tells the robot what to do during teleop (a toy sketch of this glue layer follows the list)
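
Here is that toy sketch of the glue layer. Everything in it (the pose type, the field coordinate, the behavior names) is invented for illustration:

```python
import math
from dataclasses import dataclass


@dataclass
class Pose2d:
    x: float
    y: float
    heading: float = 0.0


SCORING_LOCATION = Pose2d(1.0, 5.5)  # assumed field coordinate, not from any real game


def dist(a: Pose2d, b: Pose2d) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)


def choose_behavior(robot: Pose2d, game_pieces: list, opponents: list, has_piece: bool):
    """Tiny rule-based 'teleop brain' sitting on top of localization and detection."""
    # A defender parked on top of us? Get clear before doing anything else.
    if any(dist(robot, opp) < 1.0 for opp in opponents):
        return "evade", SCORING_LOCATION
    # Carrying a game piece: go score it.
    if has_piece:
        return "score", SCORING_LOCATION
    # Otherwise chase the nearest detected game piece, if there is one.
    if game_pieces:
        return "intake", min(game_pieces, key=lambda p: dist(robot, p))
    return "patrol", SCORING_LOCATION
```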

This is a vast simplification of what it would take. Most importantly, you'd need your mechanical/CAD/design team to build a robot fast enough to give you time to test all this fancy software (I heard 1678's good at building robots in 9 days).

5 Likes

It wouldn't be too difficult: just give a neural network a 360° lidar view and all the values from all the subsystems on the robot, and let it output commands for the robot to execute.
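
The wiring really is that short. A minimal sketch of the architecture in PyTorch, with made-up input/output sizes; the catch is that an untrained network like this outputs noise, and producing the training signal is the whole problem:

```python
import torch
import torch.nn as nn

# Made-up sizes: 360 lidar range readings, 12 subsystem values, 4 output commands
# (forward, turn, intake power, arm setpoint), all normalized to [-1, 1].
N_LIDAR, N_SUBSYSTEM, N_COMMANDS = 360, 12, 4


class DrivePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LIDAR + N_SUBSYSTEM, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, N_COMMANDS),
            nn.Tanh(),
        )

    def forward(self, lidar, subsystem_state):
        return self.net(torch.cat([lidar, subsystem_state], dim=-1))


if __name__ == "__main__":
    policy = DrivePolicy()
    lidar = torch.rand(1, N_LIDAR)       # fake 360-degree scan
    state = torch.rand(1, N_SUBSYSTEM)   # fake encoder / limit switch / etc. values
    print(policy(lidar, state))          # untrained output: the "easy" part ends here
```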

1 Like

With that said, I take it you've done it?

2 Likes

Not yet. If I can convince my team's mentor to let me try it, then we'll see if it's as easy as I think it will be.

1 Like

There is no doubt in my mind that one day a team will create a fully automatic robot to play an FRC game. The tools to do so are already here, and as time goes on the technology to do so will become more accessible.

David's challenge still lives on in the students of today

1 Like

Second this opinion!

2 Likes