Machine Learning for Autonomous Robot Actions

I recently stumbled upon a scientific paper (here) about building an autonomous robot controller with machine learning techniques. The paper gave me the idea of using machine learning algorithms to control autonomous robot actions in FRC. The algorithm it describes seems somewhat applicable to what we do with our autonomous modes, and the experiments with it look encouraging so far.

I was wondering if anything like this has been done before in FRC?

In addition, the algorithm outlined in the paper above employs a couple of sub-algorithms I am trying to implement. One is “iLQG” (paper about it here), and another is “Path Integral Policy Improvement” (PI²). Has anyone had any experience with them, or even heard of them before?
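
For concreteness, here is my current understanding of the core PI² update: run a batch of noisy rollouts of the current parameters, weight each rollout’s exploration noise by the exponential of its (negated) cost, and step the parameters by the weighted average of the noise. iLQG is harder to sketch briefly, so this only covers PI². Everything below is a toy sketch; rolloutCost and all the constants are placeholders I made up, not anything taken from the paper:

```java
import java.util.Random;

public class Pi2Sketch {
    static final int NUM_PARAMS = 5;      // e.g. knots of a velocity profile
    static final int NUM_ROLLOUTS = 20;   // noisy rollouts per iteration
    static final double NOISE_STD = 0.1;  // exploration noise scale
    static final double LAMBDA = 1.0;     // temperature: lower = greedier
    static final Random rng = new Random();

    // Hypothetical stand-in: run one rollout (in simulation!) and return its cost.
    static double rolloutCost(double[] params) {
        double cost = 0;
        for (double p : params) cost += (p - 0.5) * (p - 0.5); // toy cost, minimum at 0.5
        return cost;
    }

    public static void main(String[] args) {
        double[] theta = new double[NUM_PARAMS]; // current policy parameters

        for (int iter = 0; iter < 100; iter++) {
            double[][] noise = new double[NUM_ROLLOUTS][NUM_PARAMS];
            double[] costs = new double[NUM_ROLLOUTS];

            // 1. Sample K noisy variations of the current parameters and run them.
            for (int k = 0; k < NUM_ROLLOUTS; k++) {
                double[] trial = new double[NUM_PARAMS];
                for (int i = 0; i < NUM_PARAMS; i++) {
                    noise[k][i] = rng.nextGaussian() * NOISE_STD;
                    trial[i] = theta[i] + noise[k][i];
                }
                costs[k] = rolloutCost(trial);
            }

            // 2. Turn costs into softmax-style weights (low cost -> high weight).
            //    Subtracting the minimum cost is just for numerical stability.
            double minCost = Double.POSITIVE_INFINITY;
            for (double c : costs) minCost = Math.min(minCost, c);
            double[] weights = new double[NUM_ROLLOUTS];
            double sum = 0;
            for (int k = 0; k < NUM_ROLLOUTS; k++) {
                weights[k] = Math.exp(-(costs[k] - minCost) / LAMBDA);
                sum += weights[k];
            }

            // 3. Update = probability-weighted average of the exploration noise.
            for (int i = 0; i < NUM_PARAMS; i++) {
                double step = 0;
                for (int k = 0; k < NUM_ROLLOUTS; k++) {
                    step += (weights[k] / sum) * noise[k][i];
                }
                theta[i] += step;
            }
        }
        System.out.println("final cost: " + rolloutCost(theta));
    }
}
```

The nice part, as far as I can tell, is that it is gradient-free: you never differentiate anything, you just need to be able to run rollouts and score them.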

A quick search for machine learning turns up this fairly recent thread: http://www.chiefdelphi.com/forums/showthread.php?t=136523&highlight=machine+learning

This seems like the best approach to machine learning a high school student would be able to implement for robot control.

If you want to get technical, 900’s vision code this year implemented machine learning to identify the bins.

I’m assuming that when you talk about controlling robot actions, you mean something more sophisticated than simple things like driving in a straight line, and are instead talking about having the robot play the game autonomously.

You shouldn’t need anything more than hand-coded top-level algorithms for FRC autonomous. Although it’s possible to use machine-learning results to make modest improvements, I think that for anything non-trivial (see my first paragraph) the cost outweighs the benefits.

Attempting to automate the driver-control part of the match has come up before. Read those threads.

I think the consensus is that attempting it is an excellent learning exercise, but that it’s unlikely to bear fruit quickly.

Also, any algorithm that you might put in charge of your bot will only be able to react to what it senses. You have to possess a robot with the right sensors before you can automate its behavior.

However (here is where the glass becomes half full), if you start by learning to control a simulated robot…

  1. You can figure out what sensors you would need/want on a real robot, and
  2. You can learn how to choose and train/tune an algorithm that would have a good chance of being successful in a real robot, and
  3. Safely operating, modifying and repairing dozens of simulated bots is waaaay easier than doing the same with real bots.

So… I suggest starting with a simulator.

Blake

For better or worse, FRC has stuck with autonomous challenges that reward precise and repeatable actions. The closest I have ever seen an FRC autonomous challenge get to needing decision making and actual autonomy was 2014, and even then it ended up not working out: the field and robots weren’t properly synced, so we ended up telling the bots which goal to go to through Cheesy Vision anyway. (I am well aware that in 2007 the rack was moved around prior to a match, requiring robots to actually track targets most of the time. I was unfortunately but a child at the time.)

I would personally LOVE to see a game where more of the autonomous mode was nondeterministic, which might make a machine learning algorithm genuinely useful for autonomous action. Unfortunately, many teams still struggle just to get moving in autonomous, so it seems FRC is trying to raise the floor first and worry about the ceiling later.

But build season is only 6 weeks! There’s a whole year out there to mess around with this yourself. Who cares if it goes on a competition robot?

Yes, that is definitely one of the things I will do. (My team would not take well to being told that our robot was now learning how to play autonomous, and that I had no idea what it would try next!)

This year, I tried to run a 3-tote autonomous routine with my team’s robot. I had the robot’s motion carefully controlled throughout the entire routine, and runs were very repeatable, with very little deviation from run to run. However, I could never seem to get it quite fast enough to complete in time.

We then went to a competition and found another team with a routine almost identical in procedure, but with an extra (almost sloppy) skid here and there that made them fast enough to complete it in time.

My hope is that a machine learning algorithm would be able to optimize autonomous routines to ensure we had the best run possible with a given robot, including finding solutions that a hand-written autonomous might miss (such as a well-timed skid).
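
To make that concrete, even something as simple as random search over a routine’s tunable constants might find that kind of thing. A minimal sketch, assuming you can time a run (ideally in simulation); routineTime and both parameters are made-up placeholders, not real team code:

```java
import java.util.Random;

public class RoutineTuner {
    static final Random rng = new Random();

    // Toy stand-in: pretend the best time is at speed=0.8, delay=0.2.
    // A real version would run the routine and return seconds to finish
    // (or a large penalty if it fails).
    static double routineTime(double speed, double delay) {
        return 10 + 5 * Math.pow(speed - 0.8, 2) + 5 * Math.pow(delay - 0.2, 2);
    }

    public static void main(String[] args) {
        double speed = 0.5, delay = 0.5;           // initial hand-tuned guesses
        double best = routineTime(speed, delay);

        for (int run = 0; run < 200; run++) {
            // Perturb the current best parameters a little and re-time.
            double s = speed + rng.nextGaussian() * 0.05;
            double d = delay + rng.nextGaussian() * 0.05;
            double t = routineTime(s, d);
            if (t < best) {                        // keep only improvements
                best = t;
                speed = s;
                delay = d;
            }
        }
        System.out.printf("best time %.2fs at speed=%.2f delay=%.2f%n",
                          best, speed, delay);
    }
}
```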

On the other hand, such a system would be very hard to modify mid-competition, and would need to be completely retrained in order to, say, drive a little farther into the auto zone.

Oh how I wish for a bigger autonomous mode this coming year…

Agreed. The one thing that will surely lead to more autonomous behavior is a greater opportunity for scoring points using autonomous behaviors. That’s a decision only the GDC can make. I encourage you to make your voice heard to whomever you can.

In the meantime, keep researching it. As others have pointed out, it’s an area rich with opportunity for research and innovation, and it’s my opinion that those who pursue a career in autonomous navigation and control will likely find it very rewarding in many ways.

There is this: https://github.com/KHEngineering/SmoothPathPlanner

It is designed to be used in real time, but one could compute the velocity-time profiles beforehand and simply upload them to the robot. Granted, this only gets the robot to (x, y) (in theory); one still has to do something once it is there, such as pick up a tote or shoot a ball.
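
Playing back a precomputed profile is itself simple: feed one setpoint per control loop. A minimal sketch, assuming a fixed-rate (e.g. 20 ms) autonomous loop; the Drivetrain interface and setLeftRightVelocity are hypothetical stand-ins for whatever your drive code actually exposes:

```java
public class ProfilePlayback {
    private final double[] leftVel;   // m/s, precomputed off-robot
    private final double[] rightVel;  // m/s, same length as leftVel
    private int index = 0;

    public ProfilePlayback(double[] leftVel, double[] rightVel) {
        this.leftVel = leftVel;
        this.rightVel = rightVel;
    }

    // Call once per control loop; returns true when the profile is finished.
    public boolean step(Drivetrain drive) {
        if (index >= leftVel.length) {
            drive.setLeftRightVelocity(0, 0); // profile done, stop the drive
            return true;
        }
        drive.setLeftRightVelocity(leftVel[index], rightVel[index]);
        index++;
        return false;
    }

    // Hypothetical drivetrain abstraction; adapt to your own drive class.
    public interface Drivetrain {
        void setLeftRightVelocity(double leftMetersPerSec, double rightMetersPerSec);
    }
}
```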

Beyond that, you could implement Q-learning with computer vision (through raw images, or data gathered from vision and sensors) if you have enough computational power. I believe machine learning will never take off in FRC unless wireless aerial cameras are allowed.
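
For reference, the core of tabular Q-learning is a single update rule: Q(s,a) += α(r + γ·max Q(s′,·) − Q(s,a)). A sketch, assuming you can discretize your vision/sensor data into a manageable number of states; the state and action encodings here are hypothetical:

```java
import java.util.Random;

public class QLearningSketch {
    static final int NUM_STATES = 100;   // discretized sensor/vision readings
    static final int NUM_ACTIONS = 4;    // e.g. forward/back/left/right
    static final double ALPHA = 0.1;     // learning rate
    static final double GAMMA = 0.9;     // discount factor

    static final double[][] q = new double[NUM_STATES][NUM_ACTIONS];

    // One Bellman backup after observing (state, action, reward, nextState).
    static void update(int state, int action, double reward, int nextState) {
        double best = Double.NEGATIVE_INFINITY;
        for (double v : q[nextState]) best = Math.max(best, v);
        q[state][action] += ALPHA * (reward + GAMMA * best - q[state][action]);
    }

    // Epsilon-greedy action selection: explore randomly some of the time.
    static int chooseAction(int state, double epsilon, Random rng) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(NUM_ACTIONS);
        int best = 0;
        for (int a = 1; a < NUM_ACTIONS; a++) {
            if (q[state][a] > q[state][best]) best = a;
        }
        return best;
    }
}
```

The hard part in FRC isn’t this update; it’s defining states, actions, and rewards that capture the game, and getting enough runs (which again points at simulation).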

In situations like this, I generally bet on the biological neural nets (the wetware) instead of the silicon nets, because the problem space is usually more complex than anything a reasonable (in an FRC time and trouble sense of the word “reasonable”) silicon net (or other machine learning) is able to properly digest and respond to.

Whatever is given control of the robot does need to incorporate the right level of sophistication. What gets coded by hand has the possibility of adapting fairly quickly (but not magically) to include as much sophistication as is needed. The sophistication a machine learning algorithm includes will never exceed some fixed mash-up (created during training) of that algorithm’s fixed inputs. That mash-up might surprise us, but it will always be limited until some wetware gets involved to expand the mash-up’s limits.

In the sort-of simple case you described, explore the many ways you could have changed your hand-coded software, before throwing in the towel and betting on machine-learning techniques in the future. Ask yourself what a machine-learning algorithm could have incorporated, and ask yourself how many other ways there are to incorporate the same things. That exploration will help you know when, and when not, to deploy more machine learning.

For high-level decision-making, your wetware neural net is the more sophisticated one. Use it wisely to decide what to encode and deploy in the software.

Blake
PS: This thought came to me a few minutes ago. A good frame of mind for FRC robot designers might be to think of themselves as someone trying to build an excellent mechanical turk (the original one, not the recent Amazon service). The result would have plenty of useful automation, but wouldn’t try to force-fit automation everywhere. That result would make full use of the processor(s)/method(s) best suited for each task; and some of those would be biological.