Future of Autonomous Mode

I’ve been looking at all the videos from the regionals and I’m very disappointed with the quality of autonomous mode. In the real world, in a manufacturing plant, robots are expected to work autonomously, doing repetitive routines throughout the day. Successfully programming an autonomous mode is part of the process of engineering a robot.

When will we be able to use a Pentium 4-level processor for the robot? In fact, maybe they should let us interface the robot microprocessor with a PC, so that the PC does all the processing and sends data back to the robot processor telling it what to do. We are currently stuck with very primitive sensors. If only we could mount a couple of webcams and use a PC-class processor, the sky would be the limit for autonomous mode. In fact, you could play the entire match in autonomous mode if you could use webcams. At that point, it would purely be a matter of programming.

You could use neural nets, heuristic algorithms, and other pattern recognition techniques to understand the situation on the playing field and have the robot react accordingly. The robot would be able to learn over time as its neural net weights evolve. That would be my dream come true. I would love to try to program something that sophisticated. Digital sensors and even analog sensors don’t tell you much. On the other hand, if your robot is processing real-time video, the sky is the limit. When do you think FIRST bots will be doing that? In 5 years? 10 years, maybe?
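As a toy illustration of the "weights evolving over time" idea, here is a minimal perceptron in C. Everything here is invented for illustration — the learning rate and the AND-gate training data just stand in for whatever patterns a robot might actually need to learn:

```c
/* Toy single-neuron "learner": its weights evolve with each example. */
static double w[2] = {0.0, 0.0};
static double bias = 0.0;

/* Classify a 2-input example: fire (1) if the weighted sum is positive. */
static int predict(const double *x)
{
    double s = bias + w[0] * x[0] + w[1] * x[1];
    return s > 0.0 ? 1 : 0;
}

/* One perceptron update: nudge the weights toward the correct label. */
static void train_step(const double *x, int label, double rate)
{
    int err = label - predict(x);
    w[0] += rate * err * x[0];
    w[1] += rate * err * x[1];
    bias += rate * err;
}

/* Train on the four AND-gate examples for a number of passes. */
static void train_and_gate(int epochs)
{
    static const double X[4][2] = {{0,0}, {0,1}, {1,0}, {1,1}};
    static const int Y[4] = {0, 0, 0, 1};
    for (int e = 0; e < epochs; e++)
        for (int i = 0; i < 4; i++)
            train_step(X[i], Y[i], 0.1);
}
```

A real vision-driven robot would obviously need far more than this, but the core loop — predict, compare, adjust weights — is the same idea scaled up.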

EDIT: Yes, I do realize this is a high school level competition but there are a lot of very very talented young programmers out there who can do this and much more.

When it comes down to it, it’s the idea of 6 weeks vs. more time. Sure, you can do more with more time, but half the challenge is the time limit. 6 weeks is believed to be the bare minimum needed to build a robot. Frankly, sure, you could have a P4 (yuck) or PPC ( :slight_smile: ) in a robot controller, but what we have now is the bare minimum. Many times in the real world you will have neither the option nor the resources to use the best. Innovation FIRST had to rebuild the scoring and control system for the regionals at VCU using a few outdated computers! That’s part of the engineering challenge: trying to make something out of parts that were never meant to work together.

Most likely, robots in the “real world” are using a processor closer to what we have in the robot controller than a Pentium 4. Very few applications outside of the personal computer make use of Pentium chips, because they are too powerful (and therefore too complex and costly) for the job at hand. The Mars rovers are a perfect example. They do all sorts of complicated things, and it’s all controlled by a 20MHz RS/6000 derivative.

Well, it was just an example. I’m not saying the bot should be powered by a P4 — just a faster processor and much more memory, so that it’s capable of the more advanced stuff. Yes, I agree, the 6 week period will hold people back. Unless, of course, FIRST provides some default code for, say, detecting the big yellow ball. Besides, when the software gets more advanced, teams will see a greater need for professional mentors in the fields of software engineering and computer science, and they’ll get to experience the work involved in these disciplines.

Again, I agree…I just meant to say a faster processor and more memory than the current PIC processor. The Pentium 4 was a bad example. I’m sure every year, the robot controller will continue to be upgraded. Microprocessors are getting cheaper all the time.

I’ve been looking at all the videos from the regionals and I’m very disappointed with the quality of autonomous mode. …

I completely disagree. I think a lot of teams have been scared off by interrupts, the IR sensors, and having to write a real-time embedded sequence in C this year.

The learning curve of switching to C has been very high this year.

Our team has our bot doing exactly what we want it to do in auton mode - knocking the release ball off, then turning around to start collecting them, all by itself.

And we are using nothing but an Analog Devices yaw rate sensor and the FIRST beacon detectors - and our detectors are in a fixed position on opposite sides of the bot, just to see when we are passing the beacon.

In fact, we are not even using interrupts for the IR sensors - just polling the INT1 and INT2 flag bits to detect when those pins have changed state, then we clear the bits - no interrupts are enabled at all!
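The flag-polling approach can be sketched in plain C. The two variables below stand in for the PIC’s INT1IF/INT2IF latch bits (on the real hardware these are bits of a register like INTCON3) — this is an illustration of the logic, not IFI’s or any team’s actual code:

```c
/* Hypothetical stand-ins for the PIC's INT1IF/INT2IF flag bits.
   On the real controller these are hardware register bits that latch
   when the IR-detector pins change state; here they are plain
   variables so the polling logic can be shown on its own. */
static volatile int int1_flag = 0;
static volatile int int2_flag = 0;

static int left_beacon_edges = 0;
static int right_beacon_edges = 0;

/* Called once per control loop: if either detector pin changed state
   since the last pass, count the edge and clear the flag. No interrupt
   is ever enabled -- the flag bit latches the edge for us. */
void poll_ir_detectors(void)
{
    if (int1_flag) {
        left_beacon_edges++;
        int1_flag = 0;   /* clear so the next edge can be latched */
    }
    if (int2_flag) {
        right_beacon_edges++;
        int2_flag = 0;
    }
}

int get_left_edges(void)  { return left_beacon_edges; }
int get_right_edges(void) { return right_beacon_edges; }
```

Since the hardware latches the edge, nothing is missed as long as the loop polls faster than beacon transitions arrive — no interrupt service routine needed.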

If we can do all this with these simple resources, what do we need a Pentium and a camera for?

If teams can’t even make their bot move in auton mode at all, throwing more complexity at them will not make it easier; it will make it worse.

Throwing more money or technology or resources at a problem is never the right answer - if you can’t get something to work at all using simple technology, you will never get it to work using something complicated.

This is very unlikely. For something low-volume like the robot controller (a few thousand a year is pretty low volume), the majority of the cost is in the PCB design, writing the default software, and testing. The cost of the processor is fairly insignificant. I think IFI spent basically all of this past year redesigning the RC with the new PIC processor, and I’d be very surprised if they don’t stick with it in its current form for at least a few years. Designing and testing new boards with different processors isn’t trivial - especially with the limited amount of time IFI has to work with.

If you have tried to write something that won’t work with this RC because of not enough processor power or RAM or something, you might try sharing it here on CD. There’s a lot of good software folks here, and they may be able to help you optimize your design so that it will work on the RC.

I agree with Ken on this one. People had a bit of a rough time making the switch to the new processor.

I also feel that the PIC was a bad choice (however, I still feel it is a big improvement over last year’s controller). I would have rather seen IFI go to an HC11 or HC12 that comes pre-loaded with a real-time operating system kernel. We use this type of thing for our custom electronics and it is actually easier to work with than the PIC on the IFI controller.

I don’t think teams will have so much trouble next year. There will be a lot of lessons learned from this year, a lot of whitepapers on the subject, and the rookie teams will have all of this knowledge and experience at their disposal before next year’s build season starts. Let’s see how next year goes before we condemn this year’s system.

I know FIRST isn’t supposed to be fair, but autonomous mode, as it is, is out of reach for many teams. Programming a real AI - processing real-time video and making active decisions in a game as complicated as FIRST’s - would be possible for only a very, very few teams, and I believe those teams would mostly outsource their programming.

FIRST is For Inspiration, of course, but I believe that most of the work on a robot should be by students, and that includes programming. I know that I couldn’t program to the level you are talking about right now; people go to college and spend their careers developing these tools and systems. Certainly C was the right move, but this would be far too much.

I would have greatly appreciated this also. It is much better to have some of the basic requirements laid out for you in an OS than to have to put all that in your code. Plus, with an OS you can have fun things like multitasking. :slight_smile:

Yup, this probably does need to move in stages, but I think it would be nice to give the programmers a bigger role on the team. Really, at this point, all you need is one guy and an hour or so to write all the basic code for the bot. Any more advanced stuff will take another couple of hours. It would be nice to instead have a team of programmers working around the clock like the rest of the team, trying to figure things out.

EDIT: Perhaps we’re looking at something in the long run. Maybe in 20 years, FIRST bots will be AI driven? :smiley: Besides, this just adds to the coolness factor. :cool: :stuck_out_tongue:

FIRST is a competition intended to give high school students the experience of working on all stages of a real engineering project in six weeks (Design, Prototype, Build, Program, Debug, etc.). FIRST does this by getting all teams to start from scratch with a new game. Although some teams have advantages like a battle-tested drive train that they rebuild every year, it still is a new game with new challenges (like steps to climb). Autonomous has new challenges as well.

FIRST is not an AI development competition like RoboCup. They use AI with cameras to play an autonomous game of soccer. Some robots even network with teammates to organize plays, which would be cool for FIRST if IFI would provide the ability for bots to communicate with each other (perhaps that is the future; FIRST loves cooperation). However, RoboCup teams share the same simple goal: kick the ball into the opponent’s goal. The game of soccer is much simpler than any FIRST game, which are notorious for their complex scoring systems that many humans can’t comprehend. Even if we were given a default camera object recognition system, very, very few teams would be able to teach it to play the game in 6 weeks. Think about all the different objects involved in this year’s game (3 types of balls, 4 different goals, etc.). All FIRST robots look different, so the only way to tell a friend robot from a foe is those little blinking lights.

This would create a larger gap between the haves and have-nots when it comes to autonomous. Currently the pinnacle of autonomous is a positional coordinate system (like the ones Wildstang and a few others have). The PIC is powerful enough to allow everyone to do this without external processors. As teams get used to the new processor, gyros, and encoders over the next few years, many more teams will develop positional systems, including some relatively new teams. It gives all programmers (students and mentors) a lofty but achievable goal. A camera-based AI system in 6 weeks is not an achievable goal for nearly every FIRST team.

Really, at this point, all you need is one guy and an hour or so to write all the basic code for the bot. Any more advanced stuff will take another couple of hours. It would be nice to instead have a team of programmers working around the clock like the rest of the team, trying to figure things out.

There’s a lot for programmers to work on right now. It’s often useful to compensate for mechanical quirks your robot may have, or to streamline your control system. If nothing else, there’s plenty of work to be done on autonomous mode. If your team can afford sensors, they open up new possibilities and improve the accuracy of your existing code. Accelerometers, gyros, optical sensors, pots, and prox sensors all help. At the very least, you could always create dead reckoning programs.

It’s pretty easy to integrate a program running on a PC with the mini-RC using the serial port. You can code all of your low-level functions on the RC, with any high-level processing on the PC. The PC handles all of your “big picture” task management and the RC handles IO and actuator control. Your PC program is basically acting as a smart operator in teleoperation mode. You could probably mount a small laptop on a simple mobile robot base or use a wireless serial adaptor.
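A link like that needs some framing on the serial line so the RC can tell where one command ends and the next begins. Here is a minimal sketch of what a PC-to-RC frame might look like — the start byte, command codes, and checksum scheme are all invented for illustration, not part of any IFI protocol:

```c
#include <stdint.h>

#define START_BYTE 0xA5            /* invented frame marker */
enum { CMD_DRIVE = 1, CMD_TURN = 2, CMD_STOP = 3 };

/* Pack a command into a 4-byte frame: start, command, argument, checksum. */
void pack_frame(uint8_t *buf, uint8_t cmd, int8_t arg)
{
    buf[0] = START_BYTE;
    buf[1] = cmd;
    buf[2] = (uint8_t)arg;
    buf[3] = (uint8_t)(buf[0] ^ buf[1] ^ buf[2]);  /* XOR checksum */
}

/* Return 1 and fill cmd/arg if the frame checks out, else 0.
   A bad checksum (e.g. serial noise) rejects the whole frame. */
int parse_frame(const uint8_t *buf, uint8_t *cmd, int8_t *arg)
{
    if (buf[0] != START_BYTE) return 0;
    if (buf[3] != (uint8_t)(buf[0] ^ buf[1] ^ buf[2])) return 0;
    *cmd = buf[1];
    *arg = (int8_t)buf[2];
    return 1;
}
```

The PC side packs frames and writes them to the serial port; the RC side reads a byte per loop pass, resynchronizes on the start byte, and drops anything whose checksum fails.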

Obviously you won’t be able to use this on the real robot for the FRC, but what are you going to do for the other 46 weeks of the year?

I think with most teams/programmers at the first regionals of the season, the priority was getting the robot up and running before worrying about autonomous. We will be going to our first regional of the year on Thursday, and you can bet we will be worrying about getting the robot running before we fine-tune our auton code. By nats, though, I expect that plenty of teams will have working auton modes.

FIRST IS meant to be fair. Team 1241 showed us at GLR that a Rookie can do just as well as veteran teams. They had an amazing design and idea. Obviously veteran teams are doing well with awesome designs as well. Autonomous mode is simply something else that teams can use to show off their designs and strategies. We can’t look away from the fact that some teams have more engineers than others. This will make their robot do more things in the game, but a specialized robot can do just as well as a robot that can do everything.
With respect to the goal of FIRST and the competitions, it is meant to be very fair. I hate it when people say FIRST isn’t fair. We have been a team without any engineers for all 6 of our years as a team. Our students have always designed and built the robots by themselves. We even managed to win Midwest and Newton last year. It all comes down to strategy and how hard the students want to work.

I, too, think programmers should have a bigger part in FIRST. I think they could start with a better RC.

Autonomous mode is probably the most important part of the whole 2 minutes, precisely because not all teams perfect it. For example, our team NEEDED an alliance partner who could knock off the bonus ball during autonomous. If they succeeded, we could guarantee a win 90% of the time. If they failed to knock the ball off, we would be stuck there without any strategy for 45 seconds, making us incredibly vulnerable. My point is that teams might as well take advantage of the given 15 seconds, since it could make a huge difference in the end.

I think autonomous probably won’t become much more complicated than it already is, for the very simple reason that as the complexity increases, so does the complexity of the testing requirements.

Not all teams are capable of building a full mockup field, for example. Nor do all teams manage to finish their robot with weeks to spare in order to test these autonomous programs.

I think the key in developing the autonomous area of the game is to keep the procedure relatively simple (not much more complicated than it was this year) and increase its strategic significance in the game. Last year’s Stack Attack had the potential to make autonomous really worthwhile… but then king of the hill came in and seemed to pretty much negate that. This year, again, you’d think knocking off (or grabbing) that 10-point ball would be worth more, but herding those balls (and managing to throw them in successfully) sometimes turned out to be less successful than just hanging from the bar.

(of course, there are always exceptions with those amazing robots)

I remember my adult mentor discussing this with someone else… (for the life of me I can’t remember… it was probably with me :P) The idea was to make autonomous more of a priority for teams by making it worth a set number of points. For example, rather than basing your score only on the final state of the field, if your robot knocked off that 10-point ball you’d automatically get 20 points.

Unfortunately, doing so would probably further unbalance the game. Teams with the programming and testing resources would be able to develop a flawless system, while teams that can’t afford to build testing fields, or don’t have time to test and debug the program — or even the experience and education to write the programs and work the miscellaneous sensors — would be left in the dust.

So where does the future of autonomous lie?

Probably right where it is now: purely of strategic importance. Getting your robot in place to climb the stairs, or knock off the 10-point ball. Something that would help you (or, if you have bad luck, break you) but isn’t required by any means to have a successful match.

Amen. :slight_smile:

I totally agree with that. However, I feel that autonomous mode should still become a larger part of the game. :slight_smile:

Plus, I want a better microprocessor. :slight_smile: