Making autonomous accessible to all teams

Okay, here’s another brainstorm thread.
How can we as a community make autonomous accessible/achievable to the majority of the FIRST community?

Create resources:

  • Create flexible and powerful methods of robot control
  • Standardize method of inter-robot communication

Host workshops:

  • Choosing the right sensor for the job
  • Programming techniques
  • Machine decision making
  • Predicting movement of other robots

What new methods do you think are needed? Or what methods supplied already need to be changed? A decent autonomous mode can be created by what is supplied by WPI, I think.

It’s already dead simple to make an effective autonomous program using the Autonomous Independent framework in LabVIEW. Every team that asked me to help them was able to do it themselves only a few minutes after I showed them how it was intended to be used.

For teams without software mentors, the current FRC programming environments are just too difficult. While our team has never had an issue (since we have four years of computer science at our school), you only need to look at the number of robots on the field that just sit there during autonomous to understand that there’s a problem.

There really needs to be something on the level of RobotC or even NXT-G (written in LabVIEW, by the way) for teams in this situation. LabVIEW is too much for some teams; forget about C++ or Java. A simple development environment and an easy-to-learn language are what it will take.

There was a toy I had as a kid called the Big Trak. It was a toy tank with a keypad through which you gave it a program to perform autonomously. Some sort of simple handheld computer with specific canned capabilities, one that plugs into the cRIO and programs it without any other interfaces, is the only way I can see EVERY team being able to do autonomous.

It may seem cold, underestimating the teams, but not every team can get an engineering or technical mentor, and not every team has students interested enough in computers to learn what they need to know to do autonomous. But this is a fact of life, that not everyone has the same resources and interests.

I think that FIRST is getting better at making autonomous modes that are worthy enough to pursue so that the challenge is there, but not so game breaking that only autonomous mode robots can win. I think keeping that balance is about all that can be expected.

This is true. For example, at the 10,000 Lakes regional, I was unable to test my autonomous code until actually reaching the event, since we shipped the cRIO on the robot and didn’t have a spare. When I realized that I had made a serious error in the code, I scrapped everything I had written (this was 5 minutes before our first qualifying match, and I had spent the entire previous day debugging with no success) and proceeded to use just four lines of code to run our autonomous for the entire event. It worked great, scoring 3 goals total while in the front zone.

The lines consisted of the following, in the C++ autonomous periodic loop:
RobotDriveTrainObject->Drive(-1,0);  // drive straight (curve = 0) at full speed in one direction
Wait(1.0);                           // for one second
RobotDriveTrainObject->Drive(1,0);   // then full speed the other way
Wait(1.0);                           // for one second

This isn’t hard to figure out. The problem isn’t that the code is obfuscated for simple controls (for other parts, yes, it is, hands down), but that the knowledge of how to do something like this isn’t readily available. The WPI guides are pretty obscure in explaining how this works to teams that don’t have students already proficient in C++/Java.

I think there is no problem with LabVIEW autonomous. I used Autonomous Independent and ran my own loops within it, and see no reason not to. I had a system like NXT-G that gave me high-level controls to do feedback on speed while driving straight, finish by distance, etc., and blocks to set data for the other modules to pick up (kick distance, shift state, chassis mode, kick, ball-O-fier). It worked really well. I am already planning for next year.

I wrote some code for a fairly new team at Troy. We were playing against 469 and wanted to try a sacrificial robot. They had mecanums and volunteered. So in like 10 minutes (using their Classmate) I wrote a simple time-based routine that used Mecanum-Cartesian and Delay And Feed in a flat sequence structure, and it worked perfectly. They made it into the tunnel, and 469 did not. 469 was able to get in during the last 20 seconds and win the match, and that was enough to push us out of the #1 seed.

I also helped a team that we mentored last year with some autonomous stuff before MSC. I told their programmer to use Autonomous Independent, string together Tank Drives and Delay and Feeds, and connect their error wires. Since data flows over the error line, LabVIEW executes the VIs sequentially, and that’s all you have to do. He was impressed, as this was much easier than the Auto Iterative he had at Detroit, which didn’t work.

There is one giant flaw in the system that causes autonomous development problems, especially in LabVIEW. Every time you build code, it has to re-build the entire WPI library, then re-download the whole WPI library. This is painfully slow, and for minor autonomous fixes between matches it is often a giant problem. Example: while sitting next to the field in elims, I had a minor kick-distance change to make. During auto, I wrote in the new number and began the build. It did not finish the build until after the robot came back to the “pits” (this was in Atlanta), the tether cable had been connected, and the Classmate was booting. Then it took another 2 minutes or so to finish downloading. It would be nice if it were easier to partition the WPI library so it doesn’t have to rebuild, or to separate out the autonomous code.

I’m sorry, but I don’t see how a simple autonomous mode can be hard to write. You don’t even need a year of computer science to know how to write it. I started learning Java during the build season and was able to write our autonomous code. Ours was about the same as theprgramerdude’s.

It just takes some time to learn the language, and read the documentation. There were some problems with the Camera and the tracking for us, so we decided to keep it simple.

Okay, so we mostly agree that a sequential, time-based autonomous is extremely easy. But that doesn’t require any sensors. Why are sensors useful?
Sensors increase the repeatability of an action as other factors change (e.g. battery voltage drops or a mechanism gets jammed). Sensors also allow the robot to respond to changes on the field, meaning the robot can operate based on intent rather than actuating by rote.
Higher levels of control are useful for connecting actuators to sensors in common and easily configurable ways. For example, NXT-G lets you tell the robot to go forward for a time, for a distance (degrees), or until told otherwise. It even lets you ramp the speed from the current value to the desired value. Likewise, the “wait” function is configurable for a time, or until a sensor is greater/less than a given value. Such high-level coding can save time and reduce errors.
As has been pointed out, all robots are different. Such high-level control needs to be extremely configurable to allow for the differences in sensors, strategies, decision making, actuator control, and wiring configuration.
In other words, it needs to be modular and extendable. I like the idea of separating it into Perception, Planning, and Control. (Linked are Chief Delphi threads about each one)
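
To make the high-level control idea concrete, here is a rough C++ sketch of a configurable “move block” in the spirit of NXT-G. None of these names come from the WPI library; DriveUntil, the drive callback, and the ramp parameter are all invented for illustration.

#include <algorithm>
#include <chrono>
#include <functional>
#include <thread>

// Drive straight until a stop condition is met, ramping the speed toward the
// target instead of jumping to it -- roughly what NXT-G's Move block configures.
void DriveUntil(std::function<void(double)> drive,   // sets motor output, -1..1
                std::function<bool()> shouldStop,    // time, distance, or sensor test
                double targetSpeed,
                double rampPerSecond = 2.0) {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    double speed = 0.0;
    while (!shouldStop()) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        double step = rampPerSecond * dt;
        if (speed < targetSpeed) speed = std::min(targetSpeed, speed + step);
        else                     speed = std::max(targetSpeed, speed - step);
        drive(speed);
        std::this_thread::sleep_for(std::chrono::milliseconds(20));  // ~50 Hz loop
    }
    drive(0.0);  // stop once the condition fires
}

The same helper covers “for a time,” “for a distance,” or “until a sensor crosses a threshold” just by swapping the shouldStop lambda, which is the kind of configurability described above.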

I have to disagree. I hadn’t used LabVIEW at all before the first week of build this year, and I was able to program multiple, successful autonomous modes by the time the build season ended. And I’m really not a programming genius either. The WPI Library, example code, and context help really allowed me to understand the way LabVIEW works, and I thought that going about editing Autonomous Independent VI was very intuitive and straightforward. By the end of MSC (the last competition where we used LabVIEW), our autonomous code was very advanced and used encoders, a pot, a gyro, and multiple PIDs. This proves that it doesn’t take a really experienced programmer to use multiple sensors working together to implement an autonomous mode.

I’ve seen people on this forum complain that a team needs no programming skills, because everything is handed to them in the WPI Library. This might be true… if you want a simple tank drive or an arcade drive or holonomic drive or PID, that’s all pre-programmed. I have to say thank you to the WPI Library, because without it I would have had a much, much harder time programming in LabVIEW. However, I do think that in this advanced, high-school-level robotics competition with professional mentors, we should be using REAL programming languages and REAL programming environments, not something like RobotC or NXT-G that we’re never going to see again in our lives. Besides, we’re learning about these more advanced languages in school, and if not, prior knowledge of a language like C++ or Java or LabVIEW will vastly help with college courses and eventually careers in computing. Remember, this is a learning experience and preparation for college and careers in engineering, not just a robotics competition.

Perhaps the real reason why close to a majority of robots do not move in autonomous is that the teams did not have enough time to program or test their autonomous modes. Or, maybe they couldn’t find the room or manpower to make a practice field. I could imagine many teams at the end of week 6 were just thinking about getting their robot together, or making weight, or getting their kicker to work, or adding a ball possession mechanism, or doing anything that the team considers more important than getting an autonomous working. I think that any team that has at least one dedicated programmer from week one can figure out how to do an autonomous, but whether or not there is time to debug and test it at the end of the season is a different story.

There is no doubt that sensors are useful in robotics (I would define a robot without them as a “machine”, not really a “robot”), but you can’t over-simplify things too much. NXT-G suffers from this a great deal: creating anything more complicated than very simple sequential instructions (with the occasional bit of decision making) is a pain. I’d hate to see that happen in FIRST.

I think it would be helpful if FIRST provided a code library with a similar interface and feature set to Tekkotsu. Personally, I find that using state machines to model robot behavior is much more intuitive than typical C/Java code. Also, the Tekkotsu vision library runs circles around what FIRST provides you.
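
To give a flavor of the state-machine style, here is a tiny C++ illustration of a kick-then-back-up autonomous. The states and transitions are invented for this example; Tekkotsu’s actual API looks quite different.

// One step of a simple autonomous state machine; call once per control loop.
enum class AutoState { DriveToBall, Kick, Backup, Done };

AutoState Step(AutoState s, double distanceToBall, bool kickComplete, bool backupComplete) {
    switch (s) {
        case AutoState::DriveToBall:
            return (distanceToBall < 0.1) ? AutoState::Kick : AutoState::DriveToBall;
        case AutoState::Kick:
            return kickComplete ? AutoState::Backup : AutoState::Kick;
        case AutoState::Backup:
            return backupComplete ? AutoState::Done : AutoState::Backup;
        case AutoState::Done:
        default:
            return AutoState::Done;
    }
}

Each loop you call Step with the latest sensor readings and command the actuators based on the current state, so the “what happens next” logic stays in one readable place.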

I’m not suggesting using a different environment. I’m just suggesting alternate frameworks that make control easier.
For example, with this framework I made, any action can be started or stopped with any of the following conditions:

  • immediately
  • time delay
  • time in match
  • named value =, <, or >
  • named input =, <, or >
  • Completion of another action
  • Success of another action

This allows dynamically sequential actions but prevents race conditions (actions are isolated by their mechanism). It’s very flexible, yet it lets many common actions be implemented easily.
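
As a very rough sketch of the general shape (all of these names are invented for the example, not my actual implementation), in C++ it could look something like this:

#include <functional>
#include <string>
#include <vector>

struct Action {
    std::string mechanism;             // used to keep actions on one mechanism from
                                       // overlapping (that check is omitted here)
    std::function<bool()> startWhen;   // e.g. time in match, named value test, or
    std::function<bool()> stopWhen;    //      completion/success of another action
    std::function<void()> run;         // called each loop while the action is active
    bool active = false;
    bool finished = false;
};

// One pass of the scheduler: start, run, and stop actions per their conditions.
void Update(std::vector<Action>& actions) {
    for (Action& a : actions) {
        if (!a.active && !a.finished && a.startWhen()) a.active = true;
        if (a.active && a.stopWhen()) { a.active = false; a.finished = true; }
        if (a.active) a.run();
    }
}

Sequencing falls out naturally: a later action’s startWhen just checks an earlier action’s finished flag.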

However, this is just one method of abstracting autonomous control, and surely not the only method. I think the sorts of control people want to do are similar enough that they can be all part of a generic framework, and then programmers can start transitioning from preplanned actions to dynamic action planning.

You’re probably right. Is there anything that can be done to remedy this?
I think the programming time tends to affect rookie teams the most, and isn’t usually a big deal once programmers are familiar with the language. Our region holds pre-season workshops for such purposes, though many rookie teams are pulled together at the last minute. Releasing the WPI libraries before kickoff could be a big help as well.
But lack of time to test is something every team runs into. What about encouraging modular control systems that can be removed from the robot intact and used on a test setup while the robot undergoes mechanical changes? Educating on practices of testing algorithms on the PC? Modular code implementation? I have a software development guide which might help with this.
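
For example, something along these lines (an invented interface, not part of the WPI library) would let the same autonomous routine run against a PC stub that fakes the encoder reading, so it can be stepped through and tested while the robot is apart:

// Put the hardware behind an interface so autonomous logic can run off the robot.
class IDriveBase {
public:
    virtual ~IDriveBase() {}
    virtual void Drive(double speed, double curve) = 0;
    virtual double DistanceTraveled() = 0;   // e.g. from encoders; a PC stub can
                                             // just integrate the commanded speed
};

// The autonomous routine only sees the interface, so it runs anywhere.
void DriveForward(IDriveBase& drive, double meters) {
    while (drive.DistanceTraveled() < meters) {
        drive.Drive(1.0, 0.0);   // straight ahead at full speed
    }
    drive.Drive(0.0, 0.0);
}

On the robot the interface wraps RobotDrive and the encoders; on a PC it can be a ten-line stub.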

I think it would be interesting if FIRST provided a mini-chassis setup, something that a student programmer could take home while the team worked on their chassis. If they provided several sensors and cookie-cutter code for that test chassis, it would be nice, and I think it would help with the time issue. Also, once the robot shipped you would still have a test chassis. I understand that it would not be anywhere near one-to-one with the actual robot, but from this discussion I don’t think the fine tuning is why robots play dead; it’s because the “getting started” part is very low on many teams’ priority lists, and fairly difficult to do.

As with any good programming language or technology, there are usually tons of tutorials. With FIRST, however, these are few and far between, and the tutorials (sample code) that do exist are fairly intimidating.

I think the combination of an advanced framework that handles the high-level functions, a programming chassis, and tutorials would greatly lower the bar for getting started.

Based on this I would propose 3 steps:

  1. Distribute a Test Chassis with very specific instructions on how to set it up.

  2. Bundle a framework where all you have to do is define the parts of the robot and the maneuvers (see the sketch after this list).

  3. Release a set of dozens of tutorials that your mother could follow and get working.
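
Purely as a hypothetical illustration of step 2, “define the parts of the robot and the maneuvers” could be as small as filling in two tables; none of these types or names exist in any current framework:

#include <string>
#include <vector>

struct Part     { std::string name;   int pwmChannel; };
struct Maneuver { std::string action; double value; double seconds; };

int main() {
    // The team describes the robot...
    std::vector<Part> parts = {
        {"driveLeft", 1}, {"driveRight", 2}, {"kicker", 3}
    };
    // ...and lists the autonomous maneuvers in order.
    std::vector<Maneuver> autonomous = {
        {"driveForward", 0.8, 1.5},    // 80% power for 1.5 s
        {"kick",         1.0, 0.5},
        {"driveForward", -0.5, 1.0},   // back up
    };
    // A framework along these lines would read the two tables and run the match;
    // the tutorials in step 3 would only need to explain how to fill them in.
    (void)parts; (void)autonomous;
    return 0;
}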

This thread might’ve been more appropriate 3 years ago.

Why isn’t it appropriate now?

For my team, I didn’t think about programming autonomous until the fourth or fifth week of the season. I knew what I wanted to do, but I didn’t know exactly how I wanted to execute it. During the season, as I was writing the code for the robot, I kept it in a format where I could simply tell it what I wanted it to do and the rest of the code would take care of it without me having to think about it. Combining this format with autonomous made it very easy for me to program autonomous, including the various sensors that were required for the kicking system on the robot to work. It did take me a regional to get it right, though, due to lack of testing (mostly my fault).

I think autonomous programming can be made easily enough if the programmer has made things modular (i.e. like I did) thereby reducing the workload when it comes to the autonomous part.
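
For example, the shape of it was something like this (names invented here for illustration, not my actual code): each mechanism exposes a couple of intent-level calls, and a Periodic method run every loop does the sensor work, so teleop and autonomous use the exact same entry points.

class Kicker {
public:
    void SetTargetDistance(double meters) { target = meters; }
    void Fire()                           { firing = true; }
    // Called every loop in both teleop and autonomous; the sequencing that
    // actually reads the sensors and runs the kick lives here.
    void Periodic() {
        if (firing) {
            // wind up to 'target', release, then clear the request
            firing = false;
        }
    }
private:
    double target = 0.0;
    bool firing = false;
};

Autonomous then boils down to a few calls like kicker.SetTargetDistance(4.0) and kicker.Fire(), with Periodic() doing the real work each cycle, the same as it does in teleop.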

As an idea over the summer, I was thinking about teaching some interested programmers some of the thought process required for programming an FRC robot (or, for that matter, anything) using an Arduino with a few sensors set up and perhaps a servo or two. It’s a small and relatively cheap platform that is quite easy to use. It is programmed in a C-like language, so it’s not an exact match for teams using LabVIEW, but I’m sure the approach could easily be adapted.

-Tanner

I kinda like the idea of leveling the playing field for everyone, but here’s the issue: we are not here to prove who can win, but rather to learn. If you just hand a team an autonomous mode and tell them it works, they’ll use it. I know I would, only because I put so much into it. But what do you learn about programming in this situation? People need to realize that autonomous is not that far off from teleop; it just seems a little intimidating. Really, I think the only thing to do is try to help those teams that have a hard time with it. Otherwise, by “helping” you may be doing their team harm if you just hand autonomous to them.