Improving the experience of programmers and the effectiveness of code

Improving the experience of programmers and the effectiveness of code is what I try to do in FIRST; I think it’s a respectable goal.

Here’s what I do to achieve those ends:

  • Develop more advanced autonomous modes
  • Develop higher-level methods of control during teleop
  • Stimulate discussion
  • Publish my code online
  • Create and share resources to help teams with their software development
  • Provide programming workshops to local teams
  • Promote collaboration and innovation
  • Encourage others to strive for innovative algorithms and strategies in their code

However, I don’t think that my efforts are having the impact that they could, and I’d like to change that.

What can we do to improve the experience of programmers in FIRST (and the effectiveness of their code)?
How can we collaborate to help rookies achieve notable programming in their first year?

The number one thing to help a rookie team with programming would be to lend them a robot they can work on while they're building their competition robot.

The biggest difference between our rookie year and our second year was having that second platform, which let the programming team work in tandem with the engineering team.

On my old team, we had a problem with not having programming mentors, or even mentors who understood programming. Our computer science teacher was on the team for about half the years we competed, and we did decent work in those years, but when we were completely unguided our work became mediocre compared to everyone else at our regionals.

I don’t know if this is a general problem in FIRST, but it would be nice to have more software engineers and computer scientists mentoring teams.

In addition to this, I think educating non-programming students and mentors is important. I know that last year our team's leadership tried to force a switch to LabVIEW, despite not having any programming experience, and met stiff resistance from both our programming team and others who knew programming. I think if they understood more of what we were talking about, it would ease cooperation.

Most programmers understand the basic mechanics of the robot, but most mechanically oriented team members have no clue what the programmers are doing. Of course there are exceptions, but this is what I found on my old team.

That's certainly a great goal… but it's difficult to see how everything can be accomplished. On one hand, a team with no programming experience (a rookie team, or a team whose programmers all graduated) is going to have a difficult time just getting things working. On the other, a team with a solid crew of returning members is going to be able to tackle much harder challenges. Any plan you put in place has to take your audience into account. In short, you need to start by defining an end point that is achievable yet challenging for the students, and that end point will be different for every team.

The only thing I can stress from my team's experiences: keep it simple, and start from the basics. Ensure you can drive your robot before you try to make driving more interesting/precise. Make sure you can operate your manipulators with the controller before you try doing it during autonomous. Make sure you have something simple that works OK for autonomous before you try implementing any complicated PID loops.
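For what it's worth, "something simple that works OK" for autonomous can be as small as a timed drive. Here's a rough sketch assuming the cRIO-era WPILibJ SimpleRobot/RobotDrive/Timer API; the channel numbers, power level, and timing are placeholders, not a recommendation:

```java
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

public class BasicBot extends SimpleRobot {
    // Two-motor drive on PWM channels 1 and 2 (placeholder channels)
    private final RobotDrive drive = new RobotDrive(1, 2);

    public void autonomous() {
        getWatchdog().setEnabled(false); // the user watchdog would trip during the delay otherwise
        drive.drive(0.5, 0.0);           // half power, no curve: drive straight
        Timer.delay(2.0);                // keep driving for two seconds
        drive.drive(0.0, 0.0);           // stop and sit there for the rest of autonomous
    }
}
```

Get that working and repeatable first; then worry about encoders, gyros, and PID.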

We've been around for 3 years now… and I can tell you that the rookie teams that win are the teams that keep it simple. Simple autonomous modes work just as well as complex ones once other robots start bumping into you. Just being able to drive around can get you into a winning alliance.

Fixed. You may view the winning teams' robots as complex, but if you look at most of the teams that consistently win, it isn't one complex mechanism but a series of simple mechanisms and systems working together. Simplicity wins. Remember that in code too: yes, I can write a function that takes 40 arguments and is 400 lines long. Should I? NO! Break it into a dozen small functions; not only will it be easier to write, it will be easier to change when the wrench monkeys decide they want to change the whole robot.
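To make the "dozen small functions" point concrete, here's a hypothetical Java sketch. None of these subsystem names come from any real team's code; the point is only that each helper does one job, so when the robot changes you rewrite one small method instead of untangling a 400-line monster:

```java
/** Made-up subsystem interfaces so the sketch stands on its own. */
interface DriveBase { void arcade(double throttle, double steer); }
interface Arm       { void setTarget(double position); }
interface Roller    { void set(double speed); }
interface Inputs    { double throttle(); double steer(); double armSetpoint(); boolean rollerOn(); }

public class TeleopController {
    private final DriveBase drive;
    private final Arm arm;
    private final Roller roller;
    private final Inputs inputs;

    public TeleopController(DriveBase drive, Arm arm, Roller roller, Inputs inputs) {
        this.drive = drive;
        this.arm = arm;
        this.roller = roller;
        this.inputs = inputs;
    }

    /** Called once per control loop; each helper handles exactly one subsystem. */
    public void update() {
        updateDrive();
        updateArm();
        updateRoller();
    }

    private void updateDrive()  { drive.arcade(inputs.throttle(), inputs.steer()); }
    private void updateArm()    { arm.setTarget(inputs.armSetpoint()); }
    private void updateRoller() { roller.set(inputs.rollerOn() ? 1.0 : 0.0); }
}
```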

Us code monkeys need to remember that what we do is the least flashy, least recognized, least understood, and most important part of any robot. It is also among the hardest things in FRC (factoring in the time allotted to the task).

I hate to recommend a bad habit, but in a lot of cases what stresses me out when doing FRC code is people who know nothing or very little about programming telling me how to do my job. Just acknowledge them, say you will look into it, and move on. Sometimes they have a good idea, but it is ALWAYS simpler not to explain why they are wrong. Trust me, it will lead to far fewer arguments.

If we want programmers to grow across the FIRST Robotics Competition, we need a great deal more code sharing between teams and individual developers. This doesn't just mean example code; "oh look, here's how to write your teleop mode" doesn't help anyone, because programming is never fully understood until you actually do it for yourself.
Instead, I think we should push for more abstraction and framework code, developed by experienced developers (mentor or student) and shared with and taught to rookie developers across team lines.
I say abstraction can be the vehicle for two reasons. First, abstraction makes programming a great deal easier to understand (the same rationale as for high-level languages). Second, as rookies learn those frameworks, they will begin to explore the possibilities of adding to them…
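As a rough illustration of the kind of abstraction I mean (this is a made-up sketch, not code from any released framework; the wrapper assumes WPILibJ's Jaguar class, whose set() takes -1.0 to 1.0):

```java
/** Robot code talks to a Motor, not to a specific speed controller or PWM channel. */
interface Motor {
    /** @param power -1.0 (full reverse) to 1.0 (full forward) */
    void setPower(double power);
}

/** One possible implementation wrapping a WPILibJ Jaguar speed controller. */
class JaguarMotor implements Motor {
    private final edu.wpi.first.wpilibj.Jaguar jaguar;

    JaguarMotor(int pwmChannel) {
        jaguar = new edu.wpi.first.wpilibj.Jaguar(pwmChannel);
    }

    public void setPower(double power) {
        jaguar.set(power); // hardware details live here and only here
    }
}

/** The higher-level code a rookie writes only ever sees the Motor interface. */
public class TankDrive {
    private final Motor left, right;

    public TankDrive(Motor left, Motor right) {
        this.left = left;
        this.right = right;
    }

    public void drive(double leftPower, double rightPower) {
        left.setPower(leftPower);
        right.setPower(rightPower);
    }
}
```

Swapping in a Victor, a simulated motor, or next year's controller then only means writing another small Motor implementation; the rookie's TankDrive code never changes.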

These are really high-minded goals, but I believe they are within the capabilities of FRC teams. As an example, my team (192; Palo Alto, CA) has built a friendly framework for this year's competition (and future years) in Java, and open-sourced it so other teams can use it. It is available at http://code.google.com/p/grtframework

We offer as much support as possible to users of the framework, and we release it not after the competition but before it, just as teams are first allowed to try out Java. Our hope is that instead of fighting the language change, people can be helped into it and reap the benefits.

Mmmk, FRC is not about the students. I direct your attention to USFIRST.org; you will see that while it is about inspiring young people, it is NOT about students. Our goal is to transform the culture to recognize STEM as a viable career path. FRC is not a summer training camp, it is the Super Bowl of smarts. Yes, students learn, but more importantly they are inspired by seeing what a robot can do, by seeing what a trained professional can make a robot do.

That being said, I will never tell a student they can't do something (barring safety or legality concerns). If a student is interested in programming I will always let them do it, even if it means the code takes many times longer. If no student is interested I will do it myself, though; there is no reason to FORCE a student to do something they aren't interested in. As a result of this belief, I disagree with the notion that an “absurd” amount of FRC code is written by mentors. Not only do I question the validity of that statement (please cite your sources), but I also feel that you can only make that statement about YOUR team (or any team you are intimately familiar with).

For example, in the years I was on 27, code was done with the involvement of students. As for 397, the 2008 code was initially done by me due to a lack of student interest; a student came forward and took over when they were ready, though. I am proud to say that in 2009, 90% of the code on the 397 robot was written by a student programmer. Maybe he didn't write it all on his own, but the closest I ever came to writing code for last year's robot was some scribbles on a notepad helping him step through a problem. I hardly call that “absurd”.

In my experience with FIRST, I would actually say a lot of the FRC code is written by students.

I've been meaning to get our team to write some sort of white paper. I'll get them to document their offseason work, and hopefully they can contribute to the code base and to the general knowledge of programming. I don't think we'd post any specific code, because it probably wouldn't be helpful to other teams; more likely it would cover the concepts we come up with for autonomous, with visual aids/diagrams to help programmers visualize how things should work.

I've been on both mechanical and programming, and what I tend to see with programmers is the disconnect between what is expected to happen (how it's coded) and what actually happens (and how to correct the issues). Hopefully we can address these issues in said white paper. (“I'm outputting a PWM value of 5, shouldn't it be moving?” “No, it's not enough power.”) Others before me have said it's important for the mechanical folks to understand programming, but it's also important for programming to understand mechanical.
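One classic version of that disconnect is a command too small to overcome the drivetrain's static friction. Here's a hedged sketch of one way code can deal with it; the deadband and the "minimum output that actually moves the robot" are invented numbers, and you'd measure your own robot's:

```java
/** Remaps small joystick commands so anything past the deadband actually moves the robot. */
public final class OutputScaler {
    private static final double DEADBAND = 0.05; // ignore joystick noise around center
    private static final double MIN_MOVE = 0.15; // smallest output that visibly moves this robot

    public static double scale(double command) {
        if (Math.abs(command) < DEADBAND) {
            return 0.0; // treat tiny commands as "stop"
        }
        // Remap [DEADBAND, 1.0] onto [MIN_MOVE, 1.0] so real commands produce real motion.
        double sign = command > 0 ? 1.0 : -1.0;
        double magnitude = (Math.abs(command) - DEADBAND) / (1.0 - DEADBAND);
        return sign * (MIN_MOVE + magnitude * (1.0 - MIN_MOVE));
    }

    private OutputScaler() {}
}
```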

When it comes to programming autonomous and human controls, I think the most benefit comes from understanding how to create human controls. A lot of the issues (and lost potential points) I see during a match can be resolved with a change in how the controls are set up and how intuitive they are. There's also the issue of programmers trying to make the robot so easy to drive that they take control away from the driver. Drivers are smart; they'll figure it out. Maybe this is more of an industrial-design issue, but on most teams it rests on the electronics/programming group. This area, more than any other programming area, can make or break a team (in my opinion). We had some awesome controls in 2008 (one joystick, one driver). I'll definitely include a section about controls design.
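To show what a one-joystick arcade setup can look like, here's a minimal sketch assuming the cRIO-era WPILibJ Joystick/RobotDrive/SimpleRobot classes. The channel numbers, the squared inputs, and the loop timing are assumptions for illustration, not what any particular team ran:

```java
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

public class OneStickBot extends SimpleRobot {
    private final RobotDrive drive = new RobotDrive(1, 2); // placeholder PWM channels
    private final Joystick stick = new Joystick(1);        // the single driver joystick

    public void operatorControl() {
        while (isOperatorControl() && isEnabled()) {
            getWatchdog().feed(); // keep the user watchdog happy each pass

            // Squaring the inputs gives finer control near center without
            // giving up full power at the ends of the stick's travel.
            double move   = square(-stick.getY()); // most sticks read negative when pushed forward
            double rotate = square(stick.getX());
            drive.arcadeDrive(move, rotate);

            Timer.delay(0.02); // roughly a 50 Hz control loop
        }
    }

    private static double square(double v) {
        return v >= 0 ? v * v : -(v * v);
    }
}
```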

Edit: I did a bit of thinking; our white paper will be about controls and not really about programming itself.

I would have to agree with Andrew here. This applies not only to programming but to all areas of FIRST Robotics, and there are many threads discussing mentors vs. students, so please avoid that debate in this thread.

But to relate to Andrew's point: in 2006 we had never tried welding, and our whole bot was welded by an adult. The next year a student stepped forward, and for the last three years we've had great success with our student welder.

-RC

Sorry all… my opening point was off-topic/questionable and I've removed it. The rest of the post has some value, however, and I'd ask that you give it a careful read.

Just to make sure I understand your point, you are saying that we need to develop more framework/abstraction layers to help improve the experience of FRC programmers. Eventually you hope that the more interested ones will contribute back to the frameworks. Did I miss anything?

I would disagree with the notion that showing functioning examples is a bad thing. Some people, myself included, learn better from taking code that works and breaking it. I like to call it “playtime”. I generally dislike frameworks as I spend more time trying to figure them out than I would just writing code from scratch. Admittedly I am not one for reading documentation, I prefer just fiddling with stuff until it works (note, this applies to software only, hardware I read the manual at least twice). However, I do agree that providing frameworks will greatly aid many teams in developing functional programs. I also agree that developing a consistent, usable, and well documented framework is well within the limits of SOME FRC teams.

Agreed. You can't learn to code from a doxygen/javadoc/book of theory alone… and conversely you can't learn just from examples. You need a good blend. :)

Honestly, it doesn't matter where the framework comes from as long as it's extensible by everyone. I'd love it if WPI teamed up with FRC teams and put together something more abstract and official, but what really matters is that the code gets into people's hands.

Actually, a good example of this is the Lego Mindstorms setup, which gives even young children the tools they need to make simple, functional programs in terms of the components of their robots. They can tell a motor to spin a certain amount and it does, without thinking about PWM signal generation or encoders or threading or whatever. I know many, many kids who have built awesome Lego Mindstorms things on their own, having taught themselves to put the logic bricks (or now LabVIEW elements) together.

Here’s a recap of what we’ve mentioned so far.

Improving the experience of programmers and the effectiveness of their code:

  • recruit software engineers and computer scientists as mentors
  • educate all team members on the process and capabilities of programming
  • define an end point that is achievable yet challenging for the students (different for every team)
  • share code, collaborate with other teams
  • build abstractions and frameworks (make the I/O control higher-level and more intuitive)
  • understand the mechanical characteristics of the robot
  • create intuitive human controls
  • give programmers time to play and experiment with the IO functions so they understand how they work

Collaborating to help rookies:

  • offer a second platform for programming practice

Is there anything I’ve missed?

I think this is a good summary looking to the future. The thing that could probably be stated more directly would be to allow programming to proceed without the robot. In my book that means test harnesses so that code logic can be validated more quickly and more safely somewhat independent of the robot. After all, NASA rarely launches another rocket just so the programmers can see if they’ve fixed a bug.
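A test harness along those lines doesn't have to be elaborate. One hedged sketch of the idea: keep the decision logic in a plain Java class with no WPILib dependency, and exercise it with an ordinary main() on a desktop JVM. Everything here (the AutoAim class, its gain, its clamp) is invented purely for illustration:

```java
/** Robot *logic* with no robot *I/O*, so it can run anywhere a JVM runs. */
class AutoAim {
    /** Turn a camera's target offset (-1.0..1.0) into a turn command. */
    static double turnCommand(double targetOffset) {
        double kP = 0.6;                               // made-up proportional gain
        double command = kP * targetOffset;
        return Math.max(-0.5, Math.min(0.5, command)); // clamp so we never spin flat out
    }
}

/** The "harness" can be as small as a main() that checks a few cases. */
public class AutoAimTest {
    public static void main(String[] args) {
        check(AutoAim.turnCommand(0.0) == 0.0,   "centered target -> no turn");
        check(AutoAim.turnCommand(1.0) == 0.5,   "far right target -> clamped turn");
        check(AutoAim.turnCommand(-1.0) == -0.5, "far left target -> clamped turn");
        System.out.println("all checks passed");
    }

    private static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError(label);
    }
}
```

No robot, no cRIO, no waiting for the build team to hand the hardware back.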

One way to accomplish some of the other elements on the list would be to form cross-functional teams. This is a common occurrence at NI. If you want the HW and SW guys to understand each others tasks better, to communicate better, you put them on the same team and give them a higher level goal. This would be equivalent to saying that the manipulator team was composed of some mechanical, some electrical, and some programmer members. I’m not certain if it builds a better robot, but perhaps it builds a better team. One of the enabling tools for this is a code architecture that allows it to be written by different subteams and integrated without stepping all over each other.

The other way to look at this is that programming isn't just for programmers. NI's tools are intentionally aimed wider than that and aim to let EEs and MEs write good code too. One of my favorite robot stories is from Dave Barrett of the MIT DARPA Urban Challenge team. It starts at 25:30 and ends at 28:30 in the movie at the following URL. The rest of the keynote is very interesting as well.

http://zone.ni.com/wv/app/doc/p/id/wv-1709/upvisited/y

So, while specialization is important in the modern world, it isn’t the only way to tackle a problem. Thanks to easy access to a useful tool, we no longer take our notes to the typing pool to get them formatted for distribution. We open a productivity tool, type away, and it helps with formatting, spelling, grammar, finding references, etc. Perhaps some amount of programming is heading this direction as well.

Greg McKaskle

Agreed. The most frustrating thing for me at the beginning of last season was that the C++ code had to be run on the robot, which made it really annoying to do anything significant to the code without an easy way to test it. In particular, I don't have 24/7 access to the facility where our team keeps the robot, so most of the coding I did last year I ended up doing at home, because we simply don't have enough time during our meetings at the school, and I can't bring the robot home! :(

I ended up building a reasonably nice GUI test harness for WPILib/C++ that lets you run your robot program on a desktop (Windows or Linux), stimulate various inputs, and view the outputs. More information and download links at: WPILib Test Harness Released! « Random thoughts along the roadside….

I share the same issue of hardware access being a bottleneck for testing.
In my experience the best way around this has been simulation and hardware emulation, either at the API level or even at the device level.

For this year's competition, I think this is something we can actually achieve relatively easily, at least for Java developers.
The Squawk VM ran primarily on SunSPOTs (a Sun embedded platform similar to a mote) until very recently, so the folks at Sun Labs built an emulator for the SunSPOT known as Solarium. It works decently well, and for FRC we'd really only need to write our own pseudo-WPILibJ (same methods, just no JNI calls to hardware) and run on top of it. Unfortunately, this is an emulator, not a robot simulator.
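To sketch what one class of that pseudo-WPILibJ might look like (purely an illustration, not actual WPILibJ or Squawk code; only the constructor/set()/get() shape mirrors the real Jaguar, and getChannel() is an extra hook I made up for the emulator's benefit):

```java
/** Stand-in for edu.wpi.first.wpilibj.Jaguar: same public surface, no JNI, no hardware. */
public class Jaguar {
    private final int channel;
    private volatile double speed = 0.0;

    public Jaguar(int channel) {
        this.channel = channel;
    }

    public void set(double speed) {
        this.speed = speed; // the real class would generate a PWM signal here
    }

    public double get() {
        return speed;
    }

    /** Extra hook so an emulator or visualizer can ask which channel this "motor" is on. */
    public int getChannel() {
        return channel;
    }
}
```

Robot code compiled against stubs like this behaves the same, except that nothing moves; an emulator can then poll get() on each stub and animate whatever it likes.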

Alternatively, my team is writing a generic robot simulator that ties into the GRTFramework, such that API calls to the GRTFramework are caught and used to manipulate Java AWT/Swing renderings on screen. It's a lot of work to simulate a robot, but it might well be worth it.

I should also point out that LabVIEW has the awesome feature that, if you use VIs appropriately, you can unit test MOST of them independently of the hardware (you end up feeding values in through the front panel or even through the VI inputs, but you don't actually need to deploy a lot of the code).

I think that a WPILib-level emulator would be a great start, and depending on the level at which the emulation happens, it could even work with the existing libraries for all three languages. The way to do this in LV is to duplicate the libraries and place them into the vi.lib for the desktop target. This would result in broken VIs, because there is no FPGA. With a bit of work, though, the FPGA could be replaced with an I/O engine that has hooks for external code to update the values based on a given robot.

When VIs in the project tree are under the cRIO, they load with the cRIO vi.lib. When they load under the desktop target, they open with the desktop vi.lib, which doesn't contain a WPILib implementation. So once one exists, the rest is pretty straightforward.

It didn’t get done this year, but there is always next.

Greg McKaskle

I can tell you that I had plans for making one this summer, and I simply ran out of time.
I was doing it fairly low-level, and I don't think I got much past PWM and solenoid support. (I wasn't sure how to implement the analog trigger.)
I think the GUI would be key to making an effective emulation.

What I did was use conditional disable structures. I quickly found out how long it takes to place and configure them. I also had some trouble placing the preconfigured conditional disable structures around a selection on the block diagram.

The way you’re talking about it, it sounds like you can specify different dependencies for different targets in a project. How would you do this without creating conflicts and recompiling the VI whenever you switch between the two?

Anyways, what I’ve done can be found under “cRIO simulation” here:
http://kamocat.com/programming.html#misc

You'd load the VI under different targets, and it'd load with different libraries. If the libraries have the same connectors, same typedefs, etc., then no recompile is necessary. If something is different, say a parameter is added to WPILib only for emulation control, then your VI recompiles.

In reality, LV does what is called type propagation on each wire and each node move. It is the syntax portion of the compile. When you hit the run button or save, it finishes the compile. What I'm getting at is that the reason many people assume LV is interpreted is that the compile is very fast, so even if it occurs, it is no big deal. I'm pretty sure that the top level of WPILib will not change at all based on the target.

LV also allows the VIs to be loaded into more than one target at a time, so you could even have both open and running at the same time.

Greg McKaskle