How does your team architect its codebase?

My question is aimed at Java/C++ teams, but LabVIEW/other teams could provide helpful information here too.

How does your team architect its codebase?

A search for “architecture” in the Programming family of forums only brings up the 2011 Cheesy Poof code release as a discussion about code architecture, from what I can tell.

I’ve been wondering how other teams plan out and structure their codebase. 1675 has come from a “one file” team in the olden days to something that resembles OOP now. FRC robots are a seemingly simple yet sometimes complex system, especially when you have preset routines that run autonomously during teleop. 1675 (and, from what I perused yesterday, 254) had an “autoscore” last year: a series of actions that, given the robot and tube in a certain state, took all the repetitive steps out of the drivers’ hands and performed them precisely for each hang.

This year we tried having a class represent each system, with separate methods for teleoperated and autonomous. Once we wanted to do some things (semi)autonomously in teleop and communicate between systems, this broke down, and it was too late to do a redesign, so there are a lot of hack jobs.

We also wanted to make classes “plug-and-play” as we iterated, using interfaces to represent things like our Hood (implemented later as SimpleHood, EncoderHood, and PIDHood) and Shooter (implemented later as SteppingShooter, VoltageShooter, and PIDShooter). This also ended up breaking down a bit, as the different combinations of components tended to interact in different ways, defeating the purpose of an interface.
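To make the plug-and-play idea concrete, here is a minimal sketch in Java with hypothetical method names (setAngle/getAngle are assumptions, not 1675’s actual interface, and each type would live in its own file):

public interface Hood {
    void setAngle(double degrees); // command the hood to a position
    double getAngle();             // best-known current position
}

// The simplest implementation: open-loop, no sensor feedback.
public class SimpleHood implements Hood {
    private double lastCommanded = 0.0;

    public void setAngle(double degrees) {
        lastCommanded = degrees;
        // drive the hood actuator open-loop here
    }

    public double getAngle() {
        return lastCommanded; // best guess without an encoder
    }
}

The breakdown described above tends to appear when, say, a PIDShooter needs information that an EncoderHood has but the Hood interface does not expose, which tempts you to cast or to widen the interface.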

Things that would be awesome for this thread:

  • The general plan your team has when starting the codebase for a season
  • Any design diagrams you might have generated along the way
  • Links to codebases shared on the web
  • Challenges you had integrating a system/action into your design, and how you overcame them (or what went wrong)

Our code for Rebound Rumble can be found at https://github.com/pordonj/frc1675-2012 (There were some changes over the weekend that I need to commit once I get access to the code again this week).

Our code design always falls along functional lines, with the code modules closely following the moving pieces of the robot and communications channels. Most of the robotic projects I do at work turn out the same way. This year the different pieces include:

main - starts all other tasks and services comms with the driver station
auto - autonomous/hybrid behavior, reads and execs a custom script language
wheels - starts and monitors all drive wheel servos, crab, spin, measured movements
gatherer - runs the 4 ball gathering systems and the elevator
shooter - runs the servos for turret position, wheel speed and the feeder
targeting - runs the camera and processes images
dashboard - collects and sends dashboard data
monitor - monitors the health and status of the other modules

Each piece/module is in a separate task, all derived from a common class. Each spins in an endless loop, reading a message queue and performing the duties requested. It makes debugging easier; it is like eight smaller, less complex programs. Since they are in separate tasks we can adjust their relative priorities using the operating system (taskPrioritySet/Get).
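For a concrete picture of the pattern, here is a rough Java analogue (the actual code is C++ tasks on VxWorks, and all of the names below are invented):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Placeholder for whatever request type the modules pass around.
class Message {
    final String what;
    Message(String what) { this.what = what; }
}

// The common base: every module owns a message queue and spins forever,
// handling whatever requests the other modules send it.
abstract class RobotTask implements Runnable {
    private final BlockingQueue<Message> inbox = new LinkedBlockingQueue<Message>();

    void send(Message m) {              // other modules post work here
        inbox.offer(m);
    }

    public void run() {
        while (true) {
            try {
                handle(inbox.take());   // block until a request arrives
            } catch (InterruptedException e) {
                return;                 // shut down cleanly if interrupted
            }
        }
    }

    protected abstract void handle(Message m);   // each module's duties
}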

HTH

This is a nontrivial problem for which there are several correct answers. First, here is Team 125’s code from this year. It used to be organized according to the structure I outline below, but two back-to-back competitions led to some unfortunately hacky fixes under time pressure. C’est le code.

For the past two years 125 has used some sort of command-subsystem model*, with a significant amount of WPILib support provided for it this year. I covered it in my Java presentation at this year’s kickoff; for more detail, check out the WPILib Cookbook or the FRC Java API. In brief summary (a rough code sketch follows the list):

  • Subsystems represent actual parts of your robot. Their methods represent the lowest level actions that you will ever want to use them for. While there can be some extra logic inside the methods—such as an elevator not going beyond minimum and maximum positions, doing some math to turn a real world value into a sensor value, or even a PID loop—you generally want subsystems to be “stupid”. A corollary of this is that the subsystems are never aware of whether it is autonomous or teleop, because you should use the same set of methods for both. It also means that subsystems almost never talk to each other. (We have one exception in our code that I’ll eliminate if I ever rewrite it.)
  • Commands are objects that tell subsystems what to do. This is where you combine all of your subsystem’s methods in clever ways that make them do useful game actions, such as score. Make these apply to both autonomous and teleop as much as possible, but there are some commands that will naturally only be used in one mode. For instance, “spin 180 degrees” isn’t useful in teleop, and “manual tank drive” has no place in autonomous, but “auto-align to target” could be written so that it can be used in either.
  • Command groups perform a group of commands. (Duh :).) They are themselves commands and can be nested inside other command groups. These combine simple commands to make an even more complicated game action, such as waiting until a shooter wheel is at the correct speed and then firing a ball. Autonomous modes are generally command groups.
  • Obviously, some things (perhaps the dashboard or camera) don’t necessarily fit exactly into the above structure. Be flexible.
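Here is a very rough sketch of the shape in WPILib Java; the subsystem and commands below are invented for illustration (they are not from 125’s code), and each class would normally live in its own file:

import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.command.CommandGroup;
import edu.wpi.first.wpilibj.command.Subsystem;

// A "stupid" subsystem: lowest-level actions only, with no idea whether it is auto or teleop.
class ShooterWheel extends Subsystem {
    public void setTargetSpeed(double rpm) { /* run the speed controller here */ }
    public boolean atTargetSpeed() { return true; /* placeholder: compare sensor to target */ }
    protected void initDefaultCommand() { }
}

// A command: finishes once the wheel is up to speed.
class WaitForWheelSpeed extends Command {
    private final ShooterWheel wheel;

    WaitForWheelSpeed(ShooterWheel wheel) {
        this.wheel = wheel;
        requires(wheel);                 // claim the subsystem while this runs
    }
    protected void initialize() { }
    protected void execute() { }
    protected boolean isFinished() { return wheel.atTargetSpeed(); }
    protected void end() { }
    protected void interrupted() { }
}

// A command group: wait for speed, then fire; usable from teleop or autonomous.
class SpinUpAndShoot extends CommandGroup {
    SpinUpAndShoot(ShooterWheel wheel) {
        addSequential(new WaitForWheelSpeed(wheel));
        // addSequential(new FireBall());   // a feeder command would follow here
    }
}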

That, in retrospect, was not very brief. Hopefully it or one of the links is at least a little enlightening. It sounds like you’re doing something like this already, but a slightly more strictly hierarchical organization could help you avoid the messiness of interacting subsystems.

Last year I wrote a scripting language for autonomous mode, but we never had time to test it. If you do something of this degree of complexity, start in the offseason.

*This was originally inspired by 254’s autonomous code from 2010; a common mentor between our teams brought the idea to the opposite coast, where we made it more object-oriented and applied it in teleop. In 2011, we wrote our own system for running commands, but thanks to the WPILib update this is no longer an obstacle to using this intuitive architecture.

Great replies so far. I was aware of the new Command stuff but didn’t look deeply into it as I noticed it too close to the season starting. I think we will try it in the offseason as it looks really good once you get your head wrapped around it.

Also, this thread isn’t just for 1675 :slight_smile: Even if you have an architecture completely different than being discussed here, go ahead and post. It would be nice to have a large set of ideas for teams to be able to look at in the future.

We have been using a similar architecture for the past few years, with a few improvements this year. Everything is written in LabVIEW, and follows common programming practices for LabVIEW.

-All mechanisms are organized into subsystem controllers, each containing a high-level VI with a while loop running at a certain frequency (most run every 20ms). A few subsystems run multiple loops at different speeds and priorities.

-All subsystem controller VI’s set the thread priority and execution system for that subsystem. We organize them logically, but since all threads are important, we generally set them all to above-normal or high.

-There is one thread which is not timed to a fixed period, but to the Occurrence coming from Start Communication (it’s timed to the radio packets). This is called “MainTask” and contains the standard Robot Main while loop and the HMI system.

-The standard RobotMain loop contains the logic to enter and exit autonomous (by starting/stopping AutonomousThread.vi), and the HMI system which reads joystick inputs and generates high-level responses.

-The Autonomous system is exactly the code we used last year and released here. We wrote many new script commands to fit this year’s game and robot design, but the architecture and script system is the same.

-New for this year is a “signal manager”, which manages timestamped binary signals. We use these to indicate boolean events, and corresponding “signal traps” tell a subsystem when new signals are sent, which is very useful for communicating between asynchronous systems.

-Data communication happens between data storage VI’s - Most are set as Subroutine priority for execution speed reasons, and store bundles of data between asynchronous tasks. A subset of data storage VI’s are state storage VI’s - They store the commanded state of a mechanism. A commanded state is enumerated in a strict type def, and the state storage VI’s allow systems to write a new state, read the current state, and also manage resetting the state to an uninitialized or manual state when entering enabled.
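The VI’s themselves are block diagrams, but a state storage VI behaves roughly like the little Java class below; the mechanism and state names are invented for illustration:

// Rough Java analogue of a state storage VI: it holds the commanded state of
// one mechanism, lets any system read or write it, and resets it on enable.
public class ShooterStateStore {
    public enum State { UNINITIALIZED, MANUAL, SPIN_UP, SHOOT }

    private State commanded = State.UNINITIALIZED;

    public synchronized void write(State s) { commanded = s; }

    public synchronized State read() { return commanded; }

    // Mirrors resetting to an uninitialized/manual state when entering enabled.
    public synchronized void resetOnEnable() { commanded = State.UNINITIALIZED; }
}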

-Most VI’s listen for SIGENABLED and react to being enabled, but none react to entering or exiting autonomous mode (we don’t even have a signal for it, or really store that state anywhere). The autonomous command system has wrapper command VI’s which allow setting states through the script system.

-The only exception to all of this is the drive motors - There is a set of VI’s which records the drive state, determining who controls the drive motors. They are set like a normal system during teleop, but autonomous routines directly command the drive motors, since autonomous drive actions are completely different from teleop drive actions.

Hi all,

I’m not a programmer, but I know this is what we’ve used in past years. We also use it in Rebound Rumble, but not as much.

http://code.google.com/p/grtframework/

Our team likes dividing our code into classes based upon the functional units of our robot. This year that meant a drivetrain, an elevator, a shooter, a bridge lowering device, a control class, a (sketch) motion sensor class, a constants class, an output class, and a main class for interfacing logic.

Each class interfaces with the others only in the main class, which makes bugs easier to track down. This keeps our code extremely clean, as variables and methods not needed by main are declared private and hidden in their class.

Having multiple classes also lets us use Object-Oriented Programming, so we could (if we ever needed to) create a “new” drivetrain for use in auto or teleop.

The last great thing I can think of at the moment is that we can multitask effectively, especially if each programmer writes one class. Close to the end of build season (when we had an entire robot), all of us programmers gathered to write the main class, taking into account strategy decisions and the workarounds that the mechanical team needed us to make in code. Our SVN server deserves a mention somewhere around here.

One thing you would want to be careful about is the overuse of global variables. They certainly have their place, but when you have global variables pointing to global variables, things have gone too far…

We code in LabVIEW. Justin and I are both very experienced with LabVIEW (we’ve both used it extensively in summer internships) and spent a lot of time last summer thinking about how to architect this year’s code. This is our first year trying something structured like this, so it hasn’t been perfected yet. Eventually, we came up with the following - we call it a “dual-parallel state machine” (a rough sketch in code follows the list):

  • The loop of the code follows three basic steps: input, calculations, and output. The input step gathers all the values to feed the calculations (this varies with robot mode) and controls which mode (state) the calculations use (we call this the state controller); the calculation step computes the outputs from the values and states passed in from the input section; and the final step sets the outputs (with appropriate safeties).
  • This basic process happens five times in our main loop - once for each robot subsystem (drive, shooter, etc). These are all run synchronously in parallel with each other. We played with the idea of having each subsystem in a separate loop, but we decided against it for some performance reasons. Our code currently runs around 50 iterations per second and at ~21% CPU (without vision enabled; with vision, it runs at a little over 90%), so I think we might be able to spare more parallel loops next year. But I’ll cross that bridge when I come to it.
  • Each subsystem is always in a “state”. A “state” is how we define which calculations to do to the input values. This is our way of executing autonomous routines in teleop - you just change the state of a certain subsystem either by a button being pressed or some other interrupt. We have one big typedef in our code, which holds all the possible states (for example, the most common ones are “driver control” and “auto”). Each subsystem implements states in a different way, but we know that every state is implemented - sort of like an interface for you APCS/Java people.
  • All of the states in our code have the ability to interact with each other. Each subsystem has its state stored in a functional global variable, along with some other information such as a first call boolean. Any subsystem can access any other subsystem’s current state information. This allows for subsystems to wait for each other before executing a task (this is especially useful in coding autonomous).
  • Speaking of autonomous, our structure made it very easy this year. Autonomous is simply an alternative state machine controller (input value provider) which is run instead of the teleop one. Our autonomous this year consists of (for each substate) an array of the states to execute in order, their input values, and their exit conditions. These values are then passed to the calculation and output phases, and that’s our autonomous.
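LabVIEW block diagrams don’t paste into a post, so here is a very rough Java-style sketch of one subsystem’s pass through the loop; the states, math, and values are all invented for illustration:

public class DriveSubsystem {
    public enum State { DRIVER_CONTROL, AUTO }

    private State state = State.DRIVER_CONTROL;

    // Step 1, input / state controller: pick the state and gather input values.
    public double[] input(boolean autoButtonPressed, double stickX, double stickY) {
        state = autoButtonPressed ? State.AUTO : State.DRIVER_CONTROL;
        return new double[] { stickX, stickY };
    }

    // Step 2, calculations: turn the inputs plus the current state into desired outputs.
    public double[] calculate(double[] in) {
        if (state == State.AUTO) {
            return new double[] { 0.5, 0.5 };                  // e.g. scripted "drive forward"
        }
        return new double[] { in[1] + in[0], in[1] - in[0] };  // arcade-style mix
    }

    // Step 3, output: apply the values to the motors, with safeties.
    public void output(double[] out) {
        // clamp to [-1, 1] and send to the speed controllers here
    }
}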

Anyway, I’ll probably release our code in about a week, so that will be a good supplement to my rambling. In the meantime, Justin and I are always happy to answer questions…

Another Command-Based Programming team here to chime in – here’s our codebase, for reference: https://github.com/prog694/joebot/

Ziv has already covered the basics of CBP. Ultimately, our team is pleased with this architecture – we liked the distinction between configurable robot actions (commands) and actual implementations (subsystems).

One thing we did this year was create a series of “fake” subsystems, for debugging purposes. If we wanted to only troubleshoot the Drivetrain and disable everything else, we would replace all of our regular subsystem instantiations with a safe, “Fake” version that simply prints statements instead of actuating hardware.
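Roughly, the idea looks like this (a sketch with illustrative names; the real code may structure it differently):

// The real and fake drivetrains share one interface, so the rest of the code
// does not care which one it gets.
interface DrivetrainLike {
    void tankDrive(double left, double right);
}

// Swapped in when we only want to troubleshoot something else:
// it prints instead of actuating hardware.
class FakeDrivetrain implements DrivetrainLike {
    public void tankDrive(double left, double right) {
        System.out.println("FakeDrivetrain: left=" + left + " right=" + right);
    }
}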

I will now comment on our software engineering practices, since others have too. We have an unusually large team of [software] engineers, many of whom are unfortunately made anonymous by the orange ‘prog694’ committer in the graph linked below. Our development process is best characterized by the following graph: https://github.com/prog694/joebot/network (click and drag the graph to scroll it).

We have one guaranteed stable code branch, ‘master.’ From ‘master,’ we branch off into a ‘develop’ branch, where we write all of our untested code.

When someone is charged with implementing a feature, he/she creates a branch off of ‘develop’ with the prefix ‘feature-,’ e.g. ‘feature-foobar.’ Then, when it is time to test this feature on one of our old robots, from that feature branch we branch off into a new branch with the prefix ‘debug-’, e.g. ‘debug-foobar.’ In that debug branch, we implement test code that will prove that the feature works.

Finally, once the feature has been tested on old robots as thoroughly as we can, we merge that ‘feature-’ branch back into ‘develop’ using GitHub “Pull Requests” (example). Because we have so many developers of various skill levels, quality control is critical, so we review every new feature before it is merged into our ‘develop’ branch.

At a regional, once we have run consecutive matches without re-loading code, we merge ‘develop’ into ‘master’ and tag that commit, starting with v1.0 and iterating as we improve control over time.

Thanks for the discussion on practices. I have learned a bit about github over the past few months and really want to get our team on a development track like this. Currently working with the school district’s IT dept to get unfettered https access to Github :stuck_out_tongue:

Here’s a good talk about Github for anyone familiar with version control that basically describes a process like the above – How Github Uses Github to Build Github (warning, I don’t remember, but there may be a few “bad words” here and there but not many – this is a presentation at an informal convention given by an adult to adults) http://zachholman.com/talk/how-github-uses-github-to-build-github

-Each subsystem on our robot (turret, drivetrain, ramp device, etc) is an object/class in our code. Each has an “operator_update” function called in teleop where inputs on the joysticks are given - this allows for all calculations and commands to be handled in each class.

Piece of Main Teleop Loop:


     Launcher->UpdateLaunch_Operator(turretStick->GetTrigger(),turretStick->GetRawButton(6));


-An autonomous object is used in our code as well. Inside of the class, the routines are comprised of single-line commands such as “LaunchBall”, “GyroTurn(angle)”, and “SetWheelState(state)”.

-An autonomous routine would look like this:


void AutoController::AUTO_Layup()
{
     Shooter->SetState(state_RPM);
     SetShooter(1550);
     Drive_GyroEnc(.6f,0.0f,450);
     SetTurret(-45.0f);
     IntakeOn();
     FullLaunch();
     WaitForBall();
     FullLaunch();
}

In the main Autonomous Loop:


if(DIP_1 == 1)
{
     AutonController->AUTO_Layup();
}
else.....

-Functions on the robot are all condensed to one-button commands (like the 254 auto-score).

-We use bitbucket for source control.

Team 230 takes an approach similar to what daniel describes. This year we have classes representing the shooter (which includes the turret control as well), drive train, inertial data sensors (gyros, wheel encoders, accelerometer), the bridge device, and ball gathering. Each of these classes defines an interface [1] to support the various tasks that will be requested by the operator and [2] to provide any information that will be needed by other classes, as well as [3] a service function which is called each time through the main loop of teleop and autonomous to perform multi-step operations. Note that none of the other interface functions are allowed to perform operations which take significant processing time; they simply set a flag to start an operation or turn on a mode and return.
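A rough sketch of that flag-and-service pattern, written here in Java with an invented mechanism (the bracketed numbers match [1]-[3] above):

public class BridgeDevice {
    private boolean lowerRequested = false;
    private boolean lowering = false;

    public void requestLower() {        // [1] cheap request: just sets a flag and returns
        lowerRequested = true;
    }

    public boolean isLowering() {       // [2] information other classes may need
        return lowering;
    }

    public void service() {             // [3] called once per teleop/autonomous loop
        if (lowerRequested) {
            lowering = true;
            lowerRequested = false;
        }
        if (lowering) {
            // run one small step of the lowering sequence each pass, and
            // clear 'lowering' when a limit switch says the motion is done
        }
    }
}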

The responsibility for each class is given to a single student programmer so that they maintain that code (with oversight by the programming mentor). This gives each student something they can work on and be proud of as an individual accomplishment, as well as being part of the programming team and what we come together to create. More experienced students often handle multiple classes. And ironically enough, we also make extensive use of SVN for our source code control.

Our TeleOp and Autonomous functions are basically loops which maintain the loop time (required for PID loops and other control systems) and make use of the interface defined by each class to facilitate the operation required… either based on an operator request (like pushing a button on the joystick) or based on the steps defined for the currently selected autonomous mode. Reusing the same class interface functions to perform operations in both modes allows our autonomous to come together very quickly and reliably after the testing is done for teleop.

I don’t have a copy of our code anywhere that is accessible on the internet, but would be happy to share a copy with anyone who is interested. Please feel free to PM me.

This year we used the command based system provided by WPI. Once we figured it out it worked really well for us and provided a great deal of functionality. The only issue I had is that it can take a lot more time to make some small changes especially when you need to create new commands. We’re at 29 class files at the moment.

We wrote a multi-threaded API during pre-season. Basically, we have commands and subsystems and all that, but I hate state machines, so every command had its own thread.

We ended up writing 64 classes. It took way more time than it should have, and we practically rewrote the entire WPILibJ (which I am considering finishing at some point).
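For the curious, the core of the idea was roughly the following; this is a sketch with invented names, not the actual API:

// Each command runs straight-line code on its own thread instead of being
// chopped into a state machine.
abstract class ThreadedCommand implements Runnable {
    private Thread worker;

    public void start() {
        worker = new Thread(this);
        worker.start();                  // the command's sequence runs here
    }

    public void cancel() {
        if (worker != null) {
            worker.interrupt();          // commands must tolerate interruption
        }
    }

    // The whole sequence, written top to bottom (waits, loops, and all).
    public abstract void run();
}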

Keep it simple, stupid.

Are you open-source? I am curious to see your implementation.

One interesting idea our team implemented this year was functional programming (having functions accept functions as parameters).

This way we could write a ToggleButton function that takes two functions and, using the command system, schedules them to run alternately each time a button is pressed.

We could also write things like UntilDistanceTo or TimeSince for autonomous or teleop pseudo-autonomous code.

It made reusing code very easy, as almost any request can be stuffed into a function without modifying the functional code receiving it.
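A stripped-down sketch of the ToggleButton idea (the real version scheduled the actions through the command system; the names and details here are simplified):

// Takes two actions and runs them alternately on each new button press.
public class ToggleButton {
    private final Runnable first;
    private final Runnable second;
    private boolean useFirst = true;
    private boolean wasPressed = false;

    public ToggleButton(Runnable first, Runnable second) {
        this.first = first;
        this.second = second;
    }

    // Call once per loop with the current button reading.
    public void update(boolean pressed) {
        if (pressed && !wasPressed) {        // act on the rising edge only
            if (useFirst) { first.run(); } else { second.run(); }
            useFirst = !useFirst;
        }
        wasPressed = pressed;
    }
}

With 2012-era Java the two Runnables would typically be anonymous inner classes wrapping, say, a deploy call and a retract call.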