#1
How does your team architect its codebase?
My question is aimed at Java/C++ teams, but LabVIEW/other teams could provide helpful information here too.
How does your team architect its codebase? From what I can tell, a search for "architecture" in the Programming family of forums only brings up the 2011 Cheesy Poof code release as a discussion of code architecture. I've been wondering how other teams plan out and structure their codebases. 1675 has gone from a "one file" team in the olden days to something that resembles OOP now.

FRC robots are a seemingly simple yet sometimes complex system, especially when you have preset routines that run autonomously during teleop. 1675 (and, from what I perused yesterday, 254) had an "autoscore" last year: a series of actions which, given the robot and tube in a certain state, took all the repetitive steps out of the drivers' hands and performed them precisely for each hang.

This year we aimed for classes representing each system, each with different methods for teleoperated and autonomous. Because we wanted to do some things (semi)autonomously in teleop and communicate between systems, this broke down, and it was too late to do a redesign, so there are a lot of hack jobs. We also wanted to make classes "plug-and-play" as we iterated, using interfaces to represent things like our Hood (implemented later as SimpleHood, EncoderHood, and PIDHood) and Shooter (implemented later as SteppingShooter, VoltageShooter, and PIDShooter). This also broke down a bit, since the different combinations of components tended to interact in different ways, defeating the purpose of an interface.

Things that would be awesome for this thread:
Our code for Rebound Rumble can be found at https://github.com/pordonj/frc1675-2012 (There were some changes over the weekend that I need to commit once I get access to the code again this week).

Last edited by BigJ : 25-03-2012 at 17:36.
#2
Re: How does your team architect its codebase?
Our code design always falls along functional lines, with the code modules closely following the moving pieces of the robot and communications channels. Most of the robotic projects I do at work turn out the same way. This year the different pieces include:
main - starts all other tasks and services comms with the driver station
auto - autonomous/hybrid behavior; reads and executes a custom script language
wheels - starts and monitors all drive wheel servos; crab, spin, and measured movements
gatherer - runs the four ball-gathering systems and the elevator
shooter - runs the servos for turret position, wheel speed, and the feeder
targeting - runs the camera and processes images
dashboard - collects and sends dashboard data
monitor - monitors the health and status of the other modules

Each piece/module is a separate task, all derived from a common class. Each spins in an endless loop, reading a message queue and performing the duties requested. It makes debugging easier; it is like eight smaller, less complex programs. Since they are separate tasks, we can adjust their relative priorities through the operating system (taskPrioritySet/Get).

HTH
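The module-on-a-message-queue pattern above is language-agnostic. Below is a rough Java sketch of the same idea for teams not on C++ (taskPrioritySet/Get suggests the real code runs as VxWorks tasks); every class name, message, and priority here is invented for illustration and is not taken from the actual code.
Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One "module as a task" spinning on a message queue.
abstract class Module implements Runnable {
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<String>();
    private volatile boolean running = true;

    // Other modules (or main) post work by name; a real system would use a
    // typed message object instead of a String.
    public void send(String message) {
        inbox.offer(message);
    }

    public void stop() {
        running = false;
        inbox.offer("STOP"); // wake the loop so it can exit
    }

    // Endless loop: block until a message arrives, then handle it.
    public void run() {
        while (running) {
            try {
                String message = inbox.take();
                if (!"STOP".equals(message)) {
                    handle(message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    protected abstract void handle(String message);
}

class ShooterModule extends Module {
    protected void handle(String message) {
        // Requests such as "SPIN_UP" or "FEED" would drive hardware here.
        System.out.println("shooter handling: " + message);
    }
}

public class ModuleDemo {
    public static void main(String[] args) throws InterruptedException {
        ShooterModule shooter = new ShooterModule();
        Thread shooterTask = new Thread(shooter, "shooter");
        shooterTask.setPriority(Thread.NORM_PRIORITY + 1); // rough analog of taskPrioritySet
        shooterTask.start();

        shooter.send("SPIN_UP");
        shooter.send("FEED");
        shooter.stop();
        shooterTask.join();
    }
}
A real robot would have one such module per line in the list above, with main constructing and starting all of them.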
#3
Re: How does your team architect its codebase?
This is a nontrivial problem for which there are several correct answers. First, here is Team 125's code from this year. It used to be organized according to the structure I outline below, but two back-to-back competitions led to some unfortunately hacky fixes under time pressure. C'est le code.
For the past two years, 125 has used some sort of command-subsystem model*, with a significant amount of support for it provided by WPILib this year. I covered it in my Java presentation at this year's kickoff; for more detail, check out the WPILib Cookbook or the FRC Java API. In brief summary: each mechanism is wrapped in a subsystem that owns its hardware, each robot action is a command that uses one or more subsystems, and a scheduler runs the active commands every loop and sorts out conflicts over shared subsystems (a stripped-down sketch appears at the end of this post).
Last year I wrote a scripting language for autonomous mode, but we never had time to test it. If you do something of this degree of complexity, start in the offseason.

*This was originally inspired by 254's autonomous code from 2010; a common mentor between our teams brought the idea to the opposite coast, where we made it more object-oriented and applied it in teleop. In 2011 we wrote our own system for running commands, but thanks to the WPILib update that is no longer an obstacle to using this intuitive architecture.
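For anyone who hasn't seen the command-subsystem model, here is a stripped-down, framework-free Java sketch of the idea. The class names and the shooter example are invented for illustration; WPILib's actual Command/Subsystem/Scheduler classes do quite a bit more (subsystem requirements, default commands, interruption).
Code:
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Subsystems own hardware; commands describe actions on them; a scheduler
// steps every running command once per control loop.
abstract class Subsystem {
    // Motor controllers and sensors would live in concrete subclasses.
}

abstract class Command {
    public void initialize() {}            // called once when the command starts
    public abstract void execute();        // called every scheduler pass
    public abstract boolean isFinished();  // true when the command should end
    public void end() {}                   // called once when the command finishes
}

class Scheduler {
    private final List<Command> running = new ArrayList<Command>();

    public void start(Command c) {
        c.initialize();
        running.add(c);
    }

    // Call once per control loop (e.g. once per driver-station packet).
    public void run() {
        Iterator<Command> it = running.iterator();
        while (it.hasNext()) {
            Command c = it.next();
            c.execute();
            if (c.isFinished()) {
                c.end();
                it.remove();
            }
        }
    }
}

class Shooter extends Subsystem {
    private double speed;
    public void setSpeed(double s) { speed = s; System.out.println("shooter speed: " + s); }
    public double getSpeed()       { return speed; }
}

class SpinUpShooter extends Command {
    private final Shooter shooter;
    private final double target;
    public SpinUpShooter(Shooter shooter, double target) { this.shooter = shooter; this.target = target; }
    public void execute()       { shooter.setSpeed(target); }
    public boolean isFinished() { return shooter.getSpeed() == target; }
}

public class CommandDemo {
    public static void main(String[] args) {
        Scheduler scheduler = new Scheduler();
        scheduler.start(new SpinUpShooter(new Shooter(), 1550));
        scheduler.run(); // on a robot this runs every teleop and autonomous loop
    }
}
The payoff is that the same commands can be bound to operator buttons in teleop or sequenced for autonomous routines.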
#4
Re: How does your team architect its codebase?
Great replies so far. I was aware of the new Command stuff but didn't look deeply into it, as I noticed it too close to the start of the season. I think we will try it in the offseason; it looks really good once you get your head wrapped around it.
Also, this thread isn't just for 1675. Even if you have an architecture completely different from what's being discussed here, go ahead and post. It would be nice to have a large set of ideas for teams to be able to look at in the future.
#5
Re: How does your team architect its codebase?
We have been using a similar architecture for the past few years, with a few improvements this year. Everything is written in LabVIEW, and follows common programming practices for LabVIEW.
-All mechanisms are organized into subsystem controllers, each containing a high-level VI with a while loop running at a certain frequency (most run every 20 ms). A few subsystems run multiple loops at different speeds and priorities.
-All subsystem controller VIs set the thread priority and execution system for that subsystem. We organize them logically, but since all threads are important, we generally set them all to above-normal or high.
-There is one thread which is not timed to a fixed period but to the Occurrence coming from Start Communication (i.e., it is timed to the radio packets). This is called "MainTask" and contains the standard Robot Main while loop and the HMI system.
-The standard Robot Main loop contains the logic to enter and exit autonomous (by starting/stopping AutonomousThread.vi), plus the HMI system, which reads joystick inputs and generates high-level responses.
-The autonomous system is exactly the code we used last year and released here. We wrote many new script commands to fit this year's game and robot design, but the architecture and script system are the same.
-New for this year is a "signal manager", which manages timestamped binary signals. We use these to indicate boolean events, and corresponding "signal traps" tell a subsystem when new signals are sent, which is very useful for communicating between asynchronous systems (see the sketch after this list).
-Data communication happens between data storage VIs. Most are set to Subroutine priority for execution-speed reasons and store bundles of data shared between asynchronous tasks. A subset of data storage VIs are state storage VIs: they store the commanded state of a mechanism. A commanded state is enumerated in a strict type def, and the state storage VIs allow systems to write a new state and read the current state, and also manage resetting the state to an uninitialized or manual state when entering enabled.
-Most VIs listen for SIGENABLED and react to being enabled, but none react to entering or exiting autonomous mode (we don't even have a signal for it, or really store that state anywhere). The autonomous command system has wrapper command VIs which allow setting states through the script system.
-The only exception to all of this is the drive motors: there is a set of VIs which records the drive state, i.e., who controls the drive motors. They are set like a normal system during teleop, but autonomous routines directly command the drive motors, since autonomous drive actions are completely different from teleop drive actions.
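The signal manager/trap idea translates outside LabVIEW too. Here is a rough Java rendering for the text-language teams in this thread; every name below is invented for illustration and is not a port of the actual VIs.
Code:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A "signal" is a named, timestamped boolean event; a "trap" lets an
// asynchronous loop ask "has this signal fired since I last looked?"
class SignalManager {
    private final Map<String, Long> lastFired = new ConcurrentHashMap<String, Long>();

    public void fire(String name) {
        lastFired.put(name, System.nanoTime());
    }

    public long lastFiredTime(String name) {
        Long t = lastFired.get(name);
        return (t == null) ? 0L : t;
    }
}

class SignalTrap {
    private final SignalManager manager;
    private final String name;
    private long lastSeen;

    public SignalTrap(SignalManager manager, String name) {
        this.manager = manager;
        this.name = name;
    }

    // Returns true exactly once per firing, so each consumer loop sees the
    // event without clearing shared flags owned by someone else.
    public boolean check() {
        long t = manager.lastFiredTime(name);
        if (t > lastSeen) {
            lastSeen = t;
            return true;
        }
        return false;
    }
}

public class SignalDemo {
    public static void main(String[] args) {
        SignalManager signals = new SignalManager();
        SignalTrap enabledTrap = new SignalTrap(signals, "SIGENABLED");

        signals.fire("SIGENABLED");              // e.g. the main task notices the robot was enabled
        System.out.println(enabledTrap.check()); // true: new signal since the last check
        System.out.println(enabledTrap.check()); // false: this trap already consumed it
    }
}
Each asynchronous loop owns its own trap, so one firing of a signal is observed exactly once per consumer.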
#6
Re: How does your team architect its codebase?
Hi all,
I'm not a programmer, but I know this is what we've used in past years. We also use it in Rebound Rumble, but not as much. http://code.google.com/p/grtframework/
#7
Re: How does your team architect its codebase?
Our team likes dividing our code into classes based on the functional units of our robot. This year that meant a drivetrain class, an elevator, a shooter, a bridge-lowering device, a control class, a (sketch) motion sensor, a constants class, an output class, and a main class for the interfacing logic.
Each class interfaces only with the main class, which makes bugs easier to track down. This keeps our code extremely clean, since variables and methods not needed by main are declared private and hidden inside their class. Having multiple classes also lets us use Object-Oriented Programming, so we could (if we ever needed to) create a "new" drivetrain for use in auto or teleop. The last great thing I can think of at the moment is that we can multitask effectively, especially if each programmer writes one class.

Close to the end of build season (when we had an entire robot), all of us programmers gathered to write the main class, taking into account decisions on strategy and workarounds that the mechanical team needed us to make in code. Our SVN server deserves a mention somewhere around here.

One thing you would want to be careful about is the overuse of global variables. They certainly have their place, but when you have global variables pointing to global variables, things have gone too far...
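A bare-bones Java sketch of the "only main touches the subsystems" layout described above; the class names and methods are invented for illustration, not taken from the actual code.
Code:
// Private details (motor controllers, scaling math) stay hidden in each class.
class Drivetrain {
    private double left, right;

    public void tankDrive(double leftCmd, double rightCmd) {
        left = leftCmd;
        right = rightCmd;
        System.out.printf("drive L=%.2f R=%.2f%n", left, right);
    }
}

class Shooter {
    private boolean spinning;

    public void spinUp() { spinning = true; System.out.println("shooter spinning up"); }
    public void stop()   { spinning = false; }
}

// Only the main/robot class holds references to the subsystems, so every
// interaction between them is visible in one place.
public class RobotMain {
    private final Drivetrain drive = new Drivetrain();
    private final Shooter shooter = new Shooter();

    public void teleopLoop(double leftStick, double rightStick, boolean shootButton) {
        drive.tankDrive(leftStick, rightStick);
        if (shootButton) {
            shooter.spinUp();
        } else {
            shooter.stop();
        }
    }

    public static void main(String[] args) {
        new RobotMain().teleopLoop(0.5, 0.5, true);
    }
}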
#8
Re: How does your team architect its codebase?
We code in LabVIEW. Justin and I are both very experienced with LabVIEW (we've both used it in summer internships extensively) and spent a lot of time last summer thinking about a way to architect this year's code. This is our first year trying something structured like this, so it hasn’t been perfected yet. Eventually, we came up with the following - we call it a "dual-parallel state machine":
Anyway, I'll probably release our code in about a week, so that will be a good supplement to my rambling. In the meantime, Justin and I are always happy to answer questions...
#9
Re: How does your team architect its codebase?
Another Command-Based Programming team here to chime in -- here's our codebase, for reference: https://github.com/prog694/joebot/
Ziv has already covered the basics of CBP. Ultimately, our team is pleased with this architecture -- we liked the distinction between configurable robot actions (commands) and actual implementations (subsystems). One thing we did this year was create a series of "fake" subsystems for debugging purposes. If we wanted to troubleshoot only the Drivetrain and disable everything else, we would replace all of our regular subsystem instantiations with a safe "Fake" version that simply prints statements instead of actuating hardware (a minimal sketch of the idea appears at the end of this post).

I will now comment on our software engineering practices, since others have too. We have an unusually large team of [software] engineers, many of whom are unfortunately made anonymous by the orange "prog694" committer in the preceding graph. Our development process is best characterized by the following graph: https://github.com/prog694/joebot/network (click & drag the graph to scroll it).

We have one guaranteed-stable code branch, 'master.' From 'master,' we branch off into a 'develop' branch, where we write all of our untested code. When someone is charged with implementing a feature, he/she creates a branch off of 'develop' with the prefix 'feature-', e.g. 'feature-foobar.' Then, when it is time to test that feature on one of our old robots, we branch off of the feature branch into a new branch with the prefix 'debug-', e.g. 'debug-foobar.' In that debug branch, we implement test code that will prove the feature works. Finally, once the feature has been tested on old robots to the greatest extent possible, we merge the 'feature-' branch back into 'develop' using GitHub "Pull Requests" (example). Because we have so many developers of various skill levels, quality control is critical, so we review every new feature before it is merged into our 'develop' branch. At a regional, once we have run consecutive matches without re-loading code, we merge 'develop' into 'master' and tag that commit, starting with v1.0 and iterating as we improve control over time.

Last edited by carrillo694 : 25-03-2012 at 23:54. Reason: Added clarification.
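Here is a minimal sketch of the fake-subsystem idea in plain Java; the interface and names are invented for illustration (the real versions are built on WPILib's subsystem classes).
Code:
// Real and fake implementations share an interface, so one line in robot
// setup decides whether hardware gets actuated or statements get printed.
interface Drivetrain {
    void drive(double left, double right);
}

class RealDrivetrain implements Drivetrain {
    public void drive(double left, double right) {
        // Motor controller calls would go here.
    }
}

class FakeDrivetrain implements Drivetrain {
    public void drive(double left, double right) {
        // No hardware touched; just log what would have happened.
        System.out.printf("FAKE drive: left=%.2f right=%.2f%n", left, right);
    }
}

public class RobotSetup {
    // Flip this flag (or swap the constructor call) to isolate one subsystem
    // while everything else stays safely disabled.
    private static final boolean USE_FAKE_DRIVETRAIN = true;

    public static void main(String[] args) {
        Drivetrain drivetrain = USE_FAKE_DRIVETRAIN ? new FakeDrivetrain() : new RealDrivetrain();
        drivetrain.drive(0.8, 0.8);
    }
}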
#10
Re: How does your team architect its codebase?
Here's a good talk about GitHub, for anyone familiar with version control, that describes basically the process above -- How GitHub Uses GitHub to Build GitHub. (Warning: I don't remember for sure, but there may be a few "bad words" here and there -- not many; this is a presentation given by an adult to adults at an informal convention.) http://zachholman.com/talk/how-githu...o-build-github
#11
Re: How does your team architect its codebase?
-Each subsystem on our robot (turret, drivetrain, ramp device, etc.) is an object/class in our code. Each has an "operator_update" function called in teleop, which is passed the joystick inputs - this allows all calculations and commands to be handled within each class.
Piece of the main teleop loop:
Code:
Launcher->UpdateLaunch_Operator(turretStick->GetTrigger(), turretStick->GetRawButton(6));
-An autonomous routine would look like this:
Code:
void AutoController::AUTO_Layup()
{
    Shooter->SetState(state_RPM);     // put the shooter into its RPM-control state
    SetShooter(1550);                 // set the target shooter speed
    Drive_GyroEnc(.6f, 0.0f, 450);    // gyro/encoder-assisted drive move
    SetTurret(-45.0f);                // aim the turret
    IntakeOn();
    FullLaunch();                     // fire a ball
    WaitForBall();                    // wait for the next ball to be in position
    FullLaunch();                     // fire again
}
-The routine to run is selected from the DIP switch inputs:
Code:
if(DIP_1 == 1)
{
AutonController->AUTO_Layup();
}
else.....
-We use Bitbucket for source control.
#12
Responsibility for each class is given to a single student programmer, so that they maintain that code (with oversight from the programming mentor). This gives each student something they can work on and be proud of as an individual accomplishment, as well as being part of the programming team and what we create together. More experienced students often handle multiple classes. And ironically enough, we also make extensive use of SVN for our source code control.

Our TeleOp and Autonomous functions are basically loops which maintain the loop time (required for PID loops and other control systems) and use the interface defined by each class to carry out the operation required -- either based on an operator request (like pushing a button on the joystick) or based on the steps defined for the currently selected autonomous mode. Reusing the same class interface functions to perform operations in both modes allows our autonomous to come together very quickly and reliably once the testing is done for teleop (a rough sketch of this structure follows at the end of this post).

I don't have a copy of our code anywhere that is accessible on the internet, but I would be happy to share a copy with anyone who is interested. Please feel free to PM me.
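A rough Java sketch of that structure, with every name invented for illustration: a fixed-period loop in which teleop and autonomous both go through the same class interface.
Code:
// One subsystem class with a single interface used by both modes.
class Arm {
    public void moveTo(double angleDegrees) {
        System.out.println("arm -> " + angleDegrees + " deg");
    }
}

public class LoopDemo {
    private static final long LOOP_MS = 20; // keep a steady period for PID-style control

    private final Arm arm = new Arm();

    // Operator request drives the interface in teleop...
    public void teleopStep(boolean raiseButton) {
        if (raiseButton) {
            arm.moveTo(90.0);
        }
    }

    // ...and the scripted autonomous steps drive the very same interface.
    public void autonomousStep(int step) {
        if (step == 0) {
            arm.moveTo(90.0);
        }
    }

    public void runAutonomous() throws InterruptedException {
        for (int step = 0; step < 3; step++) {
            long start = System.currentTimeMillis();
            autonomousStep(step);
            long elapsed = System.currentTimeMillis() - start;
            Thread.sleep(Math.max(0, LOOP_MS - elapsed)); // maintain the loop time
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new LoopDemo().runAutonomous();
    }
}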
#13
Re: How does your team architect its codebase?
This year we used the command-based system provided by WPI. Once we figured it out, it worked really well for us and provided a great deal of functionality. The only issue I had is that it can take a lot more time to make some small changes, especially when you need to create new commands. We're at 29 class files at the moment.
#14
Re: How does your team architect its codebase?
We wrote a multi-threaded API during pre-season. Basically, we have commands and subsystems and all that, but I hate state machines, so every command had its own thread.
We ended up writing 64 classes. It took way more time than it should have, and we practically rewrote the entire WPILibJ (which I am considering finishing at some point). Keep it simple, stupid.
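For anyone curious what the command-per-thread approach looks like, here is a bare-bones Java sketch; it is not the actual API, and all names are invented. It also hints at why the class count balloons: every subsystem touched by two threads now needs its own synchronization.
Code:
// Each command runs on its own thread instead of being stepped by a scheduler.
abstract class ThreadedCommand implements Runnable {
    private Thread worker;

    public void start() {
        worker = new Thread(this, getClass().getSimpleName());
        worker.start();
    }

    public void cancel() {
        if (worker != null) {
            worker.interrupt();
        }
    }
}

class DriveForwardCommand extends ThreadedCommand {
    public void run() {
        try {
            System.out.println("driving forward");
            Thread.sleep(1000); // stand-in for "drive until the sensors say stop"
            System.out.println("done");
        } catch (InterruptedException e) {
            System.out.println("interrupted; stopping motors");
        }
    }
}

public class ThreadedDemo {
    public static void main(String[] args) {
        new DriveForwardCommand().start();
        // Any subsystem shared between commands now needs locking, which is a
        // big part of why the advice above is "keep it simple".
    }
}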