The OpenRIO C++ Coding Guidelines for FRC
OpenRIO has decided to release some coding standards and guidelines for the C++ programming language, tailored to embedded development on the National Instruments RoboRIO. This short document includes guidelines to ensure good, performant coding practices for FRC.
https://github.com/Open-RIO/C-Coding-Guidelines
Peter Johnson
08-07-2016, 19:46
Why the dislike of std::unique_ptr? I can understand not wanting the locking/reference counting overheads of std::shared_ptr, but std::unique_ptr is essentially just a simple wrapper around a pointer to provide scoped destruction (basically a less dangerous version of std::auto_ptr).
While std::map and its kin are very allocation-intensive, it's hard to get more efficient than std::vector when you need a dynamically allocated or resizeable array. It's basically a wrapper around a pointer and a size.
I'm surprised at the recommendation to use std::exception and std::runtime_error. Exception-safe code is very hard to write well, and usually the exception paths are not effectively tested anyway (as they're hard to test). You're almost better off crashing and letting the auto-program-restart functionality get you back up and running.
Why the dislike of std::unique_ptr? I can understand not wanting the locking/reference counting overheads of std::shared_ptr, but std::unique_ptr is essentially just a simple wrapper around a pointer to provide scoped destruction (basically a less dangerous version of std::auto_ptr).
Although these calls are known to be safe, that additional safety isn't really required if Rule 4 (no dynamic memory allocation outside of one-time code initialization) is followed. We're aware that the overhead is fairly small, and that std::unique_ptr is a thin wrapper.
We've found that when developing software specifically for embedded systems, especially those on a periodic loop, we want to try to reduce the use of both locking functions and the C++ STL. The main reasoning behind this is that compilers are notoriously bad at optimizing templates and the STL.
You'll often find this mindset also carried in Game Development and other applications where performance (and memory usage) is a key factor.
While std::map and its kin are very allocation-intensive, it's hard to get more efficient than std::vector when you need a dynamically allocated or resizeable array. It's basically a wrapper around a pointer and a size.
The main goal around this is to avoid allocating memory after a one-time task initialization (Rule 4: heap memory). The main idea is to "allocate what you will need", even if that means over-allocating a static array. This mostly follows from the JPL Institutional Coding Standard's Rule 5. Of course, in some cases, this rule is made to be broken. It should be noted that most of these rules should be read in a "periodic context", tasks which occur over and over again that can very easily bog down the system if optimizations do not occur.
I'm surprised at the recommendation to use std::exception and std::runtime_error. Exception-safe code is very hard to write well, and usually the exception paths are not effectively tested anyway (as they're hard to test). You're almost better off crashing and letting the auto-program-restart functionality get you back up and running.
To clear this up: we're not promoting the use of exceptions or runtime_error where a simple return-value check will suffice. We're trying to use these types as opposed to calls to assertions. Assertions are great for debugging, but when running on a robot in a match, they can be disastrous. By allowing assertions to occur, you can easily slip into a code reboot loop, and the code startup time during a match is far from desirable. It should be noted that Rule 5 (error processing and catching) is designed to enforce exception-safe code. It should also be noted that this rule is one of the few rules not meant to be read in a periodic context, but instead in an initialization context (i.e. allocating a hardware object that isn't there).
I'll update the document tonight to reflect these clarifications
calcmogul
09-07-2016, 08:08
Given that this a technical discussion of C++, I thought I'd give my thoughts at length.
We're aware that the overhead is fairly small, and that they are a wrapper.
I'd say std::unique_ptr has no overhead since it's a wrapper for what one would be doing with raw owning pointers anyway. The only member variable a std::unique_ptr has is the pointer itself; it simply takes ownership of the pointer it is constructed with, calls delete in the destructor, and transfers the pointer in the move assignment operator. These are inlined by the compiler. Another benefit of using std::unique_ptr is the explicit ownership semantics, which help avoid dangling pointers and double-frees (raw pointers are considered non-owning). The Rust programming language takes that explicit ownership/lifetime concept to the extreme by building it into the type system.
In general, all dynamic memory allocations and resource acquisitions (like locking a mutex) should be contained within an RAII wrapper to handle cleanup. In modern C++, one typically never needs to use new and delete or C's malloc() and free() directly.
We've found that when developing software specifically for embedded systems, especially those on a periodic loop, we want to try to reduce the use of both locking functions and the C++ STL. The main reasoning behind this is that compilers are notoriously bad at optimizing templates and the STL.
I agree that blocking functions can be bad in real-time applications and should be minimized (what's "good enough" depends on one's deadlines and what "blocking" entails). However, poor performance of templates is a myth. When a template is instantiated with a type, all the code generation happens at compile time, and the resulting code is exactly the same as if the developer had written that implementation the normal way without templates.
It is possible to see linear growth in the executable size when instantiating a template with different types because the compiler emits an implementation for each. However, that would also happen if the code was written without templates due to the developer writing two implementations themselves. If the instantiation of two templates is the same, the compiler won't emit a second identical copy (inlining notwithstanding).
You'll often find this mindset also carried in Game Development and other applications where performance (and memory usage) is a key factor.
The STL is designed to provide generic (as in not special-purpose) implementations of common containers. Companies have developed their own STLs with custom allocators to optimize heap memory usage (like https://github.com/electronicarts/EASTL).
In addition, optimizations can be made to avoid heap allocations. For example, the GCC 5 ABI std::string supports the small string optimization, which allows small character strings to be stored on the stack inside the std::string object. LLVM also has several useful containers, and ntcore and WPILib use some of them on occasion. SmallVector (http://llvm.org/docs/ProgrammersManual.html#dss-smallvector) (statically allocated vector), and DenseMap (http://llvm.org/docs/ProgrammersManual.html#llvm-adt-densemap-h) (map which uses contiguous storage) are a few.
The main goal around this is to avoid allocating memory after a one-time task initialization (Rule 4: heap memory). The main idea is to "allocate what you will need", even if that means over-allocating a static array.
When allocation of large objects is needed during runtime, one strategy is to allocate a block of heap memory up front, then treat that like a mini-heap so that allocations and frees from it are constant-time, or at least deterministic. Don't forget that one can reserve space in a std::vector with reserve() so heap allocations won't occur upon insertion and deletion up to the reserved size.
std::array is an iterable replacement for a static C array.
... the code startup time during a match is far from desirable.
For C++, it only takes a few seconds at most to restart the robot code executable after it has crashed, which isn't horrible. I'm not saying that crashing is good, but FRC is more forgiving than the average hard real-time environment. :)
Regarding rule 6, std::lock_guard is an RAII wrapper for a mutex.
JamesTerm
09-07-2016, 08:51
This short document includes guidelines to ensure good, performant coding practices for FRC.
It has some good "guidelines", but consider that its requirements are aimed at a different level of development, such as rule 18:
"
Static function pointers are more efficient both in runtime and by the compiler than calls to a class object
"
A red flag goes up in my mind when I see performance gain given as the justification, which suggests the scope of development here may not apply to FRC programming in general. The philosophy we have is to choose readability over performance, since chances are most code is not in "inner loops" (i.e., most code iterates no faster than about every 10 ms). For FRC we shouldn't need to write inner loops... I've never needed to except for vision... which brings up another point: different aspects of code can call for different rules depending on what the code is doing. That said, these are good guidelines, but don't let them be the be-all and end-all. Understand why they exist and feel free to do what feels right for you... as long as the team agrees with your coding standards, it's all good.
AustinSchuh
09-07-2016, 17:10
The philosophy we have is to choose readability over performance
Yes! To quote Knuth, "premature optimization is the root of all evil".
On 971 and at work, we focus on making sure that our code is readable, testable, and deterministic. Notice that performance isn't a main driver. Every time we've focused on performance too early, we've ended up optimizing the wrong thing. The most noticeable speedups I've seen recently in real-time code were ones where someone reduced the number of context switches or removed some huge copies which were accidentally happening, rather than ones where someone optimized a function call. We recently got a 2x (or more) speedup by switching to polling for log messages rather than waking up for every new log message. We would not have anticipated that in the design phase, and only caught it by profiling and testing.
Why the dislike of std::unique_ptr? I can understand not wanting the locking/reference counting overheads of std::shared_ptr, but std::unique_ptr is essentially just a simple wrapper around a pointer to provide scoped destruction (basically a less dangerous version of std::auto_ptr).
I agree with Peter here. I've been very impressed by unique_ptr. It has no overhead, or if it does on a poor compiler, it isn't worth discussing given its benefits. Since I started using unique_ptr exclusively, I think I can count the number of memory leaks and use-after-free bugs that I've seen on one hand.
I'm surprised at the recommendation to use std::exception and std::runtime_error. Exception-safe code is very hard to write well, and usually the exception paths are not effectively tested anyway (as they're hard to test). You're almost better off crashing and letting the auto-program-restart functionality get you back up and running.
Safe use of exceptions requires use of RAII, which means use of unique_ptr and friends. I've seen a lot of benefit to crashing early rather than trying to recover when architecting the projects that I've led recently. This is actually one of my complaints with how WPILib handles errors. When was the last time you checked the error object associated with each WPILib object after reading a sensor? As Peter points out, exceptional error paths are rarely well tested. You are likely to hit an obscure bug in your recovery path and crash in a bizarre way from that. At that point, you are way better off crashing early and printing out a nice error at the same time. Of course, every time you hit a crash, you should take the time to understand why it was hit, and fix the bug that it exposed. I've also had similarly good luck with using assert statements to verify assumptions in code, for all the same reasons.
I've been following the Google C++ style guide for about 5 years now and been happy with it. We add some small additional constraints when working with real-time code.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.