How does your team incorporate engineering units?

I’m hoping to get feedback on how teams are already managing engineering units within their code.

Several of the languages used in FRC can provide compile-time checking to verify that units of distance and velocity, for example, are not combined incorrectly within your code. The upside is that you have a compiler watching your back as you make algebraic statements in your code to update an actuator. The downside is that compilers are sticklers and your code may actually become harder to read, or the points at which you connect to WPILib, which doesn’t use units, may just become littered with casts and such.

Is anyone using unit features within the languages or libraries? Do you have another system that you’ve adopted?

Greg McKaskle

Didn’t even know this feature existed.

For one, my team (we use LV) does not use this unit feature.

To begin with, we weren’t aware of the feature at all. (Though most of the senior coders have some C++ experience and know how typedefs can be used in C++.)

Also, I don’t think it’s likely we’ll use this feature in the future. My personal opinion is that all our numerical data should be the same data type (i.e. doubles, or rather the LV equivalent). This makes life easier when coding and debugging since we don’t have to cast everything, which means: 1) the code is less cluttered; 2a) we can use LV and WPI libraries more easily, since they all take standard numerical inputs; 2b) we can ensure our own utility VIs will function for multiple modules of code (which may be using different units).

Moreover, the feature seems to be more of a crutch than a solution. Unit conflicts can easily be solved with good planning, sanity checks, and testing.

tl;dr: we haven’t heard of it before, and we probably won’t use it, since it’s easier to code without it and the issues can be solved with good coding practice.

I suggest you research the Mars Climate Orbiter for a good example of how a seemingly obvious bug can slip through the cracks, with catastrophic results, in spite of having some of the best & brightest programmers and most stringent QA processes.

The implication of the above being that not only would that problem not have slipped through, but also that no other problems would have been introduced by harder-to-read code littered with casts:

In the absence of a sizable independent controlled study of both approaches, no definitive conclusions can be drawn.

That’s not what I was trying to imply. My point is that people are imperfect. All software has bugs. Using sound software engineering practices to expose them as early in the development cycle as possible is a Good Thing. Ultimately it’s a tradeoff of time/cost for QA vs. anticipated impact of a bug that slips through the cracks.

In the case of the Orbiter, the integration testing obviously didn’t include the test case that would have exposed the unit mismatch - or the verification side of the test wasn’t properly written, or nobody looked at the results, etc.

Would stronger type/unit checking native to the compiler(s) have caught the problem? Would programmers have “fixed” the compiler warnings/errors by littering their code with typecasts to make them “go away”, thereby making the code harder to read/maintain - and possibly causing this exact bug to slip through anyway? We’ll never know.

Our team’s codebase is mostly devoid of values that would be in physical units anyway. Generally, we code to what our sensors provide. Instead of measuring out a path, calculating the number of rotations, then the number of encoder ticks that would take, we take the robot, zero the encoders, and run through the path, recording their values directly. I distrust any assumption (at least in the FIRST timeframe) that a quantity defined in physical units can be cleanly converted to a value from the robot’s sensors. If we had the time to create a proper simulation, then perhaps I would use real-world units more, but since our only platform is the robot, we use the values directly from the robot.

As an example, we have this class which allows us to do motion planning from teleop mode. Map this command to a button, and it prints out the lines of code to put into the autonomous mode, using the values from the sensors directly.

We typically convert sensor values to a physical unit, if it makes sense to do so. For example, we convert our drivetrain encoders to feet. However, we only use that unit (easy to do on a small development team, harder to do on a large development team that spans multiple companies, like the Mars orbiter). When there isn’t a straightforward conversion, or where that conversion doesn’t “help”, we leave it in sensor units. We have not used LabVIEW’s convert unit feature.

To the discussion of trade-offs between the benefits of unit checking versus adding to the complexity of the code I will add that making the code harder to read has a very serious downside for FRC that does not exist in some other applications. Namely that FRC code often needs to be very quickly modified and tested between rounds. Adding to the difficulty of reading the code can be a real problem. We have generally tried to include sensor calibration information in the comments for each method.

When we used LV, we were aware that LV had lots of useful data types, but we didn’t use the feature mentioned above. To solve the problem of inconsistent units we made all of our outputs clusters that we could unbundle, and each element of the bundle would have a meaningful name with units. Having all the bundle/unbundle, index/build array VIs everywhere got confusing, especially while trying to figure out arrays of clusters and clusters of arrays. In order to simplify the block diagrams, we decided to do all of our math in formula nodes, and figure out all of our equations on a piece of paper, while making sure to check units. This system works, but explaining to newer programmers what typedefs are, and how 3D arrays are split into 2D arrays, was really tough, and was why we switched to Java.

We always use the same units and are allergic to things like encoder counts, loop counts, magic constants, etc. All public interfaces speak in terms of standard units, with the exception of open loop speed commands, which are in the range [-1, 1]. All return values and arguments are named so that the units are explicit, and a common library is used for any necessary conversions.

Generally we use:
Distance = inches
Time = seconds
Angles = degrees (in yaw 0 degrees is robot forward, increasing clockwise)

Rates and accelerations are therefore inches/sec, deg/sec, etc. Inches are used because FIRST field drawings are usually spec’d in inches, so it makes for intuitive autonomous mode scripting. We use degrees for angles due to intuition. 0 degrees is north because usually if we care about degrees we are talking about autonomous mode, so the robot’s starting orientation defines the coordinate system.

I’ve played with libraries like Boost Units and javax.units, but always found their syntax clunky.

Learning Boost.Units has been on my to do list for a long time but I’ve never gotten around to it. In my own code, I tend to just do things like:

typedef double Foot;

It’s not much more than documentation though.

Thanks for the info thus far.

I didn’t really expect very many teams to use SW tools to enforce and check units, but thought I’d check my assumption.

If teams were trying to use them, it would be helpful if the WPILib sensors and actuators had built-in support for the advanced types.

Any other input as to whether this would be a useful feature? Do you think it would encourage more/better sensor integration? More/better autonomous?

Greg McKaskle

This feature is bad because it would use engineering time that ought to be used to fix real problems.

-WPIlib C++ code is flawed. There are places where it may dereference a null pointer. There is a function that takes a lock, looks up the address of a global variable, unlocks, and returns the address. If you’re interested I can post a more detailed list of problems.

-The low-level interface is undocumented. This means that to bypass WPIlib requires guesswork about what the field requires. And when people ask for the documentation they’re asked to take a hike: https://decibel.ni.com/content/thread/17785

-The tool chain is out of date. It looks like some people are trying to fix that: http://www.chiefdelphi.com/forums/showthread.php?t=116921; it would be nice if something like that was available out of the box.

-Problems go unfixed for years. Here’s one from 2011: http://www.chiefdelphi.com/forums/showthread.php?t=89255. It does the same thing today. If the file to download doesn’t exist then the error message is “IO Error while downloading program to robot” rather than “File not found at path [insert path here]”.

I think it’s fun to add features too but I think there are bigger fish to fry.

I agree. It’s really frustrating to hear that all these great new features have been added at kickoff (robot simulator, robot builder…), but something really, really basic, like the CPU graph, got messed up because somebody made a stupid mistake and changed working stuff after the beta test.

I’d rather see development time spent on testing and reviewing code that’s going to be released to teams. Like including a driver station log viewer that can actually open driver station log files. Or, a debugger that works over a wireless connection in java. In the past, stupid little bugs like having every other encoder work really hurt teams.

However, the thing that bugs me the most is the documentation page included in NetBeans. For new programmers, having a nice place to look is really helpful. On the main documentation page in NetBeans, the second line said “This document is a work in progress, more to come by final release”; this needs to be removed. Also, none of the three demo projects listed can be opened, and one of the sites linked was last updated in 2010. I had some kids trying to get the v20 cRIO image from 2010. Also, as the previous poster has pointed out, some of the code is just too complicated and poorly implemented to be really useful, like the totally incomplete vision libraries. Before a single new feature is added, ALL of WPILib should be documented.

Thanks for giving your reasons. I’ll try to give a brief comment on each of these.

Code is flawed … I don’t directly contribute to the C++ or Java versions, but http://firstforge.wpi.edu is where the bug tracking takes place. Opinions on how the libs are bloated may not make a good bug report, but issues such as you listed sound like good bug reports. Do you know if the detailed issues are captured?

Low-Level is undocumented … The low-level interface is machine generated and as pointed out on the forum, it can change if a new FPGA image is built, if HW changes, etc. This undocumented interface is a pretty straightforward peek/poke interface with some locking and indexing built in. It was not intended that it be used directly or that teams would develop their own language support without being on the control system team. Do you have specific questions that are keeping you from making progress?

Tool chain … The tool chain for C++ is indeed out of date and it is intended for professionals or well-mentored teams. Both the Java and LabVIEW tools are modern and simpler. The new control system will offer more modern tools for C++. And of course, this is also a matter of opinion. Arduino is simple, LEGO is simple, but what level of challenge is appropriate for the FRC challenge? I think it is important that the students get exposed to real engineering, and that isn’t always simple. That is where the mentors come in.

Problems … The issue listed sounded like it was unique to vxWorks tools. It isn’t something that can be fixed by NI or WPI.

Please don’t take my comments the wrong way. I agree that the libraries should put the majority of emphasis on the fundamentals of getting the robot to do what was intended. I’m not itching to implement units, but if they are a significant gap that teams struggle with, the LV solution is relatively easy to implement – by teams or by NI. I’m not sure they are needed, thus the question.

Thanks for the feedback.
Greg McKaskle

… , but something really, really basic, like the CPU graph, got messed up, because somebody made a stupid mistake, and decided to change working stuff after the beta test.

I’d rather see development time spent on testing and reviewing code that’s going to be released to teams. Like including a driver station log viewer that can actually open driver station log files. …

Actually, I confess. I was the person who broke the CPU plot, but it was broken during the beta test, not after. And the break was due to extensive changes in what was being logged at the DS. Logging changes were also the reason the log file viewer wasn’t updated at kickoff – it wasn’t ready, so we shipped the previous year’s until the updated version was ready. This wasn’t an oversight, but a conscious decision. It was a trade-off caused by what took place on Einstein months before.

I apologize for the mistakes, but I’d hope that you didn’t characterize them as “stupid” to students you mentor. Tools and products around you will include mistakes, and having discussions with the students about the issues is great. But mistakes are often caused by factors other than stupid.

Greg McKaskle

Sorry if my post sounded kind of harsh. I didn’t mean it that way. The fact that the CPU plot was broken during the beta test is news to me, and don’t worry - by “stupid” mistake, I mean an issue that could be solved easily (which it was, in the first update), rather than something that was actually “stupid”. I understand that a lot of the work done is by volunteers, and demanding perfection isn’t right. I know that you’re the LabVIEW guy, and I’ve only used LV for one year, but my experience with the libraries was that they were significantly more “polished” and documented. I could go through all the different subVIs for the motor set VI, and I could understand every portion of it, even without much LV experience.

The C++/Java libraries are different. Large portions are written quite well and implement object-oriented programming concepts nicely. Other portions are a little less great. For instance, whenever you use one of the “canned” PID controllers, it automatically puts it in a new thread and updates the output with this thread. While it’s really easy to get insight into why some things are the way they are with LabVIEW (talking to you), it’s a little bit trickier getting a bug fixed for Java. Many of the little frustrating things (like the debugger) are reported, but ignored year after year.

If the LabVIEW libraries work (which I think they do), then, by all means, the unit feature would be awesome.

As for Java/C++, there’s important missing documentation, so I think that teams would benefit from improved help rather than a new feature.

I haven’t submitted any bug reports to WPIlib myself, but one of the other mentors on my team has submitted six. Three of them say that they’ve been fixed in the next release, but their fixes aren’t available to us since we don’t have access to their private repository. The other three appear not to have been touched; they are still listed as unassigned in the bug tracker. What do you mean by captured?

As far as the low-level interface, the big question I have is what is the minimum set of calls that I must make to have the robot operate during a match and when must they be called? I would assume that at least some of the things in the robot template could be omitted, such as reporting the programming language. But I don’t actually know this and it might be that the FMS will lock out a robot that omits that step. I know that if you screw up the calls that update the virtual LCD text you can make the robot become disabled.

Does LabVIEW also use all of the routines that appear in “FRCComm.h” underneath, or does it have its own set of communications primitives?

This is good to know and I hadn’t heard this before. I’m glad to hear this is in the works even if it won’t be ready for the 2014 season.

Who would be the right person to bother about the code downloading problem? This occurs when you hit the menu item “FIRST->Deploy”. Who implemented the customizations of WindRiver Workbench? Do you know whether they have a bug tracker?

Sorry if all my questions are outside of your department.

By captured, I simply meant recorded. If you do not have access to the stable builds, then you may be able to check with a beta team.

The calls into FRC_Network_Communications for things like language are not required, but they are helpful so that FIRST and the controls team know what resources are being used. We may still need to poll, like this one on units, but we will have hard data on how many teams use various features that are present and being tracked.

Do you have a reproducible way to get the LCD to mess things up? That info never leaves the DS, so it would probably just be a bug that went unidentified. LV does use the same FRC Comm calls, but it uses a different mechanism for doing peek and poke to the FPGA. The purple nodes that it uses predate the FRC project and ChipObjects were written to allow C++ and later Java to more easily interact with the FRC register set that is compiled into the bit file.

If you think the deployment issue has to do with the project extensions and settings, I’d probably put them on the FIRSTForge site. The vxWorks tools are dated and require that they are installed with no spaces in the name. If the path limitation is the issue, then it is inherent in the WindRiver tools. They may have fixed it in a newer version of workbench, but NI has decided against migrating to newer versions.

No need to apologize. I understand how frustrating it can be when you don’t see the sort of progress you desire and instead see energy being put elsewhere. I was just trying to make us all more productive.

Greg McKaskle