Cross-VI Communication - best practices?

I’m looking to do some experimentation in the off-season - our team is trying to build an infrastructure that lets us use some sort of interface between VIs, allowing us to quickly “drop in” test code instead of product code.

A simple example would be something like:
Gyro sensor (data source) -> Gyro Data Interface (stores data until needed) -> Drive code which consumes it (data sink)

The idea being we can implement both the data source and data sink independently, as well as letting us switch out the source/sink with test code so we can effectively unit test.
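For what it's worth, the decoupling you're describing can be sketched in text form. LabVIEW itself is graphical, so this Python analogue (all names invented) is only an illustration of the idea: the sink depends only on the interface, so the source can be swapped for test code without the sink knowing.

```python
class GyroInterface:
    """Buffers the latest gyro reading until the sink asks for it."""
    def __init__(self):
        self._angle = 0.0

    def write(self, angle):
        self._angle = angle

    def read(self):
        return self._angle

def make_source(read_fn):
    """A data source is just 'read the sensor, write the interface'.
    Swapping read_fn between hardware and a canned value swaps
    production code for test code."""
    def source(iface):
        iface.write(read_fn())
    return source

def drive_sink(iface):
    """The sink only knows about the interface, not the sensor."""
    return 0.5 * iface.read()   # e.g. a made-up proportional correction

# Unit test with a fake source -- no hardware needed:
iface = GyroInterface()
fake_source = make_source(lambda: 90.0)
fake_source(iface)
print(drive_sink(iface))  # 45.0
```

The point is that both ends can be implemented and tested independently, which is exactly the "drop in test code" goal.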

However, we’re not sure of the best way to actually implement this, and we don’t know the effective pros and cons of the options either.

  1. Global variables
    Pros: easy to use
    Cons: easy to cause race conditions without some sort of protection

  2. Functional Global VIs
    Pros: All the goodness of functional globals (easy to extend, allows us to change the method of data storage/retrieval without callers knowing)
    Cons: Need to spend more time creating a number of VIs for each chunk of data being shared. Also, if there is one data source with multiple data sinks, won’t all the callers queue up? I’m worried this could cause delays.

  3. Queues/Notifiers
    Pros: Don’t need to write a lot of infrastructure
    Cons: The source and sink need to share a piece of data (the name of the queue/notifier), meaning there is potential for a runtime error that might not be caught at compile time unless we keep a list of that data in a global somewhere. Also, the data source needs to manage what’s on top of the queue, since there could be an arbitrary number of callers - this may not be efficient?

  4. Events
    Pros: Very easy to have one data source and several data sinks
    Cons: (This may just be my experience) Writing event infrastructure seems fairly complicated, and involves a lot of objects on the diagram

Are there other types?

Any recommendations for implementation?

We have used global variables rather extensively, and typically run the data sink in a much faster loop than the data source; for example, the data sink being a 5 or 10 ms drive loop with the data source being teleop at 20-50 ms. We haven’t experienced noticeable issues with race conditions, because the ones that might arise are short-lived enough with a comparably steady-state input signal (changing much more slowly than the output updates). Similarly with sensors, we’ve had the timing such that any issues that arise are fleeting and insignificant.

We have enjoyed the ability, with a setup like this, to easily move control of the functions between teleop, autonomous, and various automated routines by having them tag-team on the updating of globals which command output state.

I can’t say I know much about the other options you listed outside of their existence and basic operation.

I’m not sure I understand enough about your needs, so I’ll start with a few observations/questions. Then I’ll add a bit of comparison info to the four options you listed.

It seems like you are looking for a looser coupling between your robot components. The first decision I think you need to make is whether you need asynchronous or synchronous scheduling between the components.

The synchronous debugging module would be placed opposite the production code (via a case statement), and a global switch, or even a compile directive, would select the correct version. Another option is to use the Call By Reference mechanism to invoke code; swapping in debugging code can then be done simply by changing the init code that builds the interface of refnums. This is almost an object-oriented approach, but using only the interface. You can go further here and even have the interface init driven by a text file if you want additional flexibility/scriptability.
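To make the Call-By-Reference idea concrete, here is a rough Python analogue: init builds a table of callables (playing the role of the refnums), and swapping in debug code is just a change to the init step. All names here are made up for illustration; in LabVIEW the table would hold strictly-typed VI references instead.

```python
def prod_get_gyro():
    return 12.5          # stand-in for reading the real sensor

def debug_get_gyro():
    return 0.0           # canned value for bench testing

def init_interface(debug=False):
    """Build the interface table once; callers only ever use the table."""
    return {"get_gyro": debug_get_gyro if debug else prod_get_gyro}

# A text file could drive the same choice for extra scriptability:
# debug = open("config.txt").read().strip() == "debug"
iface = init_interface(debug=True)
print(iface["get_gyro"]())   # 0.0
```

Because the rest of the code only ever calls through the table, nothing downstream changes when the init swaps implementations.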

An asynchronous version can be done similarly, but typically includes globals or other communication mechanisms that you list. This also lets you run the components as individual top-level VIs that can be swapped in and out without restarting the others.

Built-in globals or functional?
The built-in is easier and more efficient if all you need is read and write. The benefit of functional globals is their flexibility to add other operations such as read-and-clear, accumulate, or data-query. Putting the operation inside the functional global keeps all operations atomic to avoid race conditions. Of course, the locks necessary for making them atomic mean that some operations are unavailable and need to wait or bounce off. Typically, these data tasks are microseconds in length, and there is no reason to worry about delays. Note that built-in globals already guarantee that read and write are atomic.
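In text-language terms, a functional global looks roughly like the following Python sketch: one entry point, a command selecting the operation, and a lock keeping each operation atomic (in LabVIEW the non-reentrant subVI itself provides that atomicity, so no explicit lock appears on the diagram). The operation names are just examples.

```python
import threading

_state = {"value": 0.0}
_lock = threading.Lock()

def fgv(op, x=None):
    """One subVI-like entry point; each whole operation runs atomically."""
    with _lock:
        if op == "write":
            _state["value"] = x
        elif op == "read":
            return _state["value"]
        elif op == "accumulate":       # extra operations are easy to add...
            _state["value"] += x
        elif op == "read-and-clear":   # ...without callers having to change
            v = _state["value"]
            _state["value"] = 0.0
            return v

fgv("write", 2.0)
fgv("accumulate", 3.0)
print(fgv("read-and-clear"))  # 5.0
print(fgv("read"))            # 0.0
```

Notice that read-and-clear is a single atomic step; done with a plain global, a writer could sneak in between the read and the clear, which is exactly the race a functional global prevents.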

Queues, notifiers, events:
And you may as well toss in Occurrences, Rendezvous, Semaphore, and the RT FIFO. Each of these brings a unique scheduling capability and fills a niche.
The simplest of these is the Occurrence. It doesn’t hold data; it only signals remote code that some pre-agreed trigger took place. This is very low level, and some of the corner cases around bootstrapping are quite confusing, but it is the basis of the other synchronization options and is sometimes useful for rolling your own mechanism when you want to control storage but need a trigger.
The notifier, queue, and RT FIFO are closely related, implementing different strategies of read (waiting, lossy or lossless, etc.) and write (wait or overwrite, lossy or lossless, etc.). The RT FIFO has the most flexibility, and therefore the most confusing API. The semaphore is rarely needed, since any non-reentrant subVI implements a critical section, which is much safer than using semaphores that can easily become unbalanced and lead to lockup. The rendezvous is a coordinated countdown gate: everyone waits until a predetermined number of code paths have reached the rendezvous point. Again, not commonly needed.
I think the only one that hasn’t been covered is the User Event, which is almost identical to the notifier and queue except that on non-RT targets you can integrate it with UI events.
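The lossless-versus-lossy distinction above can be loosely sketched in Python terms (this is an analogy, not the LabVIEW API): a queue delivers every element exactly once, in order, while a notifier keeps only the latest value, so late or slow readers just see the newest data.

```python
import queue
import threading

# Lossless, queue-style: the consumer sees every element, in order.
q = queue.Queue()

def producer():
    for v in (1, 2, 3):
        q.put(v)               # like Enqueue Element

def consumer(out):
    for _ in range(3):
        out.append(q.get())    # blocks until data arrives, like Dequeue

received = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
producer()
t.join()
print(received)   # [1, 2, 3]

# Lossy, notifier-style: later writes overwrite earlier ones, and a
# reader only ever sees the most recent value.
latest = {"value": None}
for v in (1, 2, 3):
    latest["value"] = v
print(latest["value"])   # 3
```

For sensor data feeding a control loop, the lossy/latest-value behavior is usually what you want; queues matter more when every message must be handled (commands, logging, etc.).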

Of these, I’d say I end up using the synchronous switch on debug/nondebug code quite a lot, as it keeps synchronous things synchronous and simple. If things were already asynchronous, they are already using the global or functional global and the test code is easy to incorporate. Personally, I don’t find myself using the others that often. They are special purpose and I guess I just don’t have the need.

Just to throw it out there, the actor framework seems to be all the rage now amongst high-end LV developers, and if your team is ready to dip your toe into the OO pool, it is definitely a flexible and useful pattern, but it is also pretty technical.

I didn’t comment on the name mechanisms for the notifiers and queues. If you build a subVI that returns the name and encourage folks to use the subVI, you won’t have to worry as much about misspelling. You can do this as a constant function or with a selector if you have a few closely related refnums.
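The name-subVI trick above looks roughly like this in text form (a Python sketch with invented names): one function owns the agreed-on names, so a typo becomes an immediate, loud error at the call site instead of silently creating a second queue.

```python
# One place owns the refnum names; everyone else calls comm_name().
_NAMES = {"gyro": "GyroDataQueue", "drive": "DriveCmdQueue"}

def comm_name(which):
    """Return the agreed-on queue/notifier name; a typo raises KeyError."""
    return _NAMES[which]

print(comm_name("gyro"))   # GyroDataQueue
# comm_name("gyrro")       # would raise KeyError instead of quietly
#                          # obtaining a brand-new, empty queue
```

The selector version is handy when a few closely related refnums share one helper, as described above.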

I doubt this answered your question, but hopefully it gives some info so you can ask more detailed questions. Hope it helps.

Greg McKaskle

Greg, as usual, your posts are extremely informative.

Your approach of using globals as the data-passing mechanism and using global switches / text file initialization to switch between test code and production code seems simple and robust enough for our needs.

That being said, I watched some videos and read up on the Actor framework - really interesting stuff! I’ve been looking to try LabVIEW classes on the robot, but could never quite figure out the implementation. I suspect that has more to do with our poor design rather than any inherent implementation limitation.

As you pointed out, the issue really is with tight coupling - we probably need to do a better job encapsulating functionality so that it’s easier to implement something lighter.

Every year I try to move our team more and more into a compartmentalized and modular framework.

It’s not always easy to find adult software engineers who actually have a firm grasp of tasking and software architecture, much less high school students.

Greg ^^ has some great suggestions and I honestly don’t have anything to add to them, but I can relay our personal experiences.

Over the past two years we’ve been using global data objects, using semaphores to handle mutual exclusivity, and our students have been able to grasp the concepts fairly well.

Next year I’d like to open them up to functional global variables, but there are trade-offs to weigh against having a code base that is too… ‘well established’: if most of the code is already written and doesn’t need to be touched, then your students may not be able to learn how it truly works.

Once the current set graduates, if we find ourselves lacking someone who really understands the whole framework, I fully intend to break the traditional set of rules I follow in the workplace, throw what we have over the shoulder, and willfully reinvent the wheel just to counter the above.