Quote:
|
If arrays and clusters are atomic, wouldn't that mean if a 5ms task wants to write a single element to a large array while a 100ms task is grabbing a local copy of that same array, the 5ms task has to wait for the 100ms task to finish copying the entire array?
|
Yes. When accesses happen in parallel, each one must acquire a mutual exclusion object, or mutex, and until it successfully gains the mutex, it cannot touch the data. The wait could be because someone else is writing the data, or because someone else is reading the data it wants to write. This is the same mutex you'd better put around your accesses in C/C++ code or you'll corrupt data. LV just does it for you.
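To make that concrete, here is a rough C++ sketch of the equivalent hand-written protection. This is not the code LV generates; the array size and thread bodies are invented just to show why the single-element write has to wait for the whole-array copy.

// Both the element write and the whole-array copy take the same mutex,
// so the "fast" writer blocks until the "slow" copy finishes.
#include <array>
#include <mutex>
#include <thread>

std::array<double, 100000> shared_array;  // the large shared array
std::mutex shared_mutex;                  // guards every access to it

void write_element(std::size_t i, double v) {
    std::lock_guard<std::mutex> lock(shared_mutex);  // waits while a copy is in progress
    shared_array[i] = v;
}

std::array<double, 100000> copy_array() {
    std::lock_guard<std::mutex> lock(shared_mutex);  // holds the lock for the entire copy
    return shared_array;                             // local copy of the whole array
}

int main() {
    std::thread slow([] { auto local = copy_array(); (void)local; });  // 100ms-style task
    std::thread fast([] { write_element(42, 3.14); });                 // 5ms-style task
    slow.join();
    fast.join();
}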
Quote:
|
I don't understand the "vice versa" part. Are you saying that the teleop code can preempt the 5ms periodic task that reads the accelerometer? Under what conditions would this be necessary, and how would that affect the 5ms task's ability to complete on schedule?
|
If the scheduler determines that it is the best thing to do given the priorities and resource needs, the scheduler can halt any loop or any code in any task and schedule another. A good example of where a short task may be suspended, even in favor of a lower priority task, is when the high priority task blocks waiting for asynchronous data, or blocks waiting for something like an event or timer. LV-generated code may contain strategically placed yields so that the scheduler is encouraged to run, but except for mutexing for data correctness, it doesn't interfere with the scheduler's job of allocating the CPU and other resources.
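As a rough illustration, here is a C++ sketch (again, not LV code; the task names and the 50ms delay are made up) of a high priority task that blocks waiting for an event, during which the scheduler is free to run a lower priority task.

// While the "high priority" task is blocked in wait(), the OS scheduler
// can run any other ready task, including lower priority ones.
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable data_ready;
bool ready = false;

void high_priority_task() {
    std::unique_lock<std::mutex> lock(m);
    data_ready.wait(lock, [] { return ready; });  // blocks here; CPU goes to other tasks
    std::puts("high priority task woke and processed the data");
}

void low_priority_task() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // runs while the other task is blocked
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    data_ready.notify_one();  // the event that unblocks the high priority task
}

int main() {
    std::thread hi(high_priority_task);
    std::thread lo(low_priority_task);
    hi.join();
    lo.join();
}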
Note that the timed loop structure is a special case in all of this since internally it implements its own scheduling and determines what to expose to the system scheduler.
Greg McKaskle