I’m not too good with LabVIEW and I’m using it to implement some code to run on the cRIO.
The very basic algorithm I have to implement is:
a) Apply a user-defined setpoint to a control system, then measure its output and control effort for a user-defined number of samples, sampling at a user-defined sample time (e.g., 40 samples at 20 ms).
b) Apply a different setpoint that is dependent on the data acquired at step a), then perform the same measurements.
c) Using the data measured at a) and b), along with some other user-defined parameters, perform the calculations (I’m using MathScript for that), then iterate the whole procedure a user-defined number of times.
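Roughly, in Python-flavoured pseudocode, just to make the flow concrete (the real implementation is a LabVIEW block diagram, and every name and calculation below is a placeholder, not something I actually have):

import random
import time

def acquire(setpoint, n_samples, sample_time_s):
    # Stand-in for the real acquisition: record (output, control effort) pairs.
    data = []
    for _ in range(n_samples):
        output = setpoint + random.uniform(-0.1, 0.1)   # fake measured output
        effort = 0.5 * setpoint                          # fake control effort
        data.append((output, effort))
        time.sleep(sample_time_s)
    return data

def run_procedure(setpoint, n_samples, sample_time_s, n_iterations):
    for _ in range(n_iterations):
        # a) apply the user-defined setpoint, measure output and control effort
        data_a = acquire(setpoint, n_samples, sample_time_s)       # e.g. 40 samples at 20 ms
        # b) the second setpoint depends on the data measured in a)
        second_setpoint = sum(out for out, _ in data_a) / len(data_a)
        data_b = acquire(second_setpoint, n_samples, sample_time_s)
        # c) calculate something from both datasets, then iterate everything again
        setpoint = 0.5 * (second_setpoint + sum(out for out, _ in data_b) / len(data_b))

run_procedure(setpoint=1.0, n_samples=40, sample_time_s=0.02, n_iterations=3)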
Any ideas on what’s the best way to do it? Please ask for clarifications, if needed.
It will be run from a PC - I have a nice front panel set up, and some auxiliary functions (PID loop, simulated - for now - plant, user input, etc.) working.
What I really can’t get to work is the algorithm outlined above. Data dependency and parallel execution just don’t seem to go well with simple algorithms - I’m sure there’s a way to get it to work, though. :ahh:
Do you think you can draw your algorithm as a flow chart? If you can, it will be much easier to turn your pseudocode (flow chart) into LabVIEW code (data flow).
Do you have any part of your program already implemented in LabVIEW? If you do post a screen shot or a VI so others can give you tips on what to do next.
If you’re having problems getting specific sections of the program to run in a specific order, don’t fret. Simply use the Flat Sequence control structure, putting each section in a frame of the sequence. The code in the first frame will run to completion, then the second frame, and so on.
I tried that, and it didn’t work - for my purposes, that is.
Since I’m generating data that has to be sent sample by sample to the control process, using the flat sequence structure (with a for loop inside it, both to generate the data and to build the arrays of collected information) only gives me a batch of data after the frame has run - and it freezes the rest of the program, such as the PID calculations, while it runs.
I’m not sure I’m being clear enough, again, please ask for clarifications!
I’m confused as to what you are after, so I’ve attached a typical low-tech UI wrapped around a loop that does single-point I/O, control, and output, and accumulates the values for further processing. Perhaps some part of this helps.
If you want a higher tech UI using the event structure, I’ll post one of those too, but since RT doesn’t do events, these simple polling UIs are often useful.
By the way, the False case just passes the wire through, with a 50 ms delay to slow down the UI and avoid reading things a million times a second. You could also put processing or display of the entire dataset there.
Also, I’ll describe another thing that is often useful. Build a simple subVI that takes in your setpoint, the process variable, and the output, and charts all of them on its panel, similar to what you’d see in a textbook. You can drop one of these near any control block and wire it up. While the panel is closed, not much is going on, but you can open it, bump the setpoint, and watch the oscillations and other effects of the control. Close the panel when you don’t need it.
Thanks for the help, but the control part is already working - in simulation, at least.
What I really need to do is the simpler stuff that I outlined above:
output a specific setpoint, and measure output and control variables for a period of time
only after the first experiment is over, output a different setpoint, dependent on the output variable measured in the first part of the experiment, again, for some defined period of time
calculate using the data obtained in the two experiments, then iterate
You can replace setpoint, output, and control variables with any names you want, since that’s not the part I’m stuck on - it’s the algorithm itself.
I think the thing I’m missing is what your overall goal is, autotuning perhaps?
Take the nested loop I posted and, after the inner loop (where the “Save to disk, etc.” label is located), put your code to analyze the data and recompute the coefficients, then have them update the local variables before the next loop runs. Then change the Run button controlling the case into some logic for how often you want it to run (or simply run it every iteration) and control how often the outer loop runs.
Yes, the goal is autotuning, but I think that is irrelevant with respect to the simple problem I’m facing.
In the simplest terms, what I want is to run a for loop a number of times without blocking execution of the rest of the code (hence, a flat sequence doesn’t work) and, after that loop is finished, run a different loop, again without blocking execution.
I need data dependency between the two for loops, but not for each iteration - only when the first loop has completely finished. I’m thinking a state machine that changes state by comparing the current iteration count with the desired number of iterations should work. Will it or will it not work, or is there a better way?
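Something along these lines is what I have in mind (a Python-flavoured sketch of the logic only - the names are mine, and in LabVIEW it would be a case structure with shift registers that gets called once per 20 ms pass, so nothing else is blocked):

class ExperimentStateMachine:
    # One step() call per pass of the main loop; the state only advances
    # when the iteration count reaches the desired number of samples.
    def __init__(self, n_samples):
        self.state = "EXPERIMENT_1"
        self.n_samples = n_samples
        self.count = 0
        self.data_1 = []
        self.data_2 = []

    def step(self, measurement):
        if self.state == "EXPERIMENT_1":
            self.data_1.append(measurement)
            self.count += 1
            if self.count >= self.n_samples:        # first loop completely finished
                self.state = "EXPERIMENT_2"
                self.count = 0
            return 1.0                               # placeholder setpoint for experiment 1
        if self.state == "EXPERIMENT_2":
            self.data_2.append(measurement)
            self.count += 1
            if self.count >= self.n_samples:
                self.state = "DONE"
            return 2.0 * self.data_1[-1]             # setpoint depends on experiment 1 data
        return 0.0                                   # done: hold a safe setpoint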
I don’t understand why a flat sequence doesn’t work. Do you need the loop to continue to function while you are computing new values?
My understanding is this:
You have other tasks unrelated to this that must continue.
You run your control for a while.
Stop this control loop (but nothing else).
Calculate new inputs for the control loop.
Run the control loop with the new inputs for a while.
LabVIEW runs everything at the same “level” at the same time. If you put “the rest of the code” next to the sequence rather than inside it, the loop inside the sequence won’t block it.
I’m pretty sure your description of what you want isn’t getting your situation across clearly enough for us to understand the problem. The way I’m reading it, there doesn’t seem to be a problem.
Is there something inside the loop that needs to be used by “the rest of the code”? If that’s the case, you might try using a global variable to extract the value from inside the loop and use it in the parallel task.
Is there something in “the rest of the code” that needs to run synchronously with the loop execution? If so, you could try turning that part into a subVI and include it as part of the loop.
OK, as I organized my code to share with you guys, I think I made significant progress.
I have attached the code below.
With the main VI running, toggle Activate control to initiate the PID calculations. When you click Run IFT, the IFT_loop VI is called, and that’s the only part not working the way I want it to.
Looking at the code for that particular VI, you can see I have a flat sequence with a for loop in each frame. The problem I’m facing is that I want to output the “Setpoint out” to the main VI on every loop of the program (every 20 ms), and not only when the frame is finished, as it is currently implemented. Using property nodes to set the value is probably a bad idea, and it’s not working anyway. I believe I’m missing something really simple now, and I think the problem should be clear to understand at this point. If it is frustrating trying to help without understanding the problem, imagine not being able to explain it clearly, as I was before (hopefully not now). Communicator’s fault, of course.
Once you get your code worked out I’d be very interested to see how you have gone about implementing autotuning/optimization. My MS thesis was on simulation-driven approaches to optimal controls.
It makes much more sense now. This is why we still program computers with code, and not descriptive paragraphs in a forum.
LV executes as a synchronous dataflow push language. Since it is synchronous, nodes synchronize their execution and data production: a subVI will not produce its outputs and write to its caller’s wires until it completes, and it will not begin until all of its inputs have arrived. In your case, this means that the IFT runs once each time your PID and model run. Its internal loops run to completion, and it returns the last value of the loop each time it is called.
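An analogy in text form, since LabVIEW itself is graphical (the names and numbers here are made up): the subVI behaves like a function call, so its caller only ever sees what comes out after the internal loop has finished.

def ift_subvi(n_samples):
    setpoint = 0.0
    for i in range(n_samples):
        setpoint = i * 0.1     # intermediate setpoints never reach the caller
    return setpoint            # only this final value is written to the caller's wire

print(ift_subvi(40))           # the PID/model loop gets exactly one number per call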
What to do about this. You have two options. I’ll describe both and let you decide which direction to go.
Option A is to merge the IFT loop and state with the loop in the caller, Main. Assuming your setpoint will change more slowly than your PID and other elements execute, you will continue calling IFT each iteration, and occasionally it will change the setpoint, advancing it to the next value in the sequence. It will need something to trigger off of, and I think time is the appropriate choice here. You can use the computer clock and have each of the subVIs read the clock and make changes as time advances. It will be somewhat more flexible if you move from computer time to an abstract time that you advance independently of the computer clock. This would be necessary if your system responds on a very different time scale than the computer - simulation of an ice age or a tokamak reactor, for example.
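A sketch of what Option A’s logic could look like (Python-flavoured, names are mine, using the computer clock; the abstract-time variant would just replace time.monotonic() with your own counter):

import time

class SetpointScheduler:
    # Called every iteration of the merged main loop; it only advances the
    # setpoint when its dwell time has elapsed, otherwise it repeats the current one.
    def __init__(self, setpoints, dwell_s):
        self.setpoints = setpoints
        self.dwell_s = dwell_s
        self.index = 0
        self.t_last = time.monotonic()

    def current(self):
        now = time.monotonic()
        if now - self.t_last >= self.dwell_s and self.index < len(self.setpoints) - 1:
            self.index += 1          # advance to the next value in the sequence
            self.t_last = now
        return self.setpoints[self.index]

# In the main loop: setpoint = scheduler.current(), then run the PID and the model with it.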
The second choice is to move some of your dataflow from synchronized wires into unsynchronized storage, such as a global variable. If you take the setpoint (and perhaps MV; since this doesn’t change, I’m not sure what to do with it) and make a global variable called setpoint, you can now have unsynchronized loops access the setpoint. Your Main program then has two loops: the one which calls the PID and the model using the current value of setpoint, and an independent loop that updates setpoint to perturb the system. You may also find it useful to grab the setpoint and change it by hand to see how the system responds. You’d do this either by opening the global’s panel or by giving your Main panel a mode where you write a panel value to the global, untimed.
I think that this less synchronized approach may work fine for you, and possibly be easier to program. It introduces globals, but LV makes parallel access to globals quite safe. If the loop time increases or you want more guaranteed synchronization, the merged loop is better in that regard.
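To show the shape of the second choice in text form (in LabVIEW these would simply be two parallel while loops reading and writing a setpoint global; the threading and lock below only exist to mimic that in Python, and all the numbers are placeholders):

import threading
import time

setpoint = 1.0                       # plays the role of the setpoint global variable
lock = threading.Lock()              # LabVIEW's globals give you this safety for free

def control_loop():
    # Runs the PID + model every 20 ms using whatever the current setpoint is.
    for _ in range(500):
        with lock:
            sp = setpoint
        # run_pid_and_model(sp)      # placeholder for the real control code
        time.sleep(0.02)

def perturbation_loop():
    # Independently steps the setpoint to perturb the system.
    global setpoint
    for value in [1.0, 1.5, 0.5, 1.0]:
        with lock:
            setpoint = value
        time.sleep(2.0)              # hold each setpoint for a while

threading.Thread(target=control_loop).start()
threading.Thread(target=perturbation_loop).start()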
The problem is that, when IFT is enabled, the setpoint changes (or should change, if the program was working ;)) for every loop of the main program.
Basically, the sequence I want in the main loop with IFT enabled is this:
Run PID, run the simulated plant, store MV and PV in an array (until the array is N elements long), and calculate a setpoint to be sent to the PID block in the next loop.
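Written out as pseudocode, one pass should be something like this (Python-flavoured; pid, plant, and compute_setpoint are stand-ins for the blocks I already have, passed in just so the sketch is self-contained):

def main_loop_iteration(state, setpoint, pid, plant, compute_setpoint):
    # One 20 ms pass of the main loop with IFT enabled.
    mv = pid(setpoint, state["pv"])          # run the PID
    pv = plant(mv)                            # run the simulated plant
    state["pv"] = pv
    if len(state["mv_log"]) < state["N"]:     # store MV and PV until the arrays are N elements long
        state["mv_log"].append(mv)
        state["pv_log"].append(pv)
    # calculate the setpoint the PID block should see on the next loop
    return compute_setpoint(state)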
I have tried a state machine as well, but with every solution I come up with, the problem is the same: the setpoint should be updated every iteration and it is not.
I understand that you want the setpoint to change, but by using a wire to carry the values between them, it will not and cannot. If you switch to using a global variable, the loops can run independently - there will be no dataflow wires between them. This is the simplest way. If you get it to work, you can also play with the state machine again.
I got most of the program to work (using MathScript), I’ll post it when it is working 100%.
Now I have to deploy it to the cRIO, and with that comes a question: how to retrieve data from it? If I save a spreadsheet file, which path should I use? And how do I get it back, FTP?
Or is there a way to save the file to the PC, or to send data straight to it (without the dashboard, of course - I’m trying to keep it as simple as possible)?