Hello. If you have had any of the problems I have had, you probably keep a scribe close by to hit the No App switch every time you download. Looking deep into the WPI library for possible causes of CPU overload, I found this.
The Compressor library opens a new thread to handle compressor updates. Kind of a weird thing to do, since it could just poll the compressor during every loop of Teleop or something, but it does. It uses a FIFO to transfer the enabled state between the main thread and the Compressor thread. The compressor thread has no delays - it is an infinite loop that reads the FIFO, reads the switch, and sets the relay. I don't know much about how the FIFO operates, but I assume it doesn't take long to read, and I do know it doesn't take long to read a switch or set a relay. Thus, it is quite possible that all the extra CPU power is dedicated to running the compressor as fast as possible. That makes it really hard to tell whether optimizations work, since the processor is always around 95% - the compressor is always eating the slack.
The solution is simple. I added a 100ms delay to the Compressor loop. After doing this, the CPU load on my cRIO went from 96% to 56%. Drastic improvement. I can't say for certain that it was the compressor, as I was trying desperately to get the drive lag to go away.
The compressor implementation should be fine if you start and stop the compressor only when needed. The code in question was calling Start for every auto, every tele, and every disabled packet, even though the compressor was already Started in Begin and never Stopped.
You can modify anything you like, but I’d recommend calling Start only in Begin.
As for the implementation, the Compressor was done as a background VI that loops waiting on a low-jitter queue called the RT FIFO. This lets other code send commands to the compressor whenever you like, kinda like a fancy global. The RT FIFO read has a 500ms timeout, so if no commands are sent, it will poll the compressor at 2Hz.
The reason I added the calls in Teleop was because the single call in Begin wasn’t enabling the compressor. I tried calling it once, and it just didn’t enable. So then I called it in the loop. And it worked. So I left it.
relay8_fwd = !rc_dig_in18;
This is what IFI had for their compressor. WPI has:
Compressor->Open - opens two RT-FIFOs, a Digital Input, and a Relay, then begins asynchronous execution of a thread dedicated to polling the compressor.
Compressor->Run (thread) - polls the compressor as fast as possible, assuming no delays from the RT-FIFO (Greg claims it does wait for new data).
Compressor->Start - sets RT-FIFO to enabled
Compressor->Stop - sets RT-FIFO to disabled
Compressor->Enabled - gets RT-FIFO for enabled and switch state
Compressor->Close - closes the RT-FIFOs. The Run thread closes itself when the RT-FIFO spits out an error (presumably because the RT-FIFO was closed).
All the complexity to take the place of that one line of code.
I'm not sure what we are discussing here. The compressor runs an independent VI, but VIs do not own a thread. The behavior, unless it is constantly being told to start, is that the loop runs twice a second. If enabled, it tests the switch and updates the relay.
This could be coded in many ways. You can put the test code in Teleop, but what if you want it in Auto too? You'd also be checking the relay 50 times per second, when the pressure can't really change that much in 20ms. The WPILib implementation allows control over multiple compressors and keeps state data for each instance, so you can Start, Stop, and check the state at various places in your app. In other words, the parallel loop seems like a reasonable way to do this. It could be simpler, but it isn't particularly complex either.
As for the behavior of RT FIFOs, feel free to read the help, experiment, or ask others.
I was simply suggesting that the OP is free to do the control himself if he doesn't like the way WPILib does it, for whatever reason.
We have no problems with the standard WPILib compressor setup (Compressor Open and Start in Begin), and I find it to be extremely simple. Reading the state to pack into Dashboard data was also very straightforward.