Our robot drives fine and we are not getting any errors on the driver station, but the CPU sits around 80% while disabled and bumps up to 100% while enabled. It's hard to read this on the driver station graph. (Perhaps there is a better place to look?)
I have looked for untimed while loops and can’t find any. I slowed down all of our periodic task loops and that does not seem to make any difference.
We have poor update rates while in debug mode. Sometimes the front panel displays will be as much as 3 seconds behind reality and can be choppy. That makes it hard to troubleshoot and tune our PID loops.
Rebooting everything helps but the problems come back.
When I check the VI timings, it says that various CAN bus subVIs from WPILib are using a lot of milliseconds of processing time. The fact that we have not touched these VIs, and that it seems to be reporting more milliseconds than have passed or would even be available, makes me think the report is in error. I believe it was reporting about 5 minutes of CPU usage only about 30 seconds after we deployed the code.
We are running 4 Talon SRX controllers on the CAN bus. We have them on different IDs. I have not checked them for sticky errors, and I don't know what firmware they are running. I can't see how that would affect the roboRIO CPU usage, but I have been reading that this is the pat answer to anything having to do with them. I will check next time the team meets.
What I don't know is what normal CPU usage would be for the default code. How fast should our drive loop be in periodic tasks: 10, 20, or 40 ms? The drive loop is our biggest piece of code, and it seems small.
Is this normal? Will it bite us at competition? We had that happen in the past.
Is it Wi-Fi interference? We have tried 5 GHz and an Ethernet connection, but nothing seems conclusive. Of course we are cramped for time, so I don't want to spend a lot of it chasing something that is not really a problem. But if this is an indication of something terribly wrong, then I can move the issue to the top of the list.
I can post our code, but I don't know if I should post all of it or just the VIs in question. Well, here's the whole folder.
So, you're running LabVIEW in the background while running the driver station?
It's been a few years since I used LabVIEW, but I recall it being very resource-heavy.
Try putting a disable structure over all of your code, verify the CPU usage drops to a normal value, and then take your VIs out of the disable structure one by one to see what's causing the increase.
One of the biggest problems I see is that you have vision running on the roboRIO. This is a major resource hog, and it has nothing to do with LabVIEW.
You should look at moving the vision processing to the dashboard. It is not that hard to do, and it would help a lot with your roboRIO resources.
There are many other things I see that would help with lowering resource usage. One of them is using global variables: they take a Windows call to set priority, and that will slow down what you are doing. We use action engines to hold the values that we want to use in other places. Look at our code from last year to see what I am talking about.
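Roughly, an action engine is a single non-reentrant VI that keeps the data in a shift register and takes an action input (Init, Set, Get), so every reader and writer goes through the same place instead of touching a global. Since LabVIEW is graphical, here is only a rough text-code analogue of the idea, sketched in Java with made-up names:

```java
// Rough Java analogue of a LabVIEW action engine (functional global variable):
// one non-reentrant routine owns the data, and every caller passes an action
// instead of reading or writing a shared global directly.
public final class DriveStateEngine {
    public enum Action { INIT, SET, GET }

    private static double value;   // plays the role of the shift register

    public static synchronized double run(Action action, double input) {
        switch (action) {
            case INIT: value = 0.0;   break;   // call once, e.g. in Begin
            case SET:  value = input; break;   // the one writer, e.g. Periodic Tasks
            case GET:                 break;   // any number of readers, e.g. TeleOp
        }
        return value;
    }
}
```

The point is the same in either language: one owner of the data, access serialized through it, and no scattered global writes to race against each other.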
I would move all of the dashboard updates to their own loop in the timed tasks. Have one loop that sends and receives values and puts them into global variables if you do not have time to learn action engines. Can I edit your code online? I could try to show you what I am talking about.
Biggest thing you can do is move the vision stuff to the dashboard hands down.
I didn’t look at the code yet, but I’ll try to answer some of your questions.
Normal for default code is somewhere around 15% to 25% depending on PWM or CAN, the number of sensors and actuators you are updating in teleOp or a fast periodic task, and how many variables you are publishing.
If you have panels open, they also take CPU to send the updates to the debug panel.
The first thing I'd do is build an EXE, deploy it to run at startup, and use the DS to see what the numbers are as you use the robot in TeleOp and other modes. If you want more detail, you can connect to the roboRIO web page and scroll down the screen to see detailed CPU numbers.
If they seem high, slow down or disable vision to isolate it from the rest of the robot. Personally, I think vision on the roboRIO is not that bad, but running it on the dashboard is a fine solution too.
Global variables make it quite easy to have a race condition, but I wouldn't pin them as a common source of performance problems. They are atomic, meaning a mutex is needed, but I wouldn't describe that as a Windows call to set priorities. Again, once vision is removed, you can more clearly see what remains.
Common problems are … running loops faster than needed (which includes no wait at all), calling CAN nodes inside the loop when they only need to be set once, setting a NT variable more often than needed, doing excessive logging or print message reporting in a loop, etc. Vision is typically slow because the image is too big.
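To make those concrete, here is a hedged sketch of the loop hygiene in text code. It is Java with Phoenix-style and SmartDashboard calls purely for illustration; the CAN ID, variable names, and loop period are invented, and the LabVIEW equivalents are the corresponding block diagram changes:

```java
import com.ctre.phoenix.motorcontrol.can.TalonSRX;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class LoopHygieneExample {
    private final TalonSRX shooter = new TalonSRX(3);   // hypothetical CAN ID

    public LoopHygieneExample() {
        // One-time CAN configuration belongs in init (Begin), never inside the loop.
        shooter.configPeakCurrentLimit(40, 10);
    }

    private double lastPublished = Double.NaN;

    public void periodicLoop() {
        while (!Thread.interrupted()) {
            double output = shooter.getMotorOutputPercent();

            // Publish a dashboard/NetworkTables value only when it actually changes.
            if (output != lastPublished) {
                SmartDashboard.putNumber("Shooter output", output);
                lastPublished = output;
            }

            Timer.delay(0.02);   // every loop gets a wait; no free-running loops
        }
    }
}
```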
I looked at your code and spent some time changing a few things. If you PM me, I will figure out how to get you the updated version. I added some action engines so you can see how they are used. They are simple to use and can save a lot of time. I also cleaned some stuff up and reorganized a few things.
I did notice that you are talking to the Arm UpDown in two places. You should try to keep the writing part of your code in one place. There are other things that could help, but I do have a day job. I will be out this weekend, so I will have to get you the update on Monday if you would like to see it.
Aeastet, Thanks for spending time on the code. I don’t know what an Action Engine is. Looking forward to seeing what you mean.
Greg, “calling CAN nodes inside the loop when they only need to be set once”
This sounds like a mistake I could be making. I will google it.
Default code at 15 to 25%? That means that, yes, I have a problem I need to fix.
We are not doing vision on the roboRIO. We are doing it in RoboRealm on a tablet on the robot. What you see is just the processing of the X and Y that RoboRealm found into commands to turn and fire.
I figured that global variables would be fast. We are using a lot of them this year. I don't know what a Windows call to set priority means. More to Google.
The WPI method of calling a subVI always seemed like more work than global variables. And we had a problem last year where we could not call the same subVI at the same time and get reliable results. We had to tie the errors from one to the next so they would not run in parallel. But I suspect that is all just minor compared to our problem. Our code just is not that complex.
I will check the Arm UpDown being written in two places. I thought I put the first one in a false case when the second one was running (autofire is true).
I ran your code on my roboRIO and the disabled CPU load was about 30% and teleOp was about 60%. Of course I don’t have the CAN devices or RoboRealm running on a tablet, so I’m getting lots of errors and may not be running some of the code. In the profiler, the biggest contributor was the CAN updates.
As a test, I converted the CAN Talon SRX in Begin into a PWM Talon SRX, and the numbers changed to 20% and 40%. Of course there were still lots of errors, and different errors, but in general, using CAN when the Talon is in power mode, the equivalent of PWM, will be more expensive. Until this is on a real robot with a true CAN network, I don't trust my numbers much, but this may be a pretty easy experiment for you to run.
Using CAN for velocity control, position control, or other things that cannot be done easily with PWM is certainly worth it, but there is additional overhead to CAN, and switching to PWM will somewhat lower your CPU usage.
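If it helps to see the swap in text code, it amounts to something like the sketch below (Java, Phoenix-style class names, made-up channel numbers). The CAN object buys you sensors and closed-loop modes, while the PWM object only does percent output, which is the trade-off described above:

```java
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;
import edu.wpi.first.wpilibj.Talon;

public class CanVsPwmExperiment {
    // CAN-addressed Talon SRX: extra features, extra roboRIO and bus overhead.
    private final TalonSRX leftCan = new TalonSRX(1);   // hypothetical CAN ID

    // PWM-addressed Talon: percent output only, cheaper for the roboRIO to service.
    private final Talon leftPwm = new Talon(0);         // hypothetical PWM channel

    public void setLeft(double percent) {
        leftCan.set(ControlMode.PercentOutput, percent);
        // For the PWM comparison, drive the PWM object instead:
        // leftPwm.set(percent);
    }
}
```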
We had that problem when we used older computers. If you don't need the dashboard, I'd suggest closing it down, because that lightened the load to a normal amount on our older computers. Also make sure you don't have any unwanted programs hogging the CPU.
What processor do you have? (Look up System Information from the Start menu; in the hardware summary there should be a row labeled "Processor." I want the full name.)
Also, if you are running Windows 8/10, there is a place in the Task Manager to see the CPU, GPU, RAM, disk, and Wi-Fi usage. See if those line up.
BenWolak: It is the roboRIO CPU usage, we believe, not the computer's.
MikLast: I doubt that the computer's processor is going to have any problems. It's a MacBook Pro running Windows 8.1 through Boot Camp. It has a Core i7-4750HQ CPU at 2.00 GHz and 8 GB of RAM. Nothing else is out of the ordinary, except the roboRIO CPU usage.
No, I also would not think that is the issue. Are there any other computers to test with, then? As Senor said, try getting out of any other programs. Try to figure out if it's an issue in the code or the computer itself.
Greg, we were having problems deploying the code: different errors, and it takes several tries. I think the lead programmer did get it deployed so it would run at startup. I was too tired to remember to check the CPU usage.
I was also distracted by the fact that our auto targeting system did start working pretty much as planned today.
We do have both another RoboRIO and another computer we can try from if it still has the issue when deployed.
We plan on working tomorrow so I will check it in the morning.
If you are using a navX, you may want to get the latest libraries. There was a subtle issue that caused deployment problems on every other run. This isn’t obvious, and it is easy to misinterpret the issue. I don’t know of other specific deployment problems. If you have specific error messages, please post them.
Looks like the issue was that we were updating the dashboard too often. We slowed the loop from 40 ms to 200 ms. CPU now seems to stay below 70% most of the time. But I have not done the definitive testing that is needed. And I have not had time to look into doing action engines.
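In text-code terms the change amounts to something like this; our actual code is LabVIEW, so this is just a Java sketch of the idea with made-up names, using a separate slow loop for dashboard values:

```java
import edu.wpi.first.wpilibj.Notifier;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class DashboardPublisher {
    // Publish driver-display values from their own slow loop,
    // not from the fast control loop.
    private final Notifier publisher = new Notifier(this::publish);

    public void start() {
        publisher.startPeriodic(0.2);   // 200 ms is plenty for human-readable displays
    }

    private void publish() {
        SmartDashboard.putNumber("Arm position", getArmPosition());
    }

    private double getArmPosition() {
        return 0.0;   // placeholder for the real sensor read
    }
}
```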
We have a practice bot so we will be working on this some more.