Worried about high CPU usage in CRIO
I admit I haven't been 100% involved with the software group in a few years until this year. I am back in the saddle again.
One thing I have noted is the high CPU usage in various things we have running. Here are some observations so far; I guess I am wondering if these are "normal" behaviors.

We have noted that if we load up a default 2012 framework with the default arcade drive, we see about 55-60% CPU usage. We wrote a serial driver that took this up to around 65-70% CPU usage. The loop rate is about 60 milliseconds here. On another project, we have a vision application, separate from the serial driver program, that consumes about 75-85% CPU usage when the vision is tracking. The loop rate is the standard vision processing VI.

So I am sitting here watching these two programs in development, and I am concerned that when we combine them we are going to max out our CPU at 100%, cause watchdog errors, and have the robot start dropping out of Tele-op mode because we get a "drive loop not running fast enough" error.

So a few questions:
1. Are these CPU usages normal?
2. Should we be concerned and start thinking about enabling and disabling loops in the program only when we need them? For example, if we don't need vision tracking all the time, just turn off that loop until the moment we need it, then enable it and shut off other loops?

Any other pointers or advice on CPU usage, watchdogs, or loop rates would be very much appreciated. Thanks
Re: Worried about high CPU usage in CRIO
I can't tell you whether it is "normal", but I can confirm the numbers you're seeing: 65% CPU with the default code, and 100% with everything running. When we added vision processing to our testbed robot, the controls were unusably sluggish, the vision was lagging, and the system watchdog kept shutting down the motors.
We have done a lot of optimizing to keep the amount of actual processing in Teleop to a minimum, but it wasn't helping enough. We moved all the vision code to the Dashboard and are now using UDP packets to send the target data array back to the robot for action. I was worried that the E11 Classmate might not be up to the task, but it works very well. We haven't tried the E09 yet, but I suspect it won't do as well as the E11.
Re: Worried about high CPU usage in CRIO
Alan - Appreciate your input on this matter. Thanks for confirming what we see as well. Our robot does the same thing, "unusable" because of watchdogs.
Ok, so we are unfamiliar with off-loading the vision code to the dashboard. Is there a white paper for newbies on how to do this?

On your E09 vs. E11 comparison: other than the size of the unit, it sounds like a full laptop running your driver station, rather than a little "netbook" PC like the E09 or E11, might be the better selection if we do what you recommend and have seen as a performance boost.

At first, when I saw other teams asking questions like:
1. Adding a laptop on your robot.
2. Adding a SECOND cRIO on your robot.
I was a little concerned with those kinds of questions, but this may confirm why they're asking. I think we have taxed out the cRIO... I remember back in the IFI days, at least the CMU CAM processed vision data on its own and sent a serial string to the robot controller. Maybe we need to look into a small PC too... http://smallpc.com/panelmounts.php (not under $400) Or a low-cost vision processor that can handle vision in a co-processor relationship to the cRIO, like in the CMU CAM days.
Re: Worried about high CPU usage in CRIO
If I recall, the E09 and E11 laptops are very similar in terms of CPU and RAM. You may want to get an SD drive so that you have additional disk for development tools.
As for maxing out the cRIO CPU, there are indeed lots of things in the framework. I doubt that the dashboard code is all that useful, so you may want to configure it, or better yet, rewrite your own. I'll look at the usage when I get back into town and see if there are some obvious things to tune up.
Greg McKaskle
Re: Worried about high CPU usage in CRIO
It does not solve the total CPU bandwidth issues, but you can prioritize activities on the cRIO. We run the DS comms in a task with higher priority than the camera, so the camera gets the remaining bandwidth AFTER the comms, motors, and sensors have been serviced. We work in C++ and use the native VxWorks (the OS in the cRIO) API for doing this, but I believe LabVIEW, Java, and WPI/C++ have APIs to create independent tasks and set their relative priorities.
We are processing images at about 10Hz (on the cRIO), which (we hope) is fast enough. HTH
Re: Worried about high CPU usage in CRIO
We are seeing CPU usage interfering with operations, and so are also optimizing our code. We are fooling with Arduinos to manage some of the easier computing tasks offboard, though we're not sure if we'll actually use any of them. Some of our motors are taking advantage of the PID loops in the Jaguars (available under CAN control) to further offload CPU cycles.

A 400 MHz PowerPC and we're maxing it out. Incredible.
Re: Worried about high CPU usage in CRIO
If the basic code is already running at 65%, couldn't that indicate there is a flaw in the basic control code? I don't recall any previous years' code acting this way. Don't we have tools that could point us to where the biggest users of CPU cycles are? Something akin to "Task Manager" in Windows?
Re: Worried about high CPU usage in CRIO
Ok, I think I have attracted all the "power users" of LabVIEW at CD in one post. "Y'all" are scaring me with your comments....

I just got back from helping a rookie team with a new cRIO-II. We downloaded the default code in that cRIO-II, and it was running 40-45% with the default code. We only have two of the cRIO-I. Maybe I'll order a cRIO-II tomorrow just to gain another 20%... Maybe I should post this in the NI community and hook up with a LabVIEW engineer over there to understand whether we are doing something wrong or whether this is just the fact of the matter...
Re: Worried about high CPU usage in CRIO
One man's flaw is another man's feature.
You know, I asked our programming mentor that exact question Friday. He said that if I could find the Ctl, Alt and Del keys on the cRio I could access the Program Manager...:p
Re: Worried about high CPU usage in CRIO
There are a few things you can do to track down high usage. In the default project, there is a VI called Elapsed Times. You can drop it into each loop and wire in a name, and it will keep track of how long it takes between calls of that VI. This can help track down slow loops. You can also go to Tools -> Profile -> Performance and Memory for NI's equivalent of Task Manager.
Re: Worried about high CPU usage in CRIO
My logic is: if we did not see any watchdog errors in 2009, 2010, and 2011, then I assume we did not have a maxed-out CPU. This year, when we have too much loaded in the cRIO, we see the CPU usage hit 100%, then the watchdog errors fill the screen, teleop disables, and the robot shudders to a stop. (Which, by the way, isn't much code at all compared to the past robots - if you want to see the past robots, click the link at the bottom of the screen to the repository.) The only time we used vision was 2009, and that robot was fine.

BTW, whoever designed the CPU, latency, and chart displays - that's a great tool. Thanks for giving us the chart trends to see this information. Thank you.
Re: Worried about high CPU usage in CRIO
There is a function called "spy" which one can run on the cRIO console. It will print CPU usage by task every 10 seconds. There is a problem with the spy utility, though: it uses the auxiliary clock library in the OS to profile the system, and I'm not sure if NI uses that timer/clock library for anything else - next time I am in our lab I will check. There is also a remote display of nearly the same information when using Workbench in debug mode.
The watchdog goes off if it does not get "petted" regularly, and the FRC comms code interprets this as a dangerous condition (thus the disabling of motors, etc.). 100% CPU usage is not a good sign, but it does not automatically mean something is wrong. If the watchdog is going off, you could be doing too much work serially (one thing right after another) in between messages from the DS. Try parallelizing your activities and prioritizing the comms with the DS. The watchdog alarms should go away, and you'll be giving the camera all the "left over" time. Then slow down and/or simplify the camera code until utilization drops just below 100%. HTH
Re: Worried about high CPU usage in CRIO
It gets a lot more complicated when you have multiple loops involved, like the framework code. However, even this year, we've definitely had the CPU at 100% without watchdog/motor safety errors.
Re: Worried about high CPU usage in CRIO
One thing I have noticed when running the code is that sending the data to the dashboard using the default code takes up a lot of CPU resources. In the past, when I deleted that, it freed up between 10 and 20% of the CPU. I haven't tried it with this year's code or the new cRIOs. I will try this tomorrow when I get access to our robots.
Re: Worried about high CPU usage in CRIO
We take over that default operation with our own dashboard-sending code and send the data less often; sending more often than 10Hz is of questionable value.
Re: Worried about high CPU usage in CRIO
Adding some code to my test project while I was demonstrating for a local team, I kept killing the cRIO. (BTW, the DoS bug in the network stack still exists.) I had to add some careful performance controls in my code to keep the CPU utilization down. (Partially my fault to begin with.) I was running between 65-75% CPU on the cRIO with nearly default code.

Previous years have not been a whole lot better; we normally saw these utilization numbers for most LabVIEW projects. The vision loop was the worst, normally consuming whatever was left of the cRIO. The performance monitoring in LabVIEW is very useful in tracking down problems. A built project running at start-up should take a bit fewer resources than just hitting the run button, since it is not running in debug mode.

PS. During my testing the other night, I saw some interesting metrics. I will have to dive into it tonight.
Re: Worried about high CPU usage in CRIO
2 Attachment(s)
Now, I've placed this code within Periodic Tasks.vi. There are two more case structures similar to the two visible, and I've moved the axis value to Teleop.vi. Today, we were getting some infrequent watchdog errors which shut down our comms with the cRIO.
Just by looking at this image, is there a way to streamline the code? I had tried to do something similar to the second image with just the single case structure visible, wired to a joystick button. When the structure was false, it set motor outputs to 0. No matter what I did, it wouldn't work. (I followed it in debug mode, and it appeared that the command to set the motor output to -1 was being triggered; however, nothing happened on our Jags.) Is there a conflict between the 500 millisecond timing and the 100 millisecond timing within the while loop?
Re: Worried about high CPU usage in CRIO
I'm assuming that the code shown is in the Periodic task with the 100ms sleep. The issue is that when one of those buttons is pressed, the inner loop goes for 500ms with no sleep. That would likely cause a watchdog or other issue with CPU usage. After the inner loop completes, things would go back to normal. If you place a 20ms delay within the loop, that will improve the CPU usage, but you will still have an issue in that for 500ms, the outer loop cannot run. I'd think that you can make a loop that starts on a message, like a notifier, then runs for 500ms and waits again, and is independent of the others. You can then send the notifier from the teleop or other loops.
Greg McKaskle
Re: Worried about high CPU usage in CRIO
I was able to look at the usage of the framework code a bit today.
I built a new framework, ran the code from the run button, and watched the CPU usage from the DS tab. The disabled-robot default usage on an 8-slot was about 25%. I ran the default teleop, and the usage did not change. I ran the default auto, and the usage jumped to just under 50%. Odd, but it turns out that when the auto doesn't update the motors, they go into safety update, and I'm assuming the error messaging eats the CPU. I need to take more measurements tomorrow to be sure that the errors are the issue. But if the auto sets the motors, even setting them to 0 every 20 ms or so, the CPU usage stays at 25%.

Since this was default code, this was without video enabled, with no panels open except for Robot Main. The 8-slot was formatted without net console or CAN enabled. Can anyone with higher CPU usage provide differences or code to go on?
Greg McKaskle
Re: Worried about high CPU usage in CRIO
@DominickC: If you haven't already seen these two posts, they're worth reading:
http://www.chiefdelphi.com/forums/sh...96&postcount=3
http://www.chiefdelphi.com/forums/sh...21&postcount=6
Re: Worried about high CPU usage in CRIO
Having a high usage isn't necessarily bad. Well written code can consume 100% of the CPU but be squishy enough to not get in the way. For example, let vision consume whatever cycles happen to be left over, but prioritize it correctly so it doesn't consume cycles that aren't left over.
Also, it is really easy to rail the processor - any while loop without a delay will take care of your spare cycles immediately.

Do any of the power users have a favorite way to implement Dominick's code? If it doesn't need to run in parallel with itself, I typically do that with a single (Enum)Notifier with multiple loops.

Let me temper that with an admission: I haven't gone into serious depth with FRC LabVIEW since I started working at NI. Most of my FIRST LV experience is in FTC/FLL, and most of that was so far under the hood that I didn't know what road we were driving on... :)
Both loops are busy loops - they will consume all available CPU time checking whether 500 milliseconds have elapsed and writing the output. If there isn't anything else contending, you could just write once, wait 500, write again.

Both loops are fully blocking - the owning VI has to wait 500 milliseconds to continue. Was this intended?

Both loops pull the motor reference every iteration. If you are using the same reference often, you don't have to pull it each time.
Re: Worried about high CPU usage in CRIO
@Greg - Ah, now I understand. I'll see what I can do here.
@Ether - I have read the first post before; however, reading it again now (with a deeper knowledge of LV) I feel like I understand it better. If I wanted to make a "state machine", I would get the joystick button values and set a local/global variable, and read that variable within the code requiring its value, yes?

@EricVanWyk - Is there a way within LV to prioritize code? I did not intend for my loops to be blocking (at least not in this situation). How could I write a value (such as a motor output), then rewrite the value 500ms later, without it becoming a blocking loop? I'm going to try to write the code here from scratch.
Re: Worried about high CPU usage in CRIO
I think what you are trying to do is set the output, then wait 500ms, and then restore the output. Consider using a sequence structure. Put the delay in the center frame and have the outer two update the outputs.
Greg McKaskle
Re: Worried about high CPU usage in CRIO
Wow, that turned my fans on to full tilt!
Re: Worried about high CPU usage in CRIO
1 Attachment(s)
EDIT - Here's what I've got. How can I clean this up to run more smoothly, if at all possible?
Re: Worried about high CPU usage in CRIO
http://en.wikipedia.org/wiki/Busy_waiting
http://citeseerx.ist.psu.edu/viewdoc...=rep1&type=pdf
Re: Worried about high CPU usage in CRIO
That is a very minimal way to update the I/O twice with a delay in between. As mentioned earlier, the only issue is that if you start that in a loop that runs at 10Hz, this will pause it and may interfere with other operations you want the same loop to control.
Greg McKaskle
Re: Worried about high CPU usage in CRIO
@Ether - Excuse my misuse of the terminology; running on little sleep while trying to do tasks which require thought is not the best combination! I intend for the majority of my code to be blocking.

@Greg - Great to know I'm now able to strip my code down to the bare essentials to make it run the way I want it.
Re: Worried about high CPU usage in CRIO
1 Attachment(s)
If this code is located within Periodic Tasks and runs with a 100ms or even a 50ms delay, will any problems be caused if the code does not finish within 100 or 50ms? This code, along with other stuff, helps the second driver aim and shoot. Everything is tested and working except the first frame of the Sequence Structure. Sorry for the messy coding in there! It basically squares the bot up with the nearest wall and checks whether the distances fall within an acceptable difference.
Re: Worried about high CPU usage in CRIO
You can delay with the native VxWorks call:

#include "sysLib.h"
...
taskDelay(sysClkRateGet() / 2);   // wait 0.50 seconds

WPILib also has a delay function; taskDelay is the native VxWorks function.

We send messages from one task to another and block waiting on the messages. For example, the task receiving data from the DS sends a message to the motor task telling it the desired velocity. The motor task calculates the PID setpoints, then blocks again waiting for another message, consuming no CPU in the meantime. The PID callbacks are running off a timer, of course, but that is all in the PID classes. It is relatively simple and efficient.
Re: Worried about high CPU usage in CRIO
I finally got some time to look into this. My original numbers seem to be off slightly. The graph on my driver station has an extended scale below 0. (I hope that the scale matches the graph.) Disabling various things, and even modifying the framework to control the loops, allowed me to bring the CPU utilization down to well below 10%. I tested the default framework, acquiring but not processing images, sending the default data to the dashboard, plus a bit of extra code, and it only hit 25% CPU utilization on the original FRC cRIO. I did manage to confirm that the dashboard compiling-and-sending VI was a significant portion of that 25%.

There seems to be a fair amount of CPU left for vision processing.
Re: Worried about high CPU usage in CRIO
About the scale on the chart. The graph plots are plotted against the scale, so 0 is not the bottom of the black chart area. If you see your red plot go to the bottom of the black plot area, that would be negative, and I'm not sure if that is a good thing for your CPU.
The reason the scales are lifted off the bottom was to leave room for the instrumentation dots. There are four plots below the 0 on the scale: one for the robot mode that the field or DS gave to the robot, and one each for any calls by the code to declare what was run. At least for LV, the framework has VI calls in it to place a dot on the chart each time the teleop code is run, each time the disabled code is run, and each 20ms while the auto is running. This should help identify when robot code doesn't leave auto, or runs auto and tele or other odd combinations during the event, as well as noting when the code doesn't run anything for extended periods (typically caused by large delays inserted into teleop).

Anyway, thanks for verifying. I looked into why the autonomous is by default higher than the others, and it is caused by RobotDrive not being updated. Not updating for 100ms causes the robot safety override to kick in. No big deal, but it uses the uncaught-error mechanism to print the message to the DS, and this adds the extra overhead. Next year, we'll hopefully be able to redo the uncaught-error stuff to be cheaper, and/or explicitly send the safety message. A properly written auto should never go into safety, at least not until it has run to completion and stops sending motor commands.

Does anyone else seem to see excessive CPU usage in simple code? Is this one still causing anyone issues?
Greg McKaskle
Re: Worried about high CPU usage in CRIO
I did a bit of testing today. I ran some pretty heavy code (two cameras with gimbal control on both, two gyros, a potentiometer, and three rangefinders, with all sensors giving output to the dashboard and being used in automated actions within Autonomous). We were getting about 60% CPU usage in Teleop, and about 75% in Autonomous.
Re: Worried about high CPU usage in CRIO
I am lost in this thread at this point.
Re: Worried about high CPU usage in CRIO
The cRIO and cRIO-II are not that different in performance. Kinda like comparing a 2008 and 2010 models of small Toyota pickups.
It is time to debug this specific to your system. I'll be in touch.
Greg McKaskle