One of the things on my list for the students this year is logging sensor data during autonomous; I want the students to have debugging information at their disposal if (when) autonomous does something weird.
My main concern is that I don't want the logging to impact the autonomous processing, which suggests that we either:
1. put a lower-priority parallel loop in to field the sensor data and events and write them out to a file on the cRIO, passing the data between the autonomous loops and the logging loop with a queue or something similar; or
2. queue all the data up during autonomous, then write it out when we go disabled. I need to investigate whether this will get us into memory trouble.
Has anyone seen any good tutorials for passing data in a queue?
Are there any advantages to using TDMS over just writing out a CSV file?
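For context, the producer/consumer hand-off I have in mind looks roughly like the sketch below (Python standing in for LabVIEW here; on the robot this would be LabVIEW queue functions, and the sensor values are made up):

```python
import queue
import threading
import time

log_queue = queue.Queue()  # thread-safe FIFO between the two loops

def autonomous_loop():
    # High-priority loop: just enqueue samples, never touch the disk.
    for i in range(10):
        sample = (time.time(), i * 1.5)  # (timestamp, sensor value) - made-up data
        log_queue.put(sample)
        time.sleep(0.02)  # 20 ms loop
    log_queue.put(None)  # sentinel: autonomous is done

def logging_loop():
    # Lower-priority loop: drain the queue and write each sample to disk.
    with open("auto_log.csv", "w") as f:
        while True:
            item = log_queue.get()
            if item is None:
                break
            f.write("%f,%f\n" % item)

t = threading.Thread(target=logging_loop)
t.start()
autonomous_loop()
t.join()
```

The point of the queue is that the autonomous loop never blocks on disk I/O; only the logging loop pays that cost.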
The first thing you may want to do is write an amount of data equivalent to what you would want to store, and time it to see how long the write operation actually takes. That will help you determine whether you need to make it more complicated. Ideally, you can simply write in the sensor loop and let the caching improve the performance.
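A Python version of that timing test might look like the following (the per-iteration size and iteration count are placeholder figures; on the cRIO the timing will of course differ):

```python
import os
import struct
import time

# Placeholder figures: 100 doubles per iteration, ~750 iterations,
# roughly a 15-second autonomous at a 20 ms loop rate.
samples_per_write = 100
iterations = 750
payload = struct.pack("%dd" % samples_per_write, *range(samples_per_write))

start = time.time()
with open("write_test.bin", "wb") as f:
    for _ in range(iterations):
        f.write(payload)  # OS caching should absorb most of these writes
elapsed = time.time() - start

print("wrote %d bytes in %.3f s (%.3f ms per write)"
      % (os.path.getsize("write_test.bin"), elapsed,
         1000.0 * elapsed / iterations))
```

If the per-write time is a small fraction of the loop period, writing directly in the sensor loop is probably fine.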
The TDMS file is great for more complex data sets, but I think it is overkill for what you are likely doing. I'd recommend either a CSV or a binary array file.
We tried this last year. What we found is that it is difficult to write a file on the cRIO itself. What we did instead was pass the data through to the dashboard and write it to a file there, using the low-priority dashboard packet. The data on the robot was polled at about 10 Hz.
Greg: good suggestion; I'll take a look at it. The thing I do like about doing the logging in a separate loop is that we can change the logging destination easily (we only need to change the logging loop).
Adciv: good suggestion; we'll look at it, and it spawns another idea. Doesn't FIRST allow communication on some additional TCP/UDP ports? We could have the cRIO blast the logging data over TCP/UDP and pick it up on the driver station PC in a non-LabVIEW environment (so we don't have to fiddle with the dashboard).
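A minimal sketch of that UDP idea (in Python rather than LabVIEW; the sample values are made up, and in practice the port would be a fixed one from whatever range the FRC rules actually allow, which you'd need to check):

```python
import socket
import struct

# --- driver station side: open a UDP socket and wait for datagrams ---
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # port 0 = ephemeral, just for this demo
port = receiver.getsockname()[1]  # in practice: a fixed, rules-allowed port

# --- robot side: fire-and-forget datagrams, never blocks on the receiver ---
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
packet = struct.pack("!3d", 1.0, 2.5, -0.25)  # e.g. timestamp + two sensor values
sender.sendto(packet, ("127.0.0.1", port))

# --- driver station side: receive and decode ---
data, _ = receiver.recvfrom(1024)
values = struct.unpack("!3d", data)
print(values)
```

UDP is attractive here precisely because the robot side never waits: a dropped log packet is cheaper than a delayed autonomous loop.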
I decided to create a binary file and append values to it, in binary, inside the loop.
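In Python terms (the actual test was LabVIEW, and these record fields are made up for illustration), appending fixed-size binary records each iteration looks something like:

```python
import struct

# One record of doubles per loop iteration.
record_fmt = "<3d"  # little-endian: timestamp, gyro angle, encoder count (made-up fields)

with open("auto_log.bin", "ab") as f:  # append mode: each iteration adds one record
    for i in range(5):
        f.write(struct.pack(record_fmt, 0.02 * i, 1.1 * i, float(i)))

# Reading it back later for analysis: fixed-size records make this trivial.
records = []
record_size = struct.calcsize(record_fmt)
with open("auto_log.bin", "rb") as f:
    chunk = f.read(record_size)
    while chunk:
        records.append(struct.unpack(record_fmt, chunk))
        chunk = f.read(record_size)
print(len(records))
```

Fixed-size binary records avoid the formatting cost of CSV in the loop and make the file easy to parse offline.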
I added a logging loop to auto, put a delay in the loop, and added parameters for changing the delay and the amount being written. I charted the loop time so that I could see any glitches, and I was also watching the CPU usage.
I was logging from 10 doubles to 10,000 doubles each iteration, with loop delays from 5 ms to 100 ms. When logging large amounts of data, I could see some RT glitches. Clip1 shows logging 80 kB every 10 ms: the loop was running at a messy 20 ms, with 60 ms glitches due to buffers or flash. The second clip shows saving 8 kB every 5 ms, which was pretty clean apart from similar 60 ms glitches. In both cases, CPU usage was still below 50%.
Logging at 20 ms was very clean. Logging 8 kB, I saw one 60 ms glitch about every 15 seconds; at 80 kB every 20 ms, I saw a 60 ms glitch every second or so.
While testing I accumulated a single 50 MB log file. The cRIO CPU impact was certainly reasonable, usually less than 10% additional.
I didn’t try using higher-level blocks for logging, but I see no reason why a well-written logging program should pose any problem. The cRIO is a logging and monitoring platform, after all. This is what the cRIO does in its day job at CERN, JPL, etc. – when it isn’t having fun on FIRST robots ; )
Greg, this was a wonderful piece of work. I was just spooling up to check this myself (I’ve been busy attaching a myriad of sensors to our dedicated “learning cRIO” for classes this week, with no time to sling code), but now I don’t need to worry about whether it’s feasible; I just need to get the students to do it correctly.
We’re going to be at the 8 kB/20 ms end of the spectrum, but the occasional 60 ms glitches intrigue me (are those buffer flushes?). They suggest that doing the logging in Teleop would be a little risky; queuing the log data for a loop in Disabled.vi (writing at the end of autonomous) or in Periodic Tasks may be safer (though more complex, taking more CPU overall, and incurring memory overhead)…
The code used to make the measurement is shown in the attached screenshot.
I’d honestly expect that you will log closer to 0.1 kB per 20 ms, meaning that the whole match is hardly any data at all. That means you can pretty easily append it to an array in a subVI, or you can use the disk I/O. I didn’t test enough to understand the 60 ms glitches, but I’ve seen them before on other controllers when flash pages had to be flushed. If it happened once a second, it would affect the robot, but if it happens once in a match, I think that is a reasonable tradeoff. I’m eager to hear what you discover and which approach you decide to use.
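Spelling out the arithmetic behind that estimate (assuming a 20 ms loop throughout and a roughly 150-second match, which is my assumption here):

```python
# Back-of-the-envelope: how much log data does a whole match generate?
kb_per_write = 0.1            # estimate: ~0.1 kB per loop iteration
loop_period_ms = 20           # 20 ms loop
writes_per_second = 1000 / loop_period_ms   # 50 writes/s
match_seconds = 150           # assumed total match length

total_kb = kb_per_write * writes_per_second * match_seconds
print("%.0f kB for the whole match" % total_kb)
```

Well under a megabyte for the whole match, which is why holding it all in memory until disabled is also a perfectly reasonable option.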
Greg, thanks for posting. One of the students got excited about this and ran more tests yesterday, confirming your results.
We also found that our PCs can log a lot of data really quickly; we had to get up into the millions of data points per loop to significantly impact a 20 ms delay loop. It made a half-gig file really fast.
We’ll architect and implement this in the next couple of weeks.