#1
Comparing performance of two sets of code
Hi all,
We are experimenting with some stuff that would involve logging each piece of data coming in and out of the robot. We have the logging code on a separate branch of our git repository, pulling in changes from master to stay up to date with the current code. I am wondering about the best way to compare the performance of the two setups.

As I understand from the FMS Whitepaper and the DS Log Viewer guide, the control packet trip time can be affected by inefficient robot code. However, that measurement also reflects the current network, which should be fine for a comparison but adds external factors. Another alternative would be to take the difference of the system time between the start and end of each Periodic method. We use the Command-based architecture in Java (so System.currentTimeMillis() is available), but I'm not sure whether the Command-based Scheduler runs in a separate thread, which would make that measurement incorrect.

We are also not sure what kind of times would indicate a meaningful difference in performance. The FMS Whitepaper and DS Log guide indicate that a trip time above 20ms is undesirable. If we were able to log the loop time in code, assuming we haven't changed any settings about how fast the periodic loop runs, what would be an acceptable range of computation time?

In the end, I don't think the logging will be a large burden on our code, but I think it would be a cool experiment to go through with the kids, and something to discuss with technical judges when talking about our systems.

Short version of my main questions:
1. What is the best way to monitor the time my robot control loop takes? Current options are trip time from the DS log, and maybe logging the difference in system time in each Periodic call.
2. Under default settings, what is an acceptable range of times for the periodic control loop?

Thanks for any advice!
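Something like this is what I have in mind for option 2 -- an untested sketch (the class name and dashboard keys are placeholders), assuming the default IterativeRobot template with the Scheduler run from teleopPeriodic():

Code:
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.command.Scheduler;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Untested sketch: time each pass through teleopPeriodic() and push
// the numbers to SmartDashboard. Names here are placeholders.
public class OurRobot extends IterativeRobot {
    private long lastLoopStart = 0;

    public void teleopPeriodic() {
        long start = System.currentTimeMillis();
        if (lastLoopStart != 0) {
            // Time since the previous pass started, i.e. the loop period.
            SmartDashboard.putNumber("Loop period (ms)", start - lastLoopStart);
        }
        lastLoopStart = start;

        Scheduler.getInstance().run();

        // Time spent computing inside this pass.
        SmartDashboard.putNumber("Loop work (ms)", System.currentTimeMillis() - start);
    }
}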
#2
Re: Comparing performance of two sets of code
Are you logging on the cRIO or on the driver's station?
If the former, how often do you write to "disk" (every loop, every N seconds, or just when the match is "done")? The reason I ask is that the cRIO's non-volatile memory can only be written to so many thousands of times in its lifetime.
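If you do write on the robot side, one way to keep the write count down is to buffer in memory and only flush every few seconds (or at the end of the match) instead of every loop. A rough sketch -- the file path and interval here are made-up examples:

Code:
import java.io.FileWriter;
import java.io.IOException;

// Rough sketch: accumulate log lines in memory and flush them to the
// cRIO's flash every few seconds rather than on every loop.
public class BufferedLogger {
    private static final long FLUSH_INTERVAL_MS = 5000; // made-up interval
    private final StringBuilder buffer = new StringBuilder();
    private long lastFlush = System.currentTimeMillis();

    public void log(String line) {
        buffer.append(line).append('\n');
        if (System.currentTimeMillis() - lastFlush >= FLUSH_INTERVAL_MS) {
            flush();
        }
    }

    public void flush() {
        try {
            FileWriter out = new FileWriter("/match-log.txt", true); // append mode
            out.write(buffer.toString());
            out.close();
            buffer.setLength(0);
            lastFlush = System.currentTimeMillis();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}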
#3
Re: Comparing performance of two sets of code
I'm not logging anywhere yet. We are implementing a central singleton-ish class that other classes push their data to, like SmartDashboard, with specific keys. Inside the class we customize the data going out to SmartDashboard and the console, and update it periodically.
We were mainly looking for a solution to chasing down println()s and SmartDashboard output statements all over the codebase. If everything can be "logged" without a significant performance deficit, it would be nice to just write the code for that and forget about it. Logging probably isn't the best name for this, but I can't think of a better one, although the class could be extended to do customized logging in the future.

Thanks, by the way, that's good to know too!
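For what it's worth, here is roughly the shape of the class we're sketching. All of the names here are hypothetical, not our real code:

Code:
import java.util.HashMap;
import java.util.Map;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Hypothetical sketch of the central class that other code pushes data to.
public class DataSink {
    private static DataSink instance;
    private final Map<String, Object> values = new HashMap<String, Object>();

    public static synchronized DataSink getInstance() {
        if (instance == null) {
            instance = new DataSink();
        }
        return instance;
    }

    // Subsystems call this instead of scattering println()s around.
    public void put(String key, Object value) {
        values.put(key, value);
    }

    // Called once per loop; decides what actually goes out, and where.
    public void update() {
        for (Map.Entry<String, Object> entry : values.entrySet()) {
            SmartDashboard.putString(entry.getKey(), String.valueOf(entry.getValue()));
            // ...and/or print selected keys, append to a log buffer, etc.
        }
    }
}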
#4
Re: Comparing performance of two sets of code
#5
Re: Comparing performance of two sets of code
Thanks, Joe. I will likely output the loop time that way to SmartDashboard and verify whether it comes close to the limit.
#6
Re: Comparing performance of two sets of code
Be careful about how much you're logging... only log what you absolutely need to! I have two real-world stories about logging for you... one recent (in the past 2 weeks), and the other from back in college.
First, the college one. I was in a class called "Autonomous Robotics". We had 3-person teams, built a small robot, and controlled it with a HandyBoard from MIT. We spent 3 weeks debugging, trying to figure out why the robot would randomly go haywire... and the more logging/display output we did, the worse the behavior got! We eventually found out that there was a bug in the firmware for the board: if you sent an odd-length message to the display, it would corrupt your entire program. Three weeks of testing to find out that our only issue was the debugging statements we were printing out!

Now, the recent one. We recently did a fairly intensive process to reclaim some space on our database. We had it all tested out and everything looked great. It would take about a full day, based on testing in an equivalent, non-production environment, but what can you do? We went to do it on the production environment and had to stop in the middle because it was taking too long: we projected 3 days, based on what it had finished when we stopped it! It turns out that additional logging we had turned on in the production environment was what was slowing down the process. We were able to turn off the logging and get through it in the anticipated 24 hours.

So, be very careful about logging. It's entirely possible to log too much and have it seriously affect your performance!
#7
Re: Comparing performance of two sets of code
If everything is too much, we will just go with what we need at any given moment.