Comparing performance of two sets of code
Hi all,
We are experimenting with a project that would involve logging each piece of data coming into and out of the robot. The logging code lives on a separate branch of our git repository, and we pull in changes from master to stay up to date with the current code.
I am wondering about the best way to compare the performance of the two setups.
As I understand from the FMS Whitepaper and the DS Log Viewer guide, the control packet trip time can be affected by inefficient robot code. However, trip time also reflects current network conditions, which should be consistent enough for a comparison but does add external factors.
Another alternative would be to log the difference in system time between the start and end of each Periodic method. We use the Command-based architecture in Java (so System.currentTimeMillis() is available), but I'm not sure whether the Command-based Scheduler runs in a separate thread, which would make this approach invalid.
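To make that option concrete, here is a minimal sketch of what I had in mind, assuming the standard command-based TimedRobot skeleton where robotPeriodic() runs the scheduler (newer wpilibj2 library shown; older versions call Scheduler.getInstance().run() instead):

```java
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj2.command.CommandScheduler;

public class Robot extends TimedRobot {
  @Override
  public void robotPeriodic() {
    long start = System.currentTimeMillis();

    // The command-based template already runs the scheduler here, on the
    // main robot thread rather than a separate one, so timing around this
    // call should capture the command-based work done each loop.
    CommandScheduler.getInstance().run();

    long elapsedMs = System.currentTimeMillis() - start;
    System.out.println("Loop computation took " + elapsedMs + " ms");
  }
}
```

If millisecond resolution turns out to be too coarse for a ~20 ms loop, System.nanoTime() (or WPILib's Timer.getFPGATimestamp()) would give finer granularity.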
We are also not sure what loop times would indicate a meaningful difference in performance. The FMS whitepaper and the DS Log guide indicate that a trip time above 20 ms is undesirable. If we log the loop time in code, assuming we haven't changed any settings for how fast the periodic loop runs, what would be an acceptable range of computation time?
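Whatever threshold counts as acceptable, for the actual comparison between branches I figured we could keep running statistics instead of eyeballing thousands of printed lines; the class below is just my own illustration (all names hypothetical, not from any library):

```java
// Hypothetical helper for summarizing measured loop times.
public class LoopTimeStats {
  private long m_count = 0;
  private long m_totalMs = 0;
  private long m_worstMs = 0;

  // Record one measured loop time in milliseconds.
  public void record(long elapsedMs) {
    m_count++;
    m_totalMs += elapsedMs;
    if (elapsedMs > m_worstMs) {
      m_worstMs = elapsedMs;
    }
  }

  // Average loop time across all recorded samples.
  public double averageMs() {
    return m_count == 0 ? 0.0 : (double) m_totalMs / m_count;
  }

  // Worst-case (longest) loop observed.
  public long worstMs() {
    return m_worstMs;
  }
}
```

Running the same practice routine on both branches and comparing the average and worst-case values would, I hope, show whether the logging adds measurable overhead.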
In the end, I don't think the logging will be a large burden on our code, but I think it would be a cool experiment to go through with the kids, and something we could discuss with technical judges when talking about our systems.
Short version of my main questions:
1. What is the best way to monitor the time my robot control loop takes? Current options are the trip time from the DS log, or logging the difference in system time in each Periodic call.
2. Under default settings, what is an acceptable range of times for the periodic control loop to take?
Thanks for any advice!