I was looking at the FRC Java tutorials and noticed there was a 0.01 second delay at the end of the main teleop while loop. Why is that? I never put one in my code.
It’s to reduce CPU usage by preventing the software from running more frequently than necessary.
That makes sense, thank you. But is there any point in reducing CPU usage? What is the advantage? It’s not like you are freeing up the CPU to do other things.
The CPU also has to process incoming communication data. You will see increased latency if there isn’t free CPU time.
Speed controllers are updated at most once every 0.005 seconds. Driver station data is received at most once every 0.02 seconds. 0.01 seconds is a happy compromise. If you are only reading data from the driver station, a 0.02 second delay is all you need.
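To make that concrete, here is a minimal sketch of a teleop-style loop with a fixed delay each pass. It uses plain `Thread.sleep` in place of WPILib's `Timer.delay` so it runs anywhere; the 10 ms value matches the 0.01 s compromise discussed above, and the loop body is a placeholder.

```java
public class TeleopLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        int passes = 0;
        // Run a fixed number of passes so the sketch terminates;
        // on a robot this would loop while teleop is enabled.
        while (passes < 20) {
            // ... read driver station inputs, update speed controllers ...
            passes++;
            Thread.sleep(10); // yield the CPU for ~0.01 s per pass
        }
        double elapsedMs = (System.nanoTime() - start) / 1e6;
        System.out.println("20 passes took " + elapsedMs + " ms");
    }
}
```

Without the sleep, those 20 passes would finish in microseconds while the CPU spins flat out; with it, each pass is spaced roughly 10 ms apart.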
Thank you for the replies guys, everything makes a lot of sense now!
…so is it considered good practice to add it in? If your code is entirely event based (or using threads), would you really need it?
It depends on how you implement the thread. If you create a thread that runs continuously in an infinite loop, you need to slow that loop down by yielding the processor for a period of time on every pass.
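A sketch of that first case: a continuously-running worker thread that sleeps on every pass. The 10 ms period, the `stop()` method, and the pass counter are illustrative, not from the original post.

```java
public class PeriodicWorker implements Runnable {
    private volatile boolean running = true;
    private volatile int passes = 0;

    @Override
    public void run() {
        while (running) {
            // ... do the periodic work here ...
            passes++;
            try {
                Thread.sleep(10); // without this, the loop pegs a core
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void stop() { running = false; }
    public int getPasses() { return passes; }

    public static void main(String[] args) throws InterruptedException {
        PeriodicWorker worker = new PeriodicWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);   // let it run for ~10 passes
        worker.stop();
        t.join();
        System.out.println("passes: " + worker.getPasses());
    }
}
```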
If the thread is created, does its thing, and then exits, you don’t need the yield in the thread. You just need to make sure that the parent process that is creating that thread limits the frequency at which it creates the thread.
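One way to sketch the second case is with a `ScheduledExecutorService`: the short-lived task just does its thing and returns, with no sleep of its own, while the parent enforces how often it runs. The 20 ms period and the latch are assumptions for the demo.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RateLimitedTasks {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        CountDownLatch latch = new CountDownLatch(5);
        // The parent (the scheduler) bounds the frequency; the task
        // itself contains no loop and no yield.
        scheduler.scheduleAtFixedRate(() -> {
            // ... the task does its thing and exits ...
            latch.countDown();
        }, 0, 20, TimeUnit.MILLISECONDS);
        latch.await();           // wait for 5 runs (~80 ms)
        scheduler.shutdown();
        System.out.println("5 rate-limited runs completed");
    }
}
```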
If you are event based, then your idle loop should have a wait in it.
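For the event-based case, the idle "wait" usually comes for free from a blocking primitive. A sketch using a `BlockingQueue` (the event names and the `"STOP"` sentinel are made up for the demo): the consumer blocks in `take()` while there are no events, consuming no CPU, instead of spinning.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> events = new LinkedBlockingQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String event = events.take(); // blocks (waits) while idle
                    if (event.equals("STOP")) return;
                    System.out.println("handled: " + event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        events.put("button-pressed");
        events.put("STOP");
        consumer.join();
    }
}
```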