Field Timing vs. Local Timing

Team 1732’s robot has autonomous modes that are heavily based on timing. In competition our timing seems to work very consistently, but in testing (without the field’s timers) our robot has become more inconsistent. The difference is especially evident while using “CheesyVision”.

My guess as to why the timing differs when the timer is run locally is that our CPU is unable to handle the timer with the required precision while simultaneously running the robot code.

I know very little about threads and multithreading, but I want to know if this is the root of the problem. If so, what can our team do to emulate the field timers? Do I simply need to configure Windows to give the timer process a higher priority? Is this possible to do in LabVIEW, or do I have to make a larger change to the testing code?

Thank you,
Team 1732

It’s unlikely that the field is affecting your timing. Whether on the field or in testing, timers in your robot code should behave the same; there’s no difference in what processing happens where. Instead, I would look at your cRIO CPU usage. If you’re consistently maxing out the robot CPU, it will throw off any timing in your code, because the cRIO can’t keep up with its time constraints. If enabling CheesyVision makes your timing worse, it sounds to me like your code is already running at 100% CPU, and adding the CheesyVision receiver is worsening things by increasing the load further.
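To see why an overloaded loop throws off code-based timers, here's a minimal Python sketch (your robot code is LabVIEW, but the mechanism is language-independent). A timer can only fire when the loop gets around to checking it, so if each pass of the loop takes a long time, a timed action fires late by up to one loop period. The function name and periods here are made up for illustration:

```python
import time

def fire_after(target_s, loop_period_s):
    """Poll a timer once per loop iteration.

    loop_period_s models how long one pass of the robot loop takes
    (all the work done between timer checks). Returns how late, in
    seconds, the timed action actually fired relative to target_s.
    """
    start = time.monotonic()
    while True:
        time.sleep(loop_period_s)  # stand-in for one loop's worth of work
        elapsed = time.monotonic() - start
        if elapsed >= target_s:
            return elapsed - target_s

# Lightly loaded loop (~1 ms per pass): fires almost exactly on time.
fast_lateness = fire_after(0.12, 0.001)

# Overloaded loop (~50 ms per pass): can fire tens of ms late.
slow_lateness = fire_after(0.12, 0.05)
```

The same effect on the cRIO shows up as autonomous steps drifting later the more the CPU is loaded, which matches the CheesyVision symptom.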

Watch your CPU usage while running autonomous. If the CPU is running at 100%, check for code errors showing up on the diagnostics tab of the driver station. If no errors are causing the spike, consider slowing down the loops you have running, and make sure that no loop runs without a delay.
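The "no loop without a delay" point can be sketched in Python (again, not LabVIEW, but the idea carries over directly). A polling loop with no delay spins as fast as the CPU allows and eats an entire core, while adding even a small delay cuts the iteration count by orders of magnitude without hurting the overall timing. The function and the 20 ms period are illustrative choices, not anything from your code:

```python
import time

def run_loop(duration_s, delay_s):
    """Poll until duration_s has elapsed, sleeping delay_s between checks.

    delay_s == 0 models a loop with no delay, which spins flat-out and
    pegs the CPU. Returns (iterations, actual_elapsed_seconds).
    """
    start = time.monotonic()
    iterations = 0
    while time.monotonic() - start < duration_s:
        iterations += 1
        if delay_s > 0:
            time.sleep(delay_s)
    return iterations, time.monotonic() - start

busy_iters, busy_elapsed = run_loop(0.5, 0.0)     # no delay: spins flat-out
paced_iters, paced_elapsed = run_loop(0.5, 0.02)  # 20 ms delay: ~50 Hz
```

Both loops still finish in about half a second, but the paced one does a tiny fraction of the work, leaving CPU headroom for everything else (including the CheesyVision receiver).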

Specifically, what environment is the autonomous unreliable in? A local scrimmage? At your build location?

Are you running your code live (running from main) or deployed (run at startup)?

How are you communicating from the dashboard to the robot?