In LabVIEW, the Teleop VI runs every time the robot receives a packet from the driver station, so the interval between iterations can be fairly irregular.
One way to achieve what I think you're looking for is to measure the time delta between each run of Teleop and multiply the joystick value by that delta before applying the scaling factor and integrating. That way the joystick power is applied "evenly" with respect to time, regardless of packet timing jitter.
For example:
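Since LabVIEW code is graphical, here is a minimal text sketch of the same logic in Python (the function name, `SCALE` constant, and packet intervals are illustrative, not part of any LabVIEW or WPILib API). The key point is that weighting the joystick by the elapsed time makes the total change depend only on how long the stick was held, not on how many packets happened to arrive:

```python
SCALE = 0.5  # setpoint change per second at full joystick deflection (illustrative)

def integrate_setpoint(setpoint, joystick_value, dt, scale=SCALE):
    """One Teleop iteration: weight the joystick by the elapsed time dt
    (seconds since the previous iteration) before integrating."""
    return setpoint + joystick_value * scale * dt

# With time-weighting, the same total elapsed time produces the same total
# change, no matter how irregular the individual packet intervals are.
regular = 0.0
for dt in [0.02, 0.02, 0.02, 0.02, 0.02]:    # steady 20 ms packets
    regular = integrate_setpoint(regular, 1.0, dt)

irregular = 0.0
for dt in [0.01, 0.05, 0.02, 0.015, 0.005]:  # jittery packets, same 100 ms total
    irregular = integrate_setpoint(irregular, 1.0, dt)

print(abs(regular - irregular) < 1e-9)  # → True
```

In the real Teleop loop, `dt` would come from reading a free-running timer (e.g. the Tick Count or High Resolution Relative Seconds function in LabVIEW) on each iteration and subtracting the value stored from the previous iteration in a shift register or feedback node.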
