In the code that you posted, the delay was only in the (vision) disabled case; when vision is enabled, that loop is free-running. Most of the advanced Vision VIs use external code to do their work, but they still run as in-line processes (the subVI waits for the processing to complete before returning). Any wait in sequence with the code will slow that loop down, freeing up the processor. In my opinion, the vision loop has a lower priority than actually driving the robot. I didn't mention setting VI priorities last time, to keep things simple. Setting the vision VI to a lower execution priority may help, but other processes may still be impacted (other low-priority processes that you may want to run reliably). I would typically use both a Wait primitive AND priority settings to achieve the desired balance.
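LabVIEW is graphical, so here's a rough text-language analogue of the idea, a minimal Python sketch (the thread setup, period, and function names are my own, not from the original code): a wait placed in sequence with the loop body throttles the loop and yields the processor between iterations, just like dropping a Wait primitive into the vision loop.

```python
import threading
import time

def vision_loop(stop, period_s=0.05):
    """Vision-style worker loop (illustrative only).

    Without the sleep, this loop is free-running: it eats a whole core
    and starves other work. The sleep plays the role of a LabVIEW Wait
    primitive wired in sequence with the processing code.
    """
    while not stop.is_set():
        # ... image acquisition and processing would go here ...
        time.sleep(period_s)  # yield the processor between iterations

stop = threading.Event()
worker = threading.Thread(target=vision_loop, args=(stop,), daemon=True)
worker.start()
time.sleep(0.2)   # let the loop run a few iterations
stop.set()
worker.join()
```

Note that the sleep alone already frees CPU time; priority settings are the second lever, used when a throttled loop still crowds out more important work.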
I would also take a second look at your image processing, although I didn't see anything too resource-intensive there. You could also try using a lower-resolution image. While making some videos, we found that even the lowest resolution setting worked fairly well, while greatly reducing processing.
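To put a rough number on the savings (the resolutions below are illustrative; the post doesn't name specific camera modes), per-pixel processing cost scales with the pixel count, so dropping resolution pays off quadratically:

```python
# Back-of-the-envelope: a pixel-by-pixel algorithm's per-frame work
# scales with pixel count. Halving each dimension quarters the work.
full_res = 640 * 480   # 307,200 pixels (example "high" mode)
low_res = 160 * 120    # 19,200 pixels (example "low" mode)
speedup = full_res // low_res
print(speedup)  # 16x less per-frame work
```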
PS: Thanks, Greg, for reminding me about priorities, and for the great march through the pixels.
