So we just came back from an event, and I noticed that as time went on, starting the robot code took an increasing amount of time: from about 4 seconds at unbag to 15 seconds at bag. It didn't seem to be user code, because I have prints at the start and the end of robotInit, and it was printing the start message only a second or two before the Robot Code LED turned green. I was wondering if anyone else has noticed this or has a reasonable explanation for the lag.
Also, kinda related, but halfway through the competition the first deploy stopped working. We had to cancel the first deploy while it was looking for the RIO and restart the deploy to get it to work (sometimes multiple times). It wasn't a connection issue or anything, because I made sure the driver station showed the Robot Code LED green before deploying, but it was no help…
Did not have this problem today. Do you have gyros, a camera, or a navX? (Though we have a navX and it works great.)
> Also, kinda related, but halfway through the competition the first deploy stopped working. We had to cancel the first deploy while it was looking for the RIO and restart the deploy to get it to work (sometimes multiple times). It wasn't a connection issue or anything, because I made sure the driver station showed the Robot Code LED green before deploying, but it was no help…
I think it's mDNS-related. I had issues with this yesterday on my Mac; basically each time pyfrc executes a command (I think there are… 4 of them?), it redoes the mDNS lookup, so each command is slloooooooow. I ended up just telling it to SSH to the self-assigned IP that the roboRIO gives itself, and then it was back to normal speed.
We have a gyro and an IP camera, but we've had both since day 1.
> I think it's mDNS-related. I had issues with this yesterday on my Mac; basically each time pyfrc executes a command (I think there are… 4 of them?), it redoes the mDNS lookup, so each command is slloooooooow. I ended up just telling it to SSH to the self-assigned IP that the roboRIO gives itself, and then it was back to normal speed.
Probably. After this event I absolutely hate mDNS. Switching to a static IP at unbag time.
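For anyone making the same switch: the FRC convention for static addressing is 10.TE.AM.xx, with the roboRIO conventionally at .2. A tiny sketch of the mapping (the helper name is mine, not from any library):

```python
def team_to_rio_ip(team: int) -> str:
    """Map an FRC team number to the conventional static roboRIO IP (10.TE.AM.2)."""
    if not 1 <= team <= 25599:
        raise ValueError("team number out of range for 10.TE.AM.x addressing")
    te, am = divmod(team, 100)  # e.g. 1234 -> TE=12, AM=34
    return f"10.{te}.{am}.2"

print(team_to_rio_ip(1234))  # → 10.12.34.2
print(team_to_rio_ip(254))   # → 10.2.54.2
```

Once you know the address, pointing SSH/deploy at it skips the mDNS lookup entirely.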
Heh, I’m afraid that the team will no longer be using Python after last weekend’s competition (not my decision). Hopefully (not hopefully?) another team has this same issue and will be able to shed some light on it.
So we switched back to Python, and this time it didn't seem to get progressively longer; it just seemed to take a generally long time. When we were using Java there was maybe 1–2 seconds between the Robot Code LED turning on and off, and with Python there is a good 10 seconds. Any thoughts?
My expectation is about 5–10 seconds. I haven't looked too deeply into this, but I believe it's because there are a lot of wpilib .py files that all have to be loaded, regardless of whether you use them or not. Loading the HAL ctypes wrappers is a lot of annoying overhead too.
Just did this experiment:
# time python3 -c 'import wpilib'
real 0m4.450s
user 0m3.410s
sys 0m0.430s
# time python3 -c 'import hal'
real 0m2.413s
user 0m1.250s
sys 0m0.210s
I would love to see this time go down, but I suspect it would take a lot of work to do so (one idea: cythonize everything). Pull requests welcome!
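If anyone wants to dig into which imports dominate, you can time them individually with nothing but the stdlib (the helper below is just an illustration; on the RIO you'd pass "wpilib" or "hal" instead of "json"):

```python
import importlib
import time

def time_import(name: str) -> float:
    """Return wall-clock seconds taken to import a module by name.

    Note: if the module is already in sys.modules this measures the
    (fast) cache hit, so run it in a fresh interpreter for real numbers.
    """
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

print(f"json: {time_import('json'):.3f}s")
```

Newer Pythons (3.7+) can also print a per-module breakdown with `python3 -X importtime -c 'import wpilib'`.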
Python does support reloading modules (importlib.reload() in Python 3), so in theory one could auto-reload code on demand. pyfrc's deploy even has an --inplace option that doesn't delete the existing code, so that the yeti framework can do this sort of thing. It would require some deep magic integration somewhere to make it more general.
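A minimal sketch of what that reload looks like (the throwaway module name and temp directory here are purely for the example):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Write a throwaway module, import it, change it on disk, then reload it.
moddir = Path(tempfile.mkdtemp())
sys.path.insert(0, str(moddir))

(moddir / "tunable_demo.py").write_text("SPEED = 0.5\n")
import tunable_demo
print(tunable_demo.SPEED)  # → 0.5

(moddir / "tunable_demo.py").write_text("SPEED = 0.75\n")
importlib.reload(tunable_demo)  # re-executes the module in place
print(tunable_demo.SPEED)  # → 0.75
```

The hard part isn't the reload itself, it's that anything already holding references to the old objects (commands, subsystems) won't see the new code, which is why it needs framework support to be general.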
Most of the time, the real answer is to use ntproperty so that you have easy-to-use tunables, instead of redeploying code.
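For anyone who hasn't seen it: ntproperty is just a class attribute backed by a NetworkTables key, so the dashboard can change it live. Here's a dependency-free sketch of the same descriptor pattern (the Tunable class and its dict backing store are mine, standing in for the real NetworkTables server):

```python
class Tunable:
    """Descriptor that reads/writes a value in a shared table, the way
    ntproperty does with NetworkTables (this stand-in uses a plain dict)."""

    table: dict = {}  # stand-in for the NetworkTables server

    def __init__(self, key: str, default):
        self.key = key
        Tunable.table.setdefault(key, default)

    def __get__(self, obj, objtype=None):
        return Tunable.table[self.key]

    def __set__(self, obj, value):
        Tunable.table[self.key] = value


class Shooter:
    # With real pynetworktables this would be:
    #   speed = ntproperty('/SmartDashboard/shooter_speed', 0.75)
    speed = Tunable('/SmartDashboard/shooter_speed', 0.75)


s = Shooter()
print(s.speed)  # → 0.75
Tunable.table['/SmartDashboard/shooter_speed'] = 0.9  # "dashboard" changes it
print(s.speed)  # → 0.9
```

The point is that tweaking a constant becomes a dashboard edit rather than a full redeploy-and-restart cycle.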
As long as it connects (which we fixed by switching to static IPs), neither of my teams have had NetworkTables issues on the field that weren’t of their own doing.
If the pyfrc libraries were compiled before being uploaded to the RIO (if they aren’t already), would that make it significantly faster?
Go look at /usr/local/lib/python3.5/site-packages/wpilib/__pycache__
Bytecode-compiled .pyc files mainly save the parse/compile step at import time; the module-level initialization code still has to run either way. If compilation is a meaningful part of the startup cost it may help, but I wouldn't expect the difference to be dramatic.
Anecdotally, our robotpy deploy times dropped by at least a couple of seconds once we switched our roboRIO to a static IP and used that IP (not the hostname) to deploy. I would also second Dustin's advice on using tunables when practical, and just avoiding redeploying for every tweak.
Although, coming from LabVIEW in previous years, our team still finds the robotpy deploy times to be a vast improvement.