As I mentioned in the other thread, Patrick (my programming partner) managed to fix the lag problem by running the driver station under the developer account at 320x240 resolution. That worked…for a day. Now the familiar half-second lag is back, even at 320x240.
Does anybody at all have an idea what’s going on? I really don’t want to abandon the camera.
I’m not sure which thread is which, but as I mentioned elsewhere, a big lag of around five seconds is caused by the classmate not being able to keep up with the cRIO. A small lag of 0.5 seconds or so is probably not unreasonable and will depend on the image size.
Since you’ve started a new thread anyway, please summarize the issues, what you’ve altered, and how the symptoms changed.
OK, here’s a summary of the issues.
Lag started off at five seconds. Patrick set all camera settings to their maximums (highest resolution, 0 compression, etc.), and the lag dropped to half a second.
We found that, contrary to expectations, decreasing the resolution and increasing the compression both increased the lag.
We then switched to the developer account and found that lag at 640x480 was still half a second. Upon switching to 320x240, the lag decreased to virtually nil and, for the first time, we had a live feed that was actually live.
A day later, with no altering of camera settings, lag time increased to half a second again.
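For a sense of why resolution matters so much here, a rough back-of-envelope sketch of MJPEG frame sizes at the two resolutions above may help. The bytes-per-pixel figure and frame rate are illustrative assumptions, not measured values for the Axis camera:

```python
# Rough estimate of MJPEG frame size and bandwidth at the two
# resolutions discussed above. bytes_per_pixel = 0.25 is an assumed
# figure for moderately compressed JPEG, not an Axis camera spec.

def mjpeg_frame_kb(width, height, bytes_per_pixel=0.25):
    """Approximate size of one JPEG frame in kilobytes."""
    return width * height * bytes_per_pixel / 1024

def stream_kbps(width, height, fps=30, bytes_per_pixel=0.25):
    """Approximate stream bandwidth in kilobits per second."""
    return mjpeg_frame_kb(width, height, bytes_per_pixel) * 8 * fps

for w, h in [(640, 480), (320, 240)]:
    print(f"{w}x{h}: ~{mjpeg_frame_kb(w, h):.0f} KB/frame, "
          f"~{stream_kbps(w, h):.0f} kbit/s at 30 fps")
```

Under these assumptions, dropping from 640x480 to 320x240 cuts the data to be moved and decoded by a factor of four, which is consistent with the classmate keeping up at the lower resolution.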
You might find this answer from the GDC helpful: http://forums.usfirst.org/showthread.php?t=14284
I did some testing and didn’t see any difference in lag between the Developer and Driver accounts. For me, using LabVIEW, a compression of 5% helped; beyond that it didn’t make much difference, except that very high compression lost sight of the target.

While checking loop rates I started out doing some averaging, but I also watched individual loop times. There were occasions when the individual times were obviously too fast (like 2 ms). We did a little poking around and found a timeout in the vision code that was set to 100 ms. This means that if acquiring an image took longer, the call would return anyway with empty data; the next time through the loop, the data might be there after only another millisecond or two. So for two loops we really only got one set of vision data, but the timing seemed twice as fast (or at least faster) when averaging.
This doesn’t affect actual performance much, but we will be fixing it in the near future. If you want to look at it yourself, it’s in Program Files\National Instruments\LabVIEW 8.6\vi.lib\Robotics Library\WPI\Camera\Send Images To PC.vi, where the Wait for Raw Image String subVI has 100 wired to it.
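The timing effect described above can be sketched outside LabVIEW. This is a hypothetical model, not the actual VI logic: a frame that takes just over the 100 ms timeout produces one empty loop and one near-instant loop, so the averaged loop time halves while only one image actually arrived.

```python
# Hypothetical sketch of how a 100 ms wait timeout can skew averaged
# loop timing. When a frame takes longer than the timeout, the wait
# returns empty data; the next iteration picks the frame up almost
# immediately. Two iterations deliver one image, so the average loop
# time looks roughly twice as fast as the real frame rate.

TIMEOUT_MS = 100

def wait_for_image(arrival_ms, waited_so_far):
    """Wait up to TIMEOUT_MS for a frame; return (loop_ms, got_image)."""
    remaining = arrival_ms - waited_so_far
    if remaining > TIMEOUT_MS:
        return TIMEOUT_MS, False      # timed out: returns with empty data
    return max(remaining, 0), True    # frame arrived within the timeout

# One frame that takes 102 ms to arrive spans two loop iterations:
arrival = 102
t1, got1 = wait_for_image(arrival, waited_so_far=0)   # times out at 100 ms
t2, got2 = wait_for_image(arrival, waited_so_far=t1)  # frame ready 2 ms later

print(f"loop 1: {t1} ms, image={got1}")
print(f"loop 2: {t2} ms, image={got2}")
print(f"average loop time: {(t1 + t2) / 2} ms for one image")
```

The average reads 51 ms per loop even though the camera only delivered one image every 102 ms, which matches the suspiciously fast individual loop times mentioned above.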
Thank you. I’ll be waiting for the update, then.