Quote:
Originally Posted by marshall
Our 'expert' (and I use the term loosely) advice is to limit your streaming to an absolute minimum and to instead grab single frames where appropriate. We did this past year and it worked great. No bandwidth issues.
This is good advice in general. While I have had good luck with streaming in the past (I don't know whether that was down to how our code was structured, luck with particular venues' WiFi, or both), if you are doing image processing you can probably get away with single frames for many applications, and doing so makes you more robust to unfavorable WiFi conditions.
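As a rough sketch of the single-frame approach: instead of holding an MJPEG stream open, fetch one JPEG on demand and decode it. The snapshot URL below is just a placeholder (many IP cameras expose one); swap in whatever your camera or coprocessor actually provides.

```python
# Minimal sketch: grab one frame on demand instead of streaming continuously.
# The snapshot URL is an assumption; adjust it for your camera.
import urllib.request
import numpy as np
import cv2

SNAPSHOT_URL = "http://10.0.0.11/jpg/image.jpg"   # hypothetical camera address

def grab_single_frame(url=SNAPSHOT_URL, timeout=1.0):
    """Fetch a single JPEG and decode it, rather than holding a stream open."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = np.frombuffer(resp.read(), dtype=np.uint8)
    return cv2.imdecode(data, cv2.IMREAD_COLOR)    # None if the decode fails

frame = grab_single_frame()
if frame is not None:
    pass  # run your vision pipeline on this one image
```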
A common use case for an onboard camera is automated alignment to a vision target. In this case the target is not moving, so (in theory*) you only need a single image to compute the transform between your robot and the target. Once you have that transform, you can close the loop with other sensors (e.g., a gyro). Camera-in-the-loop control schemes suffer from latency, low control bandwidth, and timing jitter, whereas using the camera only to derive a setpoint for faster sensors sidesteps those problems.
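Here is a rough sketch of that pattern. The helper names (read_gyro_heading, set_turn_rate, bearing_to_target_deg) are placeholders for your own sensor, drivetrain, and vision code, not any particular library's API:

```python
# Sketch of "camera picks the setpoint, gyro closes the loop".
import time

def read_gyro_heading() -> float:
    return 0.0          # placeholder: return the gyro heading in degrees

def set_turn_rate(cmd: float) -> None:
    pass                # placeholder: command the drivetrain to turn

def bearing_to_target_deg(frame) -> float:
    return 0.0          # placeholder: bearing from your detection + calibration

def align_to_target(frame, kp=0.02, tolerance_deg=1.0):
    # One vision measurement fixes the heading setpoint up front...
    setpoint = read_gyro_heading() + bearing_to_target_deg(frame)
    # ...then a fast gyro-only loop turns to it; the camera's latency and
    # timing jitter never enter the feedback path.
    while abs(setpoint - read_gyro_heading()) > tolerance_deg:
        error = setpoint - read_gyro_heading()
        set_turn_rate(kp * error)       # simple proportional controller
        time.sleep(0.02)                # ~50 Hz control loop
    set_turn_rate(0.0)
```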
* In practice, multiple sources of error (errors in the camera's intrinsic and extrinsic calibration, detection errors, robot odometry drift) mean that you probably want to take a sequence of frames as you turn/move toward the target and iteratively refine your estimate of where it is.
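A minimal sketch of that refinement, assuming you just keep an incremental average of the target heading expressed in the gyro frame; a Kalman filter or weighting by detection quality would be the natural next step:

```python
# Each new frame's detection updates a running estimate of the target heading
# in the gyro frame, so no single noisy measurement dominates the setpoint.
class TargetHeadingEstimate:
    def __init__(self):
        self.estimate_deg = None
        self.count = 0

    def update(self, gyro_heading_deg: float, bearing_deg: float) -> float:
        # Work in the gyro frame so measurements taken while turning agree.
        measurement = gyro_heading_deg + bearing_deg
        self.count += 1
        if self.estimate_deg is None:
            self.estimate_deg = measurement
        else:
            # Incremental mean over all frames seen so far.
            self.estimate_deg += (measurement - self.estimate_deg) / self.count
        return self.estimate_deg
```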