My team has, in the past, only used a camera for the occasional vision processing, but we’re considering having a livestream on the Driver Station this year; with the possibility of stacks blocking your vision, it seemed necessary.
What are your thoughts on this? Who else is planning on doing this? For drivers that have driven with a livestream, how useful was it, and did you actually use it?
We were talking about doing this too. We’re considering mounting the camera close to the ground (on a bottom-stacker robot), so that the driver can see totes to line up with them.
We’re also considering using field-centric mecanum drive for normal driving, and having the driver switch to robot-centric when using the camera feed for positioning. We call switching to the robot’s point of view “scoping.”
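The core of that field-centric-to-robot-centric switch is just rotating the joystick vector by the negative of the gyro heading. Here’s a minimal sketch of that math; the class and method names are illustrative, not from any particular library:

```java
// Hypothetical sketch: convert a field-centric stick command to a
// robot-centric one by rotating the (x, y) stick vector by the
// negative of the robot's gyro heading. In "scoping" mode you would
// skip this rotation and feed the stick straight to the drive.
public class FieldCentric {

    /**
     * Rotate the stick vector (x, y) by -headingDegrees so that
     * "forward on the stick" means "away from the driver" regardless
     * of which way the robot is facing. Returns {x', y'}.
     */
    public static double[] toRobotCentric(double x, double y, double headingDegrees) {
        double rad = Math.toRadians(headingDegrees);
        double cos = Math.cos(rad);
        double sin = Math.sin(rad);
        // Standard 2D rotation by -heading: undoes the robot's rotation.
        return new double[] { x * cos + y * sin, -x * sin + y * cos };
    }
}
```

With the robot facing 90° left of the driver, a pure "forward" field command (0, 1) comes out as a pure "strafe right" robot command (1, 0), which is what lets the driver ignore robot orientation during normal driving.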
That’s actually exactly what we’re thinking of at the moment xD and the camera would also double for vision processing during auto.
We’ve done it before, but if your robot has to process too many things at once, the camera feed can cause your RIO to crash. However, this only happened to us on the cRIO, and since the roboRIO is considerably more powerful, you likely won’t have the same issue this year. There’s a button for enabling/disabling the camera on the driver station dashboard that can be quite useful: only enable the camera when you need it instead of leaving it on all the time.