View Full Version : LabVIEW Vision Aiming
We are a team in the process of programming vision. We have integrated the LabVIEW example into our project and have the robot aiming at the target using a PID loop that uses the distance off-center (x) as feedback. We are aiming to switch our feedback to a gyro (or encoders) instead, as we don't want to deal with the on-field lag associated with camera-based feedback. Are there any tutorials/resources out there for help with this sort of thing?
We also wanted to display our processed image to the Dashboard, instead of displaying the unaltered image. How would we do this?
Thanks!
414cnewq
09-08-2016, 09:53
Have you looked into the presentation by the Cheesy Poofs at Championship?
(Slides and video are in this (https://www.chiefdelphi.com/forums/showthread.php?t=147568&highlight=integrating+computer+vision) thread.) It won't help you with video on the dashboard, but it will help you with using gyro input.
We are aiming to switch our feedback to a gyro (or encoders) instead, as we don't want to deal with the on-field lag associated with camera-based feedback. Are there any tutorials/resources out there for help with this sort of thing?
Our code may work as a (cluttered) example for this. Short version: when you capture the image, you also sample your gyro at the same moment. You calculate the new target angle for the robot from that snapshot and feed it as the new setpoint to a PID loop with the gyro as the process variable.
https://github.com/FRC-836/2016-RoboBees-OffSeason
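The approach above can be sketched roughly like this (hypothetical Python pseudocode, not from the linked repo; the class and method names are placeholders, and a bare P controller stands in for a full PID):

```python
class AimController:
    """Converts a vision offset into a latched gyro setpoint.

    When a frame is processed, the target angle is computed once from the
    gyro heading *at capture time*; afterwards the control loop runs on
    the low-latency gyro alone, so camera lag no longer sits inside the
    feedback loop.
    """

    def __init__(self, kp):
        self.kp = kp          # proportional gain only, for illustration
        self.setpoint = None  # absolute heading to turn to, in degrees

    def on_new_frame(self, heading_at_capture_deg, target_offset_deg):
        # Latch the setpoint: the heading sampled when the image was
        # taken, plus the measured offset of the target from image center.
        self.setpoint = heading_at_capture_deg + target_offset_deg

    def update(self, current_heading_deg):
        # Runs every control cycle, using only the gyro as feedback.
        if self.setpoint is None:
            return 0.0
        error = self.setpoint - current_heading_deg
        return self.kp * error
```

Even if vision frames arrive late or infrequently, `update()` keeps running at the full control rate, which is the whole point of moving the camera out of the inner loop.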
We also wanted to display our processed image to the Dashboard, instead of displaying the unaltered image. How would we do this?
The easiest way to do this is to run the image processing on both the robot and the dashboard. If you use the same parameters on the dashboard as on the robot, you'll get the same processed image. Recompressing the processed image on the robot and streaming it to the dashboard would consume significant roboRIO processing time and should be avoided.
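A minimal sketch of the "same parameters on both sides" idea, using a plain-Python HSV threshold (the bounds are made-up placeholders, and a real pipeline would use something like OpenCV; the point is that the robot and the dashboard both apply this identical function to the raw camera frame, so the dashboard reproduces the processed image locally):

```python
# Shared threshold parameters; keep one copy used by both robot and
# dashboard code so the two pipelines can never drift apart.
THRESHOLD_PARAMS = {"hue": (50, 90), "sat": (100, 255), "val": (80, 255)}

def in_range(pixel_hsv, params):
    """Return True if a single (h, s, v) pixel passes the threshold."""
    h, s, v = pixel_hsv
    return (params["hue"][0] <= h <= params["hue"][1]
            and params["sat"][0] <= s <= params["sat"][1]
            and params["val"][0] <= v <= params["val"][1])

def binarize(frame_hsv, params=THRESHOLD_PARAMS):
    """Apply the same threshold the robot uses; run this on the dashboard
    PC against the raw stream instead of sending the processed image."""
    return [[255 if in_range(px, params) else 0 for px in row]
            for row in frame_hsv]
```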
Would this increase the number of packets sent over the FMS? (concerned about lag). Or would the computer process the image locally, and not slow down the connection?
The driver station/computer would process the image locally. The only thing being sent over the network is the original image from the camera, so dashboard-side processing adds no traffic. As long as your image bandwidth stays under 5 Mbps you shouldn't induce additional lag in the system, since you'll be under your limits. That said, you should always strive to use the lowest-resolution image you can get away with, to decrease both bandwidth and image-processing time.
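For a back-of-the-envelope check against that bandwidth limit, something like the following helps (the 10:1 JPEG compression ratio is an illustrative assumption, not a measured value; real MJPEG ratios vary with scene content and quality settings):

```python
def stream_mbps(width, height, fps, bytes_per_pixel=3, jpeg_ratio=0.10):
    """Rough estimated MJPEG stream bandwidth in megabits per second.

    Assumes each frame compresses to about `jpeg_ratio` of its raw
    RGB size -- a ballpark figure, not a guarantee.
    """
    raw_bytes_per_frame = width * height * bytes_per_pixel
    return raw_bytes_per_frame * jpeg_ratio * fps * 8 / 1_000_000

# Under these assumptions, 320x240 @ 15 fps comes in well under 5 Mbps,
# while 640x480 @ 30 fps blows far past it.
```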