Hi everyone. I have a problem with image processing. How can I use image processing in autonomous? I have watched all the videos on YouTube, but I couldn't understand them.
There are several ways to process images (e.g., in the robot LabVIEW code, in the LabVIEW Dashboard, GRIP, RoboRealm, etc.). Which one are you using?
I'm using the LabVIEW code. I have seen the LabVIEW Dashboard, but I don't know where to start with that.
The vision processing algorithm really shouldn't change between auto and tele-op. In fact, if you have an automated aiming feature in tele-op, even the robot motion shouldn't be different.
I'm not going to recommend the dashboard, as it introduces latency into your images (there are ways to mitigate that, but it's harder). That said, I did it that way in the offseason, so here goes:
- Filter out your surrounding box. You can use a variety of color filters, shape filters, etc.
- Get the information from this box (center, width, etc.).
- Process that info to determine how much you need to turn and how far away the target is.
- Send this info to the robot.
- Move the robot to the desired angle/distance.
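LabVIEW is graphical, so the steps above can't be shown as LabVIEW text, but here is a rough sketch of the same math in Python. Everything here is an assumption for illustration: the camera field of view, image width, real-world target width, and the synthetic binary mask standing in for the output of your color/shape filter. It shows the middle steps (get the box info, compute turn angle and distance) with a simple pinhole-camera model.

```python
import numpy as np

# Hypothetical camera parameters -- measure/tune these for your own camera.
IMAGE_WIDTH = 320
HORIZONTAL_FOV_DEG = 60.0   # assumed horizontal field of view
TARGET_WIDTH_M = 0.5        # assumed real-world width of the target

def find_target(mask):
    """Return (center_x, center_y, width_px) of the masked blob, or None."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    width_px = xs.max() - xs.min() + 1
    return (xs.mean(), ys.mean(), width_px)

def turn_angle_deg(center_x):
    """Horizontal offset from the image center, converted to degrees."""
    deg_per_px = HORIZONTAL_FOV_DEG / IMAGE_WIDTH
    return (center_x - IMAGE_WIDTH / 2) * deg_per_px

def distance_m(width_px):
    """Pinhole-model distance estimate from the target's apparent width."""
    focal_px = IMAGE_WIDTH / (2 * np.tan(np.radians(HORIZONTAL_FOV_DEG / 2)))
    return TARGET_WIDTH_M * focal_px / width_px

# Synthetic example: a 240x320 binary mask, as if the color filter ran already.
mask = np.zeros((240, IMAGE_WIDTH), dtype=bool)
mask[100:140, 200:240] = True   # pretend the filter found this box

cx, cy, w = find_target(mask)
print(f"turn {turn_angle_deg(cx):.1f} deg, distance {distance_m(w):.2f} m")
```

In LabVIEW you would do the filtering with the Vision Assistant / IMAQ VIs instead of NumPy, but the "box center minus image center, scaled by degrees-per-pixel" step is the same idea either way.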
If you need help, you can PM me.
See Tutorial 8 in the Tutorials tab on the LabVIEW Getting Started page.
Exactly. In the FRC Examples you can see how to analyze an image with LabVIEW.