There’s no easy answer to that question. Start from the image-processing steps you’re doing and figure out what you want to do with the results. Think of this part of the code as where you turn the image-processing output (e.g. contours or blobs) into the final pieces of information your robot code needs in order to perform an action.
Hmm, okay. So would I need to add a method to GripPipeline that outputs the final information from the vision-processing algorithm?
Essentially yes.
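As a rough illustration, here’s a minimal Python sketch of that last step. All names here (`TargetInfo`, `largest_target`, the assumed 320-pixel image width) are hypothetical, and the contours a real GripPipeline would hand you are modeled as plain bounding boxes to keep the example self-contained:

```python
# Hypothetical sketch: turn GRIP contour output into the numbers the robot needs.
# A real pipeline would give OpenCV contours; here they are modeled as
# bounding boxes (x, y, w, h) so the example runs without OpenCV.

from dataclasses import dataclass
from typing import List, Optional, Tuple

IMAGE_WIDTH = 320  # assumed camera resolution (an assumption, not from GRIP)

@dataclass
class TargetInfo:
    center_x: float  # pixel x of the target's center
    offset: float    # normalized offset from image center, -1.0 .. 1.0
    area: float      # bounding-box area, a rough proxy for target size/distance

def largest_target(boxes: List[Tuple[int, int, int, int]]) -> Optional[TargetInfo]:
    """Pick the biggest bounding box and summarize it for the robot code."""
    if not boxes:
        return None  # no target seen this frame
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    cx = x + w / 2.0
    offset = (cx - IMAGE_WIDTH / 2.0) / (IMAGE_WIDTH / 2.0)
    return TargetInfo(center_x=cx, offset=offset, area=float(w * h))

# Example: two detected blobs; the robot would steer toward the larger one.
info = largest_target([(10, 20, 30, 40), (100, 50, 60, 80)])
```

The robot code then only ever consumes `TargetInfo` (e.g. feeding `offset` into a turn controller), so the vision details stay contained in the pipeline class.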
Awesome, thank you!