Quote:
Originally Posted by JewishDan18
Comparing the bounding box area to the particle area of the convex hull is a good place to start
|
Our team was using the Vision Assistant to try to create an algorithm. At first we tried using convex hull, but then realized that wouldn't work (or at least we didn't see any way for it to work) in FRC Java, so we rooted around in the API and found this field: particleQuality.
This is a ratio between a particle's area and the pixels that lie within the particle but aren't "true" in the binary image. Fooling around with some pictures of a backboard, we found that square targets (like we're using) usually fall within 35%-55%, while other bright things, like reflections and fluorescent lights, land somewhere around 90%. Obviously these numbers depend somewhat on your thresholds, but we've managed to find the targets (and only the targets) pretty consistently, from almost any angle and distance.
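The filtering idea above can be sketched in plain Java. This is a minimal, self-contained illustration of the heuristic, not the actual NIVision API: the `Particle` record, the `fillRatio()` computation (particle area over bounding-box area, a stand-in for the quality metric described above), and the 35%-55% window are all assumptions for demonstration. In real FRC Java you would pull these values from your particle analysis reports instead.

```java
import java.util.List;
import java.util.stream.Collectors;

public class TargetFilter {
    // Hypothetical stand-in for a vision particle report; real code would
    // read these measurements from the vision library's analysis results.
    record Particle(int area, int bbWidth, int bbHeight) {
        // Fraction of the bounding box filled by "true" pixels. A hollow
        // rectangular target frame fills far less of its box than a solid
        // blob like a ceiling light or reflection.
        double fillRatio() {
            return (double) area / (bbWidth * bbHeight);
        }
    }

    // Keep only particles in the assumed 35%-55% window for hollow targets.
    static List<Particle> likelyTargets(List<Particle> particles) {
        return particles.stream()
                .filter(p -> p.fillRatio() >= 0.35 && p.fillRatio() <= 0.55)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Particle> found = likelyTargets(List.of(
                new Particle(45, 10, 10),    // hollow target frame: 45% fill
                new Particle(92, 10, 10),    // solid reflection: 92% fill
                new Particle(50, 10, 10)));  // another target: 50% fill
        System.out.println(found.size()); // prints 2
    }
}
```

The exact cutoffs would need tuning against your own camera exposure and HSV thresholds, since (as noted above) the ratios shift with your binarization settings.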