We’ve had a very good experience with Chameleon Vision deployed on a Raspberry Pi, but we seem to be missing some of the necessary output in NetworkTables. Specifically, Chameleon populates NT with the sizes and rotation of the blue and red bounding rectangles, but not their positions. Without positions we don’t see a way of calculating the actual target location: the midpoint of the top edge of the blue rectangle.
Are we missing something? Or is this data just not exposed?
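For what it’s worth, if the rectangle centers were published, the point we’re after would just be the rotated top-edge midpoint. A quick sketch of the math (variable names are ours, not Chameleon’s NT keys):

```python
import math

def top_edge_midpoint(cx, cy, w, h, angle_deg):
    """Midpoint of the top edge of a rotated rectangle.

    cx, cy: rectangle center (pixels); w, h: width and height;
    angle_deg: rotation, counterclockwise, 0 = upright.
    Image coordinates: y grows downward, so the top edge is at -h/2.
    """
    theta = math.radians(angle_deg)
    # Offset from center to the top-edge midpoint, then rotate it.
    ox, oy = 0.0, -h / 2.0
    rx = ox * math.cos(theta) - oy * math.sin(theta)
    ry = ox * math.sin(theta) + oy * math.cos(theta)
    return cx + rx, cy + ry
```

So the size and rotation alone aren’t enough; without the center (or a corner) there’s no way to anchor this point in the image.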
Thanks.
-rg