Chameleon vision target position

We’ve had a very good experience with Chameleon Vision deployed on a Raspberry Pi, but we seem to be missing some of the necessary output in NetworkTables. Specifically, Chameleon populates NT with the sizes and rotation of the blue and red bounding rectangles, but not their positions. Without positions we don’t see a way of calculating the actual target location: the midpoint of the top edge of the blue rectangle.

Are we missing something? Or is this data just not exposed?



The x and y values are reported as the pitch and yaw angles of the target.
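For anyone landing here later: if the position comes out as pitch/yaw angles, the target location can be recovered with a bit of trigonometry, provided you know your camera geometry. A minimal sketch, assuming a fixed camera mount — the heights and mount angle below are made-up example values, not anything Chameleon provides:

```python
import math

# Assumed camera/target geometry -- measure these on your own robot.
CAMERA_HEIGHT_M = 0.50    # lens height above the floor (example value)
CAMERA_PITCH_DEG = 20.0   # upward tilt of the camera mount (example value)
TARGET_HEIGHT_M = 2.00    # height of the tracked point, e.g. the
                          # midpoint of the top edge (example value)

def target_offset(pitch_deg, yaw_deg):
    """Convert reported pitch/yaw angles to (forward, lateral) meters."""
    total_pitch = math.radians(CAMERA_PITCH_DEG + pitch_deg)
    forward = (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / math.tan(total_pitch)
    lateral = forward * math.tan(math.radians(yaw_deg))
    return forward, lateral
```

With the example numbers above, a reported pitch of 25° (so the target sits 45° above horizontal) and zero yaw puts the target 1.5 m straight ahead.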

Hm, that’s what I thought. Perhaps I’m misunderstanding something else.

What does the “Take Point” button do? Does it set the reference point? It seems to align the crosshair with the center of the red bounding box, regardless of what the Target Region is (in our case, Top). So even if we manually align the camera to point directly at the target (the midpoint of the top edge of the blue box), we get non-zero pitch and yaw… Is there an outline of the calibration process somewhere?

