I am using GRIP and the FRC Vision Raspberry Pi image for vision. I can get it to correctly identify the target; however, I want to sort through the contour data (MatOfPoint) to get the farthest-left and farthest-right points’ y values. I get the basic concept of the MatOfPoint data and can form a rectangle around it to get aiming data, but I want to be able to use the data for orientation on the field. All advice welcome, thx.
I don’t quite understand your question. I take your post to mean that you’re able to threshold and filter contours in a way that isolates your target, and that you can draw the rectangle around it.
Then once you have the rectangle drawn, you want to use it for orientation on the field?
I’m just confused because once you have your rect, you have the height, width, and x, y coordinates.
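For example, computing a horizontal aiming offset from those rect fields is straightforward. This is a minimal sketch: `RectLike` is a hypothetical stand-in for OpenCV's `org.opencv.core.Rect`, which exposes the same `x`, `y`, `width`, and `height` fields, and the image width of 320 is just an assumed camera resolution.

```java
public class AimFromRect {
    // Stand-in for org.opencv.core.Rect so this sketch runs without OpenCV.
    static class RectLike {
        int x, y, width, height;
        RectLike(int x, int y, int width, int height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    /** Pixels the rect's center is offset from the image's center column. */
    static double horizontalOffset(RectLike r, int imageWidth) {
        double rectCenterX = r.x + r.width / 2.0;
        return rectCenterX - imageWidth / 2.0;
    }

    public static void main(String[] args) {
        RectLike target = new RectLike(100, 40, 60, 30); // center x = 130
        System.out.println(horizontalOffset(target, 320)); // prints -30.0
    }
}
```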
I have the rectangle, but a rectangle limits your angle perception. I want to tell the difference in height between the left and right sides.
So you want to know what the actual corners are from the contour to calculate the difference?
Here is a stack overflow post that has example code that would do it roughly the same way I would attempt:
To make it easier, you could also try the convex hulls operation. That should give you a trapezoid-looking shape that forms around the outline of the vision target, in which it is easy to get the corner coordinates from (a well-filtered convex hull should only actually contain four coordinates, one of each corner of the trapezoid).
Here is a GRIP pipeline demonstrating the convex hulls operation
And here is the output
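To give a feel for what a convex hull operation does to a contour, here is a self-contained sketch using Andrew's monotone-chain algorithm on plain coordinate arrays. This is only an illustration — GRIP and OpenCV do this for you (in OpenCV's Java bindings it is `Imgproc.convexHull`), and the sample contour is made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HullSketch {
    /**
     * Monotone-chain convex hull, returned counter-clockwise.
     * Each point is {x, y}. Interior and edge points are discarded,
     * leaving only the outer corners of the shape.
     */
    static List<double[]> convexHull(double[][] pts) {
        double[][] p = pts.clone();
        Arrays.sort(p, (a, b) -> a[0] != b[0]
                ? Double.compare(a[0], b[0]) : Double.compare(a[1], b[1]));
        List<double[]> hull = new ArrayList<>();
        // Pass 0 builds the lower chain left-to-right;
        // pass 1 builds the upper chain over the reversed points.
        for (int pass = 0; pass < 2; pass++) {
            int start = hull.size();
            for (double[] pt : p) {
                while (hull.size() >= start + 2
                        && cross(hull.get(hull.size() - 2), hull.get(hull.size() - 1), pt) <= 0) {
                    hull.remove(hull.size() - 1); // drop points that turn the wrong way
                }
                hull.add(pt);
            }
            hull.remove(hull.size() - 1); // endpoint repeats as the next chain's start
            reverse(p);
        }
        return hull;
    }

    static double cross(double[] o, double[] a, double[] b) {
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
    }

    static void reverse(double[][] p) {
        for (int i = 0, j = p.length - 1; i < j; i++, j--) {
            double[] t = p[i]; p[i] = p[j]; p[j] = t;
        }
    }

    public static void main(String[] args) {
        // Ragged contour whose hull is a four-cornered trapezoid.
        double[][] contour = { {0, 0}, {10, 0}, {8, 5}, {2, 5}, {5, 1}, {5, 4} };
        System.out.println(convexHull(contour).size()); // prints 4
    }
}
```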
Thank you so much. I think this is what I am looking for.
What does the output of the convex hull look like in terms of Java data?
I have not used this in code before, but from the GRIP documentation and some example code I generated from GRIP, it looks like the convexHulls operation outputs a “contoursReport” in the form of a MatOfPoint containing the points of the convex hull shape, ordered counter-clockwise. The points in the MatOfPoint output would look something like this:
I still can’t find any way to get the coordinates of the points.
How are you receiving the convex hulls from the GRIP pipeline? Are you receiving them as a list of MatOfPoint?
If so, there should be a function on MatOfPoint to get the point at a given index. As a disclaimer, I have not used GRIP in code before; I am just giving my advice on what I would think it to be.
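For reference, OpenCV's Java `MatOfPoint` does have `toArray()` and `toList()` methods that convert it into indexable `org.opencv.core.Point` objects (e.g. `Point[] pts = hullMat.toArray();` then `pts[0].x`). The sketch below shows the same indexing pattern with `java.awt.Point` as a stand-in so it runs without the OpenCV native library; the coordinates are made up.

```java
import java.awt.Point;

public class PointAccess {
    /** Returns the point at the given index, as toArray() indexing would. */
    static Point pointAt(Point[] pts, int i) {
        return pts[i];
    }

    public static void main(String[] args) {
        // Stand-in for the Point[] returned by MatOfPoint.toArray().
        Point[] hull = { new Point(444, 128), new Point(445, 130) };
        Point p = pointAt(hull, 1);
        System.out.println(p.x + ", " + p.y); // prints 445, 130
    }
}
```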
I have gotten some data out of it: four values, two x values and two y values. However, the two x values are very close (444 and 445) and the two y values are very close (128 and 130), so it doesn’t make sense.
I am looking for 4 coordinate pairs.
Ah, it seems like your convex hull is detecting a shape other than a perfect trapezoid. Would you mind posting a picture of your convex hulls output image? It might be detecting a trapezoid with slightly beveled corners, for example.
In that case, you will probably have to write an algorithm that retrieves the corners from the list of points. This can be done by looping through all the points and finding the most extreme point toward each corner (top-left-most point, bottom-left-most point, etc.).
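The loop described above can be sketched like this. It uses the common trick of scoring each point by `x + y` and `x - y` to find the four corner extremes; `java.awt.Point` again stands in for `org.opencv.core.Point`, and the sample hull coordinates are invented. Note that in image coordinates y grows downward, so "top" means small y.

```java
import java.awt.Point;

public class CornerFinder {
    /**
     * Picks the four corners of a roughly trapezoidal hull:
     * top-left minimizes x + y, bottom-right maximizes x + y,
     * top-right maximizes x - y, bottom-left minimizes x - y.
     */
    static Point[] corners(Point[] hull) {
        Point tl = hull[0], tr = hull[0], br = hull[0], bl = hull[0];
        for (Point p : hull) {
            if (p.x + p.y < tl.x + tl.y) tl = p;
            if (p.x + p.y > br.x + br.y) br = p;
            if (p.x - p.y > tr.x - tr.y) tr = p;
            if (p.x - p.y < bl.x - bl.y) bl = p;
        }
        return new Point[] { tl, tr, br, bl };
    }

    public static void main(String[] args) {
        // Trapezoid with a couple of extra "beveled corner" points mixed in.
        Point[] hull = {
            new Point(100, 50), new Point(300, 60), new Point(320, 200),
            new Point(80, 210), new Point(101, 51), new Point(299, 61)
        };
        Point[] c = corners(hull);
        // Comparing left- and right-side heights gives the skew the
        // original question was after:
        int leftHeight = c[3].y - c[0].y;  // bottom-left y minus top-left y
        int rightHeight = c[2].y - c[1].y; // bottom-right y minus top-right y
        System.out.println(leftHeight + " vs " + rightHeight); // prints 160 vs 140
    }
}
```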
That may be the case and I can post an image on Wednesday, but I still can’t find enough data.