I’ve managed to get my GRIP pipeline working, and have successfully manipulated the data from NetworkTables in our Java program to detect the distance to the targets on the sides of the gear hook.
However, I now want to actually use the vision data to determine how the robot drives. Since there are two targets being detected, I felt it would make sense to draw a bounding box around the two and use the center of the bounding box as my target for the hook (as it should be approximately in the middle of the bounding rectangle). I figured that GRIP would have the ability to draw a bounding box, but lo and behold, it does not. So, I figure I can use OpenCV to do this using the boundingRect() method, but my question is: How do I do this? Is there a way for me to access the actual OpenCV code being generated by GRIP and add in the boundingRect() method?
Any help would be greatly appreciated!
So we are using the GRIP-generated code on our roboRIO, and we encountered the same problem. We tried blurring the image to turn what we detected into one large blob, but that didn’t work as well as we had hoped. What we settled on was looking at each of the contours and averaging the x values of both to find the x coordinate of the point between the pieces of tape.
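For reference, here’s a rough sketch of that averaging approach in OpenCV Java (untested; targetCenterX is just an illustrative name, and the contour list would come from the generated pipeline’s filterContoursOutput() accessor):

```java
import java.util.List;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class VisionMath {
    // Average the center x of the two detected tape contours to get the
    // x coordinate of the gap between the pieces of tape.
    // Returns -1 if fewer than two contours were found.
    public static double targetCenterX(List<MatOfPoint> contours) {
        if (contours.size() < 2) {
            return -1;
        }
        Rect left = Imgproc.boundingRect(contours.get(0));
        Rect right = Imgproc.boundingRect(contours.get(1));
        double leftCenterX = left.x + left.width / 2.0;
        double rightCenterX = right.x + right.width / 2.0;
        return (leftCenterX + rightCenterX) / 2.0;
    }
}
```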
Yeah, this was sort of what I was thinking, but I was hoping I would be able to draw a big bounding rectangle around the two contours. Guess I’ll have to give this a go instead. Thanks!
To draw a bounding box around two contours, you would just have to combine the points from both contours and feed them into the boundingRect() method. Alternatively, you can draw it onto an existing Mat with the Imgproc.rectangle() method. My team did this in our GRIP camera code; however, it is generated and not deployed to the Rio. Feel free to take a look at it here.
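Something along these lines should do it (an untested sketch; drawCombinedBox is just an illustrative name). The trick is that boundingRect() only takes one set of points, so you merge the points of both contours first:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class BoxDrawer {
    // Merge the points of two contours, compute one bounding box around
    // both, and draw it onto the frame in green with a 2-pixel border.
    public static Rect drawCombinedBox(Mat frame, MatOfPoint a, MatOfPoint b) {
        List<Point> allPoints = new ArrayList<>(a.toList());
        allPoints.addAll(b.toList());
        MatOfPoint combined = new MatOfPoint();
        combined.fromList(allPoints);
        Rect box = Imgproc.boundingRect(combined);
        Imgproc.rectangle(frame, box.tl(), box.br(), new Scalar(0, 255, 0), 2);
        return box;
    }
}
```

The hook target is then just box.x + box.width / 2.0.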
Is there any way one could do this directly in GRIP? We are using a mini Windows PC on the robot this year running GRIP (so no auto-generated code). I’ve looked, but haven’t yet been able to find a way to do this directly in GRIP.
If I auto-generate code, how would I go about running it in a Windows environment? I am not overly familiar with programming, and vision in particular, hence why I ask.
My co-processor is running a Windows environment though… directly on the robot, not the driver station. So, back to the original question: is there any way one could do this directly in GRIP in a Windows environment? At this point I’m guessing the answer is no.
This year the team is well on its way to a generated-code-based autonomous, with the GRIP pipeline running on the roboRIO, so no extra weight or space. The improvement over last year is the potential for thread/input control for two cameras. The code-generation instructions are all there in Screensteps.
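For anyone following the Screensteps route, this is roughly the shape of it on the roboRIO (a sketch based on the 2017 example, assuming the generated class is named GripPipeline and exposes a filterContoursOutput() accessor):

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.vision.VisionThread;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class Robot extends IterativeRobot {
    private final Object imgLock = new Object();
    private double centerX = 0.0;
    private VisionThread visionThread;

    @Override
    public void robotInit() {
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(320, 240);
        // Run the GRIP-generated pipeline in its own thread so image
        // processing never blocks the main robot loop.
        visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
            if (!pipeline.filterContoursOutput().isEmpty()) {
                Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                synchronized (imgLock) {
                    centerX = r.x + r.width / 2.0;
                }
            }
        });
        visionThread.start();
    }
}
```

A second camera would just be another startAutomaticCapture() call feeding its own VisionThread.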