OpenCV Help

How do we get OpenCV installed on the Rio and open it so we can work on vision tracking? Thanks.

OpenCV is already distributed with WPILib (C++ and Java)

on the WPILib website?

OpenCV does not need to be installed on the RIO because it is already there.
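
For example, with the Java CameraServer/cscore classes you can grab frames on the RIO and run OpenCV calls on them in a background thread. This is only a rough sketch, and the exact API depends a bit on your WPILib version:

```java
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class VisionExample {
  // Call this once from robotInit().
  public static void startVision() {
    new Thread(() -> {
      UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
      camera.setResolution(320, 240);

      CvSink sink = CameraServer.getInstance().getVideo();
      CvSource output = CameraServer.getInstance().putVideo("Processed", 320, 240);

      Mat frame = new Mat();
      while (!Thread.interrupted()) {
        if (sink.grabFrame(frame) == 0) {
          continue; // grab timed out, try again
        }
        // Any OpenCV processing goes here; as a placeholder, convert to HSV
        Imgproc.cvtColor(frame, frame, Imgproc.COLOR_BGR2HSV);
        output.putFrame(frame);
      }
    }).start();
  }
}
```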

I’d like to know how you intend to “open it” as OpenCV is a library, not a program.

I was asking the question for a teammate, as he doesn’t have CD, but I think he wanted to know how to program with it to practice vision.

I would not recommend jumping straight into OpenCV. It is a massive library with support for much more than what FRC would use.

Instead, I’d start with GRIP, using it to generate a “pipeline”. It’s basically OpenCV, but with visible output.
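
Here’s a rough sketch of what using a GRIP export looks like in Java, assuming you exported a class named GripPipeline whose last step is Filter Contours (the class name and getter depend on your pipeline):

```java
import java.util.ArrayList;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;

public class GripExample {
  // GripPipeline is the class GRIP generates; the getter name matches the
  // last step of your pipeline (assumed here to be "Filter Contours").
  private final GripPipeline pipeline = new GripPipeline();

  /** Runs the exported pipeline on one camera frame and counts the candidates. */
  public int countTargets(Mat frame) {
    pipeline.process(frame);
    ArrayList<MatOfPoint> contours = pipeline.filterContoursOutput();
    return contours.size();
  }
}
```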

apt-get install chiefdelphi

I kid.

Seriously though, start with GRIP and then move on to OpenCV on a raspberry pi or something similar. Get the concepts of HSV and filtering and contours down first.
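
If you want to see those three concepts in raw OpenCV, here’s a minimal Java sketch; the HSV bounds are placeholders for a green ring light that you’d tune for your own lighting:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HsvContoursExample {
  // Placeholder HSV range for a green ring light; tune these for your setup.
  private static final Scalar LOWER = new Scalar(50, 100, 100);
  private static final Scalar UPPER = new Scalar(90, 255, 255);

  public static List<MatOfPoint> findTargetContours(Mat bgrFrame) {
    // Convert to HSV so hue is separated from brightness
    Mat hsv = new Mat();
    Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

    // Keep only pixels inside the HSV range (the "filtering" step)
    Mat mask = new Mat();
    Core.inRange(hsv, LOWER, UPPER, mask);

    // Find the outlines (contours) of the remaining blobs
    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(mask, contours, hierarchy,
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    return contours;
  }
}
```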

Seconded. This is why we created GRIP.

Perfect, I’ve heard of GRIP before but didn’t remember it; I thought OpenCV was the one with the visible output. Thanks guys :slight_smile:

One more bit of advice if you are just starting out with vision: in recent memory, all FRC vision targets have utilized retroreflective tape. Your life will be significantly easier if you follow these steps:

  1. Turn off auto exposure and auto white balance
  2. Lower the exposure settings on your camera to their darkest options (a code sketch of these camera settings follows this list)
  3. Use a ring light
  4. Adjust your ring light to be bright enough to illuminate the target from the furthest distance at which you will use vision
  5. Adjust your ring light intensity so the target does not “blow out” (look white instead of green, or whatever color you are using) at the closest point you can/will use vision.
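
Here is a rough sketch of steps 1 and 2 using the cscore camera settings; treat the numbers as placeholders, since how (and whether) a camera honors them varies by model:

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;

public class CameraSetup {
  /** Locks camera settings so the lit retroreflective tape is the brightest thing in frame. */
  public static UsbCamera configure() {
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    camera.setResolution(320, 240);
    camera.setExposureManual(1);        // steps 1-2: disable auto exposure, go as dark as the camera allows
    camera.setWhiteBalanceManual(4500); // step 1: lock white balance so the hue does not drift
    camera.setBrightness(10);           // keep brightness low so only the lit tape shows up
    return camera;
  }
}
```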

Once you have done this, you will likely find identifying your target, finding its center and doing some math against that is generally pretty easy. The hard part then comes from using this information in the best manner.
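
As an example of the “find its center and do some math” part, here is a sketch that picks the largest contour and converts its pixel offset into an angle; the image width and field of view are assumptions you would measure for your own camera:

```java
import java.util.List;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class TargetMath {
  // Assumed camera parameters; measure these for your own camera.
  private static final double IMAGE_WIDTH = 320.0;
  private static final double HORIZONTAL_FOV_DEG = 60.0;

  /** Returns the yaw error (degrees) to the largest contour, or 0 if none was found. */
  public static double yawErrorDegrees(List<MatOfPoint> contours) {
    MatOfPoint largest = null;
    double largestArea = 0;
    for (MatOfPoint contour : contours) {
      double area = Imgproc.contourArea(contour);
      if (area > largestArea) {
        largestArea = area;
        largest = contour;
      }
    }
    if (largest == null) {
      return 0.0;
    }
    Rect box = Imgproc.boundingRect(largest);
    double centerX = box.x + box.width / 2.0;
    // Pixels off-center, scaled to degrees by the camera's field of view
    double offsetPixels = centerX - IMAGE_WIDTH / 2.0;
    return offsetPixels * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH);
  }
}
```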

Last tip: do not forget to account for the latency in your camera/processing code when controlling your robot. It is simpler to use vision to acquire the target and then use a separate closed control loop (encoders/gyro/etc.) to handle targeting/moving.
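
One common way to handle that latency (sketch only; the gyro class and gain here are just assumptions): record the gyro heading when the frame was grabbed, add the vision angle to get a goal heading, and then close the loop on the gyro instead of the camera:

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;

public class TurnToTarget {
  // Any WPILib gyro works here; the ADXRS450 is just an example.
  private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();
  private static final double KP = 0.02; // proportional gain, tune on your robot

  private double goalHeadingDeg = 0.0;

  /**
   * Called when a vision result arrives. headingAtCaptureDeg is the gyro angle
   * recorded when the frame was grabbed, so camera/processing latency does not
   * corrupt the setpoint.
   */
  public void onVisionResult(double headingAtCaptureDeg, double yawErrorDeg) {
    goalHeadingDeg = headingAtCaptureDeg + yawErrorDeg;
  }

  /** Called every robot loop; returns a turn command for the drivetrain. */
  public double turnOutput() {
    double error = goalHeadingDeg - gyro.getAngle();
    return KP * error;
  }
}
```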

All of this and more can be found in this video from FRC254: https://youtu.be/rLwOkAJqImo (if anyone has a better version, let me know). Their code is also very instructive, but very high-level in how it integrates vision into their robot.