Vision Tracking with RIO

So I’m trying to code some simple vision tracking on the RIO to see the ball and align with it, but I don’t really know where to start. I have some code that I literally just copied and pasted to get a start and see how it works. If the RIO seems laggy, we could get a coprocessor; it’s just that we don’t have an Ethernet switch to mess with yet, so I figured this would at least help me get started and build an understanding of vision tracking.

```java
// Imports needed: edu.wpi.first.cameraserver.CameraServer,
// edu.wpi.first.cscore.CvSink / CvSource / UsbCamera,
// org.opencv.core.Mat / Point / Scalar, org.opencv.imgproc.Imgproc

m_visionThread =
    new Thread(
        () -> {
          // Get the UsbCamera from CameraServer
          UsbCamera camera = CameraServer.startAutomaticCapture();
          // Set the resolution
          camera.setResolution(640, 480);

          // Get a CvSink. This will capture Mats from the camera
          CvSink cvSink = CameraServer.getVideo();
          // Setup a CvSource. This will send images back to the Dashboard
          CvSource outputStream = CameraServer.putVideo("Rectangle", 640, 480);

          // Mats are very memory expensive. Lets reuse this Mat.
          Mat mat = new Mat();

          // This cannot be 'true'. The program will never exit if it is. This
          // lets the robot stop this thread when restarting robot code or
          // deploying.
          while (!Thread.interrupted()) {
            // Tell the CvSink to grab a frame from the camera and put it
            // in the source mat.  If there is an error notify the output.
            if (cvSink.grabFrame(mat) == 0) {
              // Send the output the error.
              outputStream.notifyError(cvSink.getError());
              // skip the rest of the current iteration
              continue;
            }
            // Put a rectangle on the image
            Imgproc.rectangle(
                mat, new Point(100, 100), new Point(400, 400), new Scalar(255, 255, 255), 5);
            // Give the output stream a new image to display
            outputStream.putFrame(mat);
          }
        });
m_visionThread.setDaemon(true);
m_visionThread.start();
```
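Once a pipeline reports where the ball is in the frame, "align with it" usually comes down to a simple proportional turn on the horizontal offset. Here is a minimal sketch of just that math — `AlignMath`, `steer`, and the `kP` value are all made up for illustration; you would feed the result into something like `arcadeDrive` as the rotation input:

```java
public class AlignMath {
    // Proportional steering toward a target seen by the camera.
    // error is the target's horizontal offset from the image center,
    // normalized to [-1, 1]; kP is a gain you'd tune empirically.
    static double steer(double targetCenterX, int imageWidth, double kP) {
        double error = (targetCenterX - imageWidth / 2.0) / (imageWidth / 2.0);
        return kP * error;
    }

    public static void main(String[] args) {
        // Target dead center of a 640-wide frame -> no turn needed
        System.out.println(steer(320, 640, 0.03)); // prints 0.0
        // Target at the right edge -> turn at kP * full error
        System.out.println(steer(640, 640, 0.03)); // prints 0.03
    }
}
```

A plain P loop like this is often enough to get rough alignment; if it oscillates or stalls near the target, that is a tuning problem (add a deadband or lower kP) rather than a vision problem.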

While using solutions like PhotonVision on a coprocessor is definitely better, you can run vision pipelines on the RIO.
Look into using GRIP with its generated-code feature, or processing on the DS and sending data back over NetworkTables. (The former is better, but since GRIP’s last update was in 2017, you will need to adjust some of the imports and library usage.)

Would I be able to transfer the code from processing on the RIO to an RPi with a few changes, or would I have to completely rewrite it?

Since the code is primarily generated by GRIP, you can easily switch a GRIP pipeline to running on the WPILib Raspberry Pi image. However, if you are switching to a Raspberry Pi, going to PhotonVision will be better, particularly for the vision target this year.

