Ways to see the robot's vision

Hello everyone, my team and I are new to autonomous and vision processing. I read through the documentation on the subject and saw several ways to do this. Is it possible to mix these approaches? They all seem to reach a similar result, and I would like to try more of them as a way of learning.
PS: our team will probably use Python, but I would like the language or tool to fit the robot vision use case.


What hardware are you using? How is the vision data returned to the roboRIO? That same method may work for channeling the video stream.


We aren’t using any dedicated hardware for the autonomous, such as a coprocessor. We only have one routine that drives the robot forward (necessary in previous seasons, when there was a 15-second autonomous period), and our roboRIO currently doesn’t receive any video data.

Hmmm, I’m not entirely certain what the actual question is, but I’ll answer my two best guesses:

Assuming you are writing your own code utilizing OpenCV, yes.

OpenCV represents video frame by frame, using an object called a Mat to describe the array of pixel colors/intensities. You can draw additional shapes and colors on top of a given Mat to superimpose debug info about identified targets. This overdrawn Mat must still be sent somewhere useful though - the most common approach is to encode it as MJPEG and stream it out over an Ethernet network.

If you’re relatively new to this area, I’d recommend doing a survey of existing solutions. While it’s OK to reinvent the wheel for educational purposes, it’s worthwhile to know what’s available so you can avoid unnecessary rework.

WPILib has a great overview of options. I’m biased and think PhotonVision is the best new option available this year. I can also share this presentation I did last year explaining some of the basics and how to sort between different hardware and software options.


Thank you! I will look into PhotonVision and watch your presentation =)

Like @gerthworm, I am also not completely sure of your question. Are you asking about how to see the robot’s vision from the driver station, or how to send data processed by your vision code to the roboRIO? For the former, consider using GStreamer. For the latter, you can open UDP connections in your robot code and in your vision code and send data over the robot LAN. Double check, but I believe ports 5800-5810 are available for team use.
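A minimal sketch of that UDP idea, assuming port 5800 from the team-use range. Both ends run on localhost here so it is self-contained; on the real robot the vision side would send to the roboRIO's address instead, and the payload format (JSON here) is just one choice:

```python
import json
import socket

# Hypothetical choice from the 5800-5810 team-use port range.
PORT = 5800

# "Robot code" side: listen for processed vision data.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", PORT))

# "Vision code" side: send one packet of target data.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = json.dumps({"target_seen": True, "yaw_deg": -4.2}).encode()
sender.sendto(payload, ("127.0.0.1", PORT))

# Robot side receives and decodes the packet.
data, addr = listener.recvfrom(1024)
msg = json.loads(data.decode())
print(msg)

sender.close()
listener.close()
```

Since UDP packets can be dropped, the robot side would normally poll in a loop and treat stale data as "no target" rather than blocking forever.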

In either case, if you are interested in writing your own vision system and looking for tips on how to put it together, including both of the aforementioned things, perhaps this workshop I presented will be of help to you. Networking and frame streaming are covered in there too.


My team wrote a white paper on vision, now two years ago. It will hopefully give you a starting point.

