Introducing OpenSight - The powerful, all-in-one free vision suite

What is OpenSight?

OpenSight is a free and open source computer vision system targeted specifically at the Raspberry Pi. Our goal is to make it easy for people unfamiliar with vision to build complex pipelines, while also providing powerful functionality for advanced users.

We highly encourage joining our Discord server if you are interested or have any questions, issues, or feedback.
For installation, setup, and other information, see our documentation.

Why OpenSight?

OpenSight is built around three principles: free software, ease of use, and customization.
OpenSight is free and open source, now and forever. You can check out and contribute to the source code here.
We provide a Raspberry Pi image and packages for common coprocessors, so you can get OpenSight up and running in a matter of minutes. Our documentation is written with both new and experienced users in mind. We also offer hardware bundles to make your buying process easier. We handle the hard parts, so you can get right to the vision.

Just how customizable is OpenSight?

Every part of your vision pipeline is customizable. Yes, every part. This may be daunting for new users, but don’t fear: the Pi image contains multiple example pipelines to get you started, and we’re working on documentation to make sure you can build your vision pipeline exactly how you want. If your hardware can handle it, you can even track multiple vision targets simultaneously! Scroll to the bottom to see some example pipelines.

OpenSight is far more flexible than any prebaked solution. No more hunting for the right NetworkTables value or limiting yourself to basic tracking. OpenSight lets you create your pipeline how you want, with full control over every step.

In fact, developers can add their own functions to OpenSight!

If you know Python, you can add your own module to OpenSight. We have worked hard to make sure it is easy to create modules, ranging from basic operations to complex actions. For example, the blur function is just fifteen lines of code! We are always looking for more modules, and if you have any great ideas or want to contribute to some work-in-progress modules, come to our Discord server.
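To give a sense of scale, here’s a minimal sketch of what a blur module could look like. The class layout, Settings dataclass, and run() signature below are illustrative guesses rather than OpenSight’s actual module API; check the documentation for the real interface.

```python
# Hypothetical module sketch: the BlurSketch/Settings/run names are made up
# for illustration; only the OpenCV call reflects what a blur actually does.
from dataclasses import dataclass

import cv2
import numpy as np


class BlurSketch:
    """Applies a Gaussian blur to each incoming frame."""

    @dataclass
    class Settings:
        radius: int = 5  # blur strength, in pixels

    def run(self, img: np.ndarray, settings: "BlurSketch.Settings") -> np.ndarray:
        # OpenCV requires an odd kernel size, so derive one from the radius
        ksize = settings.radius * 2 + 1
        return cv2.GaussianBlur(img, (ksize, ksize), 0)
```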

How fast is OpenSight?

OpenSight has an efficient design that ensures great performance. The FPS you get will depend mostly on your camera. Common cameras such as the LifeCam are limited to 30 FPS, but with high-FPS cameras such as the Raspberry Pi Camera, you can run a consistent, full vision tracking pipeline at 85 FPS on the Raspberry Pi 4!

How do I get it?

OpenSight runs on many coprocessors, from the Raspberry Pi to the Jetson Nano. If you have a Linux system, you can use the quick install script to get a testing instance of OpenSight. The installation guide for all systems can be found here.

What’s next?

OpenSight is still an active project. Here are some features coming in the next release:

  • Angle Finding
  • H.264 Camera Server
  • Static network settings
  • GPIO Control functions
  • Conditional logic functions (boolean testing, AND, OR, etc.)

You can view our Trello board here, which contains all of the features we’re planning on adding.
Do you want to take part in OpenSight? You can learn how to contribute here!

A simple blurring pipeline:

Rudimentary vision tracking:

Multiple target tracking:

16 Likes

This looks great; you can track multiple things from one camera, and it’s awesome.
Does it support multiple cameras, in case I want to track things for different systems of the robot?
I can really see how advanced users could create crazy complex things with this.

I think there will be a huge leap in FRC vision systems, because the Limelight and programs like this or Chameleon Vision allow many more teams to use vision code, which is kind of complex to write on your own.

Because all the module connections are green, it’s a bit hard to see at a glance what goes where, and it looks dull. Is there a way to change that?
Also is there a preview to see the image in the UI as you adjust the pipeline?

Cool.

25 Likes

It’ll be interesting seeing how all these vision programs stack up against each other.

1 Like

Looks interesting. I don’t see any documentation on the website about how the tracking info is passed to the roboRIO. Are you planning on using NetworkTables, similar to how the Limelight does? Is that implementation detail left up to the particular module/node developer?

It appears to be using NT but hopefully someone will add ZMQ or other outputs.

1 Like

Yes, OpenSight supports multiple cameras. You can run multiple independent pipelines.

Making connections more visible is planned for beta release 3, however it may be moved up if enough people are interested in it.

There’s no preview in the nodetree view currently; however, you can view a Camera Stream by going to Hooks -> opsi.videoio -> (name of CameraServer).

I will be slowly responding to replies until around 3:30 EDT when I should be able to respond more rapidly.

Currently NT is used; however, it would likely be trivial to add other output mediums such as ZMQ because of the module system.
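As a rough illustration of what such an output could boil down to (the port, topic, and function below are placeholders, not anything OpenSight ships with):

```python
# Sketch of a ZMQ publisher for tracking results, using pyzmq. The message
# format is made up; port 5800 is just an example from the FRC-legal
# 5800-5810 range.
import json

import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5800")

def publish_target(x: float, y: float, found: bool) -> None:
    # Publish the tracking result as JSON under a "vision" topic
    payload = json.dumps({"x": x, "y": y, "found": found})
    socket.send_string(f"vision {payload}")
```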

It looks like ZMQ will definitely be possible. What other output formats do you think would be useful? The next update has a focus on modules, so if you have any ideas, we are definitely looking to increase the number of nodes that can start or end a nodetree.

This is at least the second or third new vision venture for the 2020 game that will offer an alternative to the Limelight. I’m excited to see what’s possible, but I do hope these offerings can be tested and/or purchased soon, as we will quickly be in build season and need to have an established path forward.

Hot take: install OpenSight on a Limelight. FOSS software on the nice compact Limelight hardware.

3 Likes

That’s certainly an option, although I would recommend staying up to date on some of the up-and-coming open source hardware solutions.

4 Likes

now just add the zed…

What are those? Limelight and JeVois are the only offerings I know of.

This looks really cool. I have been working with it a bit. How do you filter the contours by size or shape?

1 Like

As far as I know, none have been formally released yet, but I’ve heard OscarEye and LemonLamp as names. I suspect we’ll get more information on those closer to Kickoff, since I assume the people developing those will want to use them on robots next year.

2 Likes

We’re currently working on contour filtering. It will be included in the next release. If you (or anyone in this thread) would like to help work on it, we would love the help!
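In the meantime, if anyone wants to experiment outside OpenSight, filtering contours by area in plain OpenCV looks roughly like this (a standalone sketch with example thresholds, not an OpenSight module):

```python
# Standalone OpenCV sketch: keep only contours whose area falls in a range.
# The thresholds are arbitrary examples; tune them for your own target.
import cv2

def filter_contours_by_area(mask, min_area=100.0, max_area=10000.0):
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]
```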

Hello everyone,

We have released OpenSight hotfix update v0.1.1 (view the install instructions or upgrade instructions). This version lets you save the pipeline even if there are unconnected nodes; anything not connected is simply excluded from the running pipeline. This allows you to disconnect entire branches from the rest of your pipeline and still be able to save your configuration and layout.

We’re currently working on contour filtering and other features, which will be included in v0.2.0. Join our Discord server to keep up with development, ask questions, and get support.

what could this be? :flushed:


2 Likes

:ThonkSpin:

3 Likes

When GRIP was introduced, I told my teammates that NT no longer had a construct for a “transaction.” There was the potential for related data to be split and get out of sync. We took our chances and tried running GRIP from a laptop, sending data through NT to the roboRIO. Data was scrambled after only 45 seconds! Now we run a Java GRIP pipeline on an RPi using UDP and Google GSON JSON format. So far so good. We used a checksum at one time, but that doesn’t seem to be necessary now.
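For anyone curious, the core idea is to bundle all related values into a single UDP datagram so they arrive together or not at all. Here is a Python sketch of that pattern (our Java/GSON version works the same way; the address and field names below are placeholders):

```python
# Python sketch of the "one datagram per frame" approach: related values are
# serialized into one JSON packet so they can never be split out of sync the
# way separate NetworkTables entries can. Address and fields are placeholders.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame_result(yaw: float, distance: float, timestamp: float) -> None:
    msg = json.dumps({"yaw": yaw, "distance": distance, "ts": timestamp})
    sock.sendto(msg.encode(), ("10.0.0.2", 5800))  # placeholder roboRIO address
```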