I was looking through the WPILib documentation and noticed that they've added information about the 2017 control system here:
https://wpilib.screenstepslive.com/s/4485/m/13503
I didn't see any other threads discussing this, and the part quoted below stood out to me immediately. Having OpenCV bundled with WPILib will be very nice, since it will automatically be on the roboRIO. In addition, it sounds like we will be able to do vision processing on the RIO more efficiently than before and then send the processed image to the dashboard, which could eliminate the need for a coprocessor if I am understanding this correctly. I'm very excited to see how this works during build season.
Correct me if I'm wrong on any of this; I'm relatively new to all of it, so I may be misunderstanding something.
Quote:
Computer vision and camera support
For 2017 the most significant features added to WPILib Suite have been in the area of computer vision. First and foremost, we have moved from the NIVision libraries to OpenCV. OpenCV is an open source computer vision library widely used throughout academia and industry. It is available in many languages; we specifically support C++, Java, and Python. There is a tremendous wealth of documentation, videos, tutorials, and books on using OpenCV in a wide-ranging set of applications, with much emphasis on robotics.
OpenCV libraries are now bundled with WPILib and will be downloaded to the roboRIO without the need for teams to locate and download them themselves.
There is complete support for USB and Axis cameras in the form of a CameraServer class and specific camera classes that will produce OpenCV images that can be used for further processing. You can either let the CameraServer automatically stream camera video to the SmartDashboard, or you can add processing steps on the robot between capture and sending to the dashboard. All the example programs in Eclipse have been updated to show how the new CameraServer is used.
GRIP, the graphical vision pipeline generator, can be used to quickly and easily create and test computer vision algorithms that run standalone on your Driver Station computer, sending results back to the robot via NetworkTables. New for 2017, GRIP can generate code in C++, Java, or Python for your vision algorithm that can easily be incorporated into robot programs.
The NIVision libraries have been removed from WPILib and moved into a separately installable package.
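
Based on that CameraServer paragraph, here's roughly what I'm picturing for grabbing frames, doing some OpenCV work on the RIO, and pushing the result to the dashboard. I haven't actually tried this, so the helper classes and method names (CvSink, CvSource, getVideo(), putVideo(), etc.) are just my guesses from the description; the updated Eclipse examples would be the real reference:

import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class Robot extends IterativeRobot {
    @Override
    public void robotInit() {
        // Run the capture/process/publish loop in its own thread so it
        // doesn't block the main robot loop.
        new Thread(() -> {
            // Start capturing from a USB camera plugged into the roboRIO.
            UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
            camera.setResolution(320, 240);

            // CvSink hands us OpenCV Mats from the camera; CvSource publishes
            // a second stream ("Processed") that the dashboard can display.
            CvSink cvSink = CameraServer.getInstance().getVideo();
            CvSource outputStream = CameraServer.getInstance().putVideo("Processed", 320, 240);

            Mat frame = new Mat();
            while (!Thread.interrupted()) {
                // grabFrame returns 0 on error (e.g. timeout); skip that frame.
                if (cvSink.grabFrame(frame) == 0) {
                    outputStream.notifyError(cvSink.getError());
                    continue;
                }
                // Stand-in "processing" step: draw a rectangle on the image.
                Imgproc.rectangle(frame, new Point(80, 60), new Point(240, 180),
                        new Scalar(255, 255, 255), 2);
                outputStream.putFrame(frame);
            }
        }).start();
    }
}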
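
And if you go the GRIP-on-the-Driver-Station route instead, I assume the robot side just reads whatever GRIP publishes out of NetworkTables. Something like this sketch, where the table name "GRIP/myContoursReport" and the "centerX"/"area" keys are purely my assumptions about how a publish step might be named:

import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class Robot extends IterativeRobot {
    private NetworkTable grip;

    @Override
    public void robotInit() {
        // Assuming GRIP's publish step writes its contour report to a table
        // with a name along these lines.
        grip = NetworkTable.getTable("GRIP/myContoursReport");
    }

    @Override
    public void teleopPeriodic() {
        // One entry per contour GRIP found in the most recent frame.
        double[] centerX = grip.getNumberArray("centerX", new double[0]);
        double[] areas = grip.getNumberArray("area", new double[0]);

        if (centerX.length > 0) {
            // e.g. steer toward the first contour's center here
            System.out.println("Target x=" + centerX[0] + " area=" + areas[0]);
        }
    }
}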