We’re somewhat at a loss for vision. We have the Axis M1011 camera, and we’ve already used the Setup Axis Camera tool to assign it an IP address. At this stage, we’re trying to do the vision processing on the cRIO (depending on the amount of lag, we may later switch to processing on the computer).
Should I use the Rectangular Target Processing project (included with LabVIEW), or should I use the NI Vision Assistant to process the images? Are you supposed to use one or the other? We’re at a loss for how to actually process the images in our project.
The Rectangular Target vision processing VI is sample code that demonstrates pre-built vision processing techniques. You can use this code directly by incorporating it into your program. It includes three different algorithms you can experiment with to locate the rectangle. The output of the vision processing is written to a global variable that you can access anywhere in your code. The output includes the x, y coordinates of the center of the rectangle relative to the center of the camera’s field of view, along with a calculated distance to the target.
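The sample code's exact distance calculation isn't spelled out here, but a common way to estimate distance is from the target's apparent width in pixels and the camera's horizontal field of view. A minimal Python sketch of that idea (the default numbers are illustrative assumptions, not values from the sample VI):

```python
import math

def estimate_distance(target_px_width, image_px_width=320,
                      target_real_width_ft=2.0, horizontal_fov_deg=47.0):
    """Estimate distance to a rectangular target from its apparent width.

    Assumes a simple pinhole camera model. Defaults are illustrative:
    a 320 px wide frame, a 2 ft wide target, and a ~47 degree
    horizontal field of view.
    """
    # Width of the full field of view at the target's distance, in feet
    fov_width_ft = target_real_width_ft * image_px_width / target_px_width
    # distance = (fov_width / 2) / tan(fov / 2)
    return (fov_width_ft / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
```

The closer the target, the more pixels it spans, so the estimate shrinks as the pixel width grows.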
The Vision Assistant is a utility provided by National Instruments that allows you to build your own algorithm. When you are done, it can generate the corresponding VI code for you. Search the NI website for the Vision Assistant tutorial if you are interested in going this route.
Thank you very much. We’ve decided to use the Rectangular Target Processing example in our code. Our problem is this:
We go to test the code by running it locally, and can calibrate the camera and code to recognize the reflective goal marker, but when we stop the code, our calibrations are lost. How do we save the camera calibrations - and do we do it before or after we import the Rectangular Target Processing project to our cRIO project?
Thanks so much for your prompt response.
Make the current values the default. You can right-click an individual control (Data Operations → Make Current Value Default), or use the Edit menu to do it for the entire panel (Make Current Values Default).
I have been messing with the default rectangular code, and I can get an image when the camera is connected directly to the computer, but I cannot get a picture when I run it on the cRIO or when I put it into our project. If I run the Driver Station, I can get a picture there, so I know the camera is producing an image, but it isn’t getting into the vision processing VI. Any ideas? Also, is there a way to process the images on the computer instead of the cRIO, to save processing power and increase speed?
Thanks for any help.
If you want to run the vision code on the laptop, the primary problem you need to solve is getting the data back to the robot. There are threads about using UDP or network tables to do this.
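As a rough sketch of the UDP approach (the port number and message format here are my own placeholders, not an FRC standard), the laptop side would send each processed result to the robot as a small datagram, and the robot code would listen on the matching port:

```python
import json
import socket

# Placeholder address and port -- FRC robots use a 10.TE.AM.2 address,
# so substitute your own team's values here.

def send_target(sock, robot_addr, x, y, distance):
    """Send one vision result to the robot as a small JSON datagram."""
    payload = json.dumps({"x": x, "y": y, "distance": distance}).encode()
    sock.sendto(payload, robot_addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# e.g. send_target(sock, ("10.TE.AM.2", 1130), 0.12, -0.05, 9.8)
```

UDP is a reasonable fit here because a dropped frame of vision data doesn't matter much; the next result arrives a moment later anyway.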
If you want to get the vision working on the robot, try starting with the example and tutorial. See if those steps work out.