Sending Processed Image Stream to Dashboard?

My team is trying to figure out if we can get the Vision Assistant code to process an image for us. We decided that if we could send the processed image (the black-and-white squares) to the image output on the dashboard, it would work.
Unfortunately, we do not know how. Can anyone help?

While it is possible to compress and send images from the cRIO to the dashboard, it should be far less intensive to process the rectangles a bit further and send the results to the dashboard. If you like, you can then annotate the original image as was done in 2009 and 2010.

Where are you trying to put the vision assistant code?

Greg McKaskle

Thanks for the quick reply.

We are trying to put the code from the VI created by Vision Assistant into the Vision Processing VI. We were only trying to view the processed image on the dashboard to help us troubleshoot our vision tracking system, which returns the distance from the robot to the scoring peg. So let me explain our code:

First, we modeled our design off the “Vision Target Tutorial.” We set up the NI Vision Assistant to use the Brightness function to filter the image to essentially black and white. From there we used the “Shape Detection” function to search the image for ellipses (we didn’t go with rectangles because the poor image quality did not define the pieces of tape well enough for them to be identified as rectangles). This returned the center X and minor radius of each tape piece, which gave us enough information to develop the algorithms for calculating the distance.
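For what it's worth, the usual way to turn an apparent (pixel) size into a distance is the pinhole-camera proportionality. A minimal sketch in Python; the function name and the focal-length/target-size numbers are illustrative assumptions, not the team's actual algorithm:

```python
def distance_from_minor_radius(minor_radius_px, target_radius_in, focal_length_px):
    """Pinhole-camera estimate: distance ~ focal_length * real_size / apparent_size.

    All three inputs are assumptions for illustration:
      minor_radius_px  - detected ellipse minor radius, in pixels
      target_radius_in - the tape piece's real half-height, in inches
      focal_length_px  - camera focal length expressed in pixels (calibrated once)
    """
    if minor_radius_px <= 0:
        raise ValueError("radius must be positive")
    return focal_length_px * target_radius_in / minor_radius_px

# Example: a 12 in half-height seen as 40 px with a 400 px focal length
print(distance_from_minor_radius(40, 12, 400))  # → 120.0
```

In LabVIEW this would just be a couple of multiply/divide nodes fed by the minor radius wire; the point is only that one calibrated constant (the focal length in pixels) is enough to map radius to distance.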

Okay, here are the problems we ran into:
First, in the Vision Assistant part, we could not use the RGB 32-bit images the webcam was outputting, since the “Shape Detection” function said it could only use binary or 8-bit images. We got it to work by using Photoshop to save an image as an 8-bit grayscale bitmap (we tried saving it as an 8-bit color bitmap, but Vision Assistant kept reading it as a 32-bit image). But we were not sure how to convert the images to the right type within LabVIEW.

Second, we were not quite sure how to integrate the Vision Assistant VI into the Vision Processing VI, or whether that was even the right place for it. I attached a screenshot of what we did. More specifically, we were not sure how to unbundle the wire coming from the Vision Assistant VI. We used a 1-D Split Array function, then Array to Cluster, then Unbundle Cluster to separate the individual data. I wasn’t sure whether we needed the 1-D Split Array, how the Number of Matches wire from the Vision Assistant VI fit in, or how to know which data belongs to which detected shape.

From there we ran the minor radius data and the center X data into a VI we created that calculated the distances, and originally ran that into the functions described in the custom dashboard tutorial and put a field on the dashboard. But when we ran it, only an “int” was displayed. Then, to troubleshoot, we wired the minor radius value from the cluster unbundle to the dashboard, but it now just displayed a 0 (this setup is what is shown in the screenshot).

So thank you for replying to our previous posts, and please see if you can provide any input on these issues.

Here’s the screenshot in case it didn’t attach: http://www.rmhsrobotics.com/overview.jpg

Thanks.

The first thing that jumps out in the attachment is that you have changed Primary Image, the one sent to Get Image, to be a U8 grayscale image. I believe IMAQ will give an error when the JPEG is decoded. If you built the detector to work on monochrome images, I believe you want to put an extract-plane step between Get Image and the retro tape locator. I’d probably extract the Intensity or Luminance plane, and this will convert it to grayscale.
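For readers outside LabVIEW, extracting a luminance plane amounts to a weighted sum of the color channels per pixel, after which a threshold produces the binary image the shape detector wants. A rough Python sketch of the idea (Rec. 601 luma weights; the helper names are made up for illustration, not IMAQ calls):

```python
def rgb_to_luminance8(r, g, b):
    """Collapse 8-bit RGB components into one 8-bit luminance value,
    i.e. what extracting the Luminance plane does pixel by pixel."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights
    return min(255, int(round(y)))

def to_binary(value, cutoff=128):
    """Threshold to a 0/1 pixel: the 'black and white' image the
    shape-detection step can consume (cutoff is an assumption)."""
    return 1 if value >= cutoff else 0

print(rgb_to_luminance8(255, 255, 255))          # → 255
print(to_binary(rgb_to_luminance8(30, 30, 30)))  # → 0
```

In the actual LabVIEW diagram this whole block is one extract-plane node plus the threshold already configured inside the Vision Assistant script; no manual pixel math is needed.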

When you convert an array to a cluster, you have to tell the converter how many elements to put into the cluster, because arrays are dynamically sized. You do that by right-clicking on it and specifying, say, three elements out; I believe it defaults to eight. The other way to get an element out of an array is to drop the Index Array function. If you grow it, it will index and return more than one element. If you don’t wire the index, it starts at the first element and increments. You can also use it with multiple indexes wired up.

Does the retro subVI return an array that is too big? LV arrays are dynamically sized, so it is odd that you have to split off only the elements you want. You’d normally just return the array at the correct size.

As to the original question about sending images to the dashboard: if it is just for debugging, is there a reason you aren’t simply probing the wire? If you run RobotMain by pressing the Run arrow, you are running out of RAM (interactively) and can right-click on wires to view data values, including images. Remember to right-click and change the palette to Binary if the image is a binary image. You can also probe other types, set breakpoints, pause VIs, etc.

The reason to spend the effort to work these things into the dashboard is that you will want them at an event, where you will have to run deployed.

Greg McKaskle

Okay, I have had some success probing the camera image prior to processing in my VI, but probing at the output of the VI results in a black screen. Opening the VI and probing various points within it also gives a black screen, with an occasional flash of the (unprocessed) camera image. Probing the numerical results of the image processing works fine – I can see the number of objects detected, etc. But trying to use the probe to view images has not worked for me.

Regardless, I would really like to send the processed image stream back to the Dashboard, instead of using the Dashboard to view the actual camera footage. Ideally I would like to be able to switch between the two.

Q1: how do I insert the processed image stream into the data bundle going back to the Dashboard?
Q2: how do I pull that image stream back out of the data bundle and view it on the Dashboard?
Q3: other than these two items, is there anything else that I would need to do?

Thanks a bunch!
Daniel

The output image is likely black in the probe because the palette needs to be set on the image. If you view a thresholded image with a grayscale palette, 0 and 1 render as black and very dark gray, and you cannot see anything. Change it to Binary using right-click >> Palette and you will be able to see the image.
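To see why that happens: under a grayscale palette, pixel values 0 and 1 map to nearly identical dark grays out of 255 brightness levels, while a binary palette stretches 1 to full brightness. A toy illustration (the palette functions are invented for this sketch):

```python
def grayscale_palette(pixel):
    """Grayscale palette: the pixel value maps straight to 0..255 brightness,
    so a binary image's 0s and 1s are visually indistinguishable."""
    return pixel

def binary_palette(pixel):
    """Binary palette: stretch a 0/1 image to full contrast."""
    return 255 if pixel else 0

print(grayscale_palette(1), binary_palette(1))  # → 1 255
```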

Sending an entire image to the dashboard is somewhat expensive for the cRIO. It is far easier to send the summary data (the target position and distance, and even scores if you like) and overlay that on the original image. This is how it has been done in previous years.
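A sketch of the summary-data idea: each frame, send a few numbers instead of a compressed image, and let the dashboard draw the overlay. The field names and values here are invented for illustration; the actual wire format would be whatever the dashboard code expects:

```python
import json

def pack_target_summary(center_x, distance_in, score):
    """Bundle per-frame results into one small message: a handful of bytes,
    versus tens of kilobytes for a compressed camera frame."""
    return json.dumps({"x": center_x, "dist": distance_in, "score": score})

msg = pack_target_summary(160.0, 96.5, 0.92)
print(msg)  # a compact string the dashboard can parse and overlay on its own video feed
```

The dashboard side then parses the message and draws the box or distance label on top of the camera stream it is already receiving, which also makes it trivial to add a toggle for the overlays.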

You can then make it possible to turn the overlays on and off, similar to what you were describing.

Greg McKaskle

Thanks, Greg – changing to binary made all the difference!!