Modifying image sent to Dashboard

Is there an easy way to modify the image displayed on the dashboard, so I can test my vision code (i.e. check if the thresholding is working well)?

Thank you for any help.

EDIT: Just to clarify, this is Java.

TL;DR version: No.

Full version: From what I've seen, I'm not sure you can directly modify the image with, say, a SetPixel() or DrawLine() function or anything of that sort. The closest thing I've seen is that the dashboard is specifically programmed to receive extra data on a specific port and then draw over the received image, post-cRIO. None of that data was documented, which is slightly annoying, because it would be nice to be able to do custom implementations. For example, last year extra data was sent on another port that carried all dashboard data (minus the image). It had different sections for different parts, such as PWM readouts and solenoid readouts, but it also sent over a list of points and sizes for circles that would be drawn over the image.

So, then, does anyone have a good way to test image processing code?

Thank you for your help.

There are only two options I can think of.

  1. Just output some info to the console, for example the points of detected items and the like.
  2. Do the same as the dashboard, but write your own.

The second option is obviously more difficult, but if you would like some help programming it, I would be more than happy to help you write it. I'm more partial to C#, but I know Java very fluently as well, and it's what I'm programming the robot in. Just let me know if I can be of any help.
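The first option above can be sketched in a few lines. This is a minimal example of reporting detection results over the console; the `TargetReport` class name and the output format are made up for illustration:

```java
// Minimal sketch of option 1: print vision results to the console instead
// of drawing on the dashboard. In robot code the println would sit inside
// the vision loop and the output shows up in the driver station console.
public class TargetReport {
    // Format one detected target as a single log line.
    public static String format(String name, int x, int y, int w, int h) {
        return "TARGET " + name + " center=(" + x + "," + y + ") size=" + w + "x" + h;
    }

    public static void main(String[] args) {
        // Pretend the thresholding code found a 40x25 blob centered at (160, 120).
        System.out.println(format("hoop", 160, 120, 40, 25));
    }
}
```

It's crude, but for checking whether thresholding is picking up the right blob, a stream of center/size lines is often enough.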

You can use the ZomB Dashboard; it has multiple-target tracking support. Once installed:

  1. Make sure your team number is set (set it, then exit and re-open Visual ZomB).
  2. Add a camera view (see note at bottom).
  3. Click the … button next to targets.
  4. Add targets, giving them names and setting colors (if desired).
  5. In Java, send to the targets (see below).

Target format:
`widthxheight+top,left`, where all numbers are normalized (-1 to 1):
`z.Add("targetname", width + "x" + height + "+" + top + "," + left);`

Camera view notes:
You must create a camera instance in your code; connecting to the switch directly is not supported yet (wait a few days).
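Here is a sketch of building the target string in the format described above. The `ZombTarget` class and the `normalize` helper (mapping pixel coordinates onto -1 to 1 by frame size) are my own assumptions, not part of the ZomB API; only the `z.Add(...)` call comes from the post above:

```java
// Sketch of assembling the ZomB target string "widthxheight+top,left".
// ZombTarget and normalize() are hypothetical helpers; the actual send is
// the z.Add(name, string) call shown in the post above.
public class ZombTarget {
    // Build the string; inputs are assumed to already be in the -1..1 range.
    public static String format(double w, double h, double top, double left) {
        return w + "x" + h + "+" + top + "," + left;
    }

    // Assumed convention: map a pixel coordinate in [0, frameSize] onto [-1, 1].
    public static double normalize(int pixel, int frameSize) {
        return 2.0 * pixel / frameSize - 1.0;
    }

    public static void main(String[] args) {
        // A target whose top-left sits slightly above and right of center:
        String s = format(0.2, 0.15, -0.1, 0.05);
        System.out.println(s);
        // In robot code this would then be: z.Add("targetname", s);
    }
}
```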

Okay, I have tried to use the following methods to view the result of my processing:

Method              | Reason it failed
Send to Dashboard   | Haven't found any way to do it
Save to file        | NI disabled it
Output with println | Can't read individual pixels

Does anyone know how to do this?

As far as I can tell, there is no way to do it if you wish to use NIVision (due to the undocumented nature of EVERYTHING). I ended up just plugging in the camera to the bridge/router and I am going to be writing my own image processing code which will run on the computer.

Because WPILib is HALF-BAKED this year, it will be nearly impossible for you to do any effective image processing in Java. WPILib does not expose any native NI Vision functions (imaq*) to you (they are accessible in C++, but still undocumented). Even if you do get image processing code you wrote yourself in Java working, it will take you a while to optimize it, and once you do, it still won't be nearly as fast as native optimized code from NI's Vision library, and will probably cause too much lag to be usable.

If you are not going to program in LabVIEW, I would recommend going C++ this year if you need to do image processing.

Those undocumented functions are “described” in C:\Program Files\National Instruments\Vision\Documentation\NIVisionCVI.chm.

If some imaq functions have not been wrapped for Java, perhaps you can do the wrapping by following the code.

If you would like to communicate data back to the dashboard app, this was done last year in all languages I believe. The data is packed and sent as part of the UDP response packet. At the dashboard, it is unpacked using the LV unflatten node, and the data can be used to annotate the image.

When you say that NI disabled saving to file, are you talking about the IMAQ file functions? Some of these rely on Windows elements and that is why they do not exist for any language. You should be able to use file I/O to save the data of the image for later analysis. You can also open up any TCP port you like and use those in the lab for debugging. The ports will be disabled for official fields.
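Following the suggestion above to use plain file I/O for later analysis, here is a minimal sketch of dumping a raw pixel buffer. Note this uses desktop Java's `java.nio.file` for illustration; on the cRIO the file would instead be opened through the ME Connector API (`"file://..."`), but the idea is the same: write the buffer out, pull the file off the robot, and inspect it on the PC. `ImageDump` is a hypothetical helper name:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: save raw image bytes to a file for offline analysis of the
// vision pipeline. On the cRIO you would open the file via the Connector
// API rather than java.nio, but the write-the-buffer idea is identical.
public class ImageDump {
    public static void dump(byte[] pixels, String path) throws Exception {
        Files.write(Paths.get(path), pixels);
    }

    public static void main(String[] args) throws Exception {
        // A fake 2x2 grayscale frame standing in for real camera data.
        byte[] fakeFrame = new byte[] {0, 64, (byte) 128, (byte) 255};
        dump(fakeFrame, "frame.raw");
    }
}
```

A raw dump like this can be opened in an image tool (specifying width, height, and pixel format by hand) to eyeball what the thresholding produced.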

To use the standard dashboard communications look at the packing classes.

Greg McKaskle

I have already successfully used the image code in Java. Granted, I don't get the detail suggested by the NI documentation, but that is simply not necessary for this game or the task to be accomplished. I was able to drive to the target at a variable speed based on distance to it. Keeping it simple will most likely be more successful than the image processing suggested by the NI docs.

My personal opinion, but the language you use should have no effect on the success of your code this year, as proven by my week-1 camera tracker done in Java.

Can I get that code for the camera tracker? You can email it to me at [email protected]

What I was thinking is use the circle tracker demo modified to not look for the inner circle. . .

Where would this documentation be located on a Linux machine? (We picked Java due to a lack of a Windows machine for programming (My personal laptop is our main programming machine)). Otherwise, I’d probably be using C++ (possibly LV, which I used in past years (when a Windows machine was available)).

However, I have not been able to find libraries for many functions. For example, I do not see any way to extract one pixel of information from an image (not necessary at the moment, but possibly useful). Another issue is a lack of file IO (I think… I’m not sure). I have also tried to use Java’s TCP and UDP libraries, but to no avail.

Can someone enlighten me as to how to use these (or verify that they’re unavailable)?

They’re available as C++ functions, you can dump an image into an integer array. There’s no wrappers for them at the moment. (I forget their names, it’s in one of the camera threads)

The API documentation is in NetBeans. Click the first logo in the top mid-left, then click Javadoc.

You shouldn't need sockets, so they are not part of the API. File IO is available (search the CD for it).

File IO uses the same pieces of the API as the socket IO, namely `"file://[name]");` — the same call with a different protocol scheme.

If you wish to open a basic socket connection, use `"socket://ip");`. There are some related classes (for both file and socket connections) that take the form TYPEConnection, where TYPE is File, Socket, HTTP, and so on.