Java Swing display image stream

The camera can output a stream of images to a specific IP address (depending on where we plug it in).

Now, our team is aiming to write an entirely Java/Swing-based Dashboard. So far, we have successfully used sockets and connectors to open a connection from the robot (server side) to the driver station (client side), and we've managed to pass values and strings over the network through this connection.

The problem is, we can't figure out how to write the camera code. My understanding is that we need the client-side computer to somehow connect to the camera's IP address, get the image, and display it on the Swing dashboard. We have no idea how to do this (which classes to use, how to get the image, whether we need external libraries for decompressing the MJPEG stream, etc.).

Help?

The Axis site has CGI details in the VAPIX reference manual. Basically, you initiate a session with the camera's web server, it sends you images, and you display them with an MJPG-compatible viewer.

Greg McKaskle

The easiest way to get the image is to request <ip>/axis-cgi/jpg/image.cgi?resolution=320x240. Make sure the camera is connected to the bridge. ALSO make sure you enable the Axis camera's anonymous viewing option, so you don't have to authenticate.

Then, call:

ImageIO.read(new URL("http://<ip>/axis-cgi/jpg/image.cgi?resolution=<res>"));

ImageIO.read() returns a BufferedImage object, which you can paint with Graphics.drawImage() from inside a component. This method is very fast; I can get upwards of 30fps (seeing as the camera and computer are on the same LAN), and it offers basically the same performance as the MJPEG stream, since the individual JPEG frames are not any more compressed.
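If it helps, here's a minimal sketch of that approach (the class name, IP address, and resolution below are just placeholders, so swap in your camera's): a panel that polls the snapshot CGI on a background thread with ImageIO.read() and draws the latest frame in paintComponent().

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

// Minimal sketch: poll the Axis snapshot CGI and paint each frame.
// The IP address and resolution are placeholders; substitute your camera's.
public class CameraPanel extends JPanel {
    private volatile BufferedImage frame;

    public void startPolling(final String url) {
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        frame = ImageIO.read(new URL(url)); // one JPEG per request
                        repaint();
                    } catch (Exception e) {
                        e.printStackTrace(); // camera unreachable, bad URL, etc.
                    }
                }
            }
        }).start();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (frame != null) {
            g.drawImage(frame, 0, 0, this); // draw the latest frame
        }
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                CameraPanel panel = new CameraPanel();
                JFrame f = new JFrame("Camera");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(panel);
                f.setSize(320, 240);
                f.setVisible(true);
                panel.startPolling("http://10.0.0.20/axis-cgi/jpg/image.cgi?resolution=320x240");
            }
        });
    }
}

You'd probably want to throttle the loop and handle reconnects, but that's the basic idea.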

Since I don't think I could adequately explain it here, I've attached my dashboard project. The files you should look at are AxisCamera.java (getting the image), JImagePanel.java (painting the image), and AxisCameraProcessor.java's run method (getting and painting the image); post if you need something clarified. Basically, the JImagePanel takes an array of pixels (as ints) which have been grabbed from the camera and processed, and loads them into a BufferedImage via the BufferedImage's setRGB() method (the code as-is could be sped up by minimizing the number of intermediate copies). The JImagePanel then displays that image in its paint method via Graphics.drawImage().
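This isn't the attached JImagePanel code, just a stripped-down illustration of the same idea (the names here are made up): load a processed int[] of packed ARGB pixels into a BufferedImage with setRGB(), then draw that image when the panel paints.

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

// Illustrative only: take processed pixels as packed ARGB ints, push them
// into a BufferedImage with setRGB(), and draw that image in the paint method.
public class PixelPanel extends JPanel {
    private BufferedImage image;

    // pixels is row-major, length == width * height, each int is 0xAARRGGBB
    public void setPixels(int[] pixels, int width, int height) {
        if (image == null || image.getWidth() != width || image.getHeight() != height) {
            image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        }
        // Bulk setRGB: copies the whole array in one call (scansize == width)
        image.setRGB(0, 0, width, height, pixels, 0, width);
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image != null) {
            g.drawImage(image, 0, 0, this);
        }
    }
}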

Also, if you want to run it, make sure you look at BigBrotherGUI.java, starting at line 50; that's where the IP of the camera is set (along with the port of the custom Dashboard, but ignore that for now; I have most things working, but they're not in a state to be shared yet, as the code isn't very portable).

There's some other stuff (parts of AxisCameraProcessor and ImageOps) you can ask about if you want: object detection by filtering colors and then running blob detection (right now it should be wired up for motion detection; hook it up to the camera, it's kinda cool). There's also some edge detection lying around. I would suggest, though, using something like OpenCV if you really wanted vision for autonomous.

The overall project layout is kind of lousy, but I tried to document it. You can ignore large parts of JImagePanel for now; most of the shape/text/etc. code was from when I was going to have the robot do the image processing (and send back markup commands), before I learned how slow that was and moved it to the computer. Otherwise, I think I documented most things or made them self-explanatory (except the GUI code, I guess, but that's "just how it works"). If you need help with your GUI code, go ahead and post what you need and your attempt here, and I'll try to help as best I can (or use NetBeans' Matisse, which is good for getting something made quickly).

EDIT: I'm not sure how you'd go about displaying the MJPEG stream (as in, whether you could use ImageIO in some way or not), but I really don't see the benefit in doing so. The real bottleneck will be any processing you do, and if you don't do any, both methods will be very similar in speed. The only thing that seems to slow it down noticeably is increasing the size of the image.
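For what it's worth, if you ever did want to read the MJPEG stream directly, one rough sketch (assuming the camera serves multipart JPEG at /axis-cgi/mjpg/video.cgi; check the VAPIX docs for your firmware, and the IP here is a placeholder) is to scan the byte stream for the JPEG start/end markers and decode each frame with ImageIO:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import javax.imageio.ImageIO;

// Rough sketch: read the multipart MJPEG stream byte by byte, collect each
// JPEG between its start (FF D8) and end (FF D9) markers, and decode it.
public class MjpegReader {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://10.0.0.20/axis-cgi/mjpg/video.cgi?resolution=320x240");
        InputStream in = url.openStream();
        ByteArrayOutputStream jpeg = null;
        int prev = -1, cur;
        while ((cur = in.read()) != -1) {
            if (jpeg == null) {
                if (prev == 0xFF && cur == 0xD8) {        // start of a JPEG frame
                    jpeg = new ByteArrayOutputStream();
                    jpeg.write(0xFF);
                    jpeg.write(0xD8);
                }
            } else {
                jpeg.write(cur);
                if (prev == 0xFF && cur == 0xD9) {        // end of the frame
                    BufferedImage frame =
                        ImageIO.read(new ByteArrayInputStream(jpeg.toByteArray()));
                    if (frame != null) {
                        System.out.println("frame: " + frame.getWidth() + "x" + frame.getHeight());
                    }
                    jpeg = null;                          // hand the frame off to your panel here
                }
            }
            prev = cur;
        }
    }
}

As I said, though, I don't think it buys you much over just polling the snapshot CGI.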

BigBrother.zip (86.8 KB)



Gah, I can't find it! I had code around here for a simple Java-based web browser that would show you how to do it.

Eh, I found the code online at http://www.dreamincode.net/code/snippet919.htm. If you look through that, you will see how to set up a Swing panel to display a web page, which is how the camera natively outputs its picture. That should be enough to get you started.
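The gist of it (not the linked code verbatim, just the same general idea, with a placeholder URL) is that Swing's JEditorPane can load and render a simple web page for you:

import java.net.URL;
import javax.swing.JEditorPane;
import javax.swing.JFrame;
import javax.swing.JScrollPane;

// Sketch of a bare-bones "browser" panel: JEditorPane fetches and renders
// a (simple, HTML 3.2-ish) page, which you then drop into a frame.
public class MiniBrowser {
    public static void main(String[] args) throws Exception {
        JEditorPane pane = new JEditorPane();
        pane.setEditable(false);                    // view only
        pane.setPage(new URL("http://10.0.0.20/")); // e.g. the camera's web page

        JFrame frame = new JFrame("Mini browser");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new JScrollPane(pane));
        frame.setSize(640, 480);
        frame.setVisible(true);
    }
}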

Edit: A bit more searching and I found this, which seems to be a better example.

Thanks guys!

I find NetBeans' built-in GUI builder a good way to do Swing. It doesn't teach me HOW to actually use Swing, but it is useful for getting stuff done, which is what matters at the moment. (Like using Dreamweaver to build a website instead of Notepad.)

So far, I've written some code based on the parts of the BigBrother project I think I understand (namely AxisCamera and parts of AxisCameraProcessor). Everything is compilable and debug-ready, and I'll see what happens when I get to the shop tomorrow afternoon. :) If it works, we'll share our dashboard code so everyone can use it.

We are looking at using Processing (Processing.org) as our front end and back end for our camera.