The easiest way to get the image is to request <ip>/axis-cgi/jpg/image.cgi?resolution=320x240. Make sure the camera is connected to the bridge. Also make sure you enable the Axis's anonymous viewing option, so you don't have to authenticate.
Then, call:
ImageIO.read(new URL("http://<ip>/axis-cgi/jpg/image.cgi?resolution=<res>"));
ImageIO.read() returns a BufferedImage object, which you can paint with Graphics.drawImage() from inside a component. This method is very fast: I can get upwards of 30fps (seeing as the camera and computer are on the same LAN), and it offers basically the same performance as the mjpeg stream, since the mjpeg frames aren't compressed any further.
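A minimal sketch of the snapshot grab described above. The class and method names are mine (not from the attached project), and the IP is a placeholder you'd replace with your camera's address:

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;

public class CameraGrabber {
    // Builds the Axis snapshot URL for a given IP and resolution.
    static String snapshotUrl(String ip, int width, int height) {
        return "http://" + ip + "/axis-cgi/jpg/image.cgi?resolution="
                + width + "x" + height;
    }

    // Fetches one frame; ImageIO.read(URL) downloads and decodes
    // the JPEG in a single call.
    static BufferedImage grabFrame(String ip) throws IOException {
        return ImageIO.read(new URL(snapshotUrl(ip, 320, 240)));
    }
}
```

Call grabFrame() in a loop on a background thread and repaint the component each time a new frame arrives.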
Since I don’t think I could adequately explain it here, I’ve attached my dashboard project. The files you should look at are probably AxisCamera.java (getting the image), JImagePanel.java (painting the image), and the run method of AxisCameraProcessor.java (for getting/painting the image). Post if you need something clarified. In short: the JImagePanel takes an array of pixels (as ints) which have been grabbed from the camera and processed, and loads them into a BufferedImage via the BufferedImage’s setRGB() method (the code as-is could be sped up by minimizing the number of intermediate copies). The JImagePanel then displays that image in its paint method via Graphics.drawImage().
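The pixels-to-panel step can be sketched like this. This is a simplified stand-in for what JImagePanel does, not the actual project code (the class name and getPixel helper are mine):

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

public class PixelPanel extends JPanel {
    private final BufferedImage img;

    public PixelPanel(int width, int height) {
        img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }

    // Load processed pixels (packed 0xRRGGBB ints, row-major)
    // into the backing image in one setRGB call.
    public void setPixels(int[] pixels) {
        img.setRGB(0, 0, img.getWidth(), img.getHeight(),
                pixels, 0, img.getWidth());
    }

    // Helper for reading a pixel back (alpha masked off).
    public int getPixel(int x, int y) {
        return img.getRGB(x, y) & 0xFFFFFF;
    }

    @Override
    public void paint(Graphics g) {
        // Draw the current frame; a new setPixels + repaint()
        // updates the display.
        g.drawImage(img, 0, 0, null);
    }
}
```

In a real loop you'd call setPixels() with each processed frame and then repaint().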
Also, make sure you look at BigBrotherGUI.java, starting at line 50. If you want to run it, this is where the IP of the camera is set (along with the port of the custom Dashboard, but ignore that for now. I have most things working, but they’re not in a state to be shared yet; the code isn’t very portable).
There’s some other stuff (parts of AxisCameraProcessor and ImageOps) you can ask about if you want: object detection by filtering colors and then running blob detection on the result. (Right now it should be wired up for motion detection; hook it up to the camera, it’s kinda cool.) There’s also some edge detection lying around. I would suggest, though, using something like OpenCV if you really want vision for autonomous.
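The color-filtering step that feeds blob detection is just a per-pixel threshold. This is a hypothetical illustration of the idea, not the actual ImageOps code, and the channel thresholds here are made-up values:

```java
public class ColorFilter {
    // Keep pixels whose green channel clearly dominates, zero the rest.
    // Thresholds are arbitrary examples; tune them for your target color.
    static int[] filterGreen(int[] pixels) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int r = (pixels[i] >> 16) & 0xFF;
            int g = (pixels[i] >> 8) & 0xFF;
            int b = pixels[i] & 0xFF;
            // Pixel survives only if green is bright and beats red/blue
            // by a margin; everything else becomes black (0).
            out[i] = (g > 100 && g > r + 30 && g > b + 30) ? pixels[i] : 0;
        }
        return out;
    }
}
```

Blob detection then groups the surviving (nonzero) pixels into connected regions.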
The overall project layout is kind of lousy, but I tried to document it. You can ignore large parts of JImagePanel for now; most of the shape/text/etc. code is from when I was going to have the robot do the image processing (and send back markup commands), before I learned how slow that was and moved it to the computer. Otherwise, I think I documented most things or made them self-explanatory (except the GUI code, I guess, but that’s “just how it works”). If you need help with your GUI code, go ahead and post what you need and your attempt here and I’ll try to help the best I can (or use NetBeans’ Matisse, which is good for getting something made quickly).
EDIT: I’m not sure how you’d go about displaying the mjpeg stream (as in, whether you could use ImageIO in some way or not), but I really don’t see the benefit in doing so. The real bottleneck will be any processing you do, and if you don’t do any, both methods will be very similar in speed. The only thing that noticeably slows it down is increasing the size of the image.
BigBrother.zip (86.8 KB)