Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Technical Discussion (http://www.chiefdelphi.com/forums/forumdisplay.php?f=22)
-   -   Axis Cam transport layer? (http://www.chiefdelphi.com/forums/showthread.php?t=110408)

josh.pruim 04-01-2013 18:02

Axis Cam transport layer?
 
What is the transport layer for the Axis cam? I am going to be attaching the Kinect to the robot in testing, and would like to replace the Axis cam with the Kinect via a co-processor that would stream the images along with processing depth data. Is this at all possible, or does the Axis cam use a proprietary transport method? If I remember correctly, I did see a post mentioning that it's over HTTP (MotionJPEG?), and I could leverage that as well.

Tom Bottiglieri 04-01-2013 18:10

Re: Axis Cam transport layer?
 
You can access an MJPG stream over HTTP. Basically the way this works is you handle a request for the stream URI by replying with an HTTP header whose content type is multipart/x-mixed-replace;boundary=&lt;boundary-name&gt;. Then, for each new frame, you send the divider string named by that boundary, a short per-part header, and the full JPEG data (including its header), and repeat the process with the next frame.
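The framing described above can be sketched in a few lines of Python. This is an illustration of the wire format, not the Axis firmware's actual code; the boundary name "frame" is an arbitrary choice.

```python
BOUNDARY = "frame"  # arbitrary divider name, echoed in the response header

def http_header(boundary: str) -> bytes:
    """Initial response line and header advertising the multipart stream."""
    return (
        "HTTP/1.0 200 OK\r\n"
        f"Content-Type: multipart/x-mixed-replace;boundary={boundary}\r\n"
        "\r\n"
    ).encode("ascii")

def frame_part(jpeg: bytes, boundary: str) -> bytes:
    """One complete JPEG frame wrapped as a single multipart part."""
    return (
        f"--{boundary}\r\n"
        "Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg)}\r\n"
        "\r\n"
    ).encode("ascii") + jpeg + b"\r\n"
```

A client sees `http_header(...)` once, then `frame_part(...)` over and over, replacing the displayed image each time a new part arrives.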

You can google around to find some Python+gstreamer implementations. Switching out the gstreamer backend for an OpenCV webcam source is relatively easy.

Radical Pi 04-01-2013 18:48

Re: Axis Cam transport layer?
 
We had the Kinect on our robot last year, and tried to do something similar. We did all of the vision processing on the coprocessor, though, so the Axis emulation wasn't as important; we just included it so that the dashboard could get the image for debugging. We never did get it to be 100% reliable: the dashboard stream would often fail as soon as the cRIO was enabled (the results we sent over a second channel to the cRIO didn't fail), but I think that may have been a result of our choice of HTTP libraries.

If you need a starting place and are working in C(++), you can find our primary code for the coprocessor here. That's the file with most of the kinect interface stuff and parts of the vision algorithm, the rest are mainly for other parts of the vision processing. You can also see beagleSender.cpp for the code that sent the results to the cRIO. Our axis implementation isn't in there yet, but I'll see about getting it up.
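The "second channel" to the cRIO can be as simple as firing small UDP datagrams from the coprocessor. This is a hedged sketch of that idea, not the actual beagleSender.cpp protocol: the comma-separated payload format and any port number you pick are illustrative assumptions.

```python
import socket

def send_result(sock, addr, distance_m, angle_deg):
    """Pack one vision result as a small ASCII datagram and send it to
    the cRIO at addr (an (ip, port) tuple). The "<distance>,<angle>"
    layout is an assumption for illustration."""
    payload = f"{distance_m:.2f},{angle_deg:.2f}".encode("ascii")
    sock.sendto(payload, addr)
    return payload  # returned for logging/inspection

# Typical use from the vision loop (address/port are placeholders):
#     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#     send_result(sock, ("10.0.0.2", 1130), 3.25, -4.5)
```

UDP suits this job: a dropped result is stale a frame later anyway, so there is no point retransmitting it.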

With our system, we used a BeagleBoard-xM with Angstrom Linux, but that choice shouldn't matter too much. You'll need to install OpenCV if you want to do processing on the board, and libfreenect (which requires libusb) to get images from the Kinect. We didn't use the depth stream from the Kinect (its accuracy drops significantly beyond ~2 meters), but you can see how we grab the raw IR images (which work wonders with the retroreflective tape). Note that you can't have both IR and color video enabled simultaneously. You can also get a hi-res (1280x1024) image from one of the two cameras if you disable the depth data.

Good luck, and feel free to ask if you have any more questions.

adciv 05-01-2013 15:05

Re: Axis Cam transport layer?
 
Please see my post over here. The code is already written & released.
http://www.chiefdelphi.com/forums/sh...d.php?t=110241

