Using a Kinect as a primary camera for drivers

Hello! I was wondering how we would go about streaming images from a Kinect plugged into an on-board computer to the driver station. I’ve been looking at TCP (using C++, since we’re using C++ OpenCV) for sending both targeting data (just three floating-point numbers) and video, but I’m not entirely sure where to begin. What have other teams done? Do they use third-party TCP/UDP/___ libraries, or do they write their own? I guess if all else fails we can use the Axis camera for the driver and simply send targeting data to the cRIO via NetworkTables (or the C++ equivalent). It’s bugging me. All the vision processing I’ve learned so far has just been on my personal computer, and I know that actually sending that information to the cRIO is going to be a challenge of its own. Any thoughts on the matter would be greatly appreciated!

I’ve been working on C# code to read the Kinect depth camera and color camera, and I’ve had success receiving the feeds on an onboard computer, but I’m having trouble with the networking as well.

I know *TCP* is the protocol used by the Axis camera, and the port it uses is 80. If you were going to use just the Kinect and no Axis camera, you could use port *80* for the data. If you need another port, look in the manual for guidelines on which ones you can use and pick one that is bidirectional, same as 80.

If you can’t write your own TCP library, I’d definitely find a third-party one to work with.

Hopefully this helps!

Last year, when we used a Kinect sensor, we ran into some trouble with a network-based solution. Because of that, we chose to use the serial port on our cRIO. Going serial gave us an uninterrupted signal from our onboard computer to the cRIO.
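In case it helps, the onboard-computer side of a serial link doesn’t take much code. Here’s a rough, untested sketch using POSIX termios; the device path, baud rate, and the three-float framing are just assumptions, and the cRIO side has to agree on all of them:

```cpp
// Untested sketch: write three floats from the onboard computer out a
// USB-serial adapter using POSIX termios. Device path, baud rate, and the
// three-float framing are assumptions; the cRIO side must read the same
// 12 bytes with matching byte order.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int openSerial(const char* device) {
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    termios tty;
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);                   // raw 8N1, no echo or line processing
    cfsetispeed(&tty, B115200);
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD);   // ignore modem control lines, enable receiver
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

int main() {
    int fd = openSerial("/dev/ttyUSB0");       // placeholder device name
    if (fd < 0) return 1;

    float targeting[3] = {0.0f, 0.0f, 0.0f};   // e.g. distance, azimuth, elevation
    write(fd, targeting, sizeof(targeting));   // 12 bytes per update
    close(fd);
    return 0;
}
```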

I played around with using the Kinect as a driver camera last year, and I managed to get it working with just some off-the-shelf freeware:

  1. KinectCam, a driver that makes the Kinect work as a USB webcam.

  2. yawcam, a small application to host the camera feed over the network.

It worked quite well, but with about a second of lag with the Classmate on the robot transmitting over WiFi, it was too slow to be useful, and we didn’t have any weight allowance left.

Cool part is that KinectCam was able to place a BMP crosshair at a certain “depth” into the image, which was great for aiming. :smiley:

There are numerous threads asking how to communicate between devices on the robot.

I’ve attached an image to show how the networking can be accomplished in LV.

The TCP and UDP icons are in the Data Communication/Protocols directory. When you don’t need to guarantee that every packet arrives, and you are sending smallish amounts, I prefer to use UDP. It is really simple to use. You need to decide how to encode the data: you can use JSON, XML, or flattened binary. Just have the read side match, and have it check for timeout errors.

TCP is similar, but different. Large amounts of data, above a few kilobytes, will typically use TCP instead of UDP, and if you are sending a protocol where all elements must be processed in order, TCP is your friend.

Greg McKaskle

Sigh… I really hope it turns out to be like that in C++… or even Java. Actually, it wouldn’t be too much of a learning stretch (formatting-wise) to use Java.
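From poking around, I think the raw-socket version of the UDP idea is pretty short in C++. Here’s a rough, untested sketch of what I have in mind for sending our three targeting numbers to the cRIO (the address and port are just placeholders):

```cpp
// Untested sketch: send three targeting floats to the cRIO over UDP using
// plain BSD sockets. The IP address and port below are placeholders; the
// cRIO side would have to read the same byte layout and watch for timeouts.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(1130);                       // placeholder port
    inet_pton(AF_INET, "10.0.0.2", &dest.sin_addr);    // placeholder cRIO address

    float data[3] = {0.0f, 0.0f, 0.0f};                // e.g. distance, x-offset, y-offset
    while (true) {
        // ... update data[] from the vision code here ...
        sendto(sock, data, sizeof(data), 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
        usleep(20 * 1000);                             // ~50 Hz
    }

    close(sock);
    return 0;
}
```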

Why not use the Axis camera for the targeting? I think it may suffice. Also, I believe there is a framework, OpenKinect, that may be what you are looking for. Also, personally, do you have anyone I can talk to about OpenCV? I am trying to learn it.
Thanx, Dev

610 explored the use of Kinect and an onboard netbook (running RoboRealm) during the build season. Here are a few highlights:

  1. The infrared beam is strong enough to guide our auto-aiming routines (full court and under the pyramid). The programmers had it working beautifully during practice, and were pretty proud that our robot could auto-aim and shoot in the dark (full court!).

  2. Unfortunately, in real competition, the other sources of infrared light in the Verizon Arena (BAE) rendered it unusable. The team reverted to using it as a camera only, for driving and manual aiming. We later removed the Kinect and netbook to make room for our disc-pickup enhancement at Waterloo.

  3. Overall, the Kinect/netbook/RoboRealm combo was very promising until we got into the BAE arena. The team may try it again if the new Kinect that comes with the Xbox One is better at filtering. RoboRealm is great, and we received great support from the product guys.

Feel free to PM me if you need more details.

This post just solves half the problems that I am having with OpenCV and hacking the Kinect :D :smiley: :smiley:

According to the post by Domtech, I believe you could use an RPi with the software he listed and do the processing on the driver station, which is probably a more accessible and powerful computer.

If you are interested in using Kinect with RoboRealm, here is a good tutorial on the 2013 game.

That’s nice, thank you. I have been trying to find vision-tracking resources for many years, and now I think I am close.

Thanks! That really does help. I’ve been looking all over, and I’ve gotten the impression that MJPEG just passed Java by and wasn’t really picked up by the Java community (which makes finding useful resources a pain…). I’ve found great libraries for streaming straight JPEG images via UDP, but I have a feeling the Driver Station is pretty locked down and resistant to change. I’ve also found a few promising and “robust” third-party TCP libraries, but I’m almost certain we would end up having to write our own program to encode the series of JPEG images into MJPEG video (I can’t find info on this ANYWHERE). Are there any other options for streaming video through port 80 to the driver station (without creating a custom dashboard or something like that)? Thanks!
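The closest I’ve been able to piece together is that MJPEG over HTTP seems to be just a multipart response with one JPEG per part, so encoding it ourselves might not be as bad as I feared. Here’s a rough, untested sketch of what I mean, using OpenCV and plain sockets (the port is a placeholder and would need to be one the rules allow):

```cpp
// Untested sketch: serve OpenCV frames as an MJPEG-over-HTTP stream
// (multipart/x-mixed-replace, one JPEG per part). The port is a placeholder.
#include <opencv2/opencv.hpp>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <sstream>
#include <string>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // Kinect exposed as a webcam (e.g. via KinectCam)
    if (!cap.isOpened()) return 1;

    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(1180);             // placeholder port; check the manual for legal ports
    bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(server, 1);
    int client = accept(server, 0, 0);

    // HTTP header announcing a never-ending multipart stream of JPEGs.
    std::string header =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n";
    send(client, header.data(), header.size(), 0);

    cv::Mat frame;
    std::vector<uchar> jpg;
    while (cap.read(frame)) {
        cv::imencode(".jpg", frame, jpg);    // one JPEG per frame

        std::ostringstream part;
        part << "--frame\r\n"
             << "Content-Type: image/jpeg\r\n"
             << "Content-Length: " << jpg.size() << "\r\n\r\n";
        std::string head = part.str();

        send(client, head.data(), head.size(), 0);
        send(client, reinterpret_cast<const char*>(&jpg[0]), jpg.size(), 0);
        send(client, "\r\n", 2, 0);
    }

    close(client);
    close(server);
    return 0;
}
```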

Since I am familiar with PHP, could I have nginx and PHP serve a file to the cRIO? How would I do this?

I found these:
[FRC 2012] Basic Kinect Tutorial
[FRC 2013] Kinect Tutorial
Robotic Arm Control Using Kinect Camera
Using the Kinect with LabVIEW and the Upgraded Microsoft .NET API
Kinect use with Code Laboratories drivers

Thanks! In the event that MJPEG doesn’t work out, are there different streaming protocols we’re allowed to implement that we could access in a web browser (like the Axis, but not MJPEG)? I swear I saw a few teams doing this at the competition last year…

If you’re trying to use the driver station itself to connect, it is going to be locked down, but the dashboard isn’t. If you need help programming a custom dashboard, there are white papers all over the internet that can help, or you can post in the NI LabVIEW forum!

If you’re completely set against the custom-dashboard idea, you can try writing a separate program that you run on the driver computer and that *only* gets what you need from the stream, but I don’t think it would be easily integrated into driver-side vision processing.
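For example, a bare-bones viewer could be little more than OpenCV opening the stream URL itself. This is an untested sketch; the URL is a placeholder (Axis-style path shown), and it assumes OpenCV was built with FFmpeg/network support:

```cpp
// Untested sketch: a stand-alone viewer on the driver computer that opens an
// MJPEG stream URL and displays it. The URL is a placeholder; OpenCV needs
// FFmpeg/network support for this to work.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture stream("http://10.0.0.11/mjpg/video.mjpg");   // placeholder URL
    if (!stream.isOpened()) return 1;

    cv::Mat frame;
    while (stream.read(frame)) {
        cv::imshow("Robot camera", frame);
        if (cv::waitKey(1) == 27) break;    // Esc to quit
    }
    return 0;
}
```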

If NI redesigns the driver station with the new control system, it would be a great idea to add an SDK for it; in other words, they should allow macros, scripts, and the ability to add controls to the driver station.

Can you explain what you mean by “add controls”?

Greg McKaskle

Yes, sir, indeed.
What I mean by controls is adding features to it, like multiple cameras, buttons within the driver station, or anything else you could possibly think of.