Using a Kinect as a primary camera for drivers
ekapalka
04-10-2013, 10:18
Hello! I was wondering how we would go about streaming images from a Kinect plugged into an on-board computer to the driver station. I've been looking at TCP (using C++, because we're using C++ OpenCV) for sending both targeting data (just three floating-point numbers) and video, but I'm not entirely sure where to begin. What have other teams done? Do they use third-party TCP/UDP/___ libraries, or do they write their own? I guess if all else fails we can use the Axis camera for the driver and simply send targeting data to the cRIO via NetworkTables (or the C++ equivalent). It's bugging me. All the vision processing I've learned so far has just been on my personal computer, because I know that actually sending that information to the cRIO is going to be a challenge of its own. Any thoughts on the matter would be greatly appreciated!
Invictus3593
04-10-2013, 13:59
I've been working on C# code to grab the Kinect's depth and color camera feeds, and I've had success receiving the feeds on an onboard computer, but I'm having trouble with the networking as well.
I know TCP is the protocol used by the Axis camera and the port it uses is 80. If you were going to use just the Kinect and no Axis camera, you could use port 80 for the data. If you need another port, look in the manual for guidelines on which ones you can use, and select one that is bidirectional, same as 80.
If you can't write your own TCP library, I'd definitely find a third-party one to work with.
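For what it's worth, opening the TCP connection from the driver-station side is only a few lines in most languages. Here's a minimal Java sketch of a client reading three targeting floats; the onboard computer's address, the port, and the three-float format are placeholder assumptions, so substitute whatever your setup actually uses:

```java
import java.io.DataInputStream;
import java.net.Socket;

public class TargetingClient {
    public static void main(String[] args) throws Exception {
        // Placeholder address: substitute your team's 10.TE.AM.x value
        // for the onboard computer, and whatever legal port you picked.
        try (Socket sock = new Socket("10.TE.AM.8", 1180)) {
            DataInputStream in = new DataInputStream(sock.getInputStream());
            while (true) {
                // Assumes the onboard server writes three raw floats per update.
                float x = in.readFloat();
                float y = in.readFloat();
                float range = in.readFloat();
                System.out.printf("x=%.2f  y=%.2f  range=%.2f%n", x, y, range);
            }
        }
    }
}
```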
Hopefully this helps!
curtTheGreat
04-10-2013, 15:01
Last year when we used a Kinect sensor, we ran into some trouble with a network-based solution. That being said, we chose to use the serial port on our cRIO. Going serial allowed us to get an uninterrupted signal from our onboard computer to the cRIO.
I played around with using the Kinect as a driver camera last year, and I managed to get it working with just some off-the-shelf freeware:
1. KinectCam, a driver to make the Kinect work as a USB webcam.
2. Yawcam, a small application to host the camera feed over the network.
It worked quite well, but with about a second of lag (with the Classmate on the robot transmitting over WiFi) it was too slow to be useful, and we didn't have any weight allowance left anyway.
Cool part is that KinectCam was able to place a BMP crosshair at a certain "depth" into the image, which was great for aiming. :D
Greg McKaskle
04-10-2013, 22:26
There are numerous threads asking how to communicate between devices on the robot.
I've attached an image to show how the networking can be accomplished in LV.
The TCP and UDP icons are in the Data Communication/Protocols directory. When you don't need to receive all data, and are sending smallish amounts, I prefer to use UDP. It is really simple to use. You need to decide how to encode the data: you can use JSON, XML, or flattened binary. Just make the read side match, and have it check for timeout errors.
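The same idea in a text-based language is only a few calls. Here's a minimal Java sketch of the send side using flattened binary; the driver-station address, the port, and the 20 Hz rate are placeholder assumptions:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class TargetingSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket sock = new DatagramSocket();
        // Placeholder: the driver-station laptop's address.
        InetAddress ds = InetAddress.getByName("10.TE.AM.5");
        while (true) {
            // Flattened binary: three floats = 12 bytes. The read side
            // must unpack in the same order.
            ByteBuffer buf = ByteBuffer.allocate(12);
            buf.putFloat(0f).putFloat(0f).putFloat(0f); // your targeting numbers here
            sock.send(new DatagramPacket(buf.array(), buf.capacity(), ds, 1180));
            Thread.sleep(50); // ~20 Hz; occasional UDP drops are fine for data like this
        }
    }
}
```

On the read side, DatagramSocket.setSoTimeout() gives you the timeout check mentioned above.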
TCP is similar -- but different. Large amounts of data (above a few kilobytes) will typically use TCP instead of UDP, and if you are sending a protocol where all elements must be processed in order, TCP is your friend.
Greg McKaskle
ekapalka
04-10-2013, 23:18
I've attached an image to show how the networking can be accomplished in LV.
[small, elegant LabView code]
Sigh... I really hope it turns out to be like that in C++... or even Java. Actually, it wouldn't be too much of a learning stretch (formatting-wise) to use Java.
Why not use the Axis camera for the targeting? I think it may suffice. Also, I believe there is a framework, OpenKinect, that may be what you are looking for. On a personal note, do you have anyone I can talk to about OpenCV? I am trying to learn it.
Thanx, Dev
610 (http://www.youtube.com/watch?v=FsFT4Wk54V0) explored the use of a Kinect and an onboard netbook (running RoboRealm) during the build season. Here are a few highlights:
1. The infra-red beam is strong enough to guide our auto-aiming routines (full court and under the pyramid). The programmers had it working beautifully during practice, and were pretty proud that our robot could auto-aim and shoot in the dark (full court!).
2. Unfortunately, in real competition, the other sources of infra-red light in the Verizon Arena (BAE) rendered it unusable. The team reverted to using the Kinect as a camera only, for driving and manual aiming. We later removed the Kinect and netbook to make room for our disc-pickup enhancement during Waterloo.
3. Overall, the Kinect/netbook/RoboRealm combo was very promising until we got into the BAE arena. The team may try it again if the new Kinect that comes with the Xbox One is more capable at filtering. RoboRealm is great, and we received great support from the product guys.
Feel free to PM me if you need more details.
I played around with using the Kinect as a driver camera last year, and I managed to get it working with just some off-the-shelf freeware [...] Cool part is that KinectCam was able to place a BMP crosshair at a certain "depth" into the image, which was great for aiming. :D
This post solves half the problems I've been having with OpenCV and hacking the Kinect! :D :D :D
Going by the post by Domtech, I believe you can use an RPi with the software he listed and do the processing on the driver station, which is probably a more accessible and powerful computer.
If you are interested in using Kinect with RoboRealm, here is a good tutorial (http://www.roborealm.com/FRC2013/index.php) on the 2013 game.
That's nice, thank you. I have been trying to find vision-tracking resources for years, and now I think I am close.
ekapalka
08-10-2013, 21:12
I know TCP is the protocol used by the Axis camera and the port it uses is 80. [...] If you can't write your own TCP library, I'd definitely find a third-party one to work with.
Thanks! That really does help. I've been looking all over, and I've gotten the impression that MJPEG just passed Java by and was never really picked up by the Java community (which makes finding useful resources a pain...). I've found great libraries for streaming plain JPEG images via UDP, but I have a feeling the DriverStation is pretty locked down and resistant to change. I've also found a few promising and "robust" third-party TCP libraries, but I'm almost certain we would end up having to write our own program to encode the series of JPEG images into MJPEG video (I can't find info on this ANYWHERE). Are there any other options for streaming video through port 80 to the driver station (without creating a custom dashboard or something like that)? Thanks!
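In the meantime, my best guess at what that encoder/server program would look like is below; as far as I can tell, MJPEG-over-HTTP is a single long multipart HTTP response with one JPEG per part. This is only a rough Java sketch, and grabJpegFrame() is a hypothetical stand-in for the actual capture code:

```java
import java.io.BufferedOutputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class MjpegServer {
    static final String BOUNDARY = "kinectframe"; // arbitrary boundary token

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(80); // or another legal port
        while (true) {
            try (Socket viewer = server.accept()) {
                OutputStream out = new BufferedOutputStream(viewer.getOutputStream());
                // One HTTP response whose body never ends; each part is a JPEG.
                out.write(("HTTP/1.0 200 OK\r\n"
                        + "Content-Type: multipart/x-mixed-replace;boundary="
                        + BOUNDARY + "\r\n\r\n").getBytes("US-ASCII"));
                while (true) {
                    byte[] jpeg = grabJpegFrame();
                    out.write(("--" + BOUNDARY + "\r\n"
                            + "Content-Type: image/jpeg\r\n"
                            + "Content-Length: " + jpeg.length + "\r\n\r\n")
                            .getBytes("US-ASCII"));
                    out.write(jpeg);
                    out.write("\r\n".getBytes("US-ASCII"));
                    out.flush();
                }
            } catch (Exception e) {
                // Viewer disconnected; loop back and wait for the next one.
            }
        }
    }

    // Hypothetical stand-in: return one JPEG-encoded Kinect frame.
    static byte[] grabJpegFrame() throws Exception {
        throw new UnsupportedOperationException("plug in your capture code here");
    }
}
```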
Since I am familiar with PHP, could I have nginx and PHP serve a file to the cRIO? How would I do this?
I found these:
[FRC 2012] Basic Kinect Tutorial
[FRC 2013] Kinect Tutorial
Robotic Arm Control Using Kinect Camera
Using the Kinect with LabVIEW and the Upgraded Microsoft .NET API
Kinect use with Code Laboratories drivers
https://decibel.ni.com/content/docs/DOC-15655
ekapalka
09-10-2013, 18:08
Thanks! In the event that MJPEG doesn't work out, are there different streaming protocols we're allowed to implement that we could access in a web browser (like the Axis, but not MJPEG)? I swear I saw a few teams doing this at the competition last year...
Invictus3593
15-10-2013, 09:35
Thanks! That really does help. [...] Are there any other options for streaming video through port 80 to the driver station (without creating a custom dashboard or something like that)? Thanks!
If you're trying to use the driver station itself to connect, it's going to be locked down, but the dashboard isn't. If you need help programming a custom dashboard, there are white papers all over the internet that can help, or you can post on the NI LabVIEW forums!
If you're completely set against the custom dashboard idea, you can try writing a separate program to run on the driver computer that only grabs what you need from the stream, but I don't think it would be easily integrated into driver-side vision processing.
If NI redesigns the driver station with the new control system, it would be a great idea to add an SDK for it; in other words, they should allow macros, scripts, and the ability to add controls to the driver station.
Greg McKaskle
20-10-2013, 19:28
Can you explain what you mean by "add controls"?
Greg McKaskle
Yes, sir, indeed.
What I mean by controls is adding features to it, like multiple cameras, buttons within the driver station, or anything else you could possibly think of.
ekapalka
20-10-2013, 21:19
Can you explain what you mean by "add controls"?
You seem like the right person to make suggestions to regarding the DriverStation. What I want is a full-fledged DriverStation SDK that gives programmers straightforward control over the entire environment: not only the SmartDashboard, but the look and feel of the DriverStation app as well, plus easy two-way communication between the DriverStation and an onboard computer. What I'd really love would be things like dynamic apps that can change depending on the task, non-standard I/O, custom buttons, custom sliders, particle effects, colour-changing text, OpenGL simulations, realtime graphs, and simple video-streaming procedures for non-IP cameras. I think I've set my expectations way too high :P I would at least like a change of DriverStation colours... I always feel like I'm using Windows 98... Sorry for complaining... I'm just working on a custom dashboard, and I've set a lot of really high goals... thanks!
I will have to agree with that. I think that is one of the "dreams" of many Driver Station users :D
Greg McKaskle
21-10-2013, 06:54
Thanks for elaborating. It would seem like most of what you guys want to do is already available, but it is not called a Driver Station SDK. The DS laptop runs the Driver Station App -- bottom of the window. That app launches the Dashboard. The Driver Station App is closed-source in order to provide consistency and safety, but the dashboard is open and is actually in charge of many of the things you mention.
The DS reads joysticks and looks for Cypress I/O. It controls TeleOp, Auto, Test, Practice, Enable, and Disable. It reminds you of the laptop battery state, logs communications quality, supports setup, displays error messages, and has the LCD box on the front to mimic a feature of the Blue Box from 2009.
The Dashboard is anything you want. It can be written in any language you want, use any display technology you want, and can even be running on a separate laptop. The LabVIEW environment supplies dashboard example source code and an SDK for some aspects like camera. It includes realtime graphs, and a 2D drawing canvas used for Kinect. If you want animated rainbow text, it isn't in the SDK, and honestly I wouldn't have thought to include it or document it either, but it is open, so do your own.
Communications between robot and dashboard can use SmartDashboard variables, also known as Network Tables. They can also use UDP and TCP. Since there is no mandated way to do an onboard computer, it would be difficult to put one in an SDK, but the industry-standard protocols will talk to almost anything Ethernet-based.
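As an illustration, the dashboard side of a SmartDashboard-variable read is only a few lines with the NetworkTables Java client. This is a sketch from the 2013-era API, so check the calls against the version you ship; the cRIO address and the "targetX" key are placeholders:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class DashboardClient {
    public static void main(String[] args) throws Exception {
        // The laptop runs as a NetworkTables client of the robot.
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("10.TE.AM.2"); // cRIO address, placeholder team number
        NetworkTable table = NetworkTable.getTable("SmartDashboard");
        while (true) {
            // "targetX" is a hypothetical key the robot code would publish.
            double x = table.getNumber("targetX", 0.0);
            System.out.println("targetX = " + x);
            Thread.sleep(100);
        }
    }
}
```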
If you choose to do a custom dashboard, I'd encourage you to first look at the ones provided if you need to access any of the protocol data sent to it by the Driver Station App. Ask questions if you don't understand what you are looking at. You can also find many inspiring dashboard apps written and published by other teams. Not all of them look like Win98.
Greg McKaskle
ekapalka
29-10-2013, 11:30
Hey! I'm pretty sure I've figured it out. The only problem left to tackle is how to set up the network; specifically, what address and port to assign to the socket running on the on-board computer, and what IPv4 address and subnet mask to set so the stream can be received by the standard driver station. Any ideas? I'm assuming I would set the IPv4 address on the Ethernet connection to 10.[te].[am].9, the subnet mask to 255.255.255.0, the socket IP to 10.[te].[am].2 (do I actually need to specify this?), and the socket's port to 1180. I don't currently have the robot to test on, so could anyone confirm this or suggest an alternative configuration? Thanks!
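Here's a stripped-down sketch of what I'm picturing for the on-board side, in Java; the port matches the 1180 I mentioned, everything else is untested assumption:

```java
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class OnboardServer {
    public static void main(String[] args) throws Exception {
        // A ServerSocket binds a local port on every interface of this
        // machine; you never give it a remote address at all.
        ServerSocket server = new ServerSocket(1180);
        while (true) {
            // The driver station connects to this machine's IP on port 1180.
            Socket client = server.accept();
            DataOutputStream out = new DataOutputStream(client.getOutputStream());
            out.writeFloat(0f); // targeting data would go here
            out.writeFloat(0f);
            out.writeFloat(0f);
            out.close();
        }
    }
}
```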
Mark McLeod
29-10-2013, 11:38
Stay away from .9; that is reserved for the Driver Station wireless used for at-home setups.
.8 is currently not spoken for, nor is anything .13 and higher.
I just like assigning IPs from 254 and down. That way, there are no worries about the cRIO or any other control system part having an IP conflict!
ekapalka
30-10-2013, 22:52
[Sigh]
We finally got to try out my program with the robot today, and I can't say I'm happy with the results. A few other programmers and I got to spend about an hour with the robot, and all we managed to do was eliminate a few potential problems. Still no connection, though. By the end we were just plugging in random IPv4 address suffixes and seeing what type of errors we'd receive. From what I've been led to believe, the on-board computer has to replace the functionality of the Axis camera entirely in order to properly send MJPEG data to the DriverStation. The actual "sending it out through the router" part was probably the biggest issue to wrap our heads around. We disabled all of the adapters except the Ethernet (on the on-board computer), disabled any and all firewalls, and checked to make sure that the Java server library (java.net.ServerSocket) was available. Then we configured the DriverStation IPv4 address and subnet mask the way we always have (10.[te].[am].9 and 255.255.255.0). That's about all of the variables we could rule out. We tried all sorts of IP address and port combinations for the socket running on the on-board computer, and multiple variations of IPv4 and subnet settings on its Ethernet adapter. The errors we received were either timeouts or refused connections (when creating the socket). Is there something obvious we're missing? Should the socket's IP address and the on-board computer's IPv4 address be set to the same address or something? Thanks!
Hjelstrom
31-10-2013, 01:22
Sorry if I missed part of your earlier posts, but what are you using to take the images from the Kinect and turn them into an MJPEG stream?
ekapalka
31-10-2013, 08:13
I'm grabbing them with OpenNI and OpenCV, converting each frame into a JPEG using ImageIO, and converting that into a byte[] to send. I get the same issues when I comment all of that out, though. The problems occur at the moment the socket is created...
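For reference, the conversion step boils down to this (a trimmed sketch; the BufferedImage comes out of the OpenNI/OpenCV capture code):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class FrameEncoder {
    // Encode one captured frame as JPEG and return the raw bytes to send.
    static byte[] toJpegBytes(BufferedImage frame) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(frame, "jpg", out);
        return out.toByteArray();
    }
}
```

But like I said, the exception is thrown before any of this runs, right when the socket is created.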