#16
Re: Using a Kinect as a primary camera for drivers
Thanks! In the event that MJPEG doesn't work out, are there different streaming protocols we're allowed to implement that we could access in a web browser (like the Axis, but not MJPEG)? I swear I saw a few teams doing this at the competition last year...
#17
Re: Using a Kinect as a primary camera for drivers
If you're completely opposed to the custom dashboard idea, you could try writing a separate program to run on the driver computer that only gets what you need from the stream, but I don't think it would be easily integrated into driver-side vision processing.
#18
Re: Using a Kinect as a primary camera for drivers
If NI redesigns the driver station with the new control system, it would be a great idea to add an SDK for the driver station; in other words, they should allow macros, scripts, and the ability to add controls to the driver station.
#19
Re: Using a Kinect as a primary camera for drivers
Can you explain what you mean by "add controls"?
Greg McKaskle
#20
Re: Using a Kinect as a primary camera for drivers
Yes, sir, indeed.
What I mean by controls is adding features to it, like multiple cameras, buttons within the driver station, or anything else you could possibly think of.
#21
Re: Using a Kinect as a primary camera for drivers
You seem like the right person to make suggestions to regarding the DriverStation. What I want is a full-fledged DriverStation SDK that gives programmers straightforward, simple control over the entire environment (not only the SmartDashboard, but the look and feel of the DriverStation app as well), plus easy two-way communication between the DriverStation and an onboard computer. What I'd really love would be things like dynamic apps that can change depending on the task, non-standard I/O, custom buttons, custom sliders, particle effects, colour-changing text, OpenGL simulations, realtime graphs, and simple video-streaming procedures for non-IP cameras. I think I've set my expectations way too high :P I would at least like a change of DriverStation colours... I always feel like I'm using Windows 98... Sorry for complaining; I'm just working on a custom dashboard and I've set a lot of really high goals. Thanks!
#22
Re: Using a Kinect as a primary camera for drivers
I will have to agree with that. I think that is one of the "dreams" of many Driver Station users.
#23
Re: Using a Kinect as a primary camera for drivers
Thanks for elaborating. It would seem like most of what you guys want to do is already available, but it is not called a Driver Station SDK. The DS laptop runs the Driver Station App -- the bottom of the window. That app launches the Dashboard. The Driver Station App is closed-source in order to provide consistency and safety, but the Dashboard is open and is actually in charge of many of the things you mention.

The DS reads joysticks and looks for Cypress I/O. It controls TeleOp, Auto, Test, Practice, Enable, and Disable. It reminds you of the laptop battery state, logs communications quality, supports setup, displays error messages, and has the LCD box on the front to mimic a feature of the Blue Box from 2009.

The Dashboard is anything you want. It can be written in any language you want, use any display technology you want, and can even run on a separate laptop. The LabVIEW environment supplies dashboard example source code and an SDK for some aspects like the camera. It includes realtime graphs and a 2D drawing canvas used for the Kinect. If you want animated rainbow text, it isn't in the SDK, and honestly I wouldn't have thought to include or document it either, but it is open, so do your own.

Communications between robot and dashboard can use SmartDashboard variables, also known as Network Tables. They can also use UDP and TCP. Since there is no mandated way to do an onboard computer, it would be difficult to put it in an SDK, but the industry-standard protocols will work with almost anything Ethernet based.

If you choose to do a custom dashboard, I'd encourage you to first look at the ones provided if you need to access any of the protocol data sent to it by the Driver Station App. Ask questions if you don't understand what you are looking at. You can also find many inspiring dashboard apps written and published by other teams. Not all of them look like Win98.

Greg McKaskle
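To make the plain UDP option mentioned above concrete, here is a minimal sketch of an onboard computer pushing one value toward the dashboard laptop. The address and port are placeholders for illustration only, not anything mandated by the control system.

Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch: the onboard computer sends a single UDP datagram to the dashboard.
public class DashboardSender {
    public static void main(String[] args) throws Exception {
        byte[] payload = "target: 12.5 deg".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            // Placeholder address and port: substitute the DS laptop's actual
            // 10.[te].[am].x address and whatever port the custom dashboard listens on.
            InetAddress dashboard = InetAddress.getByName("10.0.0.5");
            socket.send(new DatagramPacket(payload, payload.length, dashboard, 5800));
        }
    }
}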
#24
Re: Using a Kinect as a primary camera for drivers
Hey! I'm pretty sure I've figured it out. The only problem I have left to tackle is how to set up the network: specifically, what address and port to assign to the socket running on the on-board computer, and what IPv4 address and subnet mask to set so that the stream can be received by the standard driver station. Any ideas? I'm assuming that I would set the IPv4 address on the Ethernet connection to 10.[te].[am].9, the subnet mask to 255.255.255.0, the socket IP to 10.[te].[am].2 (do I actually need to specify this?), and the port for the socket to 1180. I don't currently have the robot to test on, so could anyone confirm or suggest an alternative configuration? Thanks!
#25
Re: Using a Kinect as a primary camera for drivers
Stay away from .9; that is reserved for the Driver Station wireless connection used for at-home setups.
.8 is currently not spoken for, nor is anything .13 and higher.

Last edited by Mark McLeod : 30-10-2013 at 09:03.
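Tying the addressing question and this advice together: only the onboard computer's own adapter needs a static 10.[te].[am].x address and the 255.255.255.0 mask, set in the operating system's network settings, while the listening socket in code only needs a local port. A minimal sketch, with the port chosen purely as an example:

Code:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch: the listening socket only binds a local port. Viewers
// (dashboard or browser on the DS laptop) connect to the onboard computer's
// static 10.[te].[am].x address on that port.
public class OnboardListener {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(1180)) {   // example port
            while (true) {
                Socket client = server.accept();
                System.out.println("Viewer connected from " + client.getRemoteSocketAddress());
                client.close();   // a real server would hand the socket to the streaming code
            }
        }
    }
}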
#26
I just like assigning IPs from 254 and down. That way, there are no worries about the cRIO or any other control system part having an IP conflict!
#27
Re: Using a Kinect as a primary camera for drivers
[Sigh]
We finally got to try out my program with the robot today, and I can't say I'm happy with the results. A few other programmers and I got to spend about an hour with the robot, and all we managed to do was eliminate a few potential problems. Still no connection, though. By the end we were just plugging in random IPv4 address suffixes and seeing what type of errors we'd receive.

From what I've been led to believe, the computer on board the robot has to replace the functionality of the Axis camera in order to properly send MJPEG data to the DriverStation. The actual "sending it out through the router" part was probably the biggest issue to wrap our heads around.

We disabled all of the adapters except the Ethernet one (on the on-board computer), disabled any and all firewalls, and checked that the Java server socket class (java.net.ServerSocket) was available. Then we configured the DriverStation IPv4 address and subnet mask the way we always have (10.[te].[am].9 and 255.255.255.0). That's about all of the variables we could rule out. We tried all sorts of IP address and port combinations for the socket running on the on-board computer, and multiple variations of IPv4 and subnet settings on its Ethernet network adapter. The errors we received were either timeouts or refused connections (when creating the socket).

Is there something obvious we're missing? Should the socket IP address and the on-board computer's IPv4 address be set to the same address or something? Thanks!
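For reference, a rough sketch of the part an Axis camera normally does, assuming the viewer can consume a standard multipart/x-mixed-replace stream: once a viewer connection has been accepted (as in the listener sketch above), the HTTP headers are written once and then JPEG frames are written in a loop. getNextJpegFrame() is a stand-in for the Kinect capture pipeline, not a real library call.

Code:
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Rough MJPEG-over-HTTP sketch: stream JPEG frames over an accepted connection.
public final class MjpegStreamer {
    private static final String BOUNDARY = "kinectframe";

    public static void stream(OutputStream out) throws IOException {
        // One-time HTTP response header announcing a multipart stream.
        out.write(("HTTP/1.0 200 OK\r\n"
                + "Content-Type: multipart/x-mixed-replace;boundary=" + BOUNDARY + "\r\n"
                + "\r\n").getBytes(StandardCharsets.US_ASCII));
        while (true) {
            byte[] jpeg = getNextJpegFrame();   // stand-in for the Kinect capture code
            out.write(("--" + BOUNDARY + "\r\n"
                    + "Content-Type: image/jpeg\r\n"
                    + "Content-Length: " + jpeg.length + "\r\n"
                    + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.write(jpeg);
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }
    }

    private static byte[] getNextJpegFrame() {
        throw new UnsupportedOperationException("replace with the OpenNI/OpenCV capture code");
    }
}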
#28
Re: Using a Kinect as a primary camera for drivers
Sorry if I missed part of your earlier posts, but what are you using to take the images from the Kinect and turn them into an MJPEG stream?
#29
Re: Using a Kinect as a primary camera for drivers
I'm grabbing them using OpenNI and OpenCV, converting them into a JPEG image using ImageIO, and converting that into a byte[] to send. I get the same issues when I comment all that stuff out, though. The problems occur at the moment the socket is created...
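For comparison, the BufferedImage-to-byte[] step described above can stay as small as the sketch below; the frame itself would come from the OpenNI/OpenCV capture code. If the failure really happens when the socket is created, this part can indeed be ruled out.

Code:
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

// Sketch of the JPEG-encoding step: BufferedImage in, JPEG bytes out.
public final class JpegEncoder {
    public static byte[] toJpegBytes(BufferedImage frame) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        if (!ImageIO.write(frame, "jpg", buffer)) {
            throw new IOException("no JPEG writer available for this image type");
        }
        return buffer.toByteArray();
    }
}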