Inspired by this project: http://roboticjourney.blogspot.com/, my team and I have decided to try to make a Nintendo DS act as a robot base station/control board. I was wondering the following things:
How do we route the video to the DS?
How do we get the robot to interpret the button presses on the DS as raw buttons on a "joystick"?
It is possible to do what you want, but you will not find the path an easy one.
The Driver Station source code is not published. The Driver Station-to-Robot communication protocol is private information. It might be easier to implement your own controller than to figure out how to emulate the FRC Driver Station.
Now, you can try to capture the packets being sent and received and reverse engineer from that. But honestly, that seems like an awful lot of work.
After the 2009 season I attempted to reverse engineer the DS-robot, DS-FMS, and FMS-DS communication. It's not complete, but, IIRC, I had a proof-of-concept DS which sent static data to the robot.
The protocol isn’t that complex, and a soft DS has been written by several people. If you can get the DS to display a jpg, that part will fall into place. The only issue is that this is close to kickoff, but otherwise, a very doable project.
The contents of each packet are listed in the FRCCommonControlData struct in the NetworkCommunication/FRCComm.h file of the WPILib source. Wireshark (formerly Ethereal) might help with dissecting the packets. I've looked at the packets with Wireshark, but it's hard to extract useful information without a dissector plugin. Does anyone know why the protocol is supposedly private/closed-source? I would think that FIRST would encourage exactly this kind of innovation.
I’ve done this, including the Robot->DS direction (which is notably not documented in FRCComm.h), as part of an effort to build a robot side simulator in Python (related to my RobotPy work). So far I have complete functionality of basic robot operation running on Python on a normal PC interoperating with the official DS. Still to be done is enhanced IO, but much of this is documented in the EnhancedIO WPILib headers. I’m planning on completing this, along with implementing a simple non-GUI DS for testing purposes, in the next week or so.
I have not yet published this work, partly because I too am wondering why FIRST has kept the Robot/DS protocol secret, particularly given how easy it is to reverse engineer. I can understand the desire to keep the FMS protocol secret (due to competition network security issues), but not the rest.
As far as I know, the protocol was not meant to be private, but it is not documented either. Perhaps this is to keep things flexible from year to year. Perhaps there was some concern about safety.
The system watchdog will shut down the robot if communications halt, but if a badly written DS keeps sending enabled packets and ignores the driver, the robot is essentially out of control. Something to keep in mind if building a DS for robots of this size and speed.
What I’d recommend is to put the camera on the external switch beside the cRIO. With that, you can make direct requests to the camera requesting a JPEG. The Axis website documents the syntax of the request – it is part of the VAPIX API.
The way the camera has been used the last few years is to put it behind the cRIO soft-switch. The command to the camera is then made on the cRIO, and the contents are retransmitted to the DS.
There is actually a third way, involving making a pass-thru for the cRIO TCP stack, but again, I’d suggest the first approach.
No, not quite.
You still want to communicate with the robot regardless of its state. Just use the L button to determine whether the robot is enabled or not.
while (1) {                          // infinite main loops are the norm in NDS homebrew
    scanKeys();                      // libnds: refresh the key state each frame
    if (keysDown() & KEY_L) {        // L shoulder button enables the robot
        enabled = true;
    }
    if (enabled && (keysHeld() & KEY_UP)) {
        send(dir_FORWARD);
    }
    if (enabled && (keysHeld() & KEY_DOWN)) {
        send(dir_BACKWARD);
    }
    swiWaitForVBlank();              // throttle the loop to one pass per frame
}
all of this assuming that enabled is a bool and that send() is a function you write to transmit the command to the robot