Using a DS to control an FRC robot

Hello all.

Inspired by this project: http://roboticjourney.blogspot.com/, my team and I have decided to try to make a Nintendo DS act as a robot base station/control board. I was wondering the following things:

  1. How do we route the video to the DS?
  2. How do we get the robot to interpret the button presses on the DS as raw buttons on a “joystick”?

please reply :slight_smile:

Oddly enough, the very first DS was a DS.

That is, the first demo we gave used a Nintendo DS as the Driver Station. The touch screen makes for some really fun driving.

Can I please have a copy of that code? It would make this project exponentially easier.

This was back before NI officially joined the project, so it is a touch out of date :frowning:

The protocol is completely different, so it wouldn’t really help at all. Not even sure where it is archived.

Cool project though, it should be fun!

/me sadface

Do you know where the code for the Driver Station is? Preferably not LabVIEW, but I can try to parse it if LabVIEW is the only road.

It is possible to do what you want, but you will not find the path an easy one.

The Driver Station source code is not published. The Driver Station-to-Robot communication protocol is private information. It might be easier to implement your own controller than to figure out how to emulate the FRC Driver Station.

Now, you can try to capture the packets being sent and received and reverse engineer the protocol from that. But honestly, that seems like an awful lot of work.

Use this to capture the packets:
http://www.ethereal.com/
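
If you give your PC the robot's IP address and point the official DS at it, even a throwaway listener like this will hex-dump whatever arrives so you can pick the fields apart by hand (the port number here is a guess on my part; confirm it in the capture first):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1110);               /* assumed DS->robot port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (sock < 0 || bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    unsigned char buf[1024];
    for (;;) {
        int n = recv(sock, buf, sizeof(buf), 0);
        for (int i = 0; i < n; i++)
            printf("%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
        printf("\n\n");
    }
}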

The easier thing to do is to write drivers so the computer can use the DS as a controller.

Yeah, I’ll probably go back to my original idea of using a DS to control an Ardubot. Thanks for the advice though :wink:

After the 2009 season I attempted to reverse engineer the DS-robot, DS-FMS, and FMS-DS communication. It’s not complete, but, IIRC, I had a proof-of-concept DS which sent static data to the robot.

My work is at http://frc1103dashboard.codeplex.com.

You may find some of it useful.

The protocol isn’t that complex, and a soft DS has been written by several people. If you can get the DS to display a jpg, that part will fall into place. The only issue is that this is close to kickoff, but otherwise, a very doable project.

Greg McKaskle

I have code that does exactly that; I just have to set the image size to QQVGA (160x120). Where do I send data to get the image to come back?

The contents of each packet are listed in the FRCCommonControlData struct in the NetworkCommunication/FRCComm.h file of the WPILib source. Wireshark (a.k.a. Ethereal) might help with disassembling the packets. I’ve looked at the packets with Wireshark, but it’s hard to extract useful information without a disassembler plugin. Does anyone know why the protocol is supposedly private/closed-source? I would think that FIRST would encourage exactly this kind of innovation.
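
For anyone else digging into it, the start of the DS-to-robot payload looks roughly like this. This is reconstructed from memory and abbreviated, so treat the names and widths as approximate and check the real header; if I remember right, the multi-byte fields are big-endian on the wire (the cRIO is a PowerPC):

#include <stdint.h>

/* Abbreviated, from-memory sketch; the authoritative layout is the
 * FRCCommonControlData struct in FRCComm.h.  Fields shown in wire order. */
struct ds_to_robot {
    uint16_t packetIndex;    /* increments with every packet */
    uint8_t  control;        /* bit flags: enabled, autonomous, e-stop, ... */
    uint8_t  dsDigitalIn;    /* DS digital input switches */
    uint16_t teamID;
    char     alliance;       /* 'R' or 'B' */
    char     position;       /* '1'..'3' */
    int8_t   stick0Axes[6];  /* joystick 1 axes, -128..127 */
    uint16_t stick0Buttons;  /* joystick 1 buttons, one bit each */
    /* ... sticks 1-3, four analog inputs, and version/CRC data follow ... */
};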

I’ve done this, including the Robot->DS direction (which is notably not documented in FRCComm.h), as part of an effort to build a robot side simulator in Python (related to my RobotPy work). So far I have complete functionality of basic robot operation running on Python on a normal PC interoperating with the official DS. Still to be done is enhanced IO, but much of this is documented in the EnhancedIO WPILib headers. I’m planning on completing this, along with implementing a simple non-GUI DS for testing purposes, in the next week or so.

I have not yet published this work, partly because I too am wondering why FIRST has kept the Robot/DS protocol secret, particularly given how easy it is to reverse engineer :confused:. I can understand the desire to keep the FMS protocol secret (due to competition network security issues), but not the rest.

As far as I know, the protocol was not meant to be private, but it is not documented either. Perhaps this is to keep things flexible from year to year. Perhaps there was some concern about safety.

The system watchdog will shut down the robot if communication halts, but if a badly written DS keeps sending enabled packets and ignores the driver, the robot is essentially out of control. Something to keep in mind when building a DS for robots of this size and speed.

Greg McKaskle

I’m gonna have it control a servobot over a custom UDP protocol.

Do you know how to grab images from the robot camera over the wireless?

There are two choices for getting the images.

What I’d recommend is to put the camera on the external switch beside the cRIO. With that, you can make direct requests to the camera requesting a JPEG. The Axis website documents the syntax of the request – it is part of the VAPIX API.
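
For reference, that request is a plain HTTP GET against the camera's image.cgi. Here is a rough sketch using ordinary BSD sockets (which dswifi also exposes on the DS); the function name and camera IP are placeholders, and 160x120 just matches the QQVGA size mentioned above:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Fetch one JPEG frame from the Axis camera via its VAPIX HTTP interface. */
int fetch_camera_jpeg(char *buf, int buflen)
{
    static const char request[] =
        "GET /axis-cgi/jpg/image.cgi?resolution=160x120 HTTP/1.0\r\n\r\n";

    struct sockaddr_in cam;
    memset(&cam, 0, sizeof(cam));
    cam.sin_family = AF_INET;
    cam.sin_port = htons(80);
    cam.sin_addr.s_addr = inet_addr("10.11.3.11");   /* placeholder camera IP */

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0 || connect(sock, (struct sockaddr *)&cam, sizeof(cam)) < 0)
        return -1;

    send(sock, request, sizeof(request) - 1, 0);

    /* Read the whole HTTP reply; the JPEG starts right after the blank
     * line that ends the response headers. */
    int total = 0, n;
    while ((n = recv(sock, buf + total, buflen - total, 0)) > 0)
        total += n;
    close(sock);
    return total;
}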

The way the camera has been used the last few years is to put it behind the cRIO soft-switch. The command to the camera is then made on the cRIO, and the contents are retransmitted to the DS.

There is actually a third way, involving making a pass-thru for the cRIO TCP stack, but again, I’d suggest the first approach.

Greg McKaskle

For the system disable, I would suggest a deadman switch on the L button, so the robot is only enabled while the L button is held.

so something like this:

scanKeys();
while (keysHeld() & KEY_L) {   // keysHeld(), not keysDown(), for "while held"
    // do the funky robot controlling stuff
    scanKeys();                // refresh the key state each pass
}

No, not quite.
You still want to communicate with the robot regardless of its state. Just use the L key to determine whether the robot is enabled or not.

How about this, then:

while (1) { // in NDS programming, while (1) loops are common
    swiWaitForVBlank();                    // pace the loop to the display refresh
    scanKeys();
    enabled = (keysHeld() & KEY_L) != 0;   // deadman: only enabled while L is held
    if (enabled && (keysHeld() & KEY_UP)) {
        send(dir_FORWARD);
    }
    if (enabled && (keysHeld() & KEY_DOWN)) {
        send(dir_BACKWARD);
    }
}

All of this assumes that enabled is a bool and that send() returns void.
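
For what it's worth, here is roughly what that send() could look like for the custom UDP idea mentioned earlier. Everything specific in it is an assumption on my part: the command values, the robot IP, and the port. It's also named send_cmd() so it doesn't collide with the BSD socket send() once the socket headers are pulled in (dswifi exposes the same BSD-style calls on the DS):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical command values for the custom protocol */
enum { dir_FORWARD = 1, dir_BACKWARD = 2 };

/* Fire a single command byte at the robot over UDP.  The IP and port are
 * placeholders for whatever the robot-side code actually listens on. */
void send_cmd(unsigned char cmd)
{
    static int sock = -1;
    static struct sockaddr_in robot;

    if (sock < 0) {
        sock = socket(AF_INET, SOCK_DGRAM, 0);
        memset(&robot, 0, sizeof(robot));
        robot.sin_family = AF_INET;
        robot.sin_port = htons(5800);                     /* assumed port */
        robot.sin_addr.s_addr = inet_addr("10.11.3.2");   /* assumed robot IP */
    }
    sendto(sock, &cmd, 1, 0, (struct sockaddr *)&robot, sizeof(robot));
}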