We cannot deploy code to the robot under normal conditions.
When we attempt to do so, we get the “Waiting for Real-Time target (RT CompactRIO Target) to respond” alert and the deployment hangs.
The only time we can deploy code is immediately after reformatting the cRIO, which itself sometimes fails (though that has been attributed to problems with our Ethernet cable).
We have attempted to reformat and load simple code, as suggested. This did not work: we can load the code once, but after that we cannot load code again.
What we are using:
4-slot cRIO
What we have tried:
Connecting the Ethernet cable directly to the cRIO
Cycling power to both the cRIO and the computer
Ending and restarting all background LabVIEW utilities, as suggested
We also have another, separate issue which may or may not be related. After code is loaded onto the robot, we can drive it around for a time. When we reboot the cRIO, we have both communication and code. However, as soon as we move the drive motors, we lose both. They resume within a few seconds, and we can drive around for a few minutes until eventually we lose everything and the cRIO has to be reformatted again.
This problem has had us scratching our heads since a day or two after the 2013 FRC LabVIEW was downloaded. Any advice would be very much appreciated.
I got a balky 8-slot cRIO to accept code today by turning on the NO APP switch before rebooting it. I’m not sure what’s keeping it from responding properly otherwise, but I’m confident I’ll figure it out soon, and I expect it’ll be something simple.
We turned on the NO APP switch and found that we could no longer download or run code on the robot. We received the following error: NI System Configuration (Hex 0x80004003): required pointer parameter was null.
No other switches on the cRIO are on, save for the Console Out switch, which has been on for as long as I can remember. Should we set that to off and try again?
We edited the Begin VI in the basic robot code to accommodate a 4-motor drivetrain. Other than that, no edits were made.
You should have been able to deploy using the “Run at startup” command, but not being able to run code is expected. NO APP explicitly prevents LabVIEW user code from running when the cRIO starts up.
The CONSOLE OUT switch is only important if you want to use the serial console, in which case it needs to be ON, or if you want to use a serial-to-CAN gateway (i.e. black Jaguar), in which case it needs to be OFF.
We managed to recreate an error that we think may have something to do with our other problems.
This error, shown in the attached image, occurs consistently whenever we deploy code to a robot the second time after reimaging. We have gotten this error in the past and never thought anything of it. However, posts about unending While loops made us reconsider.
We thought that this error might be related to LabVIEW 2013, so we took out an old computer with LabVIEW 2012 (but not 2013), reimaged the cRIO with 2012, and then downloaded simple code. This is where things get interesting.
We got the exact same error as described above, but we were able to download code as often as we wanted.
Right now we are uninstalling and reinstalling LabVIEW 2013 on our first computer and crossing our fingers.
We’re having exactly the same issue here with team 871. Sometimes code deploys; most of the time it does not. We’ve tried everything we can think of, from switching radios to cRIOs to cables to laptops, various combinations of rebooting, battery changes, and cable orientations (away from motors/controllers and anything else that could possibly put out any kind of EMI). We’ve done this with shiny new example projects created directly from the FRC LabVIEW 2012 splash menu. Always the same results: lots of “waiting for cRIO to respond” and occasionally a success, but rarely. I’ve brought my controller home to see if I can get any more information from it, but I don’t have high hopes. We’ve already wasted valuable hours trying to deploy ANYTHING, but it always ends up being 5 minutes of coding and 45 minutes of rebooting things and trying to download.
Also, when we re-flashed to last year’s cRIO image, the problem went away. Of course this is unacceptable, since it’s not compatible with this year’s FMS and all that jazz…
Any help would be appreciated, we’re practically dead in the water.
After nosing around with Wireshark for a while, I found this: when the robot is operating normally (i.e., after a reboot without doing a code download), I see traffic between the robot and PC on TCP port 1735. Every TCP packet from the laptop gets a good ACK from the cRIO.
Now as SOON as I start downloading code, I see some traffic on TCP 3079 between the PC and cRIO (Wireshark calls it lv-frontpanel), then it halts, and then I start seeing TCP RSTs from the robot when the PC pokes it on 1735. Is this symptomatic of something? I noticed that port 1735 stays solidly dead until the robot reboots, at which point it accepts connections/traffic on this port again.
I haven’t yet researched what NI LabVIEW or the FRC software does with port 1735, but I wanted to get the info up here.
I can re-create the problem and provide Wireshark logs to the NI gurus if anyone needs them.
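For anyone who wants to repeat the port check without firing up Wireshark, here is a minimal Python sketch that probes whether the cRIO is still listening on a TCP port. The robot address in the comment is hypothetical (10.TE.AM.2 is the conventional FRC addressing scheme; substitute your team number); a refused connection (the RST behavior described above) or a timeout both come back as False.

```python
import socket

def probe_port(host, port, timeout=2.0):
    """Return True if `host` accepts a TCP connection on `port`.

    A refused connection (an RST in response to the SYN) raises
    ConnectionRefusedError, and an unanswered SYN raises a timeout;
    both are subclasses of OSError and come back as False here.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address -- use your own team number):
# probe_port("10.8.71.2", 1735)
```

Running this before and after a failed deploy should show port 1735 flipping from reachable to dead, matching the packet capture.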
We talked with a few other teams and even went as far as to use another team’s copy of LabVIEW on their computer to see if there was any difference. Booting the cRIO in safe mode, playing with IP addresses, and replicating the problem on three different cRIOs with three different computers leads us to believe that it is a problem with LabVIEW 2013.
A minor breakthrough: tinkering with the NO APP switch allows us to download code, which was the first thing Mr. Anderson mentioned. (Thanks, Mr. Anderson!) It doesn’t affect the underlying problem, though. It still takes significantly longer to test and deploy than last year, which may or may not be a problem come competition. Hopefully an update will be rolled out by then.
Did you notice that the problem was less severe when the driver station/dashboard was not open? It seemed like it to me, but I’m not sure if I just want it to seem better or it actually was. I’ll try the NO APP switch today and see if that works for us.
There are two separate things here. The dialog that the OP mentioned is just a scary dialog I’ve been trying to get RT to change for a few releases. It is “normal”, telling you that you are about to kick some things out of memory in the process of carrying out the operation that was requested.
The second issue is that when the dashboard is connected to the robot and SmartDashboard traffic is flowing, it seems to prevent the controller from aborting the deployed app. I do not remember this being an issue during the beta, but I confirmed it this morning. I’ll be working with the RT team to get to the bottom of it tomorrow, but in the meantime, either kill the dashboard, or don’t deploy code but run it in debug mode instead. The debug procedure is faster anyway.
Using the no app switch gets rid of the deployed app. This is about the same as unsetting the startup app.
In other words, go to the build specification in the project and unset it as the startup app. Then restart the cRIO and nothing should be running. Using debug mode (the run button or menu) should then work fine. The issue seems to be with aborting the deployed app.
Have you been able to find a solution to this issue yet? We too are having the exact same issue with waiting for RT to respond. If the DS isn’t running, it seems to work flawlessly. No switches are set to on.
I know the problem and have a solution, but we wanted to batch any updates a bit.
The complication is that the design requires that the SmartDashboard server supports connections to multiple clients. Each client opens a TCP connection to port 1735 and gets its own session. That means that the server creates a TCP listener and loops waiting for clients to connect. As each connects, it spins up a unique handler for that client session. There are several ways to spin up an arbitrary number of handlers, and the one I chose, the new one with simpler syntax called the Asynchronous Call by Reference node, has an issue in the corner case of being deployed on RT and being asked to clean up all independent VIs so that the controller can be targeted by a different project.
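The listener-plus-handlers design described above can be sketched in ordinary socket code. This is a hedged Python analogue, not the actual LabVIEW implementation: the listener accepts each SmartDashboard client on port 1735 and spins up an independent handler per session, with threads standing in for the asynchronously called handler VIs and an echo body standing in for the real protocol traffic.

```python
import socket
import threading

def handle_client(conn):
    """Per-client session handler -- the analogue of the handler VI
    the server launches for each connected SmartDashboard client."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break           # client closed its connection; handler completes
            conn.sendall(data)  # placeholder for the real protocol traffic

def serve(host="0.0.0.0", port=1735):
    """Listener loop: create the TCP listener, wait for clients,
    and launch one independent handler per session."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn,),
                             daemon=True).start()
```

The LabVIEW corner case is about how those handlers are launched (Asynchronous Call by Reference versus the older Run method); in this sketch the threads simply play that role, and each handler exits on its own once its client disconnects.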
The RT runtime will correct the corner case, but for this year the solution is to use the slightly older Run method, open my own instances of the handler VI, and assign the parameters using the ByName method. Not much work, not much risk, but it is painful enough that we want a single update, so we are waiting just a bit to see if anything else is reported.
The workaround, as Alan mentioned, is to exit the SmartDashboard clients. When they close their connections, the server handlers complete and the Waiting issue doesn’t occur. So you don’t need to kill the DS, just the dashboard and any other SmartDashboard clients.
We hope to have a patch released in a few days.
I’m not using the SmartDashboard at all and am having fits with the 8-slot cRIO. The 4-slot ones are working fine. I simply cannot deploy code to the 8-slot, with symptoms similar to those described by others in this thread. I can successfully “run as startup” right after imaging, and then subsequent “run as startup” attempts just hang. You can’t cancel the dialog; you have to kill LabVIEW.
I have two 8-slotters, and they both were fine last year. The only workaround I’ve found is the NO APP DIP switch. Having that on allows me to “run as startup” successfully.