THE HORROR! THE HORROR!

Team RUSH 27 ran into SmartDashboard issues at Northern Lights that spammed our cRIO and pushed our processor usage over 100%. Blame was originally placed on the 7 Mbps bandwidth cap, but after further investigation, NI advisors suggested it may be an issue with SmartDashboard itself. This caused major issues that hindered our robot for 3 matches. Has anyone else seen similar issues?

They were simply failing batteries/cells that managed to emerge at the worst possible time. You can be sure we are taking measures to ensure that does not happen again.

Black Jags are serial controllable and have current control modes plus built-in voltage and current feedback. So all you would need for a battery tester is a serial port (or USB-serial adapter), a serial-to-Jaguar adapter, a Black Jaguar that almost every FRC team has one of, and a power resistor you can get at Mouser or your local electronics surplus store. Battery on the input side, power resistor on the output side, tell the Jag to dump X amps into the resistor, then record and plot your feedback. The primary difficulty is writing the program to control the Jaguar. Either I’d need to figure out how to generate the FRC heartbeat, or you’d have to flash the Jag with custom firmware.
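Once you have the voltage/current feedback coming back, the useful battery numbers fall out of simple arithmetic. A minimal sketch of that math (the serial protocol itself is not shown, and all numbers here are illustrative, not measurements):

```python
# Sketch: interpreting a Jaguar-style voltage/current feedback pair for a
# battery test. Values are hypothetical examples, not a tested procedure.

def internal_resistance(v_open: float, v_load: float, current: float) -> float:
    """Estimate battery internal resistance from the sag under a known load."""
    if current <= 0:
        raise ValueError("load current must be positive")
    return (v_open - v_load) / current

def resistor_power(v_load: float, current: float) -> float:
    """Power the dump resistor must dissipate at this operating point
    (useful for sizing the resistor before you buy it)."""
    return v_load * current

# Example: a battery reading 12.8 V open-circuit sags to 11.6 V at 20 A.
r_int = internal_resistance(12.8, 11.6, 20.0)   # 0.06 ohm
p_res = resistor_power(11.6, 20.0)              # 232 W -- size accordingly
```

A tired battery shows up as a noticeably higher internal resistance than its siblings at the same state of charge, which is exactly the failure mode described above.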

I remember one work session where a robot decided to “spin out” as soon as it was enabled. It did not respond to any control inputs. The cause turned out to be a disconnected gamepad. Pressing F1 on the Driver Station brought things back to normal.

Is that documented somewhere, or would you have to put a sniffer on the line to analyze the traffic?

I have no idea. I’ve put zero effort into the project so far. I’ve run serial sniffers before for various work projects, so I have a reasonable idea of how to go about it. I’m hoping it’s actually just documented somewhere, though.

If you roll your Jaguar back to the factory firmware (available at the bottom of VEX’s product page), the “trusted mode” heartbeat isn’t required. Last I heard, the trusted mode stuff isn’t documented, to prevent people from replicating it (the code is in the closed-source NetworkCommunication library; security through obscurity, I guess). AFAIK, the factory firmware is functionally identical to the FRC one, minus the heartbeat.

You can also find the source code for a slightly older version of both the factory firmware and bdc-comm in TI’s RDK-BDC24 package. I have no idea whether VEX plans on making a similar release with the newest code.

Wow, that is an absolutely brilliant idea. I think our electronics guys would probably be interested in trying something like this out at some point. It also gives me something to do with all the ancient computers with serial ports I have floating around my house.

We use C++.

You can turn those ancient computers with serial ports into test equipment: for example, a poor-man’s logic analyzer to inspect the timing of digital signals like encoder pulses, or to watch DIO lines toggled by tasks so you can inspect scheduling timing and jitter.
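Once you’ve captured timestamps of pin transitions (for example, by polling one of the serial port’s control lines), the timing analysis is trivial. A sketch with made-up timestamps, assuming a nominally 1 ms encoder pulse train:

```python
# Sketch: period and jitter analysis of captured edge timestamps.
# The timestamps below are hypothetical, in microseconds.

def periods(timestamps):
    """Intervals between successive rising edges."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def jitter(timestamps):
    """Peak-to-peak deviation of the edge-to-edge period."""
    p = periods(timestamps)
    return max(p) - min(p)

edges_us = [0, 1002, 1998, 3001, 4000]   # nominally 1000 us apart
print(periods(edges_us))   # [1002, 996, 1003, 999]
print(jitter(edges_us))    # 7
```

The same two functions work whether the edges came from an encoder or from a DIO line a task toggles on every iteration, which is how you’d see scheduling jitter.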


I recently turned an old PCI sound card I had in an old Gateway into a low-voltage oscilloscope, and it happens to have a built-in mic input and radio antenna, so I can analyze radio and sound frequencies as well (if I ever have the time to really muck around with it). I have never actually used a serial port for anything, so my lack of familiarity could be a bit of a hindrance as far as creating makeshift equipment is concerned, but maybe over the summer…

In the two regionals I’ve attended so far, I have only ever seen one robot lose comms. I didn’t watch every match, but it’s been one of the best years for this sort of thing since 2009 in my experience.

624 was having comms problems at Lone Star, but they definitively tied it to a radio reboot under heavy current draw. Unfortunately, they’ve swapped out everything, sometimes twice, in their attempts to fix it. I know they were fine in their last quals match, but I don’t know whether that held through the elims.

Borrow Jan Axelson’s book Serial Port Complete from your local library. It will get you up and going in no time.

If your local library does not have it, use inter-library loan to get it.

Or buy it.

The demo code and code examples downloadable here show how to read and write some of the RS232 control pins, and how to read the Pentium’s 64-bit RDTSC cycle counter.
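For anyone without a machine where reading RDTSC directly is practical, the same idea is available portably. A minimal sketch using Python’s monotonic nanosecond timer as the modern analogue of the cycle counter:

```python
import time

# Sketch: timing a code region with a monotonic high-resolution timestamp,
# the portable analogue of polling RDTSC. The loop body is just a placeholder.

def time_loop_ns(iterations: int) -> int:
    """Elapsed wall time, in nanoseconds, of a trivial busy loop."""
    start = time.perf_counter_ns()
    acc = 0
    for i in range(iterations):
        acc += i
    return time.perf_counter_ns() - start

elapsed = time_loop_ns(100_000)   # varies by machine; always positive
```

Unlike raw RDTSC, `perf_counter_ns` already accounts for clock frequency, so you get nanoseconds directly instead of cycles.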

I’m running Lucid Linux with Xoscope and Osqoop on a Gateway PA6. All you need is to download the Lucid Live CD ISO and burn it and boot from it, then get online and download the scopes.

The trickiest part (though hardly difficult) is making a small passive circuit with caps and resistors to divide down the signal voltage and block the mic DC bias coming out of the laptop mic port.

It will work with just about any reasonable signal voltage, given the appropriate voltage divider. The limitation is the 96 kHz sampling rate, which caps the usable bandwidth at roughly half that. The upside is that it will store hours and hours of data (limited only by available disk space).
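The input-network math is two one-liners: a resistive divider to scale the signal down to mic level, and a series capacitor whose high-pass corner you want well below the lowest frequency of interest. A sketch with illustrative component values (not a tested design):

```python
import math

# Sketch: sizing the divider and the DC-blocking cap for a sound-card scope
# input. R and C values below are hypothetical examples.

def divider_ratio(r_top: float, r_bottom: float) -> float:
    """Attenuation of a two-resistor divider (output taken across r_bottom)."""
    return r_bottom / (r_top + r_bottom)

def highpass_corner_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB point of the series-cap / input-resistance high-pass."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

ratio = divider_ratio(100e3, 1e3)        # ~1/101: a 12 V swing -> ~120 mV
corner = highpass_corner_hz(1e3, 1e-6)   # ~159 Hz with 1 uF into 1 kohm
```

If 159 Hz is too high for what you want to look at, a larger coupling cap moves the corner down proportionally.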

Our team had a problem like that this year - except it wasn’t a failed battery. The robot took a hit in an earlier match which made the DC connector on the Axis camera touch the back of the camera (which is metal. Doh!). The back of the camera was touching the frame, which caused some fun grounding issues. The driver station was showing the battery at around 7 V.

Fortunately, Rob, the head robot inspector, had seen this once before with an Axis camera and was able to point it out to us.

Hah! 1551’s pain was your gain! (Rob’s first experience with the extreme wonkiness that can come from a frame-grounded Axis camera was at our expense.)

Based on reports from this thread and others, it seems that the bandwidth limits are either not in place or not working, at least not at all events. It also seems that the Quality of Service packet prioritization is not working properly either. Several people have reported needing to turn off or turn down camera resolutions to resolve lag and loss of comms issues not only for their robot but for others in the same match.

I was not at any of these events, nor do I know anyone personally who has reported these symptoms, so it’s second-hand information at best. The fact remains that turning camera feedback to the driver station down or off seems to have resolved many of the issues at events. Hopefully FIRST will address this in some regard in an update or blog post soon.

I’d like to ask that if you’d like to blame the field for the issues, you post your code and pictures of your electronics/wiring so that we can look at it. You can’t criticize the field without full disclosure of the code. I say this because I was talking to a few CSAs at a week-two event, and more than half of the reported FMS malfunctions were robot-based, not field-based. Many of us are eager to blame the field equipment for faults, but that’s rarely the case. Usually it’s something with the robot.

Also, what’s supposed to happen when multiple robots trip the bandwidth limit?

“Something with the robot”. I really, *really* hate that terminology. It is entirely too general, and has connotations that it is something in the team’s code. Anything between the DS and robot is not the FMS, true, but that does not imply that it is something the teams have control over. Case in point is the C++ issue in SmartDashboard for which a bug fix was released in Team Update 2013-03-05. The bugs affected teams at Week 1 events, but were not part of the FMS. Yes, a lot of the time it can be something in the team’s code, but there can be “robot side” issues that are not the team’s fault or responsibility to fix. It is very frustrating for teams to encounter these and be told “It’s something with your robot”, but have no ability to diagnose or fix the problem. The term is general, and can apply to anything that is not due to the FMS, regardless of who owns the buggy code.

According to the FMS whitepaper, the FMS prioritizes robot control and status packets, so any other packets are likely to be dropped. Trip times will also increase drastically above 6 Mb/s, and the team exceeding their bandwidth cap may experience lag.
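A back-of-the-envelope MJPEG bandwidth estimate shows why dropping the camera resolution or frame rate is usually enough to get back under the cap. The per-frame sizes below are rough guesses for moderate JPEG compression, not measurements:

```python
# Sketch: approximate camera stream bandwidth, to compare against a ~7 Mb/s
# per-team cap. Frame sizes are illustrative assumptions.

def stream_mbps(frame_kb: float, fps: float) -> float:
    """Approximate stream bandwidth in megabits per second
    (kilobytes/frame * 8 bits/byte * frames/s / 1000)."""
    return frame_kb * 8 * fps / 1000.0

big   = stream_mbps(30.0, 30.0)   # 640x480 @ 30 fps: 7.2 Mb/s -- over the cap
small = stream_mbps(8.0, 15.0)    # 320x240 @ 15 fps: 0.96 Mb/s
```

Halving both the linear resolution and the frame rate cuts the estimate by roughly a factor of seven here, which matches the reports of teams fixing lag by turning the camera down rather than off.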