THE HORROR! THE HORROR!

Black Jags are serially controllable and have current-control modes plus built-in voltage and current feedback. So all you would need for a battery tester is a serial port (or a USB-to-serial adapter), a serial-to-Jaguar adapter, a Black Jaguar (almost every FRC team has one), and a power resistor you can get at Mouser or your local electronics surplus store. Battery on the input side, power resistor on the output side, tell the Jag to dump X amps into the resistor, then record and plot the feedback. The primary difficulty is writing the program to control the Jaguar: either I'd need to figure out how to generate the FRC heartbeat, or you'd have to flash the Jag with custom firmware.

I remember one work session where a robot decided to “spin out” as soon as it was enabled. It did not respond to any control inputs. The cause turned out to be a disconnected gamepad. Pressing F1 on the Driver Station brought things back to normal.

Is that documented somewhere, or would you have to put a sniffer on the line to analyze the traffic?

I have no idea. I’ve put zero effort into the project so far. I’ve run serial sniffers before for various work projects, so I have a reasonable idea of how to go about it. I’m hoping it’s actually just documented somewhere, though.

If you roll your Jaguar back to the factory firmware (available at the bottom of VEX’s product page), the “trusted mode” heartbeat isn’t required. Last I heard, the trusted mode stuff isn’t documented, to prevent people from replicating it (the code is in the closed-source NetworkCommunication library. Security through obscurity, I guess). AFAIK, the factory firmware is functionally identical to the FRC one, minus the heartbeat.

You can also find the source code for a slightly older version of both the factory firmware and bdc-comm in TI’s RDK-BDC24 package. I have no idea whether VEX plans on making a similar release with the newest code.

Wow, that is an absolutely brilliant idea. I think our electronics guys would probably be interested in trying something like this out at some point. It also gives me something to do with all the ancient computers with serial ports I have floating around my house.

We use C++.

You can turn those ancient computers with serial ports into test equipment, like for example a poor-man’s logic analyzer to inspect the timing of digital signals like encoder pulses. Or DIO set by tasks to inspect scheduling timing and jitter.


I recently turned an old PCI-X sound card I had in an old Gateway into a low-voltage oscilloscope, and it happens to have a built-in mic input and radio antenna, so I can analyze radio and audio frequencies as well (if I ever have the time to really muck around with it). I have never actually used a serial port for anything, so my lack of familiarity could be a bit of a hindrance as far as creating makeshift equipment is concerned, but maybe over the summer…

In the two regionals I’ve attended so far, I have only ever seen one robot lose comms. I didn’t watch every match, but it’s been one of the best years for this sort of thing since 2009 in my experience.

624 was having comms problems at Lone Star, but they definitively tied it to a radio reboot under heavy current draw. Unfortunately, they’ve swapped out everything, sometimes twice, in their attempts to fix it. I know they were fine in their last quals match, but I don’t know whether that held through the elims.

Borrow Jan Axelson’s book Serial Port Complete at your local library. It will get you up and going in no time.

If your local library does not have it, use inter-library loan to get it.

Or buy it.

The demo code and code examples downloadable here show how to read and write some of the RS-232 control pins, and how to read the Pentium’s 64-bit RDTSC cycle counter (which you can convert to nanoseconds given the CPU clock rate).

I’m running Lucid Linux with Xoscope and Osqoop on a Gateway PA6. All you need to do is download the Lucid Live CD ISO, burn it, and boot from it, then get online and download the scopes.

The trickiest part (though hardly difficult) is making a small passive circuit with caps and resistors to divide down the signal voltage and block the mic-bias DC coming out of the laptop’s mic port.

It will work with just about any reasonable signal voltage, given an appropriate voltage divider. The limitation is the 96 kHz sampling frequency. The upside is that it will store hours and hours of data (limited only by available disk space).

Our team had a problem like that this year, except it wasn’t a failed battery. The robot took a hit in an earlier match that pushed the DC connector on the Axis camera against the camera’s back panel (which is metal. Doh!). The back of the camera was also touching the frame, which caused some fun grounding issues. The driver station was showing the battery at around 7 V.

Fortunately, Rob, the head robot inspector, had seen this once before with an Axis camera and was able to point it out to us.

Hah! 1551’s pain was your gain! (Rob’s first experience with the extreme wonkiness that can come from a frame-grounded Axis camera was at our expense.)

Based on reports from this thread and others, it seems that the bandwidth limits are either not in place or not working, at least not at all events. It also seems that the Quality of Service packet prioritization is not working properly either. Several people have reported needing to turn off or turn down camera resolutions to resolve lag and loss of comms issues not only for their robot but for others in the same match.

I was not at any of these events, nor do I know anyone personally who has reported these symptoms, so it’s secondhand information at best. The fact remains that turning camera feedback to the driver station down or off seems to have resolved many of the issues at events. Hopefully FIRST will address this in some regard in an update or blog post soon.

I’d like to ask that if you want to blame the field for the issues, you post your code and pictures of your electronics/wiring so that we can look at them. You can’t criticize the field without full disclosure of the code. I say this because I was talking to a few CSAs at a week-two event, and more than half of the reported FMS malfunctions turned out to be robot-based, not field-based. Many of us are eager to blame faults on the field equipment, but that’s rarely the case. Usually it’s something with the robot.

Also, what’s supposed to happen when multiple robots trip the bandwidth limit?

“Something with the robot.” I really, *really* hate that terminology. It is entirely too general, and it carries the connotation that the fault is in the team’s code. Anything between the DS and the robot is not the FMS, true, but that does not imply it is something the teams have control over. Case in point: the C++ issue in SmartDashboard that they released a bug fix for in Team Update 2013-03-05. The bug affected teams at Week 1 events, but it was not part of the FMS. Yes, a lot of the time it is something in the team’s code, but there can be “robot side” issues that are not the team’s fault or responsibility to fix. It is very frustrating for teams to encounter these and be told “it’s something with your robot,” while having no ability to diagnose or fix the problem. The term is general, and can apply to anything that is not due to the FMS, regardless of who owns the buggy code.

According to the FMS whitepaper, the FMS puts a priority on robot control and status packets, so any other packets are likely to be dropped. Trip times will also increase drastically above 6 Mb/s, and the team exceeding their bandwidth cap may experience lag.

I was behind the FTA table at GTR East this weekend learning how to be a scorekeeper. From what I could tell, any problems that happened were because of team error. There was one team in particular that was dying in the middle of matches all the time. The FTAs went to their pit and discovered the problem was in the program (I’m not a programmer, so sorry if I’m wrong): the robot’s control loop was running too slowly. However, the programmer refused to change the code for some stupid reason and kept blaming it on the field.

At FLR we had a problem where we died in two of our matches. The first match it happened, the FTAs quickly hurried to our pit to try to figure out the problem with our robot, as they knew the problem was not in the FMS. I am very pleased with the FTAs this year, and I am sure they don’t want to have a year like last year. We figured out that the problem was the cRIO wiring: the two wires powering the cRIO were so close together that when we got hit by another robot, we would die.

The FMS is working perfectly fine from what I can tell. For some reason it seems like some people are having issues with their code, where they get stuck in a “loop” and then their robot is confused, dies, or keeps doing the same thing it was just doing. Again, that’s the team’s fault, not the FMS’s.

While you may hate that term, it’s still a fact that many matches were replayed due to code (such as the camera settings being wrong). Streaming at max resolution and 30 fps will tend to kill the connection to your robot, and because of odd circumstances, can even affect the rest of the field. And yes, the C++ issue wasn’t any team’s fault, but FIRST didn’t know about it and couldn’t release a patch until week two.

My point still stands: the FMS isn’t the cause of most of the issues this year. From what I’ve seen, it has almost always been something in the code, or on the robot. Now, I’m not saying that it’s always the fault of the robot. The field breaks sometimes. However, the majority of issues so far this year, from what I’ve seen, have been code related. This is why I asked the OP to post code and photos of the wiring: not only so we could say “that’s what caused the issue” (if it happened to be the robot or code), but also so others could look at it and say “that’s something I want to avoid doing with my code/wiring/robot, so I don’t have the same issues.”

If this is the case, why does having, say, three robots exceed the cap end up killing the whole field? It should only cause the offending robot(s) to cut in and out, right? Or am I missing something?

It’s funny though, the camera seems to be the bane of all field issues so far this year (spotlighted by the FTA at NY regional having all the teams turn off their cameras for eliminations). What has changed between the past few years and 2013 that could cause issues like this, other than the bandwidth cap?