
View Full Version : THE HORROR! THE HORROR!


Team23pitboss
08-03-2013, 15:44
https://sphotos-a.xx.fbcdn.net/hphotos-prn1/421805_371480319539553_942198271_n.jpg

I really hope I'm wrong, but to my untrained eyes it looks like the FMS could be as unstable as it was last season. I have already watched multiple robots begin to spin wildly mid-match while watching the Florida and Texas streams. If you don't already know, Team 23's season was brought to an end by a round loss due to a possible FMS malfunction in the quarterfinals last season, and the team has been especially wary of a repeat ever since. Do you think the FMS will be an issue as it was for some teams last year, and if it is, what can FIRST do about it?
http://s1.postimage.org/y2y5izgy5/Not_0a1d39_92289.gif

joelg236
08-03-2013, 15:48
I wouldn't get too excited yet. I can't see the image you posted, but I've not seen anything game-changing because of the FMS. If problems do arise, I'm confident FIRST will take them as seriously as we do.

JohnFogarty
08-03-2013, 15:49
I'd like to see some evidence before I flat-out blame the field. If the field works properly for 3000 other teams and not yours, then chances are the field is not to blame.
There are plenty of other factors... like... static discharge... saturating your 7 Mbps connection... the cRIO rebooting... etc.

Andy A.
08-03-2013, 16:16
The field worked flawlessly at BAE, at least in every match I played. If there were any issues with the field they didn't seem to warrant an announcement.

Remember that individual robots having issues isn't really an indication of an FMS problem, and we've already completed a number of regionals without any major issues. I think there might be some confirmation bias going on here.

engunneer
08-03-2013, 16:16
Z,

You're just watching the wrong streams. Watch Toronto and Portland instead.

At BAE, Buzz did circles once in elims, but I think they had their driver station swapped with 61.

I'm also a bit more confident about 23's code this year.

Kims Robot
08-03-2013, 16:20
I have already watched multiple robots spin out mid-match while watching the Florida and Texas streams.
I highly doubt you can diagnose an FMS problem via a webcast. I know of only a handful of people who are even remotely qualified to do that - and they are called FTAs. I presume you are not one, thus I'm not sure how a robot dying on the field (I'm guessing that's what you mean by "spin out") at its very first event can be assumed to be an FMS fault. Many teams have not finalized or fixed their code, or don't even know yet whether it really works.

If you don't already know Team 23's season was brought to an end by an FMS malfunction in the quarter-finals last season and I have been horrified of a repeat of this ever since.
What evidence do you have of this? I hate to dredge it up, but the Einstein report showed that the majority of the issues were actually team faults. I highly suggest you go back and read it if you haven't already. It will give you some great insights into things your team should not do if you want a successful, fully running on-field season.

Should there have been better diagnostics? Sure. But it's not as simple as saying "the FMS ended our season." Maybe in your case it was one of the rare instances where that was true... but I don't see how we can or should panic over seeing a couple of dead robots on a webcast.

JohnSchneider
08-03-2013, 18:55
16 and ourselves had a buggy match at Lubbock... where we both sort of spun after losing connection. A few times Titanium was spinning in auto in elims as well...

From what I've seen this is sort of a popular failure... why, I wonder, do all the robots react like that?

Kusha
08-03-2013, 19:15
16 and ourselves had a buggy match at Lubbock... where we both sort of spun after losing connection. A few times Titanium was spinning in auto in elims as well...

From what I've seen this is sort of a popular failure... why, I wonder, do all the robots react like that?

What language did you guys program in? (Curious)

itsjustmrb
08-03-2013, 19:25
We had very few problems at Hub City until the championship finals. In the first match our cRIO decided to reboot in the middle of the match, and in the last match it rebooted as soon as autonomous started. Please don't start flaming me; I am not making excuses or blaming anyone or anything. I am just stating the facts. I know there were at least two others that had the same problems, and the suggested fixes were cleaning the cRIO and changing the black connector to the green one.

We use LabVIEW and did not make any code changes before the hiccups started.

Mr. B

DominickC
08-03-2013, 20:46
Being the programmer last year, and the one on disaster control at the time of the incident at the WPI Regional last year, I can speak in greater detail to the issues we experienced.

It was round 3 of the quarterfinals. Only mild issues with initial FMS connection were experienced at any point prior to the issue. Auton executed as expected. When transitioning from Auton Disabled to Teleop Enabled, the robot began spinning in a tight circle, and was not responding to any driver commands.

Later inspection from the FTA, CSA, as well as a National Instruments rep yielded no result. The code was ruled out as a non-issue.

EDIT - We later attempted to replicate the failure using FMS simulation software provided to us by the CSA. All attempts at replication failed.

Team23pitboss
08-03-2013, 21:16
I highly doubt you can diagnose an FMS problem via a webcast. I know of only a handful of people who are even remotely qualified to do that - and they are called FTAs. I presume you are not one, thus I'm not sure how a robot dying on the field (I'm guessing that's what you mean by "spin out") at its very first event can be assumed to be an FMS fault. Many teams have not finalized or fixed their code, or don't even know yet whether it really works.


What evidence do you have of this? I hate to dredge it up, but the Einstein report showed that the majority of the issues were actually team faults. I highly suggest you go back and read it if you haven't already. It will give you some great insights into things your team should not do if you want a successful, fully running on-field season.

Should there have been better diagnostics? Sure. But it's not as simple as saying "the FMS ended our season." Maybe in your case it was one of the rare instances where that was true... but I don't see how we can or should panic over seeing a couple of dead robots on a webcast.

To fill in the blanks about what exactly happened to us last season: it was the third elimination round of the quarterfinals. Autonomous had just ended when suddenly the robot began to spin wildly in circles, and it continued to do so until the end of the round, eliminating us from the competition. The team was incredibly distraught and was disappointed to have lost due to something so completely out of our control.

My :ahh: panicking :ahh: was meant to be taken with a grain of salt and was extremely sarcastic, hence the memes and Star Trek GIF. However, my questions about the integrity of the FMS last season (and possibly this season) are quite serious. While I make no claims to be able to diagnose an FMS failure over the web, I can say with a high degree of certainty that our failure at WPI last year was not based on a fault in our code. We functioned normally for the entirety of the competition and had not experienced any previous failures. The FTAs at the event looked over our watchdog logs and our robot code at length and were incredibly helpful in trying to deduce what had caused our problem. They agreed with us that it was not an error in our code and even tried to get us a rematch, to no avail. We have tried on multiple occasions to replicate our spinning on the field and have been unable to do so.

I trust enough in the folks behind FIRST to believe that the FMS will be a non-issue but our experiences last year will never be too far from my mind.

Kevin Sevcik
08-03-2013, 21:18
I'm told many problems at Lone Star were caused by multiple teams trying to run cameras at 640x480 at 30fps. Or in one case, TWO cameras at that rate. Teams have been instructed to throttle down to 320x240 @ 15fps and things have calmed down a good bit.
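For a rough sense of the numbers involved, here's a back-of-the-envelope MJPEG bandwidth estimate. The ~20:1 compression ratio is an assumption on my part; real JPEG ratios vary a lot with scene content:

```python
def mjpeg_mbps(width, height, fps, compression=20):
    """Rough megabits per second for an MJPEG stream of 24-bit frames."""
    bits_per_frame = width * height * 24 / compression
    return bits_per_frame * fps / 1_000_000

print(mjpeg_mbps(640, 480, 30))   # ~11 Mbps: well over a 7 Mbps allocation
print(mjpeg_mbps(320, 240, 15))   # ~1.4 Mbps: comfortably under it
```

By that rough math, a single 640x480 @ 30fps stream can blow past a 7 Mbps allocation all by itself.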

thinker&planner
08-03-2013, 21:57
May I say that we were one of those teams in Florida who was "spinning out", and that was not due to FMS. It was our vision code that kept spinning because it couldn't find a target (we are still having problems, but whatever). Unfortunately, this happened during autonomous and we had discs in the bot. The judges weren't very happy about our spinning, disc-shooting robot (but they were understanding later). We call this the "Death Spin".

On the other hand, we HAVE had some problems with communications. In our last two matches, we suddenly lost comm right after auto ended. We grabbed the controls, drove to the feeder, loaded discs, turned around, and died. The driver station said that we had comm and code, and yes, we do have the most recent version. In case you want more details: we program in Java and use the SmartDashboard. Our robot worked for the whole day until our last two matches, and we can't remember making any major code changes. Our robot works great while tethered with the Ethernet cord. We had to reduce the resolution of our camera because the control system was delaying so much that we accidentally fell over.

Other teams seem to be having this problem, and we have consulted with the control system experts, who told us this: Java has a 5-second delay when it resets the gyro (AndyMark), which our whole drive system is based on (orientation to the field). We moved the gyro zero function into the disabled code but haven't been able to test it yet. Our code might be crashing. We need to get to the arena early to test tomorrow.

If you have suggestions, that would be great, and I hope that we can resolve this issue soon (1st match tomorrow!).
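One guard we've been sketching against the Death Spin is a simple timeout on the aim loop, so the robot stops instead of spinning forever when it can't find a target. This is just an illustration with placeholder find_target/rotate callables, not our actual robot code:

```python
import time

def aim_at_target(find_target, rotate, timeout=2.0, deadband=1.0):
    """Rotate toward the target, but give up after `timeout` seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        offset = find_target()       # degrees off-center, or None if no target
        if offset is None:
            rotate(0.3)              # search slowly
        elif abs(offset) < deadband:
            rotate(0.0)
            return True              # lined up
        else:
            rotate(0.2 if offset > 0 else -0.2)
    rotate(0.0)                      # timed out: stop instead of spinning
    return False
```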

cmrnpizzo14
08-03-2013, 22:03
I understand your point, but I believe that you are overreacting just a little bit. In particular, "THE HORROR! THE HORROR!" might not be the most appropriate title on this thread.

As far as I know, there were no significant field issues at FLR this year. That is the only regional I know about for sure right now. I think that things will get fixed by FIRST as soon as they can, but for now there should probably be more work devoted to fixing the automatic scoring system. I think more teams are getting burned by not knowing what the score is at the end of the match as opposed to FMS issues.

We will see more as time continues but for now I think that FMS is doing a pretty good job overall.

pfreivald
08-03-2013, 22:16
The only robot at FLR that randomly spun in circles was 1551 in our first match, and that stopped when we disabled our gyro...

Kims Robot
08-03-2013, 22:30
We functioned normally for the entirety of the competition and had not experienced any previous failures.
This actually sounds EXACTLY like the situation that much of the Einstein report pointed out, as well as several other teams noted during the season. The most well known of these probably being 118. They functioned "completely normally" until the elims in CT. They also functioned "completely normally" until Einstein. Again, I reference the Einstein report (http://www3.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2012_Assets/Einstein%20Investigation%20Report.pdf). I highly suggest you go and read pages 12-13 [and really the rest of it, but specifically those pages if you don't want to learn from the whole thing].

And I appreciate that different people have different senses of humor, and that your graphics should have indicated your tone of sarcasm... but as the saying goes "TOO SOON...TOO SOON".

arun4444
08-03-2013, 22:42
We also had problems, with all the robots shutting down and a rematch having to be played.

Not sure why FRC still uses Wi-Fi; it should move to an independent RF channel.

Gregor
08-03-2013, 22:49
We had to reduce the resolution of our camera because the control system was delaying so much that we accidentally fell over.

What?!?!

Alpha Beta
08-03-2013, 23:01
A few times Titanium was spinning in auto in elims as well...

From what I've seen this is sort of a popular failure... why, I wonder, do all the robots react like that?

Our spinning was due to a failed battery. We had 2 separate instances in elims where batteries showed 13 volts on the charger and dropped to 6 volts within seconds during autonomous.

To the best of my knowledge we had no problems with the FMS in Hub City.

Jared Russell
08-03-2013, 23:06
I'm told many problems at Lone Star were caused by multiple teams trying to run cameras at 640x480 at 30fps. Or in one case, TWO cameras at that rate. Teams have been instructed to throttle down to 320x240 @ 15fps and things have calmed down a good bit.

I thought bandwidth caps were in place...?

Team23pitboss
08-03-2013, 23:07
batteries showed 13 volts on the charger and dropped to 6 volts within seconds during autonomous.

Out of curiosity did you ever figure out why the batteries behaved in such a strange way?

Kevin Sevcik
08-03-2013, 23:09
I thought bandwidth caps were in place...?

I can only tell you what the FTA mentioned to me in passing. I'll note that neither he nor I said that the cameras were causing issues for teams other than the ones running them.

Tom Line
08-03-2013, 23:10
Out of curiosity did you ever figure out why the batteries behaved in such a strange way?

This is almost always a failed cell in the battery - or many failed cells.

We use the West Mountain battery tester to test our batteries. We found several this year that had normal voltages initially and checked good on our Battery Beak, but fell off fairly sharply after 5-10 minutes of testing.

Kevin Sevcik
08-03-2013, 23:24
This is almost always a failed cell in the battery - or many failed cells.

We use the West Mountain battery tester to test our batteries. We found several this year that had normal voltages initially and checked good on our Battery Beak, but fell off fairly sharply after 5-10 minutes of testing.

On the one hand I want one of these. On the other hand I don't want to spend the money. Maybe I'll spend some time this offseason working out some software to use a Black Jaguar and some power resistors as a poor man's version of this.

SGK
08-03-2013, 23:40
Buzz's dance was not due to the FMS. It was due to a possible programming issue, plus the fact that we had to keep safety pins in to keep the feeder up and therefore within the starting envelope. We did not have enough pressure in the system after the last match, and the refs would not let us power up to recharge.

Ether
08-03-2013, 23:49
Maybe I'll spend some time this offseason working out some software to use a Black Jaguar and some power resistors as a poor man's version of this.

Hmm. You've got me curious. Can you give a general idea of what you have in mind? Is the purpose of the Jag to allow you to record data (via CAN) and shut off automatically so you don't have to babysit it?

stingray27
09-03-2013, 00:03
https://sphotos-a.xx.fbcdn.net/hphotos-prn1/421805_371480319539553_942198271_n.jpg

I really hope I'm wrong, but to my untrained eyes it looks like the FMS could be as unstable as it was last season. I have already watched multiple robots begin to spin wildly mid-match while watching the Florida and Texas streams. If you don't already know, Team 23's season was brought to an end by a round loss due to a possible FMS malfunction in the quarterfinals last season, and the team has been especially wary of a repeat ever since. Do you think the FMS will be an issue as it was for some teams last year, and if it is, what can FIRST do about it?
http://s1.postimage.org/y2y5izgy5/Not_0a1d39_92289.gif

Team RUSH 27 ran into SmartDashboard issues at Northern Lights that spammed our cRIO and drove our processor to over 100%. Blame was originally put on the 7 Mbps cap, but after further investigation, NI advisors suggested it may be an issue with the SmartDashboard itself. This caused major issues that hindered our robot for 3 matches. Has anyone seen similar issues?

jspatz1
09-03-2013, 00:13
Out of curiosity did you ever figure out why the batteries behaved in such a strange way?

They were simply failing batteries/cells that managed to emerge at the worst possible time. You can be sure we are taking measures to ensure that does not happen again.

Kevin Sevcik
09-03-2013, 00:29
Hmm. You've got me curious. Can you give a general idea of what you have in mind? Is the purpose of the Jag to allow you to record data (via CAN) and shut off automatically so you don't have to babysit it?

Black Jags are serial controllable and have current control modes plus built-in voltage and current feedback. So all you would need for a battery tester is a serial port (or USB-serial adapter), a serial-to-Jaguar adapter, a Black Jaguar (which most every FRC team has one of), and a power resistor you can get at Mouser or your local electronics surplus store. Battery on the input side, power resistor on the output side; tell the Jag to dump X amps into the resistor, then record and plot your feedback. The primary difficulty is making the program to control the Jaguar. Either I'd need to figure out how to generate the FRC heartbeat, or you'd have to flash the Jag with custom firmware.
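To sketch what the logging side might look like, here's the general shape, with the Jaguar's serial protocol hidden behind a hypothetical controller object - set_current() and read_voltage() are placeholders for whatever commands the real firmware actually accepts:

```python
import time

def discharge_test(controller, amps, duration_s, interval_s=1.0):
    """Hold a constant-current load and log (seconds, volts) samples."""
    samples = []
    controller.set_current(amps)     # hypothetical: however the Jag is commanded
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        samples.append((time.monotonic() - start, controller.read_voltage()))
        time.sleep(interval_s)
    controller.set_current(0.0)      # remove the load when done
    return samples
```

Plot the samples afterward and a failing cell should show up as the sharp voltage fall-off Tom described.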

Alan Anderson
09-03-2013, 00:48
I remember one work session where a robot decided to "spin out" as soon as it was enabled. It did not respond to any control inputs. The cause turned out to be a disconnected gamepad. Pressing F1 on the Driver Station brought things back to normal.

Ether
09-03-2013, 00:54
I'd need to figure out how to generate the FRC heartbeat.

Is that documented somewhere, or would you have to put a sniffer on the line to analyze the traffic?

Kevin Sevcik
09-03-2013, 01:02
Is that documented somewhere, or would you have to put a sniffer on the line to analyze the traffic?

I have no idea. I've put zero effort into the project at the moment. I've run serial sniffers before for various work projects, so I have a reasonable idea how to go about it. I'm hoping it's actually just documented somewhere, though.

Radical Pi
09-03-2013, 02:00
I have no idea. I've put zero effort into the project at the moment. I've run serial sniffers before for various work projects, so I have a reasonable idea how to go about it. I'm hoping it's actually just documented somewhere, though.

If you roll your Jaguar back to the factory firmware (available at the bottom of VEX's product page (http://www.vexrobotics.com/vexpro/motor-controllers/217-3367.html)), the "trusted mode" heartbeat isn't required. Last I heard, the trusted mode stuff isn't documented, to prevent people from replicating it (the code is in the closed-source NetworkCommunication library; security through obscurity, I guess). AFAIK, the factory firmware is functionally identical to the FRC one, minus the heartbeat.

You can also find the source code for a slightly older version of both the factory firmware and bdc-comm in TI's RDK-BDC24 (http://www.ti.com/tool/sw-rdk-bdc24) package. I have no idea whether VEX plans on making a similar release with the newest code.

Team23pitboss
09-03-2013, 11:21
Black Jags are serial controllable and have current control modes plus built-in voltage and current feedback. So all you would need for a battery tester is a serial port (or USB-serial adapter), a serial-to-Jaguar adapter, a Black Jaguar (which most every FRC team has one of), and a power resistor you can get at Mouser or your local electronics surplus store. Battery on the input side, power resistor on the output side; tell the Jag to dump X amps into the resistor, then record and plot your feedback. The primary difficulty is making the program to control the Jaguar. Either I'd need to figure out how to generate the FRC heartbeat, or you'd have to flash the Jag with custom firmware.

Wow, that is an absolutely brilliant idea. I think our electronics guys would probably be interested in trying something like this out at some point. It also gives me something to do with all the ancient computers with serial ports I have floating around my house.

Jefferson
09-03-2013, 12:01
What language did you guys program in? (Curious)

We use c++.

Ether
10-03-2013, 09:44
It also gives me something to do with all the ancient computers with serial ports I have floating around my house.

You can turn those ancient computers with serial ports into test equipment, like for example a poor-man's logic analyzer to inspect the timing of digital signals like encoder pulses. Or DIO set by tasks to inspect scheduling timing and jitter.
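The heart of a poor-man's analyzer like that is just a tight polling loop that timestamps level transitions. Here's a sketch with the actual pin read abstracted into a callable (on a real serial port it might be reading the CTS line); the polling rate is what limits how fast a signal you can capture:

```python
import time

def capture_edges(read_pin, duration_s):
    """Poll a digital input, returning (timestamp, new_level) per transition."""
    edges = []
    start = time.monotonic()
    last = read_pin()
    while time.monotonic() - start < duration_s:
        level = read_pin()
        if level != last:
            edges.append((time.monotonic() - start, level))
            last = level
    return edges
```

Differences between consecutive timestamps then give you pulse widths, period, and jitter.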

Team23pitboss
10-03-2013, 10:48
You can turn those ancient computers with serial ports into test equipment, like for example a poor-man's logic analyzer to inspect the timing of digital signals like encoder pulses. Or DIO set by tasks to inspect scheduling timing and jitter.

I recently turned an old PCI-X sound card I had in an old Gateway into a low-voltage oscilloscope. It happens to have a built-in mic input and radio antenna, so I can analyze radio and sound frequencies as well (if I ever have the time to really muck around with it). I have never actually used a serial port for anything so my lack of familiarity could be a bit of a hindrance as far as creating makeshift equipment is concerned, but maybe over the summer...

Chris is me
10-03-2013, 10:59
In the two regionals I've attended so far, I have only ever seen one robot lose comms. I didn't watch every match, but it's been one of the best years for this sort of thing since 2009 in my experience.

Kevin Sevcik
10-03-2013, 11:29
624 was having comms problems at Lone Star, but they definitively tied it to a radio reboot under heavy current draw. Unfortunately, they've swapped out everything, sometimes twice, in their attempts to fix it. I know they were fine in their last quals match, but I don't know if that stayed that way during the elims.

Ether
10-03-2013, 14:08
I have never actually used a serial port for anything so my lack of familiarity could be a bit of a hindrance as far as creating makeshift equipment is concerned, but maybe over the summer...

Borrow Jan Axelson's book Serial Port Complete at your local library. It will get you up and going in no time.

If your local library does not have it, use inter-library loan (http://en.wikipedia.org/wiki/Interlibrary_loan) to get it.

Or buy it (http://www.betterworldbooks.com/serial-port-complete-H0.aspx?SearchTerm=serial+port+complete).

The demo code and code examples downloadable here (http://www.chiefdelphi.com/media/papers/2702) show how to read and write some of the RS232 control pins, and how to read the Pentium's RDTSC 64-bit nanosecond timer.

Ether
10-03-2013, 14:16
I recently turned an old PCI-X sound card I had in an old Gateway into a low-voltage oscilloscope

I'm running Lucid Linux with Xoscope and Osqoop on a Gateway PA6. All you need is to download the Lucid Live CD ISO and burn it and boot from it, then get online and download the scopes.

The trickiest part (hardly difficult, though) is to make a small passive circuit with caps and resistors to divide down the signal voltage and block the mic DC power coming out of the laptop mic port.

It will work with just about any reasonable signal voltage, given the appropriate voltage divider. The limitation is the 96 kHz sampling frequency. The upside is that it will store hours and hours of data (limited only by available disk space).
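Picking the divider values is just the standard two-resistor formula (the series cap only blocks the mic port's DC bias, so it drops out of the math). The component values below are examples, not a tested design:

```python
def divider_out(v_in, r_top, r_bottom):
    """Output of a resistive voltage divider (DC-blocking cap omitted)."""
    return v_in * r_bottom / (r_top + r_bottom)

# e.g. knock a 12 V battery-scale signal down to mic level with 99k over 1k
print(divider_out(12.0, 99_000, 1_000))   # 0.12 V
```

Keep the total resistance high enough that the divider doesn't load whatever you're probing.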

epylko
11-03-2013, 20:06
Our spinning was due to a failed battery. We had 2 separate instances in elims where batteries showed 13 volts on the charger and dropped to 6 volts within seconds during autonomous.


Our team had a problem like that this year - except it wasn't a failed battery. The robot took a hit in an earlier match which made the DC connector on the AXIS camera touch the camera's metal back (doh!). The back of the camera was touching the frame, which caused some fun grounding issues. The driver station was showing the battery around 7 V.

Fortunately, Rob, the head robot inspector, had seen this once before with an AXIS camera and was able to point it out to us.

pfreivald
11-03-2013, 20:26
Fortunately, Rob, the head robot inspector, had seen this once before with an AXIS camera and was able to point it out to us.

Hah! 1551's pain was your gain! (Rob's first experience with the extreme wonkiness that can come from a frame-grounded Axis camera was at our expense.)

Nuttyman54
11-03-2013, 22:41
I thought bandwidth caps were in place...?

Based on reports from this thread and others, it seems that the bandwidth limits are either not in place or not working, at least not at all events. It also seems that the Quality of Service packet prioritization is not working properly either. Several people have reported needing to turn off or turn down camera resolutions to resolve lag and loss of comms issues not only for their robot but for others in the same match.

I was not at any of these events, nor do I know anyone personally who has reported these symptoms, so it's secondhand information at best. The fact remains that turning the camera feed to the driver station down or off seems to have resolved many of the issues at events. Hopefully FIRST will address this in an update or blog post soon.

coalhot
11-03-2013, 23:53
I really hope I'm wrong, but to my untrained eyes it looks like the FMS could be as unstable as it was last season. I have already watched multiple robots begin to spin wildly mid-match while watching the Florida and Texas streams. If you don't already know, Team 23's season was brought to an end by a round loss due to a possible FMS malfunction in the quarterfinals last season, and the team has been especially wary of a repeat ever since. Do you think the FMS will be an issue as it was for some teams last year, and if it is, what can FIRST do about it?


I'd like to ask that if you're going to blame the field for the issues, you post your code and pictures of your electronics/wiring so that we can look at them. You can't criticize the field without full disclosure of the code. I say this because I was talking to a few CSAs at a week-two event, and more than half of the reported FMS malfunctions were robot-based and not field-based. Many of us are eager to blame faults on the field equipment, but that's rarely the case. Usually it's something with the robot.

Also, what's supposed to happen when multiple robots trip the bandwidth limit?

Nuttyman54
12-03-2013, 00:38
I'd like to ask that if you're going to blame the field for the issues, you post your code and pictures of your electronics/wiring so that we can look at them. You can't criticize the field without full disclosure of the code. I say this because I was talking to a few CSAs at a week-two event, and more than half of the reported FMS malfunctions were robot-based and not field-based. Many of us are eager to blame faults on the field equipment, but that's rarely the case. Usually it's something with the robot.

"Something with the robot." I really, really hate that terminology. It is entirely too general, and it carries the connotation that the problem is in the team's code. Anything between the DS and the robot is not the FMS, true, but that does not imply that it is something the teams have control over. Case in point: the C++ issue in SmartDashboard that they released a bug fix for in Team Update 2013-03-05. The bugs affected teams at Week 1 events but were not part of the FMS. Yes, a lot of the time it is something in the team's code, but there can be "robot side" issues that are not the team's fault or responsibility to fix. It is very frustrating for teams to encounter these and be told "it's something with your robot," but have no ability to diagnose or fix the problem. The term is general, and can apply to anything that is not due to the FMS, regardless of who owns the buggy code.

Also, what's supposed to happen when multiple robots trip the bandwidth limit?

According to the FMS whitepaper (http://www.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2013/FMSWhitePaper_RevA.pdf), the FMS puts a priority on robot control and status packets, so any other packets are likely to be dropped. Trip times will also increase drastically above 6 Mbps, and the team exceeding their bandwidth cap may experience lag.

akoscielski3
12-03-2013, 00:53
I was behind the FTA table at GTR East this weekend learning how to be a scorekeeper. From what I could tell, any problems that happened were because of team error. There was one team in particular that kept dying in the middle of matches. The FTAs went to their pit and discovered the problem was in the program (I'm not a programmer, so sorry if I'm wrong): the loop time in the robot code was too slow. However, the programmer refused to change the code for some stupid reason and kept blaming it on the field.

At FLR we had a problem where we died in two of our matches. The first match it happened, the FTAs quickly hurried to our pits to try to figure out the problem with our robot, as they knew the problem was not in the FMS. I am very pleased with the FTAs this year, and I am sure they don't want a repeat of last year. We figured out that the problem was the cRIO wiring: the two wires powering the cRIO were so close together that when we got hit by another robot, we would die.

The FMS is working perfectly fine as far as I can tell. For some reason it seems like some people are having issues with their code, where they get stuck in a "loop" and then their robot is confused, dies, or keeps doing the same thing it was just doing. Again, that's the team's fault, not the FMS's.

coalhot
12-03-2013, 01:46
"Something with the robot." I really, really hate that terminology. It is entirely too general, and it carries the connotation that the problem is in the team's code. Anything between the DS and the robot is not the FMS, true, but that does not imply that it is something the teams have control over. Case in point: the C++ issue in SmartDashboard that they released a bug fix for in Team Update 2013-03-05. The bugs affected teams at Week 1 events but were not part of the FMS. Yes, a lot of the time it is something in the team's code, but there can be "robot side" issues that are not the team's fault or responsibility to fix. It is very frustrating for teams to encounter these and be told "it's something with your robot," but have no ability to diagnose or fix the problem. The term is general, and can apply to anything that is not due to the FMS, regardless of who owns the buggy code.

While you may hate that term, it's still a fact that many matches were replayed due to code (such as the camera settings being wrong). Streaming max resolution at 30 fps will tend to kill the connection to your robot, and, because of odd circumstances, can even affect the rest of the field. And yes, the C++ issue wasn't any team's fault, but FIRST didn't know about it and couldn't release a patch till week two.

My point still stands: the FMS isn't the cause of most of the issues this year. From what I've seen, it has almost always been something in the code or on the robot. Now, I'm not saying that it's always the fault of the robot. The field breaks sometimes. However, the majority of issues so far this year, from what I've seen, have been code related. This is why I asked the OP to post code and photos of the wiring - not only so we could say "that's what caused the issue" (if it happened to be the robot or code), but also so others could look at it and say "that's something I want to avoid doing with my code/wiring/robot, so I don't have the same issues."


According to the FMS whitepaper (http://www.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2013/FMSWhitePaper_RevA.pdf), the FMS prioritizes robot control and status packets, so any other packets are likely to be dropped. Trip times also increase drastically above 6 Mb/s, and a team exceeding its bandwidth cap may experience control lag.

If this is the case, why does a field with, say, three robots exceeding the cap end up dying entirely? It should only cause the offending robot(s) to cut in and out, right? Or am I missing something?

It's funny, though: the camera seems to be the bane of all field issues so far this year (spotlighted by the FTA at the NY regional having all the teams turn off their cameras for eliminations). What has changed between the past few years and 2013 that could cause issues like this, other than the bandwidth cap?

Nuttyman54
12-03-2013, 02:05
If this is the case, why does a field with, say, three robots exceeding the cap end up dying entirely? It should only cause a single robot to cut in and out, right? Or am I missing something?

It's funny, though: the camera seems to be the bane of all field issues so far this year (spotlighted by the FTA at the NY regional having all the teams turn off their cameras for eliminations). What has changed between the past few years and 2013 that could cause issues like this, other than the bandwidth cap?

You are quite correct that the vast majority of team issues are in fact problematic code on their robot. My point wasn't directed at you, but more at the general way that everyone seems to say "well, it's not the FMS, so it must be your robot". The system is far more complex than that, and it's time we as a community started recognizing that an issue being not-FMS-related does not absolve FIRST of responsibility for it. Likewise, however, the issue is very rarely the FMS, and when it is, the FTA can usually identify that.

I don't know why teams exceeding the cap are affecting the rest of the field. It's not my area of expertise, and the FMS whitepaper doesn't give many details on how the bandwidth cap is implemented. It does not seem to be working correctly, or at least not how any reasonable person would expect. The logical assumption is that each team is allocated 7 Mb/s, and any team exceeding that will be throttled so as not to affect the bandwidth of the other teams on the field. Any usage over 6 Mb/s sees a sharp increase in trip time, and will result in possible control lag (presumably only for the team nearing its limit).
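That "logical assumption" can be sketched as a simple per-team throttle. To be clear, this is only a model of how one would *expect* the cap to behave, not how the FMS actually implements it:

```python
# Toy per-team bandwidth throttle -- a sketch of how one would EXPECT
# the 7 Mb/s cap to isolate an over-cap robot from the rest of the
# field. (How the real FMS implements its cap is not documented in
# detail; this is just the "reasonable person" model.)

CAP_MBPS = 7.0

class TeamLink:
    def __init__(self, name):
        self.name = name
        self.delivered = 0.0  # Mb/s actually passed onto the field network

    def send(self, offered_mbps):
        # Throttle: anything over the per-team cap is dropped, so an
        # offender's excess never reaches the shared field network.
        self.delivered = min(offered_mbps, CAP_MBPS)
        return self.delivered

teams = {name: TeamLink(name) for name in ["23", "61", "190"]}
offered = {"23": 12.0, "61": 3.0, "190": 5.0}  # Mb/s each tries to send

total = sum(teams[n].send(mbps) for n, mbps in offered.items())
print(f"Total on the wire: {total} Mb/s")   # 7 + 3 + 5 = 15
print(f"Team 23 delivered: {teams['23'].delivered} Mb/s (throttled)")
```

Under this model, only the hypothetical over-cap team ("23" here) gets throttled and the other robots never see its excess traffic. The fact that whole fields seem to degrade suggests the real system doesn't behave like this.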

I would say that the camera is an issue primarily because of its increased usefulness and the ability of teams to stream live feedback from their robots. In 2009, 2010 and 2011, this ability was not particularly useful and was not widely used. Last year, it was extremely useful for vision tracking or just for lining up shots. This year, I suspect even more teams have started streaming robot-eye feedback, which seems to cause problems when several of these robots are on the field at once. The bandwidth cap may also be a factor, since it limits each team to ensure that all teams get an equal share, rather than dynamically reallocating bandwidth up to the maximum the system can handle to accommodate a few robots using more than their even share.

DominickC
12-03-2013, 06:11
I'd like to ask that if you're going to blame the field for the issues, you post the code and pictures of your electronics/wiring so that we can look at it. You can't criticize the field without full disclosure of the code. I say this because I was talking to a few CSAs at a week two event, and more than half of the reported FMS malfunctions were robot-based, not field-based. Many of us are eager to blame faults on the field equipment, but that's rarely the case. Usually it's something with the robot.

Unfortunately we will not be posting code or pictures of our wiring.

Later inspection from the FTA, CSA, as well as a National Instruments rep yielded no result. The code was ruled out as a non-issue.

EDIT - We later attempted to replicate the failure using FMS simulation software provided to us by the CSA. All attempts at replication failed.

Gary Dillard
15-03-2013, 16:23
Our robot just did the circle dance for no reason. We tested the code before the match with no problems. It started when autonomous began, then continued in teleop, so we killed it. After a long discussion with the FTA about this thread, we powered down and went to the pits. When we started up in the pits it did the same thing. We checked grounds and vacuumed out the robot; no change. We redeployed the same code and then it worked fine. We had Rick Foley check the code and it all looks fine. Hmmmmmm....

Brandon Holley
15-03-2013, 16:49
The "system" (FMS, robots, programming languages, vision processing, CAN, etc.) is very complicated, and needs to be respected, as Evan stated. We have a natural tendency to draw conclusions by process of elimination, i.e., if it's not this thing it MUST be this other thing. With complex systems, this technique tends not to work as desired, and in our case it leaves people feeling helpless. I totally agree with Evan that just telling a team "it's your robot" is an absolute disservice. It very well might be their robot; in fact, 95% of the time it probably is their robot.

However, we as a community and FIRST as an organization have a duty to help get teams working correctly. We (FIRST and its community) have a duty to educate people wherever we can and to bring people up to speed with this system and how to troubleshoot it. Do you think every team knows that their 30 fps camera is bogging down the entire field? In my experience the answer is an overwhelming NO. This is the system FIRST has chosen for all teams, so it's on their shoulders to make this education possible (and they have done a decent job with it as time has progressed).


All that being said, cries of "FMS is messing up our robot!!" may be shortsighted, but they are not always unfounded. This all stems from the fallout of Einstein and the investigation that followed. We must stress patience on BOTH sides (volunteers and teams) and understand that we all have one common goal: to get every robot running through the FMS on the field flawlessly. How the robot performs is obviously up to the individual team, but we all need to work to get teams operational at events.

-Brando

Radical Pi
15-03-2013, 17:58
In my experience, when people blame the FMS for their problems, they should often be looking at the overall setup of the robot on the field instead. By simply running it in the pits and saying "it works fine, must be a problem with the field", a whole host of diagnosable issues gets ignored. For instance, by default any robot using RobotBuilder has its main loop tied directly to the packets coming from the Driver Station. When the robot is operating over wireless, there are many more dropped packets, and each dropped packet is one loop of the robot code that doesn't run. An overly strict watchdog could easily cause problems there.

Case in point: last year at champs we had an issue where auton would work fine in the pits, but on the field it would just sit there. It had worked at our previous regional, and under FMS Lite we couldn't replicate the problem. Nobody could figure out what was up, and as much as I would have loved to blame the Newton field, the issue turned out to be a seemingly unrelated code change that canceled the autonomous command if the Driver Station wasn't connected when the cRIO booted. Since the DS was always tethered in the pits, but on the field the robot had to wait for the radio to finish connecting, the problem would only show up in the pits if we knew what to look for.
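To illustrate the dropped-packet interaction: here's a toy simulation (not WPILib code; the watchdog timeout and drop rates are made up) of a loop that only runs when a DS packet arrives, guarded by a watchdog that's tuned too tightly:

```python
# Tiny simulation of a robot loop driven by Driver Station packets,
# with a watchdog that kills the motors if it isn't fed in time.
# Purely illustrative -- NOT WPILib code -- showing how dropped
# packets plus an overly strict watchdog can disable a robot that
# "works fine in the pits" (where nothing is dropped).

import random

PACKET_PERIOD_MS = 20  # DS sends a control packet every 20 ms
WATCHDOG_MS = 30       # overly strict: only 1.5 packet periods of slack

def run(drop_rate, seed=42):
    """Return True if the watchdog ever trips during 500 packet periods."""
    rng = random.Random(seed)
    since_fed = 0
    for _ in range(500):
        since_fed += PACKET_PERIOD_MS
        if rng.random() >= drop_rate:   # packet arrived -> loop iteration runs
            since_fed = 0               # the loop feeds the watchdog
        if since_fed > WATCHDOG_MS:
            return True                 # watchdog tripped, motors disabled
    return False

print("tethered in pits (0% drops): tripped =", run(drop_rate=0.0))
print("on field (30% drops):        tripped =", run(drop_rate=0.3))
```

With zero packet loss the watchdog never trips; with field-like loss, a couple of back-to-back drops is all it takes. The fix isn't blaming the field, it's giving the watchdog realistic slack and not tying every piece of logic to packet arrival.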

Greg McKaskle
16-03-2013, 12:35
Our robot just did the circle dance ...

I assisted a team in Lubbock after a similar issue. It happened at the end of Thursday and we didn't find it. It happened again on Friday, and we saw that most of the motor controllers weren't receiving a signal when the code was enabled. We discovered that the ribbon cable was no longer zip-tied down. It was lifted on one side, so some connections were present and some weren't.

I've also seen this when a limit jumper "fell off" of a Jaguar. Without the jumper, the motor will not run in one direction.

If you haven't done it already, check the basics: verify the code, the electrical, and the mechanical. Could a bad sensor value cause this? Gyros are finicky if not calibrated while the robot is stationary.
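As a concrete illustration of the stationary-calibration point (all numbers here are invented for the example):

```python
# Illustration of why a gyro must be calibrated while the robot is
# stationary. The bias (zero-rate offset) is estimated by averaging
# samples at startup; if the robot is moving or vibrating during that
# window, the bad bias gets subtracted from every later reading and
# the integrated heading drifts steadily. Numbers are made up.

TRUE_BIAS = 0.5   # deg/s zero-rate offset of this hypothetical gyro
DT = 0.02         # 50 Hz sample rate

def calibrate(samples):
    """Bias estimate = average rate seen during the calibration window."""
    return sum(samples) / len(samples)

def integrate_heading(rates, bias, dt=DT):
    """Integrate bias-corrected rates into a heading (degrees)."""
    return sum((r - bias) * dt for r in rates)

# Calibration window: 100 samples (2 s).
stationary = [TRUE_BIAS] * 100        # robot sitting perfectly still
nudged     = [TRUE_BIAS + 3.0] * 100  # robot bumped/rotating at 3 deg/s

good_bias = calibrate(stationary)     # recovers the true 0.5 deg/s
bad_bias  = calibrate(nudged)         # 3.5 deg/s -- poisoned estimate

# One minute of the robot sitting perfectly still afterwards:
still_minute = [TRUE_BIAS] * 3000
print("heading after 60 s, good cal:", integrate_heading(still_minute, good_bias))
print("heading after 60 s, bad cal: ", integrate_heading(still_minute, bad_bias))
```

A bias estimate poisoned by motion during calibration makes a motionless robot appear to spin continuously, which is exactly the kind of input that can send an auto-correcting drive routine into a "circle dance".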

I'll be happy to help interpret the log file if you want to PM me.

Greg McKaskle

mtaman02
16-03-2013, 14:04
I'm told many problems at Lone Star were caused by multiple teams trying to run cameras at 640x480 at 30 fps, or in one case TWO cameras at that rate. Teams have been instructed to throttle down to 320x240 @ 15 fps, and things have calmed down a good bit.

^^^
We had the same issues with the cameras at the NYC Regional. Once we tamed the settings, the field ran the way it had for the previous 2.5 days. We also had to look for a couple of other things that tied up the comm bands.


I don't believe there will EVER be a comm-problem-free season; that's the nature of technology. The field and FMS always get heavily looked at, both locally and through a remote connection, whenever a field allegedly has issues. Nothing is foolproof; all that can happen is that it improves a little with each passing day.