THE HORROR! THE HORROR!
I really hope I'm wrong, but to my untrained eyes it looks like the FMS could be as unstable as it was last season. I have already watched multiple robots begin to spin wildly mid-match on the Florida and Texas streams. If you don't already know, Team 23's season was ended last year by a round loss in the quarterfinals due to a possible FMS malfunction, and the team has been especially wary of a repeat ever since. Do you think the FMS will be an issue the way it was for some teams last year, and if it is, what can FIRST do about it? http://s1.postimage.org/y2y5izgy5/Not_0a1d39_92289.gif
Re: THE HORROR! THE HORROR!
I wouldn't get too excited yet. I can't see the image you posted, but I've not seen anything game-changing because of the FMS. If problems do arise, I'm confident FIRST will take them as seriously as we do.
Re: THE HORROR! THE HORROR!
I'd like to see some evidence before I flat-out blame the field. If the field works properly for 3000 other teams and not yours, then chances are the field is not to blame.
There are plenty of other factors: static discharge, saturating your 7 Mb/s connection, the cRIO rebooting, etc.
Re: THE HORROR! THE HORROR!
The field worked flawlessly at BAE, at least in every match I played. If there were any issues with the field they didn't seem to warrant an announcement.
Remember that individual robots having issues isn't really an indication of an FMS problem, and we've already completed a number of regionals without any major issues. I think there might be some confirmation bias going on here.
Re: THE HORROR! THE HORROR!
Z,
You're just watching the wrong streams. Watch Toronto and Portland instead. At BAE, Buzz did circles once in elims, but I think they had their driver station swapped with 61's. I'm also a bit more confident about 23's code this year.
Re: THE HORROR! THE HORROR!
Quote:
Quote:
Should there have been better diagnostics? Sure. But it's not as simple as saying "the FMS ended our season." Maybe in your case it was one of the rare instances where that was true, but I don't see how we can or should panic over seeing a couple of dead robots on a webcast.
Re: THE HORROR! THE HORROR!
16 and ourselves had a buggy match at Lubbock, where we both sort of spun after losing connection. Titanium was spinning in auto a few times in elims as well.
From what I've seen this is a fairly common failure mode. Why, I wonder, do all the robots react like that?
Re: THE HORROR! THE HORROR!
We had very few problems at Hub City until the championship finals. In the first match our cRIO decided to reboot in the middle of the match, and in the last match it rebooted as soon as autonomous started. Please don't start flaming me; I am not making excuses or blaming anyone or anything, just stating the facts. I know there were at least two other teams that had the same problems, and the suggested fixes were cleaning the cRIO and changing the black power connector to the green one.
We use LabVIEW and did not make any code changes before the hiccups started. Mr. B
Re: THE HORROR! THE HORROR!
Being the programmer last year, and the one on disaster control at the time of the incident at the WPI Regional, I can speak in greater detail about the issues we experienced.
It was round 3 of the quarterfinals. Only mild issues with the initial FMS connection were experienced at any point before the failure. Auton executed as expected. When transitioning from Auton Disabled to Teleop Enabled, the robot began spinning in a tight circle and did not respond to any driver commands. Later inspection by the FTA, the CSA, and a National Instruments rep yielded no result. The code was ruled out as a non-issue. EDIT: We later attempted to replicate the failure using FMS simulation software provided by the CSA. All attempts at replication failed.
Re: THE HORROR! THE HORROR!
Quote:
My :ahh: panicking :ahh: was meant to be taken with a grain of salt and was extremely sarcastic, hence the memes and Star Trek GIF. However, my questions about the integrity of the FMS last season (and possibly this season) are quite serious. While I make no claims to be able to diagnose an FMS failure over the web, I can say with a high degree of certainty that our failure at WPI last year was not caused by a fault in our code. We functioned normally for the entirety of the competition and had not experienced any previous failures. The FTAs at the event looked over our watchdog logs and our robot code at length and were incredibly helpful in trying to deduce what had caused our problem. They agreed with us that it was not an error in our code and even tried to get us a rematch, to no avail. We have tried on multiple occasions to replicate our spinning on the field and have been unable to do so. I trust the folks behind FIRST enough to believe that the FMS will be a non-issue, but our experience last year will never be too far from my mind.
Re: THE HORROR! THE HORROR!
I'm told many of the problems at Lone Star were caused by multiple teams trying to run cameras at 640x480 at 30 fps, or in one case, TWO cameras at that rate. Teams have been instructed to throttle down to 320x240 @ 15 fps, and things have calmed down a good bit.
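A quick back-of-the-envelope number shows why that throttling helps so much. This sketch treats frames as uncompressed 24-bit RGB, a deliberate worst-case bound (real MJPEG streams from the camera compress far below this, but the ratio between the two settings is the same):

```python
# Rough worst-case bandwidth estimate for a camera stream.
# Assumes 24-bit RGB frames with no compression -- an upper bound;
# actual MJPEG traffic is much smaller, but scales the same way.

def raw_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bandwidth in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

high = raw_mbps(640, 480, 30)   # the problematic setting
low = raw_mbps(320, 240, 15)    # the recommended setting

# Dropping to 320x240 @ 15 fps cuts the pixel rate by a factor of 8,
# since both dimensions halve (4x) and the frame rate halves (2x).
assert high / low == 8.0
```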
Re: THE HORROR! THE HORROR!
May I say that we were one of those teams in Florida that was "spinning out", and it was not due to the FMS. It was our vision code, which kept spinning the robot because it couldn't find a target (we are still having problems, but whatever). Unfortunately, this happened during autonomous while we had discs in the bot. The judges weren't very happy about our spinning, disc-shooting robot (but they were understanding later). We call this the "Death Spin".
On the other hand, we HAVE had some problems with communications. In our last two matches, we suddenly lost comms right after auto ended. We grabbed the controls, drove to the feeder, loaded discs, turned around, and died. The driver station said that we had comms and code, and yes, we do have the most recent version. In case you want more details: we program in Java and use the SmartDashboard. Our robot worked for the whole day until our last two matches, and we can't remember making any major code changes. It works great while tethered with the Ethernet cord. We had to reduce the resolution of our camera because the control system was lagging so much that we accidentally fell over. Other teams seem to be having this problem, and the control system experts we consulted told us this: Java has a roughly 5-second delay when it resets the gyro (AndyMark), which our whole drive system is based on (orientation to the field). We moved the gyro zeroing into the disabled code but haven't been able to test it yet. Our code might also be crashing. We need to get to the arena early to test tomorrow. If you have suggestions, that would be great, and I hope we can resolve this issue soon (first match tomorrow!).
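For what it's worth, the idea behind zeroing the gyro while disabled can be sketched like this. This is illustrative Python, not the actual WPILib Java API; the class and method names are hypothetical stand-ins for the concept of estimating sensor bias while the robot is guaranteed stationary:

```python
# Illustrative sketch, NOT WPILib code. The point: estimate the gyro's
# bias while the robot is known to be stationary (i.e. disabled before
# the match), then subtract it during the match, so that no lengthy
# calibration ever eats into autonomous time.

class GyroZeroer:
    def __init__(self):
        self.bias = 0.0
        self.samples = []

    def sample_while_disabled(self, raw_rate):
        """Collect raw readings while the robot cannot move."""
        self.samples.append(raw_rate)
        self.bias = sum(self.samples) / len(self.samples)

    def corrected_rate(self, raw_rate):
        """Bias-corrected rate to use during auton/teleop."""
        return raw_rate - self.bias

g = GyroZeroer()
for _ in range(100):
    g.sample_while_disabled(0.3)   # constant drift while stationary

assert abs(g.corrected_rate(0.3)) < 1e-9           # stationary reads ~0
assert abs(g.corrected_rate(10.3) - 10.0) < 1e-9   # real motion preserved
```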
Re: THE HORROR! THE HORROR!
I understand your point, but I believe you are overreacting just a little bit. In particular, "THE HORROR! THE HORROR!" might not be the most appropriate title for this thread.
As far as I know, there were no significant field issues at FLR this year; that is the only regional I know about for sure right now. I think things will get fixed by FIRST as soon as they can, but for now more work should probably be devoted to fixing the automatic scoring system. I think more teams are getting burned by not knowing the score at the end of a match than by FMS issues. We will see more as time goes on, but for now I think the FMS is doing a pretty good job overall.
Re: THE HORROR! THE HORROR!
The only robot at FLR that randomly spun in circles was 1551 in our first match, and that stopped when we disabled our gyro...
Re: THE HORROR! THE HORROR!
Quote:
And I appreciate that different people have different senses of humor, and that your graphics should have signaled your sarcastic tone... but as the saying goes, "TOO SOON... TOO SOON".
Re: THE HORROR! THE HORROR!
We also had problems, with all robots shutting down and having to play a rematch.
Not sure why FRC still uses Wi-Fi; it should move to an independent RF channel.
Re: THE HORROR! THE HORROR!
Quote:
To the best of my knowledge we had no problems with the FMS at Hub City.
Re: THE HORROR! THE HORROR!
Quote:
We use the West Mountain battery tester to test our batteries. We found several this year that had normal voltage initially and checked good on our Battery Beak, but fell off fairly sharply after 5-10 minutes of testing.
Re: THE HORROR! THE HORROR!
Buzz's dance was not due to the FMS. It was due to a possible programming issue, plus the fact that we had to keep safety pins in to keep the feeder up and therefore within the starting envelope. We did not have enough pressure in the system after the previous match, and the refs would not let us power up to recharge.
Re: THE HORROR! THE HORROR!
I remember one work session where a robot decided to "spin out" as soon as it was enabled. It did not respond to any control inputs. The cause turned out to be a disconnected gamepad. Pressing F1 on the Driver Station brought things back to normal.
Re: THE HORROR! THE HORROR!
Quote:
You can also find the source code for a slightly older version of both the factory firmware and bdc-comm in TI's RDK-BDC24 package. I have no idea whether VEX plans to make a similar release with the newest code.
Re: THE HORROR! THE HORROR!
In the two regionals I've attended so far, I have only ever seen one robot lose comms. I didn't watch every match, but it's been one of the best years for this sort of thing since 2009 in my experience.
Re: THE HORROR! THE HORROR!
624 was having comms problems at Lone Star, but they definitively tied it to the radio rebooting under heavy current draw. Unfortunately, they'd swapped out everything, sometimes twice, in their attempts to fix it. I know they were fine in their last quals match, but I don't know whether that held through elims.
Re: THE HORROR! THE HORROR!
Quote:
If your local library does not have it, use inter-library loan to get it, or buy it. The demo code and code examples downloadable here show how to read and write some of the RS-232 control pins, and how to read the Pentium's 64-bit RDTSC timestamp counter.
Re: THE HORROR! THE HORROR!
Quote:
The trickiest part (though hardly difficult) is to make a small passive circuit with caps and resistors to divide down the signal voltage and block the DC mic-bias power coming out of the laptop mic port. It will work with just about any reasonable signal voltage, given the appropriate voltage divider. The limitation is the 96 kHz sampling frequency. The upside is that it will store hours and hours of data (limited only by available disk space).
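As a rough illustration, the divider ratio and the high-pass corner formed by the DC-blocking cap can be estimated like this. All component values here are hypothetical examples, not a recommendation for any particular laptop:

```python
import math

# Hedged sketch: sizing a passive attenuator plus DC-blocking cap for
# a larger signal going into a mic input. Example values only.

def divider_ratio(r_top, r_bottom):
    """Fraction of the input voltage that reaches the mic port."""
    return r_bottom / (r_top + r_bottom)

def highpass_cutoff_hz(c_farads, r_ohms):
    """-3 dB corner of the series blocking cap into resistance r_ohms."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Example: knock a ~1 V signal down to ~10 mV with 100k over 1k.
ratio = divider_ratio(100e3, 1e3)
assert abs(ratio - 1 / 101) < 1e-12

# A 1 uF blocking cap into ~101k of divider resistance puts the
# corner at a couple of Hz, well below the band being measured.
fc = highpass_cutoff_hz(1e-6, 101e3)
assert fc < 2.0
```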
Re: THE HORROR! THE HORROR!
Quote:
Fortunately, Rob, the head robot inspector, had seen this once before with an Axis camera and was able to point it out to us.
Re: THE HORROR! THE HORROR!
Quote:
I was not at any of these events, nor do I know anyone personally who has reported these symptoms, so it's second-hand information at best. The fact remains that turning camera feedback to the driver station down or off seems to have resolved many of the issues at events. Hopefully FIRST will address this in an update or blog post soon.
Re: THE HORROR! THE HORROR!
Quote:
Also, what's supposed to happen when multiple robots trip the bandwidth limit?
Re: THE HORROR! THE HORROR!
I was behind the FTA table at GTR East this weekend learning how to be a scorekeeper. From what I could tell, any problems that happened were because of team error. There was one team in particular who kept dying in the middle of matches. The FTAs went to their pit and discovered the problem was in the program (I'm not a programmer, so sorry if I get this wrong): the loop time in their code was too slow. However, the programmer refused to change the code for some reason and kept blaming it on the field.
At FLR we had a problem where we died in two of our matches. The first time it happened, the FTAs quickly hurried to our pit to try to figure out the problem with our robot, as they knew the problem was not in the FMS. I am very pleased with the FTAs this year, and I am sure they don't want a repeat of last year. We figured out that the problem was the cRIO wiring: the two wires powering the cRIO were so close together that when we got hit by another robot we would lose power. The FMS is working perfectly fine as far as I can tell. For some reason it seems like some people are having issues with their code, where they get stuck in a loop and then their robot is confused, dies, or keeps doing the same thing it was just doing. Again, that's the team's fault, not the FMS.
Re: THE HORROR! THE HORROR!
Quote:
My point still stands: the FMS isn't the cause of most of the issues this year. From what I've seen, it has almost always been something in the code or on the robot. Now, I'm not saying it's always the fault of the robot; the field breaks sometimes. However, the majority of issues I've seen this year have been code-related. This is why I asked the OP to post code and photos of the wiring: not only so we can say "that's what caused the issue" (if it happened to be the robot or code), but also so others can look at it and say "that's something I want to avoid doing with my code/wiring/robot, so I don't have the same issues". Quote:
It's funny, though: the camera seems to be the bane of all field issues so far this year (spotlighted by the FTA at the NY regional having all the teams turn off their cameras for eliminations). What has changed between the past few years and 2013 that could cause issues like this, other than the bandwidth cap?
Re: THE HORROR! THE HORROR!
Quote:
I don't know why teams exceeding the cap are affecting the rest of the field. It's not my area of expertise, and the FMS whitepaper doesn't give many details on how the bandwidth cap is implemented. It does not seem to be working correctly, or at least not how any reasonable person would expect. The logical assumption is that all teams are allocated 7 Mb/s, and any team exceeding that will be throttled so as not to affect the bandwidth of the other teams on the field. Any usage over 6 Mb/s sees a sharp increase in trip time and can result in control lag (presumably only for the team nearing its limit).
I would say that the camera is an issue primarily because of its increased usefulness and the ability of teams to stream live feedback from their robots. In 2009, 2010, and 2011 this ability was not particularly useful and was not widely used. Last year it was extremely useful for vision tracking or just for lining up shots. This year I suspect even more teams have started streaming robot-eye feedback, which seems to cause problems when several of these robots are on the field at once. The bandwidth cap may also be a factor, since it limits each team to a fixed share rather than dynamically reallocating bandwidth, up to the maximum the system can handle, to accommodate a few robots using more than their even share.
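The per-team throttling being assumed above can be sketched with a simple token-bucket model. The 7 Mb/s figure comes from this thread; everything else here is an illustrative assumption, not how the actual FMS is implemented:

```python
# Illustrative token-bucket model of a per-team bandwidth cap.
# NOT the actual FMS implementation: just a sketch of the behavior
# people expect, where one team over its cap only hurts itself.

class TokenBucket:
    def __init__(self, rate_mbps):
        self.rate = rate_mbps        # refill rate, megabits per second
        self.capacity = rate_mbps    # allow up to one second of burst
        self.tokens = rate_mbps

    def tick(self, dt_s):
        """Refill tokens for dt_s seconds of elapsed time."""
        self.tokens = min(self.capacity, self.tokens + self.rate * dt_s)

    def send(self, megabits):
        """True if the traffic fits under this team's cap."""
        if megabits <= self.tokens:
            self.tokens -= megabits
            return True
        return False                 # throttled: lag for this team only

# Six teams at 7 Mb/s each; one tries to push 10 Mb/s of video.
teams = [TokenBucket(7.0) for _ in range(6)]
greedy, normal = teams[0], teams[1]
greedy_throttled = 0
for _ in range(10):                  # ten one-second ticks
    for t in teams:
        t.tick(1.0)
    if not greedy.send(10.0):        # over the cap every second
        greedy_throttled += 1
    assert normal.send(2.0)          # under the cap: never affected

assert greedy_throttled == 10
```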
Re: THE HORROR! THE HORROR!
Quote:
Quote:
|
Re: THE HORROR! THE HORROR!
Our robot just did the circle dance for no apparent reason. We tested the code before the match with no problems. It started when autonomous began, then continued into teleop, so we killed it. After a long discussion with the FTA about this thread, we powered down and went to the pits. When we started up in the pits it did the same thing. We checked grounds and vacuumed out the robot; no change. We redeployed the same code and then it worked fine. We had Rick Foley check the code and it all looks fine. Hmmmmmm....
Re: THE HORROR! THE HORROR!
The "system" (FMS, robots, programming languages, vision processing, CAN, etc.) is very complicated and needs to be respected, as Evan stated. We have a natural tendency to draw conclusions by process of elimination, i.e., if it's not this thing it MUST be that other thing. With complex systems this technique tends not to work as desired, and in our case it leaves people feeling helpless. I totally agree with Evan that just telling a team "it's your robot" is an absolute disservice. It very well might be their robot; in fact, 95% of the time it probably is their robot.
However, we as a community and FIRST as an organization have a duty to help get teams working correctly. We (FIRST and its community) have a duty to educate people wherever we can and to help bring them up to speed with this system and how to troubleshoot it. Do you think every team knows that their 30 fps camera is bogging down the entire field? In my experience the answer is an overwhelming NO. This is the system FIRST has chosen for all teams, so it's on their shoulders to help make this education possible (and they have done a decent job with it as time has progressed).
All that being said, cries of "FMS is messing up our robot!!" may be shortsighted, but they are not always unfounded. This all stems from the fallout of Einstein and the investigation that followed. We must stress patience on BOTH sides (volunteers and teams) and understand that we all have one common goal: to get every robot running through FMS on the field flawlessly. How each robot performs is obviously up to the individual team, but we all need to work to get teams operational at events.
-Brando
Re: THE HORROR! THE HORROR!
In my experience, when people blame the FMS for their problems, they often should be looking at the overall setup of the robot on the field. By simply running it in the pits and saying "it works fine, must be a problem with the field", a whole host of diagnosable issues gets ignored. For instance, by default any robot using RobotBuilder has its main loop directly tied to the packets coming from the Driver Station. When the robot is operating over wireless there are many more dropped packets, and each dropped packet is one loop of the robot code that doesn't run. An overly strict watchdog could easily cause problems there.
Case in point: last year at champs we had an issue where auton would work fine in the pits, but on the field the robot would just sit there. It had worked at our previous regional, and under FMS Lite we couldn't replicate the problem. Nobody could figure out what was up, and as much as I would have loved to blame the Newton field, the issue turned out to be a seemingly unrelated change to the code that canceled the autonomous command if the Driver Station wasn't connected when the cRIO booted. Since the DS was always tethered in the pits, but on the field the cRIO had to wait for the radio to finish connecting, the problem would only have shown up in the pits if we had known what to look for.
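The coupling described above, a robot loop driven by DS packets plus a strict watchdog, can be illustrated with a toy simulation. The names and the three-packet limit are hypothetical; this is not WPILib's actual watchdog behavior:

```python
# Toy model: the main loop runs once per received Driver Station
# packet, and a watchdog disables outputs once `limit` consecutive
# packets have been dropped. Hypothetical parameters throughout.

def watchdog_trips(packet_arrived, limit=3):
    """Count cycles where the watchdog disables outputs because
    `limit` or more consecutive DS packets were dropped."""
    missed = 0
    trips = 0
    for arrived in packet_arrived:
        if arrived:
            missed = 0          # packet received: loop iteration runs
        else:
            missed += 1         # dropped packet: loop never runs
            if missed >= limit:
                trips += 1      # data too stale: outputs disabled

    return trips

tethered = [True] * 100                       # pits: essentially no loss
wireless = [i % 7 > 2 for i in range(100)]    # field: bursty loss

# Tethered in the pits everything looks fine; the same code over a
# lossy wireless link trips the watchdog repeatedly.
assert watchdog_trips(tethered) == 0
assert watchdog_trips(wireless) > 0
```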
Re: THE HORROR! THE HORROR!
Quote:
I've also seen this when a limit jumper "fell off" a Jaguar. Without the jumper, the motor will not run in one direction. If you haven't done so already, check the basics: verify the code, the electrical, and the mechanical. Could a bad sensor value cause this? Gyros are finicky if not calibrated while the robot is stationary. I'll be happy to help interpret the log file if you want to PM me.
Greg McKaskle
Re: THE HORROR! THE HORROR!
Quote:
Had the same issues at the NYC Regional with the cameras. Once we tamed the settings, the field ran the way it had the previous 2.5 days. We also had to look for a couple of other things that tied up the comm bands. I don't believe there will EVER be a comm-problem-free season; that's the nature of technology. The field and FMS always get heavily looked at, both locally and through a remote connection, whenever a field allegedly has issues. Nothing is foolproof; all that can happen is that it improves a little with each passing day.
Copyright © Chief Delphi