Thanks for the explanation, Eric. I was unaware there were any functional differences between the 2009 and 2010 PD boards. Since they look identical and carry the same part number (having just checked the 2009 and 2010 KOP checklists), how can I tell which one is which?
The '10 PD will have red LEDs under any wired Wago connections that also have pulled breakers. It’s pretty easy to identify when it glows in the dark. 
P.S. Just to keep thread clutter all in one place:
- the new robot bridges (WET610N) are too slow to connect, but the reason Kate (FRC KOP Engineer) gave for picking them included improved streaming video performance.
- the Digital Sidecars had some debris-induced, shock-induced, and similar damage that was sometimes difficult to diagnose on the field. From the returned units Eric has been able to examine, are there any common failures that could be readily identified via additional status LEDs?
- Several issues with the Classmate driver station. All in all it worked well. The obvious problems:
- depending on battery power on the playing field
- random failed USB power re-negotiations (game controllers, Cypress)
- failed services upon waking from Sleep mode (Cypress)
- failing to connect to FMS (rare cases)
- easily broken Ethernet cable retention
- cRIO module connections came loose on robot impacts (the 50 g clips are sometimes not enough).
Thanks! Other than looking prettier, are there any functional differences? I’m trying to set up some summer sessions and would like to utilize all of our assets!
I didn’t intend to derail the conversation, sorry!! If more discussion on this topic is required, could we please have a moderator pull it to its own thread?
The 2010 PD adds the blinky lights that Mark mentioned, a self-resetting fuse in the camera return path, and a tiny bit of extra power supply conditioning.
None of these improvements matter in the nominal “everything is happy” case. If you wire your robot correctly, a 2010 PD is identical to a 2009 PD. If you wire the robot incorrectly, a 2010 PD provides slightly more information and fails more gracefully* in a few specific fault cases.
You can tell a 2009 from a 2010 by the color of the PCB. Red for 2009, Blue for 2010.
The 2010 FRC rules did specify a 2010 PD, but a 2009 PD works just fine. Please feel free to use them for whatever off-season purposes you want, secure in the knowledge that they are 99.something% functionally identical.
- A few 2009 units blew out the return path of the camera supply when it was shorted to the battery input. A 2010 unit subjected to the same fault will protect itself and usually recover in 5-10 seconds. In an extreme situation, the third line of defense will kick in. This takes 20-40 minutes of resting unpowered to recover entirely, but I’ve never seen it happen in real life.
Eric, speaking of 2010 enhancements, what improvements were made to the Analog Breakout?
As for the cRIO hardware itself, it was fairly good (although it does weigh a lot). It would be nice to have separate tether and radio connections to avoid unplugging the radio, but that could be fixed by using a radio with multiple ports (this would also make the tether easier to access, as the radio is generally in a more visible location than the cRIO).
We had some issues involving a broken Analog Bumper or Analog Module; we don’t know which, but we replaced both and now it works. The voltage was oscillating considerably, as a graph of the analog inputs showed. We also had an issue where the analog module came out of the cRIO and caused the arm to freak out and almost damage the robot (the e-stop helped here).
There are also many separate points of failure here: the cRIO itself, its connections to the 3-6 modules, the connections of the 2-4 solenoid and analog modules to their bumpers, the connections on each end of 1-2 DB37 cables, and up to 7 power connections not including the radio (4 bumpers, 2 sidecars, and the cRIO).
Proposed solution: a cRIO backpack that would attach to the top of all of the modules, providing a more robust connection (two screws to each module), and accept a single 12 V unregulated input to feed the 12 V radio, 6 V servos, 12 V solenoids, 5 V DIO and AI, and the 24 V cRIO. The camera could come off the 5 V feed if it needed to, eliminating the PD board completely and saving some money for off-season projects. It could be designed for 2 analog modules, 1 digital module, and 1 solenoid module. The 32 DIO channels could have a fixed number of PWMs, DIOs, relays, SPI, and I2C. If you needed more than that, you could use modules like you have now in the remaining slots.
As for DSC’s shorting out, we have always fixed that by turning the robot upside-down and shaking it out. Works well.
Strictly on robot-end control system components, my biggest complaint is the radios. 18 seconds into a match at Kettering, we sailed over the bump after autonomous (not very roughly compared to other teams), hit the ground, and the radio stopped for some unknown reason. When we returned to the pits, we had a huge delay between matches (mostly because of field problems), got a new radio from spare parts, and had to wait a really long time to have it reprogrammed. Once it was reprogrammed, we put it back on (in a different orientation this time) and used about 12-18 inches of duct tape on the connections, plus two zip ties to go with the Velcro. No more problems at Kettering. Since then, we have carefully used a ton of duct tape on all radio connections and had no problems. Still, the radios should be much more reliable than they are. The power connections are just friction locks, and that will never work for FIRST, especially in a game as rough as Breakaway. To make things worse, the button on top can press itself and cause problems, so disabling it in the firmware might be a good idea.
And, of course, I hate the Cypress comm. Totally sucks. But that is for a different thread.
I’m fairly certain we already used six separate access points. I’d imagine they’re on different channels as well. The 5GHz range provides 19 non-interfering channels in the United States.
If FIRST could pull this off, my entire team will videotape ourselves giving a round of applause and send it to the engineers responsible. 
I would like to see a change to the FPGA that will allow teams to read PWM outputted by a sensor.
Also, this thread has gone WAY off-topic. This is a control system feedback thread, NOT a KoP feedback thread! Problems with the PDB, for instance, do not belong here.
I’m just going to echo some of the comments already made about the radio. The connection is not nearly robust enough for competition; if I need to duct-tape a wire into place to prevent communication issues, it’s not designed right.
I’d also like to agree with the reset button issues, and more generally about the suitability of all the parts and pieces for shock loads; dual ports for tether and radio would be very nice.
The comment about making one board to attach on top of the cRio is awesome, that would make everything so much easier, I would love it.
Though I do like the sidecar being physically separate from the cRIO.
On the last 2 robots my team (399) made, the cRIO has been in a somewhat hard-to-reach area for cables.
But the sidecar could be located in a better spot because it was so small, so cables could easily be added or removed, which made debugging much easier.
cRIO – excellent. The only trouble I’ve had is accidentally bending a pin of the compact DB15 in one of the module slots. This was probably due to sloppy and repeated insertion of a module.
Robot Radio – Hasn’t shown much reliability on-field, and I have no scientific method of troubleshooting.
Camera – It’s a pretty good camera. A shame we can’t use the FPGA to process images at the 30 fps 640x480 the camera is capable of. A method of increasing the view angle would be quite useful. Insulating the case from the frame presents a problem if you use the provided camera gimbal. The gimbal could use a redesign to aid in insulation and robustness.
Solenoid breakout – I’m not sure why this was created. I would prefer screw terminals to those PWM-style connectors. (I believe the screw terminals are a removable Phoenix connector, which would make it easier to move the cRIO from 'bot to 'bot.)
http://sine.ni.com/nips/cds/view/p/lang/en/nid/208822
Analog breakout – I appreciate that the noisy regulator was replaced with a linear regulator this year. However, I find it limiting that the only power supply is 5 V. What about 10 V? -10 V? 3.3 V? It’s silly to have a 12-bit A/D converter and only use 10 bits of it. Plus, those cables look just like the 3-pin cables we use for most of our other signals. I’ve attached an example pin-out.
In addition to this, I’ve always found the height of the board and that retaining tab on the plastic shield to get in the way.
Digital sidecar – Everything is packed in way too tightly, and I don’t use most of it anyway. I hate having to pull connectors apart by the cable, especially with PWM-style cables, because it tends to pull the connector apart FROM the cable. I would really like the option to use the digital module in slot 6 as 32 straight-through DIO (even without the resistor pull-ups). This would allow use of sensors that have send and receive on the same line, and it would also allow the use of MORE sensors. (Someone could line their 'bot with limit switches, have 12 optical encoders, or use the Parallax sonar.) Also, it needs much better color-coding. Every year I have to go through and paint it up with nail polish so we don’t put things in backwards. I like the lights on the relay outputs, and the space between the PWM connectors for the PWM signal. In general, it’s hard to place anywhere, because wires go to it from all directions, and it’s hard to neaten those up.
Power Distribution board – Seems to discourage proper protection. The main breaker should fit in neatly where power is supplied. The camera power supply should have a crowbar at 500mA, not 3000mA. (The camera datasheet specifies 5.0–5.5VDC with a max power draw of 2.5W) The cover of the PD board should not prevent fuses from being used for supplying power to the analog breakout (2A), the solenoid module (10A), and the Digital Sidecar (5A with many servos, 1A without). The power supplies for the camera, the cRIO, and the Robot Radio also throw a lot of noise on the power line. The camera and the Robot Radio should have physical switches to disconnect power from their DC-DC supplies. There should be a large capacitor to buffer noise, because the noise on the main power supply affects the reliability of analog sensors.
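The 500 mA crowbar suggestion follows directly from the datasheet figures quoted above; a quick sanity check (plain arithmetic, nothing assumed beyond the quoted specs):

```python
# Camera limits quoted from the datasheet above
max_power_w = 2.5        # maximum power draw (W)
nominal_supply_v = 5.0   # low end of the 5.0-5.5 VDC supply range

# Worst-case current draw occurs at the lowest supply voltage
max_current_a = max_power_w / nominal_supply_v
print(max_current_a)     # 0.5 A, i.e. the suggested 500 mA crowbar point
```

A 3000 mA limit is six times the camera's worst-case draw, which is why it offers so little protection for that branch.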
Wago Terminals – A little tricky at first, but pretty good. Requires two Wago tools if you’re using zip cord. I would prefer the Wago terminals that have plastic levers on them instead of requiring a Wago tool.
http://www.wago.us/products/27816.htm
Wago Connectors – Need to be color-coded so wires are not inserted backwards. They split apart when non-official Wago tools are used (because most screwdrivers are too wide).
Spike Relay – Push-ons are susceptible to failure. May break off from the board after repeated use. Very expensive item for just an opto-isolator and a three-position double-pole, double-throw relay. No color coding or keying. I would like a CAN-enabled relay to use instead.
Victor 884 – Small package. Nonlinear control. Unprotected fan hurts if you don’t expect it. No color coding or keying.
Jaguar – Lots of features available with CAN. Linear control. Forward and reverse limit switches. No color coding or keying. No reverse-voltage protection. Fork terminals are prone to failure and reversal. A better solution would be to use a pluggable connector on both sides of the Jaguar.

I would totally agree with this point. Our team (997) had a recurring electrical issue during the seeding rounds at Atlanta that would manifest as the robot completely dying for upwards of 30 seconds after giving or receiving a bump. This was traced to a bad ground connection on the cRIO! There has to be a more secure method to connect wires to this critical piece of hardware.
I have heard the complaint regarding having to disconnect the cRIO from the radio multiple times on this and other threads. What our team did to solve this issue (it was WAY too hard to get to the connector on the cRIO) was to unplug the network connection from the radio end and then plug it into a simple (cheap 4 port variety) network switch connected to the classmate, etc. It made things easy in the pits to convert from a competition (radio) configuration to a pit/testing (tether) configuration.
However, I will agree that the boot time for the new wireless bridge is very long, and the power connection on the radio is very prone to problems as well.
Enjoy!
Floyd Moore
Mentor Team 997
A few new comments after IRI:
Stop Button Override wait: This is annoying. We broke our USB hub, but since we only have two USB devices (gamepad and Cypress board), we just hooked everything right up to the Classmate and unplugged the stop button. We never use it anyway since it kills the code on the robot, and the space bar works just as well. BUT, now we have to wait 20 seconds in addition to the FMS lock time to tether the robot after a match (this is especially time critical during eliminations). The old IFI system could be run without a competition dongle with no problem (if it needed to be disabled, you could just unplug the tether cable or the power cable when on radio)
Wait for code when downloading code: Sometimes I need to download code FAST, something like an autonomous change between elimination matches. I already know I have to wait for the robot to boot and wait for the code to build, but I also have to wait for the existing code on the robot to load for some reason. I have no way of knowing whether it has loaded or not. I can tell the robot has booted by looking at the RSL, but there is no indication of code. I could tether it to the Classmate, but that is busy clearing an FMS lock.
Robot crash when downloading code: I experience this every now and then. For no real reason, the robot crashes and reboots (loss of comm and code on the Classmate, the RSL goes out, and a minute or so later it is back up). I didn’t notice this much during the season, but had a huge issue with it at MARC (good thing the FMS lost power, so I had about 15 minutes to fix it; the robot crashed about 4 times in a row before working).
A few plusses:
I really liked the use of wireless on the practice field at IRI. They gave us our radio encryption code to program into our existing DS radio, and let us use it without changing the radio on the robot.
A few suggestions:
- Don’t re-build and re-download everything every time. I looked at what it does, and it re-builds the ENTIRE WPILib every time I make a change to one file. I already talked to you about this one at the championships. Just reminding you.
- Some sort of cRIO emulator for LabVIEW (I heard the C++ guys get one) would be really nice. I mean, I know I could write it myself, but then I have to worry about what happens if WPILib changes next year.
- Dual-booting Linux and XP on the Classmate could provide a locked-down environment, plus it could be optimized to boot fast and run the Driver Station. The XP partition could run the development environment.
This causes two problems:
First, you can’t use WinXP drivers for gamepads and other stuff (the blue DS had this same issue, so I wouldn’t worry about it).
Second, you have to build the Dashboard for both Linux and Windows.
- Give us the patch for the Cypress issue.
Please remind us what you are referring to here.
Thanks,
-Joe
Thank you to Joe and Greg for asking for this feedback.
After working with the FTA’s (Mark Koors and Rob Jenkins) at IRI, I would like to add this to the critique of the Classmate:
- Failure of the Classmate to connect to FMS is a bigger deal than Mark McLeod alludes to above. I will try to explain.
If the Classmate fails to connect to FMS (and it is only in Driver Mode), then the FTA has two options: a) run the match without that team participating, b) swap out the Classmate. The first option frustrates the team, of course (and causes a 1-2 minute delay* while making this decision). The second option creates at least a 4 minute delay*.
- Driver and Developer mode:
If a team shows up to the field while logged on to Driver and Developer mode at the same time, the Classmate again does not connect to the field. In this case, a 1-2 minute delay* occurs (and that is only if it is diagnosed correctly).
- We ran 104 qualification matches at IRI this past weekend, with a top-notch field staff. Due mostly to these delays, we ran about 45 minutes behind schedule after those 104 matches.
Sincerely,
Andy Baker
There is a third fix for the Classmate failing to connect to FMS that also takes a couple of minutes. It involves the Restore stick and was documented in the FMS error sheet by FTA Pete Kieselbach after week 1.
I haven’t seen trouble with having both Driver & Developer accts logged in and have run successfully with them during the events I worked. Is there any more detail on that particular problem?
Do you know if the Classmates that exhibited the FMS-failure to link problem had been:
- rebooted in the field queue,
- awakened from sleep mode, or
- kept constantly running?
One of the troubleshooters would probably have taken note of this.
If someone has anything, I’d like additional symptoms or triggers on the inability to connect to the FMS. The code reads UDP from the port with a one second timeout, processes anything it received, and loops around for more.
Any idea if a laptop that fails to connect has ever connected? Is it the laptop settings or other issues?
Any idea if it tends to happen after a suspend, a reboot, or ???
To try and reproduce this, was this FMS lite or the real deal?
Greg McKaskle
Here’s what I know…
I worked on about a dozen of these particular FMS-fail to connect cases at official events with the full FMS. That’s why I’d like to hear more reports from other areas of the FRC world to see what patterns others saw.
Under competition circumstances we’re just trying to get them operating again, so we don’t get much opportunity to study the problem. We just try to make it go away as fast as possible. I never had the time to try it against FMS-lite. I meant to do so in Atlanta, but my laptops were all tied up when the opportunity arose.
- It seemed to manifest when the Classmate woke from sleep, but only a small percentage of Classmates suffered.
- Once it started, rebooting did not improve the problem.
- It can be fixed by restoring the Classmate to KOP as-delivered (~2 hours).
- It can be fixed by a bizarre use of the restore stick (~2 minutes).
I can’t say that restoring prevented the problem from coming back. Standard advice quickly became “don’t ever let it go to sleep.”
When some of them did go to sleep again the problem did return, but that’s a very subjective observation and I wouldn’t put any faith in it without collecting less-subjective samples.
There were a lot of cases at the week 1 NJ Regional, but I didn’t see a single one at the 4th week SBPLI Regional. One big difference is that most teams at SBPLI had them on constant power or shut them down completely while waiting in the queue. By week 4 the sleep problems had been widely reported and teams weren’t letting them sleep.
You saw one of these in Atlanta with team 192 in the Curie pits, I think it was.
Taking 192 as an example, they reportedly worked fine with FMS at their regional, but failed to connect to FMS during the Championship Thursday practices. It was fixed at the field and tested fine with FMS. We rebooted their Classmate from scratch and it still connected fine.
Unfortunately, they did not have an inverter to keep the Classmate powered and instead put it to sleep for the long walk and wait in the queue. When it woke up it no longer recognized FMS.
What interests me most about Andy’s report is the attribution to conflicts between the Developer and Driver user accounts. I often had teams running both accounts so some of the sleep problems could be quickly fixed. The only FMS problem with having both accounts open that I knew about was early on when users could accidentally run two copies of the Driver Station software, but you guys prevented that with one of your updates.
Thanks for the info. Anyone else with observations?
Greg McKaskle