FRC 95 The Grasshoppers 2019 Build Thread


Thanks Sam! We weren’t planning on using a browser for streaming, but your link really confirms that decision.


I have no link to back it up, but a former mentor of our team, who is now with 4533, implemented a Raspberry Pi system that ran on a browser. I don't know how it worked; I don't know how any programming works. But I do know how to drive a robot real well, and there was like zero delay.


Just have to say, just being able to look at your team’s CAD model helped our team see an easy way to solve one of the problems we needed to solve with our L3 lift system.


Awesome! Glad to hear it helped. Please feel free to post a picture or screen shot here if you’re so inclined.


CAD is nearing completion, although our electronics/pneumatics layout effort is wildly out of date.

Now to continue fabrication and assembly! We’re a little bit behind where we were last year, but with a clearer path forward. Hopefully we can pick up the pace and have a couple good weeks of drive practice.


So many gearboxes to assemble… here are two of the many we’ve been putting together.

Practice robot assembly has started.

After some additional research we found that the ‘star topology’ with which we had wired our CAN Bus is counter-indicated by CTRE. :persevere:

I take the blame for this arguably bad call. Although this topology worked flawlessly first try during the off-season it is likely not worth any risk for a competition robot. I stripped out the CAN block and started soldering all of the CAN leads together.

We’re struggling with a controller- or code-related issue at the moment, which is preventing us from driving. I anticipate some progress tonight with the presence of experienced programmers, which we didn’t have last night. When we attempt to command a drive throttle the apparent command value sent to the SRXs seems to fluctuate at random.

Edit: these are the errors that we’re seeing, presumably related to some of our Talons not being properly addressed (which we know). In the past this hasn’t caused any errors like we’re seeing now. I’m sure we’ll figure it out, eventually, but it’s not optimal for our timeline.

In better news we have ostensibly finished machining on our elevator rails, are continuing to machine our elevator bearing blocks, have found a new volunteer to make our practice field elements (our normal guy has changed jobs and is no longer available), and have our last few critical orders placed.


Without going into a huge amount of detail, could you explain why the star topology is not good, other than that it is "not robust"?

Edit: I went to the site where we bought our splitters last year and saw their guide to topology. It looks like CTRE lifted their graphics and just put a red X over them.


I think you need to direct that inquiry to CTRE.


Me simple mechanical man.

But I’ll give it a shot.

My tentative understanding is that in a star topology the network can be quite sensitive to the lengths and types of wires used to construct the network. It can be executed, of course, but as more devices are added the network becomes more prone to instability/noise and failure.

With a ‘daisy chain’ (really Line/Bus) topology the 120Ω resistors in the Athena and PDB at each end of the CAN Bus rails keep ‘signal reflections’ and other errant signals to a minimum.

More discussion here: MindSensors CAN Splitter

And yes, it looks like CTRE used the MindSensors figure directly.

The short story is: we had significant control systems issues, double-checked choices we had made, found one to be counter-indicated by the speed controller/electronics manufacturer, and decided to change it to avoid any possible issues related to that choice.


It's all due to how CAN works: it needs to be terminated at both ends for reliable communication. Please have a look at this:

The high speed ISO 11898-2 CAN standard defines a single line structure network topology. CAN bus does not support star or even multi star topologies. The nodes are connected via unterminated drop lines to the main bus. The bus line is terminated at both furthest ends with a single termination resistor (characteristic line impedance).
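The reflection argument behind that rule can be made concrete with the standard transmission-line reflection coefficient, Γ = (Z_load − Z₀)/(Z_load + Z₀): a 120 Ω terminator on a 120 Ω bus reflects nothing, while an unterminated stub (as in a star leg) reflects the whole signal back onto the bus. A small illustrative sketch, not anything from the robot code:

```java
// Illustration of why CAN termination matters: the reflection coefficient
// gamma = (Zload - Z0) / (Zload + Z0) for a line of characteristic
// impedance Z0 terminated in Zload. Zero means no reflected signal.
public class CanTermination {
    static final double Z0 = 120.0; // CAN bus characteristic impedance, ohms

    static double reflectionCoefficient(double zLoad) {
        if (Double.isInfinite(zLoad)) {
            return 1.0; // open (unterminated) end: full reflection
        }
        return (zLoad - Z0) / (zLoad + Z0);
    }

    public static void main(String[] args) {
        // Properly terminated bus end (120 ohm resistor): no reflection.
        System.out.println("terminated 120 ohm: " + reflectionCoefficient(120.0));
        // Unterminated stub, as in a star-topology leg: full reflection.
        System.out.println("open stub:          " + reflectionCoefficient(Double.POSITIVE_INFINITY));
        // A mismatched 60 ohm load (e.g. two terminators landing in parallel).
        System.out.println("60 ohm load:        " + reflectionCoefficient(60.0));
    }
}
```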


We were seeing a similar issue when commanding a motor controller that did not exist. This seems like a change from last year, as I do not remember it happening previously.

I'm not sure I like this change, if this is how it behaves when a motor controller that was instantiated properly goes out during a match and similar calls get made. It was also not easy to diagnose, as the errors, in theory, should not have been able to affect other motor controllers' outputs.

I think that some more testing would be in order to see if this is the case and if so include some checks to make sure that a call is never made to a motor controller that does not exist.
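One shape such a check could take: probe each controller once at startup and refuse to command any that didn't respond. This is only a sketch; `MotorController` and its `isPresent()` probe are hypothetical stand-ins for whatever presence test the real vendor API provides (e.g. a firmware-version query that fails for an absent device), not actual WPILib or CTRE classes.

```java
import java.util.Optional;

// Sketch of a "never command a missing controller" guard. MotorController
// and isPresent() are hypothetical placeholders for the real vendor API.
public class SafeMotor {
    interface MotorController {
        boolean isPresent();    // did the device respond on the CAN bus?
        void set(double output);
    }

    private final Optional<MotorController> motor;

    SafeMotor(MotorController candidate) {
        // Probe once at construction; drop the reference if the device is absent.
        this.motor = candidate.isPresent() ? Optional.of(candidate) : Optional.empty();
    }

    /** Commands the motor only if it was present at startup; returns whether a command was sent. */
    boolean set(double output) {
        motor.ifPresent(m -> m.set(output));
        return motor.isPresent();
    }

    public static void main(String[] args) {
        // A controller that never responds is silently skipped rather than commanded.
        MotorController missing = new MotorController() {
            public boolean isPresent() { return false; }
            public void set(double output) { }
        };
        System.out.println("commanded missing controller: " + new SafeMotor(missing).set(0.5));
    }
}
```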


Thanks, @golf_cart_john! It's nice to know we're not the only ones.


Do you stack your motor controllers? I can’t tell in your electronics layout photo. If so, have you seen any downsides to this - access to buttons, heat, etc? We’re always looking for more efficient layouts.


Is there a reason why you are using 2x 3:1’s and not one 9:1?


Each set is triple-stacked. We have encountered no issues so far, but haven't exercised it much yet.

We didn’t have any 9:1s left!


OK, that makes sense now.

Errors:
- HAL: CAN Receive has Timed Out
- Error at frc.robot.Main.main(…): HAL: CAN Receive has Timed Out
- edu.wpi.first.hal.PDPJNI.getPDPTotalCurrent(Native Method)
- Watchdog not fed within 0.020000s
- Loop time of 0.02s overrun
- Warning at edu.wpi.first.wpilibj.IterativeRobotBase.printLoopOverrunMessage(…): Loop time of 0.02s overrun

Changes to the code:
- Removed all subsystems (and their associated commands) other than the drive system and vision co-processor
- Removed shifting functionality
- Commented out LiveWindow
- Deleted the save.xml file

Things checked:
- Can see all 6 Talons for the drive base, the PCM, and the PDP; they all have reasonable firmware and distinct device IDs
- Re-flashed the firmware of the PDP
- Removed the drivebase and basically everything else (and also started from scratch with simple code that has only the Athena and PDP); did both, and neither had this PDP error, but not sure whether they were just so simple that they didn't use the same part of the hardware/firmware
- Connected a different PDP to the CAN network and shorted the CAN network so it was only the RoboRIO and that secondary PDP; the secondary PDP was powered by its own battery, while the primary battery still powered everything else, but those devices were not part of the CAN network (confirmed with Phoenix)
- Used a recent-ish commit (32361fccc094c7a9c87b0b6b2bb2e1dc5a213aee? not 100% sure) and still had that PDP error (in addition to a bunch of others, because other items were not on CAN)
- Used the commit from just after JDW updated the codebase for VSCode early this build season (a89e8efdcf726c9181dcd327e06a6f5bf00351a1) and still had that PDP error (in addition to the ones from the missing Talons); this commit was from before we switched from IterativeRobot to TimedRobot
- Used the first commit of the debug_galore branch (where basically everything was removed) and DIDN'T see this issue, but nothing else was really happening, so it's hard to say
- Used the second commit of the debug_galore branch, which added back the OI and a drivebase pointing at a single Talon connected to the secondary PDP, and DID see the PDP issue

Still having significant code-related or CAN-related issues. Any thoughts on the above errors and notes are appreciated.

Help with Pathweaver

Sounds like the same problem they’re having here. There’s no solution yet, but you might want to follow that thread.


We had this issue recently and went through similar debugging steps. We found that we were initializing CAN-based devices that don't exist. For example, when our practice PDP's CAN bus module was fried, we got this error because the RoboRIO couldn't find a device with the specified ID. I know it sounds a bit silly, but make sure you don't initialize any CAN-based device that does not exist or has a different CAN bus ID. Please let us know what fixes it for you!
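One cheap sanity check along these lines is to validate the configured ID list before constructing any devices, so a duplicate ID is caught at startup instead of surfacing as CAN errors. A minimal sketch; the ID list is illustrative, not any team's real device map:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of a startup sanity check: find CAN IDs that appear more than
// once in the configuration before any devices are constructed.
public class CanIdCheck {
    /** Returns the set of CAN IDs that appear more than once. */
    static Set<Integer> duplicateIds(List<Integer> configuredIds) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> dupes = new HashSet<>();
        for (int id : configuredIds) {
            if (!seen.add(id)) {
                dupes.add(id);
            }
        }
        return dupes;
    }

    public static void main(String[] args) {
        // Example: six drive Talons plus PCM and PDP, with one accidental clash on ID 3.
        List<Integer> ids = List.of(1, 2, 3, 4, 5, 6, 0, 3);
        System.out.println("duplicate CAN IDs: " + duplicateIds(ids));
    }
}
```

The same pass is a natural place to fail loudly (or skip construction) for any ID that no physical device answers to.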


We are also seeing this issue with the watchdog not being fed. We have checked multiple times that we are not initializing any extra CAN devices and that each of our devices has a unique ID. I'll be interested to see what the root cause of this is.