We’ve also been having weird issues that look like CAN or network problems, and we’ve occasionally seen watchdog errors. We don’t know if it’s related or not, but for some reason our packet loss has gone way up after flashing our robots to the 2019 RIO and radio images. We haven’t had enough time yet to try going back to the 2018 images to see if that remedies our connection issues. Any chance your driver station reports packet loss?
We got the CAN/control issue sorted. After carefully paring down our code to eliminate any unknowns, we were able to reproduce it intentionally by commanding a Talon that did not exist. Even commands that ‘shouldn’t’ have been called could trigger it if they referenced absent CAN devices.
We were also able to reproduce the issue by simply ‘sleeping’ or freezing the main thread for 500 ms (an arbitrarily chosen value), so if the main robot loop takes too long to run (over 20 ms, maybe?) this error/bug can be triggered.
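A minimal, WPILib-free sketch of that ‘sleep’ reproduction (the class and method names are purely illustrative; in the actual test code the sleep sat inside the robot’s periodic method, and 500 ms was just a value well past the 20 ms loop budget):

```java
public class LoopOverrunDemo {
    // Simulates one "loop iteration" that blocks the main thread,
    // returning how long the iteration actually took in milliseconds.
    public static long blockedIteration(long sleepMs) {
        long start = System.currentTimeMillis();
        try {
            Thread.sleep(sleepMs); // freeze the "main robot loop"
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long elapsed = blockedIteration(500);
        // Anything over the ~20 ms loop period would trip the watchdog.
        System.out.println("Loop iteration took " + elapsed + " ms");
    }
}
```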
You can see the code in the video below, and the errors/symptoms. The test code is in our GitHub repo which is linked elsewhere in this thread. The following commit is our test code to illustrate the problem: 9b2e009266a4d6727e7c0fbca31ed2f2fd898e37
We believe that something in the newer firmware or programming environment triggers these issues when an absent CAN device is referenced, even if it’s not actively being called. We’re a little fuzzy on the details.
Yes it did. Especially when multiple missing devices were being reported.
Loop time of 0.02s overrun + Watchdog + etc
Loop overrun warning -- we can’t figure it out or even drive the robot
SIGNIFICANT DISCOVERY: if you try to call/talk to/access the PDP from more than one line of code this triggers the time-out issue. It is possible that calling other devices from multiple lines can also cause things to spaz out.
Our fix can be found in this commit. It involves making a singleton class, designed so that only one instance will ever exist. This results in only ever instantiating one PDP object.
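A minimal sketch of that singleton pattern (class and method names here are hypothetical stand-ins, not the team’s actual code; in the real robot code the private constructor would create the one WPILib PowerDistributionPanel object):

```java
public final class PDPWrapper {
    private static PDPWrapper instance;

    private PDPWrapper() {
        // In real robot code, construct the single
        // PowerDistributionPanel object here.
    }

    // All robot code goes through this accessor, so only one PDP
    // object is ever created no matter how many call sites exist.
    public static synchronized PDPWrapper getInstance() {
        if (instance == null) {
            instance = new PDPWrapper();
        }
        return instance;
    }
}
```

Every line of code that previously constructed its own PDP instead calls `PDPWrapper.getInstance()`, which sidesteps the multiple-instance time-out issue.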
Drives and auto shifts nicely. Phew.
Weighs 61.8 lbs.
Interesting. We also use an (effective) singleton to manage PDP calls, as a result of separate testing. Last year I wrote my high-frequency logging framework and benchmarked a few sensors like the NavX and a Rio-wired 63R. The PDP took 4 ms to read the current of all 16 ports. Oddly enough, with multi-threading and concurrent calls it took 7 ms to read all 16 ports. I haven’t tested again with 2019 code, though.
This was a known issue with the old PDP class. It was fixed over the summer; reading all PDP data now takes only about 0.45 ms. The cause was that each CAN read takes about 0.15 ms, and values were not being cached, so every individual read took that long. Now the status packets are cached, so only three CAN calls are needed per iteration.
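That caching fix can be sketched in plain Java (names and structure are illustrative, not the actual WPILib internals; the point is that 16 per-channel reads now hit a cache that is refreshed from a few status frames instead of issuing 16 separate CAN reads):

```java
public class CachedPdp {
    private final double[] cachedCurrents = new double[16];
    private long lastRefreshMs = -1;

    // Stand-in for the small number of CAN status-frame reads; in WPILib
    // these values arrive in the PDP's periodic status packets.
    private double[] readStatusFrames() {
        return new double[16]; // placeholder data
    }

    private void refreshIfStale() {
        long now = System.currentTimeMillis();
        if (now != lastRefreshMs) { // refresh at most once per tick
            double[] frames = readStatusFrames();
            System.arraycopy(frames, 0, cachedCurrents, 0, 16);
            lastRefreshMs = now;
        }
    }

    // Sixteen calls per loop now hit the cache, not the CAN bus.
    public double getCurrent(int channel) {
        refreshIfStale();
        return cachedCurrents[channel];
    }
}
```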
This is also the reason creating multiple new instances doesn’t work in 2019. It should be throwing an exception when anything beyond the first instance is constructed, though, so I’ll check to make sure that happens.
So, @Thad_House if what you are saying is correct then is that theoretically not the cause of the issue?
That is the cause of the issue. I just checked the source code, and it is allowing multiple PDP instances when it shouldn’t. We will fix it for the next WPILib release. Only one PDP instance is supported, like most other physical devices.
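The intended single-instance guard might look roughly like this (illustrative only; the real check lives inside WPILib’s device classes, and the class name here is hypothetical):

```java
public class GuardedDevice {
    private static boolean allocated = false;

    public GuardedDevice() {
        synchronized (GuardedDevice.class) {
            if (allocated) {
                // Fail loudly instead of silently creating a duplicate
                // device that floods the CAN bus with redundant reads.
                throw new IllegalStateException("PDP already allocated");
            }
            allocated = true;
        }
    }
}
```

With a guard like this, the second construction throws immediately, which would have surfaced the multiple-instance bug at startup rather than as mysterious time-outs.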
Ended Saturday with the practice bot starting to get assembled:
And the competition robot having some real functionality:
Auto shifting and the hatch mechanism work nicely. Next items on the list are the hatch ground intake, elevator drive, and cargo mechanism.
Here is a video of our first real practice driving.
Precision driving this year will be… challenging… much more so than many other games that I recall. I hope our vision-aided driving works out…
Wow that’s a really eye opening look at cycles for this year. The lineup and precision required at both ends of the process seem finicky, and both of those aspects are for the most part out of the drivers’ convenient line of sight.
I love the sound of your hatch mechanism.
I think this video should be enough to start changing people’s minds about defense in this game. I was surprised by how many people didn’t think there would be any value in it. This video does a good job showing that it won’t take much effort to keep an opponent out of the small scoring sweet spot. Though you won’t be defended at pickup, it’s worth noting that your collection position tolerance seemed pretty good.
As always, thanks for sharing!
Thanks for sharing a video that shows the real time it takes to complete a cycle, very interesting.
I agree. There are many aspects of the game that you initially overlook or expect to work in a certain way. I was concerned about alignment onto those hatch locations, and it looks like it’s every bit as difficult as I thought it would be. Perhaps even more so, especially for teams who don’t have this style of active extend-and-retract system. Having to drive the robot in and out to get the hatch placement right will be an enormous time sink.
A diagram that many people will want to consider is this:
It shows scoring locations where driver depth perception or camera-aided driving is required to score. It’s most of the scoring locations! Using depth perception is hard; even great drivers can struggle with it. I opine that cycle times will be remarkably longer than most teams are estimating, and Rocket RPs will be less common than Face The Boss was in 2018, and (maybe slightly) more common than Fuel RPs were in 2017.
Looks like placement on the cargo ship will get even harder once you have bumpers on.
Make no mistake, I think the number of cycles that good teams will do is going to be much lower than most people are thinking right now.
However, am I the only one that watches this video and thinks that placing hatches is EASIER than I thought it would be? Lining up didn’t look difficult at all. Definitely easier than placing gears in 2017. I’m also assuming the drivers hadn’t had a ton of time to practice when this video was taken.
I really like your hatch mechanism’s ability to handle misalignment. That will be key in competition.
I felt that lining up the hatches to place was harder than 2017’s gears. This is a comparable practice video from 2017:
Without mechanism help, the driver had to line up within ±4 in or so, with no real need to be square to the element. Hatches seem like ±2 in, with poorer driver visibility and the need to get both sides of the hatch placed at the same time (more or less).
They did not.
Thank you! We already have a design revision cooked up to improve it by a factor of 2 or so.
With regards to the issue you had with CAN:
There is nothing in the implementation for Talon SRX/Victor SPX that should cause the issue you’re seeing - attempting to talk to CTRE CAN devices that aren’t present on the bus will cause DS error messages but should never cause unexpected behavior in other devices or motor controllers.
We took your robot code in GitHub from the commit you referenced and attempted to reproduce the issue as you recorded in the video using a test robot we have here. We have not been able to reproduce it.
Can you do the following:
 Make sure you’re connected to the robot via USB
 Deploy the code that reproduces the issue, and attempt to reproduce the behavior from your video.
 When the motor controllers ‘pause’, do a self-test of one of the motor controllers using Phoenix Tuner. This gives a lot of information about the state of the motor controller.
 Observe the PDP & PCM LEDs when the motor controllers ‘pause’. What are the LEDs on the PDP/PCM doing?
 E-mail us at support@ctr-electronics with the following pieces of information:
- zip file containing your exact project (as opposed to us copying the code from github).
- Screenshot of your CAN device information (in the Phoenix Tuner ‘CAN Devices’ tab)
- A copy of the self-test text from 
- A copy of the Driver Station log when reproducing the issue. These logs are typically located in C:\Users\Public\Documents\FRC\Log Files
- The exact version number of Phoenix you’re using. The vendor json file in the zip of your project works for this (as long as you zip up the entire project directory).
I realize it’s the middle of build season, so I appreciate any time you can take for these steps.
When you go through these steps, in particular go through the driver station logs for when the issue occurs. Look for the watchdog indicators similar to the picture below:
Note that these are completely separate from the “Loop Overrun, watchdog not fed after 0.02000 seconds” messages that appear in the DS when your code takes too long to run - the events in the above DS log are NI NetComm Watchdog events that disable actuator output from the roboRIO. These events occur seemingly at random, anywhere from every couple of minutes to every hour, and can occur even in a robot program that’s only using PWM motor controllers. This was reported during Beta this year but so far doesn’t seem to be resolved.
These events do cause all motor controller/actuators to stop for about 2.5 seconds. It may not be what you were seeing during your video, but it’s worth checking the DS.
POLL: How Many Hatch Covers will be on the Ground Per Match (Weeks 1-3)
The low visibility is going to make a huge difference from Steamworks. For the most part you could see what you were doing on the airship, and you had a pilot very close to the target to help you line up.
I expect a lot of time wasted on lining up to get both hooks and loops connected to keep the cargo in. At the beginning of the season we thought 10 s cycles were good; now 15-20 s will be optimistic for most teams. It won’t take much defense at all to starve a rocket RP.