SPARK MAX and NEO troubleshooting (stalling motor)

We have a team at the Great Northern Regional with a 6-NEO brushless drivetrain on SPARK MAX controllers. When they run the motors with a slow acceleration, everything works fine. When they run with a faster ramp, the SPARK MAXes send the proper signal, but the NEOs essentially go into a locked state. They tested it in a default program and the same thing occurred. Has anyone experienced this before, or have any idea what could cause it?


Can you expand on what you mean by a “locked state?” We are competing at GNR, and we are running a 6 NEO drivetrain, so we may be able to help out tomorrow.

When this occurs, the motors will spin very slowly, but the LED indicator on the motor controller shows it is trying to send full output (a 1). The robot will crawl at 0.5 to 1 ft per second.

I haven’t seen this specific issue, but I will say that our experience with the Spark MAX ramp rate function hasn’t been particularly good. We do use current limiting as an alternative, but I might try totally removing the onboard ramp rate.
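One way to drop the onboard ramp rate without losing smooth acceleration is to slew-limit the joystick value in robot code before it reaches the controller. A minimal self-contained sketch (plain Java, no REV dependency; the class and names are illustrative, not the REV API, and the 0.2 s ramp time is just an example):

```java
// Minimal slew-rate limiter: caps how much the output can change per loop,
// so a 0 -> 1 command takes rampTimeSeconds to complete. Similar in spirit
// to the SPARK MAX open-loop ramp rate, but running in robot code instead.
public class SlewLimiter {
    private final double maxDeltaPerLoop;
    private double lastOutput = 0.0;

    // rampTimeSeconds: time to go from 0 to full output
    // loopSeconds: control loop period (20 ms in a typical FRC robot)
    public SlewLimiter(double rampTimeSeconds, double loopSeconds) {
        this.maxDeltaPerLoop = loopSeconds / rampTimeSeconds;
    }

    public double calculate(double input) {
        double delta = input - lastOutput;
        if (delta > maxDeltaPerLoop) delta = maxDeltaPerLoop;
        if (delta < -maxDeltaPerLoop) delta = -maxDeltaPerLoop;
        lastOutput += delta;
        return lastOutput;
    }

    public static void main(String[] args) {
        SlewLimiter limiter = new SlewLimiter(0.2, 0.02); // 0.2 s ramp, 20 ms loop
        double out = 0.0;
        for (int i = 0; i < 10; i++) {
            out = limiter.calculate(1.0); // full-throttle step input
        }
        System.out.printf("output after 0.2 s: %.2f%n", out); // ramped up to ~1.00
    }
}
```

Because the limiting happens before the value is sent to the controller, it can be tuned or bypassed per-mode in code, and nothing sticky is stored on the SPARK MAX itself.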

Can the ramp rate be set through both the API and the PC client?


It can. However, best practice with SPARK MAXes is to reset the settings during initialization (through a built-in method) and then set them through the API.

We are the team this thread is about. We had a ramp rate of 0.2 at first and went up to 0.3 for experimentation. We took the ramp out entirely for the testing described above. We currently use current limiting:

kCurrentLimit = 40
kPeakCurrentLimit = 45
kPeakCurrentDurationMillis = 100


leftMaster.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);
leftSlave1.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);
leftSlave2.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);
rightMaster.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);
rightSlave1.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);
rightSlave2.setSecondaryCurrentLimit(kPeakCurrentLimit, kPeakCurrentDurationMillis);

These are all called in robotInit().


To clarify, are you still having issues after removing your ramp rate? I could be wrong, but ramp rate should be a “sticky” setting, so unless you are explicitly removing it (either setting it to the default value or resetting the Spark settings during robotInit), it might not actually be gone.

We believe the issue is not in the code. This is the test project we created and ran, and we experienced the same issue with it.

A fix we could try is an extremely aggressive ramp rate, either in the API or on the Java side before joystick values are sent. The one thing I don't understand is why it would behave like that. We can set them up with a ramp-rate fix, but I would like not to limit the robot's top speed or acceleration if at all possible. This could be a band-aid patch to get them working for match 1, but long term it's not the best solution. We can get a video tomorrow when pits open.

@Moexx399 can maybe confirm; I think at some point we set it to zero? I can't remember for sure, but I am pretty sure the drivetrain didn't look like a ramp was being applied.

To my knowledge a ramp rate was not on the robot for our final test of the night, but I am not 100% sure. We can test in the morning with confirmation on the ramp rate. We will probably do a factory reset on the motor controllers to be safe about all settings.

No settings are being applied in the test code that you linked, which is my concern. My recollection of the way Spark MAX settings work is that if you do not explicitly set something (or clear something), you are going to have the same settings as before. This would mean that even if you are using the example project, the settings from the last time you set them (so theoretically, your regular robot code) would be carried over.
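The "carried over" behavior described above can be pictured with a toy model (plain Java; `FakeController` is invented purely for illustration and is not the REV API — on a real SPARK MAX the settings persist in the device's memory, not in robot code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of controller-side persistent settings: a static map stands in
// for on-device memory that survives swapping out the robot program.
public class StickySettingsDemo {
    static class FakeController {
        static final Map<String, Double> flash = new HashMap<>();

        void setRampRate(double seconds) { flash.put("rampRate", seconds); }
        double getRampRate() { return flash.getOrDefault("rampRate", 0.0); }
        void restoreFactoryDefaults() { flash.clear(); }
    }

    public static void main(String[] args) {
        // "Competition code" configures a ramp rate...
        FakeController competitionCode = new FakeController();
        competitionCode.setRampRate(0.3);

        // ...then a fresh "test project" that sets nothing still sees it.
        FakeController testProject = new FakeController();
        System.out.println(testProject.getRampRate()); // prints 0.3: the old setting carried over

        // Resetting during init removes the stale configuration.
        testProject.restoreFactoryDefaults();
        System.out.println(testProject.getRampRate()); // prints 0.0
    }
}
```

This is why running an example project is not a clean experiment unless the controllers are factory reset first.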

I do believe that is correct. We will factory reset all motor controllers to take that factor out of the equation. I am not super familiar with the SPARK MAX, but the SRX has a couple of settings that can send it haywire, and I would assume the SPARK is no different. Is there a config-all function for the SPARK that we can use, like the SRX has, or does each setting have to be done individually?

Each setting needs to be configured individually. It is good practice to call spark.restoreFactoryDefaults() before doing any configuration so that you don't have any sticky settings.
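A sketch of that init order, from memory of the REVLib Java API of that era (method names may differ by version; treat this as a configuration outline, not verbatim code, and the CAN ID and limit values are examples):

```java
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMaxLowLevel.MotorType;

// Configuration outline: wipe sticky settings first, then apply only the
// settings you actually want, once, during robot init.
public class SparkMaxConfig {
    public static CANSparkMax configureSpark(int canId) {
        CANSparkMax spark = new CANSparkMax(canId, MotorType.kBrushless);
        spark.restoreFactoryDefaults();   // clear anything a previous program left behind
        spark.setSmartCurrentLimit(40);   // per-motor limit, in amps
        spark.setOpenLoopRampRate(0.2);   // seconds from 0 to full output (optional)
        // spark.burnFlash();             // optional: persist settings across power cycles
        return spark;
    }
}
```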

Awesome! Swing by the pit in the morning and see what you think! Not mine, of course; it's 4239, and they are in the relocated "water pit" area.

We will be there around 7:50 am. I’ll try to find you guys before we go in. We are the first match tomorrow morning which means we queue at 8:15. Hopefully we will be able to work on stuff during opening ceremonies in line. Thanks for your help!

We figured out it was the fact that we were setting a secondary limit. We don't know if we were using that function wrong, but we took it out and it fixed our problem.
