pic: Inverted CIM 2-Speed Gearbox

Bet I can name one… :stuck_out_tongue:

Once your wheels slip under load, gearing any slower doesn’t do anything to increase your pushing force; it just decreases your power consumption. You’re traction-limited rather than torque-limited.

For a 150 lb robot with roughtop tread, that “magic number” is around 6 feet per second, depending on your efficiency, exact weight, wheel design, etc.
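To make the traction-limit point concrete, here’s a back-of-envelope sketch. The coefficient of friction and wheel size are assumed placeholder values, not measured numbers, so treat the outputs as illustrative only:

```python
# Once wheel torque exceeds mu * weight * radius, the wheels slip, so
# any extra reduction past that point buys no additional pushing force.
# All numbers below are assumed placeholders.

MU = 1.1               # assumed coefficient of friction for roughtop tread
WEIGHT_LBF = 150.0     # robot weight
WHEEL_RADIUS_IN = 2.0  # assumed 4" wheels

# Maximum pushing force before slip -- note it is independent of gearing:
max_push_lbf = MU * WEIGHT_LBF

# Total wheel torque needed to reach the slip point; any gearing that
# can deliver more torque than this just spins the wheels in place:
slip_torque_in_lbf = max_push_lbf * WHEEL_RADIUS_IN

print(max_push_lbf, slip_torque_in_lbf)  # 165.0 330.0
```

Gearing only changes how much motor current it takes to reach that slip torque, which is why a slower low gear saves amps rather than adding push.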

I’ve been punching some numbers into the JVN design calculator, and I managed 16.85 fps in high gear and 6.76 fps in low.
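For anyone following along, the speed numbers a JVN-style calculator spits out boil down to a simple free-speed formula. A minimal sketch (the reductions and wheel diameter here are made-up placeholders, not the actual gearbox ratios):

```python
CIM_FREE_SPEED_RPM = 5330.0  # published CIM free speed
WHEEL_DIA_IN = 4.0           # assumed wheel diameter

def free_speed_fps(reduction):
    """Theoretical (100% efficiency) ground speed for a total reduction."""
    wheel_rpm = CIM_FREE_SPEED_RPM / reduction
    # rev/min -> ft/s: multiply by wheel circumference in feet, divide by 60
    return wheel_rpm * 3.14159265 * WHEEL_DIA_IN / 12.0 / 60.0

# Example placeholder reductions for a two-speed box:
high = free_speed_fps(5.0)
low = free_speed_fps(12.5)
print(round(high, 2), round(low, 2))
```

Real drivetrains come in somewhat under these numbers because of friction and battery sag, which the calculator accounts for with an efficiency factor.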

This will be a new design (but I’m keeping the inverted CIMs), since to do this I had to have the dog gears be the driven gears rather than the driving gears in the second stage (shifting on upper shaft). Also, in order to have the gear space in the gearbox, I’m most likely going to keep the idlers:

  1. The idler gears are easily replaceable in case they wear out
  2. By my logic, since the idler gears are made of steel, I doubt wear is going to be significant anyway

As for a belted gearbox, I think our team wants to move away from belts. It’s a bit complicated, but even if it is better than idlers, I’d rather work on the current style I have going.


Those speeds are much better, you should be able to withstand a pushing match for at least a little while (I’m getting about 50A current draw with 4.2" wheels, 150lbs, and 1.3 CoF).
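Here’s a rough reconstruction of that current-draw figure. The motor constants come from published CIM datasheets, but the motor count, the 100% efficiency, and the linear torque-current model are my assumptions, so the result is ballpark only:

```python
# Published CIM motor constants (datasheet values):
CIM_STALL_TORQUE_NM = 2.41
CIM_STALL_CURRENT_A = 131.0
CIM_FREE_CURRENT_A = 2.7
CIM_FREE_SPEED_RPM = 5330.0

COF = 1.3
WEIGHT_N = 150 * 4.448               # 150 lbf in newtons
WHEEL_RADIUS_M = (4.2 / 2) * 0.0254  # 4.2" wheels
LOW_GEAR_FPS = 6.76
N_MOTORS = 4                         # assumed 4-CIM drivetrain

# Total reduction implied by the low-gear free speed:
wheel_circ_ft = 3.14159265 * 4.2 / 12.0
reduction = CIM_FREE_SPEED_RPM / (LOW_GEAR_FPS / wheel_circ_ft * 60.0)

# Torque each motor must supply right at the wheel-slip point:
motor_torque = COF * WEIGHT_N * WHEEL_RADIUS_M / (reduction * N_MOTORS)

# Linear torque-current motor model:
amps = CIM_FREE_CURRENT_A + (CIM_STALL_CURRENT_A - CIM_FREE_CURRENT_A) \
    * motor_torque / CIM_STALL_TORQUE_NM
print(round(amps, 1))  # mid-40s A per motor under these assumptions
```

Adding an efficiency factor or changing the motor count shifts the answer around, which is presumably why the calculator lands near 50 A.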
Removing the idlers can only do you good, wear aside: it reduces complexity and weight. If you have to keep them, that makes sense, but I would remove them if at all possible.
What are you guys moving away from belts for? The epitome of flipped-CIM gearboxes (192 in 2014) used belts for its first stage and it reportedly went swimmingly. Saves a lot of weight and space in these things IME.

I might be completely wrong about my team and belts, but we’ve been using belts for a while on our DT, and this offseason we’re experimenting with chain DTs.

I mean… this only applies to the chassis, so perhaps I would attempt a belt system for the first stage. However, I’m using a large ratio for that stage, so I’m not even sure if there are the right pulleys for my configuration.

-Is there an online store that specializes in pulleys? It’s been a while since I last visited one, and my memory is fading away… -_-
-Also, is it possible to make your own pulleys? If so, is there a link to how to do that?

SDP-SI has them, but seeing that in this case you have a very large reduction you wouldn’t be able to pull it off and still maintain adequate belt wrap without idlers.

We are moving away from belts in drivetrains because the tensioning was too much work for the benefit. However, tensioning similar to 192’s 2014 gearbox is feasible, and in fact, 649 has CADed a similar idea already. It might be tough to get such a large first reduction with belt, however. SDP-SI is basically the go-to for pulleys, and 649 already makes their own pulleys, either on our laser cutter or through our wire-EDM sponsor.

I must bring back this discussion to ask some questions regarding this design, mainly because I’m considering making another inverted CIM gearbox with more desirable high/low speeds.

1a. How can I provide enough space between the CIMs for the gears without using the original idler design, but without going to belts (something I’ll consider later on) or custom gears?

1b. So what if I decide to stick with my original first-stage design? I don’t think anything bad will happen if I do. For one, the gears are made of steel, and even if they do wear out, the design makes them easy to replace.

  2. How can I bring down the weight to a minimum? I came across a topic on CD regarding delrin gearbox plates, and I was thinking about polycarbonate plates beforehand, but I’m not sure if either of those are good ideas.

We’ve used 0.25in polycarbonate for gearboxes before with decent success. So long as you have a solid way to mount the gearboxes and watch out for over-tightening screws you should be fine. You’re already using VexPro gears from the looks of it so you should save a decent amount of weight just by doing that instead of using steel.

The biggest thing is that I would use bigger idlers. I don’t think it makes any sense that you’re using 14 tooth pinions on your motors and 12 tooth gears as your idlers. 14 tooth pinions are a pain to use with a gearbox because you can’t install them in advance - they are bigger than the hole size needed to fit around the CIM boss, and it’s best to design for a tight slip fit around this boss rather than a bigger clearance hole. 12 tooth pinions are great for this reason. The downside is that the gears are small so the tooth loads are higher. Using them as idlers is just getting all of the drawbacks and none of the benefits.

I would use something like a 20 tooth idler that you can put small (1/4" ID) bearings into, then mount them on a shoulder bolt or something like that. You really do want a ball bearing in your idler or you’re just throwing efficiency away, and you ideally want them mounted to a shoulder bolt instead of just a screw or something so you have a simple robust round shaft for them.

  2. How can I bring down the weight to a minimum? I came across a topic on CD regarding delrin gearbox plates, and I was thinking about polycarbonate plates beforehand, but I’m not sure if either of those are good ideas.

I wouldn’t do either of those things, especially since you are cantilevering the first stage. You don’t want the extra flex that these materials will add to your gearbox. Spend the few ounces of weight where it matters; I guarantee you something else will be on your robot that you would rather cut weight from than your gearbox plate. If you really want to minimize gearbox weight, pocketing the gears is where you could start shaving a few ounces.

Also, don’t forget to add some fillets to your lightening pattern - even if you waterjet these plates, fillets avoid the stress risers of sharp corners and also just look better. If you mill these plates, obviously you can’t do interior hard corners.

(Apologies, just noticed this comment)
It seemed to me that the better defense for 2014 was a ‘pillaring’ technique. Pillaring is a tank warfare term, where the tank drives back & forth perpendicular to the cannon’s aim. It requires planning & setup, but it makes the tank much harder to hit while making it relatively easy for it to maintain sighting on a target. This is prevalent in the Battlefield series of games.

This same concept works for defense on the FRC field. Sprint into position, then pillar back/forth and force the other team to either push you sideways or drive fast enough around you to get to their goal. The likelihood of them pushing you is high - yet it’s time consuming and usually not as effective as one would think, since it still doesn’t solve the problem of them getting to their desired spot for an open shot.

Faster low gear speeds on an open field also give more opportunities to clip/turn a corner of a shooting bot - much more effective than raw pushing.

While this has been a great defensive technique in many years, 2014 included, it was better suited to years such as 2013, when one had to navigate the length of the field around obstacles, and pushing a robot into a specific area was a liability.

In 2014, pushing was much less risky as there were no safe zones. T-bone pinning a robot had a bit more risk than “pillaring” but a lot more benefit - the robot is essentially immobile for the duration of the pin. I don’t think “pillaring” was definitively better in 2014 just because of the T-bone pin and the relatively wide space to drive around. It is an important part of a defensive strategy but not the end-all.

When choosing a low gear speed with defense in mind, is an effective T-bone pin mutually exclusive with an effective pillar defense?

Do (e.g.) sailcloth bumpers change this consideration at all?

This plays into the original topic a bit - shaft spacing is usually determined by the gear availability and the desired difference between high gear and low gear (e.g. the dog gear choices). School A wants a larger gearing difference, School B wants a smaller gearing difference.

To clarify my previous comment, you CAN use polycarbonate for gearboxes generally speaking, but you would probably not want to use your current design if you went this route.

A quick redesign using solid (not pocketed) plates and eliminating cantilevered shafts (by simply supporting them on both sides) might still be a worthwhile weight savings.

Not really.

Do (e.g.) sailcloth bumpers change this consideration at all?

Somewhat, particularly if you’re using solid core pool noodles with sailcloth. In that case, your bumper doesn’t deform or grip enough to effectively T-bone anyone (or be T-Boned, which is the point), so you can only really play pillar defense.

This plays into the original topic a bit - shaft spacing is usually determined by the gear availability and the desired difference between high gear and low gear (e.g. the dog gear choices). School A wants a larger gearing difference, School B wants a smaller gearing difference.

I don’t really get school B. The advantage in short acceleration is usually so minor that it’s not important, and I feel like using low gear for more precise movement is using hardware to solve a software / controls problem. While pushing matches should be avoided, low gear to me exists so offensive teams have the option to push through defense if necessary and defensive teams can themselves push all day long against the strongest drivetrains. So it doesn’t get used all that much, but it’s more of a safety net.

In our experimentation with a “school A” drivetrain, we’ve actually found that we like to t-bone and play the “pillar defense” both in high gear. We found that teams can create separation from the t-bone when we are t-boning in low gear. When we are playing “pillar defense”, high gear allows us to keep up with opposing robots to effectively block the field. We now teach drivers to only use low gear in head to head pushing matches.

Mind giving traction details? I would expect that sufficiently high traction & gearing, in combination, would lead to popped breakers. I have no experience here myself, but I’ve read plenty of it on CD from 2014.

The robot was geared for 20FPS @ 100% efficiency, had 4 CIMs and 2 550s, and had 2" wide roughtop traction wheels. The robot never stalled while t-boning, which probably helped in preventing blown breakers.

To give an example where breakers DID pop…

Team 20’s 2014 drive train had 3-CIM WCP dual-speed shifters with 4" Colson wheels, and was geared for about 5.5 fps and 16 fps free speed (theoretical).

Here is video of our second match of the season: http://www.thebluealliance.com/match/2014nytr_qm13

In our pre-match strategy, we adopted the role of post-auto hounding of any opponents that missed their auto shots while our partners cleared missed auto shots of our own alliance. At the very start of teleop we go to play defense on 116 and set an open field T-bone pin on them which they fail to break free from for 26 seconds. The pin ends because we popped our main breaker.

Post-match after discussion with our drive team and some napkin math in the pits, we decided the following events likely led to the issue:
-The driver switched to low gear after the pin was initially set
-The shifting cylinder did not have sufficient force to shift the dog from high gear to low gear under the traction limit condition, so the dog remained engaged in high gear throughout the pin
-Our driver did not let up full throttle on the pin (we wanted to pin at full throttle without worrying about popping breakers as a design objective)
-We would have been pulling around 400 A or something crazy through the main breaker in this condition, which should only last a max of about 8 seconds according to the breaker spec sheet, so I am surprised we lasted this long before popping the breakers.
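For what it’s worth, that napkin math roughly checks out. A sketch under assumed numbers (a 150 lb robot, CoF around 1.0 for Colsons, six CIMs stuck in the ~16 fps high gear, wheels slipping at the traction limit - none of these are confirmed figures for this robot):

```python
# Published CIM motor constants (datasheet values):
CIM_STALL_TORQUE_NM = 2.41
CIM_STALL_CURRENT_A = 131.0
CIM_FREE_CURRENT_A = 2.7
CIM_FREE_SPEED_RPM = 5330.0

COF = 1.0                      # assumed Colson coefficient of friction
WEIGHT_N = 150 * 4.448         # assumed 150 lbf robot, in newtons
WHEEL_RADIUS_M = 2.0 * 0.0254  # 4" Colsons
HIGH_GEAR_FPS = 16.0           # the gear the dog stayed stuck in
N_MOTORS = 6

# Reduction implied by the high-gear free speed:
wheel_circ_ft = 3.14159265 * 4.0 / 12.0
reduction = CIM_FREE_SPEED_RPM / (HIGH_GEAR_FPS / wheel_circ_ft * 60.0)

# Per-motor torque with the wheels slipping at the traction limit:
motor_torque = COF * WEIGHT_N * WHEEL_RADIUS_M / (reduction * N_MOTORS)

# Linear torque-current model, summed over all six motors:
per_motor_a = CIM_FREE_CURRENT_A + (CIM_STALL_CURRENT_A - CIM_FREE_CURRENT_A) \
    * motor_torque / CIM_STALL_TORQUE_NM
total_a = N_MOTORS * per_motor_a
print(round(total_a))  # low-to-mid 300s A: the same ballpark as the estimate
```

Non-drive loads and a heavier robot would push this closer to the ~400 A figure, and either way it is far beyond what a 120 A main breaker holds for long.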

To mitigate the issue, we had the drivers always let up at the best opportunity early in the pin so the dog could shift. The very next match we popped the main breaker again, and after replacing it never saw a tripped main breaker the rest of the season (they tend to become easier to trip after tripping the first time).

When hounding teams on defense we could almost always maintain a pin once we set it, regardless of the fact that we were in low gear. The only exception that comes to mind is the Killer Bees being able to slip away well due to their drive train and driver skill.

Side note: I am unsure of whether we had changed this yet or not in the above scenario, but at one point early in the season we switched from 6 CIMs in the drive train to 4 CIMs and 2 MiniCIMs to up the torque in our catapult gearbox due to an increase in the pre-load of the torsion springs.