FRC 5987 Galaxia | 2024 Build Thread | Open Alliance

Welcome to FRC 5987 Galaxia’s 2024 Open Alliance thread! We’re happy to be back for our third year sharing our successes and failures with the greater FRC community. We’d greatly appreciate any constructive feedback on our progress throughout the season, and hope that our efforts can help inspire other teams.

A little about the team: we are an extracurricular activity within the Hebrew Reali School in Haifa, Israel. This year we have 34 members: 2 captains, 21 mechanical, 5 programming, and 6 COM (Community, Outreach, & Media). The team has students in grades 10-12, split exactly 50:50 boys and girls. We have six* dedicated adult mentors, as well as a large number of alumni and parents who help out.

We’ll be posting more as kickoff approaches about what we’ve been working on over the summer and how we’re preparing for this upcoming season.


You can find our team resources and social media at the following links:

Website: www.galaxia5987.com
Onshape: Onshape
Onshape FeatureScripts: https://cad.onshape.com/documents/3872562361eef3fa28b83e84/w/7b238e20915affe58e64da2a/e/1aeb1b988ff7662392a30e9a
GitHub: Galaxia 5987 in memory of David Zohar · GitHub
Instagram: https://www.instagram.com/galaxia_5987
Youtube: https://www.youtube.com/channel/UCh15pKUuJz5bBrCU1yH-IwA
Facebook: Galaxia 5987 in memory of David Zohar | Haifa

 
* Now five, as one of our mentors sadly passed only a week ago

25 Likes

I loved your build blog last year, excited to read it again this year!
I’m very sorry to hear about your mentor’s passing.

11 Likes

In order to prepare for the upcoming season, conduct training, and volunteer within our community, the team met twice a week during the summer and three times a week once school started. With 15 new members, all of the returning members pitched in to make sure everyone is up to speed and ready to be a productive team member by the time the season starts. And like every year, we want to improve our skills over the off-season so that we can be ready for whatever kickoff throws at us.

The biggest change for the mechanical team this off-season has been switching from Solidworks to Onshape. After a number of years of struggling with having many students working in parallel on SW and seeing the process optimizations possible with Onshape, we decided to make the switch early this offseason. Not everyone was on board at first, but after some time to adjust the general feeling on the decision is one of satisfaction. By the end of the summer, team members were just as proficient with Onshape as they were previously with Solidworks. Some members have even learned to write custom FeatureScripts to make the process even smoother. We are following a one-document-per-subsystem structure in order to keep the load times and accidental changes to a minimum. We will be sharing all of the robot documents for the season once they’re created.

Our big project for the year was a robot to shoot 2020 power cells. We treated this project like a robot we would design during the season: splitting into mechanisms, prototyping, CADing, and integrating the mechanisms together into a cohesive robot. It features a custom flipped swerve drive, four bar intake, indexer, conveyor system, turret, and hooded flywheel shooter. You can see the robot’s CAD here.

We then went about building the robot we designed. As it stands, the majority of the parts have been manufactured. Here are some of our favorites:

Swerve base plate

Turret plate

Intake assembly

The swerve modules are assembled, save for a few motors we’re waiting to arrive.


We’ve been working on assembling the chassis, intake, and conveyor systems this week with the students off school for Hanukkah.


This week is our “Hanukkah seminar,” where we meet for most of the day four days in a row in preparation for the start of build season. Two days down, two to go. We’ve gotten a lot of work done and hopefully we’ll make even more progress in the remaining days. Our goal is to have the robot at least mechanically assembled by kickoff so that the programming team can use it as a test bed. More to come on what they’ve been working on in the offseason as well.

To everyone celebrating, Happy Hanukkah from the entire Galaxia family!

10 Likes

In addition to our weekly meetings during the off-season, the team spent a lot of time volunteering within our community. Among other places, we volunteered with other FIRST teams, in hospitals, and in a shelter for abused women.

The COM (Community, Outreach and Media) subteam has a somewhat different offseason than the mechanical and programming subteams. In addition to training students in media and writing, they also focus on volunteering and starting new community projects. In that way, it’s really a year-round effort.

At the start of the offseason, we spent time training our new team members in the Adobe apps. Now that they are proficient, they’re working on designing our annual kickoff countdown animations that will be uploaded to Instagram and TikTok. As a subteam, we’re also preparing to write our Impact submission while thinking up new ways to volunteer and start new community projects.

Over the summer we had our annual kindergarten robotics festival in which the kindergartners participate in STEM activities. As the offseason progressed, we participated in more projects and events, including our Sukkot and kindergarten fairs. At those events we hosted a total of over 300 people and exposed them to the world of STEM! The fairs were full of STEM-themed activities, including building robots based on WeDo 2.0 and EV3, designing LEGO vehicles, and building various toys from craft materials. They also featured a showcase of our 2023 robot, Scorpio.

After those events, the war in Israel started. It was now very hard to volunteer in some of the institutes we previously had attended. So we looked closer to home and found that the Haifa university dorms were hosting many families who were displaced from Israel’s northern border due to the attacks from Hezbollah. These families include dozens of children, whose schooling has been unfortunately interrupted. So we started volunteering there weekly, teaching the kids STEM and helping inspire their creativity.

As the build and competition seasons come near, we’ll be continuing our volunteer work and proceeding to do our part in improving our community.

8 Likes

In programming, we’ve also been putting in time this offseason to improve our skill set for the upcoming season. We’ve been focusing on a few things.

Computer Vision & Localization

Our overall goal for the offseason was to create a vision system that is accurate at every relevant location on the field. For example, in 2023 it didn’t need to be particularly accurate in the middle of the field, but near the grid it needed high accuracy in order to generate a path to score on the grid.

We did a lot of research into the options for coprocessors. Following Anand’s Coprocessor roundup, and a few tests of our own with our Limelight 2+, Raspberry Pi 3/4, and Orange Pi 5, we learned a few useful facts:

  • The Limelight, though easy to set up, lacks accuracy. The maximum performance we could muster was 30 fps at a resolution of 320x240. This would work for some close-range vision, but it couldn’t accurately detect targets at a far enough range.
  • The Raspberry Pis gave some more promising results. The measurements were more stable but were still limited by low framerate and high latency.
  • The Orange Pis seem to be working a lot better. We can run them on 800x600 resolution with 20 fps and 50 ms latency. We bought two, and plan to buy two more for the season.

For a camera, we went with the Arducam OV2311. The global shutter is critical for avoiding rolling-shutter distortion while the robot is in motion. These specific cameras had lower latency than other cameras we looked at and came highly recommended by team 6328.

Using this new hardware, we set up each Orange Pi with one camera and PhotonVision. We mounted the cameras onto the robot facing in opposite directions and also mounted the Limelight as an additional vision system. Each coprocessor works individually to detect AprilTags, and their position information is sent over NetworkTables to the roboRIO. From there, we use a WPILib swerve pose estimator. Here is a video of our pose estimation (of course with the duck bot in AdvantageScope):
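As a rough illustration of why fusing vision with odometry works, here is a toy 1D blend. This is not the Kalman-style filter that WPILib’s swerve pose estimator actually uses, and all names and numbers here are made up for the sketch:

```java
// Toy 1D pose estimator: odometry deltas accumulate drift, while absolute
// vision measurements (e.g. from AprilTags) pull the estimate back toward
// the truth. Illustrative only; not the WPILib implementation.
class PoseEstimator1D {
    private double estimate = 0.0;

    // Integrate a relative odometry measurement (drifts if biased).
    void addOdometryDelta(double deltaMeters) {
        estimate += deltaMeters;
    }

    // Blend in an absolute vision measurement; trust in [0, 1] plays the
    // role of the vision standard-deviation tuning in the real estimator.
    void addVisionMeasurement(double measuredMeters, double trust) {
        estimate += trust * (measuredMeters - estimate);
    }

    double getEstimate() {
        return estimate;
    }
}
```

In the real code, the equivalent knob is the set of vision measurement standard deviations passed to the WPILib pose estimator.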

Robot Simulation

In past years, we have used a few simulations to help test and debug our code. In 2022, we attempted to simulate the shooter, climber, and vision systems, but all ended up being too far removed from the real systems to be useful. The simulated mechanisms were fairly arbitrary, difficult to implement into our code structure, and didn’t end up helping improve the final robot.

After being inspired by team 6328, we decided to implement an IO code structure, using AdvantageKit. Instead of our code being separated simply as subsystems and commands:

The code is now structured such that each subsystem refers to an IO layer to set its outputs:

The IO layer contains only the hardware API calls, essentially acting as a wrapper for the robot hardware. Each subsystem class has an object called IO, which represents either the real or simulated hardware depending on the code state. In order to have the same code control either a Falcon or NEO swerve, we implement one IO class for TalonFX and one for SPARK MAX, and pass the proper one to the same swerve subsystem code. The subsystem sends commands to this IO layer indifferent to the “realness” of the robot, and the IO class directs the commands to the proper hardware, whether it is running in real life or simulation.
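A minimal sketch of that IO-layer pattern in plain Java (class and method names here are illustrative, not our actual code, and the real version uses AdvantageKit’s input logging):

```java
// The subsystem talks only to this interface; whether the hardware is real
// or simulated is decided once, when the subsystem is constructed.
interface FlywheelIO {
    void setVelocity(double radPerSec); // command the mechanism
    double getVelocity();               // read back the measured velocity
    default void update() {}            // advance sim state; no-op on real HW
}

// Real implementation: would wrap TalonFX or SPARK MAX API calls.
class FlywheelIOReal implements FlywheelIO {
    private double commanded = 0.0;
    @Override public void setVelocity(double radPerSec) {
        commanded = radPerSec; // placeholder for the motor controller call
    }
    @Override public double getVelocity() { return commanded; }
}

// Simulated implementation: a crude first-order lag instead of hardware.
class FlywheelIOSim implements FlywheelIO {
    private double velocity = 0.0;
    private double setpoint = 0.0;
    @Override public void setVelocity(double radPerSec) { setpoint = radPerSec; }
    @Override public double getVelocity() { return velocity; }
    @Override public void update() { velocity += 0.5 * (setpoint - velocity); }
}

// The subsystem code is identical for the real and simulated robot.
class Flywheel {
    private final FlywheelIO io;
    Flywheel(FlywheelIO io) { this.io = io; }
    void periodic() { io.update(); }
    void shoot(double radPerSec) { io.setVelocity(radPerSec); }
    boolean atSetpoint(double target) {
        return Math.abs(io.getVelocity() - target) < 5.0;
    }
}
```

Swapping a Falcon swerve for a NEO swerve then just means constructing the subsystem with a different IO object.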

This method allows us to find different kinds of bugs even before receiving a physical robot, such as:

  • Unit-conversion and kinematics miscalculations outside of the IO classes.
  • Logic errors in subsystems, commands, and command groups.
  • Logging incorrect units or simply the wrong fields.

Additionally, the simulation allows us to visualize the robot both in 2D and in 3D. In theory, this means that we can see when the code doesn’t work without any physical hardware to test it on. For example, when creating paths for our 2023 arm to move between setpoints, the visualization would allow us to see whether the path planner succeeded in creating a path that the real robot can follow. Here is an example:

There are, however, a few things this setup can’t solve in simulation: tuning PID gains, diagnosing real-world mechanical problems, and exercising the smart motor controllers’ APIs. To simulate the smart motor controllers, we need a simulator class that imitates the API and unit system of the real motors and runs the controller’s internal PID loop. For this, we developed TalonFXSim and SparkMaxSim classes. The PID simulation isn’t perfect, of course, but it has helped us find a starting point for empirical tuning. In the future, we plan to add more motors and use a more accurate model.
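The rough idea behind such a simulator class looks like this (a toy version; our actual TalonFXSim/SparkMaxSim are more involved and mirror the vendor APIs):

```java
// Toy "smart" motor controller simulator: like the real firmware, it runs
// its own closed loop (here just a P term) against a plant model, so robot
// code can call the same setpoint-style API in simulation.
class SimSmartMotor {
    private double kP = 0.0;
    private double position = 0.0;  // plant state, in rotations
    private double setpoint = 0.0;

    void configP(double kP) { this.kP = kP; }
    void setPositionSetpoint(double rotations) { setpoint = rotations; }

    // Call every loop; dt in seconds (the real firmware runs much faster).
    void update(double dt) {
        double volts = kP * (setpoint - position); // internal P loop
        volts = Math.max(-12.0, Math.min(12.0, volts));
        double velocity = volts * 10.0;            // crude motor model
        position += velocity * dt;
    }

    double getPosition() { return position; }
}
```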

If you want to learn more, I recommend looking into team 6328’s code, and the AdvantageKit framework, where this idea originated.

We are also working on a simulation that incorporates vision. Ideally, this will help us find the optimal place to put our cameras on the robot during the season. We’d like to do some kind of statistical analysis, so recommendations for such tools are appreciated.

Autonomous

We’ve had problems with our autonomous for years now. In 2022, our auto paths seemed slow and inconsistent. This improved somewhat in 2023, but not significantly. In order to solve this problem, we took a hard look at exactly what our autonomous routine was doing wrong, and how other teams got their auto routines to work correctly.

First, we realized that the most important criterion for a successful autonomous routine is consistency. It doesn’t matter if you can score 20 points in auto if you only do so a third of the time. Second, the accuracy of the practice field is crucial. In 2023, the grid we built was crude, and our measurements of the field were not very good. This significantly hurt our ability to tune our autonomous routines.

To solve the first problem, there are a few things we plan to implement. After talking with a few teams, we realized how important vision-based localization is to their auto routines. A few teams even said they struggle running in auto without vision. This is great because we had already started working on localization in the off-season.

When it comes to improving the accuracy of the field, the only thing we can do is change how much importance we assign it. Having an accurate field is only a matter of mindset, time and resources spent, and the tools we use to measure it. This year we will get a dedicated practice space in a large tent outside of our workshop for the first time, so that will help us keep an accurate field for testing autonomous routines.

The greatest problem with the consistency of our auto is the development process, which up until now we didn’t really have. We have always focused mainly on the teleop period, which is much simpler programmatically, and we would tune and perfect the systems only to the point that the drivers could use them. But this is not enough. The solution is to plan ahead: develop requirements for each system before tuning, and continue tuning until those requirements are met. For example, in 2023 our arm needed an accuracy of about 2 cm to consistently place game pieces on the grid. Additionally, the autonomous required the arm’s acceleration to stay below a certain value, because otherwise it would throw off the drivetrain odometry. Creating these guidelines in advance for each system will allow us to optimize the robot’s operation for auto as well as teleop. We hope that using simulations as described above will also let us start fixing bugs earlier in the season, so that we can spend more time tuning and perfecting the systems once the programming team gets the robot.
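One way to make such requirements concrete is to write them as executable checks that the simulated system must pass before tuning is considered done. A sketch, with made-up thresholds loosely based on the 2023 arm example:

```java
// Hypothetical requirement checks for a simulated arm: position error under
// 2 cm, acceleration under a limit. The numbers are illustrative.
class ArmRequirements {
    static final double MAX_POSITION_ERROR_M = 0.02; // 2 cm
    static final double MAX_ACCEL_M_S2 = 4.0;        // illustrative limit

    static boolean meetsAccuracy(double targetM, double actualM) {
        return Math.abs(targetM - actualM) <= MAX_POSITION_ERROR_M;
    }

    // velocities sampled at a fixed period dt (seconds)
    static boolean meetsAccelLimit(double[] velocities, double dt) {
        for (int i = 1; i < velocities.length; i++) {
            double accel = Math.abs(velocities[i] - velocities[i - 1]) / dt;
            if (accel > MAX_ACCEL_M_S2) return false;
        }
        return true;
    }
}
```

Running checks like these against logged or simulated data gives a clear pass/fail target instead of “good enough for the drivers.”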
 


Additionally, this year we decided to separate our robot code from our common libraries. Our swerve code, vision code, and all utilities now live in a separate repository, published via JitPack. This lets us use the same code across all projects and branches, reducing inconsistencies between them. It also means we’ve had to learn quite a bit about Gradle and publishing code. Currently, we have a problem where our library is published together with its dependencies, creating duplicate copies of those dependencies. If anyone has an idea on how to publish the code without bundling its dependencies, your help would be appreciated.

Finally, we’ve spent a lot of time this offseason getting accustomed to new libraries, specifically AdvantageKit, Phoenix Pro, REVLib, and PathPlanner. In the past we haven’t taken full advantage of what these libraries have to offer, and we want to change that for the upcoming season.

10 Likes

This is some great info. On the subject of coprocessor performance, I have accuracy-based test results in my PhotonVision 2024 Beta results: Coprocessor - Google Drive
The tl;dr is that the LL3 offers excellent pose accuracy, even if it’s more affected by motion blur. The overall FPS is comparable to an OPi 5 if you want similar range and accuracy.

There are some issues with my testing, so I need to verify that the pose stability is being read properly, but I encourage you to check the stability and accuracy vs. distance and to do a CalibDB calibration on your camera instead of using the stock PV one.

2 Likes

Seems like you had a great off-season; my team and I went for a similar solution when it comes to localization. I wanted to ask, why did you go with the OV2311 and not the OV9281? It seems like its only advantage is its resolution, but the OPi 5 doesn’t support 1600x800 at 20+ fps like you mentioned.

I think that number is inaccurate; I don’t remember exactly what it is. I’ll check it out when I can and get back to you.

We decided to go with the OV2311 because we thought we might get to a good fps at 1600x800, and we thought we might use team 6328’s software for vision to get there if needed.

In preparation for kickoff, we’ve been reviewing our plan for analyzing the game and developing a cohesive robot concept. It currently stands as follows:

DAY ONE

  1. Watch the kickoff stream as a team. (Due to time zones, the game is revealed around 8pm local time)
  2. Read through the rules in groups of 4-5 students. The main focus at this point is on three sections: Arena, Game Rules, and Match Play. Guided question worksheets are used to help the groups review the rules critically. (Google translated: 1 and 2)
  3. Reconvene and each group presents a summary of their section. Members are encouraged to ask questions about rules they don’t understand.
  4. In parallel, a group of team alumni reviews the rules and makes notes of important rules, penalties, strategies, etc. These notes are given to the team leadership to help ensure the student-led process doesn’t miss anything important.

DAY TWO

  1. Another short review of the rules to make sure everyone understands the game
  2. List all of the possible abilities / features a robot could have in order to play the game. This includes things like driving in all directions, crossing *insert obstacle*, scoring in *insert goal*, etc.
  3. Break into groups of 4-5 students. Each group comes up with 2-4 robot archetypes to play the game. For each archetype, they choose which tasks it accomplishes, which abilities from the list it needs, and how it divides up the match time.
  4. Reconvene and each group presents their robot archetypes. Expecting a lot of crossover between groups, combine ideas into several overarching archetypes.
  5. Debate which archetype to choose. (We usually try to avoid full team debates because they quickly devolve to side-conversations, but here there’s no other option)
  6. The goal is to end the day with a strategy chosen for how the robot will play the game.

DAY THREE

  1. Based on the archetype chosen, place each of the listed abilities in one of four categories:
    – Must have (necessary to fulfill the chosen archetype)
    – Nice to have (will help the robot perform better but not strictly necessary)
    – Explore (improvements we’ll take if they’re easy, but not at the expense of other abilities)
    – Skip (not useful, or actively harmful, for the chosen archetype)
  2. Divide the tasks into several mechanisms. According to the strategy and abilities chosen, decide on basic requirements for each mechanism. (e.g. reach *insert target*, shoot with *insert degree of accuracy*)
  3. Brainstorm design ideas for each mechanism. No bad ideas at this point.
  4. Whittle the design ideas down to 1-3 options per mechanism. (depending on complexity, resources, etc)
  5. Assign each mechanism team (decided before the season) one design to prototype

DAYS FOUR TO SEVEN

  1. Prototyping each of the mechanism designs in teams
  2. A separate team of students lays out the field markings and builds the field elements

DAY EIGHT

  1. Decide which mechanism designs to use on the robot
  2. Assemble all mechanisms into a “blockbot” (what some refer to as Crayola CAD) to lay out the general robot design and designate space for each mechanism

CONTINUED

  1. Further rounds of prototyping if necessary. Finalize geometries, critical dimensions, etc.
  2. Design and CAD each mechanism individually in separate Onshape documents
  3. Integrate all mechanisms together into a cohesive robot
     

Additionally, we had a team-wide discussion about our expectations and goals for the season. Here are some of the items the students listed:

  • Qualify for the Championship
  • Win a district event and/or finalists at DCMP
  • The robot should work reliably
  • Ease of access/repair on the robot
  • Build a robot everyone is proud of
  • Reliable autonomous routines
  • Win robot awards at both district events
  • Submit a good Impact award entry and present well for EI
  • Win a non-robot award
  • Have a fun work environment where everyone contributes
  • Make friends with other teams
8 Likes

Our offseason robot is fully assembled, just in time for the season to begin.


The programming team got their first chance to test it, and the swerve drive seems to be working well. A bit more fine-tuning and hopefully it will be driving perfectly.

Our new robot has also gotten a chance to play around with its brother from last year.

We haven’t had a chance to test the other mechanisms yet, but if any of the mechanisms on this robot end up being relevant for this year’s game then that may be one of the programmers’ tasks at the beginning of the season.

Good luck to everyone in 2024!

9 Likes

Yesterday’s meeting was all about deciding how we want to play Crescendo. Our game analysis was based around our desire to control our destiny throughout the competition. This means we want to maximize RPs in order to seed high during quals and become an alliance captain, rather than playing a secondary role and hoping to be picked.

For teleop, in order to maximize both points and chances of securing the Melody RP, we decided that the ideal cycle would be repetitions of 2 notes in the Amp followed by as many notes in the Speaker as possible in the Amplified 10 seconds. The ability to be opportunistic and steal half-field cycles while Amplified will be a big difference-maker. In order to decide from where we want to shoot, we divided the wing into five sections: the subwoofer, and sections 1-4.

For the endgame, we looked at the minimum combinations that would give us the necessary 10 climb points for the Ensemble RP. The options are:

  • 3 robots climbing separately + 1 spotlight
  • 2 robots climbing together + 1 climbing separately
  • 2 robots climbing together + 1 spotlight
  • 2 robots climbing separately + 1 trap

We didn’t want to rely on the human player’s ability to earn a spotlight in order to get the RP, and at the district level there is a low chance of having three alliance partners able to climb. In order to need only one climbing partner for the RP, we would either need to be able to lift an alliance partner, or be able to score in the trap. We decided that the trap was more feasible given the extension restrictions and challenges of an uncertain CoM hanging from the chain. Given the high point value of the trap compared to the time needed, we may decide to end the teleop period early in order to score in multiple traps.

With our general strategy decided, we listed all of the abilities a robot might want to have and then categorized them according to Must Have, Nice to Have, Explore, or Not Attempting.

Chassis
  • Driving regardless of robot orientation (swerve): Must Have
  • Driving anywhere on the field (low robot): Must Have
  • Leave in autonomous: Must Have
  • Park endgame: Must Have

Speaker
  • Scoring from the subwoofer: Must Have
  • Scoring from wing section 1: Must Have
  • Scoring from wing sections 2-4: Nice to Have
  • Scoring from outside the wing: Explore
  • Scoring from the stage: Explore
  • Scoring while moving: Explore
  • Scoring regardless of chassis orientation: Explore
  • Shooting from the source to the middle of the field: Explore

Amplifier
  • Scoring in the amp: Must Have
  • Scoring from outside the amp zone: Explore

Trap
  • Scoring in the trap: Must Have
  • Scoring without climbing: Explore

Intake
  • Ground intake: Must Have
  • Intaking while moving: Must Have
  • Intaking from multiple sides: Nice to Have
  • Intake directly from the source: Explore
  • Intake from the air (catching): Not Attempting
  • Outtaking (placing a note on the ground): Nice to Have

Climber
  • Climbing by ourselves: Must Have
  • Buddy climb (lifting an alliance partner): Not Attempting
  • Second robot in harmony: Nice to Have
  • Moving along the chain: Not Attempting
  • De-climbing: Must Have
  • Auto-balancing climb: Not Attempting

Breaking that list down, we have the following marked as Must Have, meaning abilities that are a requirement to play the strategy we chose:

  • Driving regardless of robot orientation (swerve)
  • Driving anywhere on the field (low robot)
  • Leave in autonomous
  • Park endgame
  • Scoring in the speaker from the subwoofer
  • Scoring in the speaker from wing section 1
  • Amp
  • Trap
  • Ground intake
  • Intaking while moving
  • Climbing by ourselves
  • De-climbing

This is a pretty long list, but we are confident that the team has the manpower, resources, and technical experience to get it done.

A few things were set as Nice to Have, meaning they would be helpful to get more points and broaden our strategy, but aren’t strictly necessary:

  • Scoring in the speaker from wing sections 2-4
  • Intaking from multiple sides
  • Outtaking (placing a note on the ground)
  • Being the second robot in a harmony

And we categorized a number of abilities as Explore, meaning we’ll try to do them if we have the resources and it works well with our general robot concept, but we won’t stress if we can’t do them:

  • Scoring in the speaker from outside the wing
  • Scoring in the speaker from the stage
  • Scoring in the speaker while moving
  • Scoring in the speaker regardless of chassis orientation
  • Shooting from the source to the middle of the field
  • Scoring in the amp from outside the amp zone
  • Scoring in the trap without climbing
  • Intaking directly from the source

Today, we will discuss various design ideas for each of the six mechanisms in order to accomplish these tasks, and narrow them down into a few that look promising. Ideally there will be some crossover between some of the mechanisms, so we will pay attention to where we can simplify the robot by combining mechanisms without sacrificing efficiency. The first round of prototyping starts tomorrow and will go until Friday. We’ll be sure to post updates with pictures and videos as everything gets built and tested!

9 Likes

I’m in love with this swerve module design. Nicely done - let us know how it performs.

3 Likes

With some mechanisms having more viable options and needing more prototyping than others, we split the mechanical team into five groups:

  • Intake
  • Amp
  • Shooter: dual side wheels
  • Shooter: single side wheels
  • Trap/Climber

We will also want to prototype a shooter with top and bottom rollers, but don’t have the manpower to test everything in parallel.

Intake

Having seen the success of many Ri3D teams using an over-the-bumper intake, and not wanting to risk an under-the-bumper intake being hampered by saggy bumpers, we pretty quickly decided on an over-the-bumper intake. The general idea is to use a few rollers, and possibly belts between the rollers, to lift the Note off the ground and carry it into the robot. Our first prototype was adjustable in order to find the best roller geometry. We found this to be a successful concept and relatively forgiving of roller placement.

With these measurements, we made a more finalized prototype with fixed geometry. It worked well, but often shot the Note directly up rather than into the robot. So we added a carbon fiber cross-bar after the second roller in order to guide the Note down and into the robot. This seems to be working well, though we’re still playing around with the exact placement to get the best angle.

Amp

We wanted to try the most basic mechanism for the Amp to see how it would work and what we could do to improve it. Our prototype consists of two rollers with compliant wheels to grab the Note and drop it into the Amp. When testing with the rollers pointing about 45° down, we found that the prototype had a pretty wide range of heights and distances from the Amp in which it still scored.

When pointing up though, it was much more selective about heights and distances. With a 2” gap between the wheels, it holds on pretty securely to the Note.

We also tested it as a floor intake and it worked decently.

Up until now we tested this mechanism as a passthrough; the Note enters from one side and goes into the Amp on the other. We also wanted to test the possibility of using it more like a gripper, where the intake and output both happen from the same side. We added a makeshift backstop on one side of the rollers to see if we can reliably intake, hold, and score the Note all from the same side.

Shooter: Dual Side Wheels

We first built a basic prototype out of some bearing blocks and spare tubes. This kind of worked, but often misfired.

Our next prototype was a bit sturdier and worked more reliably, but still didn’t shoot very far. The hand drills powering it just weren’t fast enough to transfer enough force to the Note. However, we were able to get a sense of which types of wheels worked better than others and how much compression we need.

From there, we went to a fixed-position prototype with blue 50A 4” compliant wheels, powered by two Falcon motors. This provided the best results so far, though we’re still playing around with gear ratios, inertia wheels, etc.

Shooter: Single Side Wheels

We are also testing a shooter with wheels on one side and a fixed wall on the other. This would be simpler than having wheels on both sides, and would naturally impart stabilizing spin on the Note. We haven’t had great results from our prototype so far, but it has the same problem as the first dual wheel prototypes where the hand drills aren’t powerful enough. We’re now working on a new prototype driven by a Falcon motor which will hopefully provide better results.

Trap/Climber

So far, we’ve been considering the climb and trap as a single mechanism, but prototyping them separately. Once we have a concrete robot concept and a Stage to test on, we’ll work on prototyping the two combined.

For the trap, we wanted to see if the Note could be pushed into the Trap door with enough force to open it. We built a prototype to test this, and it seems like the answer is yes.

We also began investigating how an elevator climber would interact with the chain, specifically how the robot center of mass affects its stability. For this, we hung a temporary chain and attached some hooks to uprights on a spare chassis. By varying the position of the hooks and the location of weights on the chassis, we could see how stable or unstable the robot was.

Field Elements

One additional mechanical group has been focused on building the field elements on which we can test our prototypes. In general we’re using the REV designs cut on our CNC router, with some minor adjustments based on the wood we have available and in order to get everything to fit. So far we’ve built the Amp, Speaker, and a janky version of the Trap. We’re working on building the Stage now, which should hopefully be ready to test on soon.

Swerve Drive

While the mechanical team is working on prototypes, the programming team has been exploring some of the new features in Phoenix Pro, which we just recently purchased for the season. We modified our existing swerve code to use FOC on the SwerveX drive Falcons, and the robot is noticeably faster. Compare the robot’s speed in three videos:

Last season with non-FOC Falcons and a 7.8:1 ratio

This summer, still with non-FOC Falcons but drive ratio now 6.55:1

Now, with FOC Falcons and 6.55:1 drive ratio

8 Likes

Be sure to limit the trap door opening to ~3" (equivalent to running into the highlighted tabs)

Are those only for shooters with wheels on the left/right?
Top/bottom wheel shooters may have a role here too.

That’s a good point. This Trap setup was just something temporary to hold us over until our Stage build is finished (hopefully tomorrow). If the real one isn’t ready soon we’ll definitely work on making the temporary one a bit more accurate to spec.


Coming soon to an OA thread near you

We had a little mishap with our electronics while prototyping. See if you catch it.

 
Luckily the safety systems worked as designed and no one was hurt. The only casualties are a SPARK MAX and one PDP channel.

3 Likes

A. The odds of catching that on camera…
B. What connector is that? We use PowerPoles, so the chance of this happening is minor

One of the students sent me this as part of a longer video asking why the SPARK MAX wasn’t powering on…took me a few watches to realize what was going on.

Those are XT90s. We never pinpointed the cause of the issue, but since we know a high current was traveling into the SPARK MAX I have to assume there was a short somewhere inside/after that.

1 Like

Wdym? It seems like the black wire is going into the red and vice versa

10 Likes