FRC 6328 Mechanical Advantage 2020-2021 Build Thread

Where do you get your flexible electrical cables?

1 Like

For the flexible welding cable, it’s pretty readily available on Amazon; we’re currently using this: https://www.amazon.com/gp/product/B019O1MANK
…but there are certainly other options.

For other wire, we’ve got a mix of PowerWerx (which is very good quality, but higher priced + costly to ship), Amazon-sourced (make sure what you’re ordering is pure copper, as there’s a lot of CCA listings), and locally-bought wire spools.

2 Likes

I’ve done a season of electrical, and the speed controller wires are fairly flexible. However, the battery cables are not, and we’ve never attempted to fabricate our own. We always used the ones from KOP or sourced from AndyMark. How did you put the battery clip on?

1 Like

All the wiring changes look great! We used the JST RCY-series in 2017 but ran into a lot of quality control issues and problems ensuring the CAN cables were plugged in correctly. Since then, we have used locking JST SM connectors and they have been a godsend for us. I’m not sure how they compare to the locking Molex, but I thought I would provide the recommendation if y’all wanted to stay on the JST line.

5 Likes

@Aaron_Li Crimping the SB50 connectors requires a hefty crimp tool; fortunately the same crimper works for both those and the 6 AWG ring terminals needed on the other end. We have a long-handled crimper that produces decent crimps (and the students get a workout using it!) but I’ve considered getting a hydraulic-style one.

You make a good point about battery cables. I think we may eventually start to use the flexible cable on battery harnesses too. In fact, AndyMark now sells a flexible version - am-4483. For the moment, though, we have a LOT of the standard battery harnesses from KOP, FIRST Choice, etc. so we’ll probably use those up first.

1 Like

Here is a little message from the Game Design Team:

Hey everyone,

Team 6328’s Game Design Challenge team has been working very hard over the past few weeks and has reached a point where we can share the game we’ve created with the FRC community. Included in this post are the required documentation (game overview, notable field elements, expected robot actions), the ELEMENT description, field images, CAD models of some of the field structures, and parts of what we plan to include in the supplementary information. We also plan to include in our submission a game video, additional supplementary information, multiple field images, and an entire CAD field. We would like to hear feedback and be able to clarify anything that’s unclear, in the hopes of creating the best game possible. So here it is:

Game Overview:

In MALWARE MAYHEM, two alliances of 3 robots each work to protect FIRST against a malware attack from the anti-STEM organization LAST (League Against Science & Technology) and ultimately save future FRC game files including those for the top-secret water game. Each alliance and their robots work towards collecting lines of CODE and deploying them in the INFECTED CORES in the CPU. Near the end of the match, robots race to share CODE into a shared cache with the opposite alliance, and at the end of the match robots collect firewalls and install them into the CPU.

During the 15 second autonomous period, robots must follow pre-programmed instructions. Alliances score points by:

  • Moving from the initiation line
  • Deploying lines of code into the infected cores of the CPU
  • Deploying lines of code into the uninfected cores of the CPU

During the 75 second tele-op period, drivers take control of their robots. Alliances score points by:

  • Continuing to deploy code into the infected cores of the CPU
  • Continuing to deploy code into the uninfected cores of the CPU

During the 30 second first endgame period, robots score points by:

  • Continuing to deploy code into the infected cores of the CPU
  • Continuing to deploy code into the uninfected cores of the CPU
  • Deploying 5 lines of code into the shared cache to achieve stage 1
  • Deploying 10 lines of code into the shared cache to achieve stage 2
  • Deploying 15 lines of code into the shared cache to achieve stage 3

During the final 30 seconds of the match, robots score points by:

  • Continuing to deploy code into the shared cache
  • Installing firewalls into the CPU
  • Hanging from installed firewalls

The alliance with the highest score at the end of the match wins.

Game terms:

AUTO - The first 15 seconds of the match where the robot moves by using code
CODE - An 18 inch long chain that is the main game piece (the ELEMENT)
CORE - One of 7 scoring areas on each side of the CPU
CPU - The central scoring unit that divides the two halves of the field
DATA BUS - The tunnel under the CPU
DEPLOYMENT PERIOD - the last 30 second period following POSITIONING PERIOD
FIREWALL - A suitcase-shaped object with a handle
INFECTED CORE - Scoring areas on the CPU that are highlighted with LEDs
INITIATION LINE/DEFENSE LINE - The line X feet from the CPU
LAST - League Against Science & Technology
MALWARE MAYHEM - The name of Team 6328’s concept game
PLAYER STATION - the station where a human interacts with the game piece to give to the robot
POSITIONING PERIOD - The 30 second period following TELEOP
PROTECTED ZONES - Areas on the field in which defense is penalized
ROBOT - the mechanism used to complete the tasks
SECURITY CONTEXT - A trapezoidal platform which stores the FIREWALLS prior to deployment
SHARED CACHE - Scoring location for the Coopertition mission
SHARING CODE - This is used to allow the alliances to cooperate
TELEOP - The time after auto where players control the robot (2:15)
UNINFECTED CORE - Scoring areas on the CPU that are not highlighted with LEDs

Notable Field Elements:

Shared Cache: A scoring location on the opposite alliance’s driver station wall. The scoring location is a single window the same size as the windows on the CPU and is directly over the center of the alliance wall. CODE can only be scored here during the POSITIONING PERIOD and DEPLOYMENT PERIOD, and teams will have to reach different scoring tiers to receive equal points for both alliances.
Security Context: A platform X feet high that houses 3 FIREWALL units: one directly in front, and two angled on either side. The Security Context is located in front of the alliance station wall of the same alliance color.
CPU: A large structure that separates the two halves of the field, consisting of seven CORES on each side and a central opening (DATA BUS) for interaction between alliances. A total of 5 goals span the upper level of the CPU, and there are an additional 2 goals on either end of the CPU near the edge of the field. Throughout the match, different CORES will be randomly highlighted with LEDs, marking them infected. The three high cores in the center have a slot below the scoring opening for installation of the FIREWALL. An additional scoring area for the FIREWALL is located on each side of the CPU near the edge of the field.
Player Stations: There are four player stations located in the four corners of the field. There are blue and red player stations on the blue alliance station wall and the same for the red alliance station wall, for a total of two per alliance. The player stations on the opposite side of the field of the alliance station have PROTECTED ZONES around them, while the stations on the same side do not.

Expected Robot Actions:

Auto:
During the Autonomous Period, teams are tasked with moving off of the INITIATION LINE such that no part of their ROBOT is over the line. Teams may score in CORES, but they must collect the code through the PLAYER STATION.

Tele-Operated Period:
During the 75 second Tele-Operated portion of the game, the team’s main task is to shoot lines of code into the CORES of the CPU. The primary locations from which teams can receive code are the PLAYER STATIONs on their own side and on the opposite side of the field.

Positioning Period:
During the POSITIONING PERIOD, which spans the second-to-last 30 seconds of the match, teams may retrieve FIREWALLS from the SECURITY CONTEXT, and prepare for the DEPLOYMENT PERIOD. Teams may also race to the other end of the field and deposit CODE into the SHARED CACHE. However, any robot that passes through the DATA BUS and does not return by the end of the Positioning Period must remain on that side for the remainder of the match.

Deployment Period:
During the DEPLOYMENT PERIOD (the final 30 seconds), teams are no longer allowed to pass through the DATA BUS. During this period, teams are expected to deploy FIREWALLS by inserting a FIREWALL into the CPU and pulling themselves completely off of the ground. Teams on the opposite alliance’s side are expected to play defense on climbing robots, deploy code into the shared cache, and/or try to prevent FIREWALL deployment. However, teams must be careful not to cross the DEFENSE LINE.

ELEMENT Description:

The ELEMENT in MALWARE MAYHEM is the main game piece in our game. The ELEMENT represents lines of anti-malware code, and robots must deploy the code into the cores, infected cores, or the shared cache to earn points. The ELEMENT itself is an 18” plastic chain with 2” links. Each chain will have a magnetic component on one end, allowing it to be automatically scored as it passes through scoring locations. All scoring goals will have a magnetic detector that will detect any chain that is scored through that specific goal. The ELEMENT enters the field through the four player stations on the field. During auto, chains will not be located on the field; instead, all chains will be either pre-loaded into the robot or collected from the player stations.

Field Zones:

Initiation Line: At the start of each match, each team’s robot must start behind the auto line, a long strip of white spanning the entire width of the field on their alliance’s side. Once each robot of a team’s alliance crosses the plane of this line, a ranking point is earned. During the endgame period of the match, this plane acts as a defense line: in the last 30 seconds of the match, robots of the opposite alliance are not allowed to cross this line, so as not to interfere with gameplay (climbing).

Collection Line: This is the line that spans the width of the field at the end of the no-defense zones on each side of the field.

Collection Zone: This is the area of the field that spans from the COLLECTION LINE to the alliance wall. While in this zone, robots cannot attempt to deploy CODE. A robot must move completely outside of the zone before it can deploy CODE.

Scoring Zone: During TELEOP and the POSITIONING PERIOD, this is the area of the field that is in between the two INITIATION LINEs on either side of the field. During the DEPLOYMENT PERIOD, this zone is in between the COLLECTION LINE, and the INITIATION LINE on either side of the field.

Protected Zones:

Retrieval Zone: These zones are the marked areas on the field surrounding the human player station on the opposite side of an alliance’s driver stations. At all times during the match, defense is not permitted on robots that are in these zones.

Climbing Zone: This zone becomes active only during the DEPLOYMENT PERIOD. This is the area of the field in between the two INITIATION LINEs on either side of the field. During the DEPLOYMENT PERIOD, no defense is permitted on a robot in this zone.

CAD Models:

CPU

SECURITY CONTEXT

FIREWALL

Point Values:


(Point value table image, with columns for scoring at 100%, 75%, and 50%.)

SHARED CACHE Stages:

Points from shooting into the SHARED CACHE come in 3 stages. This means that each alliance in a match will only receive points after a certain number of lines of CODE have been deployed into the SHARED CACHE by both alliances. Once both alliances have deployed 5 lines of CODE, stage 1 is achieved. Once both alliances have deployed 10 lines of CODE, stage 2 is achieved. Once both alliances have deployed 15 lines of CODE, stage 3 is achieved.
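The tiered Coopertition logic above could be sketched as a small helper, where the achieved stage depends on the lesser of the two alliances' deployed CODE counts (a hypothetical illustration using the thresholds from the description, not actual game software):

```java
public class SharedCache {
    /**
     * Returns the Coopertition stage (0-3) given the number of lines of
     * CODE each alliance has deployed into the SHARED CACHE. A stage is
     * only achieved once BOTH alliances reach its threshold.
     */
    public static int stage(int redCode, int blueCode) {
        int both = Math.min(redCode, blueCode);
        if (both >= 15) return 3;
        if (both >= 10) return 2;
        if (both >= 5) return 1;
        return 0;
    }
}
```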

Before the due date, we plan to have a complete game video; more supplementary information, including game-specific penalties, robot construction rules (starting height, endgame height, bumpers, weight, etc.), the role of the human player, and more game piece details; and even more CAD models, including an entire field. We are also planning to create some modified FIRST logos.

Feel free to ask any clarification questions and thanks for helping out!

28 Likes

What did we say about virus games?

Edit: joke guys come on

9 Likes

This is a different type of virus…

1 Like

I assume this is stage 1, stage 2, and stage 3

2 Likes

I love the League against Science and Technology!

" Deploying 5 lines of code into the shared cache to achieve stage 1"
" Continuing to deploy code into the infected cores of the CPU"

If I had one suggestion to give, it would be to rename some of these actions. That’s a mouthful! You need something short and sweet enough that strategy teams, drivers, coaches etc can make themselves clear quickly. Maybe use acronyms for LOC (lines of code) and ICs/UCs (infected cores)?

5 Likes

Oh thank you, I am going off of what the game design team is saying and I would assume it’s that, but I will get back to this ASAP after discussing it with them.

1 Like

Did you guys find that the acceleration wheels were necessary? My team’s new, and we’re trying to talk ourselves out of the additional motors…

2 Likes

You will gain more from simplicity than from accelerators.

841 didn’t need them anywhere up to the front of the wheel - if you’re shooting from beyond that, YMMV.

Accelerators will let your overall volley get a couple tenths faster - 95% of teams have better things to do than chase tenths of seconds in individual game actions.

3 Likes

Hey everyone,

Team 6328’s business team has been staying active these past few months and we wanted to share some updates about our outreach and awards submissions. COVID-19 placed a number of limits on how we could go about outreach, but our switch to virtual formats has still allowed us to reach different audiences and track over 1660 outreach hours.

Outreach:

Since kickoff, both students and mentors on our team have been helping mentor our sister rookie Team 8604 Alpha Centauri. Some of our team members are regularly attending 8604’s different meetings - such as Innovation Challenge/Girl Up Advocacy, Game Design Challenge, programming, CAD and assembly, social hour, and all-team meetings - to join in on discussions and share our ideas, suggestions, equipment, and experiences.

6328 is excited to announce that we are partnering with 8604 to host an Innovation Challenge Practice Interviews Day for teams participating in the challenge on Sunday, March 21. (Note: Our team has opted not to participate in this specific challenge.) There will be 10-12 adult judges paired up to listen to teams’ 15-minute presentations/Q&A and then together provide written feedback. Judges are provided with a rubric to evaluate teams’ performances, and the event is completely free. Registration can be completed at www.littletonrobotics.org/practice .

This coming week on Wednesday, March 10 at 7 pm, 6328 and 8604 are also hosting a free and virtual college counseling seminar together. After a ton of positive feedback last year, Nancy Federspiel of College Consulting Services is returning to discuss the college admissions process, including changes over the past year. We have several teams participating from New England and New York. There are still some slots available for anyone interested, but attendance will be capped, so sign up as soon as possible! Register at College Admissions Information Webinar – Littleton Robotics .

Back in mid-December, we gave two livestream presentations as part of the 24 Hours of STEM initiative (www.24hoursofstem.org) organized by teams across the globe. Our software and programming team members presented “Software Training with Zumos” to talk about the in-house, summer-to-fall training program for students to build or strengthen their foundation in software. Our outreach team members presented “Moving from Rookie to Veteran” to talk about team sustainability and common issues that can come with the different life stages of an FRC team. The presentations were held in a Zoom meeting that was broadcast on YouTube. Links:
Software Training with Zumos: https://youtu.be/R3yaI9fQui4
Moving from Rookie to Veteran: https://youtu.be/LGhGRWOPB1U

Bolton FLL has been running 3 teams with 19 kids since September, and will be competing in the upcoming Qualifier in a few weeks. In advance of the competition, Bolton FLL is hosting a small scrimmage. Littleton’s FLL Challenge team of 6 students and 1 mentor has been meeting since late October, with team members ranging from 5th grade to 8th grade. Due to COVID, we paused the in-person meetings in November but plan to start back up soon. In the meantime, weekly 30-minute online project meetings have kept the students busy, totaling over 80 hours of meetings and preparation. 50% of the team is new to FLL, showing the continued expansion of our family tree.

Back in November, one of our founding mentors gave a presentation at the Beach Blitz virtual event, “Helping Girls Gain Confidence in STEM.” At the offseason event, many workshops/panels were streamed on Twitch and there were technical competitions including the Minibot Competition and Scouting Hackathon.
Girls in STEM presentation: https://youtu.be/RheIxH-LRns
Beach Blitz event: https://beachblitz.ocra.io/

In the past several weeks we have set up Zoom meetings with Team 811 The Cardinals to discuss outreach, marketing, social media, and how we’ve been working during COVID. Glad to be of help!

Awards:

For our Chairman’s Award submission, we decided to continue the theme of family through describing our team as one big family tree with several branches growing in different directions. We separated the essay into three parts to talk about the different aspects of our team: the roots, branches, and trunk. Under each part, we covered different topics including our background, team structure, sponsorship, outreach, sustainability, and how the pandemic has impacted us. The pdfs for the submission will be uploaded soon.

Please feel free to ask any questions and thank you for reading!

10 Likes

I don’t think they’re absolutely necessary; it will depend on your hopper and ball path to the shooter. You can check out our initial prototypes without an accelerator wheel earlier in this thread. Those didn’t really get high fidelity enough to be a fair comparison to our final shooter, but they can show you our evolution.

We decided to go with the accelerators on separate motors for controllability, so that the actual flywheel only had contact with one ball at a time. This made it easier to maintain a constant RPM for shooter accuracy. The accelerators were a lot less sensitive to variations, so those could probably be combined into the hopper if that works for your design. In our case, with the V hopper we needed some kind of additional motor to feed the balls up to our shooter, so having those wheels serve as accelerators made sense. If you have any more questions feel free to reach out and we can walk through more of our thought process and findings.
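To illustrate the "constant RPM" idea, here's a toy sketch (not 6328's actual code) of a feedforward + proportional velocity controller holding a flywheel at a target RPM, run against a made-up first-order motor model. All of the constants and names here are assumptions for illustration only:

```java
public class FlywheelSim {
    static final double KV_MODEL = 500.0; // steady-state RPM per volt (toy model)
    static final double TAU = 0.3;        // motor time constant, seconds
    static final double DT = 0.02;        // 50 Hz control loop

    /** Runs the control loop for the given number of steps, returning final RPM. */
    public static double simulate(double targetRpm, int steps) {
        double rpm = 0.0;
        double kF = 1.0 / KV_MODEL; // feedforward: volts per RPM
        double kP = 0.002;          // proportional gain on RPM error
        for (int i = 0; i < steps; i++) {
            double volts = kF * targetRpm + kP * (targetRpm - rpm);
            // First-order plant: RPM decays toward KV_MODEL * volts
            rpm += (DT / TAU) * (KV_MODEL * volts - rpm);
        }
        return rpm;
    }

    public static void main(String[] args) {
        // Converges to ~4000 RPM after 10 simulated seconds
        System.out.println("Final RPM: " + simulate(4000.0, 500));
    }
}
```

The feedforward term does most of the work of holding speed; the proportional term handles recovery after a ball contacts the flywheel.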

9 Likes

Here are copies of our Chairman’s submission essay and executive summary questions.

2021 Chairman’s Essay.pdf (59.3 KB)
2021 Chairman’s Executive Summary Questions.pdf (62.5 KB)

(edited because I originally forgot to attach the second file. Oops!)

6 Likes

Hi, I had a few questions about your code. I’ve been banging my head trying to implement it. I have gotten it to the point where it will run the paths, but the distance and angles are always off and never consistent.

My questions are,
How were kS, kV, and kA calculated? They’re commented as volts and volt seconds per meter.
How did you calculate your velocities and accelerations? I see that some accels and velocities are the same, but for our robot the acceleration is greater than our velocity.
How did you calculate the PIDF gains for this? And why is D such an extremely high value?
How do the Ramsete values B and Zeta affect how the robot interacts with the path?

Here is a link to my github repo

Thank you for your time,
Connor Shelton
Team 68 Programming Subteam leader

2 Likes

Not 6328 but I know these will be of great help:

Introduction to Robot Characterization — FIRST Robotics Competition documentation (wpilib.org)

Trajectory Tutorial Overview — FIRST Robotics Competition documentation (wpilib.org)

Trajectory Generation and Following with WPILib — FIRST Robotics Competition documentation

2 Likes

If you haven’t taken a look already, the resources linked above will all be very helpful in answering your questions. I’ll try to address them more specifically as best I can.

kS, kV, and kA are all retrieved from the robot characterization tool (see above). It’s definitely important to be careful about the units here. The variables in our code are commented as using meters, but this is because updateConstants converts them from inches to meters. I’d suggest either setting them as inches in updateConstants and letting it convert or removing the method altogether and doing everything in meters. There is an option in the characterization tool to select your units. Also keep in mind that these values are just used for the voltage constraint, which keeps acceleration within the limits of your robot’s electrical system. In general, they shouldn’t have a major effect on the quality of your tracking. The characterization tool will also give you a value for your empirical track width (which is often slightly greater than the true track width because of wheel scrub).

You can determine the robot’s theoretical maximum velocity using kV. The profile’s maximum voltage is limited to 10 volts, so dividing 10 by kV in volt seconds per inch will give you inches per second (or meters/second when working in meters). For example, our kV is 0.0722 so 10/0.0722 is ~138 inches per second. For acceleration, using the same value is a good starting point but certainly not required. Driving the robot around manually, you probably can get a good sense of what is or is not reasonable (for example, how many seconds does it take to reach top speed?). When in doubt, I’d suggest starting with low velocities and accelerations then working your way up until the profile can’t track accurately anymore. Keep in mind that the voltage constraint will also act as an upper limit on your acceleration.
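The arithmetic above can be checked with a one-liner (a minimal sketch, not from 6328's code):

```java
public class MaxVelocity {
    /** Theoretical max velocity = max profile voltage / kV. */
    public static double maxVelocity(double maxVolts, double kV) {
        return maxVolts / kV;
    }

    public static void main(String[] args) {
        // With kV = 0.0722 volt seconds per inch and a 10 V profile limit:
        System.out.println(maxVelocity(10.0, 0.0722) + " in/s"); // ~138.5 in/s
    }
}
```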

As for the PID gains, I’m assuming you’re referring to the velocity PID on the drive? Those constants are here in our code. We tune these by running at a target velocity for short distances and logging the actual velocity via NetworkTables. Using Shuffleboard, we can graph that value to check the response curve. We wrote this command to help with that process, which uses our TunableNumber class to update gains based on values sent over NetworkTables.
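The "tunable number" idea described above could be sketched like this. Note this is a hypothetical illustration, not 6328's actual TunableNumber class: here a plain static Map stands in for NetworkTables, and the class falls back to a compiled-in default when no dashboard value has been set.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a "tunable number": returns a default gain normally, or an
 * override pushed from the dashboard (a Map stands in for NetworkTables).
 */
public class TunableNumber {
    private static final Map<String, Double> dashboard = new HashMap<>();
    private final String key;
    private final double defaultValue;

    public TunableNumber(String key, double defaultValue) {
        this.key = key;
        this.defaultValue = defaultValue;
    }

    public double get() {
        return dashboard.getOrDefault(key, defaultValue);
    }

    /** Simulates an operator changing the value from the dashboard. */
    public static void set(String key, double value) {
        dashboard.put(key, value);
    }
}
```

The payoff is that gains read through such a wrapper can be adjusted between test runs without redeploying code.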

This page provides a good explanation of the gains for the Ramsete controller. However, these values are robot-agnostic so they shouldn’t require any tuning.

I don’t see any immediate issues with the code you’re running, so my best guess is that the issues you’re seeing might be caused by a poorly tuned velocity PID. This page also provides some useful troubleshooting steps. If the velocity PID looks OK, the next thing I’d suggest is logging odometry x and y over NetworkTables to ensure that it’s tracking the position accurately (that doesn’t have to be with a profile, you can just drive in tele-op or push the robot by hand).

We’re happy to answer any other questions you have.

6 Likes

We’ve talked a little bit about the work we’ve been doing for the Galactic Search challenge, and we’d like to share some updates on our progress. The general process we’ve defined for completing the challenge is this: we first select a path and place the robot in its starting position. While the robot is disabled, the operator pushes a button to run the vision pipeline. The robot determines which path to run based on the visible power cells and puts the selection on the dashboard. The operator then confirms that the path is correct before enabling (so that we don’t crash into a wall). Since there are no rules requiring that the selection take place after enabling, we thought that this was a safer option than running the pipeline automatically. The profiled path for each case is defined ahead of time from a known starting position, which means we can manually optimize each one.

We also considered whether continuously tracking the power cells would be a better solution. However, we decided against this for two reasons:

  1. Continuously tracking power cells is a much more complicated vision problem that would likely require use of our Limelight. Making a single selection can be done directly on the RIO with a USB camera (see below for details). Tracking also becomes difficult/impossible when going around tight turns, where the next power cell may not be visible.
  2. Motion profiles give us much more control over the robot’s path through the courses, meaning we can easily test & optimize with predictable results each time.

The Paths

These are the four paths as they are currently defined:

A/Blue:

A/Red:

B/Blue:

B/Red:

These trajectories are defined using cubic splines, which means the intermediate waypoints don’t include headings. This is different from the AutoNav courses, which use quintic splines. For this challenge, we don’t require as tight control over the robot’s path and so a cubic spline is more effective.
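For intuition on the spline types (an illustrative sketch, not WPILib's implementation): a cubic Hermite segment is fully determined by endpoint positions and tangents, which is why interior waypoints of a cubic path only need positions; the generator picks the interior tangents for smoothness, whereas quintic splines let you pin down the heading at every waypoint.

```java
public class CubicHermite {
    /**
     * Evaluates a 1-D cubic Hermite segment at parameter t in [0, 1],
     * given endpoint values p0, p1 and endpoint tangents m0, m1.
     */
    public static double eval(double p0, double m0, double p1, double m1, double t) {
        double t2 = t * t;
        double t3 = t2 * t;
        return (2 * t3 - 3 * t2 + 1) * p0
             + (t3 - 2 * t2 + t) * m0
             + (-2 * t3 + 3 * t2) * p1
             + (t3 - t2) * m1;
    }
}
```

A 2-D path applies this independently to x and y, so the tangent vectors double as headings at the endpoints.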

The starting positions for each trajectory are placed as far forward as possible such that the bumpers break the plane of the starting area. Given the locations of the first power cells, we found that starting the red paths at an angle was beneficial (just make sure to reset the odometry’s starting position accurately :wink:).

You may notice that our trajectories don’t perfectly match up with the power cells. This is for two reasons:

  1. The trajectory defines the location of the robot’s center, but we need the intake (which deploys in front of the bumper) to contact the power cells. Often, this means shifting our waypoints 6-12 inches. For example, the path for the second power cell in b/blue tracks to the right of the power cell such that the intake grabs it in the center during the turn.
  2. Our profiles don’t always track perfectly towards the end of the profile, meaning we end up contacting power cells on the sides of our intake. Shifting the trajectory to compensate is a quick fix for those types of problems.

The other main issue we had to contend with is speed. The robot’s top speed is ~130 in/s, but our current intake only works effectively up to ~90 in/s. Since most power cells are collected while turning, our centripetal velocity constraint usually slows down the robot enough that this isn’t an issue. However, we needed to add custom velocity constraints for some straight sections (like the first power cell of b/blue). There were also instances where the intake contacted the power cell before the robot began turning enough to slow down. Through testing it was fairly easy to check where it needed a little extra help.
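The region-based speed limiting described above can be sketched as a simple constraint check (a hypothetical illustration; WPILib's actual trajectory constraint API works differently): cap the requested velocity whenever the robot's position falls inside a rectangle.

```java
public class RegionVelocityConstraint {
    private final double minX, minY, maxX, maxY, maxVelocity;

    public RegionVelocityConstraint(double minX, double minY,
                                    double maxX, double maxY,
                                    double maxVelocity) {
        this.minX = minX; this.minY = minY;
        this.maxX = maxX; this.maxY = maxY;
        this.maxVelocity = maxVelocity;
    }

    /** Returns the allowed velocity at (x, y): capped inside the region. */
    public double limit(double x, double y, double requestedVelocity) {
        boolean inside = x >= minX && x <= maxX && y >= minY && y <= maxY;
        return inside ? Math.min(requestedVelocity, maxVelocity) : requestedVelocity;
    }
}
```

For example, a constraint over a straight section approaching a power cell would slow the robot to intake speed only there, leaving the rest of the path at full velocity.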

Here’s an example of the robot running the B/Red path:

The code for each path is available here:

Vision!

Running the vision pipeline on only single frames vastly simplifies our setup, as opposed to continuously tracking power cells. Rather than using any separate hardware, we can plug our existing driver cam directly into the RIO for the processing. The filtering for this is simple enough that setting it up using our Limelight was overkill (and probably would have ended up being more complicated anyway). Our Limelight is also angled specifically to look for the target, which doesn’t work when finding power cells on the ground. When the operator pushes the button to run the pipeline, the camera captures an image like this:

b-red-lowres

Despite this being a single frame, we quickly realized that scaling down the full 1920x1080 resolution was necessary to allow the RIO to process it. Our pipeline runs at 640x360, which is plenty to identify the power cells. Using GRIP, we put together a simple HSV filter that processes the above image into this:

b-red-threshold

To determine the path, we need to check for the existence of power cells at known positions. Since we don’t need to locate them at arbitrary positions, finding contours is unnecessary. That also means tuning the HSV filter carefully is less critical than with traditional target-tracking (for example, the noise on the left from a nearby power cube has no impact on the logic).

Using the filtered images, our search logic scans rectangular regions for white pixels and calculates if they make up more than ~5-10% of the area. This allows us to distinguish all four paths very reliably. The rectangular scan areas are outlined below, with red & green indicating whether a power cell would be found.
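The search described above amounts to counting white pixels in a rectangle of the thresholded image and comparing against a fraction. A minimal sketch (the parameter order and names here are assumptions, not necessarily matching 6328's searchArea):

```java
public class VisionSearch {
    /**
     * Returns true if more than minFraction of the pixels in the rectangle
     * (x0, y0) to (x1, y1) of the thresholded image are white (true).
     * The image is indexed as threshold[y][x].
     */
    public static boolean searchArea(boolean[][] threshold, double minFraction,
                                     int x0, int y0, int x1, int y1) {
        int white = 0;
        int total = (x1 - x0) * (y1 - y0);
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                if (threshold[y][x]) white++;
            }
        }
        return (double) white / total > minFraction;
    }
}
```

Because only a yes/no in a fixed region is needed, this is far cheaper than contour detection and tolerates a noisy threshold.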

a-red-overlayb-red-overlay

a-blue-overlayb-blue-overlay

It starts by searching the large area closest to the center, which determines whether the path is red or blue. The second search area shifts slightly based on that result such that a power cell will only appear on one of the two possible paths. This result is then stored and set in NetworkTables so that the operator knows whether to enable.

Here’s our code for the vision selection along with the GRIP pipeline.

The “updateVision” method handles the vision pipeline, then the command is scheduled during autonomous. We update the vision pipeline when a button is pressed, though it could also be set to run periodically while disabled.

This command can be easily customized for use with a different camera. Depending on where power cells appear in the image, the selection sequence seen below can be restructured to determine the correct path. Similarly, any four commands will be accepted for the four paths.

if (searchArea(hsvThreshold, 0.1, 315, 90, 355, 125)) { // Red path
  if (searchArea(hsvThreshold, 0.05, 290, 65, 315, 85)) {
    path = GalacticSearchPath.A_RED;
    SmartDashboard.putString("Galactic Search Path", "A/Red");
  } else {
    path = GalacticSearchPath.B_RED;
    SmartDashboard.putString("Galactic Search Path", "B/Red");
  }
} else { // Blue path
  if (searchArea(hsvThreshold, 0.05, 255, 65, 280, 85)) {
    path = GalacticSearchPath.B_BLUE;
    SmartDashboard.putString("Galactic Search Path", "B/Blue");
  } else {
    path = GalacticSearchPath.A_BLUE;
    SmartDashboard.putString("Galactic Search Path", "A/Blue");
  }
}

Overall, we’ve found this setup with vision + profiles to work quite reliably. We’re happy to answer any questions.

14 Likes