Need an FRC Example - Confirmation Bias

I’m taking 4926 through a Science and Critical Thinking training program this summer. One of the topics we are covering is cognitive biases, including confirmation bias. Who has a good example of confirmation bias based in the FRC program that I can use?

A basic definition of confirmation bias is: the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories.

I’m looking for something like: a student who wants to build a swerve drive observes that all the robots in a particular district’s or field’s finals are swerve-based, and concludes “we MUST build a swerve robot to compete at a high level,” when in actuality SOME tank-drive systems DO compete at a high level.

But I’d like an improved or more obvious example. The whole thing is a work in progress…

3 Likes

Yelling “ROBOT”? (ducks and covers)

Snark aside, this could be an example. Somebody hears it, thinks that’s the way to do it, hears it again, thinks that must be the way to do it, and starts yelling ROBOT.

12 Likes

Something that I’ve seen is the assumption, based on limited observation, that “higher number” (read: newer) teams won’t have very good robots, so they are often the first to be asked to play defense.

I’m definitely guilty of this, although I’m actively trying to combat this by utilizing quantitative data before making a judgement.

It’s awesome that you’re doing this as a team training- do you have any resources (presentations, recorded videos) that you’re willing to share?

19 Likes

Common opinions about how mecanum drives work?

9 Likes

I like to think of testing as exposing confirmation bias. “We fired 2 shots and they looked good. It works.” When I work with programmers on dialing in shots, I like to tell them (2022): 5 cycles of 2 shots, 8 have to go in, and no single cycle can be 0-for-2. Once we get there, later on we can stretch the criteria to include how long it took to shoot, or going 17 out of 20.

Edit: This is even more likely if the person who built it is the one testing it.
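
A minimal sketch of that acceptance criterion in Python (the function name and the example cycle data are mine, just to illustrate the rule):

```python
# The criterion from the post: 5 cycles of 2 shots, at least 8 of the 10
# must go in, and no single cycle can go 0-for-2.
def passes_shot_test(cycles):
    """cycles: shots made (0-2) in each of 5 two-shot cycles."""
    if len(cycles) != 5 or any(c not in (0, 1, 2) for c in cycles):
        raise ValueError("expected 5 cycles of 2 shots each")
    return sum(cycles) >= 8 and all(c >= 1 for c in cycles)

# "We fired 2 shots and they looked good" is one cycle, not a test.
print(passes_shot_test([2, 2, 1, 2, 2]))  # True: 9/10, no 0-for-2 cycle
print(passes_shot_test([2, 2, 2, 2, 0]))  # False: 8/10, but one cycle went 0-for-2
```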

12 Likes

Testing is a wonderful place to find confirmation bias. Thanks Brian!

1 Like

This happens all the time in scouting, especially in pre-scouting. You watch a handful of matches from a team’s previous event, get a good understanding of their abilities, and then struggle to properly incorporate new match data into your opinion of them. For example, if a team was top tier based on five matches from their previous event, but only scored 7 cargo in this match, it’s extremely easy to just think “oh, that was probably just a bad match for them, but they are still top tier.” Or the flip side: pre-scouting says a team has a really inconsistent traversal climber, but they just did a flawless 12-second traversal climb, and the natural thought is “well, that was one of their good climbs, but they are still inconsistent.” There are several examples of this from Hopper this year and I’m positive IRI will have many more, especially with new drivers and teams making changes to their robots.
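
One way to keep new match data from being explained away is to update the rating mechanically rather than by gut feel. A sketch, assuming a simple exponentially weighted average (the weight and the cargo numbers are invented for illustration, not real scouting data):

```python
# Blend prior-event data with new matches using a fixed-weight update,
# so a surprising match always moves the estimate instead of being
# written off as "probably just a bad match for them."
def update_rating(prior, new_match, weight=0.3):
    """Exponentially weighted moving average; new data always counts."""
    return (1 - weight) * prior + weight * new_match

rating = 14.0  # pre-scouting: ~14 cargo/match at their previous event
for cargo in [7, 9, 8]:  # their first matches at the new event
    rating = update_rating(rating, cargo)
    print(f"after a {cargo}-cargo match, estimate = {rating:.1f}")
# The estimate drifts toward the new data whether or not we "believe" it.
```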

7 Likes

I literally did that with Jaeheon this past season. If you watch our shooting from the first comp to the last comp, you can see where improved testing resulted in our performance going up. A key example was our first comp, dialing in ranging with the Limelight. Our programmer would dial in HSV and get a few shots working between matches. Then on the field we’d miss. We went through several iterations of that. I said: we seem to be good on the practice element but not on the field, and we keep looping. Are you confident in the tuning, or would you like me to find someone to help you? The programmer said yes. I found a mentor from 1771 who helped the programmer out. The assumption had been that the HSV values needed to be tweaked to fix the issue. What the mentor actually showed the programmer was that the camera settings let too much noise into the picture, which made it nearly impossible to tune the HSV values to remove false positives.

In this case, the confirmation bias was that every miss confirmed what they already thought the problem was.
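
For anyone who hasn’t hit this failure mode: a rough OpenCV illustration of why exposure/noise can matter more than the HSV window itself. The threshold values and synthetic frames are invented, and the Limelight has its own pipeline, so this shows only the principle, not their setup:

```python
import cv2
import numpy as np

def target_mask(bgr, lo=(50, 100, 100), hi=(70, 255, 255)):
    """Pixels inside a (made-up) green HSV window, like a vision target."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))

# Well-exposed synthetic frame: dark background, one bright green target.
good = np.zeros((120, 160, 3), np.uint8)
good[40:60, 60:100] = (0, 200, 0)  # BGR green patch

# "Over-exposed" frame: the same scene with heavy sensor noise added.
noise = np.random.default_rng(0).integers(0, 180, good.shape).astype(np.uint8)
noisy = cv2.add(good, noise)

print("good frame target pixels: ", np.count_nonzero(target_mask(good)))
print("noisy frame target pixels:", np.count_nonzero(target_mask(noisy)))
# The noisy frame lights up background pixels inside the HSV window:
# false positives that no amount of HSV tweaking will cleanly remove.
# The real fix is upstream in the camera settings, not the thresholds.
```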

1 Like

You need to at least include the whole “Water Game Confirmed” meme in your presentation as a tongue-in-cheek example.

5 Likes

How about “Team X has a mentor-built robot every year; that’s why they do so well.”

1 Like

Furthermore, they had mentors in their pit, touching their robot, and then they won the match!!! QED!!!

2 Likes

Almost any sentence that starts off with “Well, the poofs…” probably has some confirmation bias.

5 Likes

The same teams always win Chairman’s/win events.

We have to have X strategy this match because we could never beat Y team.

The refs are out to get us.

Better resourced teams perform better.

Most of the rules never change.

Competitions are always high stress.

5 Likes

Kidding aside, the case I see most commonly in prototyping is picking and choosing which test cases to call “valid” and which ones to ignore, based on what outcome you want the overall test to have. A lot of the time it’s subconscious, and it’s especially common among the owners/designers/originators of a particular design.

5 Likes

Just because you’re paranoid… doesn’t mean they aren’t. /s

The team number/robot performance is a good one, particularly because of 2007 (and some strong counterexamples happened that year too).

Teams that yell robot typically aren’t good enough to make eliminations, and therefore I advocate for not drafting them in picklist meetings.

15 Likes

Our driver doesn’t need to practice with the swerve, which we are using for the first time; he’s been playing video games all his life. We won “finalist” at our first competition; he drove pretty well; definitely no need for practice.

But my take: I could see him improve with each match, and if he had driven as well at the first competition as at “state,” we might have had a blue banner and gone to “worlds.”

1 Like

How about teams with lousy bumpers?

2 Likes

There were two CD threads. One said “stop yelling robot.” The other said “it’s OK to yell robot.” The latter thread was closed by the forum admins, not merged into the pre-existing discussion.

Conclusion: “Stop yelling robot” is not just an informal request from 8 years ago, it’s the law.

2 Likes

Let’s say you’re trying to make a part with a dimension of 1.000" +/- 0.025". If you make the part and measure it with your measuring tool to be 1.014", you’ll hand it to the assembler confident that you built it correctly. On the other hand, if you make the part and measure it to be 1.037", you’ll probably suspect that your measuring tool isn’t calibrated correctly. Nobody ever suspects the measurement tool is out of calibration if it gives you the answer you wanted or expected.
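
A cheap guard against that: write the disposition rule down before you measure, including when you’ll question the tool, and apply it symmetrically. A tiny sketch with the tolerance from the post (the `recheck_band` is my invention for illustration):

```python
# 1.000" +/- 0.025", with a fixed rule decided BEFORE seeing the number.
NOMINAL, TOL = 1.000, 0.025

def disposition(measured, recheck_band=0.015):
    """Accept, reject, or verify with a second calibrated tool."""
    error = abs(measured - NOMINAL)
    if error <= TOL:
        return "accept"
    if error <= TOL + recheck_band:
        return "re-measure with a second tool"  # near the line, either side
    return "reject"

print(disposition(1.014))  # accept -- in tolerance
print(disposition(1.037))  # re-measure with a second tool
print(disposition(0.963))  # same rule fires on the low side too
```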

9 Likes