Experienced Judges vs Unbiased Judges

Conflict-of-interest controls apply to all judges, including the RCA, EI, and RAS judges. There’s no need for them to be treated any differently.

Totally agree with the comment about volunteer retention. I know the LA regional has many judges who were already experienced when I started 10 years ago and are still there judging.

Depends on the JA and the judge. When possible I’ve removed myself from the room (in both roles), but often I can get away with simply being quiet and being VERY careful not to let my face show any reaction.

As mentioned above, and as Rich well knows, the CoI form applies to everyone in the judge room. It’s taken fairly seriously in my experience.

The big places I’d look, and thoughts:

  1. Local tech workers/engineers/businessfolk
  2. Mentors and Volunteers from other FIRST programs
  3. Mentors from FRC teams not at the event
  4. Alumni from other areas, or older alumni

1. Local tech workers tend to be awesome judges; (I believe) they’re the largest population of judges at the Silicon Valley Regional. While they may not get FIRST at first, they have a good rate of return, and after a few years of judging some of them have even become team mentors. Bonus points if you get some of the more senior folks at tech companies, because they may end up sponsoring a team or an event! Basically, their expertise is invaluable, and they should be a significant part of your judge pool. (I know Haifa and Tel Aviv have a lot of tech presence, but I’m not sure about your area. I imagine you could draw folks from both, since neither looks all that far away.)

2. Mentors and volunteers from FLL and FTC are awesome choices, because they get FIRST but usually aren’t affiliated with FRC teams. There’s a smaller training cost than for random tech workers, though they’ll still have to learn the intricacies of FRC. There’s a good chance they have expertise relevant to a few awards, and if they’ve judged other programs, they’ll pick up the process super quickly. I suspect most Judge Advisors will try to mix them in with the industry judges, to get both perspectives on each panel.

3. Mentors from FRC teams not at the event may still have affiliations and biases, but it’s much less of an issue and easier to manage, because there obviously aren’t any discussions involving their own team. The issue is geography; since you’re in a district, with more small events, you may be able to find folks within a reasonable range of events other than the ones their own teams attend. Most of these judges will only really need to learn the process, which a good JA can teach easily. This is a really popular option with offseason events that do judging.

4. And alumni! This may be a bit harder in Israel, since I’m guessing most of your alumni are from the older Israeli teams and have complex histories and biases. I’d be careful with anyone who was on an Israeli team in the last 5-6 years, although anyone past college whom you really trust to be responsible and mature is worth considering. They bring a good perspective, having a closer connection to what the kids are doing, and they can connect with the students a bit better. I’d evaluate these folks case by case, based on how well you know and trust them and how they’ve performed previously in volunteer roles that require responsibility.

I’ve always considered having judges who have never seen FRC before to be an essential part of the FIRST program.

A first-time judge can listen to a student talk about some things that we’d consider simple in FIRST, like why they have a drop-center drivetrain, and see what all these students are learning and how awesome the program is.

Sometimes, after 6+ weeks of build and a few competitions, students can forget how remarkable some of the things they’ve accomplished are, especially relative to their peers who are not involved in FIRST.

Now I’d like there to be experienced judges in the area as well, and avoiding bias is a necessary part of that, but I also want the inexperienced judge in the room talking about how inspired they were when the tiny 9th grader started explaining motor curves or PID tuning to them. That’s how you get volunteers to come back and do more - give them a reason to talk to the students about the competition and for the students to talk back.

The flip side of this is that technical awards can end up being given for features that are utterly standard, simply because whoever was explaining the robot spent some time on them and the judge was not familiar with their ubiquity.

Back in 2008, we (449) won an award for having a potentiometer on our forklift to measure angle. No joke. To make matters worse, the use of the potentiometer in question was a terrible engineering decision that added no useful functionality (we could have done just as well running the forklift open-loop between limit switches) and ended up being a critical failure point that possibly cost us a regional victory and a trip to worlds.
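
For anyone who hasn’t wired one of these up, here’s a minimal sketch of what “running the forklift open-loop between limit switches” means - the ForkliftIO interface and the tiny simulation below are hypothetical stand-ins for real hardware classes, not our 2008 code. The motor just runs at a fixed power until the switch at the end of travel trips; no angle measurement, no potentiometer.

```java
// Hypothetical sketch: open-loop lift control bounded by limit switches.
// ForkliftIO and SimulatedForklift are invented stand-ins for real hardware
// wrappers (e.g. a motor controller and two limit-switch inputs on a robot).
public class OpenLoopForklift {

    /** Minimal hardware abstraction for the sketch. */
    interface ForkliftIO {
        void setMotor(double power);   // -1.0 .. 1.0
        boolean atTop();               // top limit switch pressed
        boolean atBottom();            // bottom limit switch pressed
    }

    /** Raise the forklift: fixed power up until the top limit switch trips. */
    static void raise(ForkliftIO io) {
        while (!io.atTop()) {
            io.setMotor(0.8);          // no feedback on angle, just "go up"
        }
        io.setMotor(0.0);              // stop at the mechanical limit
    }

    /** Tiny simulation so the sketch actually runs; real code talks to hardware. */
    static class SimulatedForklift implements ForkliftIO {
        double position = 0.0;         // 0 = bottom, 1 = top
        public void setMotor(double power) { position += power * 0.05; }
        public boolean atTop() { return position >= 1.0; }
        public boolean atBottom() { return position <= 0.0; }
    }

    public static void main(String[] args) {
        SimulatedForklift forklift = new SimulatedForklift();
        raise(forklift);
        System.out.println("Forklift raised; position = " + forklift.position);
    }
}
```

That was all the control the mechanism needed; the potentiometer sat on top of it adding failure modes rather than function.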

Which is why having all inexperienced judges is a mistake, and including FRC-savvy judges in the mix is a good idea.

The judges also receive some technical training on challenges in the game. This alleviates SOME of this issue.

Poise and clarity while explaining are a large part of the reason that teams win awards.

Referees and inspectors watch the robots to ensure a fair competition; the competition itself determines which team is playing the game best.

Judges are not really there to watch robots; they are there to look at the impact that the program has made on teams, and that teams have made toward culture change. The robots are mostly a bunch of MacGuffins to them.

This is not necessarily a good thing, because detailed explanations of simple and ubiquitous robot components may thus sound just as compelling to a judge, if not more so, as a high-level explanation of a complex and elegant engineering solution.

Moreover, detailed explanations of simple robot parts are not necessarily indicative of “impact.” Often, a truly complicated system simply can’t be effectively explained at that level of detail, because there is too much of it - you have limited time to talk to the judges, and discussing the workings of a particular (totally ordinary) sensor in detail probably compromises a team’s ability to describe what about their robot is actually important, from an engineering standpoint. The students who have learned and done the most may actually spend most of their time talking in terms of abstractions - not because they haven’t learned or don’t understand the intricate details of how the individual components work, but because that’s the only way to effectively describe a sufficiently complicated system.

Thus, I suspect the incentives here may well end up the wrong way around.

Often a truly complicated system is the wrong solution.

Certainly true of Chairman’s, Entrepreneurship, Dean’s List, Engineering Inspiration, Rookie Inspiration, and to some extent Industrial Design. For the other engineering-focused awards (creativity, excellence, quality, and control), dedicated observing judges can provide relevant feedback on robot performance that can (and, in my limited observation, does) influence final award decisions. No argument that good interviews sway judges. But I can attest that robots are not complete MacGuffins to all judges.

And often it isn’t, especially depending on what your standard of “complicated” is.

In my experience, the years in which our students have spent a lot of time describing to a judge how our potentiometer or a quad encoder works have been years in which we haven’t actually done all that much of technical interest. Admittedly, in 2008, we did have (mostly) a very simple and effective design - but if we won an award for the elegance of that design, it was by proxy, because the highlighted feature was neither elegant nor robust.

While complexity for complexity’s sake is not a good thing, I don’t think it’s particularly contentious to suggest that robots tend to become more complex as a team becomes more technically able. While I agree that simplicity is a virtue, what constitutes “simplicity” changes greatly depending on your technical aptitude and resources, and I don’t think that a judging system that rewards in-depth explanations of basic components is a particularly good way to incentivize parsimonious design.

This is especially true of controls - a lot of our team’s progress (and it has been huge progress) these past few years has been in controls, and I’ve found it slightly frustrating that there doesn’t seem to be any great way for our students to effectively talk to judges about what we’ve learned to do, because a lot of it (a coherent software design philosophy, persistent reusable code base, config-file-based dependency injection, loose coupling between high-order and low-order robot functionality) is highly abstract and hard to see on any given robot.
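
To make the abstract bit a little more concrete, here’s a minimal, hypothetical sketch of what I mean by config-file-based dependency injection - plain java.util.Properties and invented class names, not our actual code base. The point is that the high-level code never names a concrete drivetrain class; a text file decides what gets constructed.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch of config-file-based dependency injection and loose
// coupling: high-order code depends only on the Drive interface, and the
// concrete implementation is chosen from a config file at startup.
public class RobotWiring {

    /** High-order code depends on this abstraction, never on hardware classes. */
    interface Drive {
        void setVelocity(double metersPerSecond);
    }

    static class TankDrive implements Drive {
        public void setVelocity(double v) { System.out.println("Tank drive at " + v + " m/s"); }
    }

    static class MecanumDrive implements Drive {
        public void setVelocity(double v) { System.out.println("Mecanum drive at " + v + " m/s"); }
    }

    /** A high-order behavior written against the abstraction. */
    static void driveForward(Drive drive) {
        drive.setVelocity(1.5);
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("robot.properties")) {
            config.load(in);           // e.g. the file contains: drive.type=mecanum
        } catch (IOException e) {
            System.out.println("No config file found; using defaults.");
        }

        // The injection step: build whichever concrete subsystem the config asks for.
        String driveType = config.getProperty("drive.type", "tank");
        Drive drive = driveType.equals("mecanum") ? new MecanumDrive() : new TankDrive();

        driveForward(drive);           // unchanged no matter which drive is wired in
    }
}
```

Swapping drivetrains (or swapping in a simulated one) then means editing one line of a text file rather than touching any of the high-level code - which is exactly the sort of thing that’s real progress for a team but nearly impossible to point at on the robot cart in the pits.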

I think what I’m ultimately trying to get at is that I don’t think the judging of engineering awards can really be done that effectively in the current format, especially with judges who lack FIRST experience and thus can’t tell a genuinely impressive robot feature from a good explanation of something that everyone does. I don’t know what the best solution is - perhaps a more formal submission of design materials (for example, judges could tell a lot by glancing at a team’s code!) would be a step in the right direction, though I’m unsure whether the resulting time cost to the judges would be feasible.

In New England District many judges are team mentors.

As judging is a team effort, the most straightforward way to get better results while still retaining the benefits of recruiting new inexperienced judges would be to balance them better with more FRC-savvy recruits. Evidently not an easy task at present. Sorry about repeating myself.

Ideally inexperienced judges get paired with experienced judges, although it doesn’t always work out that way.

As to winning an award for a very basic feature: if it is explained very well, sometimes that makes the difference. Typically each team will talk to multiple judges, and all of the judges for a particular award have to agree on which team will receive it.

Having been a technical judge, it can be painful to ask students about the robot design and game strategy, only to have them fail to speak to the excellently engineered feature that is obvious to the informed observer. It’s just as bad to ask about the specific features and get a very mediocre response, where the student is obviously not informed. In my experience it does not matter how great your robot or feature is if the students cannot speak to it. The interview is part of the awards process.

I think this is the crux of what many of us are getting at. I want the team with the simple “standard” feature that explains it best and clearly knows what they’re talking about to win the awards and to talk to the inexperienced judges. I want the inexperienced judges to see the kind of impact the program has on the students, not the effect the program has on the robot.

If you’re a judge and you talk to the little team in the corner with just a gear mechanism and a climber, but their students can talk in depth about the effort that went into optimizing their gear mechanism and optimizing their drivetrain to cover the distances it needs to play the game effectively - I think that’s more impressive and more deserving of an award than the team that has a swerve drive and a turret for shooting balls but whose students don’t think to explain the systems they have or why they built them.

I don’t think these students don’t know about their robot; I just think they don’t think to talk about these features, or sometimes they’re more worried about getting back to their work than about talking to the judges.

The award system’s purpose is twofold. One purpose is to reward teams for excellent engineering, the other purpose is to force the interaction between inspired students and professionals who volunteer as judges. That interaction can inspire both sides of the discussion.

Having both things come together for an award (recognition of excellent engineering coupled with relevant explanations from inspired students) is what you aim for. What we’re hearing here are some of the concerns that arise when the second of these two goals appears to take priority. Should an award really be called excellence in innovation, creativity, etc., if the engineering itself isn’t also exceptional?

I think sometimes the first one takes priority when it shouldn’t as well. I’d also argue the second is more important.

And just because one team did something far more complex or advanced doesn’t mean the other team didn’t have excellent engineering. 1011 was a world champion this year with one of the simplest and also best-engineered robots of the year. I would give them engineering awards over teams like my own that tried to do something more complicated that wasn’t as effective, especially if they can properly articulate to the judges the decisions they made.

I have no argument with any of this. There are far fewer complaints when obvious excellence in engineering receives awards, and more when it’s less obvious, because people aren’t privy to the info gleaned from judge interviews. Engineers (judges included) admire simple, elegant engineering. Judges routinely make hard calls when the engineering is outstanding but the interviews aren’t. It’s really handy to have some judges with FRC savvy when crunch time comes, because they help distinguish the everyday from the unique. And judges, most of whom unfortunately don’t get to see much on-field play, don’t completely ignore demonstrated functionality.