11-04-2016, 15:43
philso
Mentor
FRC #2587
 
Join Date: Jan 2011
Rookie Year: 2011
Location: Houston, Tx
Posts: 938
Re: Judge Consistency Between Events

Quote:
Originally Posted by popnbrown View Post
The way FRC controls this is by training. Judge Advisors are all trained by HQ and Judges are all trained by HQ-trained-Judge-Advisors.

This discussion becomes really difficult at this point; neither of us knows what the training involves. We haven't been through the process of FRC judging. Frankly, we don't even know what the judges at each event you went to thought. We're going to get down to anecdotal evidence of one perspective. If we were to have an honest and complete discussion, I think we really owe it to ourselves to figure out all perspectives before trying to change how things are done.
I would like to propose that FIRST add "The Judges are active advocates for the teams, not passive observers" in a prominent place in the training materials, where Judge trainees are sure to see it. If the Judges do not advocate for a team, it is the same as a teacher losing a student's term paper or final exam.

The "Robot Design Judging Pre-Tournament Preparation Pack" I received this year did not contain the word "advocate" or any phrasing like it. It had plenty of information about the mechanics of the judging process, but it devoted only two sentences to the deliberation process, plus some flowchart boxes labeled "Determine Top Teams Seen by Each Pair" and "Review and Discuss Top Teams" that only imply that the Judges should advocate for the teams they saw.

Perhaps a short video would be appropriate, since the document I received was already 34 pages long. A video could also cover concepts such as "Gracious Professionalism" and "the kids do all the work" (in FLL) that may be unfamiliar to the volunteers, who are sometimes trained on the day of the competition and have not had a chance to encounter these concepts beforehand.


Quote:
Originally Posted by popnbrown View Post
Consistency in subjectivity is really really hard. Unfortunately, the awards are subjective.
Quote:
Originally Posted by Sperkowsky View Post
One thing I recommend is to specifically try not to be forgotten. Say something that wows them and leaves them thinking of you.
The issue of subjectivity and what "wows" the Judges is irrelevant if

Quote:
Originally Posted by Xavbro View Post
the judges that judged us didn’t advocate or mention us in the meeting when they were deciding on the awards.
I was not a Judge at this event, but my wife and I have served as Judges at many FLL tournaments, and I can tell you that this does happen. I don't think the Judges who fail to advocate for the teams they see have any nefarious intent. Frequently, they are first-time Judges who were trained the morning of the event and did not know they were expected to advocate for the teams they saw. Some may be volunteering just so the event can take place and have no real interest in what is going on. I have also worked with Judges who are naturally shy and quiet; when a group of loud, enthusiastic, outgoing Judges talks up the teams they saw, the shy and quiet one is overwhelmed and stays quiet. There have been instances where I politely asked quiet Judges to describe the teams they saw so that those teams got a fair chance.


Quote:
Originally Posted by popnbrown View Post
This isn't to say that I don't entertain the thought of how to continue to improve the award system. I think a program that doesn't self analyze and seek to improve will stagnate.
Yes. FIRST is about changing the culture of our society. That takes a lot of hard work. The hope of winning the awards given out at these events is the carrot that leads the people doing the work to continue doing it. If there is a perception that the awards process is arbitrary rather than based on merit, the awards stop being incentives.


Quote:
Originally Posted by Andrew Schreiber View Post
Rule 1 in the Judge Room is "what happens in the Judge Room stays there" There's a lot of reasons for this, but this thread is exactly one of them.
If something is broken in the Judge Room, it will never be fixed unless it is discussed openly and honestly. Neither the OP nor I am trying to win an award after the fact; we are hoping to improve the process.


Quote:
Originally Posted by TheMilkman01 View Post
In regards to them just not bringing up your name, that seems a bit strange. Given that not many rookie teams will attend the same regional, it appears off that they would completely overlook your team in that area. I would say better luck next year, but you won't be rookies then! It's good to see how you're congratulatory of the award winners and already looking forward, though. Who knows, it sounds like you have a good shot at qualifying for Worlds next year if the cards fall in the right place.
Yes, it is very strange. There were 11 rookie teams at Lone Star. Many of them were struggling just to put a moving robot on the field and were unlikely to have been in consideration for any awards. This is not meant as a criticism of those teams. Probably only 3 or 4 of those rookie teams would have had the extra resources, bandwidth, and foresight to do the kinds of things the Judges would have been looking for, as 5829 did.


Quote:
Originally Posted by JeffB View Post
This is a fair question to ask only if everything is the same from event to event.

Bayou: http://www.thebluealliance.com/event/2016lake
Lone Star: http://www.thebluealliance.com/event/2016txho

We can look at a variety of stats here to see the two events were a bit different. The highest OPR at Bayou would have been valid only for 4th at Lone Star. 233, one of the winning teams at Bayou, increased their OPR by 10% between these two events. This helps to show the teams you're competing against have also improved during this time. There were 10 more teams at Lone Star. They were different. Even "consistent" judging will appear different when the thing being judged changes. What you're asking for isn't consistent. "We received X award at this event and didn't win at the following event" doesn't show a lack of consistency. You need to step back and evaluate the rest of the events to see if the decisions were really all that different.
I do not think the tougher competition at Lone Star is a factor. One of the guidelines for the Rookie All-Star Award is "This team has built a robot appropriate to the Game's challenges." The OP's team was ranked 20th at Bayou and 16th at Lone Star, the tougher event as JeffB notes, and they reached the Semi-Finals at both. I think those results show that they did a better job of building "a robot appropriate to the Game's challenges" than most teams that attended either event, including my own.

For full disclosure: the mentors for the OP's team left our team, on good terms, to help a local school start a new team. They (and 3847) very graciously allowed us to work at their facility when the school we normally work at was closed for Spring Break, and we celebrated Bag and Tag with them. You can see their facility in our reveal video and in the reveal video of another well-established Houston-area team. It is very likely that we would not have done as well this season if they had not allowed us to test our robot there. I do not feel that our relationship is coloring my views about the inconsistencies in judging at these events. There is now another thread in a similar vein.

Last edited by philso : 11-04-2016 at 15:50.