Judge Consistency Between Events
After a very busy, exciting, and fun weekend at the Lone Star regional, I've been able to settle down and collect my thoughts. I want to start by saying that the event overall was amazing. The volunteers were awesome (as usual), the teams were all great to work with, and the competition was top level as always. We had a lot of fun and can't wait to be back there again next year!
What my post is about is the Rookie All-Star award. Before I get started, I want to clarify a few things. I know the judges' decisions aren't easy ones, and I fully respect their choice for the award. High Energy (5892) is definitely an amazing rookie team worthy of the award. They worked hard, and their accomplishments definitely deserve recognition.

My concern is the information we received after the event was over. Of course, our students were upset about not winning RAS. We had gone to Bayou 3 weeks before and come away with Rookie Inspiration, so we felt we had a real shot to win RAS. We didn't win either award, so we went to the judges for some insight on what we could do better, because we plan on competing for Chairman's in the future as well. When we asked the judges for this information and why we didn't win, they informed us that they felt we were very deserving of the award, but our name just was not brought up in the judges' meeting. They told us that the judges who judged us didn't advocate for or mention us in the meeting when the awards were being decided.

Of course, that news was heartbreaking for the kids to hear. They had worked so hard for the award, and it was the goal we had set as a team. They were also concerned about how the judging process was done, given that at Bayou there was clear indication that the judges were working together and discussing which teams were deserving of awards. We understand that the event was short on judges and that judging isn't easy, but shouldn't there be some sort of consistency between events?

Our season is done, and even though we didn't achieve our goal of RAS, we still had an amazing, successful season this year, and we cannot wait to get back at it again next year. In no way am I saying the teams that won those awards didn't deserve to, or that we should have won RAS. We respect the judges' decision and congratulate High Energy on their RAS win.
I just want to shed some light on the judging process and get some thoughts on it.
Re: Judge Consistency Between Events
First of all, great job keeping the tone of your post constructive and respectful, even while describing events that you, naturally, are a little let down about. I don't have much experience with that particular award, but I can give some insight into judges' feedback concerning Chairman's.
A couple years ago, FIRST had the judges fill out feedback sheets, essentially scoring teams on the Chairman's Award. Overall, it seemed teams found them helpful in deciding where to expand and dedicate more outreach. A year or so ago, FIRST discontinued use of those sheets because (to my knowledge) they felt the sheets made the Chairman's Award seem more like an award to be won than to be earned. Essentially, they wanted it to be based on individual teams and not a checklist. This was met with a mixed response, as teams understood FIRST's rationale but also appreciated the insight they received from each feedback sheet. After that, not all teams got feedback, but some still did if they were well known or knew the right people. In this respect, it is a little unfair to the rest of the teams who don't have those connections.

Additionally, over time there appear to be some consistent trends. For instance, FIRST wants to award as many teams as possible, so at any given regional it is unlikely one team will win two awards that aren't drastically different from each other. Look at the Gracious Professionalism Award and the Chairman's Award: both honor similar characteristics, so it is rare for one team to win both at the same regional. I've observed a few times, when talking with judges, that they make their decisions loosely based on awards previously won and awards a team will win at that regional. Overall, it's not a bad practice: more teams get more awards, and award diversity goes up. The downside, of course, is that there will be some inconsistency and confusion in feedback.

In regards to them just not bringing up your name, that seems a bit strange. Given that not many rookie teams attend the same regional, it seems off that they would completely overlook your team in that area. I would say better luck next year, but you won't be rookies then!
It's good to see that you're congratulating the award winners and already looking forward, though. Who knows? It sounds like you have a good shot at qualifying for Worlds next year if the cards fall into place.
Re: Judge Consistency Between Events
First off, congratulations on getting through your first season! I hope it wasn't too exhausting and your team is excited for a second season!
I want to preface this by saying I've never been a judge for FRC, so I don't know the process. I have been both a student and a mentor, and I do have extensive experience with judging for FLL and FTC, so I'll try to draw on that as well.

Consistency in subjectivity is really, really hard. Unfortunately, the awards are subjective. I base that on the descriptions of the awards themselves, the competition for each award (i.e., the teams at the event), and, most importantly, the ever-changing human component: not only the judges, but the students and mentors involved as well.

I understand the students are upset and that you feel there should be more consistency. But even if we could get over the logistical nightmare of having judges' notes communicated across events, there's still the subjectivity of the event itself. And I don't think it would be in the program's interest to ask judges to value other judges' notes over their own observations.

I hope I'm not coming off aggressively; you're being very calm and collected, so I want to approach this the same way. The problem you present is basically this: how can we turn something inherently subjective into something more objective? It's something I've been interested in for a long time, especially as a participant. But what I've come to realize is that to solve that problem, we really have to ask why the answer matters. That's been a question for me for the past two years as I've transitioned into becoming a Lead Mentor...the lead "guidance" for a bunch of teenagers. And what I've discovered is that while my students may be motivated to win awards, it's my responsibility to ensure they stay motivated: if they win an award, if they don't win, if they get screwed out of one. Rather than trying to figure out the fairest way to give the award, my highest priority is ensuring my students continue to learn, work hard, and be proud of their failures and successes.
This isn't to say that I don't entertain the thought of how to continue improving the award system. I think a program that doesn't self-analyze and seek to improve will stagnate. But my priorities are not on figuring out awards. Anyway, back on subject: I highly suggest you give judging a whirl, if you have not already, to see how you could improve consistency.
Re: Judge Consistency Between Events
Quote:
Second - the practice you're talking about is known as "spreading the wealth." It makes sense from a team-experience perspective. Third - you're right, there is a fair bit of overlap in awards; a lot of the tech awards boil down to articulating design process and intent. Fourth - "not bringing up your name" can mean a lot of things in a lot of contexts.

But I have some slightly bigger concerns. Rule 1 in the Judge Room is "what happens in the Judge Room stays there." There are a lot of reasons for this, but this thread is exactly one of them.
Re: Judge Consistency Between Events
Quote:
To your point about making sure the kids learn and work hard, that's the main principle of our team. We did a lot of teaching this year, and the kids learned a lot. They worked hard on their own, and I think that's because of the FIRST community and the teams they interacted with. Seeing what those teams and others accomplished, they knew they could attempt the same things, and it really showed. They were proud of their successes, and every one of my students said they enjoyed it all. They are excited about next year and are already looking forward to off-season events. Also, I would love to judge. I just love mentoring too much. :)
Re: Judge Consistency Between Events
[Sorry, this post is very all over the place but I have a lot of thoughts on this topic]
I think the main issue with judging, and I have thought about this a lot, is just how subjective it is, and, following from that, how it can be made less subjective while still being effective. We know that checklists/rubrics aren't really the answer, but how do we find a happy medium?

One thing I had never seen at a regional before, but saw this year at one event, was two Chairman's judging rooms. I've always been bummed about multiple judging rooms (there are always multiple at Champs), but there is really nothing to be done about it since there are just so many teams. The event with two judging panels had ~30 teams applying for the award. The problem is that this makes the process a lot muddier. Not only does the Chairman's team have to convince one set of judges that they are the best role model, but they have to connect with those judges so well that their panel, in turn, advocates for the team. It adds a layer of subjectivity to the whole situation; something like starting a robotics camp for special-needs students, for example, may connect better with a judge who is a special-needs teacher than with an aerospace engineer (totally just an example I made up).

In my time in FIRST, I have been an FRC Judge Assistant, an FLL Project Judge, an FTC Dean's List Judge, and a general FTC Judge. Like many of you have mentioned, it is really tough, and issues like the ones mentioned here exist at all levels of FIRST and in many other competitions outside of FIRST. As a team, I think the only thing we can really do about it (besides mentioning it in the post-event survey*) is to make ourselves so good that subjectivity is a non-issue. As a mentor, I think the best thing we can do is encourage our students to understand that there is a difference between not winning an award and "losing" an award, and to use moments like these to become more motivated.
A final thought, as your team continues its Chairman's journey: the Chairman's Award was put in place to encourage teams to do Chairman's-like activities. When you are signing up for outreach events, running tournaments, starting teams, etc., it is important to understand what the truly positive and life-changing results will be for the team, school, community, and larger world. And the truth of the matter is that only one team at each event can earn the Chairman's Award, regardless of how many outstanding teams are applying, but each team still has that outstanding work behind the scenes that is truly making a difference!

*I don't mean that any disagreement with the judges should go in the survey, just any evidence-based concerns that could negatively affect your or other teams' FIRST experience.
Re: Judge Consistency Between Events
Now for the meat:
I understand what you're trying to get at, but the fundamental problem is that you're seeking to make a subjective process more objective. It's a hard transition. The way FRC controls this is through training: Judge Advisors are all trained by HQ, and Judges are all trained by HQ-trained Judge Advisors. The discussion becomes really difficult at this point, because neither of us knows what that training involves. We haven't been through the process of FRC judging, and frankly we don't even know what the judges at each event you went to thought. We're going to get down to anecdotal evidence from one perspective. If we were to have an honest and complete discussion, I think we really owe it to ourselves to figure out all perspectives before trying to change how things are done. Again, I have to implore you to try being a judge to really see what the process is. With the expansion of districts, it's also something I'll have to try, to be honest.

Do you have any specific proposals in mind? I wish I could even say which have been adopted and which have not. But if you do have any, I'd propose sending them to frcteams@usfirst.org, or perhaps getting in touch with your local planning committee and the Judge Advisor there.
Re: Judge Consistency Between Events
Quote:
Bayou: http://www.thebluealliance.com/event/2016lake
Lone Star: http://www.thebluealliance.com/event/2016txho

We can look at a variety of stats here to see that the two events were a bit different. The highest OPR at Bayou would have been good for only 4th at Lone Star. 233, one of the winning teams at Bayou, increased their OPR by 10% between the two events; that helps to show that the teams you're competing against have also improved during this time. There were 10 more teams at Lone Star. The fields were different, and even "consistent" judging will appear different when the thing being judged changes.

What you're describing isn't inconsistency. "We received X award at one event and didn't win at the following event" doesn't show a lack of consistency. You need to step back and evaluate the rest of each event to see whether the decisions were really all that different.
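For anyone unfamiliar with the stat being cited: OPR (Offensive Power Rating) is typically estimated by least squares, treating each alliance's score as the sum of its members' contributions. A minimal sketch in Python follows; the team numbers and scores are made up for illustration, not real Bayou or Lone Star data.

```python
import numpy as np

def compute_opr(matches, teams):
    """Least-squares OPR: solve A @ x ~= b, where each row of A marks the
    teams on one alliance and b holds that alliance's score."""
    index = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))
    b = np.zeros(len(matches))
    for row, (alliance, score) in enumerate(matches):
        for t in alliance:
            A[row, index[t]] = 1.0
        b[row] = score
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))

# Made-up alliances and scores (illustrative only)
teams = ["118", "231", "457"]
matches = [
    (["118", "231"], 90.0),
    (["231", "457"], 70.0),
    (["118", "457"], 80.0),
]
oprs = compute_opr(matches, teams)  # {'118': 50.0, '231': 40.0, '457': 30.0}
```

With full per-event match lists, comparing the resulting OPR distributions (max, median, and so on) is one way to quantify how different two fields were, which is the comparison the post above is making.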
Re: Judge Consistency Between Events
Especially with a rookie award, how hard is it to go down a list of all the rookies in attendance? There are never too many of them. That way, things like "you weren't mentioned" don't happen.
One thing I recommend is to specifically try not to be forgotten. Say something that wows the judges and leaves them thinking of you.
Re: Judge Consistency Between Events
Quote:
The "Robot Design Judging Pre-Tournament Preparation Pack" I received this year did not contain the word "advocate" or any phrasing like it. It had plenty of information about the mechanics of the judging process, but only two sentences about deliberation and some boxes that say "Determine Top Teams Seen by Each Pair" and "Review and Discuss Top Teams," which only imply that the judges should advocate for the teams they saw. Perhaps a short video might be appropriate, since the document I received was already 34 pages long. It could also cover concepts such as "Gracious Professionalism" and "the kids do all the work (in FLL)" that may not be familiar to volunteers who are sometimes trained on the day of the competition and have not had time to see these concepts before.
For full disclosure: the mentors for the OP's team left our team, on good terms, to help a local school start a new team. They (and 3847) very graciously allowed us to work at their facilities when the school we normally work at was closed for Spring Break, and we celebrated Bag and Tag with them. You can see their facility in our reveal video and in the reveal video of another well-established Houston-area team. It is very likely that we would not have done as well this season if they had not allowed us to test our robot at their facility. Even so, I do not feel that our relationship is coloring my views about the inconsistencies in judging at these events. There is now another thread in a similar vein.
Re: Judge Consistency Between Events
Quote:
■ This team seems like a "Chairman's Award team in the making." (Community activities, leadership, vision, spirit, etc.)
■ The team is a true partnership between school or organization and sponsors.
■ The team understands what FIRST is really trying to accomplish – realizes that technical stuff is fun, challenging, and offers a future.
■ This team has built a robot appropriate to the game's challenges.

What you are describing, to me, is the Highest Rookie Seed Award, which I assume they won if they were the highest-ranked rookie. The Rookie All-Star, from my understanding (and the Awards section of the manual), is more of a rookie version of the Chairman's Award. (I wasn't at the event, obviously; I just looked this up to find out whether Rookie All-Star was that different at a regional from at our district events.)
Re: Judge Consistency Between Events
Quote:
I can, rather freely, explain the general process that goes into judging, though. [1] Most of this could easily be gleaned by a careful observer; for other things I'm going to be intentionally vague.

- Training
- Pit Interviews
- Short List (I tend to add another round of Pit Interviews and Short Listing just to get another set of eyes on everyone)
- More Detailed Interviews
- Deliberations [2]
- Award Script Writing - NO FREAKING POEMS

I'll work on seeing if I can get a more detailed walkthrough of the process added to the manual for next year. [3] I don't want the process, or what judges are looking for, to be a mystery.

Here's another fun piece of info for you: that deliberations stage is the single hardest part of judging. Know why? Because there's only a handful of awards, and the number of teams >>> the number of awards. Spoiler - we want to give every team an award. Heck, [4] I worked with a judge who had only heard stories of how awesome FIRST was from Jess; [5] she came in and judged at Dartmouth. Well, guess what? Her company NOW sponsors a team. Just from talking to students.

Look, I wanna make judging as transparent as possible. I want teams to feel they understand what went into an award decision. But I've been on the other side of someone leaking info from a Judge Room. I argued for team B to win over team A. Team A found out... only they only heard "he was arguing against giving you the award." Long story short - it was a crappy experience. I stopped volunteering for a while and nearly quit FIRST, it was so crappy. It HAS to be a protected space so that judges can argue without fear of repercussions. I'm not trying to keep the process a secret, only the details.

I'll close with some tips on how to maximize your chances of getting a judged award.

- Read https://frcdesigns.com/2015/07/21/5-...n-more-awards/ Kristine is a former Judge Assistant, current Event Chair, and generally awesome person.
- Be prepared, know the award criteria, know what you want to win.
Ok, you built some baller vision-processing code? Sell the crap outta it, and don't be shy. Go into details! Did you have an issue with a particular filter not working that you worked around? Talk about it. Just remember - some of the judges don't know as much as you do. Explain it to them like they're 5. Plus, that demonstrates you know it.
- Listen to what they're asking you. If the judge is asking about your intake mechanism and you start talking about your FLL teams, you're wasting everyone's time. Now, if you work in "well, our intake was actually based on the intake our FLL team did last year; I was a mentor on the team. We thought back to that problem and..." that's bonus points right there. Because now the judge has in their mind that not only is it cool, but when they're discussing RAS/EI they can go "wait, they learned from that and it impacted their performance as a team." THAT is a cool, memorable story.
- Have cool, memorable stories. How much time do you spend with judges? Ok, now realize they talked to 10 other teams that afternoon. They're overwhelmed with feet per second, shot percentages, OPR, and whatever other technical details. These are people. Talk to them like people. You know what? You have a cool story, you have a favorite part of the bot. Talk about it.
- Don't hand them a binder of crap. A) They have to carry it the rest of the day. B) They have to worry about getting it back to you. C) Dude, distill this to something I can understand quickly. You know what, it's great that you have a record of every shot for any given parameter of your shooter; really, that's cool. But distill it down to an NBA-style shot map and it'll stick in the mind a lot better than tables of numbers.
- Talk to them like human beings. No, seriously, MOST judges are just normal folks at the end of the day.

[1] Caveat - every JA runs things slightly differently; I'll point out where I differ from what I've seen most folks do. There are a lot of good reasons room processes differ, but the biggest one is that each group of judges is different.
[2] This is the part I refer to as "chair throwing time."
[3] I don't make the rules, I just make a lot of noise and sometimes things change.
[4] And this happened outside the judge room, so I can tell this story!
[5] Who is STILL totally at fault for 2Champz /s
Copyright © Chief Delphi