[FRC Blog] "Standard District Points System for 2015"

Blog Date: Tuesday, February 3, 2015 - 15:01

As I’ve said a few times, the Standard District Points System will be made an official part of the FRC Manual this season, and in future seasons. It will come as a new Section 7 of the FRC Administrative Manual that deals broadly with advancement, not just for Districts, but for Regionals as well, covering such topics as Wild Cards. That new section is under final review. I apologize for the delay. In the meantime, folks are understandably anxious about what the points system will look like for this season, with our move away from Win/Loss/Tie in Recycle Rush, so I want to provide you with those details even though the manual section itself isn’t ready yet.

The updated system was developed by a team of two volunteer representatives each from all five Districts, along with some FRC staff members. We started work a few weeks before the game was released, so all volunteers from the Districts were asked to sign Non-Disclosure Agreements (NDAs). They never saw the full details of the game in advance of Kickoff. They knew that ranking would be based on average points, not Win/Loss/Tie, and that it would not be a low-scoring game, but knew very little else about the game itself. We asked them to sign NDAs because we considered even the fact that we would be basing rank on average points to be very confidential information.

The folks below were members of the team:

  • Indiana District representatives – Chris Fultz, Liz Smith
  • Mid Atlantic Robotics District representatives – Ben Ng, Ed Petrillo
  • Michigan District representatives – Dan Kimura, Jim Zondag
  • New England District representatives – Bruce Linton, Jamee Luce
  • Pacific Northwest District representatives – Leo Conniff, Kevin Ross
  • FRC Staff representatives – Lori Burkhamer, Danny Diaz, Miriam Somero, me

I want to thank everyone for their work on the team. This, in my mind, was a ‘dream team’ – everyone came to the table with an interest in doing the best job possible for the community as a whole, and our discussions were open, honest, friendly, and productive. Best of all, I think we ended up in a really good place! We recognized that the current points system works, and we were determined to make minimal changes in adapting it for a game not dependent on Win/Loss/Tie.

You can find the details of the 2015 FRC Standard District Points Ranking system here. Note that Michigan will be following a slightly modified points system at their Championship, because of the large number of teams they will be inviting. This modified system has not yet been finalized. FRC has agreed to Michigan’s verbal description of the system, but we are awaiting a written summary so we can review carefully before giving final approval. I would expect we’ll be able to release the system that will be in use at the Michigan District Championship within a few weeks.

The rather complicated-looking formula shown in the summary document was developed initially by FRC’s own Danny Diaz, FRC Systems Engineer, and modified with the support of the more mathematically-minded volunteers in the group. The intent was to formulaically come close to emulating the points distribution we had seen at events under the traditional Win/Loss/Tie system. Danny has put together a summary of how this formula works, along with representative points values for different size events, here.

Hope everyone is having a great build season so far!

Here’s the equation for input into Wolfram:

erf^-1((N - 2R + 2)/(1.07*N)) * 10/erf^-1(1/1.07) + 12

where N is the number of teams at the event and R is a team’s qualification rank.

So much for something uniform and easy to understand.

While this is pretty confusing at first glance (and second and third, probably), for a given event size the point values are predetermined for each rank. So the equation can be used to generate all the values for each event, and everyone can just look at a nice, simple chart.
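For example, a few lines of Python will spit out that chart for any event size. This is just a rough sketch of my own (it leans on scipy for the inverse error function, and the names are mine, not anything from the official materials):

import math
from scipy.special import erfinv

ALPHA = 1.07  # spread constant from the published formula

def qual_points(rank, num_teams):
    # Qualification points for a given rank at an event with num_teams teams.
    raw = erfinv((num_teams - 2 * rank + 2) / (ALPHA * num_teams))
    return math.ceil(raw * 10 / erfinv(1 / ALPHA) + 12)

# Chart for a 40-team event: rank 1 gets 22, mid-pack gets ~12, last place gets 4.
for rank in range(1, 41):
    print(rank, qual_points(rank, 40))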

It’s a little confusing at first, but it’s really not that complicated when you understand what is happening.

Probably the best way to simplify it is that it’s like a professor deciding to grade on a bell curve. Same concept, just a slightly more complicated equation to fit the data parameters FRC requires.

Wow. That took a few reads to understand. It’s almost like they’re trying to make it difficult.

I laughed a little bit while reading some categories, like “Teams on Alliances Advancing Playoff Level” (sounds like it’s missing a word) or “Alliance Selection Results After Alliance Selection is Complete”. These names seem very complicated and over-descriptive for what they describe.

Also, the use of the inverse error function is just ridiculous. It doesn’t need to be this complicated! I’d rather just have a table of values that I can look up, or a much simpler, more understandable polynomial function that approximates the one in the document.

A requirement for game design is that it be easy to understand and explain. The inverse error function is neither simple nor easy to understand. Most high school students have no clue what it is, so why was it chosen?

At what point is it unreasonably late for FIRST to release the criteria that will be used to determine a team’s advancement to the next level of competition? Honest question here: there’s no deadline, and they don’t seem to be in any rush to give us information. It doesn’t seem unreasonable to think that Michigan teams will be competing before their point system is established!

First takeaway: Why was the QS capped at 22 pts instead of the previous 24? Also, why is the minimum 4 instead of 0?

Not that it matters, but it does strike me as odd.

So max pts this year is now 83 w/ Chairman’s, 73 w/out at District events, instead of 85/75.

This is the backend. Yes, teams can look at it, but realistically I’d anticipate most will just be looking at the chart for their event size and not worrying about the math behind it.

Would it have been nice to see at least that points would be based on rank earlier? Yes. But lacking knowledge of the timeline and how much effort went into this, I’m very reluctant to claim FIRST should have done anything differently.

A question for District teams: if you had this information at the beginning of the season, would it have changed your strategy for this year’s game?

Here is a quick calculation script I threw together. It requires Python 3. The inverf implementation is not mine and comes from here: http://johnkerl.org/python/normal_m.py.txt

I might extend it to generate tables in the future, but it works well enough for now. Let me know if you find a problem with it. It is a .txt because CD doesn’t allow .py files.
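(This isn’t the attached file, just a sketch of the same idea for anyone who doesn’t want to download it: Python’s standard library has math.erf but not its inverse, so one option is a few Newton iterations, no scipy or borrowed code required. The function names below are my own.)

import math

def inv_erf(y, tol=1e-12):
    # Invert math.erf via Newton's method; valid for -1 < y < 1.
    x = 0.0
    for _ in range(100):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        # d/dx erf(x) = 2/sqrt(pi) * exp(-x^2)
        x -= err / (2 / math.sqrt(math.pi) * math.exp(-x * x))
    return x

def district_qual_points(rank, num_teams, alpha=1.07):
    # Qualification points per the published 2015 district formula.
    return math.ceil(inv_erf((num_teams - 2 * rank + 2) / (alpha * num_teams))
                     * 10 / inv_erf(1 / alpha) + 12)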

district_points.txt (4.42 KB)

The answer to the first question is that, even in a WLT system, perfect 24s are pretty rare. In MAR, according to this, 23s have only been done twice, and in theory a 24 should only have happened about once at this point. (Though it would be interesting to see what difference a WLT system would have made to this game’s rankings.) More matches per team make it harder to post a perfect record.

And regardless of the reason for the upper cap, the lower cap probably exists to keep the average score at 12. If the average score from qualifying dropped below 12, teams that win judged awards or eliminations, or that are new, would gain a slightly greater relative advantage from their 5 or 10 bonus points, and re-adjusting those point values to compensate would make the system even more complicated.
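(A quick sanity check on that, using the published formula with scipy’s erfinv; the event sizes below are just examples I picked. The qualification points do average out to roughly 12, landing a little above it because of the ceiling.)

import math
from scipy.special import erfinv

def qual_points(rank, n, alpha=1.07):
    # Qualification points for a given rank at an n-team event, per the published formula.
    return math.ceil(erfinv((n - 2 * rank + 2) / (alpha * n)) * 10 / erfinv(1 / alpha) + 12)

for n in (30, 40, 60):
    avg = sum(qual_points(r, n) for r in range(1, n + 1)) / n
    print(n, round(avg, 2))  # averages come out a bit above 12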

Here’s a spreadsheet that uses an approximation method of InvERF(). It’s not 100% accurate, kind of a hack actually, so let me know if you have any improvements.

Ranks.xlsx (29.2 KB)

Agreed.

What, you mean like in Frank’s Blog on Kickoff Day?

http://www.usfirst.org/roboticsprograms/frc/blog-changes-to-district-points-2015

This Wolfram Alpha Widget: http://www.wolframalpha.com/widgets/view.jsp?id=24107b5c48ce7876152fb3bb85e071b4 should calculate it if I put in all the equations properly (I’m not sure if I completely understand how the district ranking works, so there may be errors). It doesn’t require any other programs to run (I know some students who don’t have Excel so I decided not to try and figure out the equations in it). Please let me know if you catch any problems in it and I’ll try to fix them.

You both forgot the Ceiling Function.

So I just did this for us, and it seems like this would be a small net increase for the teams at the top of the rankings. We went 11-1-0, 10-2-0, and 10-2-0, with a 1 seed, 1 seed, and 3 seed. So we got 22, 20, and 60, for a total of 102. With this structure, we would have gotten 22, 22, and 63, for a total of 107. So if you were consistently at the top seeds, this most likely will be a net increase in points, especially at tougher events, where the 1 seed might only have had 10 wins. However, the top teams are not usually the ones where 5 points makes a difference, so it most likely would not have much of an actual effect at the top.

Thanks for the catch! I believe it’s fixed now, but I don’t know what the results should look like, so I can’t really check. Please let me know if you catch anything else.

The same link should work: http://www.wolframalpha.com/widgets/view.jsp?id=24107b5c48ce7876152fb3bb85e071b4

Thanks for catching that. Like I said, I’m not in a region with districts, so I tend to miss some of this stuff. Frankly given that, I’m even more confused by some of the complaints in this thread.

In the GameSense interview, Frank mentioned that only 0.6% of all teams scored 24 points last year. I guess it would have been nice to have a perfect comparison to last year, but ignoring the 0.6% doesn’t seem devastating to me.

In the summary document, it says “typically-sized district events” would have a minimum of 4 points. However, in the explanation document the tables show that at around an event size of 55 the minimum drops from 4 to 3. Using Rachel Lim’s district points widget, it looks like 3 is the minimum going all the way up to a tournament size of 1,000. If you look at the explanation document, the scale factor in the formula is 10, but the offset added at the end is 12; that says to me the formula is built to never go below 2 points, and because of the nature of the inverse error function (going to infinity at the limits and all) and the use of the ceiling function, you will never go below 3 points, period.

Though, I must admit, it would be a crushing blow for a team to show up at an event and get no points whatsoever. Even if they don’t have aspirations of making it to the District Championships, getting a goose-egg is deflating.
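(For what it’s worth, a quick brute-force check of that floor, again leaning on scipy’s erfinv and my own naming, backs up the “never below 3” reading:)

import math
from scipy.special import erfinv

def last_place_points(n, alpha=1.07):
    # District qualification points for the team ranked dead last (rank n) at an n-team event.
    return math.ceil(erfinv((2 - n) / (alpha * n)) * 10 / erfinv(1 / alpha) + 12)

# Scan event sizes from 20 up to 1,000 teams; the smallest value found is 3.
print(min(last_place_points(n) for n in range(20, 1001)))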

I think a one-sentence version of Danny’s white paper really ought to go into the manual, with a graph (and a link to the white paper!). This function is going to need to be explained several thousand times this season, and teams, volunteers, and the public seeing math as a scary black box is good for no one.

Something along the lines of "This function assumes teams are approximately normally distributed, and awards points based on rank. The #1 seed will receive 22 points, the middle of the pack ~12, and the lowest ranked teams ~4."

Then a plot for a 40-team event, with rank on the x-axis and points awarded on the y-axis.
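Something like this would do it (a rough matplotlib sketch, again assuming scipy for the inverse error function; the 40-team size matches the example above):

import math
from scipy.special import erfinv
import matplotlib.pyplot as plt

ALPHA = 1.07
N = 40  # example event size

ranks = list(range(1, N + 1))
points = [math.ceil(erfinv((N - 2 * r + 2) / (ALPHA * N)) * 10 / erfinv(1 / ALPHA) + 12)
          for r in ranks]

plt.step(ranks, points, where="mid")
plt.xlabel("Qualification rank")
plt.ylabel("District points awarded")
plt.title("Qualification points by rank, 40-team event")
plt.show()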

To clarify, 23s have only been done twice. 22s are a little more common.

Oh, and I made a spreadsheet showing how the QS points will be distributed by rank for every MAR event. The MAR events are all similar in size this year, so it doesn’t change much event by event.