Graph of ETCs, EMCs, and ERCs through week 3

Here’s a graph I just generated of the various contributions.

Please note that they do not align to teams, but rather to the whole data set. I look forward to any insights you may have from the data.

blue is EMC
green is ERC
orange is ETC

A few insights of my own. Minibots are rare, we all knew this, but the EMCs follow a fairly linear curve after the 200 mark.

The best robots are rarer than minibots.
After around 100 on the ERC graph it skyrockets and really follows an exponential curve. Snagging a robot in the top 50 or 75 looks pretty critical.

Just as I’ve suspected all along, come Einstein it will be the robots that differentiate the winners from the losers. The minibot race will be crazy, but I wouldn’t put all my eggs in that basket. There will be many top notch minibots, not as many top notch robots.

Again these are just my opinions on numbers I created. Feel free to ignore them or criticize them. All I have to say is, I’m glad I’m not a driver on Einstein this year. They better have a defibrillator on scene :).

Can we get labels?

Each node represents the value of one of the various ranking algorithms for a single team.

ETC: Estimated Team’s Contribution
ERC: Estimated Robot’s Contribution
*both of those are more for ranking at this point than an actual contribution

EMC: Estimated Minibot’s Contribution
*this is actually calculated by trying to figure out which teams finished where in the minibot race. (I can tell definitively, if an alliance had 1 or 2 minibots score, what places they came in; then it’s a bit of guesswork to figure out which team to attribute which points to)

the height of the blue is the minibot’s contribution,
the height of the green is the robot’s,
and the height of the orange is the whole team’s.

The scale on the left is estimated points…

The bottom is just the various teams’ contributions (each curve ordered by its own metric).

It gives a picture of robots, minibots, and teams in FIRST.

Please note that the robot contribution is a little inflated right now, so it’s not quite a true value, but for ranking purposes it seems to work well.

1000+ teams have competed so far, and this is a snapshot of all of them.

so Teams (x) vs Ranking (y)

Out of curiosity, how are you gauging what the minibot is contributing?

I.e., how are you telling which robots are deploying minibots and when, and how they placed in their minibot race? I wasn’t aware that data was being collected anywhere (other than manually scouting the event).

Okay, well, the Twitter feed FRCFMS includes the minibot bonus in the posts after each match.

One really cool thing is that the only alliance minibot score that can happen two ways is 30:

either first place alone (30), or 2nd and 4th (20 + 10). So if you see a 30, you just check whether the opposing alliance got 45 (30 + 15, i.e. 1st and 3rd), and then you know which places that alliance finished in.
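Here’s a rough sketch of that decoding (Python just for illustration, not my actual code). It assumes at most two minibots can score per alliance, which is what makes 30 the only ambiguous total:

```
# Place values: 1st = 30, 2nd = 20, 3rd = 15, 4th = 10.
from itertools import combinations

PLACE_POINTS = {1: 30, 2: 20, 3: 15, 4: 10}

def possible_places(bonus):
    """All sets of places (size <= 2) whose point values sum to the bonus."""
    return [set(places)
            for r in (0, 1, 2)
            for places in combinations(PLACE_POINTS, r)
            if sum(PLACE_POINTS[p] for p in places) == bonus]

def decode_alliance_places(our_bonus, their_bonus):
    """Pick the placement set consistent with the opposing alliance's bonus."""
    ours = possible_places(our_bonus)
    if len(ours) == 1:
        return ours[0]
    # Only 30 is ambiguous (1st alone, or 2nd + 4th). If the opponents
    # scored 45 (1st + 3rd), our 30 must have been 2nd + 4th.
    for option in ours:
        if any(option.isdisjoint(t) for t in possible_places(their_bonus)):
            return option
    return ours[0]  # fall back if the two scores don't reconcile

print(decode_alliance_places(30, 45))  # {2, 4}
print(decode_alliance_places(30, 20))  # {1}
```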

So based on this I can see which alliance finished in which positions. Then, based on a running sum of each team’s minibot points, I give the alliance’s best placement points to the team with the most minibot points per match, and if the alliance also scored a second placement, the next team gets the other portion.
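The credit assignment within one alliance looks roughly like this sketch (same place values as above; the team numbers and running totals are made up just to show the mechanics):

```
PLACE_POINTS = {1: 30, 2: 20, 3: 15, 4: 10}

def credit_alliance(places, alliance_teams, running_totals):
    """Hand the alliance's decoded places to its teams: best place to the
    team with the highest running minibot total so far, and so on."""
    ranked = sorted(alliance_teams,
                    key=lambda t: running_totals.get(t, 0),
                    reverse=True)
    return {team: PLACE_POINTS[place]
            for place, team in zip(sorted(places), ranked)}

running = {1111: 120, 2222: 35, 3333: 60}          # made-up running sums
print(credit_alliance({2, 4}, [1111, 2222, 3333], running))
# {1111: 20, 3333: 10}  -> 1111 gets the 2nd-place points, 3333 the 4th
```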

It’s an educated guess as to who actually won the race for a given alliance. I have run the data against some actual teams at West Michigan, and will be looking to run it against other teams.

Also this is only for data taken from qualification matches, as that’s the only appropriate place to look for individual contributions.

ERC is total score minus minibot score.
Right now ERC isn’t as accurate to a point value as EMC, but you can tell some things based on the combination. For example, 148 had a weak minibot but a killer robot, whereas many of the best minibots don’t have much of a robot to back them up (not necessarily a bad thing).

Again, it’s just another metric… my hope was to eventually help teams who simply have no scouting division, so everyone can make a somewhat educated decision if they get called to the alliance selection block.

Instead of using ETC rank on the x-axis of the graph, can you use team # and make a scatter plot with a trendline for each of the 3 metrics? I attached one I quickly made with all the OPRs versus team number.





Cool! This basically shows the lower the team number the higher the OPR.

For the record, the x-axis is teams and the y-axis is ETC…

The reason I chose to go the way I did is that it paints a picture of scoring in FIRST as a whole, rather than by team number.

Another thing that is interesting about your graph is point density.

The OPRs for the 3000+ teams are much closer together (fewer outliers) than those of the other 2500, but it looks like for the 3000+ teams OPR correlates closely with team number.

Can you do the same graphs (team #) with your 3 metrics that I did with OPR? More than anything I think the EMC would be interesting.

Unfortunately trendlines aren’t as easy in AS3 as they are in Excel, and I have to head down to Mass for the night. I’ll try to figure it out when I get back tomorrow.

but here’s the minibot plot at least:

It’s very harsh to look at a single statistical measure and say that anyone has a weak anything. A low minibot score could be because of a weak minibot, or a weak deployment system, or because the team has the best minibot and, with 30 seconds left, all 3 opponents stop doing anything else except keeping that team from scoring it, or because the team has such a large tube score that they purposely decide not to deploy a minibot.

Really, the only conclusion you can draw is that 148’s minibot (at Alamo) did not score as many points as other minibots. I’m glad that you are working through the math and providing data. However, speculating as to the reason for the data (as you did when you called the minibot weak) is drawing a false conclusion.

You’re absolutely right. I’m still getting used to making appropriate comments. 148’s minibot score was so low that the minibot had little value in Alamo qualifying matches. This could certainly be because they wanted to maximize qualifying points.

Minibot and deployment are grouped together for me.

The comment was not meant to be harsh but rather a compliment. If I had a decent minibot I’d pick 148 over pretty much any other team.

Also, “weak” was meant to be relative to other top-ten teams. In my book 148 is the most impressive team. Not the most valuable, but the most impressive.

Very cool graph.

As others have noted, the curve you see here is very similar to last year’s, and to the 2008 curve for OPR distributions (2009 was quite a bit different). This shape is very useful for years with “robot”-based scoring. We used this type of shape to figure out what scoring potentials would be for this year.

If you want to ensure that you win more than 50% of your matches without relying on partners, you need to be able to out-score 3 average opponents by yourself: 3X the average team contribution, or around 24 pts this year. This will put you somewhere in the top 100-200 of this graph, or about the top 15%. At a 40-team district, this would be about 6th; at a 50-team regional, 7th-8th, as far as offensive ability goes.

If you want to be able to beat that team plus 2 average partners by yourself, then you need to be at 24 + 2*8 = 40 points. This looks to be around the top 50 or so on this graph, or the top 5%. In theory you would be one of the top 2 at a district of 40, and top 3 at a regional of 60-ish. This would give you a good shot at making it into the finals at most events (but not all).

Now for some sobering news. Top 5%, potentially the best or second-best scorer at your event, is around the top 100 in FRC. At the Championship, 24 teams per division make it into eliminations, or 96 teams across the 4 divisions. On top of that, for many games only 2 scorers are required, and the 3rd position is a specialist role. That means there are potentially 4 x 16 main-scorer slots, or 64 total positions, roughly the top 3% of scorers in FRC; the other 32 elimination slots are often filled with a mix of scorers and specialists.
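Just to show where those counts come from (a back-of-the-envelope sketch assuming 4 divisions, 8 elimination alliances per division, 3 robots per alliance, and 2 main-scorer roles per alliance):

```
divisions = 4
alliances_per_division = 8
teams_per_alliance = 3
scorers_per_alliance = 2   # 3rd robot treated as a specialist

elim_teams = divisions * alliances_per_division * teams_per_alliance      # 96
scorer_slots = divisions * alliances_per_division * scorers_per_alliance  # 64
specialist_slots = elim_teams - scorer_slots                              # 32
print(elim_teams, scorer_slots, specialist_slots)
```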

Going back to the chart, the top 3% looks to be in or around the mid 40s.


When working out your strategy after kickoff, if you decide to go the scoring route (as opposed to specialist), you can use these groupings as guidelines.

The average team contribution will be 1/3 of the average match score (per alliance). The average match score this year seems to be around 24 pts, so the average team contribution is then around 8 pts.

To make elims at a district or regional as a scorer, you need to be in the top 16, or the upper 40%, which is about 1.5X the average team contribution, or around 50% of the average match score.

The top 10-20% should be able to put up that average match score on their own, and should be in the running to be an alliance captain or a first-round pick into the upper 4 alliances. (24 points)

To make the regional/district finals (as a scorer), you need to be planning to achieve the top 5%-10%, which is 5X the average team contribution, or 5/3 of your predicted average alliance score. (38 points)

To be confident of making elims at the Championship, you need an additional team contribution on top of that: 6X the average team contribution, or 2X the average match score. (48 points) Teams will make it into elims with less than this, but if you want to be reasonably certain, this is where you will need to be.
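The groupings above boil down to a few multipliers on the average team contribution. A small sketch if you want to plug in your own numbers (the multipliers and the 24-point average match score are the ones quoted above):

```
avg_match_score = 24                 # estimated average alliance score
avg_team = avg_match_score / 3       # ~8 pts per team

thresholds = {
    "make district/regional elims":        1.5 * avg_team,  # ~12 pts
    "captain / early pick (top alliances)": 3.0 * avg_team,  # ~24 pts
    "district/regional finals":            5.0 * avg_team,  # ~40 pts (quoted ~38 above)
    "confident Championship elims":        6.0 * avg_team,  # ~48 pts
}
for goal, pts in thresholds.items():
    print(f"{goal}: ~{pts:.0f} pts/match")
```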


Estimating an average score: I find it is easiest to estimate an average score early on by estimating an entire match score (both alliances) and then dividing by 2.

Auto: Early in the season expect only 1 uber tube to go up per match on average during qualifying. After week 3, expect this to be closer to 2 uber tubes on average. 6-12 pts.

Teleop: Most matches will have 1 logo on one end and a partial on the other. 18 pts for the logo end, 6 points for the other. Add another 3 pts, as 50% of the time that logo will have an uber under it (6 pts / 2 = 3). Thus expect teleop to garner 27 total points.

Bonus: In weeks 1-3, expect 1 of the 4 minibots to score in about 2/3 of matches; later in the season, 1 of 4 every match and 2 of 4 in about 50% of matches. Thus early = 30*2/3 = 20 pts. Late: (30+50)/2 = 40 points.

These predictions would lead you to:
Early: (6+27+20)=53 total or 26-27 pts/alliance
Later: (12+27+40)=79 total or 39 pts/alliance

These are before penalties. When you read the word penalty multiple times in the manual, assume at least 1 per alliance per match. Thus reduce those scores down to 24 and 36.
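Put together, that arithmetic looks like this (a sketch using the rough per-match numbers above, not measured data; the ~3-point haircut per alliance is roughly what the reductions above imply):

```
def per_alliance(auto, teleop, bonus, penalty_haircut=3):
    whole_match = auto + teleop + bonus        # both alliances combined
    return whole_match / 2 - penalty_haircut   # per alliance, less ~1 penalty

early = per_alliance(auto=6,  teleop=27, bonus=20)   # 53/2 - 3 = 23.5
late  = per_alliance(auto=12, teleop=27, bonus=40)   # 79/2 - 3 = 36.5
print(early, late)   # call it 24 and 36
```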

We were pretty good with the 24. We will see with the 36 (late season).


Last year, the average alliance score was just under 3 points, which meant that the average team netted just under 1 pt/match on average. The leading scorers on Einstein had teams that averaged around 10X that.

If that ratio holds again this year, teams will need to be able to put up 80+ points.

How do you know which team has the most minibot points per match to make this assignment, if all previous assignments were made the same way (using past results)? How do you make the initial assignment (a team’s first minibot deployment) if they have no past data to base it on? It seems to me like a self-fulfilling prophecy: if you score a first-place minibot in each of your first couple of tries, you will receive first-place credit in any future situation where combined points have to be split between two teams.

Also, I assume you’re including all regional data for these results and not just the most recent regional for each team?

Let me know if I’m misunderstanding this.

Wow, that was a great analysis, IKE. I’ve never thought of breaking something down like that.

Yes, this was very interesting. We will see how it plays out. The logic looks sound though, nice job!

As each match gets added, the contributions for all matches are recalculated. So at first you are right, but come Saturday there should be reasonable numbers for each team.

There are some downfalls to the system though. Some teams will steal minibot points just by having the luck of the draw.

All calculations are based off the latest regional, as that’s the truest judge of a team’s current state.

The way it winds up working is that every team has a running minibot score, so the higher your running score, the more credit you get. It’s far from perfect but seems to work well enough.
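In sketch form, the recalculation is just a replay of the qualification matches in order, updating the running totals as it goes (this reuses the illustrative decode/credit helpers from the earlier snippets, and the match objects here are hypothetical):

```
# Hypothetical match objects: each has .red and .blue alliances with
# .teams and .minibot_bonus fields.
def recompute_emc(matches):
    running = {}  # team number -> accumulated minibot points
    for match in matches:  # qualification matches in chronological order
        for us, them in ((match.red, match.blue), (match.blue, match.red)):
            places = decode_alliance_places(us.minibot_bonus, them.minibot_bonus)
            for team, pts in credit_alliance(places, us.teams, running).items():
                running[team] = running.get(team, 0) + pts
    return running
```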