Not an exact answer to IKE's question, but a move in that direction. He makes a good point: even if the mean robot scores 5 points, the median robot (the 50th percentile) may score significantly less if outliers skew the high end of the field.
I don't have any actual data on how many points robots score per match, so I used OPR, which should do a decent job of approximating the real distribution. All data is from BAE in 2011; OPR was calculated with Bongle's OPR program.
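(For anyone curious, OPR is, as I understand it, just a least-squares solve over match results: each match says "the sum of these three robots' contributions equals the alliance score." Bongle's program handles the real data; here's a minimal sketch with made-up match data standing in for the actual schedule:)

```python
import numpy as np

# Hypothetical match data: (list of 3 team indices, alliance score).
# Real data would come from the actual event results.
matches = [([0, 1, 2], 45), ([3, 4, 5], 30), ([0, 3, 4], 52), ([1, 2, 5], 28)]
n_teams = 6

# Build the alliance matrix A: A[m][t] = 1 if team t played in match m.
A = np.zeros((len(matches), n_teams))
b = np.zeros(len(matches))
for m, (teams, score) in enumerate(matches):
    A[m, teams] = 1
    b[m] = score

# OPR is the least-squares solution to A @ opr = b.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
print(opr)
```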
First up, a histogram of OPR. OPR can be negative, just as a robot's contribution can be negative (more penalties than points). The mean was 10.1 and the median was 6.7, which certainly supports the hypothesis that the median robot is not as good at scoring as the mean robot.
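(Reproducing the summary statistics and histogram is straightforward once you have the OPR vector; a quick sketch, with a synthetic right-skewed stand-in for the real BAE 2011 values:)

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the real BAE 2011 OPR vector (right-skewed,
# allows negatives); substitute the output of the least-squares solve.
rng = np.random.default_rng(0)
opr = rng.gamma(shape=2.0, scale=7.0, size=60) - 4.0

print("mean:  ", np.mean(opr))
print("median:", np.median(opr))

plt.hist(opr, bins=20)
plt.xlabel("OPR (points)")
plt.ylabel("number of teams")
plt.title("OPR distribution")
plt.show()
```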
I then wrote a short script to simulate BAE using the OPR-predicted scores. Interestingly, it did a very good job of simulating the top 50% of the field (the median and third quartile barely moved), but was less accurate for the bottom 50%. You can see it in the dotplot: in real life, teams tended to score 0 or 30 points more often than 15, while in the simulated matches they scored 15 points more often than 0 or 30. You can also see the significant movement (6 points) in the first quartile between the real and simulated matches, while the median and third quartile moved by only 0.2 and 0.1 points respectively.
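(A minimal sketch of that kind of simulation, assuming random three-robot alliances scored as the sum of their OPRs; the real script may have replayed the actual match schedule instead:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in OPR vector, as above; substitute the real BAE 2011 values.
opr = rng.gamma(shape=2.0, scale=7.0, size=60) - 4.0

def simulate_alliance_scores(opr, n_matches=2000, alliance_size=3):
    """Score each simulated match as the sum of three randomly drawn OPRs."""
    scores = np.empty(n_matches)
    for m in range(n_matches):
        alliance = rng.choice(len(opr), size=alliance_size, replace=False)
        scores[m] = opr[alliance].sum()
    return scores

# Compare these quartiles against the real match-score quartiles.
sim = simulate_alliance_scores(opr)
for q in (25, 50, 75):
    print(f"{q}th percentile: {np.percentile(sim, q):.1f}")
```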

From this one regional analysis, it appears IKE is right. Even using OPR to predict alliance scores (and I have a feeling that is less skewed than the true distribution), the median robot scored about 30% fewer points than the mean robot. As an interesting side note, it looks like in 2011 OPR did a surprisingly good job of predicting the scores of the top 50% of the field, and was less good with the bottom 50%.