FIRSTStar(TM) Rating
Joe Johnson
09-04-2002, 12:05
In thinking about how many teams won multiple regionals this year (and last year, actually), I would like to have a one-stop shopping list of the various regional & national award winners.
Does anyone have a spreadsheet or a database with every award given at every regional FIRST has held?
How about just a list of Championships?
I am thinking about pulling together a sort of "Hall of Fame" document (perhaps as a white paper or perhaps a section on this website).
Maybe a better name would be a "FIRSTStar(TM) Rating" something like MorningStar does for mutual funds.
The idea would not be to rate teams or robots in any one year but to let folks get a picture of a team's historical performance.
I am thinking that one way of looking at the data would be to click on a team number and see the awards that team has won over the years.
Another way would be to let you pick a criterion and compare teams based on it (e.g. list the top 20 teams in terms of winning the Leadership in Controls Award, or list the top 20 teams in terms of winning regionals, etc.).
Once we have a database, it may be possible to have build-your-own-index type queries (e.g. list the top 20 teams that have been Rookie All-Stars and have been Chairman's Award winners or finalists).
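Just to make the idea concrete, here is a rough sketch of the kind of table and queries I have in mind (the table layout and names below are made up for illustration -- nothing official):
import sqlite3
# A made-up awards table: one row per award won (team, year, event, award).
conn = sqlite3.connect("firststar.db")
conn.execute("""CREATE TABLE IF NOT EXISTS awards
                (team INTEGER, year INTEGER, event TEXT, award TEXT)""")
def awards_for_team(team):
    """Everything a given team has won over the years."""
    return conn.execute(
        "SELECT year, event, award FROM awards WHERE team = ? ORDER BY year",
        (team,)).fetchall()
def top_regional_winners(limit=20):
    """Top N teams by number of regional wins (assuming the award text
    is stored as 'Regional Champion')."""
    return conn.execute(
        """SELECT team, COUNT(*) AS wins FROM awards
           WHERE award = 'Regional Champion'
           GROUP BY team ORDER BY wins DESC LIMIT ?""",
        (limit,)).fetchall()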
I think that this would be a worthy thing to keep up because FIRST is growing so fast that many really inspirational teams are popping up all around the country and we really have very little way of learning who they are and in what way they are great.
What do you think?
Joe J.
P.S. (I am addicted to these silly P.S.'s) This came up at this time because my management was interested in knowing how many regionals our team has won over the years -- in fact, I didn't know and had to think hard about how to get the sums to come out right (7).
The next natural question is "Who else has won a lot of regionals and how many have they won?" I have no answer to this question. Shooting from the hip, I think Kingman (60) has won 6 (maybe more), Beatty/Hammond (71) has won 6 that I can count, and I think the Cheesy Poofs (254) have won 6 as well. Beyond that my brain gets even more foggy. There may be more teams that have won 6. How many have won 5 or 4 or 3 or 2 or even 1? We (or at least I) don't know! This is where the FIRSTStar Rating service would be very useful to have around.
What do I think?
I think I am going to eat lunch now.
Oh, about FIRSTStar? I think it would be a very cool and useful tool. Yay databases.^_^
I think this is a good idea. But there are so many great teams out there who haven't won, either because of bad luck or whatever. I think there are lots of teams out there with stories that need to be told. Is there any way that a brief bio about a team could be attached to this? That way the teams who aren't lucky enough to win 7 (wow...) regionals can still get their name out there.
I mean, why limit these team histories to just winners?
(of course, you could also make this searchable by awards/wins/whatever)
Could a white paper do it justice? What about a type of database?
I dunno, just throwing ideas out there...
~JVN
Strategy Head
Team 229 - Clarkson University
"Close but no cigar"
3rd Place Alliance - Buckeye Regional
4th Place Alliance - Canadian Regional
Don Taylor
09-04-2002, 13:45
I agree that this is a great idea. I'll start with the stats from our team, Team 343 M-n-M. Our first year was 2000, when we played in only 1 regional, at KSC, and were first runners-up, losing in the 3rd match of the finals to the Baxter Bomb Squad and Heat Wave.
In 2001 we played in 2 regionals, KSC and Langley, and won both of them.
In 2002 we played in 2 regionals, KSC and St. Louis, and won both of them.
We didn't do particularly well in the competition at the Nationals in 2000 or 2001; however, we did win Rookie of the Year in 2000 and the inaugural Kleiner Perkins Caufield & Byers Entrepreneurship Award at the Nationals in 2001.
Don Taylor
M-n-M
PS: If other teams will post their stats here, I would be willing to put all of the data in a spreadsheet. The only question then is where we store the spreadsheet, but I guess we can figure that out later.
rees2001
09-04-2002, 14:19
WOW,
I thought we were doing pretty well for a 3rd year team. You guys have been rockin'. Here's our list. If you get one of these from each team you'll be working till next season. Maybe have a place we can go in & input our own data. I don't think anybody would lie.
2000 NJ Rookie of the Year
2000 NJ Finalist
2001 SPBLI #1 Seed
2001 SPBLI Imagery Award
2001 Galileo Division Champs
2002 NYC Delphi Award
Originally posted by JVN
I think this is a good idea.
...
(of course, you could also make this searchable by awards/wins/whatever)
...
What about a type of database?
...
~JVN
I'm sure we can make this available at some point.
SOAP Team,
Team 108 - SigmaC@T
www.soap108.com
team222badbrad
09-04-2002, 15:13
This website may help in looking up information.
http://www.waybackmachine.org/
Go to that website and type in www.usfirst.org
This website will take you back as far as 1996 and show you what the FIRST website used to look like (laugh) ;) and it might just tell you who won and who got what awards.
Brandon Martus
09-04-2002, 16:10
Originally posted by soap108
I'm sure we can make this available at some point.
ditto...
wouldn't be that hard to do a little database to keep track of every team & their awards for every year, etc., and then have custom queries.
( # of chairmans awards winners who won 3 regionals, etc)
wait a sec .. what am i getting myself into :) [j/k]
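e.g. something like this against a made-up awards(team, year, event, award) table -- just a sketch, nothing official:
import sqlite3
conn = sqlite3.connect("firststar.db")
conn.execute("""CREATE TABLE IF NOT EXISTS awards
                (team INTEGER, year INTEGER, event TEXT, award TEXT)""")
# Chairman's Award winners who have also won at least 3 regionals
# (assumes award names are stored exactly as written below).
rows = conn.execute("""
    SELECT DISTINCT team FROM awards
    WHERE award = 'Chairman''s Award'
      AND team IN (SELECT team FROM awards
                   WHERE award = 'Regional Champion'
                   GROUP BY team
                   HAVING COUNT(*) >= 3)
""").fetchall()
print(sorted(team for (team,) in rows))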
Mark Pierce
09-04-2002, 16:14
I'd sure like to see this also.
I've combined all of this year's seeding and award results into one spreadsheet, and will try to format it enough to post as a white paper later this week. It seems to me this might be useful for teams heading on to Nationals, or for anyone who is just curious. Otherwise it's a lot of cutting and pasting for everyone interested to have to do.
Hey Joe. Tj² ran into this problem last year, and my solution was to start writing everything down in one long document.
I publish it on our homepage, and if this project gets off the ground, I will gladly add the data to it.
http://www.tj2.org/accomplishments.htm
I don't really see this database as a useful tool for the teams themselves at competitions, but I think it would be highly useful for teams to keep track of themselves, and to allow teams to compare themselves to others.
For instance, Tj² has never won a regional.
Ever.
We have finished 2nd in NJ 3 times, and have finished in the top 10 one time. We have had the number one seed at Florida before, and the inaugural winner of the Woody Flowers Award was our team leader, Mrs. Calef. We have won or placed second at numerous non-FIRST events (Rumble at the Rock, BattleCry), and have won several of the judged awards.
I'm not going to list every award and accomplishment we've gotten; that's the purpose of that webpage, and besides, it'd be very long.
On the whole, we average out very high. Please ignore 2001; we can't explain it. :-)
Mike
Webmaster of Team 88
Mike Carron
09-04-2002, 19:22
I hate to correct my fellow Metal-In-Motion teammate, but we have a few more awards to throw into the mix. Here is the complete list for our three years of existence so far....
2000 KSC Finalist
2000 KSC Motorola Quality
2000 National Rookie All-Star
2001 KSC Champion
2001 KSC Judges Award
2001 Langley Champion
2001 Langley Industrial Design
2001 Langley Incredible Play
2001 National KPCB Award
2002 KSC Champion
2002 KSC KPCB Award
2002 St. Louis Champion
2002 St. Louis Chairman's Award (our favorite!)
2002 National .......
Originally posted by mpking
I publish it on our homepage, and if this project gets off the ground, I will gladly add the data to it.
http://www.tj2.org/accomplishments.htm
...
For instance, Tj² has never won a regional.
Ever.
...
Mike
Webmaster of Team 88
Yea, about that page .. didn't you guys go to the NE regional? :confused: :p
Hey,
I went to the WaybackMachine website (way cool, thanks a lot Brad!) and downloaded a bunch of the FIRST material. I have the list of teams for every year (1992-2001), plus the winners and award winners from every regional (1995-2001), compiled into one .zip file. It's pretty overwhelming; a lot of history there. Now who wants to sort it into a database? haha.
It is posted here-->http://www.clarkson.edu/~vielkije/First%20Archives.zip
That is my Clarkson FTP space, so it should be safe surfing for those who worry...
If anyone has any of the regional results from way back in the day (pre-'95), or the results from 1995 New England (for some reason I couldn't get those), send 'em and I will update the file.
~JVN
Strategy Head
Team 229 - Clarkson University
PS - Is this something that should be in whitepapers?
SharkBite
09-04-2002, 21:16
This is a really great idea...... I've been meaning to dig into my team's history and this gives me an excuse..... We have over 20 trophies from the past 8 years and I think it might be time to track them all down........ We have had a lot of success this year and we finally talked the principal into removing random chorus and football trophies from 10-20 years ago to give us our own trophy case..... To a lot of teams this might not seem that exciting, but our team doesn't get a lot of recognition in our school (I know there are teams out there that know what that feels like too).
Joe Johnson
09-04-2002, 22:51
There may be a way to get the data straight from FIRST.
If we did, we need to think about what type of system we would like to set up.
Any more ideas for useful rating systems?
For example, maybe a Chairman's Award Rating, which would include some combination of winning THE Chairman's Award, winning Regional Chairman's Awards, and being a Chairman's Award finalist.
Any other thoughts?
Joe J.
Originally posted by Joel J.
Yea, about that page .. didn't you guys go to the NE regional? :confused: :p
Umm..... Yea.
Umm.. It's there, honest. Just don't look at the file date.
Hehe. Darn typographical errors :-)
Aaron Vernon
10-04-2002, 13:55
I obviously have too much time on my hands, but here is some of the information that you asked for, Joe. I took the results from the FIRST website from 1998 through 2002. I didn't add anything before 1998 because the team numbers changed between '97 and '98, and I have yet to sit down and match old team names to current numbers. Either way, this is a snapshot of the last 5 years of FIRST.
The spreadsheet has 5 worksheets:
Totals - by Award - This lists all of the winners of every award from '98-'02. (Note that the most times any single team has won a given award is 6: Teams 47 & 254 have each won 6 regionals, and team 67 has won 6 Leadership in Controls Awards.)
Totals - by Event - A list of the winners of every award by event. This allows you to see how often a certain team has won an award at a specific event.
Totals - by Team - A list of every award that each team has won. Note that three teams are tied with 23 awards (teams 16, 47, and 67).
FIRSTStar rating - This is the pivot table with all of the information you could want. It totals up the number of awards each team has won, and it is also used to calculate the FIRSTStar rating (see below for the equation).
FIRSTStar ranking - This ranks all of the teams who have won awards (excluding scholarships, Autodesk awards, and single-person awards) by their FIRSTStar rating.
FIRSTStar Equation
The FIRSTStar rating is a weighted average of a team's awards divided by the number of years that team has competed. Thus, a team with 10 awards in 3 years would rank higher than a team with 10 awards in 5 years.
The key, however, is the way that I took the weighted average. Using some guidelines from how FIRST calculates eligibility for the Championship, I came up with an equation.
FIRSTStar Raw Score = 5 * (Group 1 awards) + 4*(Group 2 awards) + 3*(Group 3 awards) + 2*(Group 4 awards) + 1*(Group 5 awards)
Group 1 awards = National Champion, Chairman's Award Winner
Group 2 awards = Regional Chairman's Winner, Chairman's Award Finalist
Group 3 awards = Regional/Division Champion, National Finalist
Group 4 awards = Regional/Division Finalist, Technical awards (Creativity, Quality, Controls, Tomorrow's tech, Ind. Design), #1 Seed, and Highest Rookie Seed
Group 5 awards = everything else
After you have the raw score, you divide that by the number of years that a team has competed. This gives you your FIRSTStar rating.
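In code form, the arithmetic boils down to something like this (a rough sketch only; the real numbers live in the spreadsheet):
# Rough sketch of the FIRSTStar arithmetic described above.
GROUP_WEIGHTS = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
def firststar_rating(awards_by_group, years_competed):
    """awards_by_group maps group number (1-5) to how many awards of that
    group a team has won; the result is the raw score divided by years."""
    raw = sum(GROUP_WEIGHTS[g] * awards_by_group.get(g, 0) for g in GROUP_WEIGHTS)
    return raw / years_competed
# Example: 1 regional championship (Group 3) and 2 Group 5 awards in 2 years
# gives a raw score of 5 and a FIRSTStar rating of 2.5.
print(firststar_rating({3: 1, 5: 2}, 2))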
FIRSTStar Rating Top Ten
1. 16 (Even if Bomb Squad had competed last year and won nothing, they would still be in 1st place!)
2. 47
3. 67
4. 254
5. 71
6. 343
7. 175
8. 365
9. 111
10. 945
FIRSTStar Stars
I added a relative score from 1 to 5 stars to the spreadsheet. The cutoffs are as follows:
5 Stars - FIRSTStar rating of 5.0 or higher
4 Stars - At least 2.5 but less than 5.0
3 Stars - At least 1.0 but less than 2.5
2 Stars - Greater than 0 but less than 1.0
1 Star - Zero FIRSTStar rating (i.e. no awards)
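The star buckets are just a threshold lookup on that rating, e.g.:
def firststar_stars(rating):
    # Map a FIRSTStar rating onto the 1-5 star scale above.
    if rating >= 5.0:
        return 5
    elif rating >= 2.5:
        return 4
    elif rating >= 1.0:
        return 3
    elif rating > 0:
        return 2
    return 1  # no awards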
This should be a good start for discussion as to which teams are the most successful. We could use this as our own FIRST RPI poll for "handicapping" teams during the season. Luckily, we won't base who wins the Championship on what the computers say.
-Aaron Vernon
Team 224 - PSGA/Piscataway HS
Jason Morrella
10-04-2002, 14:32
Aaron,
That clearly took a lot of time and work - you did an excellent job! I can't believe that a first attempt could be so good. You included many things I would have overlooked. After looking at the criteria, there are only a couple things I could think of which might be worth discussion. Let me know what you think of the following observations/ideas.
Items I might add/change:
Should the average be further broken down by number of events attended in addition to years in the competition?
As USA Today does with high school sports teams, maybe have a national top 10 or 20, but also break it down further into regional top 10s or 20s so that more teams can be recognized?
Personally, I think ALL awards should be included in Group 4. I think the Rookie All-Star, Sportsmanship, Judges', Spirit, etc. awards are of the same value as the technical awards, and those teams should be given the same credit. The qualities behind those awards are very much responsible for making FIRST a "different" type of competition and really set FIRST apart - on a level unrivaled - from other programs.
I would also include the other technical awards FIRST has and is starting to offer, such as the animation award. These are important aspects of FIRST, and the students on those sub-teams learn valuable skills and put in just as much time as those building the robot.
Joe - great idea. All the sports teams at high schools around the country check the USA Today to see how they compare in regional and national rankings with other teams, now FIRST teams can do the same.
Aaron - great job putting these ideas into a working system.
Joe Johnson
10-04-2002, 14:37
A very good start.
Now we need the older data and some multidimensional metrics. I like the 5-star rating as a general idea, but I have to think about the specifics of your formula.
In addition to the overall metric, I would also like some "Chairman's oriented" metrics and some "robot oriented" metrics.
It might even be nice to have some goofball metrics, like a "greatest looking, non-winningest robot" metric, or a "most Judges' Awards" index. I don't know; we've got time to think about it.
Getting the data is a good part of it.
Joe J.
P.J. Baker
10-04-2002, 15:22
It would be a little more work, but I think that the data is there. Because the data is normalized to the number of years competing rather than the number of events, the rankings are a little skewed towards teams that have consistently competed in 3 or 4 events per year. Otherwise, a great job!
P.J.
Jim Meyer
10-04-2002, 15:31
What a great compilation of information!
Kudos to Aaron!
p.s.
I did notice one tiny error (not that anyone would expect such a large collection of information to be void of errors): Team 67 did not win the Rookie All-Star award at the Great Lakes Regional in 2002; we were awarded the Engineering Inspiration Award.
Aaron Vernon
10-04-2002, 15:48
Thanks for the heads up about the mistake. I just realized it -- and it makes more sense that 903 got the Rookie All-Star and not 67.
Let me know if there are any other mistakes and I'll update the file. I'll also make it a bit easier to update (I was in a hurry so I didn't really perfect it).
Luckily, it didn't change anything (not that I could tell).
-Aaron
Adam Krajewski
10-04-2002, 18:16
A very interesting start.
I agree with basing it more on the number of competition events rather than years in competition. Also, off-season competitions could be interesting to add.
For my own personal amusement, I calculated my own personal FIRSTStar(TM) Rating of 3.75 based on the three teams I've been a part of the last four years. Could make an interesting CD Forum membership rating system (to give Brandon a little more work).
Adam
Mark Hamilton
10-04-2002, 20:48
Originally posted by Aaron Vernon
FIRSTStar Raw Score = 5 * (Group 1 awards) + 4*(Group 2 awards) + 3*(Group 3 awards) + 2*(Group 4 awards) + 1*(Group 5 awards)
Group 1 awards = National Champion, Chairman's Award Winner
Group 2 awards = Regional Chairman's Winner, Chairman's Award Finalist
Group 3 awards = Regional/Division Champion, National Finalist
Group 4 awards = Regional/Division Finalist, Technical awards (Creativity, Quality, Controls, Tomorrow's tech, Ind. Design), #1 Seed, and Highest Rookie Seed
Group 5 awards = everything else
After you have the raw score, you divide that by the number of years that a team has competed. This gives you your FIRSTStar rating.
I think all the Nationals judged awards deserve more points.
Furthermore, your data is missing the J&J Sportsmanship Award for 2000, which we (Team 108) won.
Curtis Williams
10-04-2002, 21:06
Sounds like we need to make ourselves an SQL or Access database with all this info. Then people can run their own queries. I'm willing to help.
Joe Johnson
10-04-2002, 21:14
I like the idea of normalizing by the number of competitions.
How about also normalizing by the number of teams at a regional?
I am thinking that perhaps some metrics would have this option included and some would not.
So... winning a 75-team regional would be weighted somewhat more than winning a 33-team regional. I am struggling with what the weighting should be, though -- I don't think I would make it worth (75/33) as much, but it seems that it should be worth somewhat more.
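Just to give folks something concrete to argue about, one possibility is a square-root style weighting, so bigger regionals count for more but not linearly more (the 33-team baseline below is completely arbitrary):
import math
# A possible event-size weighting: a 75-team regional counts for more than a
# 33-team regional, but not (75/33) times more. The baseline is arbitrary.
def event_weight(num_teams, baseline=33):
    return math.sqrt(num_teams / baseline)
print(round(event_weight(33), 2))  # 1.0
print(round(event_weight(75), 2))  # about 1.51, instead of 75/33 = 2.27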
I think the main thing is that we should get the data in a format that lets folks propose various metrics; then we could all compute the result of whatever is proposed and see if we agree with the basic trend.
I have another tricky question. What about teams that sort of split up into several teams? Should the new teams get the credit for the old team's performance or should they start off fresh? I am leaning toward a "New Number, New Stats" policy. Any other ideas on how to handle this?
Joe J.
I don't think the "New Number, New Stats" policy would work. Some teams remained almost exactly the same, but they changed sponsors, and under the old FIRST rules, changed numbers. Is this right? If my understanding is correct, then these teams have quite a bit of history that would not be accounted for, which I thought was the entire point.
Originally posted by Mark Hamilton
I think all the nationals judged awards deserve more points.
Furthermore your data is missing the J+J sportsmanship award for 2000, which we (team 108) won.
I'm not singling out your post, but I think we should hold off on trying to correct the data.
The reason I say this is that I found myself composing a very similar reply. (A few awards were missing from my team as well.)
Before we get into the data integrity stage, let's work out the format. I think this is a good "test" set of data. Let's leave it as it is and concentrate on getting to a final form. I think the Excel sheet is great, but for this to be truly viable, it would most likely need a database back end, serving web queries up on the fly. Although even as I write this, I wonder why it has to be dynamic. Most of the content is static for 10 months of the year; it would only change during competitions. The only case I could make for dynamic data is comparing one team against another.
My only concern is this:
I really hope people don't become too enamored with this. My team is a good example. We've had good years, and we've had off years. Thankfully, the good years (4 of them, 5 if you count this year) outweigh the bad years (2).
Originally posted by Joe Johnson
I have another tricky question. What about teams that sort of split up into several teams? Should the new teams get the credit for the old team's performance or should they start off fresh? I am leaning toward a "New Number, New Stats" policy. Any other ideas on how to handle this?
Joe J.
Conversely, how about teams that have ceased to exist (Plymouth North, for example)?
How do we take them into account? Do we delete them, or keep them on standby? Keep in mind that teams also resurface (TigerBolt).
Joe Johnson
10-04-2002, 21:40
I say we keep teams that are no longer active in the FIRSTStar(tm) system.
Some day (a few decades from now) Chief Delphi may retire -- if they do, I want folks to have a benchmark to hold themselves up to ;-)
Joe J.
Phil Chang
11-04-2002, 18:18
Another thing for this database might be consistency: how many competitions a team has won in a row. Team Las Guerrillas has won 5 in a row so far, and is looking for a 6th (Nationals).
Champions - Kokomo, Indiana - 2001
Champions - For Sweet Repeat - 2001
Champions - Chief Delphi Invitational - 2001
Champions - Buckeye Regional - 2002
Champions - Great Lakes Regional - 2002
-TheChosun
JamesJones
12-04-2002, 12:39
Now that the ratings are coming together very nicely, could they be used by FIRST to influence the pairings during seeding and the division groupings during the competition? I know past performance may not be indicative of future results, but does anyone think 111, 47 or 16 are going to put out a crummy bot next year? We played some robots 2 and 3 times at KSC. Can you imagine how skewed the seeding results would be if some team got stuck playing Bomb Squad 3 out of 9 matches? If we have a past robot performance metric, then the sum of that metric for one alliance pair should be approximately equal to the sum for their opponents in a given match. That way, if you are a medium-performing robot and the computer randomly pairs you with WildStang in one match, it should have your alliance oppose an alliance with a medium-performing robot and, say, 47. This way seeding would be much more performance-based and not as much the luck of the draw. (BTW, I'm not saying all the high seeds didn't perform well, but you have to agree that for many teams there is a lot of luck involved.)
This could also be used to even up the competitiveness of the divisions. The summation of the FIRSTStar ratings in each division should be the same.
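A crude sketch of the balancing check I mean, assuming every team already has some past-performance number (the ratings below are made up):
# Made-up past-performance ratings, just for illustration.
ratings = {16: 6.1, 47: 5.8, 111: 5.5, 180: 3.2, 343: 4.0, 888: 1.1}
def match_imbalance(red_alliance, blue_alliance):
    """How far apart the two alliances are in total past-performance rating."""
    red = sum(ratings.get(t, 0.0) for t in red_alliance)
    blue = sum(ratings.get(t, 0.0) for t in blue_alliance)
    return abs(red - blue)
# A match-maker could re-roll any pairing whose imbalance is too large.
print(match_imbalance((16, 888), (180, 343)))  # 0.0 -> evenly matched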
The only problem with this is that your team gets labelled and may get stuck competing in a class for which you may not be suited that particular year. If you had stellar performance in the past but turned out a bad bot one year, you would be competing with all the other bots that have a history of stellar performance -- in other words, you would get creamed. On the other hand, if you had a rather poor history but turned out a great bot one year, you would rise to the top like a rocket!
Hmmmmmmm.
James Jones
Engineer/Coach
Team 180 SPAM
Originally posted by JamesJones
...
James Jones
Engineer/Coach
Team 180 SPAM
I don't think using the previous year's results to set up matches is a wise idea. I'd rather go with random pairing. I don't know about anyone else, but I wouldn't want to get paired unfairly against other teams just because we did badly one year.
SGopwani
12-04-2002, 14:02
I definitely would not want to have pairings based on previous years' ratings. It seems to me that this would place teams that will likely be strong (47, 111, 308) in much more difficult matches. The result would be that teams with weaker records get easier matches, most likely more points, and thus higher rankings. Sure, this makes it nicer for rookie teams, but it makes the seeding/qualification ranking system somewhat obsolete, as you do not have a list of the best robots... do we really want to punish our strong veteran teams? And do we want the finals/eliminations to be full of teams with weaker schedules? I like the random pairings. I agree that sometimes a team can get hit with a couple of tough matches, but fixing the matches in this way, I feel, would only make the situation worse.
Jim Meyer
12-04-2002, 15:03
I think a better use of data of this type would be to try to ensure that each team in a division had a relatively equal schedule. I know the system would not be perfect but ANY effort to eliminate some of the "luck of the draw" effect would be a step in the right direction.
Even something based on a team's years of experience would help (every team would play with, and against, roughly the same number of rookies, 2-year vets, 3-year vets, etc.).
I think a system like this could go a long way towards having the best teams seed at the top. (We've all seen a few teams who seeded mainly because they were lucky enough to have good partners.)
Jgreenwd1
03-05-2002, 09:31
We were higher on the list than I thought we would be. And good job putting that together; it must have taken a lot of hard work, or a lot of free time ;)
SharkBite
03-05-2002, 11:39
I'm sorry, but the only way to be fair is to keep the matches random..... I can think of a whole lot of complications that could arise from trying to match up teams by quality.
First of all, the highest scoring matches are the ones with the highest quality robots...... and out of that match, one of the great robots has to lose.
Even when a not-so-good team gets carried, most of the time they can't hold the position, and if they do, they usually don't last in the finals.
Besides, isn't this what it's all about? Working with everyone, no matter what.