Chief Delphi > Competition > Rules/Strategy
04-26-2015, 02:43 PM
chandrew
Strategy Sun Tzu
AKA: Andrew Chan
FRC #1410 (GWHS Robotics)
Team Role: Tactician
 
Join Date: Mar 2015
Rookie Year: 2013
Location: Colorado
Posts: 20
Team 1410’s Unique Weighted Standardized Score Data Analysis System

This year, Team 1410 implemented a system for data analysis built on introductory-level statistical methods. Instead of comparing individual variables or simply averaging every robot's raw numbers, we tested a system that produces definitive rankings similar to OPR, but one that weights the individual variables our team valued, shows how far above or below average a robot was in each category, and eliminates some of the skew that often occurs with OPR.

A z-score (or standardized score) takes a particular data value (in this case, a team's average for a category), subtracts the mean for that category, and divides by the category's standard deviation. The result measures how many standard deviations a robot is above or below average. For more detail on z-scores, see http://stattrek.com/statistics/dicti...nition=z_score.
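The calculation above can be sketched in a few lines of Python (a minimal illustration; the team's actual work was done in Excel, and the sample tote numbers below are hypothetical):

```python
from statistics import mean, stdev

def z_scores(values):
    """Return each value's z-score: (x - mean) / standard deviation."""
    mu = mean(values)
    sigma = stdev(values)
    return [(x - mu) / sigma for x in values]

# Hypothetical example: average totes stacked per match for five teams.
# The team that averages 7 totes is about 1.26 standard deviations
# above the field average of 5.
totes = [4.0, 6.0, 5.0, 3.0, 7.0]
print(z_scores(totes))
```

A team with a z-score near zero is about average for the field; positive means above average, negative means below.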

We calculated our standardized scores with the attached Excel spreadsheet, which automatically computes each team's unweighted z-scores, weighted z-scores for a first pick, weighted z-scores for a second pick, and the sums of those scores. The sums are then compiled on a separate sheet in the workbook, similar to the process used for decathlon rankings. In addition to the aggregate scores, each team has an individual score page showing which categories it is above or below average in and why it is ranked where it is. We chose z-scores as our metric because they reflect the field average in each area and let us standardize all of the data into one definitive number across the categories we defined.
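The weighted sums work roughly as follows (a sketch only; the category names and weights below are hypothetical examples, not the weights our spreadsheet actually used):

```python
def weighted_score(z_by_category, weights):
    """Sum of a team's z-scores, each scaled by that category's weight."""
    return sum(weights[cat] * z for cat, z in z_by_category.items())

# Hypothetical team: above average on auto totes and stack height,
# below average on dead bots (fewer dead matches than the field).
team_z = {"auto_totes": 1.2, "stack_height": 0.5, "dead_bots": -0.8}

# A negative weight penalizes categories where a high value is bad,
# so fewer-than-average dead bots raises the score.
first_pick_weights = {"auto_totes": 2.0, "stack_height": 1.5, "dead_bots": -1.0}

print(weighted_score(team_z, first_pick_weights))
```

Separate weight sets for first and second picks let the same z-scores produce two different pick lists.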

The spreadsheet is organized into a main sheet, a rankings sheet, and an individual sheet for every team at the competition. The main sheet contains the raw data for every category. The rankings sheet contains each team's summed z-scores along with additional information for checking the system's accuracy (the number the team was selected at, its final ranking, and its robot type). To sort the scores, select a score column (basic, first-pick weighted, or second-pick weighted) by clicking the letter above it, then use Excel's Sort Largest to Smallest tool; when Excel asks to expand the selection, accept so the team numbers and other columns are sorted along with the scores. The individual sheets show each team's labeled averages, z-scores, weighted z-scores, etc.
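The rankings-sheet sort amounts to ordering teams by their summed score, largest to smallest (team numbers and scores here are hypothetical, purely to illustrate the step):

```python
# Each row pairs a team number with its summed weighted z-score.
rankings = [
    (1410, 3.95),
    (2240, 1.10),
    (1986, 5.20),
    (303,  2.75),
]

# Sort largest to smallest by score, keeping each team number with
# its score -- the equivalent of Excel's "expand the selection."
rankings.sort(key=lambda row: row[1], reverse=True)

for team, score in rankings:
    print(team, score)
```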

There are a number of changes we want to make to the system: making it easier to edit and link within Excel, spending more time perfecting the weights before competition, and reducing scouting errors by changing how we collect match data. For the last point, we plan to investigate several methods, including a text-message-based data transfer process like teams 1986 and 303 use, since we want to avoid the data loss caused by poor wireless quality. Any suggestions are greatly appreciated.

Overall, the individual sheets proved more important in our final decisions, because our needs changed as the competition progressed and the weights became less reflective of what we needed to succeed (e.g., stack height and stack-capping ability outweighing auto totes). These individual values were especially valuable in selecting Team 2240 as our second pick: although they had a below-average stack count and an above-average number of dead bots, which skewed their aggregate scores, their above-average tote stack height marked them as a good candidate to build and cap four stacks. That, along with high standard deviations in those categories, made them a good second pick for our picking position. In the end we were very satisfied with the analysis and its results: we were a 7th-seeded alliance captain that made it to semifinals, and we plan to continue developing this system in the future. Feel free to send me a direct message or email me at achan1861@gmail.com with any questions about this system or our strategy in general.
Attached Files
File Type: xlsx Data Analysis.xlsx (188.8 KB, 50 views)