#1 - 22-09-2014, 15:18
Basel A
It's pronounced Basl with a soft s
AKA: @BaselThe2nd
FRC #3322 (Eagle Imperium)
Team Role: College Student
Join Date: Mar 2009
Rookie Year: 2009
Location: Ann Arbor, Michigan
paper: Scouting Accuracy Benchmarking Study

Thread created automatically to discuss a document in CD-Media.

Scouting Accuracy Benchmarking Study by Basel A
#2 - 22-09-2014, 15:19
Basel A
Re: paper: Scouting Accuracy Benchmarking Study

Before opening this up, I'd like to provide some background for the study. It came out of my concern about 3322's scouting this season. We switched from a paper-only system to an entirely redesigned paper-to-digital system between Week 3 and Week 5, and then made further changes to the system between Week 5 and MSC. With so many changes going on, I did not feel confident about the accuracy of our data. Following MSC, I began asking around for other teams' MSC datasets to compare with ours. We collected 4 from elite teams, and the data from these 5 teams became this study.

I decided to allow the teams involved to remain anonymous. If the teams would like to come forward, they're free to do so.

If any other teams would like to add their (MSC 2014) data to the study, I'd be happy to take it. Just send me a PM for my email, and once I've received it, I'll get right on adding your data.

With that, I'd like to open this up for questions or comments.
__________________
Team 2337 | 2009-2012 | Student
Team 3322 | 2014-Present | College Student
“Be excellent in everything you do and the results will just happen.”
-Paul Copioli
#3 - 22-09-2014, 16:08
Caleb Sykes
FRC #4536 (MinuteBots)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2009
Location: St. Paul, Minnesota
Re: paper: Scouting Accuracy Benchmarking Study

This is a great paper; I really enjoyed reading it. In the back of my mind, I have always wanted to run an analysis like this for every team at an event that will give me data, and then present an award to the team with the "best" scouting data. Of course, thinking up grand ideas is far different from implementing them.

The importance of good scouts too often seems undervalued relative to other team positions, and I think that a small team award could go a long way toward giving scouts a more enjoyable experience.
#4 - 22-09-2014, 17:09
Lil' Lavery (Sean Lavery)
TSIMFD
FRC #1712 (DAWGMA)
Team Role: Mentor
Join Date: Nov 2003
Rookie Year: 2003
Location: Philadelphia, PA
Re: paper: Scouting Accuracy Benchmarking Study

Hmm, I was hoping for a more complete data set to try to grade the accuracy of OPR for 2014. Still a nice paper, and an interesting first look at accuracy in scouting.
#5 - 22-09-2014, 18:58
XaulZan11 (John Christiansen)
no team
Team Role: Mentor
Join Date: Nov 2006
Rookie Year: 2006
Location: Milwaukee, WI
Re: paper: Scouting Accuracy Benchmarking Study

The sample size may be small, but did you find a correlation between the amount of data taken and accuracy? (My personal opinion/assumption is that teams generally take way too much data, which makes scouting harder and hurts the quality of the data teams actually use.)
#6 - 22-09-2014, 19:48
Basel A
Re: paper: Scouting Accuracy Benchmarking Study

Quote:
Originally Posted by Lil' Lavery View Post
Hmm, was hoping for a more complete data set to try and grade the accuracy of OPR for 2014. Still a nice paper, and an interesting first look at accuracy in scouting.
I tried to do a project like this back in 2012 using data from 2010-2012 (I used 2337's datasets for this period, 2-3 events per year). While this was somewhat useful in comparing the utility of OPR across different years, I ran into difficulty comparing OPRs directly to scouting data, because extrapolating match scores from teams' statistics is a nontrivial task. OPRs depend directly on match scores, so this extrapolation is necessary for the comparison. For example, in 2011, tube scoring did not scale linearly, and you'd encounter the same problem in 2014 with ball scores depending on the number of assists. However, it's definitely a topic worthy of further study.
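For readers unfamiliar with why OPR depends directly on match scores: OPR is typically computed as a least-squares solve, where each alliance's score is modeled as the sum of its three teams' contributions. A minimal sketch, with invented team numbers and scores (nothing here is the study's data):

```python
import numpy as np

# Each row of A marks the three teams on one alliance in one match;
# s holds that alliance's score. OPR is the least-squares solution
# to A @ opr = s. Teams and scores below are made up for illustration.
teams = [101, 202, 303, 404, 505, 606]
idx = {t: i for i, t in enumerate(teams)}

matches = [  # (alliance teams, alliance score)
    ([101, 202, 303], 60),
    ([404, 505, 606], 45),
    ([101, 404, 303], 55),
    ([202, 505, 606], 50),
    ([101, 505, 303], 58),
    ([202, 404, 606], 47),
]

A = np.zeros((len(matches), len(teams)))
s = np.zeros(len(matches))
for row, (alliance, score) in enumerate(matches):
    for t in alliance:
        A[row, idx[t]] = 1.0
    s[row] = score

opr, *_ = np.linalg.lstsq(A, s, rcond=None)
for t in teams:
    print(t, round(float(opr[idx[t]]), 1))
```

Because the inputs are alliance scores, not per-team scouted counts, comparing OPR against scouting data requires converting those counts back into points, which is exactly the extrapolation problem described above.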

Quote:
Originally Posted by XaulZan11 View Post
The sample size may be small, but did you find a correlation between amount of data taken and accuracy? (My personal opinion/assumption is that teams generally take way too much data which makes it harder to scout, hurting the data teams actually use).
Interesting question! This isn't something I originally considered, but I took a look. With only 5 data points, 4 of which are pretty similar in accuracy, I wasn't optimistic about getting a real answer, and then it got worse: all 5 teams took around the same amount of data! They ranged from 16 to 21 fields of data, none collected exactly the same number, and the most accurate team was in the middle at 18 fields.