I think it was because the teams didn’t have their firmware updated. I’m not the most knowledgeable when it comes to that type of stuff.
That would be in another thread, please! This discussion is about how we might evaluate the system and how to collect data with which to evaluate it, not its theoretical merits! I find it incredible that anybody is still debating the theory of such a system when there are hard facts available for the collecting! What does anybody’s opinion count for against hard experimental data? We don’t debate whether gravity should or shouldn’t exist, we don’t vote on astrophysics, and I’m stumped as to why we should try to debate our way out of this!
I’m not trying to pick on you here, I know other people are doing it, but please knock it off.
That is very important data! Since we do not have Thursday matches in this system, maybe we need to make running the robot on the field part of the inspection process.
I should say that I am sorry; I quickly went to state my view of the delay, and then jumped into a topic that I wasn’t prepared to talk about. It will be deleted.
The delays that I know about:
Firmware upgrades: Teams didn’t keep up on the firmware, some teams have issues with network connectivity, and regional events were not allowed to broadcast Wi-Fi, so I don’t think that extra Thursday would help.
Rebooting: The FMS was rebooted after every single match. Is this standard procedure?
Communication: A catch-all. A team gets disabled and starts running autonomous code; two teams press disable at the same time and disable the whole alliance; a team randomly goes into autonomous while the MC is on the field; etc.
My questions:
1: To Michigan teams: Do you miss traveling out of state to compete? (This is mainly for those big teams that chose not to travel this year.)
2: To teams outside of Michigan: Has the competition dropped off a bit compared to last year, when Michigan teams competed?
3: To Michigan teams: How much has the competition level dropped from last year’s regional to this year’s district events?
4: To teams outside of Michigan traveling to Championships: Do you believe the district system in Michigan gave these teams any advantage or disadvantage over the other teams attending the Championship event? Did the competition level displayed by the Michigan teams match your expectations, or did it vary?
Success can be measured in different ways, and better competition for everyone is good. This really boils down to one simple question:
is Michigan favoring quantity over quality?
So, let’s play the numbers game with this, starting with team growth:
How has Michigan’s growth rate changed, relative to prior years and to other parts of the US/Canada? (I don’t mean to exclude the rest of the world, but the data set there is too small.) Are more rookie teams forming? Are more veteran teams being retained?
There is also a question of costs:
Can teams use smaller budgets because of cost savings? How much money that used to go to regional fundraising is now going to teams? Was the $1000 reduction in registration a factor in the formation of some rookie teams?
Now a question of publicity:
Is the travel divide between Michigan and non-Michigan teams significantly impacting the quality of given events? (Do recall that the Michigan boundary was established out of convenience of geography) Is the increase in local events helping teams get more press coverage? Is the glamor-reduction at events causing negative media coverage? On the topic of travel: what is the likelihood that a team will leave FIRST because of local competitions versus what is the likelihood that a team will start because of a local competition?
Now, to answer the questions:
Growth rates can be calculated by dumping data out of FIRST and meticulously sorting it, and then analyzed given economic conditions of the US/Canada. It can be done eventually.
Costs are tougher to determine - most teams don’t publish their budgets. Event costs may be made available through FiM - as I understand it, 7 districts are operating for the cost of one regional. In addition to cost, one must eventually consider the concept of “value” - how much bang did teams get for their dollar? Two local events for $5000 versus one event for $6000 seems like a huge difference.
Publicity will be much harder to quantify - counting news articles might not be feasible or even relevant. Surveys might not yield enough data unless all teams respond (truthfully). The value of the data might also become a question - are rookie teams’ inputs more important than veterans’? How much do we care about what non-Michigan teams have to say?
What makes matters worse is that these aren’t even all of my questions. But it’s a start.
After attending the Rookie Regional and Traverse City, I feel that FiM is a step in the “correct” direction. That is just my opinion; here are some criteria I feel need to be evaluated.
Are some events considerably weaker than others? Yes, some events are going to be weaker. This could be due to timing: first-week events are generally less competitive than sixth-week ones. But I am talking about geographical areas causing weaker winners. The winners from one event should be roughly equal to the winners from another event.
Does it benefit teams? Do we see more teams playing at higher levels? “Higher level” is relative to the skill of the team in question. Obviously I don’t expect every team to play at the level of 217, but I do expect gradual improvement over the course of the season; that would be a true measure of this experiment’s impact. A team that barely moves at their first event needs to at LEAST score one ball at their second. That is a measurable improvement, and I feel we should all strive to improve between events.
Long term, one concern I have is that because we are organized by geographical districts (more or less), we will become very separate. For example, let us say that next year 85 comes out and wins Traverse City AGAIN. I’m sure BOB is a great team (in fact, after playing with and against them, I KNOW they are), but I do not want to get into a situation where the State Championship becomes essentially the same teams over and over. With luck, this structure will allow teams that are not perennial powerhouses to gain some ground.
A small concern of mine is that the depth of field will diminish. By the fourth alliance’s third pick, teams should still be picking teams they WANT, not just whichever machine is the least crappy one left. (Not saying they do or don’t now, just that it is something I never like to see.) If this situation occurs frequently, then there is a failing in the system.
I have some concerns about the structure, but I feel the goals are in the right direction. Best of luck and I cannot WAIT to compete instead of just watch.
FIRST is about inspiring young people to think about careers in science, technology, etc. How do you objectively measure the effect of the MI format on inspiration? In my opinion, it would be important to measure the format’s effect on the expansion of the program (perhaps measured by the percentage of students in a region that have the opportunity to participate). This would take some time to do.
There is a concern about “quantity vs. quality”, but what qualities are we talking about? If the new format led to a situation that drove students away from science and technology that would be a bad thing. If the average quality of the robots was diluted by a bunch of newbies and under-resourced teams, so what? This program isn’t about us mentors and it isn’t about the robots. The major learning benefits come during training, planning, and build - which isn’t directly affected by the competition format.
As for the two-day format, the lack of a Thursday practice day increases the risk of schedule delay due to undiscovered field issues, and increases the likelihood of “no-shows” due to unresolved robot issues. Those could be measured easily enough.
I don’t think it is valid to associate field issues and the MI format. Any delays and communication problems at TC were apparently no worse than anywhere else. For the most part, they were associated with the new control hardware.
We were at Traverse City this weekend. Thursday evening check-in and inspection was a big help. We initially had communication problems due to firmware updates (thanks to Jim Sontag for fixing us up). Again, not specifically a MI format issue. It would be nice for teams attending their first event to be able to do a functional test on Thursday evening.
Our students don’t seem to care about missing the out-of-state travel experience (not that we had money to go to a second event anywhere with the standard format!). Anything that involves getting out of town, staying in a hotel, eating in restaurants, and hanging out with other friends is OK by them.
I like the MI program. Anything that gets us into 3 competitions for less money than 1 is alright by me. Last year, we only had money for one regional. Our robot was badly damaged on Friday morning, and wasn’t fully back up until Saturday morning - that squelched most of our “competition season”. It wasn’t particularly inspirational. The MI format would have helped us a lot.
I haven’t read every post here, but I thought I’d throw in my 2 cents about something I like from the Michigan events that may be overlooked.
Our team participates in two regionals every year, both an hour away from our school. Convincing our school’s administration, students, teachers, and parents to drive an hour to watch a robotics competition is extremely hard. District events would give us more local competitions, so we could get much more interest from our local area.
Yes, this is correct. It could have been easily avoided if the teams understood the requirements BEFORE entering the arena. Our driver station firmware was also not up to date, but we caught it before opening ceremonies.
One success already, and I realize this is only anecdotal, not firm facts.
I heard one of the host coordinators say the Traverse City District cost $7,000 to put on. Compare that to the estimated $150,000 - $250,000 it would have cost to stage another regional; that’s a real success right there. Most of this was due to the hard work of the organizing committee and the support of the community. For example, we volunteers were fed very well by a different catering company at every meal. The caterers also sponsored the event; our excellent lasagna supper cost the committee $2.00 per head.
I still share some of the concerns brought up in this thread, and a few others. I’m willing to withhold judgement until sometime after the season is over.
Withholding judgment until all the facts are in is wise. Not figuring out your result categorization criteria until after the event is data mining, not experimentation. I wasn’t trying to ask you to judge; I was asking how you plan to judge when the time comes.
Some small number of facts can be sifted from the opinions and congratulations here: http://www.chiefdelphi.com/forums/showthread.php?t=75311 My short summary of these facts so far (current through post number 8):
Michigan Feedback: (broken up by category and then tallied by how many times said)
- Event quality: votes split between “equal” and “lower”
- Venue: smaller venue made it easier to keep track of team members
- Refereeing: 4 refs too few? (2), poor reffing
Possibly Everywhere Feedback: watchdog error? (1918), field reset slow, score-count method bad (disagreement among counters), score-count method questionable
I think that’s interesting stuff. I take the ref comments with a grain of salt personally, as very few people, as a general rule, go home saying “man, those refs were awesome!” But I mostly want to leave you guys to make your own conclusions from whatever data is available.
Checking usfirst.org, it seems that the qualifier matches are not listed. Can somebody who is going to Kettering make a note to try to bring home this data for study? Does anybody have this data for TC, maybe in their scouting database? It would also be great to know how many robots are not making it onto the field per match. That is really important stuff.
This is up http://www2.usfirst.org/2009comp/events/GT/rankings.html It looks like that 12 qualifiers number was pretty accurate across the table at TC. How did they even it out for the two 11’s? Anybody know?
The number 12 looks pretty good compared to 8 in NH http://www2.usfirst.org/2009comp/events/NH/rankings.html, Ohio http://www2.usfirst.org/2009comp/events/OH/rankings.html, Midwest http://www2.usfirst.org/2009comp/events/IL/rankings.html, Oklahoma http://www2.usfirst.org/2009comp/events/OK/rankings.html or 7 in Kansas City http://www2.usfirst.org/2009comp/events/KC/rankings.html, NJ http://www2.usfirst.org/2009comp/events/NJ/rankings.html and DC http://www2.usfirst.org/2009comp/events/DC/rankings.html
I guess this pretty well provides a piece of an answer for everybody who asked “will teams play more under the new system.” Yes, TC seems to be a regional and a half of qualifier on-field time for half the price.
I need to find a better way to process that data. I think it can answer the other questions too.
edit: holy crap, guys! That 12 matches per team was with thirty-eight teams. How on earth did you do that?
This is not that difficult, even at official FRC regionals, if you have fewer than 40 teams.
UTC New England regional had 35 teams in 2005, and every team had 12 qualification matches.
You’re totally right. It seems that I was momentarily struck with a total inability to do basic math. I estimate 76 matches at TC. That’s not too far outside some other events (at a glance, BAE had 62, Buckeye did 78, etc.).
The average FIRST event has 70 - 80 matches. This means there are 480 slots (at an 80-match event): 480/40 = 12. This was by design, not by accident or some miraculous feat of efficiency.
One thing that is impressive, though, is that this means you are playing every 6-7 matches on average, which puts the pace on par with playing Saturday afternoon. Ask anyone who has attended the Detroit Regional in the past. The good news is you play a lot of matches. The bad news is you had better be fast at repairs, because you are playing every 30 minutes. With a 10-minute request queue, this only gives you about 20 minutes to fix things. Can anyone say “I need a hacksaw, 4 feet of aluminum, a riveter and 2 drills STAT!”?
Oh, and that was a 5-match break on average. Sometimes you will play and then play again 3 matches later. By being on the field, you are actually late to queuing.
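The pacing arithmetic in this post can be sketched in a few lines. The 5-minute field cycle and 10-minute queue time below are assumptions for illustration, not official FiM figures:

```python
# Rough pacing model for a ~38-team district event. The 5-minute field
# cycle and 10-minute queue time are assumptions, not official figures.

def repair_minutes(teams: int, cycle_min: float = 5.0,
                   queue_min: float = 10.0) -> float:
    """Average time left for repairs between a team's matches: with 6
    robots per match you play roughly every teams/6 matches, and part
    of that gap is spent in the queue."""
    gap_matches = teams / 6              # average matches between your plays
    return gap_matches * cycle_min - queue_min

print(round(repair_minutes(38)))  # about 22 minutes to grab that hacksaw
```

A bigger regional stretches the gap: at 60 teams the same model gives roughly 40 minutes between plays, which is why the district pace feels so much like Saturday-afternoon eliminations.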
I am pretty sure I saw the matches listed on Friday night - but I can’t verify that for sure. If they went missing, that is not an indication of anything wrong with the Michigan district model. BAE qual matches are missing too: http://www2.usfirst.org/2009comp/events/NH/matchresults.html That’s a FIRST website problem, not a Michigan problem.
Edit: TBA has them http://www.thebluealliance.net/tbatv/event.php?eventid=237
This is up http://www2.usfirst.org/2009comp/events/GT/rankings.html It looks like that 12 qualifiers number was pretty accurate across the table at TC. How did they even it out for the two 11’s? Anybody know?
If you don’t show up at the field, you haven’t played all of the matches you’re assigned to.
The number 12 looks pretty good compared to 8 in NH, Ohio , Midwest, Oklahoma or 7 in Kansas City, NJ and DC
I guess this pretty well provides a piece of an answer for everybody who asked “will teams play more under the new system.” Yes, TC seems to be a regional and a half of qualifier on-field time for half the price.
Plus, all those teams get to go to **another **district event and play another 12 games - for $1000 less than going to one Regional.
edit: holy crap, guys! That 12 matches per team was with thirty-eight teams. How on earth did you do that?
76 matches. Originally scheduled 11:30-12:52 and 14:00-18:52 on Friday, 9:00-11:12 on Saturday. Field problems pushed 9 matches from Friday night to Saturday morning, even going about an hour late on Friday. There was a little over an hour slop time in the event schedule on Saturday which was taken up by the longer running of the seeding matches. Elims started less than 1/2 hour late.
All the rest of the stuff you cited is subjective opinions. Walt mentioned the survey that has been commissioned. What other kind of documentation do you want?
I can see the elimination matches now, but not much else. I’m not trying to say the lack of data was Michigan’s fault.
Yes, I understand that. The “not showing up” was hopefully going to be a different metric, namely how well the robots were holding up to the pace of that sort of regional setup and also how ready the teams were for competition. I didn’t think that was reflected in the data online from the usfirst website. Are you saying it was?
It’s pretty epic…
Yeah, grabbing facts from random things people say seems a pretty non-scientific way to do things, but I am at a loss for the most part as to how to find any data on the non-numerical aspects of this without a webcast or some plane tickets and free weekends. I would really love to get my hands on something better if it is available.
The survey sounds really promising. I guess the real answer to “what other sort of documentation do you want?” would depend a ton on what is and is not covered in that survey. Really right now I am searching for sufficient facts to answer the questions that have been asked here. Here are a bunch of them.
I looked into the usfirst.org team profiles for the rookies, but those are filled out so early in the season that I fear they won’t be terribly useful. I guess we need some data on rookie teams; that can be hard, numeric data. For better or for worse, the rest looks somewhat hard to boil down. Volunteer needs might be calculated from other data such as delays, but since there are a lot of problems that can’t be fixed by piling more volunteers on, I don’t think that’s a valid metric. Perhaps we can get the raw number of volunteers and raw number of man-hours and compare them to other regionals of similar size? That will tell us whether they are getting at least as many, but it won’t say whether that is enough.
1 and 10 look pretty well answered for TC thankfully. 6 can be hopefully covered with the existing survey. Not sure how to do some others. Ideas?
1 and 2 are pretty self-evident; for 3, I actually don’t understand what it means.
These ones are all quality things. No bright idea has come to me yet as to how to tackle them.
Alex outlines his questions, and how to solve them (the ones he knows how), really well. I honestly would be quoting his entire post word for word here if I tried to summarize, so here is the link back again: http://www.chiefdelphi.com/forums/showpost.php?p=829195&postcount=25 I haven’t come up with a bright way to solve the ones he doesn’t know how to solve yet.
I guess the easy answers are: rookie team statistics, attendance, guests lists, matches where robots did not show up, matches where robots never drove, volunteers (in man hours and in a head count).
Are budgets for various areas of FIRST published? Anybody know where? Do any teams want to publish their budgets from this year and last year? I don’t expect there to be enough teams that agree to do it to get a reasonable idea of how most teams act…but maybe it is worth a shot.
The rest needs some way to boil it down…and if you can think of a good way please tell me. You’re right though, collecting piles of anecdotes is not good practice.
Some of this has been covered a bit while I was waiting for permission to post such a big post. Thanks for your permission, Katy.
===============================================
Since this thread is about metrics and analysis, I will include some of the analysis that went into figuring out the structure. These are some of the quantitative goals, and an overview of the analysis behind them.
Goal: Improve competition quality. Analysis was done using several scouting databases on the major metrics that correlated with improved team quality. The number 1 metric was play time: teams that attended 2 or more events began building more competitive robots. This was measured by analyzing W/L records and picking selections at regionals, regional awards, and then picking order at the Championship. FiM wanted to make it affordable for more teams to compete. If you want, you can do the analysis yourself; the data is in FIRST databases, and many teams have historical scouting data.
Goal: Higher match density. More matches give more play time. The typical timeframe for a FIRST event allows for 80 qualification matches, which gives 480 slots. At a 60-team regional you get 8 matches; at a 40-team event you get 12. More matches reduce the “luck factor” (both bad and good). This also gives teams more opportunity to try different things, which should improve Saturday-afternoon strategies, and more opportunities to get running. This was best demonstrated at last year’s pilot Rookie event (all rookies), where every team was able to compete except 1 rookie team that attempted a complicated crab drive and would not accept veteran assistance. Once you are “running” at the 1st event, you have another event with another 12 matches to compete at.
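As a quick check on the slot math in this goal, here is a tiny sketch (it assumes the 80-match, 6-robots-per-match qualification schedule described above):

```python
# Slot arithmetic behind the match-density goal: an 80-match
# qualification schedule has 80 * 6 = 480 robot slots to divide
# among the teams in attendance.

def quals_per_team(teams: int, total_matches: int = 80) -> int:
    slots = total_matches * 6    # 6 robots play in every match
    return slots // teams

print(quals_per_team(60))  # 8  -- a typical 60-team regional
print(quals_per_team(40))  # 12 -- a 40-team district event
```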
Goal: Less time off work for mentors, less time away from school for students. This was accomplished by converting the “Thursday” practice day into an 8-hour fix-it window for teams. This allows teams the same amount of time to prep machines, but lets them do it at their leisure. As a mentor, this makes things much easier, as I will only have 2 days away from work (2 Fridays) for two events instead of 4. Because the events are closer, I can drive to most events after work instead of spending 1/2 a day traveling. This is also less time off school for students.
Goal: Lower costs per event. This has been covered a bunch.
Goal: Qualification points system. As/if FIRST grows, they would like to come up with a more robust points system that qualifies teams for the Championship. Currently most teams buy slots, which is a big budget advantage. The points system was analyzed using the data mentioned above in order to gauge “robot” quality. Feedback on this is good, but I would ask people to do the same analysis that went into it. I.e., if you have “ideas” about what a point structure should look like, run historical data through it and see how it plays out. I will caution that most people focus on the wrong end of the curve, worrying about who comes out on top. The real issue is not who is on top; it is coming up with the “fairest” cut point. In MI this was distinguishing the 60th team from the 61st. Most worry about who the top 10 are. Who cares? The top 60 get to play. Being team number 61 is the worst spot (actually, 61 will probably get in as some other team won’t be able to attend, but you get the point). Ask any team: distinguishing the 24th-best team at a competition from the 25th is the hardest decision to make.
Goal: Better community support at events. This means better attendance by parents and friends of competing students. Our team is doing 1 event 20 minutes away and 1 event 2 hours away from the school. I will ask someone to take attendance and give this as a relative metric. In the past, twice as many parents have attended the closer events as the “far” ones. Teams will need to measure this, as it is often difficult to distinguish a parent in a team T-shirt from a mentor; the same goes for friends in the stands. It would help if teams outside of MI would take this data too, along with the event’s distance from home and team size; that would make a good comparative metric. Teams that do more than 1 event would definitely provide good data.
Goal: Growth. Pull up a team map and you will notice that teams are where events are, and additional events go to areas of high team density. This is an interesting effect, best demonstrated by Minnesota: Minnesota created an event and got a ton of rookie teams. Because there were a lot of teams, they needed another event.
=========================================================
I cover these because they are engineered successes. There were criteria and analysis set up for most of them, and they were designed to succeed from the start. If you set up a criterion of more matches and give teams more matches, then congratulations, you succeeded in giving teams more matches.
A lot of the open-ended items are quality metrics. These are often less tangible, especially early on. Coming from Chrysler, I can tell you that relying solely on metrics is a bad idea. There were some (not all) cars that on paper had the “right” numbers to be a success. Those metrics were developed by well-intentioned, smart people who quite frankly missed the mark. Our best-received vehicles were often held to certain metrics that we knew they must achieve, but then held to a quality standard judged by strong evaluators who knew the right thing to do from experience and talent. Ask a bean counter to design a car and you will have the lowest-cost, highest-profitability vehicle ever designed, and it will tank in the market. Ask a racecar engineer to design a minivan and you will have the fastest, best-handling minivan in the world, with no cup holders, no radio and no interior. Ask a customer what they want in a vehicle, and they will highlight what they don’t like about their current vehicle but forget to mention the things they take for granted (like a rear window defroster, unless their current vehicle’s is broken). Ask someone who is passionate and capable about making a great car for the customer to make the car they want, give them the support they ask for, and you will get a great car.
One other thing about metrics: if you have a pass/fail system, you must be very careful. 2 very dangerous outcomes can occur.
#1: If something comes very close to achieving its objectives, it is still a failure because it did not achieve its objective. Should we have ended the space program after the first launch “failure”? What about Apollo 13? From a mission-objective standpoint it was a disaster; many consider it a success because much was learned, and the crew arrived home safely. Pass/fail is very binary and can cause you to throw away valuable ideas that just need development.
#2: People will only agree to metrics that are too easy to achieve. This can dramatically lower the standards. I have a lot of friends in the education system who feel that “No Child Left Behind” dramatically reduced their ability to teach effectively, because it promoted (via funding) the unethical practice of promoting students not ready to move on to the next grade. Persons in charge felt it was morally better to pass a child not ready to move on than to lose funding for the dozens of others who were ready. By lowering the standard of success they lowered the level of achievement, and many students found they didn’t need to work as hard and could simply pass by attendance.
Dave Lavery was absolutely right when he said that this is preordained to be a success (earlier thread this past summer). Mathematically speaking, there are already way too many successful features for it to be a complete failure. I understand his concerns about predetermined success, but I also understand some of the efforts that have gone into making it a success.
We are all very good at being critics and stating opinions. Many of us are good at measuring things and pointing out what is wrong. Some of us are good at analyzing data and separating real trends from statistical anomalies. Even fewer have the foresight to see these anomalies and account for them in their designs. Few of us in the CD community go into the level of design detail required to create something truly original, elegant, and robust.
In my eyes, there is very little that distinguishes a great artist, athlete, or engineer/architect. Finding a loophole in an already great system (FIRST) that will allow more teams to achieve more for less requires the same formula as a great work of art or a beautiful building: an excellent vision of something not everyone else can see, mixed with a healthy dose of hard work, passion, and skill. Like many other things created by man, there will be flaws, room for improvement, and of course controversy. Best of all, there is opportunity to create something even better.
For those content to be critics, I would ask that you be a good critic. Go to a district event and cover it with the attention of any good critic; observe both the bad and the good and give them both the attention they deserve. For those not content to merely criticize (this thread is looking to be productive), the real works of art will come from the people who figure out a self-sustaining model that will work elsewhere. The FiM model, in its current form, will not work in many areas. Figure out and test what will.
One concern with the MI format is the short time between the announcement of the “qualified” State Championship teams (Sunday?) and the required accept/reject for the event (Tuesday). The same thing applies for State teams qualifying for the Championship event.
If we were to qualify for State or Atlanta, it would only take us about 2 seconds to decide that we want to accept. It would take a lot longer to arrange funding, travel, lodging, etc. I understand that we can send “conditional” purchase orders and checks to FIRST, but we need money to cover them if needed. This means we would need to raise funds and make arrangements in advance if we thought we had an outside chance of qualifying. It seems rather presumptuous and conceited to raise funds for something we haven’t earned - kind of like building the trophy case before the game. A lot of teams will do a lot of work and raise a lot of money for nothing. Maybe they can carry it over to the next season.
With the traditional system, qualifying teams know their fate at the end of each regional, and there are a couple of weeks between the last regional and the Championship.
It would be nice, at least at the State level, if there was an estimated qualifying point total for receiving an invitation. That would give teams a better “heads up” about their chances.
Perhaps a useful metric for evaluating the MI format would be the percentage of qualifying teams that are able to participate at the next level of play.
I think it is about top 50%.
If you didn’t play Saturday afternoon at your first event, you had better play at your second event or win some awards.
If you win or are runners up at your first event (especially if you are a first round pick), start fund raising.
If you were a late pick at your first event and get knocked out early, and you are a late pick for your second event and get knocked out early, you should still be on the bubble. If you won over 50% of your matches in Qual, you are probably good. If you lost most of your matches in qual at both events probably not.
As one possible method, you could take the total number of available points and divide by the total number of Michigan teams; this is the average points value. Take your performance from your first event and project how well you need to do at your second. If you achieve this value, I think you are guaranteed to go (mathematically speaking). Since some teams are doing 3 events and the third event doesn’t count, the upper-50% cutoff should actually be lower than the average points value. Also, since some teams will win really big (a couple of awards plus possibly a couple of events), this will also drive the 50% mark down.
Why not raise the funds and earmark them for next year if you can? If not, buy an extra control system for a practice bot.
Actually, I think the bogey number would be based on the median score, not the average. The average point total for a district event would be around 24 (about 950/40). However, the numbers would be skewed toward the more successful teams. A team with a 6-6 record that was a late pick for eliminations and got knocked out in the quarterfinals without winning any awards would only earn about 15 points, but would likely be in the top 50%. Factor in the reduction for 3-event teams and decliners, and the 2-event bogey could be in the mid-20s.
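The mean-versus-median point is easy to see with a toy example. The point totals below are invented for illustration (not the actual FiM scoring table), but they show how a few big winners pull the average well above the true top-50% cut line:

```python
import statistics

# Invented point totals for a 40-team district event, skewed upward by
# a handful of big winners (NOT real FiM scoring data).
scores = ([60, 55, 50, 48, 45, 42, 40, 38]   # winners, finalists, award teams
          + [30] * 6 + [22] * 5               # solid mid-pack teams
          + [15] * 8 + [12] * 8 + [8] * 5)    # late picks and 6-6 records

print(statistics.mean(scores))    # 23.1 -- the "average points" bogey
print(statistics.median(scores))  # 15.0 -- the actual top-50% cut line
```

With this kind of skew, the hypothetical 15-point team sits right at the median even though it is well below the average, which is why a median-based bogey gives teams a more honest "heads up" about their chances.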