I’ve noticed that in addition to a rise in 3-event teams that are attending 2 districts and an outside regional, more teams are moving towards 3 regionals. Do you think it will be a growing trend in places that don’t have districts?

Here’s a late addition showing when each event reached initial capacity (blue bar) and when it finished drawing from the waitlist, if it reached maximum advertised capacity (red bar). Events where the blue/red bar reaches the top are those that never reached maximum capacity. The scale is in days from registration opening. The blue bar can be thought of as the free market, while the red bar represents the controlled market of the Regional Director’s/FIRST HQ’s discretion. The blue fills quickly, but the red lags.

The first chart is in order from left to right based on which events reached initial capacity first.

The second chart is in order by when the team list was finalized from the waitlist.

*FIRST* HQ is highly concerned with all teams getting their first Regional, but doesn’t care so much whether they get a second Regional, so some waitlists drag on and never seem to close.

RegionalRegistration2012.xls (534 KB)


How many days must teams wait before second and third regional registration opens?

Here are some charts of interest.

These show how old the teams are that have dropped out each of the years from 1999 to 2012.

The number in the pie wedge is the number of teams that didn’t return from the previous year, and the color of the wedge corresponds to the number of years the team actually competed. This includes teams that may have skipped a year, but the skipped years weren’t counted.

In the 2nd attachment the % in the pie wedge is the % of all teams lost that year.

Any chance you could express the loss in percentages of the total loss that year? I’m hoping it will make a year-to-year comparison easier.

See if the 2nd attachment fits what you want.

Thanks Mark.

So what occurred in 2000 and 2001? It looks like we have made steady improvement in retaining new teams since then, but those two years really stand out.

That cross-year comparison of % might be deceptive and I’m thinking about other ways to portray this kind of data. Maybe normalize % against the total number of teams each year, so losing 1% of total teams in 1999 can reasonably be compared to 1% of teams in 2012.

For instance, 1999 lost 12.7% of its teams, while 2000 lost almost 5 percentage points fewer (7.9%) of its teams.
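The normalization described above can be sketched in a few lines: divide each year’s team losses by that year’s total team count so that a 1% loss in 1999 is directly comparable to a 1% loss in 2012. The team totals below are hypothetical placeholders back-calculated to match the stated percentages (only 2000’s loss of 32 teams appears in the discussion itself):

```python
# Normalize yearly team losses against that year's total team count,
# so dropout rates can be compared across years of different sizes.
# Counts are illustrative, not official FRC data; only 2000's 32 lost
# teams comes from the thread above.

teams_lost = {1999: 48, 2000: 32}      # hypothetical except 2000
total_teams = {1999: 378, 2000: 405}   # hypothetical yearly totals

dropout_rate = {
    year: 100 * lost / total_teams[year]
    for year, lost in teams_lost.items()
}

for year in sorted(dropout_rate):
    print(f"{year}: {dropout_rate[year]:.1f}% of teams dropped out")
```

With these placeholder totals, the computed rates come out to 12.7% for 1999 and 7.9% for 2000, matching the figures quoted above.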

P.S. I added full comparison charts that show the difference.

The “problem” with 2000 is that it had the lowest overall dropout rate of all the years charted, so the percentages are disproportional in comparison to the surrounding years. 2000 only lost 32 teams.

Most of the other years are pretty similar, with less than a 1% spread in losses of total teams (7.9%–8.8%), so the charts do generally work in comparison. The outliers, where comparison doesn’t work, are 1999 (12.7%), 2002 (10%), and 2005 (11.9%), all poor years for retaining teams.

Ah, OK… Makes sense. Amazing how easily numbers can be deceiving if not interpreted correctly.