View Full Version : Average score per match and cycle times
dardeshna
11-11-2016, 22:48
At the beginning of the season, when working on strategy and prototyping, we try to predict possible robot cycle times and average points per match. Generally, at the end of the season, we look back on these estimated values and laugh, knowing that they were totally unrealistic.
How do other teams predict cycle times and the average number of points scored per match? Is there a rule of thumb people like to use?
Cheers,
Devin
One good rule of thumb is that a good robot will perform tasks (very approximately) as quickly as a human moving at a normal, unhurried pace. So try setting up a field, or just some scoring elements, and timing different team members pretending to be a robot. Another thing that can help is comparing the cycle to previous games and estimating how much easier or harder the tasks will be.
AlexanderLuke
12-11-2016, 00:26
Generally, at the end of the season, we look back on these estimated values and laugh, knowing that they were totally unrealistic.
I take it you mean unrealistic as in you expected robots to score faster and higher than what actually happened during the competition season?
Also, just out of curiosity and for everyone's benefit, would you be willing to share your predictions for recent games?
Caleb Sykes
12-11-2016, 00:41
Is there a rule of thumb people like to use?
Imagine an average FRC robot doing the task and think through how long the actions normally take. Take your estimate and multiply it by 3, and that is roughly how long an average robot will actually take to complete the task.
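The "multiply by 3" heuristic can be turned into a quick back-of-envelope calculator. This is a hypothetical sketch assuming a 135-second teleop period; the fudge factor and example times are illustrative, not measured.

```python
# Sketch of the "multiply by 3" rule of thumb, assuming 135 s of teleop.
TELEOP_SECONDS = 135

def estimated_cycles(human_estimate_s: float, fudge_factor: float = 3.0) -> float:
    """Cycles an average robot might complete per match, given a
    human-pace time estimate for one cycle."""
    robot_cycle_s = human_estimate_s * fudge_factor
    return TELEOP_SECONDS / robot_cycle_s

# A task that looks like a 10-second job at human pace becomes a
# ~30-second robot cycle, i.e. roughly 4-5 cycles per match.
print(estimated_cycles(10))
```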
dardeshna
12-11-2016, 01:17
I take it you mean unrealistic as in you expected robots to score faster and higher than what actually happened during the competition season?
Also, just out of curiosity and for everyone's benefit, would you be willing to share your predictions for recent games?
Yeah we overestimated quite a bit.
For the community's enjoyment - a 12 high goal per match average for one initial strategy; 5-second cycles and 23 high goals per match for a catapult prototype. Keep in mind our robot's performance was "modest" - I believe the best we ever managed was 3 or 4 high goals with our spring-powered shooter.
EDIT: And for RR (I was a freshman, so I could be misquoting), I think we were aiming for a few 4-tote stacks and never got past a couple of 2-3 tote stacks.
Hitchhiker 42
12-11-2016, 14:05
Imagine an average FRC robot doing the task and think through how long the actions normally take. Take your estimate and multiply it by 3, and that is roughly how long an average robot will actually take to complete the task.
Obligatory relevant xkcd comic:
https://xkcd.com/1658/
Monochron
12-11-2016, 23:00
For the community's enjoyment - a 12 high goal per match average for one initial strategy; 5-second cycles and 23 high goals per match for a catapult prototype. Keep in mind our robot's performance was "modest" - I believe the best we ever managed was 3 or 4 high goals with our spring-powered shooter.
I think this kind of overestimating probably comes mostly from misunderstanding the tasks and a lack of familiarity with robot performance. I'm not sure what kind of process you use for estimating, but having a seasoned team member consistently working on this each year helps a lot. Studying robot performance in previous games throughout a season really helps too. This is something where accumulated knowledge is very important.
MARS_James
12-11-2016, 23:20
EDIT: And for RR (I was a freshman, so I could be misquoting), I think we were aiming for a few 4-tote stacks and never got past a couple of 2-3 tote stacks.
That awkward moment when I thought you meant Rebound Rumble, then was confused when you said tote stack.
Lil' Lavery
13-11-2016, 00:34
For most games that involve cycling between a loading area and a scoring area, a "good" (1st round pick) team will successfully accomplish the task roughly 3 times per match.
Citrus Dad
15-11-2016, 01:21
After watching the 2012 and 2013 games, we estimated that a fast robot could run a full-court scoring cycle about 4-5 times early in the year and up to 6-7 times by Champs. That worked like a charm for 2014. For 2015 we guessed that teams could put up 4 stacks (which turned out to be a reach) but alliances could put up 6 easily. For 2016, we looked at the earlier cycle times and guessed 6 full-court cycles, but noted that robots could also poach.
It turned out that our high-score estimates for 2014, 2015 and 2016 using this method were dead on. (My son and I chose the break point on our over/under for Champs right at the eventual high score each year.)
Andrew Schreiber
15-11-2016, 09:04
At the beginning of the season, when working on strategy and prototyping, we try to predict possible robot cycle times and average points per match. Generally, at the end of the season, we look back on these estimated values and laugh, knowing that they were totally unrealistic.
How do other teams predict cycle times and the average number of points scored per match? Is there a rule of thumb people like to use?
Cheers,
Devin
I typically worry less about predicting absolute scores and prefer to focus on analyzing the impact that performing actions at different rates has on the outcomes. For example, in 2014, if you could pass about as fast as you could acquire a ball there was a distinct advantage, but if passing took a lot longer the advantage shrank. By finding how long you CAN take to do things before they become less valuable, you can drive your strategic design.
Another good example is when you have multiple goals with differing point values (such as 2013 or 2016, though I only have the 2013 model built). Obviously hitting the 2-point goals was easier, so your accuracy increased, but we needed to figure out how much higher that accuracy had to be to make up the point difference. This is another case where you're not looking for a raw cycle speed but instead for the points where the plot of scores reaches a local maximum.
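The accuracy trade-off described above reduces to a one-line break-even calculation. A minimal sketch, using 2013's 2- and 3-point goals; the 90% figure is an illustrative assumption:

```python
# Break-even accuracy for goals of differing point values.
def breakeven_accuracy(low_acc: float, low_pts: float, high_pts: float) -> float:
    """High-goal accuracy at which expected points per shot match the
    low goal, assuming equal cycle times for both goals."""
    return low_acc * low_pts / high_pts

# If you hit the 2-point goal 90% of the time, the 3-point goal only
# pays off once your accuracy there exceeds ~60%.
print(breakeven_accuracy(0.90, 2, 3))
```

Once cycle times differ between the two goals, the comparison becomes expected points per second rather than per shot, which is where the plotted local maxima come from.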
I've recently started using an online tool called guesstimate for building these models. It's reasonably easy. Here are links to 2013 (https://www.getguesstimate.com/models/7494), 2014 (https://www.getguesstimate.com/models/7524), and 2015 (https://www.getguesstimate.com/models/7509). Mind you, these are not really complete models, they were built to tell me what I needed to know about the game based on our discussions. Other folks may have different needs.
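For readers without access to the linked models, the same kind of analysis can be roughed out as a small Monte Carlo simulation: treat cycle time as a distribution and propagate it to a score distribution. Every number below is an illustrative guess, not taken from the linked models.

```python
# Rough Monte Carlo sketch of a cycle-time -> score model.
import random

random.seed(42)

TELEOP_SECONDS = 135
POINTS_PER_CYCLE = 5   # assumed point value of one scored game piece

def simulate_match() -> int:
    """One simulated match: draw a cycle time, count completed cycles."""
    cycle_s = max(random.gauss(25, 5), 10)   # guessed mean/spread, clamped
    return int(TELEOP_SECONDS // cycle_s) * POINTS_PER_CYCLE

scores = [simulate_match() for _ in range(10_000)]
print(f"mean ≈ {sum(scores) / len(scores):.1f} points, "
      f"95th percentile ≈ {sorted(scores)[9500]} points")
```

The output is a distribution rather than a single prediction, which makes it easier to ask "how sensitive is the score to a 5-second slower cycle?" instead of "what will we score?".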
Mike Schreiber
15-11-2016, 09:20
After watching the 2012 and 2013 games, we estimated that a fast robot could run a full-court scoring cycle about 4-5 times early in the year and up to 6-7 times by Champs. That worked like a charm for 2014. For 2015 we guessed that teams could put up 4 stacks (which turned out to be a reach) but alliances could put up 6 easily. For 2016, we looked at the earlier cycle times and guessed 6 full-court cycles, but noted that robots could also poach.
How did you come to these predictions? Some sort of math or model? "Engineering Judgement"?
At the start of every season, we try to calculate the score of a given robot archetype on an empty field in a match. For time values, we take a rough approximation that's partly heuristics and partly having a human perform the task.
These scores give us a baseline to work off of when we're deciding what type of robot to build.
In addition to counting how much score we contribute, we also create a short list of things that we require from our alliance partners to get our 4RP every match (assuming win).
https://docs.google.com/spreadsheets/d/1IRphgzS0-JxL2UAmNKzxBi4mAuTROmMhAJAG0dB7geU/edit#gid=0
BrendanB
15-11-2016, 13:44
One of the metrics we used this year was asking what game we will see on Einstein, and when that style of play has generally been introduced at events.
In 2014 it was consistent three-assist cycles.
In 2015 it was consistent capped and littered six-stacks using the cans from the step.
For 2016 we predicted:
Breach, capturing the tower by bringing its strength down to 0 using the high goal, autonomous scoring, and at least one scaling machine.
Using 2014 and 2015 as our main examples from the previous two seasons in districts, we noticed these prime strategies typically weren't consistent until the later weeks of district play, with a few exceptions. That helped show that the simple concepts would work well in this game: bringing the tower strength to 0 with low goals is much easier to accomplish in the earlier weeks than bringing it to 0 with high goals.
We also looked back at previous games and what the winning strategy was for our home regional in Week 1 knowing our first event in Week 2 had a similar playing field of fresh teams.
When it came to assigning times, we said 3-4 cycles from the secret passage was our maximum output early in the season, potentially 5-6 later on. We assumed each cycle would take roughly 25-30 seconds, conservatively breaking our route down into 5-second increments: cross the defenses, cross open sections of the field with limited visibility, line up to receive a ball, line up to score, plus some extra buffer time for traffic or difficulty at any one stage.
Something you should always include is accuracy: plan on missing a few shots. When you crunch numbers during the first week of build season in games like 2012 and 2013, you should plan for at least one missed shot per cycle, and potentially more at your earlier events. With mid-tier goals like 2012's and 2013's you can see somewhat better accuracy than going higher. Low goals in any game will still see missed attempts, but they have the highest accuracy, which benefits early events.
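The budgeting described above can be written out directly: build one cycle out of 5-second stages, then discount the cycle count by an assumed accuracy. The stage times and the 75% accuracy figure below are illustrative assumptions, not the team's actual numbers.

```python
# Cycle budget built from 5-second stages, discounted by accuracy.
STAGE_SECONDS = {
    "cross defenses": 5,
    "cross open field (limited visibility)": 5,
    "line up to receive a ball": 5,
    "line up to score": 5,
    "buffer for traffic/difficulty": 5,
}
TELEOP_SECONDS = 135

cycle_s = sum(STAGE_SECONDS.values())        # 25 s per cycle
cycles = TELEOP_SECONDS // cycle_s           # full cycles per match
accuracy = 0.75                              # plan on roughly one miss in four
expected_goals = cycles * accuracy
print(f"{cycles} cycles, ~{expected_goals:.2f} goals expected per match")
```

Writing the budget as named stages makes it obvious which stage to attack first when trying to shave seconds off a cycle.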
nuclearnerd
15-11-2016, 14:08
I've recently started using an online tool called guesstimate for building these models. It's reasonably easy. Here are links to 2013 (https://www.getguesstimate.com/models/7494), 2014 (https://www.getguesstimate.com/models/7524), and 2015 (https://www.getguesstimate.com/models/7509). Mind you, these are not really complete models, they were built to tell me what I needed to know about the game based on our discussions. Other folks may have different needs.
Oooh, an easy way to do calculations with probabilistic distribution inputs? I like. Thank you!
Citrus Dad
15-11-2016, 17:11
How did you come to these predictions? Some sort of math or model? "Engineering Judgement"?
Math and observations. We were counting the number of cycles, so we could see how many were happening at different levels of performance. In 2013 some teams got up to 7 and 8 cycles at Champs! So we see cycles of 20-30 seconds in most games.
dardeshna
16-11-2016, 13:06
I typically worry less about predicting absolute scores and prefer to focus on analyzing the impact that performing actions at different rates has on the outcomes. For example, in 2014, if you could pass about as fast as you could acquire a ball there was a distinct advantage, but if passing took a lot longer the advantage shrank. By finding how long you CAN take to do things before they become less valuable, you can drive your strategic design.
Another good example is when you have multiple goals with differing point values (such as 2013 or 2016, though I only have the 2013 model built). Obviously hitting the 2-point goals was easier, so your accuracy increased, but we needed to figure out how much higher that accuracy had to be to make up the point difference. This is another case where you're not looking for a raw cycle speed but instead for the points where the plot of scores reaches a local maximum.
I've recently started using an online tool called guesstimate for building these models. It's reasonably easy. Here are links to 2013 (https://www.getguesstimate.com/models/7494), 2014 (https://www.getguesstimate.com/models/7524), and 2015 (https://www.getguesstimate.com/models/7509). Mind you, these are not really complete models, they were built to tell me what I needed to know about the game based on our discussions. Other folks may have different needs.
How did you determine the different distributions and timing of various actions? Other folks have mentioned having humans simulate the action, so just curious.
Another thought that someone on our team brought up was the idea of numbers in the game manual hinting at how many cycles would be done per match. For example, a capture requires (required) 8 goals, which might point to an average of 3 high goals per bot (which is about where we ended up at least).
Andrew Schreiber
16-11-2016, 13:27
How did you determine the different distributions and timing of various actions? Other folks have mentioned having humans simulate the action, so just curious.
Another thought that someone on our team brought up was the idea of numbers in the game manual hinting at how many cycles would be done per match. For example, a capture requires (required) 8 goals, which might point to an average of 3 high goals per bot (which is about where we ended up at least).
Gut feel and experience? Obviously for things like travel times you can compute that.
Distributions - again, gut feel. I really alternate between three: a normal, where I specify the 95th-percentile edges and the average sits between them; a long tail, where I assume it will take about the minimum but could stretch out toward the max; and an equal (uniform) distribution across all possible values (I don't use this one much, for obvious reasons, though if I wanted a non-constant lift time for totes in 2015 I could have done that).
But, as I said, I don't so much use them to compute scores; I focus more on interactions between actions and outcomes.
Edit: Numbers in manuals are based on what the GDC thinks things should be worth. They may be based on their simulations but, let's be honest, this is a group that didn't see the rain of frisbees at the end of the match in 2013... so, uh, use with caution.
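The three distribution shapes described above can be sketched with the standard library; all parameter values here are illustrative assumptions about a cycle time, not anything from the actual models.

```python
# The three cycle-time distribution shapes: normal, long tail, uniform.
import random

random.seed(0)
N = 10_000

# "Normal": pick the 5th/95th-percentile edges; the mean sits between them.
lo, hi = 15.0, 35.0                    # guessed cycle-time edges, seconds
mu = (lo + hi) / 2
sigma = (hi - mu) / 1.645              # 1.645 ≈ z-score of the 95th percentile
normal = [random.gauss(mu, sigma) for _ in range(N)]

# "Long tail": usually near the minimum, with a tail out toward the max.
longtail = [lo + random.expovariate(1 / 5.0) for _ in range(N)]

# "Equal": uniform weight across all plausible values.
uniform = [random.uniform(lo, hi) for _ in range(N)]

for name, xs in [("normal", normal), ("long tail", longtail), ("uniform", uniform)]:
    print(f"{name}: mean ≈ {sum(xs) / N:.1f} s")
```

The long-tail shape is often the most realistic for robot tasks: a clean cycle takes close to the minimum, while defense or a dropped game piece stretches occasional cycles far past it.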
dardeshna
16-11-2016, 13:32
Gut feel and experience? Obviously for things like travel times you can compute that.
Distributions - again, gut feel. I really alternate between three: a normal, where I specify the 95th-percentile edges and the average sits between them; a long tail, where I assume it will take about the minimum but could stretch out toward the max; and an equal (uniform) distribution across all possible values (I don't use this one much, for obvious reasons, though if I wanted a non-constant lift time for totes in 2015 I could have done that).
But, as I said, I don't so much use them to compute scores; I focus more on interactions between actions and outcomes.
Ah, okay, that makes sense. I like your idea of looking at when a higher-scoring option becomes less effective. I can definitely see how that could have been very useful this year with the high/low goals.
Now that I think about it from that perspective, our off-season robot, a low-goaler, had significantly quicker cycles than our build-season high-goaler, so it did end up doing better.
Edit: Numbers in manuals are based on what the GDC thinks things should be worth. They may be based on their simulations but, let's be honest, this is a group that didn't see the rain of frisbees at the end of the match in 2013... so, uh, use with caution.
Rain of frisbees at the end of the match in 2013? Before my time :(
Andrew Schreiber
16-11-2016, 14:21
Ah, okay, that makes sense. I like your idea of looking at when a higher-scoring option becomes less effective. I can definitely see how that could have been very useful this year with the high/low goals.
Now that I think about it from that perspective, our off-season robot, a low-goaler, had significantly quicker cycles than our build-season high-goaler, so it did end up doing better.
Rain of frisbees at the end of the match in 2013? Before my time :(
So, it's not all about finding where options become less effective. Sometimes it's finding things that give you more bang for your time (2014 with passes), or things that have more room for delays because you have to do them fewer times (2015 with cans), or even things with more tolerance for screwing up.
The way I try to evaluate games is look at the inputs I can control (how fast I do things, accuracy, which scoring methods, etc) and what outputs exist (usually raw score but sometimes other things like limited cans in 2015). Then it's all about finding how to change inputs to get the outputs I want.
EDIT: Oh gosh, I keep forgetting that 2013 was a "long time ago" :(
Generally, at the end of the season, we look back on these estimated values and laugh, knowing that they were totally unrealistic.
In my two seasons of FRC, I have definitely seen this to be true on my team :D But I think it is important to set these high goals so that the team is always striving for more during the season.
dardeshna
16-11-2016, 15:47
In my two seasons of FRC, I have definitely seen this to be true on my team :D But I think it is important to set these high goals so that the team is always striving for more during the season.
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
Rangel(kf7fdb)
16-11-2016, 16:04
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
It all comes down to understanding your team and what you are capable of. The general rule that many successful teams follow is to be the best they can at one particular task, rather than a jack of all trades and master of none. There are and always will be teams that can do it all, but even so, being really good at one particular task can make your team competitive enough to go head to head with them. The 2013 world championship alliance is a great example of this design strategy.
RoboChair
16-11-2016, 16:04
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
The real trick is to be able to do the minimum of everything required to win a match, assuming similar opponents. Then, on top of that, be really good in a defined role for the game being played.
Look at our 2015 robot: a lot of teams made fun of the fact that we could only do stacks of 5. But we had our can grabbers, and those were our focus. That was our high-end role: beat the other guy to the cans every time, without fail. (Admittedly maybe not the best example, but still.)
Don't rely on everyone all the time, but don't do everything all the time either.
Lil' Lavery
17-11-2016, 12:17
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
Prioritize what your robot needs to accomplish, and then stick to that priority list as best as possible when designing and fabricating your robot. That's not to say you can't also grab some "low-hanging fruit" from your lower-priority items, but try to avoid designing around them. Don't compromise the tasks you NEED to be good at in favor of adding low-priority functionality.
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
On 3481, after analyzing the game we talk about setting realistic goals based on what we know we can actually accomplish. This year we challenged ourselves to be a high-goal shooter. After perfecting the shooter, we just kept trying to increase the number of high goals we could score per match. So I guess it is important to specialize in one aspect of the game, work on it until you perfect it, and always try to better your bot.
BrendanB
17-11-2016, 17:23
I guess that raises another question: how do teams avoid falling into the "carry" strategy trap, where their robot attempts to do everything and ends up doing nothing well? Obviously some teams can pull this off, but certainly not everyone.
Always prioritize tasks and stick to them. You can re-prioritize as you go along, depending on how your strategy changes or on any major updates that would shift your team, but you should always have a unified top priority, typically building up from basic game play.
It is important to step back from the chaos of the season, look at the bigger picture, and do so without being emotionally attached to a strategy or concept. Setting a goal like being the best at high-goal shooting is a focused strategy, but you have to remember all of the other sub-systems you need to have perfected if you want to be the best at high-goal shooting. If your drivebase is just "okay", chances are you'll struggle to cross defenses or navigate the field. If you only made a simple collector, you'll have trouble acquiring game pieces, which reduces the number of shots you can take. If your shooter controls have too many variables, your shots will be inconsistent. If you are working right up until the end of build season without time to test and debug, your drivers will struggle on the field while learning to use your robot.
Many of these lessons I've learned over the years from mistakes we've made by not keeping ourselves prioritized and lacking a cohesive effort to field the best robot WE can as a team. Many times fielding your best robot means it won't be the best robot on the field. This isn't a bad thing. If everyone fielded the best robot they could at their events and kept improving their machine FRC would be more fun & exciting. Always field YOUR best machine and never stop improving.
We had to scale back our efforts drastically in Week 4 of build season so we could compete at our first event with a completed machine. It was a little deflating knowing younger teams in our area were doing more than us but we perfected what we had and never stopped improving it. What we started our season with (https://www.thebluealliance.com/match/2016marea_qm3) was far from what we ended our season with (https://www.thebluealliance.com/match/2016nhbb_qm15).
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.