WOT Calculator

WOT, or the Weighted Objectives Table, is a great tool for determining your robot’s strategy. To make this process easier for teams, I drew up an Excel file which does all the calculations for you. Hopefully this encourages teams to use the WOT in the future.

The inner workings of the WOT are highlighted in John V-Neun’s paper “Using the Engineering Design Process for Design of a Competition Robot”. If you don’t already know the WOT design process, I would strongly recommend reading his paper (located below) before even looking at the Excel documents.

In addition, I’ve also included a drivetrain example for your benefit. (The example is meant to highlight how to use the Excel document; the results are not necessarily accurate, nor are they my opinions.)

WOT Calculator.xls (24.5 KB)
WOT Calculator (drivetrain example).xls (24.5 KB)
Engineering_Design_Process_in_Competition_Robotics_-PAPER.20091204[1].pdf (2.17 MB)



More applicable White Paper linked below. Raul’s comments in the thread should be taken seriously; the weights are subjective and linear, so it’s important to define the criteria specifically and explain the reasoning behind the weights as much as possible.

Using a WOT for Competition Robot Design (also by JVN)

Since both the weights and the scores are subjective, it’s important to look more closely if you have two or more choices that are close after the initial scoring. One way to do this is by performing a sensitivity analysis.

This is performed by changing the weights and scores by small amounts, one at a time, and seeing how that affects the rank order. For example, if there’s one criterion with a high weight and one alternative with a high score in that criterion, that combination might be the sole reason that alternative ranks higher than another. If you made a mistake in either the weight or the score, you might be selecting a sub-optimal solution. There are plenty of fancy things you can do analytically (Monte Carlo simulation, treating each weight or score as a variable and solving for the value that changes the result, etc.).
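To make this concrete, here’s a minimal sketch of such a perturbation check in Python. The criteria, weights, and scores are made up for illustration; they aren’t taken from the attached spreadsheets.

```python
# Hypothetical WOT data: weights per criterion, scores per alternative.
criteria = {"maneuverability": 10, "traction": 8, "cost": 5}
scores = {
    "6wd":     {"maneuverability": 4, "traction": 5, "cost": 4},
    "mecanum": {"maneuverability": 5, "traction": 2, "cost": 2},
}

def rank(weights):
    """Return the alternatives ordered by weighted total, best first."""
    totals = {alt: sum(weights[c] * s[c] for c in weights)
              for alt, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

baseline = rank(criteria)
for c in criteria:
    for delta in (-1, 1):  # nudge each weight up and down by one point
        tweaked = dict(criteria, **{c: criteria[c] + delta})
        if rank(tweaked) != baseline:
            print(f"Rank order is sensitive to the weight of {c!r}")
```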

There is also a quick and dirty way to do this. I’m going to use BJC’s spreadsheet example. Set each weight to 0 individually, and look at how things are ordered. For example, if you set the weight of cost to 0, you’ll see that the order of the results does not change at all. This means that, in this study, cost isn’t a discriminator. If you do the same thing with the maneuverability weight, you’ll find that 6wd wins big; thus, maneuverability is important to consider further. The same thing happens with the traction weights: with those set to 0, Mecanum wins.
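Continuing the sketch above (the rank() helper, the hypothetical data, and the baseline order carry over), the zero-weight test looks like this:

```python
# Zero each weight in turn; if the rank order doesn't change, that
# criterion isn't a discriminator in this study.
for c in criteria:
    order = rank(dict(criteria, **{c: 0}))
    if order == baseline:
        print(f"{c!r} is not a discriminator")
    else:
        print(f"{c!r} matters: order becomes {order}")
```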

Another thing to look for is whether any scores are the same across the board. That would be an indication that a certain criterion isn’t a discriminator and could be removed to simplify the study without much consequence. In this case, virtually the same scores are given to both the hard-to-be-pushed and the pushing-others criteria. They could be combined into a single traction criterion with a weight of 15. This makes it easier to decide whether traction is as important as maneuverability. In some cases, this double representation might cause one factor to be over-weighted compared to the others.
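The same-scores check is equally quick to automate; a sketch on the same hypothetical data as above:

```python
# Flag criteria where every alternative gets the same score; such rows
# don't discriminate and could be removed or merged without consequence.
for c in criteria:
    if len({s[c] for s in scores.values()}) == 1:
        print(f"{c!r} scores are identical across alternatives")
```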

Thanks for posting the spreadsheet.

To reiterate what was said before, tying scores to specifications is a good idea. For example, Pushing may be determined by your wheels’ coefficient of friction. Standard AndyMark Plaction wheels (1.4 CoF) might be a 3 on a scale of 1-5, where omni or mecanum wheels could be a 1 and pneumatic wheels or tank treads could be a 5.
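As a rough sketch of tying scores to a measurable spec, something like the following works; the CoF values and the thresholds here are assumptions for illustration, not vendor data:

```python
# Illustrative CoF values for a few wheel types (assumed, not measured).
wheel_cof = {"omni": 0.7, "mecanum": 0.7, "plaction": 1.4, "pneumatic": 2.0}

def pushing_score(cof):
    """Map a coefficient of friction onto the 1-5 scoring scale."""
    if cof < 1.0:
        return 1   # slick wheels: poor pushers
    elif cof < 1.7:
        return 3   # typical traction wheel
    else:
        return 5   # high-grip wheels or treads

for wheel, cof in wheel_cof.items():
    print(f"{wheel}: CoF {cof} -> pushing score {pushing_score(cof)}")
```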

The other thing I would like to reiterate is that by using this Excel spreadsheet, instead of having to do the math several times, you can quickly change or eliminate the weights of criteria to see how different criteria affect which type of drivetrain is optimal. It’s really quite fun to play around with.

You’re welcome; the actual spreadsheet wasn’t hard to make. It’s just the sort of thing that is handy to have, but that you never really get around to actually making.

Rather than use another subjective value to represent an objective measurement, you can actually use the measurement itself as your criterion. For example, if you multiply the CoF by 2.5, you get a value that scales very nicely within a typical range of 0.7 to 2.0 CoF: 0.7 gets a score of 1.75 and 1.4 gets a score of 3.5. This transforms the objective value onto the same scale as everything else.

It is important to do the scaling. If you were comparing CoF to weight, you wouldn’t want a small weight change of 20 lbs to 22 lbs to swamp the large CoF change from 1.0 to 1.5. If you multiply the CoF by 2.5 as done previously and divide the weight by 5, you end up with the CoF change being worth 1.25 points in the score and the weight change being worth only 0.4 points.
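Here’s that arithmetic as a tiny sketch; the 2.5 and 5 factors come straight from the example above and are themselves subjective choices:

```python
def cof_score(cof):
    return cof * 2.5   # 0.7 -> 1.75, 1.4 -> 3.5, 2.0 -> 5.0

def weight_score(lbs):
    return lbs / 5     # 20 lbs -> 4.0, 22 lbs -> 4.4

# The large CoF change outweighs the small weight change:
print(cof_score(1.5) - cof_score(1.0))                # 1.25
print(round(weight_score(22) - weight_score(20), 2))  # 0.4
```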

It can also be misleading to do this, however, because your scaling factor is still subjective. Be careful: just because your score now goes out to a few more decimal places doesn’t mean it should be treated as more accurate.

You can get even fancier with the scales. For example, you could decide to treat everything with a CoF below 1 as below normal and not worth distinguishing: assign everything under 1 a score of 1, then use a linear scale above that. You could also do something fancier, like an exponential function that produces small score changes for CoF below 1 but larger changes above it. Again, the warning about false precision applies.
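A quick sketch of one such non-linear scale; the cutoff and the exponent constant are arbitrary assumptions picked so the output spans roughly 1 to 5:

```python
import math

def cof_score(cof):
    if cof < 1.0:
        return 1.0                      # below normal: flat, not worth distinguishing
    return math.exp(1.6 * (cof - 1.0))  # 1.0 -> 1.00, 1.5 -> ~2.23, 2.0 -> ~4.95

for cof in (0.7, 1.0, 1.4, 2.0):
    print(f"CoF {cof}: score {cof_score(cof):.2f}")
```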