Optimization, when to stop

I’m a big Ether fan; most of his posts set out to make the reader think. One of them got me thinking about software optimization.

Backstory: Stacy from FRC 1318 built a set of interfaces that let data from the FIRST API server be pulled into the statistics package/language R for processing. This is a good thing for scouting teams: they can pull data from FIRST and apply it to their own match scouting.

Later on, Ether wanted to get the World OPR (WOPR) calculations done. He had worked on it without success; I gave it a shot too, but I approached the problem the wrong way and wasn’t successful either.

Stacy then came out with code that computed the WOPR in about 13 seconds. Pretty cool: that would allow WOPR discussions to take place during the long breaks between matches and be picked up by the color commentators.

I was impressed, both that Stacy had gotten it to work and that it ran in 13 seconds. Ether suggested the time be cut in half, and then in a later post by a factor of 10. Both of which may be reasonable goals. But are they really?

When I’ve been asked to do optimization work in the past, I start by looking at how many times the transaction will run and by how many people.

XKCD has a good cartoon, “Is It Worth the Time?” (xkcd 1205), that covers this situation; note that its chart amortizes the time saved over five years.

So let’s look at the situation. We want to shave about 5 seconds off (cutting the time in half). If it’s just Ether doing the WOPR run once a year after Worlds, the chart says we should spend no more than about 25 seconds working on it. If we want to get it down to about a second, we can spend about a minute on it.

So clearly, for the “just Ether” case, it’s not worth it.

But let’s take the other extreme: after every match at every event, we calculate WOPR. Go with 70 events and about 70 matches per event, which gives 4,900 matches; call it 5,000.

Spreading them across 7 weeks of 7 days gives us about 100 runs per day. Looking at the chart, saving 5 seconds gives us about 10 days of time we can spend on this. (If you were instead to spread them across a full year, you would get about 13 runs per day, or the ability to spend about 2 hours on the task.)

A more likely case is wanting WOPR for the District Championships, the World Championship, and events that bring big teams together like IRI and Cheesy Champs. Call it 12 events of 70 matches each; that gives us about 50 runs a day, so we can spend about 5 days on the optimization. (Likewise, if you were to spread it across the year, you could spend about 20 minutes, and you may have already blown most of that reading this article.)
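To make the arithmetic above easy to check, here is a minimal R sketch of that chart-style budget. It assumes the xkcd chart’s five-year horizon and that the run frequency holds for the full five years; the function name is mine, not something from the chart or from Stacy’s code.

```
# Break-even budget for an optimization, in the style of the xkcd chart.
# Assumes the time saving is realized at the given frequency for five years.
optimization_budget <- function(seconds_saved, runs_per_day, horizon_years = 5) {
  seconds_saved * runs_per_day * 365 * horizon_years   # budget in seconds
}

optimization_budget(5, 1 / 365)       # "just Ether", once a year: 25 seconds
optimization_budget(5, 100) / 86400   # every match everywhere, ~100/day: about 10.6 days
optimization_budget(5, 50) / 86400    # big events only, ~50/day: about 5.3 days
```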

The above discussion assumes that the person doing the optimization work is also the person running the calculation, so that you are the one “reaping” the time saved. In our WOPR-everywhere, every-match case, unless you are Koko Ed, you are not going to all of those events, so your effort is paying into the system rather than back to you.

If you have 5 days to spend and you think WOPR calculations will become something done thousands of times a year, take up Ether’s challenge to cut the time in half.

You might also factor in the number of teams that could each be running this calculation concurrently for their own scouting purposes, no matter who did the optimizing.

Sometimes we have to forgive the theoretically minded for exploring the realm of possibilities.

In your situation, your argument makes sense. But what I think Ether was after is that OPR can be computed much more quickly, and perhaps that makes it possible to do the computation much more often for a higher reward: in projections, or Monte Carlo simulations, or in navigating some sort of decision tree (OK, I’m making some of this up). In fact, linear solvers constitute a large field in their own right, with applications in embedded electronics, supercomputing, computational science research, computer vision, artificial intelligence, engineering simulations, and on and on. The point is, you can certainly stop optimizing once your particular application no longer benefits from further optimization; your boss may even compel you to. But other applications may become feasible if you can massively reduce the cost of the process.
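For anyone who hasn’t seen it spelled out: OPR is normally set up as a linear least-squares problem, which is exactly why it lands in linear-solver territory. Below is a minimal R sketch of that formulation; it is not Stacy’s or Ether’s actual code, and the toy teams, scores, and function name are made up for illustration.

```
# OPR as linear least squares: each row of A is one alliance in one match,
# with a 1 in the column of every team on that alliance; b holds that
# alliance's score.  OPR is the least-squares solution of A x ~ b.
compute_opr <- function(A, b) {
  drop(solve(crossprod(A), crossprod(A, b)))   # normal equations: (A'A) x = A'b
}

# Toy example: three teams, two matches (alliance sizes shrunk for brevity).
teams <- c("254", "1318", "2471")
A <- rbind(c(1, 1, 0),   # match 1, red:  254 + 1318
           c(0, 0, 1),   # match 1, blue: 2471
           c(1, 0, 1),   # match 2, red:  254 + 2471
           c(0, 1, 0))   # match 2, blue: 1318
b <- c(120, 45, 100, 60)
setNames(compute_opr(A, b), teams)
```

The matrix has only one column per team, so even a full-season system is modest by linear-algebra standards; the sketch is just meant to show the shape of the problem the solvers are attacking.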

After all, how many quotes are there about never needing 100 KB, oh wait, never mind, 100 MB, oh wait, never mind, 100 GB of storage space? Similarly for internet speeds, global travel times, the cost of goods and services, and so on. When you need a certain result, do what gets the job done. But a little research into how to do it better can put much bigger results on the table.

Thanks for the encouraging words, Foster.

Having done this computation in Octave (Matlab), SciLab, and Python, I knew how long it should take. But my initial R implementation was a factor of 10 (or more) slower. I wasn’t willing to accept this.

So while I was awaiting input from the R community here on CD, I researched it myself and got R to do the computation just as fast. In so doing, I learned a lot about R. And that was the whole point of the exercise because, echoing what Aren said, what I learned makes R a far more powerful tool for me to use for other large problems I may encounter in the future.

As I said, I’m not an R guru, so my solution was actually a hybrid. I used the AWK scripting language to pre-process the 8-column data into three files: Aij.dat, b.dat, and T.dat. Then my R code reads those three files and computes the OPR.
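A simplified sketch of that read-and-solve step (not the actual script; the file layouts here are assumed to be flat whitespace-separated numbers, which may not match the attached files exactly):

```
# Hypothetical sketch of the R half of the hybrid: read the three files the
# AWK pass produced, then solve for OPR.  File layouts are assumed here.
teams <- scan("T.dat", quiet = TRUE)                  # team numbers
A <- matrix(scan("Aij.dat", quiet = TRUE),            # design matrix, row-major
            ncol = length(teams), byrow = TRUE)
b <- scan("b.dat", quiet = TRUE)                      # alliance scores

opr <- drop(solve(crossprod(A), crossprod(A, b)))     # least-squares OPR
names(opr) <- teams
print(head(sort(opr, decreasing = TRUE)))             # top of the list
```

scan() on a flat numeric file is much cheaper than read.table(), which matters when the whole R step needs to finish in well under a second.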

For the large 8-column data set linked above, the AWK script execution time is about 270 milliseconds. The R script execution time is about 660 milliseconds.
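(For anyone who wants to time their own attempt, wrapping a source() call in base R’s system.time() is enough; the script name below is just a placeholder.)

```
# Time a script from the R prompt; "opr_script.R" is a placeholder name.
system.time(source("opr_script.R"))   # read the "elapsed" column, in seconds
```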

If some R guru will take this pseudo-code and write an equivalent R script that runs as fast as my AWK script, I will combine that with my R OPR script and post it here on CD for all R mavens to study and perhaps improve.

Fair deal?

*

8col_processing.doc (635 Bytes)

Here is a ZIP of Aij.dat, b.dat, and T.dat

*

Aij b T.zip (154 KB)