I was wondering whether my sample run was really unreasonably slow, so I pulled together a very similar program in Perl, a language that is less beautiful than Ruby but is extremely fast. Sure enough, the Perl version took half the time. So, should we try to optimize?
We need to think about time again. Yes, we might be able to make this run faster, and thus reduce the program execution time and the time a user spends waiting for it, but to do this we’d have to burn some of the programmer’s time, and thus the time the user waits for the programmer to get the program written. In most cases, my instinct would be that 13.54 seconds to process a week’s data is OK, so I’d declare victory. But let’s suppose we’re starting to get gripes from people who use the program, and we’d like to make it run faster.
There’s an obvious optimization opportunity here: why bother sorting all the fetch tallies when all we really want to do is pick the top 10? It’s easy enough to write a little code to run through the array once and pick the 10 highest elements.
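Here’s a minimal sketch of that idea in Ruby, assuming the tallies live in a Hash mapping article names to fetch counts; the names counts and top_n are mine, not the program’s:

    # One pass over the tallies, keeping only the current top n.
    # Assumes "counts" is a Hash mapping article names to fetch counts.
    def top_n(counts, n = 10)
      top = []  # [name, count] pairs, kept sorted by descending count
      counts.each do |name, count|
        # Cheap rejection: once we have n entries, anything that can't
        # beat the current minimum is skipped without any insertion work
        next if top.size == n && count <= top.last[1]
        idx = top.index { |_, c| c < count } || top.size
        top.insert(idx, [name, count])
        top.pop if top.size > n
      end
      top
    end

(On Ruby 2.2 or later, the built-in counts.max_by(10) { |_, c| c } gets you the same answer in one line.)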
Would that help? I answered that question by instrumenting the program to measure how much time it spent in each of its two tasks. The answer (averaging over a few runs) was 7.36 seconds in the first part and 0.07 seconds in the second. Which is to say, “No, it wouldn’t help.”
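The instrumentation needn’t be anything fancier than wall-clock timestamps wrapped around each phase. A sketch of the idea, with tally_fetches standing in as a hypothetical name for the program’s real first part, and top_n from the sketch above as the second:

    # Time a block of work and report the elapsed wall-clock seconds.
    def timed(label)
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      result = yield
      elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
      STDERR.printf("%s: %.2f seconds\n", label, elapsed)
      result
    end

    counts = timed("tallying")  { tally_fetches(ARGF) }  # hypothetical phase 1
    top    = timed("reporting") { top_n(counts) }        # phase 2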
Might it be worthwhile to try to optimize the first part? Probably not; all it does is match regular expressions, and store and retrieve data using a Hash, and these are among the most heavily optimized parts of Ruby.
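To see why, consider what the first part presumably boils down to: one regular-expression match and one Hash update per line of the logfile. The pattern below is a stand-in for whatever the real program uses to pick article fetches out of the log, not its actual regexp:

    # Tally article fetches: one regexp match, one Hash update per line.
    # The pattern is illustrative, not the program's actual one.
    counts = Hash.new(0)  # missing keys count as zero
    ARGF.each_line do |line|
      counts[$1] += 1 if line =~ %r{GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+) }
    end

Both the regexp engine and the Hash live in tight C code inside the interpreter, which is exactly why hand-tuning this loop at the Ruby level is unlikely to pay off.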
So, getting fancy in replacing that sort would probably waste the programmer’s time, and the customer’s time spent waiting for the code, without saving any noticeable amount of computer or waiting-user time. Also, experience teaches that you’re not apt to go much faster than Perl does for this kind of task, so the speedup you can hope for is pretty well bounded.
We’ve just finished writing a program that does something useful and turns out to be all about search. But we haven’t come anywhere near actually writing any search algorithms. So, let’s do that.
SOME HISTORY OF TALLYING
In the spirit of credit where credit is due, the notion of getting real work done by scanning lines of textual input using regular expressions and using a content-addressable store to build up results was first popularized in the awk programming language, whose name reflects the surnames of its inventors Aho, Weinberger, and Kernighan.
This work, of course, was based on the then-radical Unix philosophy—due mostly to Ritchie and Thompson—that data should generally be stored in files in lines of text, and to some extent validated the philosophy.
Larry Wall took the ideas behind awk and, as the author of Perl, turned them into a high-performance, industrial-strength, general-purpose tool that doesn’t get the credit it deserves. It has served as the glue holding together the world’s Unix systems and, subsequently, large parts of the first-generation Web.