
More Techniques for Finding Things

In this second part of a two-part series that provides an overview of search techniques for the developer, you'll learn more about the challenges and trade-offs of various approaches. It is excerpted from chapter four of Beautiful Code: Leading Programmers Explain How They Think, written by Andy Oram and Greg Wilson (O'Reilly, 2007; ISBN: 0596510047). Copyright © 2007 O'Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O'Reilly Media.

TABLE OF CONTENTS:
  1. More Techniques for Finding Things
  2. Binary Search
  3. Binary Search Trade-offs
  4. Escaping the Loop
  5. Searching the Web
By: O'Reilly Media
July 17, 2008

Problem: Who Fetched What, When? 

Running a couple of quick scripts over the logfile data reveals that there are 12,600,064 instances of an article fetch coming from 2,345,571 different hosts. Suppose we are interested in who was fetching what, and when; an auditor, a police officer, or a marketing professional might want to know.

So, here's the problem: given a hostname, report what articles were fetched from that host, and when. The result is a list; if the list is empty, no articles were fetched.

We've already seen that a language's built-in hash or equivalent data structure gives the programmer a quick and easy way to store and look up key/value pairs. So, you might ask, why not use it?

That's an excellent question, and we should give the idea a try. There are reasons to worry that it might not work very well, so in the back of our minds, we should be thinking of a Plan B. As you may recall if you've ever studied hash tables, in order to go fast, they need to have a small load factor; in other words, they need to be mostly empty. However, a hash table that holds 2.35 million entries and is still mostly empty is going to require the use of a whole lot of memory.

To simplify things, I wrote a program that ran over all the logfiles and pulled out all the article fetches into a simple file; each line has the hostname, the time of the transaction, and the article name. Here are the first few lines:

  crawl-66-249-72-77.googlebot.com 1166406026 2003/04/08/Riffs
  egspd42470.ask.com 1166406027 2006/05/03/MARS-T-Shirt
  84.7.249.205 1166406040 2003/03/27/Scanner

(The second field, the 10-digit number, is the standard Unix/Linux representation of time as the number of seconds since the beginning of 1970.)
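That extraction pass isn't shown in the chapter, but a minimal sketch of it might look like the following. The Apache-style log format, the article-matching regular expression (borrowed from the first part of this series), and the use of ARGF are assumptions about how such a script could be put together, not the author's actual code; you would redirect its output into the simple file read below.

  #!/usr/bin/env ruby -w
  require 'date'

  # Read Apache-style log lines from the files named on the command line and
  # print one "hostname epoch-seconds article-name" line per article fetch.
  ARGF.each_line do |line|
    # Article URIs look like /ongoing/When/200x/2003/04/08/Riffs
    next unless line =~ %r{GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+) }
    article = $1
    host    = line.split[0]
    stamp   = line[/\[([^\]]+)\]/, 1]   # e.g. 08/Apr/2003:01:59:16 -0800
    epoch   = DateTime.strptime(stamp, '%d/%b/%Y:%H:%M:%S %z').strftime('%s')
    puts "#{host} #{epoch} #{article}"
  end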

Then I wrote a simple program to read this file and load a great big hash. Example 4-5 shows the program.

EXAMPLE 4-5. Loading a big hash

 1 class BigHash
 2
 3   def initialize(file)
 4     @hash = {}
 5     lines = 0
 6     File.open(file).each_line do |line|
 7       s = line.split
 8       article = s[2].intern
 9       if @hash[s[0]]
10         @hash[s[0]] << [ s[1], article ]
11       else
12         @hash[s[0]] = [ [ s[1], article ] ]
13       end
14       lines += 1
15       STDERR.puts "Line: #{lines}" if (lines % 100000) == 0
16     end
17   end
18
19   def find(key)
20     @hash[key]
21   end
22
23 end

The program should be fairly self-explanatory, but line 15 is worth a note. When you're running a big program that's going to take a lot of time, it's very disturbing when it works away silently, maybe for hours. What if something's wrong? What if it's going incredibly slow and will never finish? So, line 15 prints out a progress report after every 100,000 lines of input, which is reassuring.
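Using the class is then just a matter of pointing it at the extracted file and calling find. Here's a quick sketch; the filename is an assumption, and the hostname is simply the one from the sample lines above.

  big = BigHash.new("fetches.txt")   # takes a while; see below
  big.find("crawl-66-249-72-77.googlebot.com")
  # => a list of [time, article] pairs, or nil if the host never fetched anything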

Running this program was interesting. It took about 55 minutes of CPU time to load up the hash, and the program grew to occupy 1.56 GB of memory. A little calculation suggests that it costs around 680 bytes to store the information for each host, or slicing the data another way, about 126 bytes per fetch. This is a little scary, but probably reasonable for a hash table.

Retrieval performance was excellent. I ran 2,000 queries, half of which were randomly selected hosts from the log and thus succeeded, while the other half were those same hostnames reversed, none of which succeeded. The 2,000 queries completed in an average of about .02 seconds, so Ruby's hash implementation can look up records in a hash containing 12 million or so records thousands of times per second.
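A test along those lines is easy to reproduce. The sketch below is one way to set it up, not the author's actual harness; the filename, the sample size, and the shuffle-and-slice trick for picking random hosts are all assumptions.

  require 'benchmark'

  big = BigHash.new("fetches.txt")
  # Reread the extracted file just to collect hostnames; fine for a one-off test
  hosts = File.readlines("fetches.txt").map { |l| l.split[0] }.uniq

  # 1,000 real hostnames plus the same 1,000 reversed, which should all miss
  hits    = hosts.sort_by { rand }[0, 1000]
  queries = hits + hits.map { |h| h.reverse }

  elapsed = Benchmark.realtime { queries.each { |q| big.find(q) } }
  STDERR.puts "#{queries.size} lookups in #{elapsed} seconds"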

Those 55 minutes to load up the data are troubling, but there are some tricks to address that. You could, for example, load it up once, then serialize the hash out and read it back in. And I didn't try particularly hard to optimize the program.
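Ruby's Marshal module is the obvious tool for that trick. A sketch of the load-once-and-serialize idea might look like this; the filenames are made up, and whether reading the dump back in is actually much faster than re-parsing the logs would have to be measured.

  # One-time build: parse the extracted file, then dump the whole object to disk
  big = BigHash.new("fetches.txt")
  File.open("fetches.marshal", "wb") { |f| Marshal.dump(big, f) }

  # Later runs skip the parsing and just reload the dump
  big = File.open("fetches.marshal", "rb") { |f| Marshal.load(f) }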

The program was easy and quick to write, and it runs fast once it's initialized, so its performance is good both in terms of waiting-for-the-program time and waiting-for-the-programmer time. Still, I'm unsatisfied. I have the feeling that there ought to be a way to get this kind of performance while burning less memory, less startup time, or both. It involves writing our own search code, though.



 
 