Generic Architecture for Caching Table Data: Keeping It Real (and Small)

Limit the size of the cache

In the third installment of this series, we will add more functionality to the cache to alleviate some of the drawbacks discussed in the earlier articles.

TABLE OF CONTENTS:
  1. Generic Architecture for Caching Table Data: Keeping It Real (and Small)
  2. Limit the size of the cache
  3. Showing the cache content
  4. Conclusion
By: Mark Vilrokx
October 25, 2005


Another drawback of caching is that it consumes memory.  If you design your cache incorrectly, or if you pick the wrong tables to cache, you might actually bring the whole database to its knees, something that does not impress customers (I speak from experience).  Big caches also tend to become slower.  It is therefore useful to design for this up front and somehow limit the size of the cache.  Unfortunately, the “optimal” size, i.e. the largest cache that does not cause memory issues, varies from system to system.  Some customers have huge servers with massive amounts of RAM; some don’t.  So you can only make the cache size configurable, not fix it yourself (although we will default it to 1000 records).

When you limit the size of your cache, you must confront the following issue: what do you do when the cache is full?  In our case, what if I already have 1000 records in the cache and now I want to cache a new record?  The only option available is to overwrite an existing record.  Strictly speaking we are not overwriting but deleting an existing record and creating a new one.  But which one?  Well, that is the subject of many debates, and there are plenty of eviction algorithms out there that you are welcome to implement (MRU, LRU, LFU and so on).  Again, those algorithms are outside the scope of this series.  I am simply going to remove a random record from the cache and then add the new one.
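
For readers who do want something smarter than random eviction, here is a rough sketch of how a least-recently-used (LRU) policy could be bolted onto the same kind of associative-array cache.  Everything in it is illustrative: the record type, the last_accessed field and the sample entries are my own assumptions, not part of the package we have been building.

   DECLARE
      -- Illustrative cache entry that also remembers when it was last read.
      TYPE cached_dept_tp IS RECORD (
         name            VARCHAR2 (100),
         location        VARCHAR2 (100),
         last_accessed   TIMESTAMP
      );
      TYPE dept_cache_tp IS TABLE OF cached_dept_tp
         INDEX BY PLS_INTEGER;
      l_cache        dept_cache_tp;
      l_idx          PLS_INTEGER;
      l_oldest_idx   PLS_INTEGER;
   BEGIN
      -- Two dummy entries so the block has something to evict.
      l_cache (10).last_accessed := SYSTIMESTAMP - INTERVAL '5' MINUTE;
      l_cache (20).last_accessed := SYSTIMESTAMP;
      -- Walk the cache and remember the entry that was touched longest ago.
      l_idx := l_cache.FIRST;
      WHILE l_idx IS NOT NULL
      LOOP
         IF    l_oldest_idx IS NULL
            OR l_cache (l_idx).last_accessed < l_cache (l_oldest_idx).last_accessed
         THEN
            l_oldest_idx := l_idx;
         END IF;
         l_idx := l_cache.NEXT (l_idx);
      END LOOP;
      -- Evict the least recently used entry (department 10 in this example).
      IF l_oldest_idx IS NOT NULL
      THEN
         l_cache.DELETE (l_oldest_idx);
      END IF;
   END;
   /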

First we need a new global variable that holds the maximum cache size:

   g_max_cache_size   PLS_INTEGER := 1000;   -- default of 1000 records, so the
                                             -- limit applies even if the setter
                                             -- below is never called

and a setter procedure and a getter function to manage it:

   PROCEDURE set_max_cache_size (p_cache_size IN PLS_INTEGER)
   AS
   BEGIN
      g_max_cache_size := NVL (p_cache_size, 1000);
   END set_max_cache_size;

   FUNCTION max_cache_size
      RETURN PLS_INTEGER
   AS
   BEGIN
      RETURN g_max_cache_size;
   END max_cache_size;
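
Just to show how a customer would use these, here is a quick test from SQL*Plus.  I am assuming the package from the previous installments is called dept_cache; substitute whatever name you gave it.

   SET SERVEROUTPUT ON

   BEGIN
      -- Shrink the cache for a small system ...
      dept_cache.set_max_cache_size (100);
      DBMS_OUTPUT.put_line ('Max cache size: ' || dept_cache.max_cache_size);
      -- ... and fall back to the default of 1000 by passing NULL.
      dept_cache.set_max_cache_size (NULL);
      DBMS_OUTPUT.put_line ('Max cache size: ' || dept_cache.max_cache_size);
   END;
   /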

Then we need a procedure that can delete one record from the cache:

   PROCEDURE delete_from_cache (p_dept_id IN dept.deptno%TYPE)
   AS
   BEGIN
      g_dept_cache.DELETE (p_dept_id);
   END delete_from_cache;

And finally we need a “randomizer,” a function that returns a random dept_id that exists in the cache.

   FUNCTION random_cached_dept_id
      RETURN dept.deptno%TYPE
   AS
      l_cache_idx       PLS_INTEGER;
      l_cache_counter   PLS_INTEGER;
      l_random_number   PLS_INTEGER;
   BEGIN
      -- Pick a random position between 1 and the maximum cache size, then
      -- walk the cache until we reach that position.
      l_cache_idx := g_dept_cache.FIRST;
      l_cache_counter := 1;
      l_random_number := DBMS_RANDOM.VALUE (1, g_max_cache_size);
      <<random_loop>>
      LOOP
         EXIT random_loop WHEN l_cache_counter = l_random_number;
         l_cache_counter := l_cache_counter + 1;
         l_cache_idx := g_dept_cache.NEXT (l_cache_idx);
      END LOOP random_loop;
      -- The index of the entry we stopped at is the department id itself.
      RETURN l_cache_idx;
   END random_cached_dept_id;

Basically, this function walks through the cache a random number of entries, then stops and returns the index of the record it stopped at (remember that the index is not just a position number in our cache, but the department identifier itself).  It's probably not the finest piece of code ever created, but it works.

Now we modify the write_to_cache procedure to make sure we never exceed the maximum cache size:

   PROCEDURE write_to_cache (
      p_dept_id     IN   dept.deptno%TYPE,
      p_dept_data   IN   department_tp
   )
   AS
   BEGIN
      -- The cache is full: evict a random entry to make room for the new one.
      IF (g_dept_cache.COUNT >= max_cache_size)
      THEN
         delete_from_cache (random_cached_dept_id);
      END IF;
      g_dept_cache (p_dept_id).NAME := p_dept_data.NAME;
      g_dept_cache (p_dept_id).LOCATION := p_dept_data.LOCATION;
   END write_to_cache;

Now your cache will never grow beyond the maximum size, and your customers can set the cache size to whatever suits their system.  Suffice it to say that the smaller the maximum cache size, the less efficient your cache becomes, although that also depends on the number of distinct values.  In my case, for example, there are only four distinct values of DEPTNO, so if I set the cache size to four I cover all of them and there will never be any swapping in and out, truly an ideal cache size.
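
If you would rather size the cache from the data than guess, a block along these lines does the trick; again, dept_cache is just my assumed package name from the earlier parts.

   DECLARE
      l_distinct_depts   PLS_INTEGER;
   BEGIN
      -- Size the cache to the number of distinct department ids, so every
      -- department fits and nothing ever needs to be evicted.
      SELECT COUNT (DISTINCT deptno)
        INTO l_distinct_depts
        FROM dept;

      dept_cache.set_max_cache_size (l_distinct_depts);
   END;
   /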



 
 