Developing and Implementing Applications, continued

This article, the second of three parts, focuses on the design and creation of applications that use the database. It is excerpted from chapter five of the book Oracle Database 10g DBA Handbook, written by Kevin Loney and Bob Bryla (McGraw-Hill/Osborne, 2005; ISBN: 0072231459).

Implementing the Database Resource Manager

You can use the Database Resource Manager to allocate percentages of system resources to classes of users and jobs. For example, you could allocate 75 percent of the available CPU resources to your online users, leaving 25 percent to your batch users. To use the Database Resource Manager, you will need to create resource plans, resource consumer groups, and resource plan directives.

Prior to using the Database Resource Manager commands, you must create a “pending area” for your work. To create a pending area, use the CREATE_PENDING_AREA procedure of the DBMS_RESOURCE_MANAGER package. When you have completed your changes, use the VALIDATE_PENDING_AREA procedure to check the validity of the new set of plans, subplans, and directives. You can then either submit the changes (via SUBMIT_PENDING_AREA) or clear the changes (via CLEAR_PENDING_AREA). The procedures that manage the pending area do not have any input variables, so a sample creation of a pending area uses the following syntax:

execute DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

If the pending area is not created, you will receive an error message when you try to create a resource plan.

To create a resource plan, use the CREATE_PLAN procedure of the DBMS_RESOURCE_MANAGER package. The syntax for the CREATE_PLAN procedure is shown in the following listing:

DBMS_RESOURCE_MANAGER.CREATE_PLAN
    (plan                      IN VARCHAR2,
     comment                   IN VARCHAR2,
     cpu_mth                   IN VARCHAR2 DEFAULT 'EMPHASIS',
     active_sess_pool_mth      IN VARCHAR2 DEFAULT 'ACTIVE_SESS_POOL_ABSOLUTE',
     parallel_degree_limit_mth IN VARCHAR2 DEFAULT 'PARALLEL_DEGREE_LIMIT_ABSOLUTE',
     queueing_mth              IN VARCHAR2 DEFAULT 'FIFO_TIMEOUT');

When you create a plan, give the plan a name (in the plan variable) and a comment. By default, the CPU allocation method will use the “emphasis” method, allocating CPU resources based on percentage. The following example shows the creation of a plan called DEVELOPERS:

execute DBMS_RESOURCE_MANAGER.CREATE_PLAN -
   (Plan => 'DEVELOPERS', -
    Comment => 'Developers, in Development database');


The hyphen (-) character is a continuation character in SQL*Plus, allowing a single command to span multiple lines.

In order to create and manage resource plans and resource consumer groups, you must have the ADMINISTER_RESOURCE_MANAGER system privilege enabled for your session. DBAs have this privilege, with the admin option. To grant this privilege to non-DBAs, you must execute the GRANT_SYSTEM_PRIVILEGE procedure of the DBMS_RESOURCE_MANAGER_PRIVS package. The following example grants the user MARTHA the ability to manage the Database Resource Manager:

execute DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE -
   (grantee_name => 'MARTHA', -
    privilege_name => 'ADMINISTER_RESOURCE_MANAGER', -
    admin_option => TRUE);

You can revoke MARTHA’s privileges via the REVOKE_SYSTEM_PRIVILEGE procedure of the DBMS_RESOURCE_MANAGER_PRIVS package.
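As a sketch, a revocation might look like the following; the revokee_name and privilege_name parameter names are assumptions based on the package’s grant procedure, not taken from this article:

```sql
execute DBMS_RESOURCE_MANAGER_PRIVS.REVOKE_SYSTEM_PRIVILEGE -
   (revokee_name => 'MARTHA', -
    privilege_name => 'ADMINISTER_RESOURCE_MANAGER');
```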

With the ADMINISTER_RESOURCE_MANAGER privilege enabled, you can create a resource consumer group using the CREATE_CONSUMER_GROUP procedure within DBMS_RESOURCE_MANAGER. The syntax for the CREATE_CONSUMER_GROUP procedure is shown in the following listing:

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP
    (consumer_group IN VARCHAR2,
     comment        IN VARCHAR2,
     cpu_mth        IN VARCHAR2 DEFAULT 'ROUND-ROBIN');

You will be assigning users to resource consumer groups, so give the groups names that are based on the logical divisions of your users. The following example creates two groups—one for online developers and a second for batch developers:

execute DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP -
   (Consumer_Group => 'Online_developers', -
    Comment => 'Online developers');
execute DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP -
   (Consumer_Group => 'Batch_developers', -
    Comment => 'Batch developers');

Once the plan and resource consumer groups are established, you need to create resource plan directives and assign users to the resource consumer groups. To assign directives to a plan, use the CREATE_PLAN_DIRECTIVE procedure of the DBMS_RESOURCE_MANAGER package. The syntax for the CREATE_PLAN_DIRECTIVE procedure is shown in the following listing:

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (plan                     IN VARCHAR2,
     group_or_subplan         IN VARCHAR2,
     comment                  IN VARCHAR2,
     cpu_p1                   IN NUMBER   DEFAULT NULL,
     cpu_p2                   IN NUMBER   DEFAULT NULL,
     cpu_p3                   IN NUMBER   DEFAULT NULL,
     cpu_p4                   IN NUMBER   DEFAULT NULL,
     cpu_p5                   IN NUMBER   DEFAULT NULL,
     cpu_p6                   IN NUMBER   DEFAULT NULL,
     cpu_p7                   IN NUMBER   DEFAULT NULL,
     cpu_p8                   IN NUMBER   DEFAULT NULL,
     active_sess_pool_p1      IN NUMBER   DEFAULT NULL,
     queueing_p1              IN NUMBER   DEFAULT NULL,
     parallel_degree_limit_p1 IN NUMBER   DEFAULT NULL,
     switch_group             IN VARCHAR2 DEFAULT NULL,
     switch_time              IN NUMBER   DEFAULT NULL,
     switch_estimate          IN BOOLEAN  DEFAULT FALSE,
     max_est_exec_time        IN NUMBER   DEFAULT NULL,
     undo_pool                IN NUMBER   DEFAULT NULL,
     max_idle_time            IN NUMBER   DEFAULT NULL,
     max_idle_time_blocker    IN NUMBER   DEFAULT NULL,
     switch_time_in_call      IN NUMBER   DEFAULT NULL);

The multiple CPU variables in the CREATE_PLAN_DIRECTIVE procedure support the creation of multiple levels of CPU allocation. For example, you could allocate 75 percent of all your CPU resources (level 1) to your online users. Of the remaining CPU resources (level 2), you could allocate 50 percent to a second set of users. You could split the remaining 50 percent of resources available at level 2 to multiple groups at a third level. The CREATE_PLAN_DIRECTIVE procedure supports up to eight levels of CPU allocations.

The following example shows the creation of the plan directives for the Online_developers and Batch_developers resource consumer groups within the DEVELOPERS resource plan:

execute DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE -
   (Plan => 'DEVELOPERS', -
    Group_or_subplan => 'Online_developers', -
    Comment => 'Online developers', -
    Cpu_p1 => 75, -
    Cpu_p2 => 0, -
    Parallel_degree_limit_p1 => 12);
execute DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE -
   (Plan => 'DEVELOPERS', -
    Group_or_subplan => 'Batch_developers', -
    Comment => 'Batch developers', -
    Cpu_p1 => 25, -
    Cpu_p2 => 0, -
    Parallel_degree_limit_p1 => 6);

In addition to allocating CPU resources, the plan directives restrict the parallelism of operations performed by members of the resource consumer group. In the preceding example, batch developers are limited to a degree of parallelism of 6, reducing their ability to consume system resources. Online developers are limited to a degree of parallelism of 12.

To assign a user to a resource consumer group, use the SET_INITIAL_CONSUMER_GROUP procedure of the DBMS_RESOURCE_MANAGER package. The syntax for the SET_INITIAL_CONSUMER_GROUP procedure is shown in the following listing:

DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP
    (user           IN VARCHAR2,
     consumer_group IN VARCHAR2);

If a user has never had an initial consumer group set via the SET_INITIAL_CONSUMER_GROUP procedure, the user is automatically enrolled in the resource consumer group named DEFAULT_CONSUMER_GROUP.
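For example, to place the user MARTHA in the Online_developers group created earlier (a sketch using the names from this section):

```sql
execute DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP -
   (user => 'MARTHA', -
    consumer_group => 'Online_developers');
```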

To enable the Resource Manager within your database, set the RESOURCE_MANAGER_PLAN database initialization parameter to the name of the resource plan for the instance. Resource plans can have subplans, so you can create tiers of resource allocations within the instance. If you do not set a value for the RESOURCE_MANAGER_PLAN parameter, resource management is not performed in the instance.

You can dynamically alter the instance to use a different resource allocation plan by changing the RESOURCE_MANAGER_PLAN parameter via the alter system command. For example, you could create a resource plan for your daytime users (DAYTIME_USERS) and a second for your batch users (BATCH_USERS). You could create a job that each day executes this command at 6:00 A.M.:

alter system set resource_manager_plan = 'DAYTIME_USERS';

Then at a set time in the evening, you could switch plans to benefit the batch users:

alter system set resource_manager_plan = 'BATCH_USERS';

The resource allocation plan for the instance will thus be altered without needing to shut down and restart the instance.

When using multiple resource allocation plans in this fashion, make sure you don’t accidentally use the wrong plan at the wrong time. For example, if the database is down during a scheduled plan change, the job that changes the plan may not execute. How will that affect your users? To avoid such problems, minimize the number of resource allocation plans in use.

In addition to the examples and commands shown in this section, you can update existing resource plans (via the UPDATE_PLAN procedure), delete resource plans (via DELETE_PLAN), and cascade the deletion of a resource plan plus all its subplans and related resource consumer groups (DELETE_PLAN_CASCADE). You can update and delete resource consumer groups via the UPDATE_CONSUMER_GROUP and DELETE_CONSUMER_GROUP procedures, respectively. Resource plan directives may be updated via UPDATE_PLAN_DIRECTIVE and deleted via DELETE_PLAN_DIRECTIVE.
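For example, removing the DEVELOPERS plan along with its subplans and consumer groups might look like the following sketch (run within a pending area, as with the other Resource Manager changes):

```sql
execute DBMS_RESOURCE_MANAGER.DELETE_PLAN_CASCADE(plan => 'DEVELOPERS');
```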

When you are modifying resource plans, resource consumer groups, and resource plan directives, you should test the changes prior to implementing them. To test your changes, create a pending area for your work via the CREATE_PENDING_AREA procedure of the DBMS_RESOURCE_MANAGER package. When you have completed your changes, use the VALIDATE_PENDING_AREA procedure to check the validity of the new set of plans, subplans, and directives. You can then either submit the changes (via SUBMIT_PENDING_AREA) or clear the changes (via CLEAR_PENDING_AREA). The procedures that manage the pending area do not have any input variables, so a sample validation and submission of a pending area uses the following syntax:

execute DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
execute DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
Switching Consumer Groups

Three of the parameters in the CREATE_PLAN_DIRECTIVE procedure allow sessions to switch consumer groups when resource limits are met. As shown in the previous section, the parameters for CREATE_PLAN_DIRECTIVE include Switch_Group, Switch_Time, and Switch_Estimate.

The Switch_Time value is the time, in seconds, a job can run before it is switched to another consumer group. The default Switch_Time value is NULL (unlimited). You should set the Switch_Group parameter value to the group the session will be switched to once the switch time limit is reached. By default, Switch_Group is NULL. If you set Switch_Group to the value 'CANCEL_SQL', the current call will be cancelled when the switch criterion is met. If the Switch_Group value is 'KILL_SESSION', the session will be killed when the switch criterion is met.

You can use the third parameter, Switch_Estimate, to tell the database to switch the consumer group for a database call before the operation even begins to execute. If you set Switch_Estimate to TRUE, Oracle will use its execution time estimate to automatically switch the consumer group for the operation instead of waiting for it to reach the Switch_Time value.
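As a sketch, a directive that moves long-running online developer sessions into the batch group after ten minutes might look like this (the parameter values are illustrative, not from the book):

```sql
execute DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE -
   (Plan => 'DEVELOPERS', -
    Group_or_subplan => 'Online_developers', -
    Comment => 'Switch long jobs to the batch group', -
    Switch_group => 'Batch_developers', -
    Switch_time => 600, -
    Switch_estimate => TRUE);
```

If a directive for the group already exists in the plan, you would set these values via UPDATE_PLAN_DIRECTIVE instead of creating a second directive.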

You can use the group-switching features to minimize the impact of long-running jobs within the database. You can configure consumer groups with different levels of access to the system resources and customize them to support fast jobs as well as long-running jobs—the ones that reach the switch limit will be redirected to the appropriate groups before they even execute.

Implementing Stored Outlines

As you migrate from one database to another, the execution paths for your queries may change. Your execution paths may change for several reasons:

  1. You may have enabled different optimizer features in the different databases.
  2. The statistics for the queried tables may differ in the databases.
  3. The frequency with which statistics are gathered may differ among the databases.
  4. The databases may be running different versions of the Oracle kernel.

The effects of these differences on your execution paths can be dramatic, and they can have a negative impact on your query performance as you migrate or upgrade your application. To minimize the impact of these differences on your query performance, you can use a feature called a stored outline.

A stored outline stores a set of hints for a query. Those hints will be used every time the query is executed. Using the stored hints will increase the likelihood that the query will use the same execution path each time. Hints decrease the impact of database moves on your query performance. You can view the outlines and related hints via the USER_OUTLINES and USER_OUTLINE_HINTS views.
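For example, you might check which outlines exist and whether they have been used (a sketch; the NAME, CATEGORY, and USED columns are standard in USER_OUTLINES):

```sql
select Name, Category, Used
  from USER_OUTLINES;
```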

To start creating stored outlines for all queries, create custom categories of outlines and use the category name as the value of the CREATE_STORED_OUTLINES parameter in the database initialization file, as shown here:

CREATE_STORED_OUTLINES = DEVELOPMENT
In this example, outlines will be stored for queries within the DEVELOPMENT category. You must have the CREATE ANY OUTLINE system privilege in order to create an outline. Use the create outline command to create an outline for a query, as shown in the following listing:

create outline YTD_SALES
for category DEVELOPMENT
on
select Year_to_Date_Sales
  from SALES
 where Region = 'SOUTH'
   and Period = 1;


If you do not specify a name for your outline, the outline will be given a system-generated name.

If you have set CREATE_STORED_OUTLINES to a category name in your initialization file, Oracle will create stored outlines for your queries; using the create outline command gives you more control over the outlines that are created.


You can create outlines for DML commands and for create table as select commands.

Once an outline has been created, you can alter it. For example, you may need to alter the outline to reflect significant changes in data volumes and distribution. You can use the rebuild clause of the alter outline command to regenerate the hints used during query execution, as shown next:

alter outline YTD_SALES rebuild;

You can also rename an outline via the rename clause of the alter outline command, as shown here:

alter outline YTD_SALES rename to YTD_SALES_REGION;

You can change the category of an outline via the change category clause, as shown in the following example:

alter outline YTD_SALES_REGION change category to DEFAULT;

To manage stored outlines, use the DBMS_OUTLN package, which gives you the following capabilities:

  1. Drop outlines that have never been used
  2. Drop outlines within a specific category
  3. Move outlines from one category to another
  4. Create outlines for specific statements
  5. Update outlines to the current version’s signature

Each of these capabilities has a corresponding procedure within DBMS_OUTLN. To drop outlines that have never been used, execute the DROP_UNUSED procedure, as shown in the following example:

execute DBMS_OUTLN.DROP_UNUSED;
You can clear the “used” setting of an outline via the CLEAR_USED procedure. Pass the name of the outline as the input variable to CLEAR_USED:

execute DBMS_OUTLN.CLEAR_USED('YTD_SALES_REGION');
To drop all the outlines within a category, execute the DROP_BY_CAT procedure. The DROP_BY_CAT procedure has the name of the category as its only input parameter. The following example drops all the outlines within the DEVELOPMENT category:

execute DBMS_OUTLN.DROP_BY_CAT('DEVELOPMENT');
To reassign outlines from an old category to a new category, use the UPDATE_BY_CAT procedure, as shown in the following example:

execute DBMS_OUTLN.UPDATE_BY_CAT -
   (oldcat => 'DEVELOPMENT', -
    newcat => 'TEST');

To drop a specific outline, use the drop outline command.

If you have imported outlines generated in an earlier release, use the UPDATE_SIGNATURES procedure of DBMS_OUTLN to ensure the signatures are compatible with the current release’s computation algorithm.

Editing Stored Outlines

You can use DBMS_OUTLN_EDIT to edit the stored outlines. The procedures within DBMS_OUTLN_EDIT are detailed in the following table:

CHANGE_JOIN_POS           Changes the join position for the hint identified by outline name and hint number to the position specified. Inputs are name, hintno, and newpos.

CREATE_EDIT_TABLES        Creates outline editing tables in the user’s schema.

DROP_EDIT_TABLES          Drops the outline editing tables in the user’s schema.

GENERATE_SIGNATURE        Generates a signature for the specified SQL text.

REFRESH_PRIVATE_OUTLINE   Refreshes the in-memory copy of the outline, synchronizing it with the edits made.


As of Oracle 10g, you no longer need to execute the CREATE_EDIT_TABLES procedure because the edit tables are available as temporary tables in the SYSTEM schema. The procedure is still available, however, for backward compatibility.

You can use private outlines, which are seen only within your current session. Changes made to a private outline do not affect any other users. To enable private outline editing, set the USE_PRIVATE_OUTLINES initialization parameter to TRUE. Use the REFRESH_PRIVATE_OUTLINE procedure to have your changes take effect for the in-memory versions of the outlines.
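A minimal editing session might look like the following sketch (the outline name is the one used earlier in this article):

```sql
alter session set USE_PRIVATE_OUTLINES = TRUE;
-- edit the private copy of the outline, then synchronize the in-memory version:
execute DBMS_OUTLN_EDIT.REFRESH_PRIVATE_OUTLINE('YTD_SALES_REGION');
```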

Using SQL Profiles

As of Oracle 10g, you can use SQL profiles to further refine the SQL execution plans chosen by the optimizer. SQL profiles are particularly useful when you are attempting to tune code that you do not have direct access to (for example, within a packaged application). The SQL profile consists of statistics that are specific to the statement, allowing the optimizer to know more about the exact selectivity and cost of the steps in the execution plan.

SQL profiling is part of the automatic tuning capability described in Chapter 8. Once you accept a SQL profile recommendation, it is stored in the data dictionary. As with stored outlines, you can use a category attribute to control its usage. See Chapter 8 for further details on the use of the automatic tools for detection and diagnosis of SQL performance issues.

Sizing Database Objects

Choosing the proper space allocation for database objects is critical. Developers should begin estimating space requirements before the first database objects are created. Afterward, the space requirements can be refined based on the actual usage statistics. In the following sections, you will see the space estimation methods for tables, indexes, and clusters. You’ll also see methods for determining the proper settings for pctfree and pctused.


You can enable Automatic Segment Space Management when you create a tablespace; you cannot enable this feature for existing tablespaces. If you are using Automatic Segment Space Management, Oracle manages the pctused, freelists, and freelist groups parameters.

Why Size Objects?

You should size your database objects for three reasons:

  1. To preallocate space in the database, thereby minimizing the amount of future work required to manage objects’ space requirements
  2. To reduce the amount of space wasted due to overallocation of space
  3. To improve the likelihood that a freed extent will be reused by another segment

You can accomplish all these goals by following the sizing methodology shown in the following sections. This methodology is based on Oracle’s internal methods for allocating space to database objects. Rather than rely on detailed calculations, the methodology relies on approximations that will dramatically simplify the sizing process while simplifying the long-term maintainability of the database.

The Golden Rule for Space Calculations

Keep your space calculations simple, generic, and consistent across databases. There are far more productive ways to spend your work time than performing extremely detailed space calculations that Oracle may ignore anyway. Even if you follow the most rigorous sizing calculations, you cannot be sure how Oracle will load the data into the table or index.

In the following section, you’ll see how to simplify the space-estimation process, freeing you to perform much more useful DBA functions. These processes should be followed whether you are generating the default storage values for a dictionary-managed tablespace or the extent sizes for locally managed tablespaces.


In an Oracle 10g database, you should be using locally managed tablespaces. If you have upgraded from a prior release that used dictionary-managed tablespaces, you should replace them with locally managed tablespaces.

The Ground Rules for Space Calculations

Oracle follows a set of internal rules when allocating space:

  1. Oracle only allocates whole blocks, not parts of blocks.
  2. Oracle allocates sets of blocks rather than individual blocks.
  3. Oracle may allocate larger or smaller sets of blocks depending on the available free space in the tablespace.

Your goal should be to work with Oracle’s space-allocation methods instead of against them. If you use consistent extent sizes, you can largely delegate the space allocation to Oracle even in a dictionary-managed tablespace.

The Impact of Extent Size on Performance

There is no direct performance benefit gained by reducing the number of extents in a table. In some situations (such as in Parallel Query environments), having multiple extents in a table can significantly reduce I/O contention and enhance your performance. Regardless of the number of extents in your tables, they need to be properly sized.

Oracle reads data from tables in two ways: by RowID (usually immediately following an index access) and via full table scans. If the data is read via RowID, the number of extents in the table is not a factor in the read performance. Oracle will read each row from its physical location (as specified in the RowID) and retrieve the data.

If the data is read via a full table scan, the size of your extents can impact performance to a very small degree. When reading data via a full table scan, Oracle will read multiple blocks at a time. The number of blocks read at a time is set via the DB_FILE_MULTIBLOCK_READ_COUNT database initialization parameter and is limited by the operating system’s I/O buffer size. For example, if your database block size is 8KB and your operating system’s I/O buffer size is 128KB, you can read up to 16 blocks per read during a full table scan. In that case, setting DB_FILE_MULTIBLOCK_READ_COUNT to a value higher than 16 will not change the performance of the full table scans.

Your extent sizes should take advantage of Oracle’s ability to perform multiblock reads during full table scans. For example, if your operating system’s I/O buffer is 128KB, your extent sizes should be a multiple of 128KB.

Consider a table that has ten extents, each of which is 128KB in size. For this example, the operating system’s I/O buffer size is 128KB. To perform a full table scan, Oracle must perform ten reads (one per extent, because each extent matches the I/O buffer size). If the same data is stored in a single 1280KB extent instead, Oracle still must perform ten reads to scan the table. Coalescing the extents results in no gain in performance.

If the table’s extent size is not a multiple of the I/O buffer size, the number of reads required for full table scans may increase. For the same 1280KB table, you could create eight extents that are 160KB each. To read the first extent, Oracle will perform two reads: one for the first 128KB of the extent, and a second read for the last 32KB of the extent (reads cannot span extents). To read the whole table, Oracle must therefore perform two reads per extent, or 16 reads. Reducing the number of extents from ten to eight increases the number of reads by 60 percent.

To avoid paying a performance penalty for your extent sizes, you must choose from one of the following strategies:

  1. Create extents that are significantly larger than your I/O size. If the extents are very large, very few additional reads will be necessary even if the extent size is not a multiple of the I/O buffer size.
  2. Set DB_FILE_MULTIBLOCK_READ_COUNT to take full advantage of the I/O buffer size for your operating system. Note that setting this too high may make the optimizer think that full table scans are more efficient than they actually are, resulting in changes to existing execution plans.
  3. If you must create small extents, choose extent sizes that are a multiple of the I/O buffer size for your operating system.

If the I/O buffer size for your operating system is 128KB, the pool of extent sizes from which to choose is 128KB, 256KB, 512KB, 1MB, and so on. You can further reduce this pool of extent sizes.

Use a pool of extent sizes that meets the following criterion: every extent size holds an integral multiple of every smaller extent size. The simplest implementation of this rule is to create extent sizes that increase by doubling: 1MB, 2MB, 4MB, 8MB, 16MB, 32MB. To reduce the number of extent sizes to manage, you can quadruple the values instead of doubling: 1MB, 4MB, 16MB, and so on. Use these values for your locally managed tablespaces, or rely on their automatic extent-sizing feature.

Sizing the Objects

To effectively manage your space, all you need to do is select a set of space values that meet the criteria described in the preceding sections. Once the space allocations are finalized, separate them by tablespace. Here’s an example:

create tablespace DATA_1M
datafile '/u01/oracle/VLDB/data_1m.dbf'
size 100M
extent management local uniform size 1M;

create tablespace DATA_4M
datafile '/u01/oracle/VLDB/data_4m.dbf'
size 400M
extent management local uniform size 4M;

create tablespace DATA_16M
datafile '/u01/oracle/VLDB/data_16m.dbf'
size 16000M
extent management local uniform size 16M;

In this example, three separate DATA tablespaces are created, with extent sizes of 1MB, 4MB, and 16MB. If you need to create a table 3MB in size, you can either create it with three 1MB extents in DATA_1M or with one 4MB extent in DATA_4M. A table that will grow to 10MB can be placed in DATA_4M.
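For example, a small lookup table (a hypothetical table, not from the book) would be directed to the tablespace with the smallest uniform extent size:

```sql
create table STATE_CODES
   (State_Code  VARCHAR2(2),
    State_Name  VARCHAR2(30))
tablespace DATA_1M;
```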

As your tables grow in size, your default storage clauses will grow in a consistent fashion, following the space rules and your standards for extent sizes. DATA_64M would be next, followed by DATA_256M and DATA_1G. Use the same extent sizes across your databases to ease space management of your entire database environment.

As the extent sizes grow, the distribution of extent sizes across tablespaces will usually result in a separation of table types—small static tables will be isolated in the tablespaces with small extent sizes. Large transaction-processing tables (or their partitions) will be segregated to the large extent size tables, simplifying later management and tuning activities.

In the following sections you will see guidelines for estimations of the space usage for your objects. Because the target sizes (1MB, 4MB, 16MB, and so on) are not close together, the following estimations do not include highly detailed calculations.


You can set the database block size at the tablespace level. You must set the block size during the tablespace creation, and you must have already created a buffer cache for the block size. Be sure the extent sizes you choose account for the largest block size in use in the database. Limiting the usage of nonstandard block sizes in the database will simplify cross-database maintenance and your sizing procedures.

Estimating Space Requirements for Tables

As of Oracle Database 10g, you can use the CREATE_TABLE_COST procedure of the DBMS_SPACE package to estimate the space required by a table. The procedure determines the space required for a table based on attributes such as the tablespace storage parameters, the tablespace block size, the number of rows, and the average row length. The procedure is valid for both dictionary-managed and locally managed tablespaces.

There are two versions of the CREATE_TABLE_COST procedure (it is overloaded so you can use the same procedure both ways). The first version has four input variables: tablespace_name, avg_row_size, row_count, and pct_free. Its output variables are used_bytes and alloc_bytes. The second version’s input variables are tablespace_name, colinfos, row_count, and pct_free; its output variables are used_bytes and alloc_bytes. Descriptions of the variables are provided in the following table:

tablespace_name   The tablespace in which the object will be created.

avg_row_size      The average length of a row in the table.

colinfos          The description of the columns.

row_count         The anticipated number of rows in the table.

pct_free          The pctfree setting for the table.

used_bytes        The space used by the table’s data. This value includes the overhead due to the pctfree setting and other block features.

alloc_bytes       The space allocated to the table’s data, based on the tablespace characteristics. This value takes the tablespace extent size settings into account.

For example, if you have an existing tablespace named USERS, you can estimate the space required for a new table in that tablespace. In the following example, the CREATE_TABLE_COST procedure is executed with values passed for the average row size, the row count, and the pctfree setting. The used_bytes and alloc_bytes variables are defined and are displayed via the DBMS_OUTPUT.PUT_LINE procedure:

declare
   calc_used_bytes  NUMBER;
   calc_alloc_bytes NUMBER;
begin
   DBMS_SPACE.CREATE_TABLE_COST (
      tablespace_name => 'USERS',
      avg_row_size    => 100,
      row_count       => 5000,
      pct_free        => 10,
      used_bytes      => calc_used_bytes,
      alloc_bytes     => calc_alloc_bytes);
   DBMS_OUTPUT.PUT_LINE('Used bytes: '||calc_used_bytes);
   DBMS_OUTPUT.PUT_LINE('Allocated bytes: '||calc_alloc_bytes);
end;
/

The output of this PL/SQL block will display the used and allocated bytes calculated for these variable settings. You can easily calculate the expected space usage for multiple combinations of space settings prior to creating the table.


You must use the set serveroutput on command to enable the script’s output to be displayed within a SQL*Plus session.

Estimating Space Requirements for Indexes

As of Oracle Database 10g, you can use the CREATE_INDEX_COST procedure of the DBMS_SPACE package to estimate the space required by an index. The procedure determines the space required for an index based on attributes such as the tablespace storage parameters, the tablespace block size, the number of rows, and the average row length. The procedure is valid for both dictionary-managed and locally managed tablespaces.

For index space estimations, the input variables include the DDL commands executed to create the index and the name of the local plan table (if one exists). The index space estimates rely on the statistics for the related table. You should be sure those statistics are correct before starting the space-estimation process; otherwise, the results will be skewed.

The variables for the CREATE_INDEX_COST procedure are described in the following table:

ddl           The create index command.

used_bytes    The number of bytes used by the index’s data.

alloc_bytes   The number of bytes allocated for the index’s extents.

plan_table    The plan table to use (the default is NULL).

Because the CREATE_INDEX_COST procedure bases its results on the table’s statistics, you cannot use this procedure until the table has been created, loaded, and analyzed. The following example estimates the space required for a new index on the BOOKSHELF table. The tablespace designation is part of the create index command passed to the CREATE_INDEX_COST procedure as part of the ddl variable value.

declare
   calc_used_bytes  NUMBER;
   calc_alloc_bytes NUMBER;
begin
   DBMS_SPACE.CREATE_INDEX_COST (
      ddl => 'create index BOOK_CAT on BOOKSHELF '||
             '(CategoryName) tablespace BOOKS_INDEX',
      used_bytes  => calc_used_bytes,
      alloc_bytes => calc_alloc_bytes);
   DBMS_OUTPUT.PUT_LINE('Used bytes = '||calc_used_bytes);
   DBMS_OUTPUT.PUT_LINE('Allocated bytes = '||calc_alloc_bytes);
end;
/

The output of the script will show the used and allocated bytes values for the proposed index.

Estimating the Proper pctfree

The pctfree value represents the percentage of each data block that is reserved as free space. This space is used when a row that has already been stored in that data block grows in length, either by updates of previously NULL fields or by updates of existing values to longer values.

There is no single value for pctfree that will be adequate for all tables in all databases. To simplify space management, choose a consistent set of pctfree values:

  - For indexes whose key values are rarely changed: 2
  - For tables whose rows seldom change: 2
  - For tables whose rows frequently change: 10 to 30
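As a sketch, these guidelines translate into storage clauses such as the following (the table names and columns are hypothetical, not from this chapter):

```sql
-- Rows are inserted once and rarely updated: keep pctfree low.
create table ORDER_ARCHIVE
( Order_ID    NUMBER,
  Order_Date  DATE)
pctfree 2;

-- Rows grow through later updates of NULL or short values:
-- reserve more free space per block.
create table CUSTOMER_PROFILE
( Customer_ID NUMBER,
  Notes       VARCHAR2(2000))
pctfree 20;
```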

Why maintain free space in a table or index even if the rows seldom change? Oracle needs space within blocks to perform block maintenance functions. If there is not enough free space available (for example, to support a large number of transaction headers during concurrent inserts), Oracle will temporarily allocate part of the block’s pctfree area. You should choose a pctfree value that supports this allocation of space. To reserve space for transaction headers in insert-intensive tables, set the initrans parameter to a nondefault value. In general, your pctfree area should be large enough to hold several rows of data.


Oracle automatically allows up to 255 concurrent update transactions for any data block, depending on the available space in the block.

Because pctfree is tied to the way in which updates occur in an application, determining the adequacy of its setting is a straightforward process. The pctfree setting controls the number of records that are stored in a block in a table. To see if pctfree has been set correctly, first determine the number of rows in a block. You can use the DBMS_STATS package to gather statistics. If the pctfree setting is too low, the number of chained rows will steadily increase. You can monitor the database’s V$SYSSTAT view for increasing values of the “table fetch continued row” action; these indicate the need for the database to access multiple blocks for a single row.
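One way to watch for this trend, assuming you have access to the dynamic performance views, is to query the statistic directly:

```sql
-- A steadily increasing value for this cumulative statistic suggests
-- that rows are chained or migrated and pctfree may be set too low.
select Name, Value
  from V$SYSSTAT
 where Name = 'table fetch continued row';
```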


When rows are moved due to inadequate space in the pctfree area, the move is called a row migration. Row migration will impact the performance of your transactions.


For indexes that will support a large number of inserts, pctfree may need to be as high as 50 percent.

Determining the Proper pctused

The pctused value determines when a used block is re-added to the list of blocks into which rows can be inserted. For example, consider a table that has a pctfree value of 20 and a pctused value of 50. When rows are inserted into the table, Oracle will keep 20 percent of each block free (for use by later updates of the inserted records). If you now begin to delete records from the block, Oracle will not automatically reuse the freed space inside the block. New rows will not be inserted into the block until the block’s used space falls below its pctused percentage—in this case, 50 percent.

The pctused value, by default, is set to 40. If your application features frequent deletions and you use the default value for pctused, you may have many blocks in your table that are only 40 percent used.

For most systems, you can set pctused so that pctused plus pctfree equals 80. If your pctfree setting is 20 percent, for example, set your pctused value to 60 percent. That way, at least 60 percent of each block will be used, saving 20 percent of the block for updates and row extensions.
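A minimal sketch of that pairing follows (the table name is hypothetical); 20 percent of each block is reserved for row growth, and a block rejoins the insert freelist once deletions drop its used space below 60 percent:

```sql
create table SALES_DETAIL
( Sale_ID  NUMBER,
  Amount   NUMBER)
pctfree 20
pctused 60;
```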

When creating tablespaces, you can enable Automatic Segment Space Management, in which case Oracle will manage the freelist dynamically. When Automatic Segment Space Management is enabled, you do not set the pctused setting.
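A tablespace created with Automatic Segment Space Management might look like the following sketch (the tablespace name, datafile path, and size are illustrative only):

```sql
create tablespace USERS_ASSM
datafile '/u01/oradata/orcl/users_assm01.dbf' size 100M
extent management local
segment space management auto;
```

Segments created in this tablespace ignore pctused and freelist settings; Oracle tracks block fullness internally.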

Reverse Key Indexes

In a reverse key index, the values are stored backward—for example, a value of 2201 is stored as 1022. If you use a standard index, consecutive values are stored near each other. In a reverse key index, consecutive values are not stored near each other. If your queries do not commonly perform range scans and you are concerned about I/O contention in your indexes, reverse key indexes may be a tuning solution to consider. When sizing a reverse key index, follow the same method used to size a standard index, as shown in the prior sections of this chapter.
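Creating a reverse key index requires only the reverse keyword; the index and table names below are hypothetical:

```sql
-- The reverse keyword stores byte-reversed key values, spreading
-- sequential inserts across many leaf blocks instead of one hot block.
create index ORDER_ID_RIDX on ORDERS(Order_ID) reverse;
```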

Sizing Bitmap Indexes

If you create a bitmap index, Oracle will dynamically compress the bitmaps generated. The compression of the bitmap may result in substantial storage savings. To estimate the size of a bitmap index, estimate the size of a standard (B*tree) index on the same columns using the methods provided in the preceding sections of this chapter. After calculating the space requirements for the B*tree index, divide that size by 10 to determine the most likely maximum size of a bitmap index for those columns. In general, bitmap indexes will be between 2 and 10 percent of the size of a comparable B*tree index. The size of the bitmap index will depend on the variability and number of distinct values in the indexed columns.
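As an illustration, a bitmap index on the low-cardinality CategoryName column of the BOOKSHELF table used earlier in this chapter could be created as follows (the index name is hypothetical):

```sql
create bitmap index BOOK_CAT_BMX
    on BOOKSHELF(CategoryName);
```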

Sizing Index-Organized Tables

An index-organized table is stored sorted by its primary key. The space requirements of an index-organized table closely mirror those of an index on all of the table’s columns. The difference in space estimation comes in calculating the space used per row, because an index-organized table does not have RowIDs.

The following listing gives the calculation for the space requirement per row for an index-organized table (note that this storage estimate is for the entire row, including its out-of-line storage):

Row length for sizing = Average row length 
                         + number of columns
                         + number of long columns
                         + 2 header bytes

Enter this value as the row length when using the CREATE_TABLE_COST procedure for the index-organized table.
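A minimal index-organized table definition looks like the following sketch (the table and columns are hypothetical); the organization index clause stores the rows in the primary key index structure, so a primary key is required:

```sql
create table BOOKSHELF_CHECKOUT
( Title         VARCHAR2(100),
  Checkout_Date DATE,
  constraint BOOKSHELF_CHECKOUT_PK primary key (Title))
organization index;
```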

Sizing Tables That Contain Large Objects (LOBs)

LOB data (in BLOB or CLOB datatypes) is usually stored apart from the main table. You can use the lob clause of the create table command to specify the storage for the LOB data. In the main table, Oracle stores a lob locator value that points to the LOB data. When the LOB data is stored out of line, between 36 and 86 bytes of control data (the lob locator) remain inline in the row piece.
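A sketch of the lob clause follows (the table, column, and tablespace names are hypothetical):

```sql
-- disable storage in row forces the CLOB data out of line; only the
-- lob locator remains in the row piece of the main table.
create table DOCUMENT
( Doc_ID   NUMBER primary key,
  Doc_Text CLOB)
lob (Doc_Text) store as
  ( tablespace DOC_LOBS
    disable storage in row );
```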

Oracle does not always store the LOB data apart from the main table. In general, the LOB data is not stored apart from the main table until the LOB data and the lob locator value total more than 4000 bytes. Therefore, if you will be storing short LOB values, you need to consider their impact on the storage of your main table. If your LOB values are less than 4000 characters, you may be able to use VARCHAR2 datatypes instead of LOB datatypes for the data storage.

Sizing Partitions

You can create multiple partitions of a table. In a partitioned table, multiple separate physical partitions constitute the table. For example, a SALES table may have four partitions: SALES_NORTH, SALES_SOUTH, SALES_EAST, and SALES_WEST. You should size each of those partitions using the table-sizing methods described earlier in this chapter. You should size the partition indexes using the index-sizing methods shown earlier in this chapter.
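Such a SALES table could be built with list partitioning, as in the following sketch (the partitioning column and its values are assumptions for illustration):

```sql
create table SALES
( Region     VARCHAR2(10),
  Sale_Date  DATE,
  Amount     NUMBER)
partition by list (Region)
( partition SALES_NORTH values ('NORTH'),
  partition SALES_SOUTH values ('SOUTH'),
  partition SALES_EAST  values ('EAST'),
  partition SALES_WEST  values ('WEST'));
```

Each named partition is a separate segment and can be sized, and placed in a tablespace, independently.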

Using Temporary Tables

You can create temporary tables to hold temporary data during your application processing. The table’s data can be specific to a transaction or maintained throughout a user’s session. When the transaction or session completes, the data is truncated from the table.

To create a temporary table, use the create global temporary table option of the create table command. To delete the rows at the end of the transaction, specify on commit delete rows, as shown here:

create global temporary table MY_TEMP_TABLE
(Name     VARCHAR2(25),
 Street   VARCHAR2(25),
 City     VARCHAR2(25))
on commit delete rows;

You can then insert rows into MY_TEMP_TABLE during your application processing. When you commit, Oracle will truncate MY_TEMP_TABLE. To keep the rows for the duration of your session, specify on commit preserve rows instead.
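A session-duration variant of the same table might look like this (the table name is hypothetical); its rows survive commits but are truncated when the session ends:

```sql
create global temporary table MY_SESSION_TABLE
(Name     VARCHAR2(25),
 Street   VARCHAR2(25),
 City     VARCHAR2(25))
on commit preserve rows;
```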

From the DBA perspective, you need to know if your application developers are using this feature. If they are, you need to account for the space required by their temporary tables during their processing. Temporary tables are commonly used to improve processing speeds of complex transactions, so you may need to balance the performance benefit against the space costs. You can create indexes on temporary tables to further improve processing performance, again at the cost of increased space usage.


Temporary tables and their indexes do not acquire any space until the first insert into them occurs. When they are no longer in use, their space is deallocated.

Supporting Tables Based on Abstract Datatypes

User-defined datatypes, also known as abstract datatypes, are a critical part of object-relational database applications. Every abstract datatype has a related constructor method that users execute to manipulate data in tables. Abstract datatypes define the structure of data—for example, an ADDRESS_TY datatype may contain attributes for address data, along with methods for manipulating that data. When you create the ADDRESS_TY datatype, Oracle will automatically create a constructor method called ADDRESS_TY. The ADDRESS_TY constructor method contains parameters that match the datatype's attributes, facilitating inserts of new values into the datatype's format. In the following sections, you will see how to create tables that use abstract datatypes, along with information on the sizing and security issues associated with that implementation.

You can create tables that use abstract datatypes for their column definitions. For example, you could create an abstract datatype for addresses, as shown here:

create type ADDRESS_TY as object
(Street   VARCHAR2(50),
City      VARCHAR2(25),
State     CHAR(2),
Zip       NUMBER);

Once the ADDRESS_TY datatype has been created, you can use it as a datatype when creating your tables, as shown in the following listing:

create table CUSTOMER
(Name    VARCHAR2(25),
Address  ADDRESS_TY);

When you create an abstract datatype, Oracle creates a constructor method for use during inserts. The constructor method has the same name as the datatype, and its parameters are the attributes of the datatype. When you insert records into the CUSTOMER table, you need to use the ADDRESS_TY datatype’s constructor method to insert Address values, as shown here:

insert into CUSTOMER values
('Joe', ADDRESS_TY('My Street', 'Some City', 'ST', 10001));

In this example, the insert command calls the ADDRESS_TY constructor method in order to insert values into the attributes of the ADDRESS_TY datatype.

The use of abstract datatypes increases the space requirements of your tables by 8 bytes for each datatype used. If a datatype contains another datatype, you should add 8 bytes for each of the datatypes.

Using Object Views

The use of abstract datatypes may increase the complexity of your development environment. When you query the attributes of an abstract datatype, you must use a syntax that is not used against tables that do not contain abstract datatypes. If you do not implement abstract datatypes in all your tables, you will need to use one syntax for some of your tables and a separate syntax for other tables—and you will need to know ahead of time which queries use abstract datatypes.

For example, the CUSTOMER table uses the ADDRESS_TY datatype described in the previous section:

create table CUSTOMER
(Name    VARCHAR2(25),
Address  ADDRESS_TY);

The ADDRESS_TY datatype, in turn, has four attributes: Street, City, State, and Zip. If you want to select the Street attribute value from the Address column of the CUSTOMER table, you may write the following query:

select Address.Street from CUSTOMER;

However, this query will not work. When you query the attributes of abstract datatypes, you must use correlation variables for the table names. Otherwise, there may be an ambiguity regarding the object being selected. To query the Street attribute, use a correlation variable (in this case, “C”) for the CUSTOMER table, as shown in the following example:

select C.Address.Street from CUSTOMER  C;

As shown in this example, you need to use correlation variables for queries of abstract datatype attributes even if the query only accesses one table. There are therefore two features of queries against abstract datatype attributes: the notation used to access the attributes and the correlation variables requirement. In order to implement abstract datatypes consistently, you may need to alter your SQL standards to support 100-percent usage of correlation variables. Even if you use correlation variables consistently, the notation required to access attribute values may cause problems as well, because you cannot use a similar notation on tables that do not use abstract datatypes.

Object views provide an effective compromise solution to this inconsistency. The CUSTOMER table created in the previous examples assumes that an ADDRESS_TY datatype already exists. But what if your tables already exist? What if you had previously created a relational database application and are trying to implement object-relational concepts in your application without rebuilding and re-creating the entire application? What you would need is the ability to overlay object-oriented (OO) structures such as abstract datatypes on existing relational tables. Oracle provides object views as a means for defining objects used by existing relational tables.

If the CUSTOMER table already exists, you could create the ADDRESS_TY datatype and use object views to relate it to the CUSTOMER table. In the following listing, the CUSTOMER table is created as a relational table, using only the normally provided datatypes:

create table CUSTOMER
(Name        VARCHAR2(25) primary key,
 Street      VARCHAR2(50),
City         VARCHAR2(25),
State        CHAR(2),
Zip          NUMBER);

If you want to create another table or application that stores information about people and addresses, you may choose to create the ADDRESS_TY datatype. However, for consistency, that datatype should be applied to the CUSTOMER table as well. The following examples will use the ADDRESS_TY datatype created in the previous section.

You can create an object view based on the CUSTOMER table, using any datatype you have defined. To create an object view, use the create view command. Within the create view command, specify the query that will form the basis of the view. The code for creating the CUSTOMER_OV object view is shown in the following listing:

create view CUSTOMER_OV (Name, Address) as
select Name,
       ADDRESS_TY(Street, City, State, Zip)
  from CUSTOMER;

The CUSTOMER_OV view will have two columns: the Name and the Address columns (the latter is defined by the ADDRESS_TY datatype). Note that you cannot specify object as an option within the create view command.

Several important syntax issues are presented in this example. When a table is built on existing abstract datatypes, you select column values from the table by referring to the names of the columns (such as Name) instead of their constructor methods. When creating the object view, however, you refer to the names of the constructor methods (such as ADDRESS_TY) instead. Also, you can use where clauses in the query that forms the basis of the object view. You can therefore limit the rows that are accessible via the object view.
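For example, a hypothetical object view restricted by a where clause might look like the following (the view name and predicate are illustrative only):

```sql
-- Only customers in one state are visible through this object view.
create view NY_CUSTOMER_OV (Name, Address) as
select Name,
       ADDRESS_TY(Street, City, State, Zip)
  from CUSTOMER
 where State = 'NY';
```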

If you use object views, you as the DBA will administer relational tables the same way as you did before. You will still need to manage the privileges for the datatypes (see the following section of this chapter for information on security management of abstract datatypes), but the table and index structures will be the same as they were before the creation of the abstract datatypes. Using the relational structures will simplify your administration tasks while allowing developers to access objects via the object views of the tables.

You can also use object views to simulate the references used by row objects. Row objects are rows within an object table. To create an object view that supports row objects, you need to first create a datatype that has the same structure as the table, as shown here:

create or replace type CUSTOMER_TY as object
(Name        VARCHAR2(25),
 Street       VARCHAR2(50),
 City         VARCHAR2(25),
 State        CHAR(2),
 Zip          NUMBER);

Next, create an object view based on the CUSTOMER_TY type while assigning OID (object identifier) values to the records in CUSTOMER:

create view CUSTOMER_OV of CUSTOMER_TY
  with object identifier (Name) as
select Name, Street, City, State, Zip
  from CUSTOMER;

The first part of this create view command gives the view its name (CUSTOMER_OV) and tells Oracle that the view's structure is based on the CUSTOMER_TY datatype. An object identifier, also known as an OID, identifies the row object. In this object view, the Name column will be used as the OID.

If you have a second table that references CUSTOMER via a foreign key/primary key relationship, you can set up an object view that contains references to CUSTOMER_OV. For example, the CUSTOMER_CALL table contains a foreign key to the CUSTOMER table, as shown here:

create table CUSTOMER_CALL
(Name           VARCHAR2(25),
 Call_Number    NUMBER,
 Call_Date      DATE,
 constraint CUSTOMER_CALL_PK
primary key (Name, Call_Number), 
 constraint CUSTOMER_CALL_FK foreign key (Name)
    references CUSTOMER(Name));

The Name column of CUSTOMER_CALL references the same column in the CUSTOMER table. Because you have simulated OIDs (called pkOIDs) based on the primary key of CUSTOMER, you need to create references to those OIDs. Oracle provides an operator called MAKE_REF that creates the references (called pkREFs). In the following listing, the MAKE_REF operator is used to create references from the object view of CUSTOMER_CALL to the object view of CUSTOMER:

create view CUSTOMER_CALL_OV as
select MAKE_REF(CUSTOMER_OV, Name) Name,
       Call_Number,
       Call_Date
  from CUSTOMER_CALL;

Within the CUSTOMER_CALL_OV view, you tell Oracle the name of the view to reference and the columns that constitute the pkREF. You could now query CUSTOMER_OV data from within CUSTOMER_CALL_OV by using the DEREF operator on the Name column:

select DEREF(CCOV.Name)
  from CUSTOMER_CALL_OV CCOV
 where Call_Date = TRUNC(SysDate);

You can thus return CUSTOMER data from your query without directly querying the CUSTOMER table. In this example, the Call_Date column is used as a limiting condition for the rows returned by the query.

Whether you use row objects or column objects, you can use object views to shield your tables from the object relationships. The tables are not modified; you administer them the way you always did. The difference is that the users can now access the rows of CUSTOMER as if they are row objects.

From a DBA perspective, object views allow you to continue creating and supporting standard tables and indexes while the application developers implement the advanced object-relational features as a layer above those tables. 
