Developing and Implementing Applications, concluded

This article, the third of three parts, focuses on the design and creation of applications that use the database. It is excerpted from chapter five of the book Oracle Database 10g DBA Handbook, written by Kevin Loney and Bob Bryla (McGraw-Hill/Osborne, 2005; ISBN: 0072231459).

Security for Abstract Datatypes

The examples in the previous sections assumed that the same user owned the ADDRESS_TY datatype and the CUSTOMER table. What if the owner of the datatype is not the table owner? What if another user wants to create a datatype based on a datatype you have created? In the development environment, you should establish guidelines for the ownership and use of abstract datatypes.

For example, what if the account named DORA owns the ADDRESS_TY datatype, and the user of the account named GEORGE tries to create a PERSON_TY datatype? GEORGE executes the following command:

create type PERSON_TY as object
(Name     VARCHAR2(25),
 Address  ADDRESS_TY);
/

If GEORGE does not own the ADDRESS_TY abstract datatype, Oracle will respond to this create type command with the following message:

Warning: Type created with compilation errors.

The compilation errors are caused by problems creating the constructor method when the datatype is created. Oracle cannot resolve the reference to the ADDRESS_TY datatype because GEORGE does not own a datatype with that name.

GEORGE will not be able to create the PERSON_TY datatype (which includes the ADDRESS_TY datatype) unless DORA first grants him EXECUTE privilege on her type. The following listing shows this grant:

grant EXECUTE on ADDRESS_TY to George;

NOTE

You must also grant EXECUTE privilege on the type to any user who will perform DML operations on the table.

Now that the proper grants are in place, GEORGE can create a datatype that is based on DORA’s ADDRESS_TY datatype:

create or replace type PERSON_TY as object
(Name     VARCHAR2(25),
 Address  Dora.ADDRESS_TY);
/

GEORGE’s PERSON_TY datatype will now be successfully created. However, using datatypes based on another user’s datatypes is not trivial. For example, during insert operations, you must fully specify the name of the owner of each type. GEORGE can create a table based on his PERSON_TY datatype (which includes DORA’s ADDRESS_TY datatype), as shown in the following listing:

create table GEORGE_CUSTOMERS
(Customer_ID   NUMBER,
 Person        PERSON_TY);

If GEORGE owned the PERSON_TY and ADDRESS_TY datatypes, an insert into GEORGE_CUSTOMERS would use the following format:

insert into GEORGE_CUSTOMERS values
(1,PERSON_TY('SomeName',
   ADDRESS_TY('StreetValue','CityValue','ST',11111)));

This command will not work. During the insert, the ADDRESS_TY constructor method is used, and DORA owns it. Therefore, the insert command must be modified to specify DORA as the owner of ADDRESS_TY. The following example shows the corrected insert statement, with the schema reference to DORA added:

insert into GEORGE_CUSTOMERS values
(1,PERSON_TY('SomeName',
   Dora.ADDRESS_TY('StreetValue','CityValue','ST',11111)));

NOTE

In Oracle Database 10g, you can use a synonym for another user’s datatype.
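
For example, GEORGE could create a private synonym for DORA’s type and reference the type through the synonym. The following is a minimal sketch; it assumes GEORGE holds the CREATE SYNONYM privilege in addition to the EXECUTE grant on the type:

create synonym ADDRESS_TY for Dora.ADDRESS_TY;

create or replace type PERSON_TY as object
(Name     VARCHAR2(25),
 Address  ADDRESS_TY);
/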

Whenever possible, limit the ability to create abstract datatypes to those users who will own the rest of the application schema objects.

NOTE

When you create a synonym, Oracle does not check the validity of the object for which you are creating the synonym. If you use create synonym x for y, for example, Oracle does not check to make sure that "y" is a valid object name or a valid object type. The existence and accessibility of the underlying object are checked only when the object is accessed via the synonym.

In a relational-only implementation of Oracle, you grant the EXECUTE privilege on procedural objects, such as procedures and packages. Within the object-relational implementation of Oracle, the EXECUTE privilege is extended to cover abstract datatypes as well. The EXECUTE privilege is used because abstract datatypes can include methods—PL/SQL functions and procedures that operate on the datatypes. If you grant someone the privilege to use your datatype, you are granting the user the privilege to execute the methods you have defined on the datatype. Although DORA did not yet define any methods on the ADDRESS_TY datatype, Oracle automatically creates constructor methods that are used to access the data. Any object (such as PERSON_TY) that uses the ADDRESS_TY datatype uses the constructor method associated with ADDRESS_TY.

You cannot create public types, and you cannot create public synonyms for your types. Therefore, you will need to either reference the owner of the type or create the type under each account that can create tables in your database. Neither of these options is a simple solution to the problem of datatype management.

Indexing Abstract Datatype Attributes

In the preceding example, the GEORGE_CUSTOMERS table was created based on a PERSON_TY datatype and an ADDRESS_TY datatype. As shown in the following listing, the GEORGE_CUSTOMERS table contains a normal column—Customer_ID—and a Person column that is defined by the PERSON_TY abstract datatype:

create table GEORGE_CUSTOMERS
(Customer_ID   NUMBER,
 Person        PERSON_TY);

From the datatype definitions shown in the previous section of this chapter, you can see that PERSON_TY has one column—Name—followed by an Address column defined by the ADDRESS_TY datatype.

When referencing columns within the abstract datatypes during queries, updates, and deletes, specify the full path to the datatype attributes. For example, the following query returns the Customer_ID column along with the Name column. The Name column is an attribute of the datatype that defines the Person column, so you refer to the attribute as Person.Name, as shown here:

select C.Customer_ID, C.Person.Name
  from GEORGE_CUSTOMERS C;

You can refer to attributes within the ADDRESS_TY datatype by specifying the full path through the related columns. For example, the Street column is referred to as Person.Address.Street, which fully describes its location within the structure of the table. In the following example, the City column is referenced twice—once in the list of columns to select and once within the where clause:

select C.Person.Name,
       C.Person.Address.City
  from GEORGE_CUSTOMERS C
 where C.Person.Address.City like 'C%';

Because the City column is used with a range search in the where clause, the optimizer may be able to use an index when resolving the query. If an index is available on the City column, Oracle can quickly find all the rows that have City values starting with the letter C, as requested by the query.

To create an index on a column that is part of an abstract datatype, you need to specify the full path to the column as part of the create index command. To create an index on the City column (which is part of the Address column), you can execute the following command:

create index I_GEORGE_CUSTOMERS$CITY
on GEORGE_CUSTOMERS(Person.Address.City);

This command will create an index named I_GEORGE_CUSTOMERS$CITY on the Person.Address.City column. Whenever the City column is accessed, the optimizer will evaluate the SQL used to access the data and determine whether the new index can be useful in improving the performance of the access.

When creating tables based on abstract datatypes, you should consider how the columns within the abstract datatypes will be accessed. If, like the City column in the previous example, certain columns will commonly be used as part of limiting conditions in queries, they should be indexed. In this regard, the representation of multiple columns in a single abstract datatype may hinder your application performance, because it may obscure the need to index specific columns within the datatype.

When you use abstract datatypes, you become accustomed to treating a group of columns as a single entity, such as the Address columns or the Person columns. It is important to remember that the optimizer, when evaluating query access paths, will consider the columns individually. You therefore need to address the indexing requirements for the columns even when you are using abstract datatypes. In addition, remember that indexing the City column in one table that uses the ADDRESS_TY datatype does not affect the City column in a second table that uses the ADDRESS_TY datatype. If there is a second table named BRANCH that uses the ADDRESS_TY datatype, then its City column will not be indexed unless you create an index for it.
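
For example, if the BRANCH table defines an Address column directly on DORA’s ADDRESS_TY datatype (a hypothetical structure used here for illustration), its City attribute would need its own index, along these lines:

create index I_BRANCH$CITY
on BRANCH(Address.City);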

Quiescing and Suspending the Database

You can temporarily quiesce or suspend the database during your maintenance operations. Using these options allows you to keep the database open during application maintenance, avoiding the time or availability impact associated with database shutdowns.

While the database is quiesced, no new transactions will be permitted by any accounts other than SYS and SYSTEM. New queries or attempted logins will appear to hang until you unquiesce the database. The quiesce feature is useful when performing table maintenance or complicated data maintenance. To use the quiesce feature, you must first enable the Database Resource Manager, as described earlier in this chapter. In addition, the Resource Manager must have been activated (via the RESOURCE_MANAGER_PLAN initialization parameter) when the database was started, and it must not have been disabled following database startup.
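
For example, a minimal sketch of activating the Resource Manager with one of the Oracle-supplied plans follows; the plan name is illustrative, and you should use whichever resource plan fits your environment:

alter system set RESOURCE_MANAGER_PLAN = 'SYSTEM_PLAN';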

While logged in as SYS or SYSTEM (other SYSDBA privileged accounts cannot execute these commands), quiesce the database:

alter system quiesce restricted;

Any non-DBA sessions logged into the database will continue until their current command completes, at which point they will become inactive. In Real Application Clusters configurations, all instances will be quiesced.

To see if the database is in quiesced state, log in as SYS or SYSTEM and execute the following query:

select Active_State from V$INSTANCE;

The Active_State column value will be NORMAL (unquiesced), QUIESCING (active non-DBA sessions are still running), or QUIESCED.

To unquiesce the database, use the following command:

alter system unquiesce;

Instead of quiescing the database, you can suspend it. A suspended database performs no I/O to its datafiles and control files, allowing the database to be backed up without I/O interference. To suspend the database, use the following command:

alter system suspend;

NOTE

Do not use this command unless you have put the database in hot backup mode.

Although the alter system suspend command can be executed from any SYSDBA-privileged account, you can resume normal database operations only from the SYS and SYSTEM accounts; use those accounts to avoid potential errors while resuming database operations. In Real Application Clusters configurations, all instances will be suspended. To see the current status, use the following command:

select Database_Status from V$INSTANCE;

The database will be either SUSPENDED or ACTIVE. To resume the database, log in as SYS or SYSTEM and execute the following command:

alter system resume;

Supporting Iterative Development

Iterative development methodologies typically consist of a series of rapidly developed prototypes. These prototypes are used to define the system requirements as the system is being developed. These methodologies are attractive because of their ability to show the customers something tangible as development is taking place. However, there are a few common pitfalls that occur during iterative development that undermine its effectiveness.

First, effective versioning is not always used. Creating multiple versions of an application allows certain features to be “frozen” while others are changed. It also allows different sections of the application to be in development while others are in test. Too often, one version of the application is used for every iteration of every feature, resulting in an end product that is not flexible enough to handle changing needs (which was the alleged purpose of iterative development).

Second, the prototypes are not always thrown away. Prototypes are developed to give the customer an idea of what the final product will look like; they should not be intended as the foundation of a finished product. Using them as a foundation will not yield the most stable and flexible system possible. When performing iterative development, treat the prototypes as temporary legacy systems.

Third, the development/test/production divisions are clouded. The methodology for iterative development must very clearly define the conditions that have to be met before an application version can be moved to the next stage. It may be best to keep the prototype development completely separate from the development of the full application.

Finally, unrealistic timelines are often set. The same deliverables that applied to the structured methodology apply to the iterative methodology. The fact that the application is being developed at an accelerated pace does not imply that the deliverables will be any quicker to generate.

Iterative Column Definitions

During the development process, your column definitions may change frequently. You can drop columns from existing tables. You can drop a column immediately, or you can mark it as “unused,” to be dropped at a later time. If the column is dropped immediately, the action may impact performance. If the column is marked as unused, there will be no impact on performance. The column can actually be dropped at a later time when the database is less heavily used.

To drop a column, use either the set unused clause or the drop clause of the alter table command. You cannot drop a pseudocolumn, a column of a nested table, or a partition key column.

In the following example, column Col2 is dropped from a table named TABLE1:

alter table TABLE1 drop column Col2;

You can mark a column as unused, as shown here:

alter table TABLE1 set unused column Col3;

Marking a column as unused does not release the space previously used by the column. You can also drop any unused columns:

alter table TABLE1 drop unused columns;

You can query USER_UNUSED_COL_TABS, DBA_UNUSED_COL_TABS, and ALL_UNUSED_COL_TABS to see all tables with columns marked as unused.
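
For example, you can list your own tables that have unused columns with a simple query:

select * from USER_UNUSED_COL_TABS;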

NOTE

Once you have marked a column as unused, you cannot access that column. If you export the table after designating a column as unused, the column will not be exported.

You can drop multiple columns in a single command, as shown in the following example:

alter table TABLE1 drop (Col4, Col5);

NOTE

When dropping multiple columns, you should not use the column keyword of the alter table command. The multiple column names must be enclosed in parentheses, as shown in the preceding example.

If the dropped columns are part of primary keys or unique constraints, you will also need to use the cascade constraints clause as part of your alter table command. If you drop a column that belongs to a primary key, Oracle will drop both the column and the primary key index.
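
For example, the following sketch drops a hypothetical column named Col6 that participates in a constraint:

alter table TABLE1 drop (Col6) cascade constraints;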

If you cannot immediately arrange for a maintenance period during which you can drop the columns, mark them as unused. During a later maintenance period, you can complete the maintenance from the SYS or SYSTEM account.

Forcing Cursor Sharing

Ideally, application developers should use bind variables in their programs to maximize the reuse of their previously parsed commands in the shared SQL area. If bind variables are not in use, you may see many very similar statements in the library cache—queries that differ only in the literal value in the where clause.

Statements that are identical except for their literal value components are called similar statements. Similar statements can reuse previously parsed commands in the shared SQL area if the CURSOR_SHARING initialization parameter is set to SIMILAR or FORCE. In general, you should favor using SIMILAR over FORCE, because SIMILAR will allow for a new execution plan to be generated reflecting any histogram data known about the literal value.

Setting CURSOR_SHARING to EXACT (the default setting) reuses previously parsed commands only when the literal values are identical.
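
For example, the parameter can be set at the instance level or for an individual session; the following statements are a sketch of the syntax rather than a recommendation for any particular system:

alter system set CURSOR_SHARING = SIMILAR;

alter session set CURSOR_SHARING = FORCE;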

To use stored outlines with CURSOR_SHARING set to FORCE or SIMILAR, the outlines must have been generated with that CURSOR_SHARING setting in effect.

NOTE

Dynamic SQL commands are always parsed, essentially negating the value of the shared SQL area.

Managing Package Development

Imagine a development environment with the following characteristics:

  1. None of your standards are enforced.
  2. Objects are created under the SYS or SYSTEM account.
  3. Proper distribution and sizing of tables and indexes is only lightly considered.
  4. Every application is designed as if it were the only application you intend to run in your database.

As undesirable as these conditions are, they are occasionally encountered during the implementation of purchased packaged applications.

Properly managing the implementation of packages involves many of the same issues that were described for the application development processes in the previous sections. This section will provide an overview of how packages should be treated so they will best fit with your development environment.

Generating Diagrams

Most CASE tools have the ability to reverse engineer packages into a physical database diagram. Reverse engineering consists of analyzing the table structures and generating a physical database diagram that is consistent with those structures, usually by analyzing column names, constraints, and indexes to identify key columns. However, normally there is no one-to-one correlation between the physical database diagram and the entity relationship diagram. Entity relationship diagrams for packages can usually be obtained from the package vendor; they are helpful in planning interfaces to the package database.

Space Requirements

Most Oracle-based packages provide fairly accurate estimates of their database resource usage during production usage. However, they usually fail to take into account their usage requirements during data loads and software upgrades. You should carefully monitor the package’s undo requirements during large data loads. A spare DATA tablespace may be needed as well if the package creates copies of all its tables during upgrade operations.

Tuning Goals

Just as custom applications have tuning goals, packages must be held to tuning goals as well. Establishing and tracking these control values will help to identify areas of the package in need of tuning (see Chapters 8 and 9).

Security Requirements

Unfortunately, many packages that use Oracle databases fall into one of two categories: either they were migrated to Oracle from another database system, or they assume they will have full DBA privileges for their object owner accounts.

If the packages were first created on a different database system, their Oracle port very likely does not take full advantage of Oracle’s functional capabilities, such as sequences, triggers, and methods. Tuning such a package to meet your needs may require modifying the source code.

If the package assumes that it has full DBA authority, it must not be stored in the same database as any other critical database application. Most packages that require DBA authority do so in order to add new users to the database. You should determine exactly which system-level privileges the package administrator account actually requires (usually just CREATE SESSION and CREATE USER). You can create a specialized system-level role to provide this limited set of system privileges to the package administrator.
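
For example, a minimal sketch of such a role follows; the role name and the grantee account are illustrative:

create role PKG_ADMIN;
grant CREATE SESSION, CREATE USER to PKG_ADMIN;
grant PKG_ADMIN to SYSADM;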

Packages that were first developed on non-Oracle databases may require the use of the same account as another Oracle-ported package. For example, ownership of a database account called SYSADM may be required by multiple applications. The only way to resolve this conflict with full confidence is to create the two packages in separate databases.

Data Requirements

Any processing requirements that the packages have, particularly on the data-entry side, must be clearly defined. These requirements are usually well documented in package documentation.

Version Requirements

Applications you support may have dependencies on specific versions and features of Oracle. If you use packaged applications, you will need to base your kernel version upgrade plans on the vendor’s support for the different Oracle versions. Furthermore, the vendor may switch the optimizer features it supports—for example, requiring that your COMPATIBLE parameter be set to a specific value. Your database environment will need to be as flexible as possible in order to support these changes.
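
For example, you can check the current setting when planning an upgrade; this query is a simple sketch:

select Name, Value
  from V$PARAMETER
 where Name = 'compatible';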

Because of these restrictions outside of your control, you should attempt to isolate the packaged application to its own instance. If you frequently query data across applications, the isolation of the application to its own instance will increase your reliance on database links. You need to evaluate the maintenance costs of supporting multiple instances against the maintenance costs of supporting multiple applications in a single instance.

Execution Plans

Generating execution plans requires accessing the SQL statements that are run against the database. The shared SQL area in the SGA maintains the SQL statements that are executed against the database (accessible via the V$SQL_PLAN view). Matching the SQL statements against specific parts of the application is a time-consuming process. You should attempt to identify specific areas whose functionality and performance are critical to the application’s success and work with the package’s support team to resolve performance issues. You can use the STATSPACK utility or the Automatic Workload Repository (see Chapter 9) to gather all the commands generated during testing periods and then determine the explain plans for the most resource-intensive queries in that set. If the commands are still in the shared SQL area, you can see the statistics via V$SQL and the explain plan via V$SQL_PLAN.
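
For example, a starting point for identifying the most resource-intensive commands still in the shared SQL area might be a query such as the following; the columns shown are a small subset of those available in V$SQL:

select SQL_ID, Executions, Disk_Reads, Buffer_Gets
  from V$SQL
 order by Disk_Reads desc;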

Acceptance Test Procedures

Purchased packages should be held to the same functional requirements that custom applications must meet. The acceptance test procedures should be developed before the package has been selected; they can be generated from the package-selection criteria. By testing in this manner, you will be testing for the functionality you need rather than what the package developers thought you wanted.

Be sure to specify what your options are in the event the package fails its acceptance test for functional or performance reasons. Critical success factors for the application should not be overlooked just because it is a purchased application.

The Testing Environment

When establishing a testing environment, follow these guidelines:

  • It must be larger than your production environment. You need to be able to forecast future performance.
  • It must contain known data sets, explain plans, performance results, and data result sets.
  • It must be used for each release of the database and tools, as well as for new features.
  • It must support the generation of multiple test conditions to enable the evaluation of the features’ business costs. You do not want to have to rely on point analysis of results; ideally, you can determine the cost/benefit curves of a feature as the database grows in size.
  • It must be flexible enough to allow you to evaluate different licensing cost options.
  • It must be actively used as a part of your technology implementation methodology.

When testing transaction performance, be sure to track the incremental load rate over time. In general, the indexes on a table will slow the performance of loads when they reach a second internal level. See Chapter 8 for details on indexes and load performance.
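
For example, you can track index depth over time by checking the BLevel column of USER_INDEXES (the depth from the root block to the leaf blocks) after gathering statistics; the table name here is illustrative:

select Index_Name, BLevel
  from USER_INDEXES
 where Table_Name = 'GEORGE_CUSTOMERS';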

When testing, your sample queries should represent each of the following groups:

  1. Queries that perform joins, including merge joins, nested loops, outer joins, and hash joins
  2. Queries that use database links
  3. DML that uses database links
  4. Each type of DML statement (insert, update, and delete statements)
  5. Each major type of DDL statement, including table creations, index rebuilds, and grants
  6. Queries that use Parallel Query, if that option is in use in your environment

The sample set should not be fabricated; it should represent your operations, and it must be repeatable. Generating the sample set should involve reviewing your major groups of operations as well as the OLTP operations executed by your users. The result will not reflect every action within the database, but it will make you aware of the implications of upgrades, helping you mitigate risk and make better decisions about implementing new options.
