
Failure Groups

When dealing with databases, uptime is not the only issue; if the database is up but slow, the end user will still be unhappy. Fortunately, there are ways to tune an Oracle database to deal with this issue, which is the subject of this article. It is excerpted from chapter three of Oracle Database 10g High Availability with RAC, Flashback and Data Guard, written by Matthew Hart and Scott Jesse (McGraw-Hill/Osborne, 2004; ISBN: 0072254289).

A failure group takes disk redundancy to the next level by letting you group together disks that share a common point of failure, such as a single controller. If a controller fails and all of the disks attached to it become inaccessible, the other disks within the disk group will still be accessible as long as they are connected to a different controller. When failure groups are defined within the disk group, Oracle and ASM will not merely mirror writes to different disks, but will mirror writes to disks in different failure groups, so that the loss of a controller will not impact access to your data.
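To make this concrete, the following is a minimal sketch of creating a normal-redundancy disk group with two failure groups, one per controller; the disk group name, failure group names, and raw device paths are hypothetical examples:

create diskgroup data normal redundancy
  failgroup controller1 disk '/dev/raw/raw1', '/dev/raw/raw2'
  failgroup controller2 disk '/dev/raw/raw3', '/dev/raw/raw4';

With this definition, ASM always places the mirror copy of an extent in a different failure group than the primary copy, so losing either controller still leaves a complete set of data on the surviving disks.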

File Size Limits on ASM

As we have discussed, ASM disk groups support a variety of different file types, including online redo logs, controlfiles, datafiles, archived redo logs, RMAN backup sets, and Flashback Logs. In Oracle Database 10g Release 1, ASM imposes a maximum file size on any file in an ASM disk group, regardless of the file type. That maximum size depends on the redundancy level of the disk group itself, as shown here:

Max File Size        Redundancy Level
300GB                External redundancy
150GB                Normal redundancy
100GB                High redundancy

These ASM maximums are independent of the limits imposed elsewhere, such as the maximum size of a database file itself; whichever limit is lower is the one that applies. In Oracle Database 10g Release 1, for example, a database file is limited to approximately 4 million blocks, irrespective of the underlying storage mechanism, file system, or platform. For most block sizes this will not be an issue. However, some platforms (such as Tru64) support a db_block_size of up to 32K, which would normally allow a maximum database file size of 128GB (4 million blocks x 32K). On a high-redundancy ASM disk group (three-way mirroring), however, that same file would be capped at 100GB.

In addition, Oracle Database 10g includes a new feature: the ability to create a tablespace using the BIGFILE syntax. A BIGFILE tablespace is allowed only a single datafile, but the limit on the number of blocks in that file is raised from approximately 4 million to approximately 4 billion. The theoretical maximum size of a datafile in a BIGFILE tablespace would then be in the terabytes, but as you can see, ASM will limit the datafile to 300GB or less, depending on the redundancy level of the disk group. Expect this limitation to be removed in subsequent patches or releases; see MetaLink Note 265659.1 for details.
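As a quick illustration, creating a BIGFILE tablespace on an ASM disk group is a one-line operation; the tablespace name and the +DATA disk group here are hypothetical:

create bigfile tablespace big_data
  datafile '+DATA' size 100g;

The single datafile is created and managed by ASM inside the disk group, subject to the size caps described above.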

Rebalancing Operations

Inherent to ASM is the ability to add and remove disks from a disk group on the fly without impacting the overall availability of the disk group itself, or of the database. This, again, is one of the precepts of grid computing. ASM handles this by initiating a rebalance operation any time a disk is added or removed. If a disk is removed from the disk group, either due to a failure or excess capacity in the group, the rebalance operation will remirror the extents that had been mirrored to that disk and redistribute the extents among the remaining disks in the group. If a new disk is added to the group, the rebalance will do the same, ensuring that each disk in the group has a relatively equal number of extents.

Because of the way the allocation units are striped, a rebalance operation requires that only a small percentage of extents be relocated, minimizing the impact of the operation. Nevertheless, you can control rebalancing via the ASM_POWER_LIMIT parameter, which is specific to the ASM instance. By default this is set to 1, meaning that any time a disk is added or removed, a rebalance operation will begin, using a single slave process. By setting the value to 0, you can defer the operation until later (say, overnight), at which time you can raise ASM_POWER_LIMIT to as high as 11; this will generate 11 slave processes to do the work of rebalancing. Both settings can be changed via the alter system command:

alter system set asm_power_limit=0;   -- defer rebalancing for now
alter system set asm_power_limit=11;  -- rebalance at full speed, with 11 slaves
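Alternatively, you can set the power level for a single rebalance operation with the ALTER DISKGROUP command, leaving the instance-wide parameter alone; a sketch, assuming a disk group named DATA:

alter diskgroup data rebalance power 11;  -- rebalance just this group, at full power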

Background Processes for ASM 

An ASM instance introduces two new types of background processes: the RBAL process and the ARBn processes. The RBAL process within the ASM instance determines when a rebalance needs to be done and estimates how long it will take; RBAL then invokes the ARB processes to do the actual work. The number of ARB processes invoked depends on the ASM_POWER_LIMIT setting. If this is set to the maximum of 11, an ASM instance will have 11 ARB background processes, starting with ARB0 and ending with ARBA. In addition, a regular database instance will have an RBAL and an ASMB process, but the RBAL process in a database instance is used for making global calls to open the disks in a disk group. The ASMB process communicates with the CSS daemon on the node and receives file extent map information from the ASM instance. ASMB is also responsible for providing I/O statistics to the ASM instance.
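To watch a rebalance in flight, you can query the V$ASM_OPERATION view from the ASM instance, which reports the power level in use and an estimate of the remaining work; for example:

select group_number, operation, state, power, sofar, est_work, est_minutes
from v$asm_operation;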

ASM and RAC

Because storage is managed by Oracle itself, ASM environments are particularly well suited to a RAC installation. Using a shared disk array with ASM disk groups for file locations can greatly simplify the storage of your datafiles. ASM eliminates the need to configure a RAW device for each file, simplifying the file layout and configuration. ASM also eliminates the need for a cluster file system, as ASM takes over all file management duties for the database files. However, you can still use a cluster file system if you want to install your ORACLE_HOME on a shared drive (on those platforms that support the ORACLE_HOME on a cluster file system). We discuss ASM instance configuration in a RAC environment in the next section, as well as in Chapter 4.

Implementing ASM

Conceptually, as we mentioned, ASM requires that a separate instance be created on each node/server where any Oracle instances reside. On the surface, this instance is just like any other Oracle instance, with an init file, init parameters, and so forth, except that it never opens a database. The major difference between an ASM instance and a regular instance lies in a few parameters, illustrated in the sketch after this list:

  1. INSTANCE_TYPE = ASM (mandatory for an ASM instance)
  2. ASM_DISKSTRING = /dev/raw/raw* (path to look for candidate disks)
  3. ASM_DISKGROUPS = ASM_DISK (defines disk groups to mount at startup)
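Putting those together, a minimal init file for an ASM instance might look like the following sketch; the discovery path and disk group name are examples and will vary with your platform and configuration:

# init+ASM.ora -- minimal parameters for an ASM instance
instance_type  = asm                  # mandatory; identifies this as an ASM instance
asm_diskstring = '/dev/raw/raw*'      # where to look for candidate disks
asm_diskgroups = 'ASM_DISK'           # disk groups to mount at startup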

Aside from these parameters, the ASM instance requires an SGA of around 100MB, leaving the total footprint for the ASM instance at around 130MB. Remember—no controlfiles, datafiles, or redo logs are needed for an ASM instance. The ASM instance is used strictly to mount disk groups. A single ASM instance can manage disk groups used by multiple Oracle databases on a single server. However, in a RAC environment, each separate node/server must have its own ASM instance (we will discuss this in more detail in Chapter 4).

Creating the ASM Instance

If you are going to create a database using ASM for the datafiles, you must first create the ASM instance and the disk groups it will manage. It is possible to do this from the command line by simply creating a separate init file with INSTANCE_TYPE = ASM. Disk groups can then be created or modified using SQL commands such as CREATE DISKGROUP, ALTER DISKGROUP, DROP DISKGROUP, and so on. The instance name for your ASM instance should be +ASM, with the + actually being part of the instance name. In a RAC environment, the instance_number will be appended, so ASM instances will be named +ASM1, +ASM2, +ASM3, and so forth.
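For example, a bare-bones command-line session on Unix might look like the following sketch, assuming the init file described above is in place; the device paths and disk group name are hypothetical:

export ORACLE_SID=+ASM
sqlplus / as sysdba
SQL> startup nomount
SQL> create diskgroup asm_disk external redundancy
       disk '/dev/raw/raw1', '/dev/raw/raw2';

The newly created disk group is mounted immediately; if you are using an init file rather than an spfile, remember to add the group to ASM_DISKGROUPS so that it is mounted again at the next startup.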

However, as in the past, we recommend using GUI tools such as the DBCA. The simplest way to get an ASM instance up and running is to create your database using the DBCA. If there is not currently an ASM instance on the machine, the DBCA will create one for you, in addition to your database instance. If you are using the DBCA to create a RAC database, an ASM instance will be created on each node selected as part of your cluster. After the ASM instance is created, you will be able to create the disk groups directly from within the DBCA by providing disks in the form of RAW slices. The ASM instance will then mount the disk groups before proceeding with the rest of the database creation. Figure 3-5 shows the Storage Options screen, where you choose the type of storage to use when creating the database. After selecting ASM, you will be presented with the ASM Disk Groups screen, which shows the available disk groups. On a new installation, this screen will be blank, as there are no disk groups yet; at that point, you would choose the Create New option.


FIGURE 3-5.  Choosing ASM on the DBCA Storage Options screen

Creating ASM Disk Groups Using the DBCA

In the Create Disk Group screen (see Figure 3-6), ASM searches for disks based on the disk discovery string defined in the ASM_DISKSTRING parameter. Different platforms have different default values for the disk string: on Linux, the default is /dev/raw/*, and on Solaris, the default is /dev/rdsk/*. This value can be modified from within the DBCA, as shown in Figure 3-6, by clicking the Change Disk Discovery Path button. Once you have the correct path, you should end up with a list of possible candidate disks. Note in Figure 3-6 that the Show Candidates radio button is selected; this displays only those disks matched by the discovery string that are not already part of another disk group. Note also that two of the disks show FORMER under Header Status, because these two disks were at one time part of a different disk group that was since dropped. This status can also appear when a disk is removed from an existing disk group. Either way, such disks are available to be added to this group and are considered valid candidates. In Figure 3-6, you can see that we have selected these two disks to make up our new disk group.
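The same information is visible from SQL*Plus in the ASM instance by querying V$ASM_DISK, whose HEADER_STATUS column reports values such as CANDIDATE, FORMER, and MEMBER; for example:

select path, header_status, name
from v$asm_disk;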


FIGURE 3-6.  Creating the ASM disk group


NOTE

On most Linux platforms, Oracle provides a special ASM library to simplify ASM's integration with the operating system and to ease the disk discovery process. At this time, the library is not available for all platforms, but you can check OTN periodically for the availability of a library for your platform at http://otn.oracle.com/software/tech/linux/asmlib/index.html

At this point, the ASM instance has already been created by the DBCA, and the disk group will be mounted by the ASM instance. If this is a RAC environment, the ASM instance will have been created on all nodes selected as participating in the database creation. (ASM mounts disk groups in a manner similar to how a database is mounted, but rather than via an ALTER DATABASE MOUNT command, the ALTER DISKGROUP ALL MOUNT command is used.) Once the disk group is mounted, you can select the group for your database files and continue with the creation of the actual database. You will be asked if you want to use Oracle managed files, and you will also be prompted to set up a flash recovery area. If you choose to set up a flash recovery area, we recommend that you put it in a separate ASM disk group; the DBCA will give you an opportunity to create that separate disk group. If you choose not to do so at creation time, you can create additional disk groups later, either manually or using Enterprise Manager.
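For reference, pointing the database at a dedicated recovery-area disk group comes down to two initialization parameters; a sketch, assuming a disk group named FLASH and a 10GB quota (both hypothetical):

alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '+FLASH';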


NOTE

ASM requires that the ocssd daemon (Cluster Synchronization Services) be running, even for a single-instance database. (On Windows, ocssd manifests itself as a service called OracleCSService.) For this reason, Oracle will install and configure the ocssd daemon (or OracleCSService) automatically on a new Oracle Database 10g install, even in a single-instance environment.
