This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.
Note:
If you are currently using OCFS for Windows as your shared storage, then you must migrate to using Oracle ASM before the upgrade of Oracle Database and Oracle Grid Infrastructure.

This section describes the supported storage options for Oracle Grid Infrastructure for a cluster, and for features running on Oracle Grid Infrastructure.
See Also:
The Certification page in My Oracle Support for the most current information about certified storage options: https://support.oracle.com

Both Oracle Clusterware and the Oracle RAC database use files that must be available to all the nodes in the cluster. The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.
Table 6-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories
| Storage Option | OCR and Voting Files | Oracle Grid Infrastructure Home | Oracle RAC Home | Oracle RAC Database Files | Oracle Recovery Files |
|---|---|---|---|---|---|
| Oracle Automatic Storage Management (Oracle ASM) | Yes | No | No | Yes | Yes |
| Oracle Automatic Storage Management Cluster File System (Oracle ACFS) | No | No | Yes | Yes (Oracle Database 12c Release 12.1.0.2 and later) | Yes (Oracle Database 12c Release 12.1.0.2 and later) |
| Direct NFS Client access to a certified network attached storage (NAS) filer. Note: NFS or Direct NFS Client cannot be used for Oracle Clusterware files. | No | No | No | Yes | Yes |
| Shared disk partitions (raw devices) | No | No | No | No | No |
| Local file system (NTFS formatted disk) | No | Yes | Yes | No | No |
Oracle Clusterware uses voting files to monitor cluster node status, and the Oracle Cluster Registry (OCR) is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation. Oracle Database Configuration Assistant (DBCA) uses the OCR for storing the configurations for the cluster databases that it creates.
Use the following guidelines when choosing storage options for the Oracle Clusterware files:
The Oracle Grid Infrastructure home (Grid home) cannot be stored on a shared file system; it must be installed on a local disk.
You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.
You can store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups. You can also store a backup of the OCR file in a disk group.
Note:
See Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database.

For a storage option to meet high availability requirements, the files stored on the disk must be protected by data redundancy, so that if one or more disks fail, then the data stored on the failed disks can be recovered. This redundancy can be provided externally using Redundant Array of Independent Disks (RAID) devices, or using logical volumes on multiple physical devices that implement the stripe-and-mirror-everything (SAME) methodology.
If you do not have RAID devices or logical volumes to provide redundancy, then you can create additional copies, or mirrors, of the files on different file systems. If you choose to mirror the files, then you must provide disk space for additional OCR files and at least two additional voting files.
Each OCR ___location should be placed on a different disk.
For voting files, ensure that each file does not share any hardware device, disk, or other single point of failure with the other voting files. Any node that cannot access an absolute majority of the configured voting files (more than half, or a quorum) is restarted.
If you do not have a storage option that provides external file redundancy, then you must use Oracle ASM, or configure at least three voting file locations to provide redundancy.
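As a sketch, you can verify the current voting file configuration, and move the voting files into an Oracle ASM disk group, with the crsctl utility (the disk group name +DATA is an example, not a value from this guide):

```
C:\> crsctl query css votedisk
C:\> crsctl replace votedisk +DATA
```

The first command lists the configured voting files and their status; the second relocates all voting files into the named disk group, where Oracle ASM then manages their redundancy.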
Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups.
Be aware of the following requirements and recommendations when using Oracle ASM to store the Oracle Clusterware files:
You can store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups. You can also store a backup of the OCR file in a disk group.
The Oracle ASM instance must be clustered, and the disks must be available to all the nodes in the cluster. Any node that does not have access to an absolute majority of voting files (more than half) is restarted.
To store the Oracle Clusterware files in an Oracle ASM disk group, the disk group compatibility must be at least 11.2.
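As a minimal sketch, you can check the current compatibility settings from SQL*Plus connected to the Oracle ASM instance, and raise them if needed (the disk group name DATA is an example):

```sql
-- Check compatibility attributes for all mounted disk groups
SELECT name, compatibility, database_compatibility FROM V$ASM_DISKGROUP;

-- Raise the ASM compatibility of an example disk group named DATA
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.2';
```

Note that compatibility attributes can be advanced but not reduced, so raise them only after confirming no older clients need the disk group.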
Note:
If you are upgrading an Oracle ASM installation, then see Oracle Automatic Storage Management Administrator's Guide for more information about disk group compatibility.

For all Oracle RAC installations, you must choose the shared storage options to use for Oracle Database files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.
If you plan to configure automated backups, then you must also choose a shared storage option to use for recovery files (the fast recovery area). Oracle recommends that you choose Oracle ASM as the shared storage option for the database data files and recovery files. The shared storage option that you choose for recovery files can be the same as or different from the shared storage option that you choose for the data files.
If you do not use Oracle ASM, then Oracle recommends that you place the data files and the Fast Recovery Area in shared storage located outside of the Oracle home, in separate locations, so that a hardware failure does not affect availability.
Use the following guidelines when choosing storage options for the Oracle RAC files:
You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.
If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:
All the nodes in the cluster must have Oracle Clusterware and Oracle ASM 12c Release 1 (12.1) installed as part of an Oracle Grid Infrastructure for a cluster installation.
Any existing Oracle ASM instance on any node in the cluster must be shut down before you install Oracle RAC or create the Oracle RAC database.
For Standard Edition and Standard Edition 2 (SE2) Oracle RAC installations, Oracle ASM is the only supported shared storage option for database or recovery files. You must use Oracle ASM for the storage of Oracle RAC data files, online redo logs, archived redo logs, control files, server parameter file (SPFILE), and the fast recovery area.
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) are the main components of Oracle Cloud File System (Oracle CloudFS).
Oracle ACFS extends Oracle ASM technology to support all your application data in both single instance and cluster configurations. Oracle ADVM provides volume management services and a standard disk device driver interface to clients. Oracle Automatic Storage Management Cluster File System communicates with Oracle ASM through the Oracle Automatic Storage Management Dynamic Volume Manager interface.
See Also:
My Oracle Support Note 1369107.1 for more information about platforms and releases that support Oracle ACFS and Oracle ADVM: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1369107.1
You can choose any combination of the supported shared storage options for each file type if you satisfy all requirements listed for the chosen storage option.
During Oracle Grid Infrastructure installation, you can create one or two Oracle ASM disk groups. After the Oracle Grid Infrastructure installation, you can create additional disk groups using Oracle Automatic Storage Management Configuration Assistant (ASMCA), SQL*Plus, or Automatic Storage Management Command-Line Utility (ASMCMD).
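For example, a disk group can be created from SQL*Plus connected to the Oracle ASM instance; this is a sketch in which the disk group name and the stamped disk names are placeholders:

```sql
-- Create a normal-redundancy disk group from two stamped Windows partitions
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1'
  ATTRIBUTE 'compatible.asm' = '12.1';
```

With normal redundancy, Oracle ASM two-way mirrors the data across failure groups, so the usable capacity is roughly half the raw capacity of the named disks.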
Note that with Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database Configuration Assistant (DBCA) does not have the functionality to create disk groups for Oracle ASM.
If you choose to create a second disk group during Oracle Grid Infrastructure installation, then the second disk group stores the Grid Infrastructure Management Repository (GIMR) data files and Oracle Cluster Registry (OCR) backup files. It is strongly recommended that you store the OCR backup files in a different disk group from the disk group where you store OCR files. Storing OCR backups in a separate ___location is also advisable for performance, availability, sizing, and manageability of storage.
Note:
You can choose the Grid Infrastructure Management Repository (GIMR) ___location at the time of installing Oracle Grid Infrastructure. You cannot migrate a Grid Infrastructure Management Repository (GIMR) from one disk group to another later.

If you install Oracle Database or Oracle RAC after you install Oracle Grid Infrastructure, then you can either use the same disk group for database files, OCR, and voting files, or you can use different disk groups. If you create multiple disk groups before installing Oracle RAC or before creating a database, then you can do one of the following:
Place the data files in the same disk group as the Oracle Clusterware files.
Use the same Oracle ASM disk group for data files and recovery files.
Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting files, database files, and recovery files are contained in the one disk group. If you create multiple disk groups for storage, then you can place files in different disk groups.
See Also:
Oracle Automatic Storage Management Administrator's Guide for information about creating disk groups
Network-attached storage (NAS) systems use a network file system (NFS) to access data. You can store Oracle RAC data files and recovery files on a supported NAS server using Direct NFS Client.
NFS file systems must be mounted and available over NFS mounts before you start the Oracle RAC installation. See your vendor documentation for NFS configuration and mounting information.
Note that the performance of Oracle Database software and the databases that use NFS storage depend on the performance of the network connection between the database server and the NAS device. For this reason, Oracle recommends that you connect the database server (or cluster node) to the NAS device using a private, dedicated, network connection, which should be Gigabit Ethernet or better.
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a general purpose file system. Oracle ACFS extends Oracle ASM technology to support all your application data in both single instance and cluster configurations.
You can place the Oracle home for Oracle Database 12c Release 1 (12.1) software on Oracle ACFS, but you cannot place Oracle Clusterware files on Oracle ACFS.
Note the following about Oracle ACFS:
You cannot place Oracle Clusterware executable files or shared files on Oracle ACFS.
You must use a ___domain user when installing Oracle Grid Infrastructure if you plan to use Oracle ACFS.
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1) for a cluster, creating Oracle data files on an Oracle ACFS file system is supported.
You can put Oracle Database binaries and administrative files (for example, trace files) on Oracle ACFS.
Oracle ACFS does not support replication or encryption with Oracle Database data files, tablespace files, control files, and redo logs.
For policy-managed Oracle Flex Cluster databases, Oracle ACFS can run on Hub Nodes, but cannot run on Leaf Nodes. For this reason, Oracle RAC binaries cannot be placed on Oracle ACFS located on Leaf Nodes.
When creating Oracle ACFS file systems on Windows, log on as a Windows ___domain user. Also, when creating files in an Oracle ACFS file system on Windows, you should be logged in as a Windows ___domain user to ensure that the files are accessible by all nodes.
When using a file system across cluster nodes, the best practice is to mount the file system using a ___domain user, to ensure that the security identifier is the same across cluster nodes. Windows security identifiers, which are used in defining access rights to files and directories, use information which identifies the user. Local users are only known in the context of the local node. Oracle ACFS uses this information during the first file system mount to set the default access rights to the file system.
Oracle Restart does not support root-based Oracle Clusterware resources. For this reason, the following restrictions apply if you run Oracle ACFS on an Oracle Restart configuration:
Oracle Restart does not support Oracle ACFS resources on all platforms.
Starting with Oracle Database 12c, Oracle Restart configurations do not support the Oracle ACFS registry.
You must manually load Oracle ACFS drivers after a system restart.
You must manually mount an Oracle ACFS file system, and unmount it after the Oracle ASM instance has finished running.
Creating Oracle data files on an Oracle ACFS file system is not supported in Oracle Restart configurations. Creating Oracle data files on an Oracle ACFS file system is supported on Oracle Grid Infrastructure for a Cluster configurations.
If you choose to place the recovery files on a cluster file system, then use the following guidelines when deciding where to place them:
To prevent disk failure from making the database files and the recovery files unavailable, place the recovery files on a cluster file system that is on a different physical disk from the database files.
Note:
Alternatively, use an Oracle ASM disk group with a normal or high redundancy level for data files, recovery files, or both file types, or use external redundancy.

The cluster file system that you choose should have at least 3 GB of free disk space. The disk space requirement is the default disk quota configured for the fast recovery area (specified by the DB_RECOVERY_FILE_DEST_SIZE initialization parameter).
If you choose the Advanced database configuration option, then you can specify a different disk quota value. After you create the database, you can also use Oracle Enterprise Manager to specify a different value.
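The fast recovery area quota and ___location can also be changed from SQL*Plus; this is a sketch in which the 5 GB quota and the +FRA disk group name are example values:

```sql
-- Set the fast recovery area quota and ___location for all instances
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 5G SCOPE=BOTH SID='*';
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+FRA' SCOPE=BOTH SID='*';
```

DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST; the quota is enforced per database, not per disk group.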
See Also:
Oracle Database Backup and Recovery User's Guide for more information about sizing the fast recovery area.

Each supported file system type has additional requirements that must be met to support Oracle Clusterware and Oracle RAC. Use the following sections to help you select your storage option.
If you choose to place your Oracle Database software or data files on a clustered file system, then one of the following should be true:
The disks used for the file system are on a highly available storage device (for example, a RAID device).
You use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The user account with which you perform the installation must be able to create the files in the path that you specify.
Note:
On Windows platforms, the only supported method for storing Oracle Clusterware files is Oracle ASM.

Before installing Oracle Grid Infrastructure, you must identify and determine how many devices are available for use by Oracle ASM, the amount of free disk space available on each disk, and the redundancy level to use with Oracle ASM. When Oracle ASM provides redundancy, you must have sufficient capacity in each disk group to manage a re-creation of data that is lost after a failure of one or two failure groups.
Tip:
As you progress through the following steps, make a list of the raw device names you intend to use to create the Oracle ASM disk groups, and have this information available during the Oracle Grid Infrastructure installation or when creating your Oracle RAC database.

See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ASM failure groups

Be aware of the following restrictions when configuring disk partitions for use with Oracle ASM:
With x64 Windows, you can create up to 128 primary partitions for each disk.
Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention. Therefore, you may prefer to use extended partitions rather than primary partitions.
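For example, the DiskPart utility can create an extended partition and logical drives inside it; this is a sketch in which the disk number is a placeholder, and the logical drives are intentionally left unformatted and without drive letters so that Oracle ASM can use them:

```
C:\> diskpart
DISKPART> select disk 2
DISKPART> create partition extended
DISKPART> create partition logical size=10240
DISKPART> create partition logical size=10240
DISKPART> exit
```

If Windows assigns a drive letter to a new logical drive, remove it (DISKPART> remove) before giving the partition to Oracle ASM.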
To use Oracle ASM as the shared storage solution for Oracle Clusterware or Oracle RAC files, you must perform certain tasks before you begin the software installation. Oracle ASM discovers candidate disks in the paths that you specify with the ASM_DISKSTRING initialization parameter.

To use Oracle ASM as the storage option for either database or recovery files, you must use an existing Oracle ASM disk group, or use ASMCA to create the necessary disk groups before installing Oracle Database 12c Release 1 (12.1) and creating an Oracle RAC database.

To determine if an existing Oracle ASM disk group is available, you can use ASMCA, the ASMCMD command-line utility (asmcmd), or Oracle Enterprise Manager Cloud Control. Alternatively, you can use the following procedure:

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group.
Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.
Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.
Nonshared logical partitions are not supported with Oracle RAC. To use logical partitions for your Oracle RAC database, you must use shared logical volumes created by a logical volume manager such as diskpart.msc.
Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster logical volume manager in case you decide to use a logical volume with Oracle ASM and Oracle RAC.
When an Oracle ASM instance is initialized, Oracle ASM discovers and examines the contents of all of the disks that are in the paths that you designated with values in the ASM_DISKSTRING initialization parameter.
The value for the ASM_DISKSTRING initialization parameter is an operating system-dependent value that Oracle ASM uses to limit the set of paths that the discovery process uses to search for disks. The exact syntax of a discovery string depends on the platform, ASMLib libraries, and whether Oracle Exadata disks are used. The path names that an operating system accepts are always usable as discovery strings.
The default value of ASM_DISKSTRING might not find all disks in all situations. If your site is using a third-party vendor ASMLib, then the vendor might have discovery string conventions that you must use for ASM_DISKSTRING. In addition, if your installation uses multipathing software, then the software might place pseudo-devices in a path that is different from the operating system default.
See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about the initialization parameter ASM_DISKSTRING
See "Oracle ASM and Multipathing" in Oracle Automatic Storage Management Administrator's Guide for information about configuring Oracle ASM to work with multipathing, and consult your multipathing vendor documentation for details.
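As a sketch, on Windows the discovery string can be limited to disks stamped with asmtool; this assumes the default ORCLDISK stamp prefix and is run in the Oracle ASM instance:

```sql
-- Discover only disks stamped by asmtool or asmtoolg
ALTER SYSTEM SET ASM_DISKSTRING = '\\.\ORCLDISK*' SCOPE=SPFILE;
```

A narrower discovery string both speeds up discovery and prevents Oracle ASM from examining devices it should not touch.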
To use a shared file system for Oracle RAC, the file system must comply with the following requirements:
To use NFS, it must be on a certified network attached storage (NAS) device. Access the My Oracle Support website as described in "Checking Hardware and Software Certification on My Oracle Support" to find a list of certified NAS devices.
If placing the Oracle RAC data files on a shared file system, then one of the following should be true:
The disks used for the file system are on a highly available storage device (for example, a RAID device).
The file systems consist of at least two independent file systems, with the data files on one file system, and the recovery files on a different file system.
The user account with which you perform the installation must be able to create the files in the path that you specify for the shared storage.
Note:
If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting file partitions, then you must extend these partitions to at least 300 MB. Oracle recommends that you do not use partitions, but instead place OCR and voting files in Oracle ASM disk groups marked as QUORUM disk groups.
All storage products must be supported by both your server and storage vendors.
Use the following table to determine the minimum size for shared file systems:
Table 6-4 Oracle RAC Shared File System Volume Size Requirements
| File Types Stored | Number of Volumes | Volume Size |
|---|---|---|
| Oracle Database data files | 1 | At least 1.5 GB for each volume |
| Recovery files (Note: Recovery files must be on a different volume than database files) | 1 | At least 2 GB for each volume |
The total required volume size is cumulative. For example, if you use one volume for data files and one volume for recovery files, then you should have at least 3.5 GB of available storage over two volumes.
If you use Oracle ASM for your database files, then Oracle creates the database with Oracle Managed Files by default. When using the Oracle Managed Files feature, you need to specify only the database object name instead of file names when creating or deleting database files.
Configuration procedures are required to enable Oracle Managed Files.
When you have determined your disk storage options, you then start the configuration. The exact steps to perform depend on the type of shared storage you use.
When using shared storage on a Windows platform, there are additional preinstallation tasks to complete.
You must disable write caching on all disks that will be used to share data between the nodes in your cluster.
Note:
Any disks that you use to store files, including database files, that will be shared between nodes, must have write caching disabled.

Even though the automount feature is enabled by default, you should verify that automount is enabled.
You must enable automounting when using:
Raw partitions for Oracle ASM
Oracle Clusterware
Logical drives for Oracle ASM
Note:
Raw partitions are supported only when upgrading an existing installation using the configured partitions. On new installations, using raw partitions is not supported by ASMCA or OUI, but is supported by the software if you perform manual configuration.

Note:
All nodes in the cluster must have automatic mounting enabled to correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that you enable automatic mounting before creating any logical partitions for use by the database or Oracle ASM.

You must restart each node after enabling disk automounting.
After disk automounting is enabled and the node is restarted, automatic mounting remains active until it is disabled.
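For example, automatic mounting can be verified and enabled with the DiskPart utility (followed by the required node restart):

```
C:\> diskpart
DISKPART> automount
DISKPART> automount enable
DISKPART> exit
```

The first automount command displays the current setting; automount enable turns it on for new volumes.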
The installer does not suggest a default ___location for the OCR or the voting file. If you choose to create these files on Oracle ASM, then you must first create and configure disk partitions to be used in the Oracle ASM disk group, and stamp those partitions with asmtool.

The following steps outline the procedure for creating disk partitions for use with Oracle ASM.
The only partitions that OUI displays for Windows systems are logical drives that are on disks that do not contain a primary partition, and have been marked (or stamped) with asmtool.
Configure the disks before installation either by using asmtoolg (graphical user interface (GUI) version) or using asmtool (command-line version). You also have the option of using the asmtoolg utility during Oracle Grid Infrastructure for a cluster installation.
All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK followed by a user-defined prefix (the default is DATA), and by a disk number for identification purposes. You can use them as raw devices in the Oracle ASM instance by specifying a name \\.\ORCLDISKprefixn, where prefixn either can be DATA, or a value you supply, and where n represents the disk number.
The asmtoolg and asmtool utilities only work on partitioned disks; you cannot use Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the disks after installation. These utilities are installed automatically as part of Oracle Grid Infrastructure.
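For example, a partition can be stamped from the command line; this sketch mirrors the asmtool options documented below, with the disk, partition, and stamp name as placeholders:

```
C:\> asmtool -list
C:\> asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0
```

After stamping, the partition appears to Oracle ASM as \\.\ORCLDISKDATA0 and can be selected during disk group creation.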
Note:
If User Account Control (UAC) is enabled, then running asmtoolg or asmtool requires administrator-level permissions.

The asmtoolg and asmtool tools associate meaningful, persistent names with disks to facilitate using those disks with Oracle ASM.
Oracle ASM uses disk strings to operate more easily on groups of disks at the same time. The names that asmtoolg or asmtool create make this easier than using Windows drive letters.
All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK followed by a user-defined prefix (the default is DATA), and by a disk number for identification purposes. You can use them as raw devices in the Oracle ASM instance by specifying a name \\.\ORCLDISKprefixn, where prefixn either can be DATA, or a value you supply, and where n represents the disk number.
Use asmtoolg (GUI version) to create device names, and to add, change, delete, and examine the devices available for use in Oracle ASM.
You can use asmtoolg (GUI version) to delete disk stamps.
asmtool is a command-line interface for marking (or stamping) disks to be used with Oracle ASM.
| Option | Description | Example |
|---|---|---|
| -add | Adds or changes stamps. You must specify the hard disk, partition, and new stamp name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option. If necessary, follow the steps under "Create Disk Partitions for Use With Oracle ASM" to create disk partitions for the Oracle ASM instance. | `asmtool -add [-force] \Device\Harddisk1\Partition1 ORCLDISKASM0 \Device\Harddisk2\Partition1 ORCLDISKASM2 ...` |
| -addprefix | Adds or changes stamps using a common prefix to generate stamps automatically. The stamps are generated by concatenating a number with the prefix specified. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option. | `asmtool -addprefix ORCLDISKASM [-force] \Device\Harddisk1\Partition1 \Device\Harddisk2\Partition1 ...` |
| -create | Creates an Oracle ASM disk device from a file instead of a partition. Note: Usage of this command is not supported for production environments. | `asmtool -create \\server\share\file 1000` or `asmtool -create D:\asm\asmfile02.asm 240` |
| -list | Lists available disks. The stamp, Windows device name, and disk size in MB are shown. | `asmtool -list` |
| -delete | Removes existing stamps from disks. | `asmtool -delete ORCLDISKASM0 ORCLDISKASM1 ...` |
If User Account Control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option from the right-click context menu, and then launch asmtool.
Note:
If you use -add, -addprefix, or -delete, asmtool notifies the Oracle ASM instance on the local node and on other nodes in the cluster, if available, to rescan the available disks.

To create disk partitions, use the disk administration tools provided by the operating system or third-party vendors. You can create the disk partitions using either the Disk Management Interface or the DiskPart utility, both of which are provided by the operating system.
To use shared disks not managed by Oracle ASM for the Oracle home and data files, the following partitions, at a minimum, must exist before you run OUI to install Oracle Clusterware:
5.5 GB or larger partition for the Oracle home, if you want a shared Oracle home
3 GB or larger partitions for the Oracle Database data files and recovery files
Direct NFS Client is an interface for NFS systems provided by Oracle. To use Direct NFS Client, you must create an oranfstab configuration file and copy it to Oracle_home\dbs.

With Oracle Database, instead of using the operating system NFS client or a third-party NFS client, you can configure Oracle Database to use Direct NFS Client to access NFS servers directly.
Direct NFS Client supports NFSv3, NFSv4 and NFSv4.1 protocols (excluding the Parallel NFS extension) to access the NFS server. Direct NFS Client tunes itself to make optimal use of available resources and enables the storage of data files on supported NFS servers.
Note:
Use NFS servers supported for Oracle RAC. Check My Oracle Support, as described in "Checking Hardware and Software Certification on My Oracle Support", for support information.

To enable Oracle Database to use Direct NFS Client, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. Direct NFS Client manages settings after installation. If Oracle Database cannot open an NFS server using Direct NFS Client, then an informational message is logged into the Oracle alert log. A trace file is also created, indicating that Direct NFS Client could not connect to an NFS server.
Note:
Direct NFS does not work if the backend NFS server does not support a write size (wtmax) of 32768 or larger.

The Oracle files resident on the NFS server that are accessed by Direct NFS Client can also be accessed through a third-party NFS client. Management of Oracle data files created with Direct NFS Client should be done according to the guidelines specified in the "Managing Datafiles and Tempfiles" chapter of Oracle Database Administrator's Guide.
Volumes mounted through Common Internet File System (CIFS) cannot be used for storing Oracle database files without configuring Direct NFS Client. The atomic write requirements needed for database writes are not guaranteed through the CIFS protocol; consequently, CIFS can be used only for OS-level access, for example, for commands such as copy.
Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.
For NFS servers that restrict port range, you can use the insecure option to enable clients other than an Administrator user to connect to the NFS server. Alternatively, you can disable Direct NFS Client as described in "Disabling Oracle Disk Management Control of NFS for Direct NFS Client".
If you use Direct NFS Client, then you must create a configuration file, oranfstab, to specify the options, attributes, and parameters that enable Oracle Database to use Direct NFS Client. Direct NFS Client looks for the mount point entries in oranfstab. It uses the first matched entry as the mount point. You must create the oranfstab file in the Oracle_home\dbs directory.
When the oranfstab file is placed in Oracle_home\dbs, the entries in the file are specific to a single database. For Oracle RAC installations in a shared Oracle home, the oranfstab file is globally available to all database instances.

All instances that use the shared Oracle home use the same Oracle_home\dbs\oranfstab file. For a nonshared Oracle home, because all the Oracle RAC instances use the same oranfstab file, you must replicate the oranfstab file on all of the nodes. Also, you must keep the oranfstab file synchronized on all the nodes.
Note:
If you remove an NFS path from oranfstab that Oracle Database is using, then you must restart the database for the change to take effect. In addition, the mount point that you use for the file system must be identical on each node.
You can configure various settings in the oranfstab file.
Table 6-5 Configurable Attributes for the oranfstab File

| Attribute | Description |
|---|---|
| server | The NFS server name. |
| path | Up to four network paths to the NFS server, specified either by internet protocol (IP) address or by name, as displayed using the ifconfig command on the NFS server. |
| local | Up to four network interfaces on the database host, specified by IP address or by name, as displayed using the ipconfig command on the database host. |
| export | The exported path from the NFS server. Use a UNIX-style path. |
| mount | The corresponding local mount point for the exported volume. Use a Windows-style path. |
| mnt_timeout | (Optional) Specifies the time (in seconds) that Direct NFS Client waits for a successful mount before timing out. The default timeout is 10 minutes (600 seconds). |
| uid | (Optional) The UNIX user ID to be used by Direct NFS Client to access all NFS servers listed in oranfstab. |
| gid | (Optional) The UNIX group ID to be used by Direct NFS Client to access all NFS servers listed in oranfstab. |
| nfs_version | (Optional) Specifies the NFS protocol version that Direct NFS Client uses. Possible values are NFSv3, NFSv4, and NFSv4.1. The default version is NFSv3. To specify NFSv4 or NFSv4.1, you must set the nfs_version parameter accordingly in oranfstab. |
| management | Enables Direct NFS Client to use the management interface for SNMP queries. You can use this parameter if SNMP is running on separate management interfaces on the NFS server. The default value is the server parameter value. |
| community | Specifies the community string for use in SNMP queries. The default value is public. |
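As an illustration only, an oranfstab entry that uses several of the optional attributes from Table 6-5 might look like the following (the server name, network addresses, export path, and attribute values are hypothetical):

```
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
nfs_version: nfsv4
mnt_timeout: 300
uid: 1001
gid: 1001
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
```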
See Also:
"Limiting Asynchronous I/O in NFS Server Environments" in Oracle Database Performance Tuning Guide

Direct NFS Client determines mount point settings for NFS storage devices based on the configuration information in oranfstab. Direct NFS Client uses the first matching entry as the mount point.
If Oracle Database cannot open an NFS server using Direct NFS Client, then an error message is written into the Oracle alert and trace files indicating that Direct NFS Client could not be established.
Note:
You can have only one active Direct NFS Client implementation for each instance. Using Direct NFS Client on an instance prevents the use of another Direct NFS Client implementation.

Direct NFS Client requires an NFS server that supports NFS read/write buffers of at least 16384 bytes.
Direct NFS Client issues writes at wtmax granularity to the NFS server. Direct NFS Client does not use an NFS server with a wtmax less than 16384. Oracle recommends that you use the value 32768.
See Also:
"Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC" for a list of the file types that are supported with Direct NFS Client

Direct NFS Client can use up to four network paths defined in the oranfstab file for an NFS server.
Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client re-issues all outstanding requests over any remaining paths.
Note:
You can have only one active Direct NFS Client implementation for each instance. Using Direct NFS Client on an instance prevents the use of another Direct NFS Client implementation.

Example 6-1 and Example 6-2 provide examples of configuring network paths for Direct NFS Client attributes in an oranfstab file.
To enable Direct NFS Client, you must add an oranfstab file to Oracle_home\dbs. When oranfstab is placed in the Oracle_home\dbs directory, the entries in this file are specific to one particular database. Direct NFS Client searches for the mount point entries in the order they appear in oranfstab and uses the first matched entry as the mount point.

Example 6-1 oranfstab File Using Local and Path NFS Server Entries
The following example of an oranfstab file shows an NFS server entry, where the NFS server, MyDataServer1, uses two network paths specified with IP addresses.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
nfs_version: nfsv3
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
Example 6-2 oranfstab File Using Network Connection Names
The following example of an oranfstab file shows an NFS server entry, where the NFS server, MyDataServer2, uses four network paths specified by the network interface to use, or the network connection name. Multiple export paths are also used in this example.
server: MyDataServer2
local: LocalInterface1
path: NfsPath1
local: LocalInterface2
path: NfsPath2
local: LocalInterface3
path: NfsPath3
local: LocalInterface4
path: NfsPath4
nfs_version: nfsv4
export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORCL2
export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORCL3
management: MgmtPath1
community: private
ORADNFS is a utility that enables database administrators to perform basic file operations over Direct NFS Client on Microsoft Windows platforms.
You must be a member of the local ORA_DBA group to use ORADNFS. A valid copy of the oranfstab configuration file must be present in Oracle_home\dbs for ORADNFS to operate.

You use data dictionary views to monitor Direct NFS Client.
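For example, assuming a running database instance with Direct NFS Client enabled, the v$dnfs_servers and v$dnfs_files data dictionary views report the NFS servers and database files currently served through Direct NFS Client (related views include v$dnfs_channels and v$dnfs_stats):

```sql
-- List the NFS servers currently accessed through Direct NFS Client
SELECT svrname, dirname, mntport, nfsport, wtmax, rtmax
  FROM v$dnfs_servers;

-- List the database files currently open through Direct NFS Client
SELECT filename, filesize, pnum, svr_id
  FROM v$dnfs_files;
```

If these queries return no rows, then the instance is not using Direct NFS Client for any files.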
If you no longer want to use the Direct NFS client, you can disable it.
To disable Direct NFS Client:

1. Restore the original oraodm12.dll file by reversing the process you completed in "Enabling Direct NFS Client".
2. Remove the oranfstab file.
file.If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA) to upgrade the existing Oracle ASM instance to Oracle ASM 12c Release 1 (12.1).
The ASMCA utility is located in the path Grid_home\bin
. You can also use ASMCA to configure failure groups, Oracle ASM volumes and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you are upgrading from an Oracle ASM release earlier than 11.2, if you chose to use Oracle ASM, and if ASMCA detects that a prior Oracle ASM release is installed in another Oracle ASM home, then after installing the Oracle ASM 12c Release 1 (12.1) binaries you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM instance to create the Oracle ACFS file system.
If you are upgrading from Oracle ASM 11g Release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started during the upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from a prior release to the current release.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior release of Oracle ASM instances on all nodes is Oracle ASM 11g Release 1 or later, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior release of the Oracle ASM instances for an Oracle RAC installation are from a release prior to Oracle ASM 11g Release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to Oracle ASM 12c Release 1 (12.1).
If you want to install Oracle RAC on Oracle ACFS, then you must first create the Oracle home directory in Oracle ACFS. The disk group compatibility attributes COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2 or higher for the disk group to contain an Oracle ADVM volume.
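As an illustration only (the disk group name DATA below is hypothetical), these compatibility attributes can be set from an Oracle ASM instance with SQL such as:

```sql
-- Raise the compatibility attributes required before the disk group
-- can contain an Oracle ADVM volume (disk group name is hypothetical)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'  = '11.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '11.2';
```

Note that raising a disk group compatibility attribute is irreversible; see Oracle Automatic Storage Management Administrator's Guide before changing these values.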
See Also:
"Restrictions and Guidelines for Oracle ACFS"

See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS