This appendix provides an overview of concepts and terms that may be necessary to complete your installation.
Oracle Universal Installer uses various directories when installing the software, by default under SYSTEM_DRIVE:\app\user_name\.
Oracle Optimal Flexible Architecture (OFA) rules help you organize database software and configure databases so that multiple databases, of different releases and owned by different users, can coexist.
For installations with Oracle Grid Infrastructure only, Oracle recommends that you create an Oracle base and Grid home path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that Oracle Universal Installer (OUI) can select that directory during installation.
The OFA path for an Oracle base is X:\app\user, where user is the name of the Oracle Installation user account.
The OFA path for an Oracle Grid Infrastructure Oracle home is X:\app\release\grid, where release is the three-digit Oracle Grid Infrastructure release number (for example, 12.1.0).
When OUI finds an OFA-compliant software path, it creates the Oracle Grid Infrastructure Grid home and Oracle Inventory (oraInventory) directories for you. For example, the paths C:\app and G:\app are both OFA-compliant.
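As an illustrative sketch of the OFA path pattern described above (this is not part of OUI, which performs its own validation), the following Python function tests whether a candidate Oracle base path has the X:\app or X:\app\user shape:

```python
import re

def is_ofa_compliant_base(path):
    """Return True if path looks like an OFA-compliant Oracle base:
    a drive letter, then \\app, optionally followed by a user-name
    component (for example, C:\\app or G:\\app\\oracle).

    Illustrative only; OUI applies its own checks during installation."""
    return re.match(r"^[A-Za-z]:\\app(\\[^\\]+)?\\?$", path) is not None

print(is_ofa_compliant_base(r"C:\app"))          # OFA-compliant
print(is_ofa_compliant_base(r"G:\app\oracle"))   # OFA-compliant
print(is_ofa_compliant_base(r"D:\oracle\base"))  # not OFA-compliant
```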
The Oracle Grid Infrastructure home must be in a path that is different from the Oracle base path for the Oracle Installation user for Oracle Grid Infrastructure. If you create an Oracle Grid Infrastructure home path manually, then ensure that it is in a separate path specific to this release, and not under an existing Oracle base path.
Note:
If you choose to create an Oracle Grid Infrastructure home manually, then do not create the Oracle Grid Infrastructure home for a cluster under either:
The Oracle base directory for the Oracle Installation user for Oracle Grid Infrastructure (for example, grid)
The Oracle base directory for the Oracle Installation user for Oracle Database (for example, oracle)
Creating an Oracle Clusterware installation in an Oracle base directory causes subsequent Oracle installations to fail.
Oracle Grid Infrastructure homes can be placed in a local home on servers, even if your existing Oracle Clusterware home from a prior release is in a shared ___location.
The Oracle Inventory directory is the central inventory ___location for all Oracle software installed on a server.
The ___location of the Oracle Inventory directory is System_drive:\Program Files\Oracle\Inventory.
The first time you install Oracle software on a system, the installer checks whether an Oracle Inventory directory exists. The ___location of the Oracle Inventory directory is determined by the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\inst_loc. If an Oracle Inventory directory does not exist, then the installer creates one in the default ___location of C:\Program Files\Oracle\Inventory.
Note:
Changing the value for inst_loc in the Windows registry is not supported.
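To illustrate the lookup order described above, the following Python sketch reads (never modifies) the inst_loc registry value, falling back to the documented default when the key is absent or when not running on Windows. This is an illustrative helper, not an Oracle tool:

```python
DEFAULT_INVENTORY = r"C:\Program Files\Oracle\Inventory"

def oracle_inventory_location():
    """Return the Oracle Inventory path recorded in the registry value
    inst_loc under HKEY_LOCAL_MACHINE\\SOFTWARE\\Oracle, falling back
    to the documented default when the key is absent or when not
    running on Windows. Read-only; changing inst_loc is unsupported."""
    try:
        import winreg  # Windows-only standard library module
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Oracle") as key:
            value, _type = winreg.QueryValueEx(key, "inst_loc")
            return value
    except (ImportError, OSError):
        return DEFAULT_INVENTORY

print(oracle_inventory_location())
```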
By default, the Oracle Inventory directory is not installed under the Oracle base directory for the Oracle Installation user. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas there is a separate Oracle base directory for each user.
The Oracle Inventory directory contains the following:
A registry of the Oracle home directories (Oracle Grid Infrastructure and Oracle Database) on the system
Installation logs and trace files from installations of Oracle software. These files are also copied to the respective Oracle homes for future reference.
Other metadata inventory information regarding Oracle installations is stored in the individual Oracle home inventory directories, and is separate from the central inventory.
The central inventory is created on all cluster nodes. On core nodes, the inventory does not include processing nodes in the Oracle home's nodelist. For processing nodes, the nodelist in the central inventory includes only the local node. The list of core nodes is updated only in the central inventory on core nodes.
During installation, you are prompted to specify an Oracle base ___location, which is based on the Oracle Installation user. You can choose a ___location with an existing Oracle home, or choose another directory ___location that does not have the structure for an Oracle base directory. The default ___location for the Oracle base directory is SYSTEM_DRIVE:\app\user_name\.
Using the Oracle base directory path helps to facilitate the organization of Oracle installations, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
Multiple Oracle Database installations can use the same Oracle base directory. The Oracle Grid Infrastructure installation uses a different directory path, one outside of Oracle base. If you use different operating system users to perform the Oracle software installations, then each user will have a different default Oracle base ___location.
However, Oracle Restart can be in the same ___location as other Oracle software.
Because you can have only one Oracle Grid Infrastructure installation on a cluster, and all upgrades are out-of-place upgrades, Oracle recommends that you create an Oracle base for the Oracle Installation user for Oracle Grid Infrastructure (for example, C:\app\grid), and use an Oracle home for the Oracle Grid Infrastructure binaries based on the release number of that installation (for example, C:\app\12.1.0\grid).
Oracle Grid Infrastructure uses various operating system groups. These groups grant their members operating system authentication for the administrative system privileges of Oracle Clusterware and Oracle ASM.
You must have a group whose members are given access to write to the Oracle Inventory directory, which is the central inventory record of all Oracle software installations on a server.
Members of the Oracle Inventory group have write privileges to the Oracle central inventory directory, and are also granted permissions for various Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware home to which DBAs need write access, and other necessary privileges. By default, this group is called ORA_INSTALL.
All Oracle software installation users must be members of the Oracle Inventory group. Members of this group can communicate with Cluster Synchronization Services (CSS).
Note:
If Oracle software is already installed on the system, then the existing Oracle Inventory group is used instead of creating a new Inventory group.

On the Windows platform, the operating system groups used to manage Oracle Database are the ORA_DBA group and the ORA_HOMENAME_DBA group. Two optional groups, ORA_OPER and ORA_HOMENAME_OPER, can also be used.
When you install Oracle Database, a special Windows local group called ORA_DBA is created (if it does not already exist from an earlier Oracle Database installation), and the Oracle Installation user is automatically added to this group. Members of the ORA_DBA group automatically receive the SYSDBA privilege. Membership in the ORA_DBA group allows a user to:
Connect to Oracle Database instances without a password
Perform database administration procedures such as starting and shutting down local databases
Add additional Windows users to ORA_DBA, enabling them to have the SYSDBA privilege
Membership in the ORA_DBA group grants full access to all databases on the server. Membership in the ORA_HOMENAME_DBA group grants full access to all databases that run from the specific Oracle home. Belonging to either group does not grant any special privileges for the user with respect to the Oracle ASM instance. Members of these groups cannot connect to the Oracle ASM instance.
The ORA_OPER and ORA_HOMENAME_OPER groups are also created during installation. However, these groups are optional. You assign users to these groups if you want a separate group of users with a limited set of database administrative privileges (the SYSOPER privilege), for either all databases or databases that run from one Oracle home.
By default, members of the database ORA_DBA group are not granted SYSASM privileges.
SYSASM is a system privilege that enables the separation of the Oracle ASM storage administration privilege from SYSDBA. The following table lists the groups created during installation for managing Oracle ASM privileges:
Table C-1 Oracle ASM Operating System Groups and Privileges for Windows
Role | Operating System Group | Description |
---|---|---|
OSASM | ORA_ASMADMIN | Members of this group can connect to the Oracle ASM instance with the SYSASM privilege using operating system authentication. During installation, the Oracle Installation user for Oracle Grid Infrastructure and the Oracle Database Service IDs are configured as members of this group. |
OSDBA for ASM | ORA_ASMDBA | This group grants database access to Oracle ASM. Any client of Oracle ASM that needs to access storage managed by Oracle ASM must be in this group. |
OSOPER for ASM | ORA_ASMOPER | Members of this group can connect to the Oracle ASM instance with the SYSOPER privilege using operating system authentication. This group has no members after installation, but you can manually add users to it after the installation completes. |
See Also:
"Oracle ASM Groups for Job Role Separation"Oracle Database and Oracle RAC provide two special administrative privileges, SYSDBA and SYSOPER, which allow grantees to perform critical database administration tasks, such as creating databases and starting up or shutting down database instances.
In Windows, the operating system groups for the SYSDBA and SYSOPER database administrative privileges are fixed to the ORA_DBA and ORA_OPER groups, and these group names cannot be changed. However, while the SYSDBA privilege is convenient to use, it provides more access than is necessary for many assigned administrative tasks.
Oracle Database 12c introduces new administrative privileges that are more task-specific, granting the least privilege needed to support those tasks. Three new administrative privileges are created to satisfy the needs of common database tasks. These privileges and their related operating system groups are described in the following table:
Table C-2 Oracle Database Administrative Privileges and Roles for Specific Tasks
Privilege | Operating System Group | Description |
---|---|---|
SYSBACKUP | ORA_HOMENAME_SYSBACKUP | Administrative privilege for backup and recovery tasks |
SYSDG | ORA_HOMENAME_SYSDG | Administrative privilege for Oracle Data Guard administrative tasks |
SYSKM | ORA_HOMENAME_SYSKM | Administrative privilege for encryption key management tasks |
You cannot change the name of these operating system groups. These groups do not have any members after database creation, but an Administrator user can assign users to these groups after installation. Each operating system group identifies a group of operating system users that are granted the associated set of database privileges.
During installation, you are asked to identify the planned use for each network interface that Oracle Universal Installer (OUI) detects on your cluster node.
Identify each interface as a public or private interface, or as an interface that you do not want Oracle Grid Infrastructure or Oracle ASM to use. Public and virtual internet protocol (VIP) addresses are configured on public interfaces. Private addresses are configured on private interfaces.
Refer to the following sections for detailed information about each address type.
The public IP address uses the public interface (the interface with access available to clients).
The public IP address is assigned dynamically using Dynamic Host Configuration Protocol (DHCP), or defined statically in a ___domain name system (DNS) or hosts file. The public IP address is the primary address for a cluster member node, and should be the address that resolves to the name returned when you enter the command hostname.
If you configure IP addresses manually, then avoid changing host names after you complete the Oracle Grid Infrastructure installation, including adding or deleting ___domain qualifications. A node with a new host name is considered a new host, and must be added to the cluster. A node under the old name will appear to be down until it is removed from the cluster.
Oracle Clusterware uses interfaces marked as private for internode communication.
Each cluster node must have an interface that you identify during installation as a private interface. Private interfaces must have addresses configured for the interface itself, but no additional configuration is required. Oracle Clusterware uses the interfaces you identify as private for the cluster interconnect. If you identify multiple interfaces during installation for the private network, then Oracle Clusterware configures them with Redundant Interconnect Usage. Any interface that you identify as private must be on a subnet that connects to every node of the cluster. Oracle Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between nodes, Oracle strongly recommends using a physically separate, private network. If you configure addresses using a DNS, then you should ensure that the private IP addresses are reachable only by the cluster nodes.
After installation, if you modify the interconnect for Oracle Real Application Clusters (Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you must change the interconnect to a private IP address, on a subnet that is not used with a public IP address, nor marked as a public subnet by oifcfg. Oracle does not support changing the interconnect to an interface using a subnet that you have designated as a public subnet.
You should not use a firewall on the network with the private network IP addresses, because this can block interconnect traffic.
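As a quick illustrative check of whether a candidate interconnect address falls in a non-public range (this is a sketch only, not a substitute for oifcfg or proper network design), Python's standard ipaddress module can be used:

```python
import ipaddress

def looks_private(addr):
    """Return True if addr is in a non-globally-routable range.
    Note: Python's is_private covers the RFC 1918 ranges (10/8,
    172.16/12, 192.168/16) plus loopback, link-local, and
    documentation ranges, so treat a True result as a starting
    point, not a complete validation of the interconnect subnet."""
    return ipaddress.ip_address(addr).is_private

print(looks_private("192.168.10.1"))  # typical interconnect address
print(looks_private("8.8.8.8"))       # public address
```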
The virtual IP (VIP) address is registered in the grid naming service (GNS), or the DNS.
Select an address for your VIP that meets the following requirements:
The IP address and host name are currently unused (it can be registered in a DNS, but should not be accessible by a ping command)
The VIP is on the same subnet as your public interface
If you are not using Grid Naming Service (GNS), then determine a virtual host name for each node. A virtual host name is a public node name that reroutes client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
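The recommended hostname-vip naming convention can be sketched in Python (illustrative only; you choose and register the actual names in DNS):

```python
def vip_name(hostname):
    """Build a virtual host name in the recommended hostname-vip
    format. Any ___domain qualification is stripped first, because
    host names and virtual host names are not ___domain-qualified."""
    return f"{hostname.split('.')[0]}-vip"

print(vip_name("myclstr2"))           # myclstr2-vip
print(vip_name("node1.example.com"))  # node1-vip
```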
The GNS virtual IP address is a static IP address configured in the DNS.
The DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name resolution requests at that address. Within the subdomain, GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com), and delegate DNS requests for that subdomain to the GNS virtual IP address for the cluster, which GNS serves. DHCP provides the set of IP addresses to the cluster; DHCP must be available on the public network for the cluster.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about GNS

Oracle Database clients connect to the database using a single client access name (SCAN).
The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN is a virtual IP name, similar to the names used for virtual IP addresses, such as node1-vip
. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and associated with multiple IP addresses, not just one address.
The SCAN resolves to multiple IP addresses reflecting multiple listeners in the cluster that handle public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is made available to the client. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required in the client.
During installation, SCAN listeners are created to listen on the SCAN IP addresses. The SCAN listeners are started on nodes determined by Oracle Clusterware. Oracle Net Services routes application requests to the least-loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.
If you specify a GNS ___domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start the Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS ___domain is grid.example.com, then the SCAN name is mycluster-scan.grid.example.com.
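The default naming rule above can be sketched in Python (illustrative only; the installer derives the actual value):

```python
def default_scan_name(cluster_name, gns_domain=None, current_domain=None):
    """Apply the default SCAN naming rule: clustername-scan.GNS_domain
    when a GNS ___domain is specified, otherwise
    clustername-scan.current_domain."""
    ___domain = gns_domain or current_domain
    return f"{cluster_name}-scan.{___domain}"

print(default_scan_name("mycluster", gns_domain="grid.example.com"))
# mycluster-scan.grid.example.com
```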
Clients configured to use IP addresses for Oracle Database releases prior to Oracle Database 11g Release 2 can continue to use their existing connection addresses; using SCAN is not required. When you upgrade to Oracle Clusterware 12c Release 1 (12.1), the SCAN becomes available, and you should use the SCAN for connections to Oracle Database 11g Release 2 or later databases. When an earlier release of Oracle Database is upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to connect to that database. The database registers with the SCAN listener through the remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter must be set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address for the SCAN, for example, using HOST=SCAN_name.
The SCAN is optional for most deployments. However, clients using Oracle Database 11g Release 2 and later policy-managed databases using server pools must access the database using the SCAN. This is required because policy-managed databases can run on different servers at different times, so connecting to a particular node by using the virtual IP address for a policy-managed database is not possible.
Provide SCAN addresses for client access to the cluster. These addresses should be configured as round robin addresses on the ___domain name service (DNS). Oracle recommends that you supply three SCAN addresses.
Note:
The following is a list of additional information about node IP addresses:
For the local node only, OUI automatically fills in public and VIP fields. If your system uses vendor clusterware, then OUI may fill additional fields.
Host names and virtual host names are not ___domain-qualified. If you provide a ___domain in the address field during installation, then OUI removes the ___domain from the address.
Interfaces identified as private for private IP addresses should not be accessible as public interfaces. Using public interfaces for Cache Fusion can cause performance problems.
Identify public and private interfaces. OUI configures public interfaces for use by public and virtual IP addresses, and configures private IP addresses on private interfaces.
The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members.
Oracle Clusterware 12c Release 1 (12.1) is automatically configured with Cluster Time Synchronization Service (CTSS).
CTSS provides automatic synchronization of the time settings on all cluster nodes. CTSS uses the optimal synchronization strategy for the type of cluster you deploy.
If you have an existing cluster synchronization service, such as network time protocol (NTP) or Windows Time Service, then CTSS starts in an observer mode. Otherwise, CTSS starts in an active mode to ensure that time is synchronized between cluster nodes. CTSS will not cause compatibility issues.
The CTSS module is installed as a part of the Oracle Grid Infrastructure installation. CTSS daemons are started by the Oracle High Availability Services daemon (ohasd), and do not require a command-line interface.
The following sections describe concepts related to Oracle Automatic Storage Management (Oracle ASM) storage.
Oracle ASM has been extended to include a general purpose file system, called Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Oracle ACFS is a new multi-platform, scalable file system, and storage management technology that extends Oracle ASM functionality to support customer files maintained outside of the Oracle Database. Files supported by Oracle ACFS include application executable files, data files, and application reports. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
Note:
Starting with Oracle ASM 11g Release 2 (11.2.0.2), Oracle ACFS is supported on Windows Server 2008 R2 x64.

Oracle ACFS can provide optimized storage for all Oracle files, including Oracle Database binaries. It can also store other application files. However, it cannot be used for Oracle Clusterware binaries or Oracle Clusterware files.
See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ACFS

You can use Oracle Automatic Storage Management Configuration Assistant (ASMCA) to upgrade an existing Oracle ASM instance to Oracle ASM 12c Release 1 (12.1) or higher.
ASMCA is located in the path Grid_home\bin
. You can also use ASMCA to configure failure groups, Oracle ASM volumes, and Oracle ACFS.
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM release installed in another Oracle ASM home, then after installing the Oracle ASM 12c Release 1 (12.1) software, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior release of Oracle ASM instances on all nodes is Oracle ASM 11g Release 1 or higher, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior release of Oracle ASM instances on an Oracle RAC installation are from an Oracle ASM release prior to Oracle ASM 11g Release 1, then rolling upgrades cannot be performed. Oracle ASM is then upgraded on all nodes to 12c Release 1 (12.1).
If you have an existing standalone Oracle ASM installation on one or more nodes that are member nodes of the cluster, then OUI proceeds to install Oracle Grid Infrastructure for a cluster. If you place Oracle Clusterware files (OCR and voting files) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 12c Release 1 (12.1) installation.
On remote nodes, ASMCA identifies any running standalone Oracle ASM instances, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, disk group names on the cluster-enabled Oracle ASM instances must be different from existing standalone disk group names.
During an out-of-place upgrade of Oracle Grid Infrastructure, the installer installs the newer release of the software in a separate Grid home. Both releases of Oracle Clusterware are on each cluster member node, but only one release is active.
A rolling upgrade avoids downtime and ensures continuous availability while the software is upgraded to the new release.
If you have separate Oracle Clusterware homes on each node, then you can perform an out-of-place upgrade on all nodes, or perform an out-of-place rolling upgrade, so that some nodes run Oracle Clusterware from the earlier release Oracle Clusterware home, and other nodes run Oracle Clusterware from the new Oracle Clusterware home.
An in-place upgrade of Oracle Clusterware 11g Release 2 or later is not supported.
Related Topics
Requirements for Oracle Grid Infrastructure for a cluster are different from Oracle Grid Infrastructure on a single instance in an Oracle Restart configuration.
See Also:
Oracle Database Installation Guide for information about Oracle Restart requirements