This procedure describes how to add a node to your cluster. This procedure assumes that:
There is an existing cluster with two nodes named node1 and node2
You are adding a node named node3 using a virtual node name, node3-vip, that resolves to an IP address, if you are not using DHCP and Grid Naming Service (GNS)
You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.
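Throughout these steps, Grid_home is only a placeholder for that installed home. As a minimal convenience sketch, assuming a purely illustrative path of /u01/app/12.1.0/grid, you can export it as a shell variable so that you substitute it consistently:
$ export GRID_HOME=/u01/app/12.1.0/grid
The remaining steps spell out Grid_home explicitly; replace it with your actual path wherever it appears.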
See Also:
Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions
Verify the integrity of the cluster and node3:
$ cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]
You can specify the -fixup option to attempt to fix the cluster or node if the verification fails.
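For example, a concrete invocation that requests detailed output and attempts automatic fixes, using the node name assumed in this procedure, would be:
$ cluvfy stage -pre nodeadd -n node3 -fixup -verbose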
To extend the Oracle Grid Infrastructure home to node3, navigate to the Grid_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle Clusterware.
To run addnode.sh in interactive mode, run addnode.sh from Grid_home/addnode.
You can also run addnode.sh in silent mode for both Oracle Clusterware standard Clusters and Oracle Flex Clusters.
For an Oracle Clusterware standard Cluster:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
If you are adding node3 to an Oracle Flex Cluster, then you can specify the node role on the command line, as follows:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
Note:
Hub Nodes always have VIPs but Leaf Nodes may not. If you use the preceding syntax to add multiple nodes to the cluster, then you can use syntax similar to the following, where node3 is a Hub Node and node4 is a Leaf Node:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,}" "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
If prompted, then run the orainstRoot.sh script as root to populate the /etc/oraInst.loc file with the ___location of the central inventory. For example:
# /opt/oracle/oraInventory/orainstRoot.sh
If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh "CLUSTER_NEW_NODES={node3}"
Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:
Run the Grid_home/root.sh script on node3 as root, where Grid_home is the Oracle Grid Infrastructure home.
Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
$ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}" LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
Note:
Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:
Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.
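For example, assuming a database unique name of orcl (an illustrative name only), you would run:
$ srvctl config database -db orcl
The output includes the Oracle home path, which tells you the mount point to recreate on node3.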
Run the following command as root on node3 to create the mount point:
# mkdir -p mount_point_path
Mount the file system that hosts the Oracle RAC database home.
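As an illustration only, if the shared home were exported over NFS from a host named storage1 and mounted at /u01/app/oracle/product/12.1.0/dbhome_1 (the host, export path, file system type, and mount options here are assumptions, not values from this procedure), these two steps might look like:
# mkdir -p /u01/app/oracle/product/12.1.0/dbhome_1
# mount -t nfs storage1:/export/oracle/dbhome_1 /u01/app/oracle/product/12.1.0/dbhome_1
Use whatever mount options your storage vendor and file system require for hosting an Oracle home.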
Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
$ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name" -cfs
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
Note:
Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
Note:
After running addnode.sh, ensure the Grid_home/network/admin/samples directory has permissions set to 750.
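For example, with an illustrative Grid home of /u01/app/12.1.0/grid, you could set these permissions as root:
# chmod 750 /u01/app/12.1.0/grid/network/admin/samples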
Run the Grid_home/root.sh script on node3 as root and run the subsequent script, as instructed.
Note:
If you ran the root.sh script in step 5, then you do not need to run it again.
If you have a policy-managed database, then you must ensure that the Oracle home is cloned to the new node before you run the root.sh script.
Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:
# srvctl start filesystem -device volume_device_name -node node3
Note:
Ensure the Oracle ACFS resources, including Oracle ACFS registry resource and Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.
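One quick way to confirm this, as a sketch, is to list the clusterware resources and their states from the Grid_home/bin directory on node3 and check that the Oracle ACFS entries report ONLINE for that node:
$ ./crsctl stat res -t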
Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:
$ cluvfy stage -post nodeadd -n node3 [-verbose]
See Also:
"cluvfy stage [-pre | -post] nodeadd"
for more information about this CVU command
Check whether either a policy-managed or administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add an instance to the database to run on this newly added node.
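As a rough sketch only: DBCA can also be driven in silent mode for this task. The database name, instance name, and credentials below are placeholders, and the exact options differ between releases, so confirm them with dbca -help before relying on this form:
$ dbca -silent -addInstance -nodeList node3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword sys_password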
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for more information about using DBCA to add administrator-managed Oracle RAC database instances
Oracle Real Application Clusters Administration and Deployment Guide to add an Oracle RAC database instance to the target node if you configured a policy-managed Oracle RAC database