3.7 Adding Servers to a Cluster
This procedure describes how to add servers to a cluster.
For adding nodes to an Oracle VM cluster, refer to Expanding an Oracle VM RAC Cluster on Exadata in the Oracle Exadata Database Machine Maintenance Guide.
Caution:
If Oracle Clusterware manages additional services that are not yet installed on the new nodes, such as Oracle GoldenGate, then note the following:
- It may be necessary to stop those services on the existing node before running the addNode.sh script.
- It is necessary to create any users and groups on the new database servers that run these additional services.
- It may be necessary to disable those services from auto-start so that Oracle Clusterware does not try to start the services on the new nodes.
Note:
To prevent problems with transferring files between existing and new nodes, you need to set up SSH equivalence. See Step 4 in Expanding an Oracle VM Oracle RAC Cluster on Exadata for details.
- Ensure the /etc/oracle/cell/network-config/*.ora files are correct and consistent on all database servers. The cellip.ora file on every database server should include both the older and newer database servers and storage servers.
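As a quick consistency check (an illustrative sketch, not part of the official procedure), you could compare cellip.ora checksums across all database servers with dcli and count the distinct values; the dbs_group file name and the helper function below are assumptions:

```shell
# Count the distinct checksum values in dcli-style "host: checksum file"
# output; a result greater than 1 means the file differs somewhere.
count_distinct_checksums() {
  awk '{print $2}' | sort -u | wc -l
}

# Live usage sketch (dbs_group is a hypothetical group file):
#   dcli -g dbs_group -l root md5sum \
#       /etc/oracle/cell/network-config/cellip.ora | count_distinct_checksums
```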
- Ensure the ORACLE_BASE and diag destination directories have been created on the Oracle Grid Infrastructure destination home.

  The following is an example for Oracle Grid Infrastructure 11g:

  # dcli -g /root/new_group_files/dbs_group -l root mkdir -p \
    /u01/app/11.2.0/grid /u01/app/oraInventory /u01/app/grid/diag
  # dcli -g /root/new_group_files/dbs_group -l root chown -R grid:oinstall \
    /u01/app/11.2.0 /u01/app/oraInventory /u01/app/grid
  # dcli -g /root/new_group_files/dbs_group -l root chmod -R 770 \
    /u01/app/oraInventory
  # dcli -g /root/new_group_files/dbs_group -l root chmod -R 755 \
    /u01/app/11.2.0 /u01/app/11.2.0/grid

  The following is an example for Oracle Grid Infrastructure 12c:

  # cd /
  # rm -rf /u01/app/*
  # mkdir -p /u01/app/12.1.0.2/grid
  # mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
  # chown -R oracle:oinstall /u01
- Ensure the inventory directory and Grid home directories have been created and have the proper permissions. The directories should be owned by the Grid user and the OINSTALL group. The inventory directory should have 770 permissions, and the Oracle Grid Infrastructure home directories should have 755.

  If you are running Oracle Grid Infrastructure 12c or later:

  - Make sure oraInventory does not exist inside /u01/app.
  - Make sure /etc/oraInst.loc does not exist.
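The ownership and permission requirements above lend themselves to a scripted check. This is a hypothetical helper, not an Oracle-supplied tool; it assumes GNU stat, as found on the database servers' Linux image:

```shell
# Return success only if the directory's octal mode matches the expectation.
check_mode() {
  actual=$(stat -c '%a' "$1") || return 1
  [ "$actual" = "$2" ]
}

# Example checks mirroring the rules above (paths are illustrative):
#   check_mode /u01/app/oraInventory 770 || echo "oraInventory mode wrong"
#   check_mode /u01/app/11.2.0/grid  755 || echo "Grid home mode wrong"
```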
- Create users and groups on the new nodes with the same user identifiers and group identifiers as on the existing nodes.
Note:
If Oracle Exadata Deployment Assistant (OEDA) was used earlier, then these users and groups should already have been created. Check that they exist and have the correct UID and GID values.
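One way to confirm matching identifiers (a sketch; the group file name and the grid user are assumptions) is to gather `id` output from all nodes with dcli and verify that every node reports the same UID/GID pair:

```shell
# Count distinct "uid gid" pairs in dcli-style output such as
#   "dm01db01: 1001 1002"
# Exactly 1 means every node agrees on the user's UID and primary GID.
count_distinct_id_pairs() {
  awk '{print $2, $3}' | sort -u | wc -l
}

# Live usage sketch (dbs_group and the grid user are assumptions):
#   dcli -g dbs_group -l root 'echo "$(id -u grid) $(id -g grid)"' \
#       | count_distinct_id_pairs
```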
- Log in as the Grid user on an existing host.

- Verify that an Oracle Cluster Registry (OCR) backup exists:

  ocrconfig -showbackup
- Verify that the additional database servers are ready to be added to the cluster using commands similar to the following:

  $ cluvfy stage -post hwos -n \
    dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
    -verbose

  $ cluvfy comp peer -refnode dm01db01 -n \
    dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
    -orainv oinstall -osdba dba | grep -B 3 -A 2 mismatched

  $ cluvfy stage -pre nodeadd -n \
    dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
    -verbose -fixup -fixupdir /home/grid_owner_name/fixup.d

  In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, dm02db01 through dm02db08 are the new database servers, and refnode is an existing database server.
Note:
- The second and third commands do not display output if the commands complete correctly.

- An error about a voting disk, similar to the following, may be displayed:

    ERROR: PRVF-5449 : Check of Voting Disk location "o/192.168.73.102/ \
    DATA_CD_00_dm01cel07(o/192.168.73.102/DATA_CD_00_dm01cel07)" \
    failed on the following nodes:
    Check failed on nodes: dm01db01
    dm01db01: No such file or directory
    …
    PRVF-5431 : Oracle Cluster Voting Disk configuration check

  If such an error occurs:

  - If you are running Oracle Grid Infrastructure 11g, set the environment variable as follows:

      $ export IGNORE_PREADDNODE_CHECKS=Y

    Setting the environment variable does not prevent the error when running the cluvfy command, but it does allow the addNode.sh script to complete successfully.

  - If you are running Oracle Grid Infrastructure 12c or later, use the following addnode parameters: -ignoreSysPrereqs -ignorePrereq

    In Oracle Grid Infrastructure 12c and later, addnode does not use the IGNORE_PREADDNODE_CHECKS environment variable.
- If a database server was installed with a certain image and subsequently patched to a later image, then some operating system libraries may be older than the version expected by the cluvfy command. This causes the cluvfy command, and possibly the addNode.sh script, to fail.

  It is permissible to have an earlier version as long as the difference in versions is minor. For example, glibc-common-2.5-81.el5_8.2 versus glibc-common-2.5-49: the releases differ, but both are at version 2.5, so the difference is minor and it is permissible for them to differ.

  Set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running the addNode.sh script, or use the addnode parameters -ignoreSysPrereqs -ignorePrereq with the addNode.sh script, to work around this problem.
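The "minor difference" rule can be checked mechanically. This hypothetical helper (not an Oracle tool) extracts the upstream version — the 2.5 in glibc-common-2.5-81.el5_8.2 — and compares two package strings:

```shell
# Extract the upstream version from NAME-VERSION-RELEASE, e.g.
# "glibc-common-2.5-81.el5_8.2" -> "2.5".
pkg_version() {
  echo "$1" | sed 's/^[A-Za-z_-]*-\([0-9][0-9.]*\)-.*/\1/'
}

# True when two packages differ only in their release suffix.
same_upstream_version() {
  [ "$(pkg_version "$1")" = "$(pkg_version "$2")" ]
}
```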
- Ensure that all directories inside the Oracle Grid Infrastructure home on the existing server have their executable bits set. Run the following commands as the root user:

  find /u01/app/11.2.0/grid -type d -user root ! -perm /u+x ! \
    -perm /g+x ! -perm /o+x

  find /u01/app/11.2.0/grid -type d -user grid_owner_name ! -perm /u+x ! \
    -perm /g+x ! -perm /o+x

  In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, and /u01/app/11.2.0/grid is the Oracle Grid Infrastructure home directory.

  If any directories are listed, then ensure the group and others permissions are +x. The Grid_home/network/admin/samples, Grid_home/crf/admin/run/crfmond, and Grid_home/crf/admin/run/crflogd directories may need the +x permissions set.

  If you are running Oracle Grid Infrastructure 12c or later, run commands similar to the following:

  # chmod -R u+x /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
  # chmod -R o+rx /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
  # chmod o+r /u01/app/12.1.0.2/grid/bin/oradaemonagent \
    /u01/app/12.1.0.2/grid/srvm/admin/logging.properties
  # chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*O
  # chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*0
  # chown -f grid_owner_name:dba /u01/app/12.1.0.2/grid/OPatch/ocm/bin/emocmrsp

  The Grid_home/network/admin/samples directory needs the +x permission:

  chmod -R a+x /u01/app/12.1.0.2/grid/network/admin/samples
- Run the following command. It is assumed that the Oracle Grid Infrastructure home is owned by the Grid user.

  $ dcli -g old_db_nodes -l root chown -f grid_owner_name:dba \
    /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
- This step is needed only if you are running Oracle Grid Infrastructure 11g. In Oracle Grid Infrastructure 12c, no response file is needed because the values are specified on the command line.

  As the Grid user, create a response file, add-cluster-nodes.rsp, to add the new servers, similar to the following:

  RESPONSEFILE_VERSION=2.2.1.0.0

  CLUSTER_NEW_NODES={dm02db01,dm02db02, \
  dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}

  CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm0201-vip,dm0202-vip,dm0203-vip,dm0204-vip, \
  dm0205-vip,dm0206-vip,dm0207-vip,dm0208-vip}

  In the preceding file, the host names dm02db01 through dm02db08 are the new nodes being added to the cluster.

  Note:
  The lines listing the server names should appear on one continuous line. They are wrapped here due to page limitations.
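A common slip when editing the response file is ending up with unequal numbers of nodes and VIPs. This hypothetical one-liner (not an Oracle utility) counts the entries in a brace-delimited list, once the wrapped lines are joined, so the two counts can be compared:

```shell
# Count comma-separated entries in a value like "{a,b,c}" -> 3.
count_entries() {
  echo "$1" | tr -d '{} ' | tr ',' '\n' | grep -c .
}

# Sketch: both counts should match.
#   count_entries "{dm02db01,dm02db02}"          # node list
#   count_entries "{dm0201-vip,dm0202-vip}"      # VIP list
```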
- Ensure most of the files in the Grid_home/rdbms/audit and Grid_home/log/diag/* directories have been moved or deleted before extending a cluster.
- Refer to My Oracle Support note 744213.1 if the installer runs out of memory. The note describes how to edit the Grid_home/oui/ora-param.ini file and change the JRE_MEMORY_OPTIONS parameter to -Xms512m -Xmx2048m.
- Add the new servers by running the addNode.sh script from an existing server as the Grid user.

  - If you are running Oracle Grid Infrastructure 11g:

    $ cd Grid_home/oui/bin
    $ ./addNode.sh -silent -responseFile /path/to/add-cluster-nodes.rsp

  - If you are running Oracle Grid Infrastructure 12c or later, run the addnode.sh command with the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters. The syntax is:

    $ ./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}" \
      "CLUSTER_NEW_VIRTUAL_HOSTNAMES={comma_delimited_new_node_vips}"

    For example:

    $ cd Grid_home/addnode/
    $ ./addnode.sh -silent \
      "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}" \
      "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm02db01-vip,dm02db02-vip,dm02db03-vip,dm02db04-vip, \
      dm02db05-vip,dm02db06-vip,dm02db07-vip,dm02db08-vip}" \
      -ignoreSysPrereqs -ignorePrereq
- Verify the grid disks are visible from each of the new database servers:

  $ Grid_home/grid/bin/kfod disks=all dscvgroup=true
- When prompted, run the orainstRoot.sh script as the root user using the dcli utility:

  $ dcli -g new_db_nodes -l root \
    /u01/app/oraInventory/orainstRoot.sh
- Disable HAIP on the new servers.

  Before running the root.sh script, on each new server, set the HAIP_UNSUPPORTED environment variable to TRUE:

  $ export HAIP_UNSUPPORTED=true
- Run the Grid_home/root.sh script on each server sequentially. This simplifies the process and ensures that any issues can be clearly identified and addressed.

  Note:
  The node identifier is set in the order in which the root.sh script is run. Typically, the script is run from the lowest-numbered node name to the highest.
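Sequential execution can be scripted. The driver below is a hypothetical sketch (run_on_nodes, the ssh-based runner, and the host names are all assumptions, not Oracle tooling); it simply stops at the first node where the command fails so that node can be investigated before any later node is touched:

```shell
# Run a command once per node, in order, stopping at the first failure.
run_on_nodes() {
  cmd=$1; shift
  for node in "$@"; do
    echo "running on $node"
    "$cmd" "$node" || { echo "failed on $node"; return 1; }
  done
}

# Live usage sketch (path and host names are assumptions):
#   run_root() { ssh root@"$1" /u01/app/11.2.0/grid/root.sh; }
#   run_on_nodes run_root dm02db01 dm02db02 dm02db03
```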
- Check the log file from the root.sh script and verify there are no problems on the server before proceeding to the next server. If there are problems, then resolve them before continuing.
- Check the status of the cluster after adding the servers:

  $ cluvfy stage -post nodeadd -n \
    dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
    -verbose
- Check that all servers have been added and have basic services running:

  crsctl stat res -t

  Note:
  It may be necessary to mount disk groups on the new servers. The following commands must be run as the oracle user:

  $ srvctl start diskgroup -g data
  $ srvctl start diskgroup -g reco
- If you are running Oracle Grid Infrastructure release 11.2.0.2 or later, then perform the following steps:

  - Manually add the CLUSTER_INTERCONNECTS parameter to the SPFILE for each Oracle ASM instance:

    ALTER SYSTEM SET cluster_interconnects = '192.168.10.x' \
      sid='+ASMx' scope=spfile

  - Restart the cluster on each new server.

  - Verify the parameters were set correctly.
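The CLUSTER_INTERCONNECTS statement above varies only in the instance number and private IP. A hypothetical helper to build the statement per instance (the function name and the 192.168.10.x addressing are assumptions taken from the example; it only prints SQL, it does not connect to any database):

```shell
# Print the ALTER SYSTEM statement for ASM instance $1 whose private
# interconnect IP is $2; the statement would then be run via SQL*Plus.
asm_interconnect_sql() {
  printf "ALTER SYSTEM SET cluster_interconnects = '%s' sid='+ASM%s' scope=spfile;\n" \
    "$2" "$1"
}

# Example: asm_interconnect_sql 3 192.168.10.3
```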
Parent topic: Configuring the New Hardware