Adding a Node to an 11g R2 Grid


I recently tested the node-addition procedure for Oracle 11g RAC. Here is a brief walkthrough of the process.

For the purposes of this walkthrough, assume an existing two-node 11g R2 RAC: node one is named db1 and node two is named db2. A new node, named db3, is to be added to the cluster.


1. First, prepare the physical infrastructure. This includes mapping the shared storage to db3, cabling the interconnect (heartbeat) network, and similar physical preparation.

2. Install and configure the operating system on db3 to match the configuration of db1 and db2. There is quite a lot to configure at this step: roughly, verifying the OS packages required by RAC, setting kernel parameters, configuring ASMLib, editing /etc/hosts, and so on. See the official installation guide for the details.

3. Create on db3 the same OS groups and users that exist on db1 and db2, and create the corresponding directories.

Note: the groups and users you create must have the same numeric IDs as on db1 and db2!
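As a sketch, the accounts might be created as follows. The numeric IDs and the group list below are placeholders; substitute the actual values found on db1/db2 (check them there with `id grid` and `id oracle`):

```shell
# Run as root on db3. All IDs below are hypothetical -- they must be
# replaced with the IDs actually in use on db1 and db2.
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1300 dba
useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
useradd -u 1101 -g oinstall -G dba,asmdba  oracle

# Directories matching the layout used on db1/db2 in this article
mkdir -p /u01/app/11.2.0/grid_1 /u01/app/oracle
chown -R grid:oinstall /u01/app
chown oracle:oinstall /u01/app/oracle
```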

4. Make sure SSH user equivalence is configured between all nodes.
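A quick way to confirm that passwordless SSH really works in every direction is a loop like the following (a sketch; run it on each node, as both the grid and the oracle user). `BatchMode=yes` makes ssh fail instead of prompting, so any broken equivalence shows up immediately:

```shell
#!/bin/sh
# Verify SSH user equivalence from this node to every cluster node.
# Each ssh must print the remote date without asking for a password.
for dst in db1 db2 db3; do
    if ssh -o BatchMode=yes "$dst" date; then
        echo "OK: equivalence to $dst"
    else
        echo "FAILED: equivalence to $dst"
    fi
done
```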

5. Use CVU (the Cluster Verification Utility) to verify connectivity between db3 and db1/db2. Note: the following commands are executed on db1 or db2.

[grid@db1 bin]$ ./cluvfy stage -post hwos -n db3

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "db1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Node connectivity passed for subnet "172.18.33.0" with node(s) db3
TCP connectivity check passed for subnet "172.18.33.0"

Node connectivity passed for subnet "10.10.10.0" with node(s) db3
TCP connectivity check passed for subnet "10.10.10.0"

Interfaces found on subnet "172.18.33.0" that are likely candidates for VIP are:
db3 eth0:172.18.33.237

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
db3 eth1:10.10.10.237

Node connectivity check passed

Check for multiple users with UID value 0 passed

Post-check for hardware and operating system setup was successful.

The syntax of the command above is:

cluvfy stage -post hwos -n node_list | all [-verbose]

where node_list is the list of nodes to be added; if several nodes are being added, separate them with commas.

You can also compare the system configuration of db3 against db1/db2 with the following command. Settings that agree are reported as "matched"; those that differ are reported as "mismatched".

[grid@db1 bin]$ ./cluvfy comp peer -refnode db1 -n db3 -orainv oinstall -osdba asmdba -verbose

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: db1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  db3           2.95GB (3090732.0KB)      2.95GB (3090732.0KB)      matched
Physical memory check passed

Compatibility check: Available memory [reference node: db1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  db3           2.78GB (2915572.0KB)      1.5GB (1570068.0KB)       mismatched
Available memory check failed

(Some output omitted here for brevity.)

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  db3           does not exist            passed
Result: User "grid" is not part of "root" group. Check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for node addition was unsuccessful on all the nodes.

The syntax of the comparison command above is:

cluvfy comp peer [-refnode ref_node] -n node_list [-orainv orainventory_group] [-osdba osdba_group] [-verbose]

where ref_node is the reference node, node_list is the list of nodes to be added, orainventory_group is the Inventory group, and osdba_group is the OSDBA group.

6. On db1 or db2, change to the $GRID_HOME/oui/bin directory and run the following command to perform the node addition:

[grid@db1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={db3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={db3-vip}"

Note: part of the output is omitted here. If several nodes are being added, separate the values inside the braces of CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES with commas.
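For the multi-node case, a small pure-shell helper (hypothetical, not part of the Oracle tooling) can build the two brace-delimited arguments from a node list. It assumes the site convention used in this article, where each node's VIP name is the node name plus "-vip":

```shell
#!/bin/sh
# Build the CLUSTER_NEW_NODES / CLUSTER_NEW_VIRTUAL_HOSTNAMES arguments
# for addNode.sh from a space-separated node list. Assumes every node's
# VIP hostname is "<node>-vip" (a naming convention, not a requirement).
build_addnode_args() {
    nodes=""
    vips=""
    for n in "$@"; do
        nodes="${nodes:+$nodes,}$n"        # append with comma separator
        vips="${vips:+$vips,}$n-vip"
    done
    printf '"CLUSTER_NEW_NODES={%s}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={%s}"\n' \
        "$nodes" "$vips"
}

build_addnode_args db3 db4
# prints: "CLUSTER_NEW_NODES={db3,db4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={db3-vip,db4-vip}"
```

The printed string is exactly what would be passed to ./addNode.sh when adding db3 and db4 in one pass.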

When the command finishes, it prompts you to run a few scripts manually:

Saving inventory on nodes (Thursday, September 20, 2012 11:54:03 AM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'db3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes db3
/u01/app/11.2.0/grid_1/root.sh #On nodes db3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid_1 was successful.
Please check '/var/tmp/silentInstall.log' for more details.

So, as instructed, open a new terminal on db3 and run orainstRoot.sh and root.sh as root. The output of each script follows.

Output of orainstRoot.sh:

[root@db3 grid_1]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Output of root.sh:

[root@db3 ~]# /u01/app/11.2.0/grid_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-09-20 12:32:16: Parsing the host name
2012-09-20 12:32:16: Checking for super user privileges
2012-09-20 12:32:16: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node db1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'db3'
CRS-2676: Start of 'ora.mdnsd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'db3'
CRS-2676: Start of 'ora.gipcd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db3'
CRS-2676: Start of 'ora.gpnpd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db3'
CRS-2676: Start of 'ora.cssdmonitor' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db3'
CRS-2672: Attempting to start 'ora.diskmon' on 'db3'
CRS-2676: Start of 'ora.diskmon' on 'db3' succeeded
CRS-2676: Start of 'ora.cssd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'db3'
CRS-2676: Start of 'ora.ctssd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'db3'
CRS-2676: Start of 'ora.drivers.acfs' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db3'
CRS-2676: Start of 'ora.asm' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db3'
CRS-2676: Start of 'ora.crsd' on 'db3' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'db3'
CRS-2676: Start of 'ora.evmd' on 'db3' succeeded
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
db3     2012/09/20 12:35:49     /u01/app/11.2.0/grid_1/cdata/db3/backup_20120920_123549.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 6118 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

At this point, the cluster node has been added successfully. Next, the RAC database software needs to be installed on the new node.
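Before moving on, it is worth confirming that db3 is now an active cluster member. Two standard Grid Infrastructure commands (run from $GRID_HOME/bin on any node) show this:

```shell
# Run as the grid user on any cluster node.
olsnodes -n -s              # list node names, numbers and status; db3 should be "Active"
crsctl check cluster -all   # CRS, CSS and EVM health on every node
```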

7. On db1 or db2, log in as the oracle user, change to $ORACLE_HOME/oui/bin, and run the following command to add the software:

[oracle@db1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={db3}"

Note: part of the output is omitted here. If several nodes are being added, separate the values inside the braces of CLUSTER_NEW_NODES with commas.

When the command finishes, it prompts you to run a script manually:

Saving inventory on nodes (Thursday, September 20, 2012 12:58:12 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes db3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/var/tmp/silentInstall.log' for more details.

So, as instructed, open a new terminal on db3 and run root.sh as root. The detailed output is not recorded here; the process is the same as for a single-instance installation.

8. Run dbca on db1 or db2 to add the instance. Just follow the prompts through this step; it is not described in detail here.
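For reference, the same step can also be done without the GUI by running dbca in silent mode. This is a sketch; the database name "orcl", instance name "orcl3", and the password are placeholders to be replaced with your own values:

```shell
# Run as the oracle user on db1 or db2. "orcl", "orcl3" and the
# password below are hypothetical -- substitute your own values.
dbca -silent -addInstance \
     -nodeList db3 \
     -gdbName orcl \
     -instanceName orcl3 \
     -sysDBAUserName sys \
     -sysDBAPassword your_sys_password
```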

9. Check and verify:

[grid@db1 bin]$ cluvfy stage -post nodeadd -n db3 [-verbose]

Reprinted from http://blog.csdn.net/u010098331/article/details/50766870