Changing the Hostname and IP Address in Oracle RAC


During the RAC installation the root scripts were run in the wrong order, so instance orcl1 ended up on node rac2 and orcl2 on node rac1, which looks awkward. I am taking the opportunity to practice changing the hostnames and IP addresses.

Original IP and hostname settings:

#public IP
172.12.1.11   rac1.oracle.com   rac1
172.12.1.12   rac2.oracle.com   rac2
#private IP
10.10.10.1    rac1-priv.oracle.com   rac1-priv
10.10.10.2    rac2-priv.oracle.com   rac2-priv
#virtual IP
172.12.1.21   rac1-vip.oracle.com   rac1-vip
172.12.1.22   rac2-vip.oracle.com   rac2-vip
#scan IP
172.12.1.31   rac-scan.oracle.com   rac-scan

Settings after the change:

#public IP
172.12.1.101  node1.oracle.com  node1
172.12.1.102  node2.oracle.com  node2
#private IP
10.10.10.11   node1-priv.oracle.com  node1-priv
10.10.10.12   node2-priv.oracle.com  node2-priv
#virtual IP
172.12.1.201  node1-vip.oracle.com  node1-vip
172.12.1.202  node2-vip.oracle.com  node2-vip
#scan IP
172.12.1.110  node-scan.oracle.com  node-scan

Change workflow:
Remove the rac2 node from the cluster, change its hostname and IP addresses, update /etc/hosts on both nodes, and add the node back into the cluster; then remove the rac1 node, change its hostname and IP addresses, update /etc/hosts on both nodes, and add it back into the cluster.

Detailed steps:

1. Check that both nodes are Active and Unpinned; if a node is pinned, unpin it with crsctl unpin css.

The following excerpt explains pinning:
When Oracle Clusterware 11g release 11.2 is installed on a cluster with no previous Oracle software version,
it configures cluster nodes dynamically, which is compatible with Oracle Database Release 11.2 and later,
but Oracle Database 10g and 11.1 require a persistent configuration.
This process of association of a node name with a node number is called pinning.

Note:
During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for
existing databases. This procedure is required only if you install older database versions after installing
Oracle Grid Infrastructure release 11.2 software.

A quick experiment with pin/unpin:

[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl pin css -n rac2
CRS-4664: Node rac2 successfully pinned.
[grid@rac2 ~]$ olsnodes -n -s -t
rac2    1       Active  Pinned
rac1    2       Active  Unpinned
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[grid@rac2 ~]$ olsnodes -n -s -t
rac2    1       Active  Unpinned
rac1    2       Active  Unpinned

2. As root on the rac2 node, run the following from the GRID_HOME:

[root@rac2 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
# Deconfigures the clusterware on this node. If this were the last node in the cluster, use:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -lastnode
2017-02-26 06:43:57: Parsing the host name
2017-02-26 06:43:57: Checking for super user privileges
2017-02-26 06:43:57: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
VIP exists.:rac1
VIP exists.: /rac1-vip/172.12.1.21/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/172.12.1.22/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 22702, multicast IP address 234.112.191.105, listening port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.OCR_VOTEDISK.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac2'
CRS-2677: Stop of 'ora.OCR_VOTEDISK.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

The resource status on the other node now shows only rac1:

[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.FRA.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.OCR_VOTEDISK.dg
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.eons
               ONLINE  ONLINE       rac1
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
ora.registry.acfs
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       rac1
      2        ONLINE  OFFLINE                               Instance Shutdown
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

3. As root on rac1, delete the node from the cluster:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.

4. As the grid user on the node being removed, update the node list in the inventory:

[grid@rac2 ~]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid
[grid@rac2 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

5. Clean up the Clusterware home installation on the node being removed. As the grid user, run:

$ Grid_home/deinstall/deinstall -local

Be sure to include the -local option; otherwise the Clusterware home would be removed from every node in the cluster.
The tool is interactive: press Enter to accept the defaults at each prompt, and answer y at the final [n] prompt to continue.

[grid@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
Install check configuration START
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac2
Install check configuration END

Traces log file: /u01/app/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip] >
The following information can be collected by running ifconfig -a on node "rac2"
Enter the IP netmask of Virtual IP "172.12.1.22" on node "rac2"[255.255.255.0] >
Enter the network interface name on which the virtual IP address "172.12.1.22" is active >
Enter an address or the name of the virtual IP[] >

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check1620682285648405807.log
Specify all RAC listeners that are to be de-configured [LISTENER,LISTENER_SCAN1]:
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check4251457041802335046.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)rac2
Since -local option has been specified, the Oracle home will be de-installed only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for de-install is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-02-26_07-15-19-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-02-26_07-15-19-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean1736507612183916135.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean7845023467414677312.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
    Stopping listener on node "rac2": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
    Stopping listener on node "rac2": LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

---------------------------------------->

Remove the directory: /tmp/deinstall2017-02-26_07-15-08-AM on node:
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

Oracle install clean START
Clean install operation removing temporary directory '/tmp/install' on node 'rac2'
Oracle install clean END

Moved default properties file /tmp/deinstall2017-02-26_07-15-08-AM/response/deinstall_Ora11g_gridinfrahome1.rsp as /tmp/deinstall2017-02-26_07-15-08-AM/response/deinstall_Ora11g_gridinfrahome1.rsp3

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware was already stopped and de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle install successfully cleaned up the temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[grid@rac2 deinstall]$

6. As the grid user on rac1, run the following command to update the node list:

[grid@rac1 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3110 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

7. As the grid user on rac1, verify that the rac2 node has been removed:

[grid@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose

Performing post-checks for node removal

Checking CRS integrity...
The Oracle clusterware is healthy on node "rac1"
CRS integrity check passed
Result: Node removal check passed

Post-check for node removal was successful.

8. Once rac2 has been removed cleanly, change the hostname and IP addresses of the rac2 machine (it becomes node1) and update /etc/hosts on both nodes. The result on the renamed node:

[root@node1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1
[root@node1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#public IP
172.12.1.101  node1.oracle.com  node1
172.12.1.11   rac1.oracle.com   rac1
#private IP
10.10.10.11   node1-priv.oracle.com  node1-priv
10.10.10.1    rac1-priv.oracle.com   rac1-priv
#virtual IP
172.12.1.201  node1-vip.oracle.com  node1-vip
172.12.1.21   rac1-vip.oracle.com   rac1-vip
#scan IP
172.12.1.31   rac-scan.oracle.com   rac-scan
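For reference, a minimal sketch of the OS-level changes behind the listing above, assuming RHEL/OEL 5.x with the public network on eth0 and the private interconnect on eth1 (interface names, config file paths, and values are illustrative and may differ in your environment):

[root@rac2 ~]# hostname node1                                                   # set the running hostname
[root@rac2 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=node1/' /etc/sysconfig/network   # persist it across reboots
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0                     # public IP:  IPADDR=172.12.1.101
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1                     # private IP: IPADDR=10.10.10.11
[root@rac2 ~]# service network restart                                          # apply the new addresses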

9. As the grid user on rac1, check whether the new node meets the prerequisites for node addition:

[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n node1 -fixup -fixupdir /tmp -verbose

Performing pre-checks for node addition

Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node1                                 yes
Result: Node reachability check passed from node "rac1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  node1                                 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for node addition was unsuccessful on all the nodes.

The check fails because the hostname has changed, so SSH user equivalence for the grid user has to be rebuilt between the two nodes:

/u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts rac1 node1 -noPromptPassphrase 
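A quick sanity check after the script finishes: each node should be able to run a command on the other over SSH without a password prompt, for example:

[grid@rac1 ~]$ ssh node1 date
[grid@node1 ~]$ ssh rac1 date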

10. Add the node1 node. As the grid user on rac1, run the command below; before doing so, however, addNode.sh needs to be modified (see below).

[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}" -silent

Some insignificant pre-add checks fail. In a GUI installation they can simply be ignored, but in silent mode they cannot be skipped directly, so addNode.sh needs a small modification:

#!/bin/sh
OHOME=/u01/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
        EXIT_CODE=$?;
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
        $CHECK_NODEADD
        EXIT_CODE=$?;
        EXIT_CODE=0   ## line added here so that the minor check failures are ignored
        if [ $EXIT_CODE -eq 0 ]
        then
                $ADDNODE
                EXIT_CODE=$?;
        fi
fi
exit $EXIT_CODE ;

When the command finishes, it prompts you to run a script as root on node1:

WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/11.2.0/grid/root.sh #On nodes node1
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
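As the first if branch of the script shows, addNode.sh skips the pre-add checks entirely when the IGNORE_PREADDNODE_CHECKS environment variable is set to Y, so an alternative to editing the script is to export that variable before running it. This is not what was done here, just another way to get the same effect:

[grid@rac1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}" -silent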

Running the script fails with an error:

[root@node1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2017-02-26 09:28:09: Parsing the host name
2017-02-26 09:28:09: Checking for super user privileges
2017-02-26 09:28:09: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
/u01/app/11.2.0/grid/bin/cluutil -sourcefile /etc/oracle/ocr.loc -sourcenode rac2 -destfile /u01/app/11.2.0/grid/srvm/admin/ocrloc.tmp -nodelist rac2 ... failed
Unable to copy OCR locations
validateOCR failed for +OCR_VOTEDISK at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7979.

Fix: edit /u01/app/11.2.0/grid/crs/install/crsconfig_params on node1 and change every occurrence of rac2 to node1, so that it reads as follows (a sed sketch follows the listing):

[root@node1 install]# cat crsconfig_params | grep node1
HOST_NAME_LIST=rac1,node1
NODE_NAME_LIST=rac1,node1
CRS_NODEVIPS='rac1-vip/255.255.255.0/eth0,node1-vip/255.255.255.0/eth0'
NODELIST=rac1,node1
NEW_NODEVIPS='rac1-vip/255.255.255.0/eth0,node1-vip/255.255.255.0/eth0'
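A quick way to make that edit, assuming every remaining rac2 string in the file should indeed become node1 (back up the file first and verify the result with the grep above):

[root@node1 install]# cp crsconfig_params crsconfig_params.bak
[root@node1 install]# sed -i 's/rac2/node1/g' crsconfig_params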

Re-establish SSH equivalence as the grid user on node1:

[grid@node1 grid]$ /u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts rac1 node1 -noPromptPassphrase

Then re-run the script on node1:

[root@node1 ~]# /u01/app/11.2.0/grid/root.sh
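Once root.sh completes, it is worth confirming that node1 has joined the cluster before moving on to the database resources, for example with the same commands used earlier in this article (output will vary):

[grid@node1 ~]$ olsnodes -n -s -t
[grid@node1 ~]$ crsctl stat res -t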

Reassign the instance to the renamed node:

[grid@node1 grid]$ srvctl modify instance -d orcl -i orcl1 -n node1
[grid@node1 grid]$ srvctl status database -d orcl
Instance orcl1 is not running on node node1
Instance orcl2 is running on node rac1
[grid@node1 grid]$ srvctl start instance -d orcl -i orcl1
[grid@node1 grid]$ srvctl status database -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node rac1

Now remove the rac1 node, rename it to node2, and repeat the steps above; the key commands are summarized in the sketch below.
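For reference, a sketch of the same sequence for the second node, with rac1/node2 substituted into the commands used above (interactive output omitted; hostnames, prompts, and the crsconfig_params fix mirror the first pass and may differ slightly in your environment):

[root@rac1 ~]#  /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac1
[grid@rac1 ~]$  /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE -silent -local
[grid@rac1 ~]$  /u01/app/11.2.0/grid/deinstall/deinstall -local
[grid@node1 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={node1}" CRS=TRUE -silent -local
[grid@node1 ~]$ cluvfy stage -post nodedel -n rac1 -verbose
# change the hostname/IPs on rac1 to node2, then update /etc/hosts on both nodes
[grid@node1 ~]$ /u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts node1 node2 -noPromptPassphrase
[grid@node1 ~]$ cluvfy stage -pre nodeadd -n node2 -fixup -fixupdir /tmp -verbose
[grid@node1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}" -silent
[root@node2 ~]# /u01/app/11.2.0/grid/root.sh   # if it fails as before, fix rac1 -> node2 in crsconfig_params and re-run
[grid@node2 ~]$ srvctl modify instance -d orcl -i orcl2 -n node2
[grid@node2 ~]$ srvctl start instance -d orcl -i orcl2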

None of the steps above can be skipped, or errors will follow. If you hit a problem during the procedure, go back and check whether any step was done incorrectly or left out.
