RHEL 7.2 + Oracle 11.2.0.4 RAC: Deleting and Re-adding a Node


During an experiment, part of the system files were accidentally deleted, leaving the operating system on RAC node 1 unable to boot.


Version information:

[root@rac2 bin]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.2 (Maipo)

SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

Node information:
[grid@rac2 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.80.150  rac1
192.168.80.151  rac2

192.168.60.10   rac1-priv
192.168.60.11   rac2-priv

192.168.80.180  rac1-vip
192.168.80.181  rac2-vip

192.168.80.99   scan


Because system files were mistakenly deleted on rac1 and its operating system no longer boots, the node is repaired by purging it from the cluster and then adding it back.


Reference for deleting node rac1:
How to remove/delete a node from Grid Infrastructure Clusterware when the node has failed (Doc ID 1262925.1)


Delete the rac1 database instance with dbca in silent mode:
[oracle@rac2 bin]$ dbca -silent -deleteInstance -nodeList rac1 -gdbName orcl -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/orcl.log" for further details.
[oracle@rac2 bin]$ 
[oracle@rac2 bin]$ 
[oracle@rac2 bin]$ 
[oracle@rac2 bin]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u02/app/oracle/product/11.2.0/db_home
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl2
Disk Groups: DATA,FRA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed


Confirm that the rac1 database instance is gone from the cluster resources (ora.orcl.db now runs only instance 2; the rac1 VIP has failed over to rac2):
[grid@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac2                                         
ora.FRA.dg
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.orcl.db
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac1.vip
      1        ONLINE  INTERMEDIATE rac2                     FAILED OVER         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2    
 


Update the node list recorded for the database home in the OUI inventory:
[oracle@rac2 bin]$ echo $ORACLE_HOME
/u02/app/oracle/product/11.2.0/db_home
[oracle@rac2 bin]$  ./runInstaller -updateNodeList ORACLE_HOME=/u02/app/oracle/product/11.2.0/db_home "CLUSTER_NODES=rac2" 
Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB.   Actual 3903 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


Remove the rac1 VIP:
[root@rac2 bin]#  cd /u01/app/11.2.0/grid/bin/
[root@rac2 bin]# ./srvctl stop vip -i rac1
PRCC-1016 : rac1-vip was already stopped
PRCR-1005 : Resource ora.rac1.vip is already stopped


[root@rac2 bin]# ./srvctl remove vip -i rac1 -f
[root@rac2 bin]# ./crsctl stat res -t |grep vip
ora.rac2.vip
ora.scan1.vip


Check the cluster node status:
[root@rac2 bin]# ./olsnodes -s -t
rac1    Inactive        Unpinned
rac2    Active  Unpinned
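One caveat from MOS note 1262925.1 before deleting the node: a pinned node must be unpinned first. The output above already shows rac1 as Unpinned, so the sketch below (paths from this environment) is a no-op here:

```shell
# Per MOS 1262925.1, a pinned node must be unpinned before
# "crsctl delete node" will succeed. Run as root from the GI home
# on a surviving node; here rac1 is already Unpinned, so the body
# of the "if" never executes.
cd /u01/app/11.2.0/grid/bin
if ./olsnodes -t | awk '$1 == "rac1" && $NF == "Pinned" { found = 1 } END { exit !found }'; then
    ./crsctl unpin css -n rac1
fi
```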


Delete node 1 from the cluster:
[root@rac2 bin]# ./crsctl delete node -n rac1
CRS-4661: Node rac1 successfully deleted.




[grid@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac2                                         
ora.FRA.dg
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.orcl.db
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2          
 
Update the node list for the Grid Infrastructure home in the OUI inventory:
[grid@rac2 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac2" CRS=TRUE -silent
Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB.   Actual 3903 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.




Verify that only node rac2 remains:
[grid@rac2 bin]$ olsnodes
rac2




[grid@rac2 bin]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac2                                         
ora.FRA.dg
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.orcl.db
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2         


At this point, all rac1 information has been cleaned out of the cluster on rac2.


Reinstall the operating system on rac1, then configure kernel parameters, create the OS users and groups, make the shared storage disks visible, set up SSH user equivalence, and so on (details omitted).
Note: there is no need to install the Grid Infrastructure or database software on the freshly built rac1 server; addNode.sh copies the homes over from rac2.
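A hedged sketch of the omitted OS preparation. The group names, GIDs and the oracle UID below are assumptions for a typical 11.2 setup (only the grid UID of 1100 is confirmed by the CVU check later in this log); mirror whatever `id grid` and `id oracle` report on rac2:

```shell
# Run as root on the freshly installed rac1; IDs must match rac2 exactly.
groupadd -g 1000 oinstall                          # assumed GID
groupadd -g 1100 asmadmin                          # assumed GID
groupadd -g 1200 dba                               # assumed GID
useradd  -u 1100 -g oinstall -G asmadmin grid      # UID 1100 per the CVU check
useradd  -u 1101 -g oinstall -G dba      oracle    # assumed UID

# Re-establish passwordless SSH (user equivalence) in both directions for
# grid and oracle; the addNode.sh pre-check verifies this. E.g. as grid
# on rac2:
#   ssh-keygen -t rsa        # if no key exists yet
#   ssh-copy-id grid@rac1
```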


References for adding node rac1:
https://docs.oracle.com/cd/E18283_01/rac.112/e16795/adddelunix.htm#BEICADHD
http://koumm.blog.51cto.com/703525/1729915/


Run the following on rac2 to add rac1 to the cluster:
[grid@rac2 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"


Performing pre-checks for node addition 


Checking node reachability...
Node reachability check passed from node "rac2"




Checking user equivalence...
User equivalence check passed for user "grid"


Checking CRS integrity...


Clusterware version consistency passed


CRS integrity check passed


Checking shared resources...


Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed




Checking node connectivity...


Checking hosts config file...


Verification of the hosts config file successful


Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.80.0"




Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.60.0"


Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.80.0".
Subnet mask consistency check passed for subnet "192.168.60.0".
Subnet mask consistency check passed.


Node connectivity check passed


Checking multicast communication...


Checking subnet "192.168.80.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.80.0" for multicast communication with multicast group "230.0.1.0" passed.


Checking subnet "192.168.60.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.60.0" for multicast communication with multicast group "230.0.1.0" passed.


Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/u01/app/11.2.0/grid,rac2:/tmp"
Free disk space check passed for "rac1:/u01/app/11.2.0/grid,rac1:/tmp"
Check for multiple users with UID value 1100 passed 
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check failed for "compat-libstdc++-33(x86_64)"
Check failed on nodes: 
        rac1
Package existence check passed for "elfutils-libelf(x86_64)"


WARNING: 
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac2: elfutils-libelf-devel(x86_64)-0.163-3.el7,elfutils-libelf-devel(i686)-0.163-3.el7


WARNING: 
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac1: elfutils-libelf-devel(x86_64)-0.163-3.el7,elfutils-libelf-devel(i686)-0.163-3.el7
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes: 
        rac2,rac1
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed


Starting check for consistency of primary group of root user


Check for consistency of root user's primary group passed


Checking OCR integrity...


OCR integrity check passed


Checking Oracle Cluster Voting Disk configuration...


Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed


Starting Clock synchronization checks using Network Time Protocol(NTP)...


NTP Configuration file check started...
No NTP Daemons or Services were found to be running


Clock synchronization check using Network Time Protocol(NTP) passed




User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes


File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1


File "/etc/resolv.conf" is not consistent across nodes




Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.


Pre-check for node addition was unsuccessful on all the nodes. 
[grid@rac2 ~]$ 






Any one minor environment issue will make the pre-check fail. In a GUI installation such findings can be ignored interactively, but in silent mode they cannot be ignored directly, so addNode.sh needs a small modification:
[grid@rac2 ~]$ vi $ORACLE_HOME/oui/bin/addNode.sh
#!/bin/sh
OHOME=/u01/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
        EXIT_CODE=$?;
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
        $CHECK_NODEADD
        EXIT_CODE=$?;
        EXIT_CODE=0    # <-- add this line to discard the pre-check result
        if [ $EXIT_CODE -eq 0 ]
        then
                $ADDNODE
                EXIT_CODE=$?;
        fi
fi
exit $EXIT_CODE ;
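Incidentally, the script shown above already contains an escape hatch: its first `if` branch calls runInstaller directly whenever the IGNORE_PREADDNODE_CHECKS environment variable is set to Y. So an alternative to editing the file is:

```shell
# Skip the cluvfy pre-check via the variable the script itself tests,
# instead of patching addNode.sh:
export IGNORE_PREADDNODE_CHECKS=Y
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac1}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"
```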




With that change in place, run addNode.sh again. The pre-check still reports the same failures, but the installer now continues:
[grid@rac2 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"


Performing pre-checks for node addition 


Checking node reachability...
Node reachability check passed from node "rac2"




Checking user equivalence...
User equivalence check passed for user "grid"


Checking CRS integrity...


Clusterware version consistency passed


CRS integrity check passed


Checking shared resources...


Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed




Checking node connectivity...


Checking hosts config file...


Verification of the hosts config file successful


Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.80.0"




Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.60.0"


Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.80.0".
Subnet mask consistency check passed for subnet "192.168.60.0".
Subnet mask consistency check passed.


Node connectivity check passed


Checking multicast communication...


Checking subnet "192.168.80.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.80.0" for multicast communication with multicast group "230.0.1.0" passed.


Checking subnet "192.168.60.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.60.0" for multicast communication with multicast group "230.0.1.0" passed.


Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/u01/app/11.2.0/grid,rac2:/tmp"
Free disk space check passed for "rac1:/u01/app/11.2.0/grid,rac1:/tmp"
Check for multiple users with UID value 1100 passed 
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check failed for "compat-libstdc++-33(x86_64)"
Check failed on nodes: 
        rac1
Package existence check passed for "elfutils-libelf(x86_64)"


WARNING: 
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac2: elfutils-libelf-devel(x86_64)-0.163-3.el7,elfutils-libelf-devel(i686)-0.163-3.el7


WARNING: 
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac1: elfutils-libelf-devel(x86_64)-0.163-3.el7,elfutils-libelf-devel(i686)-0.163-3.el7
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes: 
        rac2,rac1
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed


Starting check for consistency of primary group of root user


Check for consistency of root user's primary group passed


Checking OCR integrity...


OCR integrity check passed


Checking Oracle Cluster Voting Disk configuration...


Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed


Starting Clock synchronization checks using Network Time Protocol(NTP)...


NTP Configuration file check started...
No NTP Daemons or Services were found to be running


Clock synchronization check using Network Time Protocol(NTP) passed




User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes


File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1


File "/etc/resolv.conf" is not consistent across nodes




Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.


Pre-check for node addition was unsuccessful on all the nodes. 
Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB.   Actual 3858 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.




Performing tests to see whether nodes rac1 are available
............................................................... 100% Done.


.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac1
         /: Required 5.79GB : Available 41.55GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11g 11.2.0.4.0 
      Java Development Kit 1.5.0.51.10 
      Installer SDK Component 11.2.0.4.0 
      Oracle One-Off Patch Installer 11.2.0.3.4 
      Oracle Universal Installer 11.2.0.4.0 
      Oracle RAC Required Support Files-HAS 11.2.0.4.0 
      Oracle USM Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Enterprise Manager Common Core Files 10.2.0.4.5 
      Oracle DBCA Deconfiguration 11.2.0.4.0 
      Oracle RAC Deconfiguration 11.2.0.4.0 
      Oracle Quality of Service Management (Server) 11.2.0.4.0 
      Installation Plugin Files 11.2.0.4.0 
      Universal Storage Manager Files 11.2.0.4.0 
      Oracle Text Required Support Files 11.2.0.4.0 
      Automatic Storage Management Assistant 11.2.0.4.0 
      Oracle Database 11g Multimedia Files 11.2.0.4.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0 
      Oracle Core Required Support Files 11.2.0.4.0 
      Bali Share 1.1.18.0.0 
      Oracle Database Deconfiguration 11.2.0.4.0 
      Oracle Quality of Service Management (Client) 11.2.0.4.0 
      Expat libraries 2.0.1.0.1 
      Oracle Containers for Java 11.2.0.4.0 
      Perl Modules 5.10.0.0.1 
      Secure Socket Layer 11.2.0.4.0 
      Oracle JDBC/OCI Instant Client 11.2.0.4.0 
      Oracle Multimedia Client Option 11.2.0.4.0 
      LDAP Required Support Files 11.2.0.4.0 
      Character Set Migration Utility 11.2.0.4.0 
      Perl Interpreter 5.10.0.0.2 
      PL/SQL Embedded Gateway 11.2.0.4.0 
      OLAP SQL Scripts 11.2.0.4.0 
      Database SQL Scripts 11.2.0.4.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      SSL Required Support Files for InstantClient 11.2.0.4.0 
      SQL*Plus Files for Instant Client 11.2.0.4.0 
      Oracle Net Required Support Files 11.2.0.4.0 
      Oracle Database User Interface 2.2.13.0.0 
      RDBMS Required Support Files for Instant Client 11.2.0.4.0 
      RDBMS Required Support Files Runtime 11.2.0.4.0 
      XML Parser for Java 11.2.0.4.0 
      Oracle Security Developer Tools 11.2.0.4.0 
      Oracle Wallet Manager 11.2.0.4.0 
      Enterprise Manager plugin Common Files 11.2.0.4.0 
      Platform Required Support Files 11.2.0.4.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      RDBMS Required Support Files 11.2.0.4.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle Help For Java 4.2.9.0.0 
      Enterprise Manager Common Files 10.2.0.4.5 
      Deinstallation Tool 11.2.0.4.0 
      Oracle Java Client 11.2.0.4.0 
      Cluster Verification Utility Files 11.2.0.4.0 
      Oracle Notification Service (eONS) 11.2.0.4.0 
      Oracle LDAP administration 11.2.0.4.0 
      Cluster Verification Utility Common Files 11.2.0.4.0 
      Oracle Clusterware RDBMS Files 11.2.0.4.0 
      Oracle Locale Builder 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Buildtools Common Files 11.2.0.4.0 
      HAS Common Files 11.2.0.4.0 
      SQL*Plus Required Support Files 11.2.0.4.0 
      XDK Required Support Files 11.2.0.4.0 
      Agent Required Support Files 10.2.0.4.5 
      Parser Generator Required Support Files 11.2.0.4.0 
      Precompiler Required Support Files 11.2.0.4.0 
      Installation Common Files 11.2.0.4.0 
      Required Support Files 11.2.0.4.0 
      Oracle JDBC/THIN Interfaces 11.2.0.4.0 
      Oracle Multimedia Locator 11.2.0.4.0 
      Oracle Multimedia 11.2.0.4.0 
      Assistant Common Files 11.2.0.4.0 
      Oracle Net 11.2.0.4.0 
      PL/SQL 11.2.0.4.0 
      HAS Files for DB 11.2.0.4.0 
      Oracle Recovery Manager 11.2.0.4.0 
      Oracle Database Utilities 11.2.0.4.0 
      Oracle Notification Service 11.2.0.3.0 
      SQL*Plus 11.2.0.4.0 
      Oracle Netca Client 11.2.0.4.0 
      Oracle Advanced Security 11.2.0.4.0 
      Oracle JVM 11.2.0.4.0 
      Oracle Internet Directory Client 11.2.0.4.0 
      Oracle Net Listener 11.2.0.4.0 
      Cluster Ready Services Files 11.2.0.4.0 
      Oracle Database 11g 11.2.0.4.0 
-----------------------------------------------------------------------------




Instantiating scripts for add node (Friday, November 3, 2017 1:46:44 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete


Copying to remote nodes (Friday, November 3, 2017 1:46:47 PM CST)
............................................................................................                                 96% Done.
Home copied to new nodes


Saving inventory on nodes (Friday, November 3, 2017 1:52:14 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac1'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac1
/u01/app/11.2.0/grid/root.sh #On nodes rac1
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.






As prompted, run the configuration scripts on rac1 as the root user:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.


Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.




[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...




Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow: 
2017-11-03 13:56:06.957: 
[client(22489)]CRS-2101:The OLR was formatted using version 3.
2017-11-03 14:06:32.641: 
[ohasd(22736)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.
2017-11-03 14:29:31.977: 
[ohasd(56669)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.




This error appears because RHEL 7 boots with systemd and never processes the /etc/inittab entry that 11.2.0.4 relies on to spawn init.ohasd. Create an ohas.service unit file on every RAC node:
vi /usr/lib/systemd/system/ohas.service


With the following content:
[Unit]
Description=Oracle High Availability Services
After=syslog.target


[Service]
Type=simple
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Restart=always


[Install]
WantedBy=multi-user.target
Reload systemd and enable the ohas service at boot:
systemctl daemon-reload
systemctl enable ohas.service


Because the earlier root.sh run failed, the half-configured clusterware on rac1 must be deconfigured before retrying:
[root@rac1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /u01/app/11.2.0/grid/crs/install) at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
BEGIN failed--compilation aborted at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
Compilation failed in require at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 305.
BEGIN failed--compilation aborted at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 305.
The operating-system perl cannot find Env.pm, so run rootcrs.pl with the perl interpreter bundled in the Grid home instead:
[root@rac1 ~]# /u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd


CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node




Then run root.sh again:
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow: 
2017-11-03 13:56:06.957: 
[client(22489)]CRS-2101:The OLR was formatted using version 3.
2017-11-03 14:06:32.641: 
[ohasd(22736)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.
2017-11-03 14:29:31.977: 
[ohasd(56669)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.
[client(58840)]CRS-10001:03-Nov-17 14:40 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'unknown'
[client(58842)]CRS-10001:03-Nov-17 14:40 ACFS-9201: Not Supported
2017-11-03 14:43:54.711: 
[client(59974)]CRS-2101:The OLR was formatted using version 3.
2017-11-03 14:45:04.571: 
[ohasd(58573)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.






If root.sh stalls with the "ohasd failed to start" error above, manually run
/bin/systemctl start ohas.service
in another terminal; after a short wait the script continues on its own.
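Rather than guessing how long "a short wait" is, a small polling helper can watch for the init.ohasd process. This is a hypothetical sketch; `wait_for` and the `pgrep` usage are illustrative, not part of the Oracle scripts:

```shell
# Hypothetical helper: retry a command once per second until it
# succeeds, or give up after a timeout (first argument, in seconds).
wait_for() {
    timeout=$1; shift
    t=0
    until "$@" > /dev/null 2>&1; do
        t=$((t + 1))
        [ "$t" -ge "$timeout" ] && return 1
        sleep 1
    done
    return 0
}

# Illustrative usage on the failed node:
#   systemctl start ohas.service
#   wait_for 60 pgrep -f init.ohasd && echo "init.ohasd is up"
```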


CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded


Disk Group OCR creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/u01/asm-disk/ocr5' matches no disks
ORA-15031: disk specification '/u01/asm-disk/ocr4' matches no disks
ORA-15031: disk specification '/u01/asm-disk/ocr3' matches no disks
ORA-15031: disk specification '/u01/asm-disk/ocr2' matches no disks
ORA-15031: disk specification '/u01/asm-disk/ocr1' matches no disks




Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@rac1 ~]# 




The errors above occur because the ASM storage disks could not be found. After restoring access to the shared storage, run the deconfig cleanup again and re-run root.sh.
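Before re-running root.sh, it can help to confirm that the device paths named in the ORA-15031 errors are visible again. A hypothetical `check_disks` helper (not an Oracle tool) for that sanity check:

```shell
# Hypothetical helper: report any ASM disk paths that are missing and
# return the number of missing paths as the exit status (0 = all found).
check_disks() {
    missing=0
    for d in "$@"; do
        [ -e "$d" ] || { echo "missing: $d"; missing=$((missing + 1)); }
    done
    return "$missing"
}

# Illustrative usage with the paths from the errors above:
#   check_disks /u01/asm-disk/ocr1 /u01/asm-disk/ocr2 /u01/asm-disk/ocr3 \
#               /u01/asm-disk/ocr4 /u01/asm-disk/ocr5 || echo "fix storage first"
```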
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]# 


[root@rac1 ~]# su - grid
Last login: Fri Nov  3 15:00:29 CST 2017 on pts/0
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.FRA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.orcl.db
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2          
 


On node rac2, run the following to copy the Oracle database software to the rebuilt rac1 node (optionally verify readiness first with cluvfy stage -pre nodeadd -n rac1):
[oracle@rac2 ~]$ $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac1}"


Performing pre-checks for node addition 


Checking node reachability...
Node reachability check passed from node "rac2"




Checking user equivalence...
User equivalence check passed for user "oracle"


WARNING: 
Node "rac1" already appears to be part of cluster


Pre-check for node addition was successful. 
Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB.   Actual 3857 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.




Performing tests to see whether nodes rac1 are available
............................................................... 100% Done.


.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u02/app/oracle/product/11.2.0/db_home
   New Nodes
Space Requirements
   New Nodes
      rac1
         /: Required 5.17GB : Available 37.25GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.4.0 
      Java Development Kit 1.5.0.51.10 
      Installer SDK Component 11.2.0.4.0 
      Oracle One-Off Patch Installer 11.2.0.3.4 
      Oracle Universal Installer 11.2.0.4.0 
      Oracle USM Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Oracle DBCA Deconfiguration 11.2.0.4.0 
      Oracle RAC Deconfiguration 11.2.0.4.0 
      Oracle Database Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Client 10.3.2.1.0 
      Oracle Configuration Manager 10.3.8.1.0 
      Oracle ODBC Driverfor Instant Client 11.2.0.4.0 
      LDAP Required Support Files 11.2.0.4.0 
      SSL Required Support Files for InstantClient 11.2.0.4.0 
      Bali Share 1.1.18.0.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      Oracle Real Application Testing 11.2.0.4.0 
      Oracle Database Vault J2EE Application 11.2.0.4.0 
      Oracle Label Security 11.2.0.4.0 
      Oracle Data Mining RDBMS Files 11.2.0.4.0 
      Oracle OLAP RDBMS Files 11.2.0.4.0 
      Oracle OLAP API 11.2.0.4.0 
      Platform Required Support Files 11.2.0.4.0 
      Oracle Database Vault option 11.2.0.4.0 
      Oracle RAC Required Support Files-HAS 11.2.0.4.0 
      SQL*Plus Required Support Files 11.2.0.4.0 
      Oracle Display Fonts 9.0.2.0.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle JDBC Server Support Package 11.2.0.4.0 
      Oracle SQL Developer 11.2.0.4.0 
      Oracle Application Express 11.2.0.4.0 
      XDK Required Support Files 11.2.0.4.0 
      RDBMS Required Support Files for Instant Client 11.2.0.4.0 
      SQLJ Runtime 11.2.0.4.0 
      Database Workspace Manager 11.2.0.4.0 
      RDBMS Required Support Files Runtime 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Exadata Storage Server 11.2.0.1.0 
      Provisioning Advisor Framework 10.2.0.4.3 
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0 
      Enterprise Manager Repository Core Files 10.2.0.4.5 
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0 
      Enterprise Manager Grid Control Core Files 10.2.0.4.5 
      Enterprise Manager Common Core Files 10.2.0.4.5 
      Enterprise Manager Agent Core Files 10.2.0.4.5 
      RDBMS Required Support Files 11.2.0.4.0 
      regexp 2.1.9.0.0 
      Agent Required Support Files 10.2.0.4.5 
      Oracle 11g Warehouse Builder Required Files 11.2.0.4.0 
      Oracle Notification Service (eONS) 11.2.0.4.0 
      Oracle Text Required Support Files 11.2.0.4.0 
      Parser Generator Required Support Files 11.2.0.4.0 
      Oracle Database 11g Multimedia Files 11.2.0.4.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0 
      Oracle Multimedia Annotator 11.2.0.4.0 
      Oracle JDBC/OCI Instant Client 11.2.0.4.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0 
      Precompiler Required Support Files 11.2.0.4.0 
      Oracle Core Required Support Files 11.2.0.4.0 
      Sample Schema Data 11.2.0.4.0 
      Oracle Starter Database 11.2.0.4.0 
      Oracle Message Gateway Common Files 11.2.0.4.0 
      Oracle XML Query 11.2.0.4.0 
      XML Parser for Oracle JVM 11.2.0.4.0 
      Oracle Help For Java 4.2.9.0.0 
      Installation Plugin Files 11.2.0.4.0 
      Enterprise Manager Common Files 10.2.0.4.5 
      Expat libraries 2.0.1.0.1 
      Deinstallation Tool 11.2.0.4.0 
      Oracle Quality of Service Management (Client) 11.2.0.4.0 
      Perl Modules 5.10.0.0.1 
      JAccelerator (COMPANION) 11.2.0.4.0 
      Oracle Containers for Java 11.2.0.4.0 
      Perl Interpreter 5.10.0.0.2 
      Oracle Net Required Support Files 11.2.0.4.0 
      Secure Socket Layer 11.2.0.4.0 
      Oracle Universal Connection Pool 11.2.0.4.0 
      Oracle JDBC/THIN Interfaces 11.2.0.4.0 
      Oracle Multimedia Client Option 11.2.0.4.0 
      Oracle Java Client 11.2.0.4.0 
      Character Set Migration Utility 11.2.0.4.0 
      Oracle Code Editor 1.2.1.0.0I 
      PL/SQL Embedded Gateway 11.2.0.4.0 
      OLAP SQL Scripts 11.2.0.4.0 
      Database SQL Scripts 11.2.0.4.0 
      Oracle Locale Builder 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      SQL*Plus Files for Instant Client 11.2.0.4.0 
      Required Support Files 11.2.0.4.0 
      Oracle Database User Interface 2.2.13.0.0 
      Oracle ODBC Driver 11.2.0.4.0 
      Oracle Notification Service 11.2.0.3.0 
      XML Parser for Java 11.2.0.4.0 
      Oracle Security Developer Tools 11.2.0.4.0 
      Oracle Wallet Manager 11.2.0.4.0 
      Cluster Verification Utility Common Files 11.2.0.4.0 
      Oracle Clusterware RDBMS Files 11.2.0.4.0 
      Oracle UIX 2.2.24.6.0 
      Enterprise Manager plugin Common Files 11.2.0.4.0 
      HAS Common Files 11.2.0.4.0 
      Precompiler Common Files 11.2.0.4.0 
      Installation Common Files 11.2.0.4.0 
      Oracle Help for the  Web 2.0.14.0.0 
      Oracle LDAP administration 11.2.0.4.0 
      Buildtools Common Files 11.2.0.4.0 
      Assistant Common Files 11.2.0.4.0 
      Oracle Recovery Manager 11.2.0.4.0 
      PL/SQL 11.2.0.4.0 
      Generic Connectivity Common Files 11.2.0.4.0 
      Oracle Database Gateway for ODBC 11.2.0.4.0 
      Oracle Programmer 11.2.0.4.0 
      Oracle Database Utilities 11.2.0.4.0 
      Enterprise Manager Agent 10.2.0.4.5 
      SQL*Plus 11.2.0.4.0 
      Oracle Netca Client 11.2.0.4.0 
      Oracle Multimedia Locator 11.2.0.4.0 
      Oracle Call Interface (OCI) 11.2.0.4.0 
      Oracle Multimedia 11.2.0.4.0 
      Oracle Net 11.2.0.4.0 
      Oracle XML Development Kit 11.2.0.4.0 
      Oracle Internet Directory Client 11.2.0.4.0 
      Database Configuration and Upgrade Assistants 11.2.0.4.0 
      Oracle JVM 11.2.0.4.0 
      Oracle Advanced Security 11.2.0.4.0 
      Oracle Net Listener 11.2.0.4.0 
      Oracle Enterprise Manager Console DB 11.2.0.4.0 
      HAS Files for DB 11.2.0.4.0 
      Oracle Text 11.2.0.4.0 
      Oracle Net Services 11.2.0.4.0 
      Oracle Database 11g 11.2.0.4.0 
      Oracle OLAP 11.2.0.4.0 
      Oracle Spatial 11.2.0.4.0 
      Oracle Partitioning 11.2.0.4.0 
      Enterprise Edition Options 11.2.0.4.0 
-----------------------------------------------------------------------------




Instantiating scripts for add node (Friday, November 3, 2017 3:02:34 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete


Copying to remote nodes (Friday, November 3, 2017 3:02:39 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes


Saving inventory on nodes (Friday, November 3, 2017 3:13:06 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u02/app/oracle/product/11.2.0/db_home/root.sh #On nodes rac1
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u02/app/oracle/product/11.2.0/db_home was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac2 ~]$ 




As prompted, run the following as root on node rac1:
[root@rac1 asmca]# /u02/app/oracle/product/11.2.0/db_home/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/app/oracle/product/11.2.0/db_home


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
/bin/chown: cannot access ‘/u02/app/oracle/product/11.2.0/db_home/bin/nmhs’: No such file or directory
/bin/chmod: cannot access ‘/u02/app/oracle/product/11.2.0/db_home/bin/nmhs’: No such file or directory
Finished product-specific root actions.




Add the database instance on node rac1 with dbca in silent mode:
[oracle@rac2 ~]$ dbca -silent -addInstance -nodeList rac1 -gdbName orcl -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.
[oracle@rac2 ~]$ 


At this point the node addition is complete and the RAC is repaired:
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.FRA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.orcl.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2