Removing a Node from 11g RAC: A Detailed Walkthrough


Query the current database node information

The RAC currently consists of three nodes (node1, node2, node3); the third node, node3, will now be removed.
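For example, the node and instance layout can be confirmed with commands such as the following (a quick sketch; olsnodes is run as the grid user and the query as SYSDBA):

[grid@node1 ~]$ olsnodes -s -t

[oracle@node1 ~]$ sqlplus / as sysdba
SQL> select inst_id, instance_name, host_name, status from gv$instance;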

1. First, back up the OCR

[root@node1 script]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          3

         Total space (kbytes)     :    262120

         Used space (kbytes)      :      3320

         Available space (kbytes) :     258800

         ID                       : 1403549282

         Device/File Name         :   +GRIDDG

                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 script]# ocrconfig -manualbackup

node3    2016/07/08 00:09:25    /u01/app/11.2.0/grid/cdata/scan-cluster/backup_20160708_000925.ocr

[root@node1 script]# ocrconfig -showbackup

 

node2    2016/07/05 04:08:01    /u01/app/11.2.0/grid/cdata/scan-cluster/backup00.ocr

node1    2016/06/30 00:38:57    /u01/app/11.2.0/grid/cdata/scan-cluster/backup01.ocr

node1    2016/06/29 02:19:18    /u01/app/11.2.0/grid/cdata/scan-cluster/backup02.ocr

node2    2016/07/05 04:08:01    /u01/app/11.2.0/grid/cdata/scan-cluster/day.ocr

node1    2016/06/28 22:19:15    /u01/app/11.2.0/grid/cdata/scan-cluster/week.ocr

node3    2016/07/08 00:09:25    /u01/app/11.2.0/grid/cdata/scan-cluster/backup_20160708_000925.ocr
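If anything goes wrong during the node removal, the manual backup taken above can be restored with ocrconfig; a sketch (run as root, with the Clusterware stack stopped on all nodes):

[root@node1 ~]# ocrconfig -restore /u01/app/11.2.0/grid/cdata/scan-cluster/backup_20160708_000925.ocr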

2. Delete the instance

(Run as the oracle user.)

The instance can be deleted with the DBCA wizard, or silently with the command shown below.

Deleting with DBCA:

Run dbca and follow the wizard:
Instance Management -> Delete an instance -> enter a SYSDBA username and password -> select the node instance to delete.

Remove EM (Database Control):

emca -deconfig dbcontrol db -repos drop -cluster

Deleting silently:

[oracle@node1 ~]$ dbca -silent -deleteinstance -nodelist node3 -gdbname devdb -instancename devdb3 -sysdbausername sys -sysdbapassword tiger

-- Note: -nodelist specifies the node whose instance is being deleted

Deleting instance

20% complete

21% complete

22% complete

26% complete

33% complete

40% complete

46% complete

53% complete

54% complete

60% complete

66% complete

Completing instance management.

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/devdb.log" for further details.

[oracle@node1 ~]$

3. Confirm that the instance has been removed from the OCR

The database instance devdb3 no longer appears among the instances registered for devdb.
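This can also be checked from the command line, for example (as the oracle user):

[oracle@node1 ~]$ srvctl config database -d devdb
[oracle@node1 ~]$ srvctl status database -d devdb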

 

4. Check the database instances and redo threads

[oracle@node1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 8 00:36:33 2016

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

 

SQL> set wrap off

SQL> set lin 200

SQL> select inst_id,instance_name from gv$instance;

 

  INST_ID INSTANCE_NAME

----------------------------------------------------------

         2 devdb2

         1 devdb1

 

SQL> select distinct thread# from v$log;

 

  THREAD#

----------

         1

         2

         3

5. Disable the redo thread of the removed instance

SQL> alter database disable thread 3;

 

Database altered.

 

SQL>
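The thread state can then be verified, for example with the following query (thread 3 should now show ENABLED = DISABLED):

SQL> select thread#, status, enabled, instance from v$thread;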

6. Check the listener configuration for the nodes

[oracle@node1 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

 /u01/app/11.2.0/grid on node(s) node1,node2,node3

End points: TCP:1521

[oracle@node1 ~]$

7. Disable and stop the listener on node3

srvctl config listener -a

srvctl disable listener -l listener -n node3

srvctl stop listener -l listener -n node3
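To confirm the listener really is down on node3, something like the following can be used (assuming the default listener name LISTENER):

srvctl status listener -l listener -n node3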




8. Update the Oracle inventory

(Switch to the oracle user and run this on node3, the node being removed.)

[oracle@node3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node3}" -local

 

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 4707 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.


9. Remove the database software from node3

(Note: run this on node3, the node being removed.)

su - oracle

[oracle@node3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

 

Checking for required files and bootstrapping...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

 

############ ORACLE DEINSTALL & DECONFIGTOOL START ############

 

 

######################### CHECK OPERATIONSTART #########################

## [START] Install check configuration ##

 

 

Checking for existence of the Oracle homelocation /u01/app/oracle/product/11.2.0/db_1

Oracle Home type selected for deinstall is:Oracle Real Application Cluster Database

Oracle Base selected for deinstall is:/u01/app/oracle

Checking for existence of central inventorylocation /u01/app/oraInventory

Checking for existence of the Oracle GridInfrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster:node3

Checking for sufficient temp spaceavailability on node(s) : 'node3'

 

## [END] Install check configuration ##

 

 

Network Configuration check config START

 

Network de-configuration trace file location:/u01/app/oraInventory/logs/netdc_check2016-07-08_12-52-34-AM.log

 

Network Configuration check config END

 

Database Check Configuration START

 

Database de-configuration trace filelocation: /u01/app/oraInventory/logs/databasedc_check2016-07-08_12-52-41-AM.log

 

Database Check Configuration END

 

Enterprise Manager Configuration AssistantSTART

 

EMCA de-configuration trace file location:/u01/app/oraInventory/logs/emcadc_check2016-07-08_12-52-44-AM.log

 

Enterprise Manager Configuration AssistantEND

Oracle Configuration Manager check START

OCM check log file location :/u01/app/oraInventory/logs//ocm_check8029.log

Oracle Configuration Manager check END

 

######################### CHECK OPERATION END#########################

 

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are: node3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-07-08_12-52-23-AM.out'

Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-07-08_12-52-23-AM.err'

 

######################## CLEAN OPERATIONSTART ########################

 

Enterprise Manager Configuration AssistantSTART

 

EMCA de-configuration trace file location:/u01/app/oraInventory/logs/emcadc_clean2016-07-08_12-52-44-AM.log

 

Updating Enterprise Manager ASM targets (ifany)

Updating Enterprise Manager listener targets(if any)

Enterprise Manager Configuration AssistantEND

Database de-configuration trace filelocation: /u01/app/oraInventory/logs/databasedc_clean2016-07-08_12-53-27-AM.log

 

Network Configuration clean config START

 

Network de-configuration trace file location:/u01/app/oraInventory/logs/netdc_clean2016-07-08_12-53-27-AM.log

 

De-configuring backup files...

Backup files de-configured successfully.

 

The network configuration has been cleaned upsuccessfully.

 

Network Configuration clean config END

 

Oracle Configuration Manager clean START

OCM clean log file location :/u01/app/oraInventory/logs//ocm_clean8029.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

 

Oracle Universal Installer clean END

 

 

## [START] Oracle install clean ##

 

Clean install operation removing temporarydirectory '/tmp/deinstall2016-07-08_00-51-19AM' on node 'node3'

 

## [END] Oracle install clean ##

 

 

######################### CLEAN OPERATION END#########################

 

 

####################### CLEAN OPERATION SUMMARY #######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.

Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

 

 

############# ORACLE DEINSTALL & DECONFIGTOOL END #############

 

 



As the clean operation summary shows, once the software has been removed from node3, the disk space it occupied is released.
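A quick way to confirm this on node3 is a filesystem check, for example (assuming the Oracle software lives under /u01, as in this environment):

[oracle@node3 ~]$ df -h /u01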

10. Update the cluster node list on the remaining nodes as the oracle user

su - oracle

[oracle@node1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}"

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.  Actual 2433 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.


11. Check the node status

[root@node1 script]# su - grid

[grid@node1 ~]$ olsnodes -s -t

node1      Active      Unpinned

node2      Active      Unpinned

node3      Active      Unpinned


12. Deconfigure Clusterware on the node to be removed

(Run on node3, the node being removed.)

su - root

$CRS_HOME/crs/install/rootcrs.pl -deconfig -force

 

[root@node3 install]# pwd

/u01/app/11.2.0/grid/crs/install

[root@node3 install]#

[root@node3 install]# ./rootcrs.pl -deconfig -force

Using configuration parameter file: ./crsconfig_params

Network exists:1/192.168.40.0/255.255.255.0/eth0, type static

VIP exists:/node1-vip/192.168.40.193/192.168.40.0/255.255.255.0/eth0, hosting node node1

VIP exists:/node2-vip/192.168.40.194/192.168.40.0/255.255.255.0/eth0, hosting node node2

VIP exists:/node3-vip/192.168.40.197/192.168.40.0/255.255.255.0/eth0, hosting node node3

GSD exists

ONS exists: Local port 6100, remote port6200, EM port 2016

CRS-2791: Starting shutdown of Oracle HighAvailability Services-managed resources on 'node3'

CRS-2673: Attempting to stop 'ora.crsd' on'node3'

CRS-2790: Starting shutdown of ClusterReady Services-managed resources on 'node3'

CRS-2673: Attempting to stop 'ora.oc4j' on'node3'

CRS-2673: Attempting to stop 'ora.DATA.dg'on 'node3'

CRS-2673: Attempting to stop 'ora.FLASH.dg'on 'node3'

CRS-2673: Attempting to stop'ora.GRIDDG.dg' on 'node3'

CRS-2677: Stop of 'ora.DATA.dg' on 'node3'succeeded

CRS-2677: Stop of 'ora.FLASH.dg' on 'node3'succeeded

CRS-2677: Stop of 'ora.oc4j' on 'node3'succeeded

CRS-2672: Attempting to start 'ora.oc4j' on'node2'

CRS-2676: Start of 'ora.oc4j' on 'node2'succeeded

CRS-2677: Stop of 'ora.GRIDDG.dg' on'node3' succeeded

CRS-2673: Attempting to stop 'ora.asm' on'node3'

CRS-2677: Stop of 'ora.asm' on 'node3'succeeded

CRS-2792: Shutdown of Cluster ReadyServices-managed resources on 'node3' has completed

CRS-2677: Stop of 'ora.crsd' on 'node3'succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on'node3'

CRS-2673: Attempting to stop 'ora.evmd' on'node3'

CRS-2673: Attempting to stop 'ora.asm' on'node3'

CRS-2673: Attempting to stop 'ora.mdnsd' on'node3'

CRS-2677: Stop of 'ora.evmd' on 'node3'succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'node3'succeeded

CRS-2677: Stop of 'ora.ctssd' on 'node3'succeeded

CRS-2677: Stop of 'ora.asm' on 'node3'succeeded

CRS-2673: Attempting to stop'ora.cluster_interconnect.haip' on 'node3'

CRS-2677: Stop of'ora.cluster_interconnect.haip' on 'node3' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on'node3'

CRS-2677: Stop of 'ora.cssd' on 'node3'succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on'node3'

CRS-2677: Stop of 'ora.gipcd' on 'node3'succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on'node3'

CRS-2677: Stop of 'ora.gpnpd' on 'node3'succeeded

CRS-2793: Shutdown of Oracle HighAvailability Services-managed resources on 'node3' has completed

CRS-4133: Oracle High Availability Serviceshas been stopped.

Successfully deconfigured Oracleclusterware stack on this node


13. Check the node status again

[root@node1 ~]# olsnodes -s -t

node1      Active      Unpinned

node2      Active      Unpinned

node3      Inactive   Unpinned


14. Delete the node from the cluster

Switch to the root user and run:

[root@node1 script]# crsctl delete node -n node3

CRS-4661: Node node3 successfully deleted.

[root@node1 script]#

 

15. Check the node status

[root@node1 script]#  olsnodes -s -t

node1      Active      Unpinned

node2      Active      Unpinned

[root@node1 script]#


16. Update the Oracle inventory for the Grid Infrastructure home

Switch to the grid user:

./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}" CRS=TRUE -local

 

[grid@node1 ~]$ cd $ORACLE_HOME

[grid@node1 grid]$ cd oui/

[grid@node1 oui]$ ls

admin_langs.xml  bin clusterparam.ini  instImages  jlib lib  nlsrtlmap.xml  oraparam.ini prov  runtime_langs.xml  schema

[grid@node1 oui]$ cd bin

[grid@node1 bin]$ pwd

/u01/app/11.2.0/grid/oui/bin

[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}" CRS=TRUE -local

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 2446 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.


17. Remove the Grid Infrastructure software from node3

Switch to the grid user and run this on node3, the node being removed:

[grid@node3 grid]$ cd $ORACLE_HOME

[grid@node3 grid]$ cd deinstall/
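The deinstall tool is then launched locally, presumably with the invocation below (the exact command line is an assumption; the output that follows does confirm the -local option was used):

[grid@node3 deinstall]$ ./deinstall -local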

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2016-07-08_03-47-46AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

 

######################### CHECK OPERATION START#########################

## [START] Install check configuration ##

 

 

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: node1,node2,node3

Checking for sufficient temp space availability on node(s) : 'node1,node2,node3'

 

## [END] Install check configuration ##

 

Traces log file: /tmp/deinstall2016-07-08_03-47-46AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "node1"[node1-vip]

(Just press Enter here to accept the default node1-vip; for the prompts that follow, pressing Enter to accept the defaults is sufficient.)

The following information can be collected by running "/sbin/ifconfig -a" on node "node1"

Enter the IP netmask of Virtual IP "192.168.40.193" on node "node1"[255.255.255.0]

 >

192.168.40.193

Enter the network interface name on whichthe virtual IP address "192.168.40.193" is active

 >

192.168.40.193

Enter an address or the name of the virtualIP used on node "node2"[node2-vip]

 >

node2-vip

The following information can be collectedby running "/sbin/ifconfig -a" on node "node2"

Enter the IP netmask of Virtual IP"192.168.40.194" on node "node2"[192.168.40.193]

 >

192.168.40.194

Enter the network interface name on whichthe virtual IP address "192.168.40.194" is active[192.168.40.193]

 >

192.168.40.194

Enter an address or the name of the virtualIP used on node "node3"[node3-vip]

 >

node3-vip

The following information can be collectedby running "/sbin/ifconfig -a" on node "node3"

Enter the IP netmask of Virtual IP"192.168.40.197" on node "node3"[192.168.40.194]

 >

192.168.40.197

Enter the network interface name on whichthe virtual IP address "192.168.40.197" is active[192.168.40.194]

 >

192.168.40.197

Enter an address or the name of the virtualIP[]

 >

 

 

Network Configuration check config START

 

Network de-configuration trace file location: /tmp/deinstall2016-07-08_03-47-46AM/logs/netdc_check2016-07-08_03-52-42-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:

 

Network Configuration check config END

 

Asm Check Configuration START

 

ASM de-configuration trace file location:/tmp/deinstall2016-07-08_03-47-46AM/logs/asmcadc_check2016-07-08_03-53-18-AM.log

 

 

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be performed are: node1,node2,node3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

Option -local will not modify any ASM configuration.

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2016-07-08_03-47-46AM/logs/deinstall_deconfig2016-07-08_03-49-56-AM.out'

Any error messages from this session will be written to: '/tmp/deinstall2016-07-08_03-47-46AM/logs/deinstall_deconfig2016-07-08_03-49-56-AM.err'

 

######################## CLEAN OPERATIONSTART ########################

ASM de-configuration trace file location:/tmp/deinstall2016-07-08_03-47-46AM/logs/asmcadc_clean2016-07-08_03-53-23-AM.log

ASM Clean Configuration END

 

Network Configuration clean config START

 

Network de-configuration trace filelocation: /tmp/deinstall2016-07-08_03-47-46AM/logs/netdc_clean2016-07-08_03-53-23-AM.log

 

De-configuring RAC listener(s):LISTENER,LISTENER_SCAN1

 

De-configuring listener: LISTENER

   Stopping listener on node "node3": LISTENER

   Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring listener: LISTENER_SCAN1

   Stopping listener on node "node3": LISTENER_SCAN1

   Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring Naming Methods configurationfile...

Naming Methods configuration filede-configured successfully.

 

De-configuring backup files...

Backup files de-configured successfully.

 

The network configuration has been cleanedup successfully.

 

Network Configuration clean config END


 

 

---------------------------------------->

 

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "node1".

/tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "node2".

/tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "node3".

/tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

 

<----------------------------------------

 

 


 

Running the deconfig command on node1:

[root@node1 ~]# /tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl \

> -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib \

> -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install \

> /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl \

> -force \

> -deconfig \

> -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp

Network exists:1/192.168.40.0/255.255.255.0/eth0, type static

VIP exists:/node1-vip/192.168.40.193/192.168.40.0/255.255.255.0/eth0, hosting node node1

VIP exists:/node2-vip/192.168.40.194/192.168.40.0/255.255.255.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port6200, EM port 2016

CRS-2791: Starting shutdown of Oracle HighAvailability Services-managed resources on 'node1'

CRS-2673: Attempting to stop 'ora.crsd' on'node1'

CRS-2790: Starting shutdown of ClusterReady Services-managed resources on 'node1'

CRS-2673: Attempting to stop'ora.GRIDDG.dg' on 'node1'

CRS-2673: Attempting to stop 'ora.devdb.db'on 'node1'

CRS-2677: Stop of 'ora.devdb.db' on 'node1'succeeded

CRS-2673: Attempting to stop 'ora.DATA.dg'on 'node1'

CRS-2673: Attempting to stop 'ora.FLASH.dg'on 'node1'

CRS-2677: Stop of 'ora.DATA.dg' on 'node1'succeeded

CRS-2677: Stop of 'ora.FLASH.dg' on 'node1'succeeded

CRS-2677: Stop of 'ora.GRIDDG.dg' on'node1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on'node1'

CRS-2677: Stop of 'ora.asm' on 'node1'succeeded

CRS-2792: Shutdown of Cluster ReadyServices-managed resources on 'node1' has completed

CRS-2677: Stop of 'ora.crsd' on 'node1'succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on'node1'

CRS-2673: Attempting to stop 'ora.evmd' on'node1'

CRS-2673: Attempting to stop 'ora.asm' on'node1'

CRS-2673: Attempting to stop 'ora.mdnsd' on'node1'

CRS-2677: Stop of 'ora.evmd' on 'node1'succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'node1'succeeded

CRS-2677: Stop of 'ora.ctssd' on 'node1'succeeded

CRS-2677: Stop of 'ora.asm' on 'node1'succeeded

CRS-2673: Attempting to stop'ora.cluster_interconnect.haip' on 'node1'

CRS-2677: Stop of'ora.cluster_interconnect.haip' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on'node1'

CRS-2677: Stop of 'ora.cssd' on 'node1'succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on'node1'

CRS-2677: Stop of 'ora.gipcd' on 'node1'succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on'node1'

CRS-2677: Stop of 'ora.gpnpd' on 'node1'succeeded

CRS-2793: Shutdown of Oracle HighAvailability Services-managed resources on 'node1' has completed

CRS-4133: Oracle High Availability Serviceshas been stopped.

Successfully deconfigured Oracleclusterware stack on this node


 

Running the deconfig command on node2:

[root@node2 ~]#

[root@node2 ~]# /tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl \

> -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib \

> -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install \

> /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl \

> -force \

> -deconfig \

> -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp

Network exists:1/192.168.40.0/255.255.255.0/eth0, type static

VIP exists:/node2-vip/192.168.40.194/192.168.40.0/255.255.255.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2613: Could not find resource 'ora.registry.acfs'.

CRS-4000: Command Stop failed, or completed with errors.

 

 

CRS-2791: Starting shutdown of Oracle HighAvailability Services-managed resources on 'node2'

CRS-2673: Attempting to stop 'ora.crsd' on'node2'

CRS-2790: Starting shutdown of ClusterReady Services-managed resources on 'node2'

CRS-2673: Attempting to stop 'ora.oc4j' on'node2'

CRS-2673: Attempting to stop'ora.GRIDDG.dg' on 'node2'

CRS-2673: Attempting to stop 'ora.devdb.db'on 'node2'

CRS-2677: Stop of 'ora.devdb.db' on 'node2'succeeded

CRS-2673: Attempting to stop 'ora.DATA.dg'on 'node2'

CRS-2673: Attempting to stop 'ora.FLASH.dg'on 'node2'

CRS-2677: Stop of 'ora.DATA.dg' on 'node2'succeeded

CRS-2677: Stop of 'ora.FLASH.dg' on 'node2'succeeded

CRS-2677: Stop of 'ora.oc4j' on 'node2'succeeded

CRS-2677: Stop of 'ora.GRIDDG.dg' on'node2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on'node2'

CRS-2677: Stop of 'ora.asm' on 'node2'succeeded

CRS-2792: Shutdown of Cluster ReadyServices-managed resources on 'node2' has completed

CRS-2677: Stop of 'ora.crsd' on 'node2'succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on'node2'

CRS-2673: Attempting to stop'ora.drivers.acfs' on 'node2'

CRS-2673: Attempting to stop 'ora.ctssd' on'node2'

CRS-2673: Attempting to stop 'ora.evmd' on'node2'

CRS-2673: Attempting to stop 'ora.asm' on'node2'

CRS-2677: Stop of 'ora.ctssd' on 'node2'succeeded

CRS-2677: Stop of 'ora.evmd' on 'node2'succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'node2' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on'node2' succeeded

CRS-2677: Stop of 'ora.asm' on 'node2'succeeded

CRS-2673: Attempting to stop'ora.cluster_interconnect.haip' on 'node2'

CRS-2677: Stop of'ora.cluster_interconnect.haip' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on'node2'

CRS-2677: Stop of 'ora.cssd' on 'node2'succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on'node2'

CRS-2677: Stop of 'ora.gipcd' on 'node2'succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on'node2'

CRS-2677: Stop of 'ora.gpnpd' on 'node2'succeeded

CRS-2793: Shutdown of Oracle HighAvailability Services-managed resources on 'node2' has completed

CRS-4133: Oracle High Availability Serviceshas been stopped.

Successfully deconfigured Oracle clusterwarestack on this node

Running the deconfig command on node3:

[root@node3 ~]# /tmp/deinstall2016-07-08_03-47-46AM/perl/bin/perl -I/tmp/deinstall2016-07-08_03-47-46AM/perl/lib -I/tmp/deinstall2016-07-08_03-47-46AM/crs/install /tmp/deinstall2016-07-08_03-47-46AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2016-07-08_03-47-46AM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly     #

# cleanup the processes started by Oracle clusterware          #

################################################################

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

[root@node3 ~]#

 

After the command has completed on all three nodes, return to the original deinstall session (on node3) and press Enter.

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

 

 

## [START] Oracle install clean ##

 

Clean install operation removing temporarydirectory '/tmp/deinstall2016-07-08_03-47-46AM' on node 'node3'

 

## [END] Oracle install clean ##

 

 

######################### CLEAN OPERATIONEND #########################

 

 

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on node "node1"

Oracle Clusterware is stopped and successfully de-configured on node "node3"

Oracle Clusterware is stopped and successfully de-configured on node "node2"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

 

 

############# ORACLE DEINSTALL &DECONFIG TOOL END #############

 

[grid@node3 deinstall]$

18. Remove the remaining directories on node3

[root@node3 etc]# rm -rf oraInst.loc

[root@node3 etc]# rm -rf /opt/ORCLfmap/

[root@node3 etc]# rm -rf /u01

 

20. Update the Oracle inventory

Run on node1 or node2:

[root@node1 ~]# su - grid

[grid@node1 ~]$ cd $ORACLE_HOME

[grid@node1 grid]$ cd oui/bin/

[grid@node1 bin]$ pwd

/u01/app/11.2.0/grid/oui/bin


[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}" CRS=TRUE

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 2892 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

21. Reconfigure Clusterware on the remaining nodes

Because the deconfig command in the previous step was also executed on node1 and node2, Clusterware on the surviving nodes was deconfigured as well and has to be brought back by re-running root.sh.

1) On node1

[root@node1 ~]# cd /u01/app/11.2.0/grid/

[root@node1 grid]# ./root.sh

Performing root user operation for Oracle11g

 

The following environment variables are setas:

   ORACLE_OWNER= grid

   ORACLE_HOME=  /u01/app/11.2.0/grid

 

Enter the full pathname of the local bindirectory: [/usr/local/bin]:

The contents of "dbhome" have notchanged. No need to overwrite.

The contents of "oraenv" have notchanged. No need to overwrite.

The contents of "coraenv" havenot changed. No need to overwrite.

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratabfile as needed by

Database Configuration Assistant when adatabase is created

Finished running generic part of rootscript.

Now product-specific root actions will beperformed.

Using configuration parameter file:/u01/app/11.2.0/grid/crs/install/crsconfig_params

User ignored Prerequisites duringinstallation

OLR initialization - successful

Adding Clusterware entries to upstart

CRS-2672: Attempting to start 'ora.mdnsd'on 'node1'

CRS-2676: Start of 'ora.mdnsd' on 'node1'succeeded

CRS-2672: Attempting to start 'ora.gpnpd'on 'node1'

CRS-2676: Start of 'ora.gpnpd' on 'node1'succeeded

CRS-2672: Attempting to start'ora.cssdmonitor' on 'node1'

CRS-2672: Attempting to start 'ora.gipcd'on 'node1'

CRS-2676: Start of 'ora.gipcd' on 'node1'succeeded

CRS-2676: Start of 'ora.cssdmonitor' on'node1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on'node1'

CRS-2672: Attempting to start 'ora.diskmon'on 'node1'

CRS-2676: Start of 'ora.diskmon' on 'node1'succeeded

CRS-2676: Start of 'ora.cssd' on 'node1'succeeded

 

2) On node2

[root@node2 ~]# cd /u01/app/11.2.0/grid/

[root@node2 grid]# ./root.sh

Performing root user operation for Oracle11g

 

The following environment variables are setas:

   ORACLE_OWNER= grid

   ORACLE_HOME=  /u01/app/11.2.0/grid

 

Enter the full pathname of the local bindirectory: [/usr/local/bin]:

The contents of "dbhome" have notchanged. No need to overwrite.

The contents of "oraenv" have notchanged. No need to overwrite.

The contents of "coraenv" havenot changed. No need to overwrite.

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratabfile as needed by

Database Configuration Assistant when adatabase is created

Finished running generic part of rootscript.

Now product-specific root actions will beperformed.

Using configuration parameter file:/u01/app/11.2.0/grid/crs/install/crsconfig_params

User ignored Prerequisites duringinstallation

OLR initialization - successful

Adding Clusterware entries to upstart

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Preparing packages for installation...

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@node2 grid]#

 

 

22. Verify the removal

[grid@node1 bin]$ cluvfy stage -post nodedel -n node3 -verbose

 

Performing post-checks for node removal

 

Checking CRS integrity...

 

Clusterware version consistency passed

The Oracle Clusterware is healthy on node"node2"

The Oracle Clusterware is healthy on node"node1"

 

CRS integrity check passed

Result:

Node removal check passed

 

Post-check for node removal was successful.

 

 

23. Check the cluster resource status

[root@node1 script]# crsctl check cluster -all

**************************************************************

node1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

node2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

[root@node1 script]#

 


At this point, the node has been removed from the cluster successfully.
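As a final sanity check, the surviving database instances and cluster nodes can be listed once more, for example:

[oracle@node1 ~]$ srvctl status database -d devdb
[grid@node1 ~]$ olsnodes -s -t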
