9 Adding and Deleting Nodes and Instances
Verifying the New Node Meets the Prerequisites for Installation
After you have configured the new node, use cluvfy to verify that all installation requirements are met. To verify that the new node meets the hardware and operating system requirements, run the following command on an existing node (for example, racnode1 or racnode2) from the Grid_home/bin directory:
cluvfy stage -pre crsinst -n racnode3 -verbose
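The check above can be wrapped in a small helper so scripts can act on its exit status. This is a minimal sketch, not an Oracle-provided tool: the default Grid home path and the function name are illustrative assumptions.

```shell
# Sketch: wrapper around the CVU pre-installation check for a new node.
# The default GRID_HOME path and the function name are illustrative.
verify_node_prereqs() {
    node=$1
    grid_home=${GRID_HOME:-/u01/app/11.2.0/grid}
    # cluvfy returns a non-zero exit status when any prerequisite check fails
    "$grid_home/bin/cluvfy" stage -pre crsinst -n "$node" -verbose
}
```

A caller can then stop early on failure, for example: `verify_node_prereqs racnode3 || exit 1`.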
To extend the Oracle Grid Infrastructure for a cluster home to include the new node:
1. Verify that the new node has been properly prepared for an Oracle Clusterware installation by running the following CLUVFY command on the racnode1 node:
cluvfy stage -pre nodeadd -n racnode3 -verbose
2. As the oracle user (the owner of the Oracle Grid Infrastructure for a cluster software installation) on racnode1, go to Grid_home/oui/bin and run the addNode.sh script in silent mode:
If you are using Grid Naming Service (GNS):
./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
If you are not using Grid Naming Service (GNS):
./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"
The curly braces ({ }) in these commands are required; if you omit them, the command returns an error.
3. When the script finishes, run the root.sh script as the root user on the new node, racnode3, from the Oracle home directory on that node.
4. If you are not using Oracle Grid Naming Service (GNS), then you must add the name and address for racnode3 to DNS.
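The two addNode.sh invocation forms in step 2 differ only in their argument string. A small helper can make that choice explicit; this is an illustrative sketch — build_addnode_args and the USE_GNS switch are not Oracle-provided, and the node and VIP names are the examples from the text.

```shell
# Sketch: build the addNode.sh argument string depending on whether GNS is
# in use. USE_GNS is an illustrative switch, not an Oracle variable.
build_addnode_args() {
    node=$1
    vip=$2
    if [ "$USE_GNS" = "yes" ]; then
        # With GNS, VIP names are managed for you; only the node list is needed
        printf '%s' "\"CLUSTER_NEW_NODES={$node}\""
    else
        # Without GNS, the virtual host name must be supplied explicitly
        printf '%s' "\"CLUSTER_NEW_NODES={$node}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$vip}\""
    fi
}
```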
You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command on the newly configured node, racnode3:
$ cd /u01/app/11.2.0/grid/bin
$ ./cluvfy stage -post nodeadd -n racnode3 -verbose
Note:
Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.
Extending the Oracle RAC Home Directory
To extend the Oracle RAC installation to include the new node:
1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, replace Oracle_home with the location of your installed Oracle home directory.
2. Go to the Oracle_home/oui/bin directory on racnode1 and run the addNode.sh script in silent mode, as shown in the following example:
$ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
3. When the script finishes, run the root.sh script as the root user on the new node, racnode3, from the Oracle home directory on that node.
For a policy-managed database with Oracle Managed Files (OMF) enabled, no further action is needed: when you add a new node to the cluster, it is placed in the Free pool by default, and if you increase the cardinality of the database server pool, an Oracle RAC instance is added to the new node, racnode3, and the node is moved to the database server pool.
4. Add shared storage for the undo tablespace and redo log files.
If OMF is not enabled for your database, then you must manually add an undo tablespace and redo logs.
5. If you have an administrator-managed database, then add a new instance on the new node, as described in "Creating an Instance on the New Node".
If you followed the installation instructions in this guide, then your cluster database is an administrator-managed database and stores the database files on Oracle Automatic Storage Management (Oracle ASM) with OMF enabled.
After completing these steps, you should have an installed Oracle home on the new node.
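If OMF is not enabled, the undo tablespace and redo logs required in step 4 must be created manually from an existing instance. The following is a hedged sketch: the thread number (3), tablespace name (undotbs3), disk group (+DATA), log group numbers, and sizes are all illustrative assumptions to adjust for your environment.

```shell
# Sketch: create undo and redo structures for a third instance when OMF is
# off. Thread number, tablespace name, disk group, group numbers, and sizes
# are illustrative assumptions.
run_post_addnode_sql() {
    sqlplus -s / as sysdba <<'SQL'
CREATE UNDO TABLESPACE undotbs3
    DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;
ALTER DATABASE ADD LOGFILE THREAD 3
    GROUP 5 ('+DATA') SIZE 100M,
    GROUP 6 ('+DATA') SIZE 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 3;
SQL
}
```

The final statement enables the new redo thread so the third instance can open it at startup.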
Updating the Node List for OPatch
If OPatch does not automatically detect Oracle RAC or its nodes, then investigate the contents of the inventory and ensure they are complete.
To update the node list for OPatch:
If the list of nodes for your cluster is not complete, then you can update it by using Oracle Universal Installer and the -updateNodeList flag, as demonstrated in the following example:
Oracle_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 CLUSTER_NODES=racnode1,racnode2,racnode3 -noClusterEnabled
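Before running -updateNodeList, it can help to see which nodes the central inventory currently records. This is a sketch: the helper name and the default inventory path are assumptions, and the parsing relies on the <NODE NAME="..."/> entries that inventory.xml uses for cluster node lists.

```shell
# Sketch: print the node names recorded in the central inventory, so they
# can be compared with the actual cluster membership. The default inventory
# path is an illustrative assumption.
list_inventory_nodes() {
    inventory=${1:-/u01/app/oraInventory/ContentsXML/inventory.xml}
    # Each cluster node appears as <NODE NAME="..."/> inside a NODE_LIST
    grep -o 'NODE NAME="[^"]*"' "$inventory" | sed 's/NODE NAME="\(.*\)"/\1/'
}
```

If a node such as racnode3 is missing from the output, the -updateNodeList command above brings the inventory back in sync.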
Resolving the "Unable to remove a partially installed interim patch" Error
If the patching process is interrupted, then you might get the error "Unable to remove a partially installed interim patch" when you try to install the patch a second time.
To resolve the partially installed patch error:
1. Ensure that the ORACLE_HOME environment variable is set to the Oracle home directory you are attempting to patch.
2. Go to the Oracle_home/.patch_storage/patch-id_timestamp directory and run the restore.sh script (or restore.bat on Windows platforms) as follows:
Oracle_home/.patch_storage/patch-id_timestamp/restore.sh
3. On Linux and UNIX systems, use the Oracle_home/.patch_storage/patch-id_timestamp/make.txt file (if available) to modify your operating system environment, as follows:
/bin/sh make.txt
4. Attempt to apply the patch again.
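The recovery steps above can be sketched as a small helper that locates the patch's .patch_storage directory and runs its restore script. The function name and the patch id are illustrative; ORACLE_HOME must already be set as described.

```shell
# Sketch: locate the .patch_storage directory for an interrupted patch and
# run its restore script. The function name and patch id are illustrative;
# ORACLE_HOME must point at the home being patched.
restore_failed_patch() {
    patch_id=$1
    # Directories are named <patch-id>_<timestamp>; take the first match
    dir=$(ls -d "$ORACLE_HOME/.patch_storage/${patch_id}_"* 2>/dev/null | head -1)
    if [ -z "$dir" ]; then
        echo "no patch storage found for $patch_id" >&2
        return 1
    fi
    sh "$dir/restore.sh"
}
```

After the restore completes (and make.txt has been run where applicable), re-apply the patch.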