Adding a Node to a 10g RAC Cluster



 

PURPOSE

-------

 

The purpose of this note is to provide the user with a document that
can be used as a guide to add a cluster node to an Oracle 10g Real
Application Clusters (RAC) environment.

 

SCOPE & APPLICATION

-------------------

 

This document can be used by DBAs and support analysts who need to
either add a cluster node or assist another in adding a cluster
node in a 10g Unix Real Application Clusters environment.  If you are on
10gR2 (10.2.0.2 or higher), please refer to the documentation for
more up-to-date steps.

 

Prerequisite

------------

All nodes from the initial cluster installation must be available, up, and
running before adding a new node.

If a down node (due to a hardware/OS problem) needs to be replaced with a new
node using this addNode procedure, first remove the bad node using Note 466975.1
before proceeding with addNode.
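
As a quick sanity check before starting, you can confirm from one of the existing
nodes that the clusterware sees all original nodes and that their resources are
up.  The commands below are a minimal sketch; actual output depends on your
configuration:

      $ORA_CRS_HOME/bin/olsnodes -n
      $ORA_CRS_HOME/bin/crs_stat -t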

 

 

ADDING A NODE TO A 10g RAC CLUSTER

----------------------------------

 

The most important steps that need to be followed are:

 

A.    Configure the OS and hardware for the new node.
B.    Add the node to the cluster.
C.    Add the RAC software to the new node.
D.    Reconfigure listeners for new node.
E.    Add instances via DBCA.

 

Here is a breakdown of the above steps.

 

 

A.    Configure the OS and hardware for the new node.

-------------------------------------------------------

 

Please consult the available OS vendor documentation for this step.

 

See Note 264847.1 for network requirements.  Also verify that the OCR and
voting files are visible from the new node with correct permissions.
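
For example, from the new node you can check that the shared OCR and voting
devices are visible with the expected ownership and permissions.  The paths
below are placeholders for your actual OCR and voting disk locations:

      ls -l /dev/raw/raw1 /dev/raw/raw2

On an existing node, $ORA_CRS_HOME/bin/ocrcheck can additionally be used to
confirm that the OCR itself is healthy.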

 

 

B.    Add the node to the cluster.

------------------------------------

 

1.    If the CRS Home is owned by root and you are on a version < 10.1.0.4, change
      the ownership of the CRS Home directories on all nodes to the Oracle user
      so that OUI can read and write to these directories.

 

2.    Set the DISPLAY environment variable and run the addNode.sh script from
      $ORA_CRS_HOME/oui/bin on one of the existing nodes as the oracle user.
      Example:

      DISPLAY=ipaddress:0.0; export DISPLAY
      cd $ORA_CRS_HOME/oui/bin
      ./addNode.sh

 

3.    The OUI Welcome screen will appear, click next.

 

4.    On the "Specify Cluster Nodes to Add to Installation" screen, add the
      public and private node names (these should exist in /etc/hosts and
      should be pingable from each of the cluster nodes), click next.
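
      For illustration only, the corresponding /etc/hosts entries on every node
      might look like the following (hostnames and addresses are hypothetical):

      192.168.1.13    racnode3          # new public node name
      10.10.10.13     racnode3-priv     # new private node name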

 

5.    The "Cluster Node Addition Summary" screen will appear, click next.

 

6.    The "Cluster Node Addition Progress" screen will appear.  You will
      then be prompted to run rootaddnode.sh as the root user.  First verify
      that the CLSCFG information in the rootaddnode.sh script is correct.
      It should contain the new public and private node names and node
      numbers.  Example:

 

      $CLSCFG -add -nn <new node name>,2 -pn <new node private name>,2 -hn <new node hostname>,2

 

      Then run the rootaddnode.sh script on the EXISTING node you ran the
      addNode.sh from.  Example:

      su root
      cd $ORA_CRS_HOME
      sh -x rootaddnode.sh

 

      Once this is finished, click OK in the dialog box to continue.

 

7.    At this point another dialog box will appear; this time you are
      prompted to run $ORA_CRS_HOME/root.sh on all the new nodes.
      If you are on a version < 10.1.0.4 then:
      - Locate the highest numbered NEW cluster node using "$ORA_CRS_HOME/bin/olsnodes -n".
      - Run the root.sh script on this highest numbered NEW cluster node.
      - Run the root.sh script on the rest of the NEW nodes in any order.
      For versions 10.1.0.4 and above the root scripts can be run on the NEW
      nodes in any order.

 

      Example:

      su root
      cd $ORA_CRS_HOME
      sh -x root.sh
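
      For reference, "$ORA_CRS_HOME/bin/olsnodes -n" prints each node name with
      its node number, for example (hypothetical output):

      racnode1   1
      racnode2   2
      racnode3   3

      In this illustration racnode3 is the highest numbered NEW node, so on a
      pre-10.1.0.4 installation its root.sh would be run first.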

 

      If there are any problems with this step, refer to Note 240001.1.

 

      Once this is finished, click OK in the dialog box to continue.

 

8.    After running the CRS root.sh on all new nodes, as the root user run

      $ORA_CRS_HOME/bin/racgons add_config <new node hostname>:4948 <new node hostname>:4948 ...

      from any node.
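
      For example, if the hostname of the only new node were racnode3
      (hypothetical), the command would be:

      $ORA_CRS_HOME/bin/racgons add_config racnode3:4948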

 

9.    Next you will see the "End of Installation" screen.  At this point you
      may exit the installer.

 

10.   Change the ownership of all CRS Homes back to root.

 

 

C.    Add the Oracle Database software (with RAC option) to the new node.

---------------------------------------------------------------------------

 

1.    On a pre-existing node, cd to the $ORACLE_HOME/oui/bin directory and
      run the addNode.sh script.  Example:

      DISPLAY=ipaddress:0.0; export DISPLAY
      cd $ORACLE_HOME/oui/bin
      ./addNode.sh

 

2.    The OUI Welcome screen will appear, click next.

 

3.    On the "Specify Cluster Nodes to Add to Installation" screen, specify
      the node you want to add, click next.

 

4.    The "Cluster Node Addition Summary" screen will appear, click next.

 

5.    The "Cluster Node Addition Progress" screen will appear.  You will
      then be prompted to run root.sh as the root user.

      su root
      cd $ORACLE_HOME
      ./root.sh

 

      Once this is finished, click OK in the dialog box to continue.

 

6.    Next you will see the "End of Installation" screen.  At this point you
      may exit the installer.

 

7.    Cd to the $ORACLE_HOME/bin directory and run the vipca tool with the
      new node list.  Example:

      su root
      DISPLAY=ipaddress:0.0; export DISPLAY
      cd $ORACLE_HOME/bin
      ./vipca -nodelist <node1>,<node2>,...,<new node>
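
      For illustration, with hypothetical node names racnode1 and racnode2
      already in the cluster and racnode3 being added, the call might be:

      ./vipca -nodelist racnode1,racnode2,racnode3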

 

8.    The VIPCA Welcome screen will appear, click next.

 

9.    Add the new node's virtual IP information, click next.

 

10.   You will then see the "Summary" screen, click finish.

 

11.   You will now see a progress bar creating and starting the new CRS
      resources.  Once this is finished, click OK, view the configuration
      results, and click on the exit button.

 

12.   Verify that the interconnect information is correct with:

      oifcfg getif

      If it is not correct, change it with:

      oifcfg setif <interface name>/<subnet>:<interface type>

      For example:

      oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

      or

      oifcfg setif -node <node name> eth1/10.10.10.0:cluster_interconnect
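
      For reference, "oifcfg getif" output similar to the following (interface
      names and subnets are hypothetical) indicates that both the public and
      the private networks are registered:

      eth0  192.168.1.0  global  public
      eth1  10.10.10.0   global  cluster_interconnect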

 

 

D.    Reconfigure listeners for new node.

-------------------------------------------

 

1.    Run NETCA on the NEW node to verify that the listener is configured on
      the new node.  Example:

      DISPLAY=ipaddress:0.0; export DISPLAY
      netca

 

2.    Choose "Cluster Configuration", click next.

3.    Select all nodes, click next.

4.    Choose "Listener configuration", click next.

5.    Choose "Reconfigure", click next.

6.    Choose the listener you would like to reconfigure, click next.

7.    Choose the correct protocol, click next.

8.    Choose the correct port, click next.

9.    Choose whether or not to configure another listener, click next.

 

10.   You may get an error message saying, "The information provided for this
      listener is currently in use by another listener...".  Click yes to
      continue anyway.

 

11.   The "Listener Configuration Complete" screen will appear, click next.

12.   Click "Finish" to exit NETCA.

 

13.   Run crs_stat to verify that the listener CRS resource was created.
      Example:

      cd $ORA_CRS_HOME/bin
      ./crs_stat

     

14.   The new listener will likely be offline.  Start it by starting the
      nodeapps on the new node.  Example:

      srvctl start nodeapps -n <new node name>

 

15.   Use crs_stat to confirm that all VIPs, GSDs, ONSs, and listeners are
      ONLINE.
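
      The -t option of crs_stat gives a compact tabular view that is convenient
      for this check.  Example:

      cd $ORA_CRS_HOME/bin
      ./crs_stat -t

      Each VIP, GSD, ONS, and listener resource should show ONLINE in both the
      Target and State columns.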

 

 

E.    Add instances via DBCA (for databases involving a standby, see section F first).

---------------------------------------------------------------------

 

1.    To add new instances, launch DBCA from a pre-existing node.  Example:

      DISPLAY=ipaddress:0.0; export DISPLAY
      dbca

 

2.    On the welcome screen, choose "Oracle Real Application Clusters",
      click next.

 

3.    Choose "Instance Management", click next.

4.    Choose "Add an Instance", click next.

 

5.    Choose the database you would like to add an instance to and specify
      a user with SYSDBA privileges, click next.  Click next again...

 

6.    Choose the correct instance name and node, click next.

7.    Review the storage screen, click next.

8.    Review the summary screen, click OK and wait a few seconds for the
      progress bar to start.

9.    Allow the progress bar to finish.  When asked if you want to perform
      another operation, choose "No" to exit DBCA.

 

10.   To verify success, log into one of the instances and query
      gv$instance; you should now see all nodes.
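
      A simple query for this check (the column list is just one reasonable
      choice):

      select inst_id, instance_name, host_name, status from gv$instance;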

 

 

F.    Adding instances to the database when there is a standby database in place.

------------------------------------------------

 

    Depending on the current Data Guard configuration there are several steps
    we need to perform.  The steps to perform depend on which cluster we are
    adding the new node to and how many nodes and instances will be on the
    primary and standby sites at the end.

 

    Possible cases are:

 

1.    When adding a node to the primary cluster only.

 

    In this case we need to execute all the steps described in E (above) to
    add the instance to the primary cluster, recreate the standby controlfile
    after adding the new instance/thread to the primary DB and alter the
    standby database to add the standby redologs for the new thread.

 

    Example commands
    (thread 3 was added to the primary and only 2 redolog groups per thread):

 

a.    Follow all steps in E (described above) to add the primary instance.

 

b.    Create a new standby controlfile from the primary database and copy it
      to the standby.

        On primary:
        alter database create standby controlfile as '/u01/stby.ctl';
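
        The copy itself can be done with any OS-level transfer.  For example,
        with a hypothetical standby host and controlfile location (the standby
        instance must not have its controlfiles mounted while they are replaced):

        scp /u01/stby.ctl standbyhost:/u01/oradata/DR/control01.ctl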

 

c.    Add the standby redologs on standby after the database has been
      mounted with the new controlfile:

        On standby:
        alter database add standby logfile thread 3
          group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
          group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;

 

2.    When adding a node to the standby cluster only.

 

    If the new node was added to the standby cluster we also need to know
    how many nodes are currently in the primary and standby clusters in order
    to know what to do.  The options are:

 

2.1    When the current number of nodes on the primary cluster is greater than or
    equal to the number of nodes on the standby cluster.

 

    We assume the thread and instance on the primary database have already been
    created, and we now want to add the instance to the standby cluster only.
    In this case we need to create the standby redologs for the new thread on
    the standby, if they do not already exist, update the init.ora or spfile
    for the new standby instance, and register the new standby instance with
    CRS using srvctl.

 

    Example commands
    (a 3rd node was added to the standby cluster and only 2 redolog groups per thread):

 

a.    Add standby redologs:
        alter database add standby logfile thread 3
          group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
          group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;

 

b.    Update the init.ora or spfile parameters such as thread, instance_name,
      instance_number, local_listener, undo_tablespace, etc...
      for the new standby instance.
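
       For instance, with a hypothetical new standby instance named DR3 using
       undo tablespace UNDOTBS3, the spfile entries could be set from an
       existing standby instance with:

       alter system set instance_number=3 scope=spfile sid='DR3';
       alter system set thread=3 scope=spfile sid='DR3';
       alter system set undo_tablespace='UNDOTBS3' scope=spfile sid='DR3';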

 

c.    Register new instance to CRS using srvctl:
          $ srvctl add instance -d DB_NAME -i INSTANCE3 -n NEW_STANDBY_NODE3

 

 

2.2    When the current number of nodes on the primary cluster is lower than
    the number of nodes on the standby cluster.

 

    In this case we'll need to add a new public thread on the primary database
    and enable it even if there will be no primary instance for that thread,
    then recreate the standby's controlfile and follow the same steps in F.2.1
    to add the new set of SRLs and register the new instance to CRS.

 

    Example commands
    (a 3rd node was added to the standby cluster and only 2 redolog groups per thread):

 

a.    On primary create new thread and enable it:
        alter database add logfile thread 3
          group  9 ('/u01/oradata/prod/rl_3_9.dbf') size 100m,
          group 10 ('/u01/oradata/prod/rl_3_10.dbf') size 100m;
        alter database enable public thread 3;

 

b.    Create new standby controlfile from primary:
        alter database create standby controlfile as '/u01/stby.ctl';

 

c.    On standby (after the database has been mounted with the new controlfile):
        alter database add standby logfile thread 3
          group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
          group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;

 

d.    Update the init.ora or spfile parameters such as thread, instance_name,
      instance_number, local_listener, undo_tablespace, etc...
      for the new standby instance.

 

e.    Register new standby instance to CRS using srvctl:
       $ srvctl add instance -d DB_NAME -i INSTANCE3 -n NEW_STANDBY_NODE3

 

 

3.    When adding a node to both the primary and standby clusters.

    In this case follow all steps in both F.1 and F.2, in that same order.

 

 

 
