Oracle 10g Release 2 RAC On RHEL4 Linux Using VMware ESX Server and NFS
This article describes the installation of Oracle 10g release 2 (10.2) RAC on Red Hat Enterprise Linux 4 using VMware ESX Server and NFS for shared storage.
- Introduction
- Download Software
- Virtual Machine Setup
- Guest Operating System Installation
- Oracle Installation Prerequisites
- Create Shared Disks
- Install the Clusterware Software
- Install the Database Software
- Create a Database using the DBCA
- TNS Configuration
- Check the Status of the RAC
- Direct and Asynchronous I/O
Introduction
ESX Server is the enterprise level hypervisor from VMware, which installs on the "bare-metal", making it significantly more efficient than desktop virtualization tools. This article uses ESX Server to provide the infrastructure for an Oracle Real Application Clusters installation.
The article assumes you already have a VMware ESX Server and a VMware Infrastructure Client installation. The installation of these products is described here:
- VMware ESX Server Installation
- VMware Infrastructure Client Installation
Download Software
Download the following software.
- Red Hat Enterprise Linux (RHEL) 4
- Oracle 10g (10.2.0.1) CRS and DB software
Virtual Machine Setup
Right-click on the ESX server in the left-hand pane and select the "New Virtual Machine..." menu option.
Select the custom option and click the "Next" button.
Enter the virtual machine name (RAC1) as you would like it to appear in the right-hand pane of the client, then click the "Next" button.
Select the default datastore by clicking the "Next" button.
Select the "Linux" and "Red Hat Enterprise Linux 4" options, then click the "Next" button.
Select the required number of CPUs for the virtual machine, then click the "Next" button.
For this proof of concept we can complete the installation with 1G of memory. Enter the required amount of memory for the virtual machines, then click the "Next" button.
We need at least two network cards. One for the public IP and virtual IP addresses and a separate one for the private IP address. Select the number and type of networks cards required, then click the "Next" button.
Accept the default storage adapter by clicking the "Next" button.
Accept the "Create a new virtual disk" option by clicking the "Next" button.
We are using NFS to store the shared Oracle homes and the database files, so we don't need much disk space on each virtual machine. Assuming you are using 1G of memory, you will be defining 2G of swap, so 10G of disk space is plenty. If you are using more memory, you will need to increase the disk space accordingly. Enter the appropriate disk capacity, then click the "Next" button.
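The sizing above can be sketched as a quick back-of-the-envelope calculation. The "swap of roughly twice the RAM" rule of thumb applies to small systems like this one; the figures below are this article's assumptions, not hard Oracle requirements.

```shell
# Rough sizing sketch for one RAC node (values from this article).
ram_mb=1024                  # 1G of memory for this proof of concept
swap_mb=$((ram_mb * 2))      # roughly twice the RAM for small systems
echo "swap: ${swap_mb} MB"   # matches the 2G swap configured later
```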
The local disks do not need to be sharable, so click the "Next" button to ignore the advanced options.
If you are happy with the summary information, click the "Finish" button.
The virtual machine is now visible in the left-hand pane.
Repeat the process to define the second RAC node (RAC2).
To start the virtual machine, click the play button on the toolbar.
The virtual machine will start and attempt to boot from any installable media, or the network.
Guest Operating System Installation
Place the RHEL 4 DVD into the client PC's DVD drive and start the virtual machine by clicking the play button on the toolbar. The right pane of the VMware ESX Server client should display a boot loader, then the RHEL 4 installation screen.
Continue through the RHEL4 installation as you would for a normal server. A general pictorial guide to the installation can be found here. More specifically, it should be a server installation with a minimum of 2G swap, firewall and SELinux disabled and the following package groups installed:
- X Window System
- GNOME Desktop Environment
- Editors
- Graphical Internet
- Server Configuration Tools
- FTP Server
- Development Tools
- Legacy Software Development
- Administration Tools
- System Tools
To be consistent with the rest of the article, the following information should be set during the installation:
RAC1:
- hostname: rac1.localdomain
- IP Address eth0: 10.1.10.201 (public address)
- Default Gateway eth0: 10.1.10.1 (public address)
- IP Address eth1: 10.1.9.201 (private address)
- Default Gateway eth1: none
RAC2:
- hostname: rac2.localdomain
- IP Address eth0: 10.1.10.202 (public address)
- Default Gateway eth0: 10.1.10.1 (public address)
- IP Address eth1: 10.1.9.202 (private address)
- Default Gateway eth1: none
You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.
Once the basic installation is complete, install the following packages whilst logged in as the root user.
# From the RHEL 4 DVD
cd /media/dvd/RedHat/RPMS

rpm -Uvh setarch-1*
rpm -Uvh compat-libstdc++-33-3*
rpm -Uvh make-3*
rpm -Uvh glibc-2*
rpm -Uvh openmotif-2*
rpm -Uvh compat-db-4*
rpm -Uvh gcc-3*
rpm -Uvh libaio-0*
rpm -Uvh rsh-*
rpm -Uvh compat-gcc-32-3*
rpm -Uvh compat-gcc-32-c++-3*
rpm -Uvh openmotif21*
rpm -Uvh sysstat*

cd /
eject
Remember to perform the VMware Tools installation as described in the VMware ESX Server Installation article.
Oracle Installation Prerequisites
Perform the following steps whilst logged into the RAC1 virtual machine as the root user.
The /etc/hosts file must contain the following information.
127.0.0.1       localhost.localdomain   localhost
# Public
10.1.10.201     rac1.localdomain        rac1
10.1.10.202     rac2.localdomain        rac2
# Private
10.1.9.201      rac1-priv.localdomain   rac1-priv
10.1.9.202      rac2-priv.localdomain   rac2-priv
# Virtual
10.1.10.203     rac1-vip.localdomain    rac1-vip
10.1.10.204     rac2-vip.localdomain    rac2-vip
# NAS
10.1.10.61      nas1.localdomain        nas1
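A quick sanity check before moving on is to make sure no IP address appears twice in the file. This sketch runs the check against a here-document copy of the entries above; an empty result is what you want.

```shell
# Print any IP address that appears more than once (no output = no duplicates).
awk '!/^#/ && NF {print $1}' <<'EOF' | sort | uniq -d
127.0.0.1   localhost.localdomain localhost
10.1.10.201 rac1.localdomain rac1
10.1.10.202 rac2.localdomain rac2
10.1.9.201  rac1-priv.localdomain rac1-priv
10.1.9.202  rac2-priv.localdomain rac2-priv
10.1.10.203 rac1-vip.localdomain rac1-vip
10.1.10.204 rac2-vip.localdomain rac2-vip
10.1.10.61  nas1.localdomain nas1
EOF
```

On a real node you would feed /etc/hosts itself into the same pipeline instead of the here-document.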
Add the following lines to the /etc/sysctl.conf file.
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
#fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
#net.core.rmem_default=262144
#net.core.rmem_max=262144
#net.core.wmem_default=262144
#net.core.wmem_max=262144
# Additional and amended parameters suggested by Kevin Closson
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ipfrag_high_thresh=524288
net.ipv4.ipfrag_low_thresh=393216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_window_scaling=1
net.core.optmem_max=524287
net.core.netdev_max_backlog=2500
sunrpc.tcp_slot_table_entries=128
sunrpc.udp_slot_table_entries=128
net.ipv4.tcp_mem=16384 16384 16384
Run the following command to change the current kernel parameters.
/sbin/sysctl -p
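The two shared memory parameters set above are the ones most often tuned per machine: kernel.shmmax is the largest single shared memory segment in bytes, while kernel.shmall is the total shared memory allowed, counted in pages (4K on x86). As a sketch of what the values above actually allow:

```shell
# kernel.shmmax: largest single shared memory segment, in bytes.
shmmax=2147483648
echo "shmmax: $((shmmax / 1024 / 1024 / 1024)) GB"

# kernel.shmall: total shared memory, in 4K pages.
shmall=2097152
page_size=4096
echo "shmall: $((shmall * page_size / 1024 / 1024 / 1024)) GB"
```

With 1G of RAM per virtual machine these limits are generous; on larger systems shmmax is usually raised to suit the intended SGA size.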
Add the following lines to the /etc/security/limits.conf file.
*    soft    nproc    2047
*    hard    nproc    16384
*    soft    nofile   1024
*    hard    nofile   65536
Add the following line to the /etc/pam.d/login file, if it does not already exist.
session required pam_limits.so
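To avoid adding the line twice on a re-run, the append can be guarded with a grep test. A minimal sketch, demonstrated here against a temporary file rather than the real /etc/pam.d/login:

```shell
# Idempotent append: add the pam_limits line only if it is not already present.
f=$(mktemp)
for i in 1 2; do   # running it twice shows the guard prevents duplicates
  grep -q 'pam_limits.so' "$f" || echo 'session required pam_limits.so' >> "$f"
done
grep -c 'pam_limits.so' "$f"   # the line appears exactly once
rm -f "$f"
```

On the real system, substitute /etc/pam.d/login for the temporary file and run as root.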
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (Applications > System Settings > Security Level). Click on the SELinux tab and disable the feature.
Set the hangcheck kernel module parameters by adding the following line to the /etc/modprobe.conf file.
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
To load the module immediately, execute "modprobe -v hangcheck-timer".
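The two parameters control how often the timer fires (hangcheck_tick) and how long a hang is tolerated beyond that (hangcheck_margin); a node that stalls for longer than roughly their sum is rebooted. A quick sketch of the resulting threshold, using the values above:

```shell
hangcheck_tick=30      # seconds between timer checks
hangcheck_margin=180   # additional seconds of hang tolerated
echo "reboot threshold: $((hangcheck_tick + hangcheck_margin)) seconds"
```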
Create the new groups and users.
groupadd oinstall
groupadd dba
groupadd oper
useradd -u 500 -g oinstall -G dba oracle
passwd oracle
Configure SSH on each node in the cluster. Log in as the "oracle" user and perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
exit
The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.
Log in as the "oracle" user on RAC1, generate an "authorized_keys" file on RAC1 and copy it to RAC2 using the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
exit
Next, log in as the "oracle" user on RAC2 and perform the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac1:/home/oracle/.ssh/
exit
The "authorized_keys" file on both servers now contains the public keys generated on all RAC nodes.
To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.
su - oracle
ssh rac1 date
ssh rac2 date
ssh rac1.localdomain date
ssh rac2.localdomain date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering passwords.
Login as the oracle user and add the following lines at the end of the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Remember to set the ORACLE_SID to RAC2 on the second node.
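Since the profile is otherwise identical on both nodes, one hypothetical way to avoid editing the SID by hand is to derive it from the node's short hostname (rac1 becomes RAC1, rac2 becomes RAC2). A sketch, with the node name hard-coded here for illustration:

```shell
# Hypothetical: derive ORACLE_SID from the node name instead of hard-coding it.
node=rac2                   # on a real node: node=$(hostname -s)
ORACLE_SID=RAC${node#rac}   # strip the "rac" prefix, keep the node number
export ORACLE_SID
echo "$ORACLE_SID"
```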
Create Shared Disks
This installation uses NFS to provide the shared storage for RAC. Amend the following instructions to suit the paths used by your NAS or NFS server.
If you are using a third Linux server as your NFS server, you will need to create some NFS shares as described below.
mkdir /shared_config
mkdir /shared_crs
mkdir /shared_home
mkdir /shared_data
Add the following lines to the /etc/exports file.
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_crs    *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
Run the following commands to export the NFS shares.
chkconfig nfs on
service nfs restart
If you are using a NAS or some other storage appliance that supports NFS, create four equivalent shares.
On both RAC1 and RAC2 create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/crs/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
Add the following lines to the "/etc/fstab" file on each server. The mount options are based on Oracle recommendations from Metalink Note:359515.1.
nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0
nas1:/shared_crs /u01/app/crs/product/10.2.0/crs nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/10.2.0/db_1 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
Mount the NFS shares by running the following commands as the root user on both servers.
mount /u01/shared_config
mount /u01/app/crs/product/10.2.0/crs
mount /u01/app/oracle/product/10.2.0/db_1
mount /u01/oradata
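A quick way to confirm all four shares are defined is to pull the NFS entries out of the fstab. This sketch runs awk against a here-document copy of the entries above; field 3 is the filesystem type and field 2 the mount point.

```shell
# List the mount points of all NFS entries.
awk '$3 == "nfs" {print $2}' <<'EOF'
nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0
nas1:/shared_crs /u01/app/crs/product/10.2.0/crs nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/10.2.0/db_1 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
EOF
```

On the servers themselves, `awk '$3 == "nfs" {print $2}' /etc/fstab` lists the definitions and `mount -t nfs` shows which are actually mounted.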
Create the shared CRS Configuration and Voting Disk files.
touch /u01/shared_config/ocr_configuration
touch /u01/shared_config/voting_disk
Make sure the permissions on the shared directories are correct by running the following commands as the root user on each server.
chown -R oracle:oinstall /u01/shared_config
chown -R oracle:oinstall /u01/app/crs/product/10.2.0/crs
chown -R oracle:oinstall /u01/app/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /u01/oradata
Before installing the clusterware, check the prerequisites have been met using the "runcluvfy.sh" utility in the clusterware root directory.
/mountpoint/clusterware/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
If you get any failures be sure to correct them before proceeding.
Install the Clusterware Software
Place the clusterware and database software in the /u02 directory and unzip them.
cd /u02
unzip 10201_clusterware_linux32.zip
unzip 10201_database_linux32.zip
Login to RAC1 as the oracle user and start the Oracle installer.
cd /u02/clusterware
./runInstaller
On the "Welcome" screen, click the "Next" button.
Accept the default inventory location by clicking the "Next" button.
Enter the appropriate name and path for the Oracle Home and click the "Next" button.
Wait while the prerequisite checks are done. If you have any failures correct them and retry the tests before clicking the "Next" button.
The "Specify Cluster Configuration" screen shows only the RAC1 node in the cluster. Click the "Add" button to continue.
Enter the details for the RAC2 node and click the "OK" button.
Click the "Next" button to continue.
The "Specify Network Interface Usage" screen defines how each network interface will be used. Highlight the "eth0" interface and click the "Edit" button.
Set the "eth0" interface type to "Public" and click the "OK" button.
Leave the "eth1" interface as private and click the "Next" button.
Click the "External Redundancy" option, enter "/u01/shared_config/ocr_configuration" as the OCR Location and click the "Next" button. To have greater redundancy we would need to define another shared disk for an alternate location.
Click the "External Redundancy" option, enter "/u01/shared_config/voting_disk" as the Voting Disk Location and click the "Next" button. To have greater redundancy we would need to define another shared disk for an alternate location.
On the "Summary" screen, click the "Install" button to continue.
Wait while the installation takes place.
Once the install is complete, run the orainstRoot.sh and root.sh scripts on both nodes as directed on the following screen.
The output from the orainstRoot.sh file should look something like that listed below.
# cd /u01/app/oracle/oraInventory
# ./orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
#
The output of the root.sh will vary a little depending on the node it is run on. The following text is the output from the RAC1 node.
# cd /u01/app/crs/product/10.2.0/crs
# ./root.sh
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u01/shared_config/voting_disk
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
#
Ignore the directory ownership warnings. We should really use a separate directory structure for the clusterware so it can be owned by the root user, but it has little effect on the finished results.
The output from the RAC2 node is listed below.
# cd /u01/app/crs/product/10.2.0/crs
# ./root.sh
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/crs/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
#
Here you can see that some of the configuration steps are omitted, as they were already done by the first node. In addition, the final part of the script ran the Virtual IP Configuration Assistant (VIPCA) in silent mode, but it failed. This is because the public IP addresses used in this article fall within the "10.0.0.0/8" private IP range. If you were using routable ("legal") IP addresses you would not see this failure and could ignore the following VIPCA steps.
Run the VIPCA manually as the root user on the RAC2 node using the following command.
# cd /u01/app/crs/product/10.2.0/crs/bin
# ./vipca
Click the "Next" button on the VIPCA welcome screen.
Highlight the "eth0" interface and click the "Next" button.
Enter the virtual IP alias and address for each node. Once you enter the first alias, the remaining values should default automatically. Click the "Next" button to continue.
Accept the summary information by clicking the "Finish" button.
Wait until the configuration is complete, then click the "OK" button.
Accept the VIPCA results by clicking the "Exit" button.
You should now return to the "Execute Configuration Scripts" screen on RAC1 and click the "OK" button.
Wait for the configuration assistants to complete.
When the installation is complete, click the "Exit" button to leave the installer.
The clusterware installation is now complete.
Install the Database Software
Login to RAC1 as the oracle user and start the Oracle installer.
cd /u02/database./runInstaller
On the "Welcome" screen, click the "Next" button.
Select the "Enterprise Edition" option and click the "Next" button.
Enter the name and path for the Oracle Home and click the "Next" button.
Select the "Cluster Install" option and make sure both RAC nodes are selected, then click the "Next" button.
Wait while the prerequisite checks are done. If you have any failures correct them and retry the tests before clicking the "Next" button.
Select the "Install database Software only" option, then click the "Next" button.
On the "Summary" screen, click the "Install" button to continue.
Wait while the database software installs.
Once the installation is complete, wait while the configuration assistants run.
Execute the "root.sh" scripts on both nodes, as instructed on the "Execute Configuration scripts" screen, then click the "OK" button.
When the installation is complete, click the "Exit" button to leave the installer.
Create a Database using the DBCA
Login to RAC1 as the oracle user and start the Database Configuration Assistant.
dbca
On the "Welcome" screen, select the "Oracle Real Application Clusters database" option and click the "Next" button.
Select the "Create a Database" option and click the "Next" button.
Highlight both RAC nodes and click the "Next" button.
Select the "General Purpose" option and click the "Next" button.
Enter the values "RAC.WORLD" and "RAC" for the Global Database Name and SID Prefix respectively, then click the "Next" button.
Accept the management options by clicking the "Next" button. If you are attempting the installation on a server with limited memory, you may prefer not to configure Enterprise Manager at this time.
Enter database passwords then click the "Next" button.
Select the "Cluster File System" option, then click the "Next" button.
Select the "Use Oracle-Managed Files" option and enter "/u01/oradata/" as the database location, then click the "Next" button.
Check the "Specify Flash Recovery Area" option and accept the default location by clicking the "Next" button.
Click the "Next" button on the "Database Content" screen.
Accept the current database services configuration by clicking the "Next" button.
Select the "Custom" memory management option and accept the default settings by clicking the "Next" button.
Accept the database storage settings by clicking the "Next" button.
Accept the database creation options by clicking the "Finish" button.
Accept the summary information by clicking the "OK" button.
Wait while the database is created.
Once the database creation is complete you are presented with the following screen. Make a note of the information on the screen and click the "Exit" button.
The RAC database creation is now complete.
TNS Configuration
Once the installation is complete, the "$ORACLE_HOME/network/admin/listener.ora" file in the shared $ORACLE_HOME will contain the following entries.
# listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER_RAC2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.10.202)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.10.201)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )
The shared "$ORACLE_HOME/network/admin/tnsnames.ora" file will contain the following entries.
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC.WORLD)
    )
  )

LISTENERS_RAC =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521))
  )

RAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC.WORLD)
      (INSTANCE_NAME = RAC2)
    )
  )

RAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC.WORLD)
      (INSTANCE_NAME = RAC1)
    )
  )
This configuration allows direct connections to a specific instance, or load-balanced connections to the main service.
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Aug 20 17:56:33 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> CONN sys/password@rac1 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
RAC1             rac1.localdomain

SQL> CONN sys/password@rac2 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
RAC2             rac2.localdomain

SQL> CONN sys/password@rac AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
RAC1             rac1.localdomain

SQL>
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ srvctl config database -d RAC
rac1 RAC1 /u01/app/oracle/product/10.2.0/db_1
rac2 RAC2 /u01/app/oracle/product/10.2.0/db_1
$
$ srvctl status database -d RAC
Instance RAC1 is running on node rac1
Instance RAC2 is running on node rac2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Aug 20 17:57:24 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> SELECT * FROM v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 rac1.localdomain:RAC1
          2 rac2.localdomain:RAC2

SQL>
Finally, the GV$ views allow you to display global information for the whole RAC.
SQL> SELECT inst_id, username, sid, serial# FROM gv$session WHERE username IS NOT NULL;

   INST_ID USERNAME                              SID    SERIAL#
---------- ------------------------------ ---------- ----------
         1 SYS                                   127          2
         1 SYS                                   128         28
         1 SYS                                   130         10
         1 SYS                                   131          4
         1 SYS                                   133          9
         1 DBSNMP                                134         27
         1 DBSNMP                                135          1
         1 SYS                                   153        122
         2 SYSMAN                                120        243
         2 DBSNMP                                122         37
         2 DBSNMP                                124         93

   INST_ID USERNAME                              SID    SERIAL#
---------- ------------------------------ ---------- ----------
         2 SYSMAN                                125          2
         2 SYSMAN                                127          6
         2 SYS                                   128         26
         2 SYS                                   129         30
         2 SYS                                   130          3
         2 SYS                                   133        149
         2 SYSMAN                                134         58
         2 SYS                                   136         32

19 rows selected.

SQL>
If you have configured Enterprise Manager, it can be used to view the configuration and current status of the database.
Direct and Asynchronous I/O
Remember to use direct I/O and asynchronous I/O to improve performance. Direct I/O has been supported over NFS for some time, but support for asynchronous I/O over NFS was only introduced in RHEL 4 Update 3 (and its clones), so you need to use an up to date version of your Linux distribution to take advantage of this feature.
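Support for asynchronous I/O over NFS arrived with the RHEL 4 Update 3 kernel (2.6.9-34). A hedged sketch of a version check, run here against a sample kernel string rather than the live output of `uname -r`:

```shell
# Hypothetical check: is this RHEL 4 kernel at Update 3 (2.6.9-34) or later?
kernel="2.6.9-42.ELsmp"   # sample string; on a real node: kernel=$(uname -r)
rel=$(echo "$kernel" | sed 's/^2\.6\.9-\([0-9]*\).*/\1/')
if [ "$rel" -ge 34 ]; then
  echo "async I/O over NFS supported"
else
  echo "kernel update required"
fi
```

On the database side, this behaviour is controlled by the FILESYSTEMIO_OPTIONS initialization parameter; setting it to SETALL enables both direct and asynchronous I/O.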
More details about direct and asynchronous I/O can be found here.
For more information see:
- Oracle 10g RAC On Linux Using VMware Server
- Oracle Database 10g Release 2 (10.2.0.1) RAC Installation On Windows 2003 Using VMware Server
- Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireWire
- Installation Guide for Linux x86 (10.2)
- Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Linux (10.2)
- Direct and Asynchronous I/O
Hope this helps. Regards Tim...