Oracle 10g Cluster: Detailed Step-by-Step Setup


I. Environment
Host machine:
OS: Windows 7 64-bit
VirtualBox version: VirtualBox 4.0.8r71778
Add a Loopback adapter on the host with IP 10.10.100.1, for the VMs' Bridged adapters
VirtualBox Host-Only Network IP: 10.10.200.1

Two virtual machines:
A) RAC node 1:
VM name: ODBRAC_1
VM directory: D:\VirtualBox_PC\ODBRAC_1
   OS: Red Hat Enterprise Linux 5.4 64-bit
   Public NIC (NAT): 10.10.100.101   Gateway: 10.10.100.1
   Private NIC (Host-Only): 10.10.200.101
   Hostname: odbrac1.localdomain

B) RAC node 2:
VM name: ODBRAC_2
VM directory: D:\VirtualBox_PC\ODBRAC_2
   OS: Red Hat Enterprise Linux 5.4 64-bit
   Public NIC (NAT): 10.10.100.102   Gateway: 10.10.100.1
   Private NIC (Host-Only): 10.10.200.102
   Hostname: odbrac2.localdomain


II. Installing RHEL 5.4 on the RAC Nodes (each node)
Installing Red Hat Linux is straightforward; the version used here is Red Hat Enterprise Linux 5.4. Because Oracle 10g will be installed, the system requirements are:
   RAM: 1 GB (minimum 512 MB)
   SWAP: 2 GB

The following system components must be installed: the GNOME desktop environment, editors, development tools, development libraries, and so on.
   A) Desktop Environments:
 GNOME Desktop Environment
   B) Applications:
 Editors
 Text-based Internet
   C) Development:
 Development Libraries
 Development Tools
 GNOME Software Development
 Legacy Software Development
   D) Servers:
 None
   E) Base System:
 Administration Tools
 Base
 Legacy Software Support
 X Window System
F) Languages:
 Chinese Support
(Note: when installing in a virtual machine, do not select "System clock uses UTC" on the time zone screen.)


III. Configuring the RHEL Operating System
1) Kernel version requirement
Installing Oracle on Linux requires kernel version 2.4.9-e.25 or later.
Check the kernel version by running:
[root@odbrac1 ~]# uname -r
2.6.18-164.el5
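The manual comparison above can be scripted. A hedged sketch (the `kernel_ok` helper is hypothetical, and GNU `sort -V` is assumed to be available) that succeeds when the running kernel release is at least the required one:

```shell
#!/bin/sh
# Hypothetical helper: succeed if the current kernel release sorts at or
# above the required release (GNU sort -V does the version comparison).
kernel_ok() {
    required=$1
    current=$2
    [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

kernel_ok 2.4.9-e.25 "$(uname -r)" && echo "kernel version OK"
```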

2) Set the hostname and network configuration
a) Node 1, odbrac1
[root@localhost ~]# hostname odbrac1.localdomain

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING_IPV6=no
HOSTNAME=odbrac1.localdomain
GATEWAY=10.10.100.1

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.10.100.101
NETMASK=255.255.255.0
ONBOOT=yes

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.200.101
NETMASK=255.255.255.0
ONBOOT=yes

[root@localhost ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Public
10.10.100.101   odbrac1.localdomain odbrac1
10.10.100.102   odbrac2.localdomain odbrac2
# Private
10.10.200.101   odbrac1-priv.localdomain odbrac1-priv
10.10.200.102   odbrac2-priv.localdomain odbrac2-priv
# Virtual
10.10.100.201   odbrac1-vip.localdomain odbrac1-vip
10.10.100.202   odbrac2-vip.localdomain odbrac2-vip
# SCAN
10.10.100.1     odbrac-scan.localdomain odbrac-scan
(Important: do not put the hostname on the 127.0.0.1 loopback line.)
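The warning above is easy to violate by accident, so a small check helps. A sketch (the `check_loopback` function is hypothetical) that fails when a hostname appears on the 127.0.0.1 line of a hosts-format file:

```shell
#!/bin/sh
# Hypothetical sanity check: return non-zero if the given short hostname
# appears on the 127.0.0.1 loopback line of a hosts-format file.
check_loopback() {
    hosts_file=$1
    short_name=$2
    if grep '^127\.0\.0\.1' "$hosts_file" | grep -qw "$short_name"; then
        echo "BAD: $short_name is on the loopback line of $hosts_file"
        return 1
    fi
    echo "OK: loopback line is clean"
}

# Example: check_loopback /etc/hosts odbrac1
```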

Log back in, then restart networking:
[root@localhost ~]# logout
[root@odbrac1 ~]# service network restart

b) Node 2, odbrac2
[root@localhost ~]# hostname odbrac2.localdomain

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING_IPV6=no
HOSTNAME=odbrac2.localdomain
GATEWAY=10.10.100.1

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.10.100.102
NETMASK=255.255.255.0
ONBOOT=yes

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.200.102
NETMASK=255.255.255.0
ONBOOT=yes

[root@localhost ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Public
10.10.100.102   odbrac2.localdomain odbrac2
10.10.100.101   odbrac1.localdomain odbrac1
# Private
10.10.200.102   odbrac2-priv.localdomain odbrac2-priv
10.10.200.101   odbrac1-priv.localdomain odbrac1-priv
# Virtual
10.10.100.202   odbrac2-vip.localdomain odbrac2-vip
10.10.100.201   odbrac1-vip.localdomain odbrac1-vip
# SCAN
10.10.100.1     odbrac-scan.localdomain odbrac-scan
(Important: do not put the hostname on the 127.0.0.1 loopback line.)

Log back in, then restart networking:
[root@localhost ~]# logout
[root@odbrac2 ~]# service network restart


3) Package installation and updates (each node)
After installing Linux, verify that all packages and updates required by Oracle 10g are present, as follows.
To see which versions of these packages are installed, run the following command as root:
[root@odbrac1 ~]# rpm -q glibc glibc-common glibc-devel glibc-headers gcc gcc-c++ libgcc libstdc++ libstdc++-devel make binutils setarch compat-db compat-gcc-34 compat-gcc-34-c++ compat-libstdc++-296 libXp openmotif openmotif22 ksh pdksh sysstat elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static libaio libaio-devel libgomp unixODBC unixODBC-devel
glibc-2.5-42
glibc-2.5-42
glibc-common-2.5-42
glibc-devel-2.5-42
glibc-devel-2.5-42
glibc-headers-2.5-42
gcc-4.1.2-46.el5
gcc-c++-4.1.2-46.el5
libgcc-4.1.2-46.el5
libgcc-4.1.2-46.el5
libstdc++-4.1.2-46.el5
libstdc++-4.1.2-46.el5
libstdc++-devel-4.1.2-46.el5
make-3.81-3.el5
binutils-2.17.50.0.6-12.el5
setarch-2.0-1.1
compat-db-4.2.52-5.1
compat-gcc-34-3.4.6-4
compat-gcc-34-c++-3.4.6-4
compat-libstdc++-296-2.96-138
libXp-1.0.0-8.1.el5
libXp-1.0.0-8.1.el5
openmotif-2.3.1-2.el5
openmotif22-2.2.3-18
ksh-20080202-14.el5
pdksh-5.2.14-36.el5
sysstat-7.0.2-3.el5
elfutils-libelf-0.137-3.el5
elfutils-libelf-0.137-3.el5
elfutils-libelf-devel-0.137-3.el5
elfutils-libelf-devel-static-0.137-3.el5
libaio-0.3.106-3.2
libaio-0.3.106-3.2
libaio-devel-0.3.106-3.2
libaio-devel-0.3.106-3.2
libgomp-4.4.0-6.el5
libgomp-4.4.0-6.el5
unixODBC-2.2.11-7.1
unixODBC-2.2.11-7.1
unixODBC-devel-2.2.11-7.1
unixODBC-devel-2.2.11-7.1

If any of these packages is missing, or its version is older than the one listed above (compat-db excepted), install it. All of them are available in the Server directory of the RHEL installation DVD. (Note: both the 32-bit and 64-bit packages must be installed.)
[root@odbrac1 ~]# mount /dev/cdrom /media/cdrom
[root@odbrac1 ~]# cd /media/cdrom/Server
[root@odbrac1 Server]# rpm -Uvh setarch-2.0-1.1.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh make-3.81-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh glib-1.2.10-20.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh glib-1.2.10-20.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh glibc-common-2.5-42.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh glibc-headers-2.5-42.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh glibc-devel-2.5-42.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh glibc-devel-2.5-42.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libaio-0.3.106-3.2.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libaio-0.3.106-3.2.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libaio-devel-0.3.106-3.2.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libaio-devel-0.3.106-3.2.i386.rpm
[root@odbrac1 Server]# rpm -Uvh compat-db-4.2.52-5.1.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh compat-db-4.2.52-5.1.i386.rpm
[root@odbrac1 Server]# rpm -Uvh compat-libstdc++-296-2.96-138.i386.rpm
[root@odbrac1 Server]# rpm -Uvh compat-libf2c-34-3.4.6-4.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh compat-libf2c-34-3.4.6-4.i386.rpm
[root@odbrac1 Server]# rpm -Uvh compat-gcc-34-3.4.6-4.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh compat-gcc-34-c++-3.4.6-4.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh binutils-2.17.50.0.6-12.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh gcc-4.1.2-46.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh gcc-c++-4.1.2-46.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libgcc-4.1.2-46.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libgcc-4.1.2-46.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libstdc++-4.1.2-46.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libstdc++-4.1.2-46.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libstdc++-devel-4.1.2-46.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libstdc++-devel-4.1.2-46.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libXp-1.0.0-8.1.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libXp-1.0.0-8.1.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh openmotif-2.3.1-2.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh openmotif-2.3.1-2.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh openmotif22-2.2.3-18.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh openmotif22-2.2.3-18.i386.rpm
[root@odbrac1 Server]# rpm -Uvh ksh-20080202-14.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh pdksh-5.2.14-36.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh sysstat-7.0.2-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-0.137-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-0.137-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-0.137-3.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-devel-0.137-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-devel-0.137-3.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-devel-static-0.137-3.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh elfutils-libelf-devel-static-0.137-3.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh libgomp-4.4.0-6.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh libgomp-4.4.0-6.el5.i386.rpm
[root@odbrac1 Server]# rpm -Uvh unixODBC-2.2.11-7.1.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh unixODBC-2.2.11-7.1.i386.rpm
[root@odbrac1 Server]# rpm -Uvh unixODBC-devel-2.2.11-7.1.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh unixODBC-devel-2.2.11-7.1.i386.rpm
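Since most of these packages need both architectures, the long run of rpm commands above can be shortened with a small helper (hypothetical; the filenames are assumed to match the layout of the RHEL DVD's Server/ directory):

```shell
#!/bin/sh
# Hypothetical helper: expand one package base name into its x86_64 and
# i386 RPM filenames, as laid out in the RHEL DVD's Server/ directory.
both_arches() {
    base=$1
    echo "${base}.x86_64.rpm ${base}.i386.rpm"
}

# Example (run as root from /media/cdrom/Server):
#   rpm -Uvh $(both_arches libaio-0.3.106-3.2) $(both_arches libaio-devel-0.3.106-3.2)
both_arches libaio-0.3.106-3.2
```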

4) Verify system requirements
To verify that the system meets the minimum requirements for an Oracle 10g database, log in as root and run the following commands.

To check the available RAM and swap space:
[root@odbrac1 ~]# grep MemTotal /proc/meminfo
MemTotal:       512236 kB
[root@odbrac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:     1574360 kB

The minimum required RAM is 512 MB, and the minimum required swap space is 1 GB. For systems with 2 GB of RAM or less, swap should be twice the amount of RAM; for systems with more than 2 GB of RAM, swap should be one to two times the amount of RAM.
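The sizing rule above can be expressed as a tiny helper (hypothetical; megabyte units, taking the lower bound of the 1x-2x range for large-memory systems):

```shell
#!/bin/sh
# Hypothetical swap sizing from the rule above, in MB:
# RAM <= 2048 MB -> swap = 2 x RAM; RAM > 2048 MB -> swap = RAM (lower bound).
swap_for_ram() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        echo $(( ram_mb * 2 ))
    else
        echo "$ram_mb"
    fi
}

swap_for_ram 1024   # a 1 GB node needs 2048 MB of swap
```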

The Oracle 10g software also requires 2.5 GB of free disk space, and the database needs another 1.2 GB. The /tmp directory needs at least 400 MB free. To check the free disk space on the system:
[root@odbrac1 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3         6.8G  1.3G  5.2G  20% /
/dev/sda1         99M   17M   77M  18% /boot

5) Create the Oracle groups and user accounts
The Oracle database must be installed as a dedicated oracle user, so create the required groups and users, and set the corresponding directory ownership and permissions:
[root@odbrac1 ~]# groupadd oinstall
[root@odbrac1 ~]# groupadd dba
[root@odbrac1 ~]# groupadd oper
[root@odbrac1 ~]# groupadd asmadmin
[root@odbrac1 ~]# groupadd asmdba
[root@odbrac1 ~]# groupadd asmoper
[root@odbrac1 ~]# useradd -g oinstall -G dba,oper,asmdba -m -d /home/oracle oracle
[root@odbrac1 ~]# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba),504(oper),506(asmdba)
[root@odbrac1 ~]# passwd oracle
[root@odbrac1 ~]# useradd -g oinstall -G asmadmin,asmdba,asmoper -m -d /home/grid grid
[root@odbrac1 ~]# id grid
uid=502(grid) gid=502(oinstall) groups=502(oinstall),505(asmadmin),506(asmdba),507(asmoper)
[root@odbrac1 ~]# passwd grid

6) Create the Oracle installation directories
Oracle RAC installation directory:
[root@odbrac1 ~]# mkdir -p /app/oracle/product/10.2.0/db
Oracle Clusterware installation directory:
[root@odbrac1 ~]# mkdir -p /app/oracle/product/10.2.0/crs
Mount point for the shared OCFS2 disk:
[root@odbrac1 ~]# mkdir -p /app/oracle/ocfs2
Change ownership and permissions on the installation directories:
[root@odbrac1 ~]# chown -R oracle.oinstall /app/oracle
[root@odbrac1 ~]# chmod -R 775 /app/oracle

7) Configure Linux kernel parameters
Unlike most other *NIX systems, Linux allows most kernel parameters to be modified while the system is up, so no reboot is needed after changing them. Oracle Database 10g requires the kernel parameter settings shown below. They are minimum values; if your system already uses a larger value, do not lower it.
Sizing rules:
kernel.shmmax should be the lower of 4 GB minus 1 byte or half the physical memory (the default kernel.shmmax is usually sufficient); fs.file-max should be 512 * PROCESSES.

As root, append the following to the end of the file:
[root@odbrac1 ~]# vim /etc/sysctl.conf
# For Oracle 10g
kernel.msgmni = 2878
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 142
fs.file-max = 131072
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After saving the file, activate the changes:
[root@odbrac1 ~]# /sbin/sysctl -p
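Because the values above are minimums, a quick way to confirm the running kernel honors them is to compare `sysctl -n` output against each floor. A sketch (the `at_least` helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical check: is the current value at least the documented minimum?
at_least() {
    current=$1
    minimum=$2
    [ "$current" -ge "$minimum" ]
}

# Example (on the RAC node):
#   at_least "$(sysctl -n fs.file-max)" 131072 && echo "fs.file-max OK"
at_least 131072 131072 && echo "fs.file-max OK"
```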

8) Set shell limits for the oracle user
Oracle recommends limiting the number of processes and open files available to each Linux account. As root, append the following lines to the file:
[root@odbrac1 ~]# vim /etc/security/limits.conf
# For Oracle 10g
oracle soft nproc   2047
oracle hard nproc  16384
oracle soft nofile  1024
oracle hard nofile 65536

[root@odbrac1 ~]# vim /etc/pam.d/login
# For Oracle 10g
session    required     pam_limits.so

(Fix for slow SSH logins:)
[root@odbrac1 ~]# vim /etc/ssh/sshd_config
UseDNS=no

As root, append the following to the end of the file:
[root@odbrac1 ~]# vim /etc/profile
# For Oracle 10g
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
     ulimit -p 16384
     ulimit -n 65536
  else
     ulimit -u 16384 -n 65536
  fi
  umask 022
fi

As root, append the following to the end of the file:
[root@odbrac1 ~]# vim /etc/csh.login
# For Oracle 10g
if ( $USER == "oracle" ) then
  limit maxproc 16384
  limit descriptors 65536
  umask 022
endif

Create a symlink for /usr/lib/libdb.so.2:
[root@odbrac1 ~]# ln -s /usr/lib/libgdbm.so.2.0.0 /usr/lib/libdb.so.2

9) Disable the firewall:
[root@odbrac1 ~]# chkconfig iptables off

10) Disable SELinux by editing /etc/selinux/config:
[root@odbrac1 ~]# vim /etc/selinux/config
SELINUX=disabled
(Note: a reboot is required.)

11) Configure the hangcheck-timer kernel module (each node):
First confirm that the module file exists:
[root@odbrac1 ~]# find /lib/modules -name "hangcheck-timer.ko"
[root@odbrac1 ~]# vim /etc/modprobe.conf
Append the following:
# For Oracle 10g
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Load the hangcheck-timer module:
[root@odbrac1 ~]# /sbin/modprobe -v hangcheck_timer
insmod /lib/modules/2.6.18-164.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180

Check that it started successfully:
[root@odbrac1 ~]# grep hangcheck /var/log/messages |tail -2
Jun 23 10:51:06 localhost kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 30 seconds, margin is 180 seconds).

12) Reduce the VMs' idle CPU usage by editing /etc/grub.conf:
[root@odbrac1 ~]# vim /etc/grub.conf
Change:
kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
to:
kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet divider=10

====================================================================================================================================================================

13) Establish user equivalence between all cluster nodes
(There are two options: configure SSH, or RSH and RLOGIN; choose one.)
a) Configure SSH (each node)
Edit sshd_config to allow root login and public-key authentication:
[root@odbrac1 ~]# vim /etc/ssh/sshd_config
PermitRootLogin yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys

Restart the SSH service:
[root@odbrac1 ~]# service sshd restart

Establish SSH equivalence for the oracle user:
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
f6:7b:c7:d8:70:ab:1c:dd:90:15:f3:89:95:46:62:f1 oracle@odbrac1.localdomain
(Accept the defaults by pressing Enter at each prompt.)
[root@odbrac2 ~]# su - oracle
[oracle@odbrac2 ~]$ ssh-keygen -t dsa

Generate the authorized_keys file on each node (note: every node generates its own and copies it to the others):
[oracle@odbrac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@odbrac2 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@odbrac1 ~]$ scp ~/.ssh/authorized_keys root@10.10.100.102:/tmp/authorized_keys.odbrac1
[oracle@odbrac2 ~]$ scp ~/.ssh/authorized_keys root@10.10.100.101:/tmp/authorized_keys.odbrac2
Merge the authorized_keys files on each node:
[oracle@odbrac1 ~]$ cat /tmp/authorized_keys.odbrac2 >> ~/.ssh/authorized_keys
[oracle@odbrac2 ~]$ cat /tmp/authorized_keys.odbrac1 >> ~/.ssh/authorized_keys
Use more to check that the authorized_keys file is identical on every node:
[oracle@odbrac1 ~]$ more ~/.ssh/authorized_keys
[oracle@odbrac2 ~]$ more ~/.ssh/authorized_keys
Change the permissions on authorized_keys:
[oracle@odbrac1 ~]$ chmod 644 ~/.ssh/authorized_keys
[oracle@odbrac2 ~]$ chmod 644 ~/.ssh/authorized_keys
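The generate/copy/merge steps above amount to building one combined key list on every node. A sketch of the merge (the `merge_keys` helper is hypothetical; `sort -u` also drops accidental duplicate keys):

```shell
#!/bin/sh
# Hypothetical merge: combine any number of authorized_keys fragments into
# one deduplicated list on stdout.
merge_keys() {
    cat "$@" | sort -u
}

# Example (node 1):
#   merge_keys ~/.ssh/authorized_keys /tmp/authorized_keys.odbrac2 > /tmp/merged
```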

Establish the SSH trust relationship (each node):
[oracle@odbrac1 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@odbrac1 ~]$ /usr/bin/ssh-add
[oracle@odbrac2 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@odbrac2 ~]$ /usr/bin/ssh-add
Run the following commands to test connectivity (each node):
[oracle@odbrac1 ~]$ ssh localhost date
[oracle@odbrac1 ~]$ ssh odbrac1 date
[oracle@odbrac1 ~]$ ssh odbrac2 date
[oracle@odbrac1 ~]$ ssh odbrac1-priv date
[oracle@odbrac1 ~]$ ssh odbrac2-priv date
[oracle@odbrac2 ~]$ ssh localhost date
[oracle@odbrac2 ~]$ ssh odbrac2 date
[oracle@odbrac2 ~]$ ssh odbrac1 date
[oracle@odbrac2 ~]$ ssh odbrac2-priv date
[oracle@odbrac2 ~]$ ssh odbrac1-priv date
(Enter "yes" to accept and register each key.)

b) Configure RSH and RLOGIN
Check whether RSH is installed:
[root@odbrac1 ~]# rpm -q rsh rsh-server
rsh-0.17-40.el5
rsh-server-0.17-40.el5
If not, install it from the RHEL installation DVD:
[root@odbrac1 ~]# mount /dev/cdrom /media/cdrom
[root@odbrac1 ~]# cd /media/cdrom/Server
[root@odbrac1 Server]# rpm -Uvh rsh-0.17-40.el5.x86_64.rpm
[root@odbrac1 Server]# rpm -Uvh rsh-server-0.17-40.el5.x86_64.rpm
Enable RSH and RLOGIN:
[root@odbrac1 ~]# chkconfig rsh on
[root@odbrac1 ~]# chkconfig rlogin on
[root@odbrac1 ~]# service xinetd reload
Create the /etc/hosts.equiv file:
[root@odbrac1 ~]# touch /etc/hosts.equiv
[root@odbrac1 ~]# chmod 600 /etc/hosts.equiv
[root@odbrac1 ~]# chown root:root /etc/hosts.equiv

[root@odbrac1 ~]# vim /etc/hosts.equiv
Add the following lines to /etc/hosts.equiv:
+odbrac1 oracle
+odbrac2 oracle
+odbrac1-priv oracle
+odbrac2-priv oracle

14) Time synchronization
There are two options: modify /etc/sysconfig/ntpd, or disable NTP.
Modify the configuration:
[root@odbrac1 ~]# vim /etc/sysconfig/ntpd
Change:
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"
to:
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Restart the ntpd service:
[root@odbrac1 ~]# service ntpd restart

Disable NTP:
[root@odbrac1 ~]# service ntpd stop
[root@odbrac1 ~]# chkconfig ntpd off
[root@odbrac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@odbrac1 ~]# rm /var/run/ntpd.pid

Start the time service on node 1:
[root@odbrac1 ~]# chkconfig time-dgram on
[root@odbrac1 ~]# chkconfig time-stream on
[root@odbrac1 ~]# service xinetd restart

Test node 1's time service from the other nodes:
[root@odbrac2 ~]# /usr/bin/rdate -p odbrac1-priv
Synchronize the other nodes' clocks with node 1:
[root@odbrac2 ~]# /usr/bin/rdate -s odbrac1-priv

Set up a cron job on the other nodes:
[root@odbrac2 ~]# crontab -e
Add the following (synchronizes every 10 minutes):
*/10 * * * * /usr/bin/rdate -s odbrac1-priv

15) Create the shared disks:
Shut down the virtual machines (each node):
[root@odbrac1 ~]# shutdown -h now
[root@odbrac2 ~]# shutdown -h now

a) On the Windows host, create a shared-disk directory, at the path:
D:\VirtualBox_PC\ODBRAC_ShareDisk

b) Create a 17 GB shared disk (VirtualBox installs to C:\Program Files\Oracle\VirtualBox by default). Run the following command on the host (VirtualBox 4.0.0 or later):
VBoxManage createhd --filename D:\VirtualBox_PC\ODBRAC_ShareDisk\sharedisk.vdi --size 17000 --format VDI --variant Fixed

c) Attach the shared disk to each VM's storage controller:
VBoxManage storageattach ODBRAC_1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium D:\VirtualBox_PC\ODBRAC_ShareDisk\sharedisk.vdi --mtype shareable
VBoxManage storageattach ODBRAC_2 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium D:\VirtualBox_PC\ODBRAC_ShareDisk\sharedisk.vdi --mtype shareable

d) Mark the shared disk as shareable:
VBoxManage modifyhd D:\VirtualBox_PC\ODBRAC_ShareDisk\sharedisk.vdi --type shareable

e) Start VM 1 (node 1) and partition the shared disk (i.e. sdb):
[root@odbrac1 ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb

(The fdisk output below shows that sdb has 2167 cylinders of 8,225,280 bytes each. A 1 GB partition therefore needs 1*1000*1000*1000/8225280 = 121.58, or about 122 cylinders.)
[root@odbrac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 2167.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)  
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 17.8 GB, 17825792000 bytes
255 heads, 63 sectors/track, 2167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System

Partition plan:
One 1 GB OCFS2 partition (OCR + CRS voting), two 7 GB ASM partitions (Oracle data), and one ASM partition from the remaining space (Flash Recovery Area).
(Note: 1 GB is more than enough for the OCFS2 partition in any environment; it actually needs only 100 MB each for the OCR and its mirror, plus 20 MB each for the CRS voting disk and its two mirrors, 260 MB in total. 4 GB is plenty for the Flash Recovery Area.)
Before partitioning, use the formula above to calculate the cylinder ranges for your layout and write them down, here:
1-122
123-974
975-1826
1827-2167
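The cylinder arithmetic above generalizes. A sketch (the `cyls_for_gb` helper is hypothetical; 64-bit shell arithmetic assumed) that turns a size in GB into a cylinder count for this disk geometry:

```shell
#!/bin/bash
# Hypothetical: cylinders needed for a partition of N GB on a disk with
# 8,225,280 bytes per cylinder (255 heads x 63 sectors x 512 bytes).
BYTES_PER_CYL=8225280
cyls_for_gb() {
    gb=$1
    # integer ceiling of gb * 10^9 / BYTES_PER_CYL
    echo $(( (gb * 1000000000 + BYTES_PER_CYL - 1) / BYTES_PER_CYL ))
}

cyls_for_gb 1   # 122 cylinders for the 1 GB OCFS2 partition
cyls_for_gb 7   # 852 cylinders for each 7 GB ASM partition
```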

Start partitioning:
[root@odbrac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 2167.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 17.8 GB, 17825792000 bytes
255 heads, 63 sectors/track, 2167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
  e   extended
  p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2167, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2167, default 2167): 122

Command (m for help): n
Command action
  e   extended
  p   primary partition (1-4)
e
Partition number (1-4): 2
First cylinder (123-2167, default 123):
Using default value 123
Last cylinder or +size or +sizeM or +sizeK (123-2167, default 2167):
Using default value 2167

Command (m for help): p

Disk /dev/sdb: 17.8 GB, 17825792000 bytes
255 heads, 63 sectors/track, 2167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         122      979933+  83  Linux
/dev/sdb2             123        2167    16426462+   5  Extended

Command (m for help): n
Command action
  l   logical (5 or over)
  p   primary partition (1-4)
l
First cylinder (123-2167, default 123):
Using default value 123
Last cylinder or +size or +sizeM or +sizeK (123-2167, default 2167): 974

Command (m for help): n
Command action
  l   logical (5 or over)
  p   primary partition (1-4)
l
First cylinder (975-2167, default 975):
Using default value 975
Last cylinder or +size or +sizeM or +sizeK (975-2167, default 2167): 1826

Command (m for help): n
Command action
  l   logical (5 or over)
  p   primary partition (1-4)
l
First cylinder (1827-2167, default 1827):
Using default value 1827
Last cylinder or +size or +sizeM or +sizeK (1827-2167, default 2167):
Using default value 2167

Command (m for help): p

Disk /dev/sdb: 17.8 GB, 17825792000 bytes
255 heads, 63 sectors/track, 2167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         122      979933+  83  Linux
/dev/sdb2             123        2167    16426462+   5  Extended
/dev/sdb5             123         974     6843658+  83  Linux
/dev/sdb6             975        1826     6843658+  83  Linux
/dev/sdb7            1827        2167     2739051   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Check the partition table:
[root@odbrac1 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 17.8 GB, 17825792000 bytes
255 heads, 63 sectors/track, 2167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         122      979933+  83  Linux
/dev/sdb2             123        2167    16426462+   5  Extended
/dev/sdb5             123         974     6843658+  83  Linux
/dev/sdb6             975        1826     6843658+  83  Linux
/dev/sdb7            1827        2167     2739051   83  Linux


16) Install and configure OCFS2
a) Check the RHEL Linux kernel version:
[root@odbrac1 dev]# uname -rm
2.6.18-164.el5 x86_64

b) Download the matching OCFS2 build from the Oracle site:
OCFS2 download URL: http://oss.oracle.com/projects/ocfs2/dist/files/RedHat/RHEL5/x86_64
OCFS2 Tools and Console download URL: http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/x86_64

c) Install OCFS2 (each node):
[root@odbrac1 ~]# rpm -Uvh ocfs2-tools-1.4.4-1.el5.x86_64.rpm
[root@odbrac1 ~]# rpm -Uvh ocfs2-2.6.18-164.el5-1.4.7-1.el5.x86_64.rpm
[root@odbrac1 ~]# rpm -Uvh ocfs2console-1.4.4-1.el5.x86_64.rpm

d) Configure OCFS2 (first node):
Start X:
[root@odbrac1 ~]# startx
Open a terminal window and run:
[root@odbrac1 ~]# ocfs2console &
Cluster -> Configure Nodes; add all nodes (use the private interconnect IPs, not the public IPs, but Name must match the hostname):
Active  Name     Node  IP Address      IP Port
*  odbrac1    0  10.10.200.101  7777
*  odbrac2    1  10.10.200.102  7777
Click "Apply" to save; this generates the configuration file /etc/ocfs2/cluster.conf.
Cluster -> Propagate Configuration pushes the configuration to every node:
A terminal window opens; enter the root password until it reports Finished.
(Note: if the configuration cannot be generated on the first node, generate it on another node instead.)
(Note: to reconfigure after a mistake, delete /etc/ocfs2/cluster.conf and configure again.)

e) Start the O2CB service (each node):
[root@odbrac1 ~]# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@odbrac1 ~]# /etc/init.d/o2cb online
Cluster ocfs2 already online

f) Configure the O2CB cluster service (each node):
Before doing anything with OCFS2 (such as formatting or mounting a file system), the O2CB cluster stack must be running. On each node, run:
[root@odbrac1 ~]# /etc/init.d/o2cb offline ocfs2
Stopping O2CB cluster ocfs2: OK

[root@odbrac1 ~]# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

[root@odbrac1 ~]# vim /etc/sysconfig/o2cb
O2CB_ENABLED=true
O2CB_STACK=o2cb
O2CB_BOOTCLUSTER=ocfs2
O2CB_HEARTBEAT_THRESHOLD=61
O2CB_IDLE_TIMEOUT_MS=30000
O2CB_KEEPALIVE_DELAY_MS=2000
O2CB_RECONNECT_DELAY_MS=2000
(Edit the /etc/sysconfig/o2cb configuration file so the parameter values match the above.)

[root@odbrac1 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: o2cb
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 30000
Specify network keepalive delay in ms (>=1000) [2000]: 2000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

g) Create the file system on the OCFS2 partition, i.e. /dev/sdb1 (first node):
This needs to be run on one node only.
[root@odbrac1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracrsfiles /dev/sdb1
mkfs.ocfs2 1.4.4
Cluster stack: classic o2cb
Label: oracrsfiles
Features: sparse backup-super unwritten inline-data strict-journal-super
Block size: 4096 (12 bits)
Cluster size: 32768 (15 bits)
Volume size: 1003421696 (30622 clusters) (244976 blocks)
Cluster groups: 1 (tail covers 30622 clusters, rest cover 30622 clusters)
Extent allocator size: 4194304 (1 groups)
Journal size: 16777216
Node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Writing lost+found: done
mkfs.ocfs2 successful

(Options:
-b : block size
-C : cluster size
-N : number of node slots, i.e. how many nodes may mount this volume simultaneously
-L : volume label)

h) Mount the OCFS2 file system on the mount point created earlier, /app/oracle/ocfs2 (each node):
(Note: mount the cluster file system by its OCFS2 label, oracrsfiles, as the root user on each node.)
[root@odbrac1 ~]# mount -t ocfs2 -o datavolume,nointr -L "oracrsfiles" /app/oracle/ocfs2

Confirm with the mount command:
[root@odbrac1 ~]# mount -t ocfs2
/dev/sdb1 on /app/oracle/ocfs2 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

Set up automatic mounting at boot:
[root@odbrac1 ~]# vim /etc/fstab
LABEL=oracrsfiles /app/oracle/ocfs2 ocfs2 _netdev,datavolume,nointr 0 0

i) Set permissions on the OCFS2 file system (first node):
[root@odbrac1 ~]# chown oracle:oinstall /app/oracle/ocfs2
[root@odbrac1 ~]# chmod 775 /app/oracle/ocfs2
[root@odbrac1 ~]# ls -ld /app/oracle/ocfs2
drwxrwxr-x 2 oracle oinstall 4096 Jun 24 09:33 /app/oracle/ocfs2

j) Create the directory for the OCR file and voting disk (first node):
[root@odbrac1 ~]# mkdir -p /app/oracle/ocfs2/oradata/racdb
[root@odbrac1 ~]# chown -R oracle:oinstall /app/oracle/ocfs2/oradata
[root@odbrac1 ~]# chmod -R 775 /app/oracle/ocfs2/oradata
[root@odbrac1 ~]# ls -ld /app/oracle/ocfs2/oradata
drwxrwxr-x 3 oracle oinstall 3896 Jun 24 10:30 /app/oracle/ocfs2/oradata


17) Install and configure ASMLib
a) Check the RHEL Linux kernel version:
[root@odbrac1 dev]# uname -rm
2.6.18-164.el5 x86_64

b) Download the matching ASMLib build from the Oracle site:
ASMLib 2.0 download URL: http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html

c) Install ASMLib (each node):
[root@odbrac1 ~]# rpm -Uvh oracleasm-support-2.1.4-1.el5.x86_64.rpm
[root@odbrac1 ~]# rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
[root@odbrac1 ~]# rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm

d) Configure ASMLib with the following command (each node):
[root@odbrac1 ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

e) Enable the ASMLib driver (each node):
[root@odbrac1 ~]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

f) Configure the disks for ASM, i.e. the ASM partitions (first node only):
Tell the ASMLib driver which disks to use (note that these are empty disks containing nothing), by marking the disks for ASMLib as root:
/etc/init.d/oracleasm createdisk {DISK_NAME} {device_name}
(Tip: enter {DISK_NAME} in uppercase. A bug in the current release prevents the ASM instance from recognizing disks named in lowercase.)
Note: run this from one cluster host only.
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb5
Marking disk "DISK1" as an ASM disk: [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK2 /dev/sdb6
Marking disk "DISK2" as an ASM disk: [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK3 /dev/sdb7
Marking disk "DISK3" as an ASM disk: [  OK  ]
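The three createdisk calls map disk numbers onto the partitions created earlier (DISK1 -> /dev/sdb5 and so on), so they can be scripted. A sketch (the `asm_partition` helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical mapping from ASM disk number to the partition laid out
# earlier: DISK1 -> /dev/sdb5, DISK2 -> /dev/sdb6, DISK3 -> /dev/sdb7.
asm_partition() {
    echo "/dev/sdb$(( $1 + 4 ))"
}

# Example (as root, on the first node only):
#   for i in 1 2 3; do
#       /etc/init.d/oracleasm createdisk "DISK$i" "$(asm_partition $i)"
#   done
asm_partition 1
```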

g) List all disks marked for use by ASMLib:
[root@odbrac1 ~]# /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3

h) On VM 2 (node 2), scan for the configured ASMLib disks:
[root@odbrac2 dev]# /etc/init.d/oracleasm scandisks
(Note: on every other cluster node, it is enough to run the command above as root to pick up the configured ASMLib disks.)

===============================================================================================================================================
IV. Installing CRS (Oracle Clusterware) and Oracle Database 10g
Versions installed:
Oracle Clusterware 10.2.0.1: 10201_clusterware_linux_x86_64.cpio.gz
Oracle Database 10g R2 10.2.0.1: 10201_database_linux_x86_64.cpio.gz

1) Configure the oracle user's environment variables (each node)
Several environment variables should, or must, be set in order to use Oracle products. For a database server, the following are recommended:
ORACLE_BASE
ORACLE_HOME
ORACLE_SID
PATH
If multiple Oracle products or databases are installed on the same server, ORACLE_HOME, ORACLE_SID, and PATH may differ per product.

ORACLE_BASE should not change, and can be set in your login profile when needed. Oracle provides a utility called oraenv for setting the other variables.
Log in as oracle and add ORACLE_BASE to the login profile by appending the following lines to .bash_profile or .profile (bash or ksh):
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ vim .bash_profile
# For Oracle 10g
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db; export ORACLE_HOME
ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs; export ORA_CRS_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$ORACLE_HOME/ldap/bin:$PATH; export PATH
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$ORACLE_HOME/opmn/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib; export CLASSPATH

(Note: on node 2, ORACLE_SID must be oradb2.)
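Since ORACLE_SID differs per node, the profile line can be derived from the hostname instead of edited by hand on each node. A sketch following the odbracN → oradbN mapping used in this guide; the fallback branch is only there so it runs outside the cluster:

```shell
# Sketch: derive the per-node ORACLE_SID from the short hostname,
# following the odbrac1 -> oradb1 naming used in this guide.
node=$(hostname -s 2>/dev/null)
case "$node" in
  odbrac1) ORACLE_SID=oradb1 ;;
  odbrac2) ORACLE_SID=oradb2 ;;
  *)       ORACLE_SID=oradb1 ;;   # fallback so the sketch runs off-cluster
esac
export ORACLE_SID
echo "ORACLE_SID=$ORACLE_SID"
```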

2) Install CRS (Clusterware is found on the Database 10g download page; in 11g it is called Grid Infrastructure)
(Important: install it on the first node only; the installer copies it to the other nodes automatically.)
Clusterware R2 10.2.0.1.0 Download URL: http://www.oracle.com/technetwork/database/10201linx8664soft-092456.html

Unpack the 10201_clusterware_linux_x86_64.cpio.gz archive:
[root@odbrac1 tmp]# gzip -dc /media/sf_odb/10201_clusterware_linux_x86_64.cpio.gz | cpio -div

First install cvuqdisk, the package required by the CVU (Cluster Verification Utility) (on every node):
[root@odbrac1 ~]# export CVUQDISK_GRP=oinstall
[root@odbrac1 ~]# cd /tmp/clusterware/rpm
[root@odbrac1 rpm]# rpm -Uvh cvuqdisk-1.0.1-1.rpm
[root@odbrac2 ~]# scp root@odbrac1:/tmp/clusterware/rpm/cvuqdisk-1.0.1-1.rpm /tmp
[root@odbrac2 ~]# export CVUQDISK_GRP=oinstall
[root@odbrac2 ~]# rpm -Uvh /tmp/cvuqdisk-1.0.1-1.rpm
Verify that the cvuqdisk package is installed:
[root@odbrac1 ~]# ls -l /usr/sbin/cvuqdisk
-rwsr-x--- 1 root oinstall 4168 Jun  2  2005 /usr/sbin/cvuqdisk
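The expected setuid-root permissions can be checked from a script as well. A sketch; the -rws pattern and oinstall group match the ls output shown above, and the result strings are illustrative, not part of any Oracle tool:

```shell
# Sketch: verify cvuqdisk is installed setuid-root with group oinstall.
f=/usr/sbin/cvuqdisk
if [ -e "$f" ]; then
  perm=$(stat -c '%A %U:%G' "$f")     # e.g. "-rwsr-x--- root:oinstall"
  case "$perm" in
    -rws*root:oinstall) result="cvuqdisk OK: $perm" ;;
    *)                  result="unexpected permissions: $perm" ;;
  esac
else
  result="cvuqdisk not installed"
fi
echo "$result"
```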

Use the CVU to run the CRS pre-installation checks:
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ cd /tmp/clusterware/cluvfy
[oracle@odbrac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n odbrac1,odbrac2 -verbose
Review the CVU report. The VIP-related error can be ignored (see Metalink note 338924.1), namely:
ERROR:
Could not find a suitable set of interfaces for VIPs

Use the CVU to check the hardware and operating system setup:
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ cd /tmp/clusterware/cluvfy
[oracle@odbrac1 cluvfy]$ ./runcluvfy.sh stage -post hwos -n odbrac1,odbrac2 -verbose
Review the CVU report. As before, the VIP-related error can be ignored (see Metalink note 338924.1):
ERROR:
Could not find a suitable set of interfaces for VIPs
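Since this error is expected, it can be filtered out before reviewing the report. A sketch; the embedded sample text is a stand-in for real runcluvfy.sh output, so the filter can be demonstrated offline:

```shell
# Sketch: strip the known-ignorable VIP error (Metalink 338924.1)
# from cluvfy output before reviewing it.
cluvfy_out='Checking node reachability... passed
ERROR:
Could not find a suitable set of interfaces for VIPs
Checking user equivalence... passed'
filtered=$(printf '%s\n' "$cluvfy_out" \
  | grep -v -e '^ERROR:$' -e 'suitable set of interfaces for VIPs')
printf '%s\n' "$filtered"
```

Against a real run, pipe runcluvfy.sh into the same grep instead of the embedded variable.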

Start X Windows:
[root@odbrac1 ~]# startx
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0

Start the CRS installation. (If the installer fails with "Can't connect to X11 window server using 'localhost:0.0' as the value of the DISPLAY variable", do this instead:
 [oracle@localhost database]$ export DISPLAY=:0
 [oracle@localhost database]$ su
 Password:
 [root@localhost database]# xhost + localhost
 localhost being added to access control list )


[oracle@odbrac1 ~]$ cd /tmp/clusterware
[oracle@odbrac1 clusterware]$ ./runInstaller
a) Has 'rootpre.sh' been run by root? [y/n] (n)
y

b) Specify Inventory directory and credentials
Enter the full path of the inventory directory: /app/oracle/oraInventory
Specify Operating System group name: oinstall

c) Specify Home Details
Name: OraCrs10g_home
Path: /app/oracle/product/10.2.0/crs

d) Specify Cluster Configuration
Cluster Name: crs
Cluster Nodes
Public Node Name Private Node Name Virtual Host Name
odbrac1    odbrac1-priv   odbrac1-vip
odbrac2    odbrac2-priv   odbrac2-vip

e) Specify Network Interface Usage
Interface Name  Subnet   Interface Type
eth0    10.10.100.0  Public
eth1    10.10.200.0  Private

f) Specify Oracle Cluster Registry(OCR) Location
OCR Configuration
(*) Normal Redundancy
Specify OCR Location: /app/oracle/ocfs2/oradata/racdb/OCRFile
Specify OCR Mirror Location: /app/oracle/ocfs2/oradata/racdb/OCRFile_mirror

g) Specify Voting Disk Location
Voting Disk Configuration
(*) Normal Redundancy
Voting Disk Location: /app/oracle/ocfs2/oradata/racdb/CSSFile
Additional Voting Disk 1 Location: /app/oracle/ocfs2/oradata/racdb/CSSFile_mirror1
Additional Voting Disk 2 Location: /app/oracle/ocfs2/oradata/racdb/CSSFile_mirror2

h) Execute Configuration scripts
(Important: when this window appears, do NOT click "OK" yet; on each node, open a terminal and run the two scripts listed.)
(Note: because of a bug in CRS 10.2.0.1, running root.sh on RHEL5 fails with "/arch/app/oracle/product/10.2.0/crs_1/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory". Before running it, edit /app/oracle/product/10.2.0/crs/bin/vipca and unset LD_ASSUME_KERNEL. That is, after:
[root@odbrac1 ~]# vim /app/oracle/product/10.2.0/crs/bin/vipca
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
 LD_ASSUME_KERNEL=2.4.19
 export LD_ASSUME_KERNEL
fi
add the line:
unset LD_ASSUME_KERNEL
Likewise, edit /app/oracle/product/10.2.0/crs/bin/srvctl; after:
[root@odbrac1 ~]# vim /app/oracle/product/10.2.0/crs/bin/srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
add the line:
unset LD_ASSUME_KERNEL

Do the same on every other node.)
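The edit above can be scripted with sed instead of done by hand in vim. A sketch that patches a throwaway copy so the sed expression can be verified safely; for real use, run it as root on every node against $ORA_CRS_HOME/bin/vipca and $ORA_CRS_HOME/bin/srvctl:

```shell
# Sketch: append "unset LD_ASSUME_KERNEL" after each
# "export LD_ASSUME_KERNEL" line, as the workaround above describes.
work=$(mktemp -d)
cat > "$work/vipca" <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
EOF
# GNU sed: insert the unset on the line after the export
sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$work/vipca"
added=$(grep -c 'unset LD_ASSUME_KERNEL' "$work/vipca")
echo "$added line(s) added"
```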

[root@odbrac1 ~]# /app/oracle/oraInventory/orainstRoot.sh
[root@odbrac1 ~]# /app/oracle/product/10.2.0/crs/root.sh
(Run both scripts to completion on one node before moving to the next.)
[root@odbrac2 ~]# /app/oracle/oraInventory/orainstRoot.sh
[root@odbrac2 ~]# /app/oracle/product/10.2.0/crs/root.sh
(Note:
On the other nodes, root.sh reports the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." The workaround is to start the vipca GUI manually:
[root@odbrac2 ~]# /app/oracle/product/10.2.0/crs/bin/vipca
(If vipca fails with "Error 0(Native: listNetInterfaces:[3]", first inspect the IP setup:
[root@odbrac2 bin]# ./oifcfg iflist
eth0  10.10.100.0
eth1  10.10.200.0
Configure the RAC network interfaces:
[root@odbrac2 ~]# cd /app/oracle/product/10.2.0/crs/bin
[root@odbrac2 bin]# ./oifcfg setif -global eth0/10.10.100.0:public
[root@odbrac2 bin]# ./oifcfg setif -global eth1/10.10.200.0:cluster_interconnect
Check the network interfaces:
[root@odbrac2 bin]# ./oifcfg getif
eth0  10.10.100.0  global  public
eth1  10.10.200.0  global  cluster_interconnect
then run vipca again)

On the "VIP Configuration Assistant, Step 1 of 2: Network Interfaces" screen, select eth0.
On the "VIP Configuration Assistant, Step 2 of 2: Virtual IPs for cluster nodes" screen, enter:
Node Name IP Alias Name IP address   Subnet Mask
odbrac1  odbrac1-vip  10.10.100.201  255.255.255.0
odbrac2  odbrac2-vip  10.10.100.202  255.255.255.0

Once the scripts have finished, return to the first node and click "OK".

i) Configuration Assistants
If "Oracle Cluster Verification Utility" shows Failed (see details...), the cause is the VIP not having been configured above; once it is configured, click "Retry" and the step succeeds.

j) Check the IP information
ifconfig now shows eth0:1 configured with the VIP address.

k) Confirm that the CRS installation succeeded
Check the cluster nodes:
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/olsnodes -n
odbrac1 1
odbrac2 2

Verify the CRS installation (on every node):
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_stat
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....ac1.gsd application    0/5    0/0    ONLINE    ONLINE    odbrac1    
ora....ac1.ons application    0/3    0/0    ONLINE    ONLINE    odbrac1    
ora....ac1.vip application    0/0    0/0    ONLINE    ONLINE    odbrac1    
ora....ac2.gsd application    0/5    0/0    ONLINE    ONLINE    odbrac2    
ora....ac2.ons application    0/3    0/0    ONLINE    ONLINE    odbrac2    
ora....ac2.vip application    0/0    0/0    ONLINE    ONLINE    odbrac2
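The crs_stat -t output lends itself to scripted checks, for example counting ONLINE resources with awk. A sketch; the six node-application rows are embedded as sample input (columns abbreviated), and against a live cluster you would pipe crs_stat -t into the same awk program. Taking the second-to-last field as State works for both the abbreviated and the full table:

```shell
# Sketch: count resources whose State column is ONLINE in crs_stat -t
# output. NR > 2 skips the two header lines.
online=$(awk 'NR > 2 && $(NF-1) == "ONLINE" {n++} END {print n+0}' <<'EOF'
Name           Type           Target    State     Host
----------------------------------------------------------------------
ora....ac1.gsd application    ONLINE    ONLINE    odbrac1
ora....ac1.ons application    ONLINE    ONLINE    odbrac1
ora....ac1.vip application    ONLINE    ONLINE    odbrac1
ora....ac2.gsd application    ONLINE    ONLINE    odbrac2
ora....ac2.ons application    ONLINE    ONLINE    odbrac2
ora....ac2.vip application    ONLINE    ONLINE    odbrac2
EOF
)
echo "$online resources ONLINE"
```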

(Note: if a listener was previously configured with netca, it will have been registered with CRS; remove it:
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_unregister ora.odbrac1.LISTENER_ODBRAC1.lsnr
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_unregister ora.odbrac2.LISTENER_ODBRAC2.lsnr)

[oracle@odbrac1 ~]$ ls -l /etc/init.d/init.*
-r-xr-xr-x 1 root root  1951 Jun 11 23:44 /etc/init.d/init.crs
-r-xr-xr-x 1 root root  4717 Jun 11 23:44 /etc/init.d/init.crsd
-r-xr-xr-x 1 root root 36805 Jun 11 23:44 /etc/init.d/init.cssd
-r-xr-xr-x 1 root root  3193 Jun 11 23:44 /etc/init.d/init.evmd


3) Install Oracle Database 10g R2 10.2.0.1
Before installing the database software, run the following pre-installation checks with the Cluster Verification Utility (CVU):
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ cd /app/oracle/product/10.2.0/crs/bin
[oracle@odbrac1 bin]$ ./cluvfy stage -pre dbinst -n odbrac1,odbrac2 -r 10gR2 -verbose
Review the CVU report. (It will contain the same error seen during the CRS pre-installation checks: no suitable set of VIP interfaces could be found. As before, it can safely be ignored.)

Unpack the 10201_database_linux_x86_64.cpio.gz archive:
[root@odbrac1 ~]# cd /tmp
[root@odbrac1 tmp]# gzip -dc /media/sf_odb/10201_database_linux_x86_64.cpio.gz | cpio -div

Start the database installation.
Before starting, make sure all CRS resources are started and ONLINE:
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_stat
(If there are problems, try removing everything under /var/tmp/.oracle/ as root and rebooting:
[root@odbrac1 ~]# /etc/init.d/init.crs stop
[root@odbrac1 ~]# rm -rf /var/tmp/.oracle/*
Or redo the OCR partition setup directly with root.sh:
[root@odbrac1 ~]# /app/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
Then reboot the system.)

If CRS will not start, check the log:
/app/oracle/product/10.2.0/crs/log/odbrac1/crsd/crsd.log

Start X Windows:
[root@odbrac1 ~]# startx
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0

[oracle@odbrac1 ~]$ cd /tmp/database
[oracle@odbrac1 ~]$ ./runInstaller
a) Select Installation Type
(*) Enterprise Edition (1.60GB)

b) Specify Home Details
Name: OraDb10g_home1
Path: /app/oracle/product/10.2.0/db

c) Specify Hardware Cluster Installation Mode
(*) Cluster Installation
 (*) odbrac1
 (*) odbrac2

d) Product-Specific Prerequisite Checks
Ignore any warnings.

e) Select Configuration Option
(*) Install database Software only
(Install the database software only; the database itself is created later with dbca.)

f) When the Execute Configuration scripts window pops up,
run the script as root on each node:
[root@odbrac1 ~]# /app/oracle/product/10.2.0/db/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME=  /app/oracle/product/10.2.0/db

Enter the full pathname of the local bin directory: [/usr/local/bin]:
  Copying dbhome to /usr/local/bin ...
  Copying oraenv to /usr/local/bin ...
  Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
(Then run it on the other node as well:)
[root@odbrac2 ~]# /app/oracle/product/10.2.0/db/root.sh

g) End of Installation
Please remember...
The following J2EE Applications have been deployed and are accessible at the URLs listed below.

iSQL*Plus URL:
http://odbrac1.localdomain:5560/isqlplus

iSQL*Plus DBA URL:
http://odbrac1.localdomain:5560/isqlplus/dba

(Important: apply the patch set first, then configure netca.)

四、Install Patch Set 10.2.0.5
Download the Oracle Database patch: p8202632_10205_Linux-x86-64.zip

1) Unpack:
[root@odbrac1 ~]# mkdir /tmp/p8202632
[root@odbrac1 ~]# unzip -d /tmp/p8202632 /media/sf_odb/p8202632_10205_Linux-x86-64.zip

2) Patch the CRS software
(Important: before patching, make sure CRS and all related services are fully up.)

Start X Windows:
[root@odbrac1 ~]# startx
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0
[oracle@odbrac1 ~]$ cd /tmp/p8202632/Disk1/
[oracle@odbrac1 Disk1]$ ./runInstaller
a) Specify Home Details
Name: ORACrs10g_home
Path: /app/oracle/product/10.2.0/crs

b) Specify Hardware Cluster Installation Mode
(*) Cluster Installation
 (*) odbrac1
 (*) odbrac2

c) Product-Specific Prerequisite Checks
(When the checks complete, click "Install" to start the installation.)

d) Run the CRS upgrade scripts (as root on each node, one node at a time):
[root@odbrac1 ~]# /app/oracle/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
(Note: errors at this step can be ignored.)
[root@odbrac1 ~]# /app/oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /app/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
 This may take a while on some systems.
.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: odbrac1 odbrac1-priv odbrac1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/app/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /app/oracle/product/10.2.0/crs/install/paramfile.crs

(Important: finish on one node before running the scripts on the next.)
[root@odbrac2 ~]# /app/oracle/product/10.2.0/crs/bin/crsctl stop crs
[root@odbrac2 ~]# /app/oracle/product/10.2.0/crs/install/root102.sh

e) Return to the installer and click "Exit".


3) Patch the database software
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0
[oracle@odbrac1 ~]$ cd /tmp/p8202632/Disk1/
[oracle@odbrac1 Disk1]$ ./runInstaller
a) Specify Home Details
Name: OraDb10g_home1
Path: /app/oracle/product/10.2.0/db

b) Specify Hardware Cluster Installation Mode
(*) Cluster Installation
 (*) odbrac1
 (*) odbrac2

c) Product-Specific Prerequisite Checks
(When the checks complete, click "Install" to start the installation.)

d) Run the configuration script (as root on each node, one node at a time):
[root@odbrac1 ~]# /app/oracle/product/10.2.0/db/root.sh
Running Oracle 10g root.sh script...

The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME=  /app/oracle/product/10.2.0/db

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
  Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
  Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
  Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

(Important: finish on one node before running the script on the next.)
[root@odbrac2 ~]# /app/oracle/product/10.2.0/db/root.sh


4) Install the DB Console certificate patch (on the first node only):
(This creates a certificate valid for 10 years.)
Download the patch: p8350262_10205_Generic.zip

a) Unpack:
[root@odbrac1 ~]# mkdir /tmp/p8350262
[root@odbrac1 ~]# unzip -d /tmp/p8350262 /media/sf_odb/p8350262_10205_Generic.zip

b) Apply the patch:
[root@odbrac1 ~]# su - oracle
Stop the EM DB Console:
[oracle@odbrac1 ~]$ $ORACLE_HOME/bin/emctl stop dbconsole
Run opatch:
[oracle@odbrac1 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@odbrac1 ~]$ cd /tmp/p8350262/8350262/
[oracle@odbrac1 ~]$ opatch apply
Start the EM DB Console:
[oracle@odbrac1 ~]$ $ORACLE_HOME/bin/emctl start dbconsole

五、Configure Oracle Net
Before configuring, make sure all CRS resources are started and ONLINE:
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_stat

(Note: if a listener was previously configured with netca, it will have been registered with CRS; remove it:
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_unregister ora.odbrac1.LISTENER_ODBRAC1.lsnr
[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/crs/bin/crs_unregister ora.odbrac2.LISTENER_ODBRAC2.lsnr)

Start X Windows and run netca as the oracle user:
[root@odbrac1 ~]# startx
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0

[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/db/bin/netca
Select the type of Oracle Net Services configuration:
(*) Cluster configuration

Select the nodes to configure
(*) odbrac1
(*) odbrac2

1) Listener configuration
Choose the configuration you would like to do:
(*) Listener configuration

Select what you want to do:
(*) Add

Listener name: LISTENER

Selected Protocols:
TCP

(*) Use the standard port number of 1521

Would you like to configure another listener?
(*) No

2) Naming Methods configuration
Choose the configuration you would like to do:
(*) Naming Methods configuration

Selected Naming Methods:
Local Naming

3) Click "Finish" to complete the configuration.


六、Create the cluster database
Start X Windows and run dbca as the oracle user:
[root@odbrac1 ~]# startx
Open a terminal window and first run xhost to allow local access to the X display:
[root@odbrac1 ~]# xhost +
[root@odbrac1 ~]# su - oracle
[oracle@odbrac1 ~]$ export DISPLAY=localhost:0.0

[oracle@odbrac1 ~]$ /app/oracle/product/10.2.0/db/bin/dbca
a) Select the database type that you would like to create or administer:
(*) Oracle Real Application Clusters database

b) Select the operation that you want to perform:
(*) Create a Database

c) Select the nodes on which you want to create the cluster database:
(*) odbrac1
(*) odbrac2

d) Select a template from the following list to create a database:
(*) General Purpose

e) Database Identification
Global Database Name: oradb
SID Prefix: oradb

f) Management Options (accept the defaults)
(*) Configure the Database with Enterprise Manager
 (*) Use Database Control for Database Management

g) Database Credentials
(*) Use the Same Password for All Accounts
 Password: Abcd1234
 Confirm Password: Abcd1234

h) Select the storage mechanism you would like to use for the database
(*) Automatic Storage Management (ASM)

i) Create ASM Instance
The new ASM instance has its own SYS user for remote management. Specify the password for that user.
 SYS password: Abcd1234
 Confirm SYS password: Abcd1234

Choose the type of parameter file that you would like to use for the new ASM instance.
 (*) Create initialization parameter file (IFILE)
  Initialization Parameter Filename: {ORACLE_BASE}/admin/+ASM/pfile/init.ora
 
j) The ASM instance is created at this point.

k) ASM Disk Groups
Select one or more disk groups to be used as storage for the database....
(A disk group must be selected here. On a fresh install there are none yet, so create new ASM disk groups.)
Click "Create New":
Disk Group Name: DATA_GROUP
 Redundancy: Normal  
("High" redundancy requires 3 raw devices per disk group;
"External" needs only 1.)
 Select Member Disks
  (*) Show Candidates
  Disk Path  Header Status ASM Name Failure Group Size(MB)
  ORCL:DISK1 PROVISIONED         6683
  ORCL:DISK2 PROVISIONED         6683
Click "Create New":
Disk Group Name: RECOVERY_GROUP
 Redundancy: External  
 Select Member Disks
  (*) Show Candidates
  Disk Path  Header Status ASM Name Failure Group Size(MB)
  ORCL:DISK3 PROVISIONED         2674

Select "DATA_GROUP" and continue.

((If something went wrong during creation, the ASM disk group metadata can be wiped from the disks as follows:
First stop the ASM service:
[oracle@odbrac1 ~]$ srvctl stop asm -n odbrac1
[oracle@odbrac1 ~]$ srvctl stop asm -n odbrac2
Delete the ASM disks:
[root@odbrac1 ~]# /etc/init.d/oracleasm deletedisk DISK3 /dev/sdb7
Removing ASM disk "DISK3": [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm deletedisk DISK2 /dev/sdb6
Removing ASM disk "DISK2": [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm deletedisk DISK1 /dev/sdb5
Removing ASM disk "DISK1": [  OK  ]
Wipe the ASM disk group metadata from the partitions:
[root@odbrac1 ~]# dd if=/dev/zero of=/dev/sdb5 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 2.76579 seconds, 37.9 MB/s
[root@odbrac1 ~]# dd if=/dev/zero of=/dev/sdb6 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 2.87222 seconds, 36.5 MB/s
[root@odbrac1 ~]# dd if=/dev/zero of=/dev/sdb7 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 2.10452 seconds, 49.8 MB/s
Recreate the ASM disks:
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb5
Marking disk "DISK1" as an ASM disk: [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK2 /dev/sdb6
Marking disk "DISK2" as an ASM disk: [  OK  ]
[root@odbrac1 ~]# /etc/init.d/oracleasm createdisk DISK3 /dev/sdb7
Marking disk "DISK3" as an ASM disk: [  OK  ]
[root@odbrac2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [  OK  ]
Check the ASM disks:
[root@odbrac2 ~]# /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
Restart the ASM service:
[oracle@odbrac1 ~]$ srvctl start asm -n odbrac1
[oracle@odbrac1 ~]$ srvctl start asm -n odbrac2
Then run dbca again to create the database.))
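The dd commands above zero bs × count bytes from the start of each partition, which is more than enough to cover the ASM disk header metadata. A quick arithmetic check of the wiped size:

```shell
# Sketch: how many bytes the dd wipe above clears per partition.
bs=8192
count=12800
bytes=$((bs * count))
echo "$bytes bytes ($((bytes / 1024 / 1024)) MiB) zeroed per partition"
```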

l) Database File Locations
Specify locations for the Database files to be created
 (*) Use Oracle-Managed Files
  Database Area: +DATA_GROUP

m) Recovery Configuration
Choose the recovery options for the database:
 (*) Specify Flash Recovery Area
  Flash Recovery Area: +RECOVERY_GROUP
  Flash Recovery Area Size: 2048 MB

n) Database Content
(Skip this screen.)

o) Database Services
Click the "Add" button to add a service.
Enter Service Name: oradbsrv

Set both instances to Preferred and the TAF policy to Basic:
Instance  Not Used  Preferred  Available
oradb1              *
oradb2              *
TAF Policy: Basic

p) Initialization Parameters
Mind the character set settings:
Database Character Sets:
 (*) Use Unicode (AL32UTF8)
National Character Set: UTF8 - Unicode 3.0 UTF-8 Universal character set
Default Language: American
Default Date Format: United States

q) Click "Create Database" to start creating the database.
(The log files can be found under /app/oracle/product/10.2.0/db/cfgtoollogs/dbca/oradb.)
The Database Control URL is https://odbrac1.localdomain:1158/em


七、Configure the Oracle client
Add the RAC addresses to the client's hosts file (same entries as in /etc/hosts earlier); on Windows the file is %SystemRoot%\System32\drivers\etc\hosts.
Then configure tnsnames.ora on the client as follows:
oradb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.100.201)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.100.202)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = oradb)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (MODE = BASIC)
        (RETRY = 3)
        (DELAY = 5)
      )
    )
  )
(Adjust the CONNECT_DATA section above to match your own RAC configuration; RETRY=3 means 3 connection retries, DELAY=5 means 5 seconds between retries.)
Note: Database Control can be reached at http://odbrac1:1158/em; start it with emctl start dbconsole.
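A malformed tnsnames.ora entry (typically unbalanced parentheses) is a common cause of ORA-12154, so the file is worth sanity-checking after editing. A sketch that writes the entry above and counts parentheses; the /tmp default and the TNS_DIR variable are illustrative only, and for real use you would point it at the client's network/admin directory:

```shell
# Sketch: write the tnsnames.ora entry and verify its parentheses balance.
f=${TNS_DIR:-/tmp}/tnsnames.ora
cat > "$f" <<'EOF'
oradb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.100.201)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.100.202)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = oradb)
      (FAILOVER_MODE = (TYPE = SELECT)(MODE = BASIC)(RETRY = 3)(DELAY = 5))
    )
  )
EOF
open=$(tr -cd '(' < "$f" | wc -c)     # count of opening parentheses
close=$(tr -cd ')' < "$f" | wc -c)    # count of closing parentheses
if [ "$open" -eq "$close" ]; then
  verdict="balanced ($open pairs)"
else
  verdict="unbalanced: $open open vs $close close"
fi
echo "$verdict"
```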


八、Basic cluster commands
a) Stopping the Oracle RAC 10g environment
First stop the Oracle instances. Once the instances (and related services) are down, stop the ASM instances. Finally, stop the node applications (virtual IP, GSD, TNS listener, and ONS).
Stop the TNS listeners and dbconsole:
[oracle@odbrac1 ~]$ lsnrctl stop
[oracle@odbrac1 ~]$ emctl stop dbconsole
[oracle@odbrac2 ~]$ lsnrctl stop
[oracle@odbrac2 ~]$ emctl stop dbconsole
Stop all database instances (-d oradb is the Global Database Name):
[oracle@odbrac1 ~]$ srvctl stop database -d oradb
Or stop each instance individually:
[oracle@odbrac1 ~]$ srvctl stop instance -d oradb -i oradb1
[oracle@odbrac1 ~]$ srvctl stop instance -d oradb -i oradb2
Stop ASM:
[oracle@odbrac1 ~]$ srvctl stop asm -n odbrac1
[oracle@odbrac1 ~]$ srvctl stop asm -n odbrac2
Stop node 1's node applications:
[oracle@odbrac1 ~]$ srvctl stop nodeapps -n odbrac1
Stop node 2's node applications:
[oracle@odbrac1 ~]$ srvctl stop nodeapps -n odbrac2
Stop the CRS services:
[oracle@odbrac1 ~]$ crs_stop -all
(or: [root@odbrac1 ~]# /etc/init.d/init.crs stop )
[oracle@odbrac2 ~]$ crs_stop -all
(or: [root@odbrac2 ~]# /etc/init.d/init.crs stop )
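The shutdown sequence above can be wrapped in a single script. A sketch with a dry-run mode (the default here) that only prints each command, so the ordering can be reviewed before running it for real as oracle on node 1; the run helper and DRY_RUN variable are my own scaffolding, not Oracle tooling:

```shell
# Sketch: the shutdown order from this section as one script.
# DRY_RUN=1 (default) only prints commands; set DRY_RUN=0 for real use.
DRY_RUN=${DRY_RUN:-1}
ncmds=0
run() {
  ncmds=$((ncmds + 1))
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run srvctl stop database -d oradb        # all database instances first
for node in odbrac1 odbrac2; do
  run srvctl stop asm -n "$node"         # then ASM on each node
done
for node in odbrac1 odbrac2; do
  run srvctl stop nodeapps -n "$node"    # finally VIP, GSD, listener, ONS
done
```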


b) Starting the Oracle RAC 10g environment
First start the node applications (virtual IP, GSD, TNS listener, and ONS). Once they are up, start the ASM instances. Finally, start the Oracle instances (and related services) and the Enterprise Manager Database Control.
Start the CRS services (in practice they start automatically at boot):
[root@odbrac1 ~]# /etc/init.d/init.crs start
(or: [oracle@odbrac1 ~]$ crs_start -all )
[root@odbrac2 ~]# /etc/init.d/init.crs start
(or: [oracle@odbrac2 ~]$ crs_start -all )
Start node 1's node applications:
[oracle@odbrac1 ~]$ srvctl start nodeapps -n odbrac1
Start node 2's node applications:
[oracle@odbrac1 ~]$ srvctl start nodeapps -n odbrac2
Start ASM:
[oracle@odbrac1 ~]$ srvctl start asm -n odbrac1
[oracle@odbrac1 ~]$ srvctl start asm -n odbrac2
Start all database instances (-d oradb is the Global Database Name):
[oracle@odbrac1 ~]$ srvctl start database -d oradb
Start the TNS listeners and dbconsole:
[oracle@odbrac1 ~]$ lsnrctl start
[oracle@odbrac1 ~]$ emctl start dbconsole
[oracle@odbrac2 ~]$ lsnrctl start
[oracle@odbrac2 ~]$ emctl start dbconsole

c) Other operations
Check the status of all instances and services:
[oracle@odbrac1 ~]$ srvctl status database -d oradb
Instance oradb1 is running on node odbrac1
Instance oradb2 is running on node odbrac2
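The srvctl status output can also be checked from a script, for example in a cron health check. A sketch; the embedded sample output stands in for a live srvctl run, which you would capture with command substitution instead:

```shell
# Sketch: confirm every expected instance reports "is running" in
# `srvctl status database -d oradb` output.
status='Instance oradb1 is running on node odbrac1
Instance oradb2 is running on node odbrac2'
down=0
for inst in oradb1 oradb2; do
  printf '%s\n' "$status" | grep -q "Instance $inst is running" \
    || down=$((down + 1))
done
if [ "$down" -eq 0 ]; then
  echo "all instances running"
else
  echo "$down instance(s) down"
fi
```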

Check the status of a single instance:
[oracle@odbrac1 ~]$ srvctl status instance -d oradb -i oradb2
Instance oradb2 is running on node odbrac2
(-d gives the database's global name, -i the instance name)

Check the status of a database service:
[oracle@odbrac1 ~]$ srvctl status service -d oradb -s oradbsrv
Service oradbsrv is running on instance(s) oradb2, oradb1
(-d gives the database's global name, -s the database service name)

Check the status of the node applications on a particular node:
[oracle@odbrac1 ~]$ srvctl status nodeapps -n odbrac1
VIP is running on node: odbrac1
GSD is running on node: odbrac1
Listener is running on node: odbrac1
ONS daemon is running on node: odbrac1
(-n gives the node name)

Check the status of an ASM instance:
[oracle@odbrac1 ~]$ srvctl status asm -n odbrac1
ASM instance +ASM1 is running on node odbrac1.

List all configured databases:
[oracle@odbrac1 ~]$ srvctl config database
oradb

Show the RAC database configuration:
[oracle@odbrac1 ~]$ srvctl config database -d oradb
odbrac1 oradb1 /app/oracle/product/10.2.0/db
odbrac2 oradb2 /app/oracle/product/10.2.0/db

Show all services of a given cluster database:
[oracle@odbrac1 ~]$ srvctl config service -d oradb
oradbsrv PREF: oradb2 oradb1 AVAIL:

Show the node application configuration (VIP, GSD, ONS, listener):
[oracle@odbrac1 ~]$ srvctl config nodeapps -n odbrac1 -a -g -s -l
VIP exists.: /odbrac1-vip/10.10.100.201/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

Show the ASM instance configuration:
[oracle@odbrac1 ~]$ srvctl config asm -n odbrac1
+ASM1 /app/oracle/product/10.2.0/db
