[Oracle RAC Installation] VMware Server 1.0.6 + RHEL5 + Oracle 10gR2 RAC + OCFS2

 

Reposted here for everyone's reference.


I have spent the last few days studying Oracle RAC. I originally tried to build the environment with VMware Workstation 5.0, but could never get the shared-disk part to behave. Since VMware Server 1.0.6 is free anyway, and handles this better than Workstation 5.0, I simply switched to it. There were some detours along the way, so I am writing the process down for reference, and as a record of these few days. :)

 

I. Environment Setup

 

The two nodes are configured as follows:

Hostname: oracle1.hrwang.com

eth0: 192.168.162.138         NAT

eth1: 10.0.0.138              Host-only

vip:  192.168.162.158

 

Hostname: oracle2.hrwang.com

eth0: 192.168.162.139         NAT 

eth1: 10.0.0.139              Host-only

vip:  192.168.162.159

 

(I) Phase One

 

OK, let's get started. :)

1. Create a virtual machine in VMware and install RHEL5

 

A few things to note:

(1) Make the virtual disk at least 10 GB.

(2) Allocate 1 GB of memory.

(3) Add two network adapters: one in NAT mode and one in Host-only mode.

 

Some people online say not to choose the Xen kernel during installation. I think that is only a recommendation; at least in my experiment the install succeeded with the Xen kernel as well. The RHEL5 serial number entered during installation:

Red Hat Enterprise Linux Virtualization Platform:

49af89414d147589

 

2. Configure RHEL5

 

Assuming you have just finished installing a RHEL5 system, let's now do the pre-installation configuration for Oracle RAC.

 

(1) Check that the packages Oracle depends on are all installed

setarch-2*

 make-3*

 glibc-2*

 libaio-0*

 compat-libstdc++-33-3*

 compat-gcc-34-3*

 compat-gcc-34-c++-3*

 gcc-4*

 libXp-1*

 openmotif22-*

 compat-db-4*

 

Use #rpm -qa | grep setarch to check whether setarch-2* is installed; check the other packages the same way.

These packages are spread across installation discs 1, 2, and 3. Install any that are missing with #rpm -Uvh.
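The per-package checks above can be wrapped in a small loop. This is only a convenience sketch (the package base names are taken from the checklist above; run it on the RHEL5 guest where rpm is available):

```shell
# Required package base names, from the checklist above
required_pkgs="setarch make glibc libaio compat-libstdc++-33 compat-gcc-34 \
compat-gcc-34-c++ gcc libXp openmotif22 compat-db"

missing=0
for pkg in $required_pkgs; do
  # rpm -q exits non-zero when the package is not installed
  if command -v rpm >/dev/null 2>&1 && ! rpm -q "$pkg" >/dev/null 2>&1; then
    echo "MISSING: $pkg"
    missing=$((missing+1))
  fi
done
total=$(echo $required_pkgs | wc -w)
echo "checked $total packages, $missing missing"
```

Any line printed with MISSING names a package to install from the discs before continuing.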

 

(2) Configure the hostname and network settings

#vi /etc/sysconfig/network

HOSTNAME=oracle1.hrwang.com                 #change this to suit your own setup

 

#vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

HWADDR=00:0C:29:8A:CF:56

ONBOOT=yes

IPADDR=192.168.162.138

NETMASK=255.255.255.0

GATEWAY=192.168.162.2

 

#vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=static

HWADDR=00:0C:29:8A:CF:60

ONBOOT=yes

IPADDR=10.0.0.138

NETMASK=255.0.0.0

 

#vi /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

#public

192.168.162.138 oracle1.hrwang.com oracle1

192.168.162.139 oracle2.hrwang.com oracle2

#private

10.0.0.138 oracle1-priv.hrwang.com oracle1-priv

10.0.0.139 oracle2-priv.hrwang.com oracle2-priv

#virtual

192.168.162.158 oracle1-vip.hrwang.com oracle1-vip

192.168.162.159 oracle2-vip.hrwang.com oracle2-vip
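One sanity check worth running against this file: the VIP must live on the same subnet as the public address (that is what lets it float between the nodes), while the private interconnect stays on its own network. A tiny sketch of that check, assuming the /24 public netmask configured above:

```shell
# Public IP, VIP, and private IP of node 1, from the environment table above
pub_ip=192.168.162.138
vip=192.168.162.158
priv_ip=10.0.0.138

# With a /24 mask, "same subnet" means the first three octets match
pub_net=${pub_ip%.*}
vip_net=${vip%.*}
if [ "$pub_net" = "$vip_net" ]; then
  echo "VIP $vip is on the public subnet $pub_net.0 - OK"
else
  echo "VIP $vip is NOT on the public subnet - fix /etc/hosts"
fi
```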

(3) Add the user and groups Oracle needs

#groupadd oinstall

#groupadd dba

#groupadd oper

#useradd -g oinstall -G dba oracle

#passwd oracle

#mkdir /oracle                              #directory where Oracle will be installed

#chown -R oracle:oinstall /oracle

#chmod -R 775 /oracle

(4) Modify kernel parameters

#vi /etc/sysctl.conf                  //add the following lines

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 65536

net.ipv4.ip_local_port_range = 1024 65000

net.core.rmem_default=262144

net.core.rmem_max=262144

net.core.wmem_default=262144

net.core.wmem_max=262144

To make the changes take effect immediately, run:

#sysctl -p
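If you rerun this setup a few times (easy to do in VMware), appending blindly to /etc/sysctl.conf leaves duplicate lines. A small idempotent-append sketch; it works on a temp copy here rather than the real /etc/sysctl.conf, so swap in the real path before using it:

```shell
# Work on a temp file for illustration; point this at /etc/sysctl.conf on the real host
conf=$(mktemp)

# Append "key = value" only when no line for that key exists yet
add_param() {
  grep -q "^$1" "$conf" || echo "$1 = $2" >> "$conf"
}

add_param kernel.shmmni 4096
add_param fs.file-max 65536
add_param net.core.rmem_default 262144
add_param fs.file-max 65536          # repeat call is a no-op

grep -c '^fs.file-max' "$conf"       # prints 1, not 2
```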

(5) Set shell limits for the oracle user

#vi /etc/security/limits.conf             //add the following lines

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

Next, edit /etc/pam.d/login and add the line below so the shell limits take effect:

#vi /etc/pam.d/login           

session           required                pam_limits.so

(6) Modify the /etc/redhat-release file

Oracle 10g officially supports only up through RHEL4, so the version string must be changed. Edit /etc/redhat-release, delete "Red Hat Enterprise Linux Server release 5 (Tikanga)", and replace it with redhat-4.
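The same edit can be scripted. This sketch works against a temp copy (point it at /etc/redhat-release on the real host, and keep the backup so the genuine string can be restored after the install):

```shell
# Temp stand-in for /etc/redhat-release; use the real path on the actual host
release=$(mktemp)
echo "Red Hat Enterprise Linux Server release 5 (Tikanga)" > "$release"

cp "$release" "$release.orig"   # back up the genuine release string first
echo "redhat-4" > "$release"    # the version string the 10g installer accepts

cat "$release"
```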

(7) Change the oracle user's environment variables

[oracle@oracle1 ~]$vi .bash_profile          //log in as oracle and add the following

export ORACLE_BASE=/oracle

export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1

export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs

export ORACLE_SID=orcl1

export PATH=$PATH:$ORACLE_HOME/bin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

if [ $USER = "oracle" ]; then

        if [ $SHELL = "/bin/ksh" ]; then

                ulimit -p 16384

                ulimit -n 65536

        else

                ulimit -u 16384 -n 65536

        fi

fi

(8) Install OCFS2

The OCFS2 (Oracle Cluster File System 2) kernel module and tools packages can be downloaded from:

http://oss.oracle.com/projects/ocfs2/files/

http://oss.oracle.com/projects/ocfs2-tools/files/

Be sure to install the kernel module package that matches your running kernel. I am using RHEL5:

[root@oracle1 ~]# uname -r

2.6.18-8.el5xen

 

[root@oracle1 opt]# ls -l

total 1576

-rw-r--r-- 1 root root  278783 Sep 22 21:48 ocfs2-2.6.18-8.el5xen-1.2.9-1.el5.i686.rpm

-rw-r--r-- 1 root root  169193 Sep 22 21:48 ocfs2console-1.2.7-1.el5.i386.rpm

-rw-r--r-- 1 root root 1144294 Sep 22 21:48 ocfs2-tools-1.2.7-1.el5.i386.rpm

 

Just install them all at once with #rpm -Uvh *.

 

Good, that completes the first phase of configuration; shut down the virtual machine.

 

(II) Phase Two

 

Installing Oracle RAC obviously requires shared storage!! Also, so far we have installed only one RHEL5 system; we still need a second. No rush, let's take care of both now.

 

1. Create the shared storage

 

In the VMware Server 1.0.6 installation directory you will find a file named vmware-vdiskmanager.exe.

Good, we will use it to create a shared disk.

 

C:\Program Files\VMware\VMware Server>vmware-vdiskmanager.exe -c -s 8Gb -a lsilogic -t 3 "E:\hrwang-vmware-ocfs\ocfs.vmdk"

Using log file C:\DOCUME~1\hrwang\LOCALS~1\Temp\vmware-hrwang\vdiskmanager.log

Creating a split preallocated disk 'E:\hrwang-vmware-ocfs\ocfs.vmdk'

  Create: 100% done.

Virtual disk creation successful.

2. Edit the ".vmx" file in the RHEL5 virtual machine's directory

My RHEL5 VM is installed under E:\hrwang-vmware-RHEL5-oracle1\, which contains a file named "Other Linux 2.6.x kernel.vmx". That's the one; add the following lines:

 

scsi1.present="TRUE"

scsi1.virtualDev="lsilogic"

scsi1.sharedBus="virtual"

scsi1:1.present="TRUE"

scsi1:1.mode="independent-persistent"

scsi1:1.filename="E:\hrwang-vmware-ocfs\ocfs.vmdk"

scsi1:1.deviceType="disk"

disk.locking="false"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.DataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"
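The block above can also be appended from a script instead of editing by hand. This sketch writes to a temp file; on the real machine you would point vmx at your own .vmx path (with the VM shut down first):

```shell
# Temp stand-in for the VM's .vmx file; substitute the real path
vmx=$(mktemp)

cat >> "$vmx" <<'EOF'
scsi1.present="TRUE"
scsi1.virtualDev="lsilogic"
scsi1.sharedBus="virtual"
scsi1:1.present="TRUE"
scsi1:1.mode="independent-persistent"
scsi1:1.filename="E:\hrwang-vmware-ocfs\ocfs.vmdk"
scsi1:1.deviceType="disk"
disk.locking="false"
EOF

grep -c '^scsi1' "$vmx"   # 7 scsi1* lines appended
```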

3. Create the second RHEL5 host

Because we are using VMware, just copy hrwang-vmware-RHEL5-oracle1 to a directory named hrwang-vmware-RHEL5-oracle2, open that copy in VMware, and boot it. The second RHEL5 host is now mostly done. Simple, right? :)

(1) The hostname and network settings do, of course, still need to change.

#vi /etc/sysconfig/network

HOSTNAME=oracle2.hrwang.com                 #change the hostname

#vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

ONBOOT=yes

BOOTPROTO=static

HWADDR=00:0c:29:2f:bb:b3

IPADDR=192.168.162.139

NETMASK=255.255.255.0

GATEWAY=192.168.162.2

#vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

ONBOOT=yes

BOOTPROTO=static

HWADDR=00:0c:29:2f:bb:bd

IPADDR=10.0.0.139

NETMASK=255.0.0.0

(2) Update .bash_profile in the oracle user's home directory

Switch to the oracle user and run $vi /home/oracle/.bash_profile

Change ORACLE_SID=orcl1 to ORACLE_SID=orcl2.

Good, reboot so the settings take effect; the second RHEL5 system is now configured as well.

(III) Phase Three

By now we need both configured systems up and running. First test connectivity with ping. No problems, right?

1. Set up user equivalence for the oracle user (log in as the oracle user)

Run the following on both machines:

[oracle@oracle2 ~]$ mkdir .ssh

[oracle@oracle2 ~]$ chmod 700 .ssh

[oracle@oracle2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

29:88:60:b7:47:8f:32:d7:b6:cf:8a:6a:7f:d6:9b:1b oracle@oracle2.hrwang.com

Then, on oracle1, do the following:

[oracle@oracle1 .ssh]$ssh oracle1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

The authenticity of host 'oracle1 (192.168.162.138)' can't be established.

RSA key fingerprint is e6:7e:fc:49:c0:32:d5:12:a6:3c:3b:92:c5:44:be:f6.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'oracle1,192.168.162.138' (RSA) to the list of known hosts.

oracle@oracle1's password:

[oracle@oracle1 .ssh]$ssh oracle2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

oracle@oracle2's password:

[oracle@oracle1 .ssh]$scp authorized_keys oracle2:/home/oracle/.ssh/

Then, on oracle2, change the permissions of authorized_keys to 600 as well.

Finally, as the oracle user, run the following on each of the two nodes:

$ssh oracle1 date

$ssh oracle2 date

$ssh oracle1.hrwang.com date

$ssh oracle2.hrwang.com date

$exec /usr/bin/ssh-agent $SHELL

$/usr/bin/ssh-add

Good, user equivalence between the oracle users on the two machines is now established. Did you notice? ssh to the other machine no longer asks for a password.
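The verification above can be run as a loop. BatchMode makes ssh fail instead of prompting, so any host still asking for a password shows up as an explicit failure (the ssh line is commented out here so the sketch is safe to run anywhere):

```shell
# All four name forms that the Oracle installer will try, from /etc/hosts above
hosts="oracle1 oracle2 oracle1.hrwang.com oracle2.hrwang.com"

for h in $hosts; do
  echo "would check: ssh -o BatchMode=yes $h date"
  # Uncomment on the real nodes:
  # ssh -o BatchMode=yes "$h" date || echo "equivalence NOT working for $h"
done
```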

2. Configure the shared disk with OCFS2 (run as root)

I perform the following as root on oracle1 (doing it on the other node is fine too; the disk is shared, so operating on it from one node is enough).

 

(1) fdisk -l shows the shared disk as /dev/sdb. OK, let's create a partition first.

 

[root@oracle1 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 1044.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n

Command action

   e extended

   p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1044, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):

Using default value 1044

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot Start End Blocks Id System

/dev/sdb1 1 1044 8385898+ 83 Linux

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

 

(2) [root@oracle1 ~]# ocfs2console

From the "Tasks" menu choose Format, change the volume label to ocfs, and click OK to format. See the screenshot:

[screenshot]

 

Check whether a cluster.conf file exists under /etc/ocfs2/; if it does, empty its contents, and if it does not, touch one.

 

Next, choose Node Configuration from the "Cluster" menu and add both nodes. The names are the hostnames, oracle1.hrwang.com and oracle2.hrwang.com; in the IP fields enter the private addresses, 10.0.0.138 and 10.0.0.139. Click Apply to save. See the screenshot:

[screenshot]

After adding the nodes, check the contents of cluster.conf again. It should look like this:

node:

        ip_port = 7777

        ip_address = 10.0.0.138

        number = 0

        name = oracle1.hrwang.com

        cluster = ocfs

node:

        ip_port = 7777

        ip_address = 10.0.0.139

        number = 1

        name = oracle2.hrwang.com

        cluster = ocfs

cluster:

        node_count = 2

        name = ocfs
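The file format is strict, and node_count must match the number of node: stanzas. A small consistency check, sketched against an inline copy of the file above (point conf at /etc/ocfs2/cluster.conf on a real node):

```shell
# Inline copy of the cluster.conf shown above; use /etc/ocfs2/cluster.conf on a real node
conf=$(mktemp)
cat > "$conf" <<'EOF'
node:
        ip_port = 7777
        ip_address = 10.0.0.138
        number = 0
        name = oracle1.hrwang.com
        cluster = ocfs
node:
        ip_port = 7777
        ip_address = 10.0.0.139
        number = 1
        name = oracle2.hrwang.com
        cluster = ocfs
cluster:
        node_count = 2
        name = ocfs
EOF

stanzas=$(grep -c '^node:' "$conf")
declared=$(awk '/node_count/ {print $3}' "$conf")
echo "node stanzas: $stanzas, declared node_count: $declared"
```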

Finally, choose "Propagate Cluster Configuration" from the "Cluster" menu to sync oracle1's configuration over to oracle2:

[screenshot]

(3) Start and configure the ocfs2 service (as root)

Run the following on both nodes:

[root@oracle2 ocfs2]# /etc/init.d/o2cb enable

Writing O2CB configuration: OK

Loading module "configfs": OK

Mounting configfs filesystem at /sys/kernel/config: OK

Loading module "ocfs2_nodemanager": OK

Loading module "ocfs2_dlm": OK

Loading module "ocfs2_dlmfs": OK

Creating directory '/dlm': OK

Mounting ocfs2_dlmfs filesystem at /dlm: OK

Starting O2CB cluster ocfs2: OK

[root@oracle2 ocfs2]# /etc/init.d/o2cb configure

Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.

The following questions will determine whether the driver is loaded on

boot. The current values will be shown in brackets ('[]'). Hitting

<ENTER> without typing an answer will keep that current value. Ctrl-C

will abort.

Load O2CB driver on boot (y/n) [y]:

Cluster to start on boot (Enter "none" to clear) [ocfs2]:

Specify heartbeat dead threshold (>=7) [31]:

Specify network idle timeout in ms (>=5000) [30000]:

Specify network keepalive delay in ms (>=1000) [2000]:

Specify network reconnect delay in ms (>=2000) [2000]:

Writing O2CB configuration: OK

O2CB cluster ocfs2 already online

(4) Create the mount point (as root)

Run the following on both nodes:

#mkdir /oracle_ocfs

#vi /etc/fstab                          #add the following line

/dev/sdb1    /oracle_ocfs   ocfs2    _netdev,datavolume,nointr  0 0

Reboot both machines and check that the volume mounts on each; if there are no problems, continue.

After the reboot the shared partition should mount normally on both nodes. On either node, create two directories and give ownership to oracle:

[root@oracle1 ocfs2]# mkdir /oracle_ocfs/shared_config

[root@oracle1 ocfs2]# mkdir /oracle_ocfs/data

[root@oracle1 /]# chown -R oracle:oinstall /oracle_ocfs

[root@oracle1 /]# chmod -R 755 /oracle_ocfs

OK, that's all of the configuration done. Now let's install!

II. Installing Oracle Clusterware

Install on oracle1 (no restriction here; either machine will do).

Log in to oracle1.hrwang.com as the oracle user and change into the clusterware directory, which is the unpacked 10201_clusterware_linux32.zip. Follow the screenshots below. :)

1. [oracle@oracle1 clusterware]$ ./runInstaller

[screenshot]

2.

[screenshot]

3. At this step you can add Simplified Chinese under "Product Languages". I decided to install CRS on the local disk rather than the shared disk, so I set the installation directory to /oracle/product/10.2.0/crs.

[screenshot]

4.

[screenshot]

5. Remember the /etc/hosts file we edited? Right! Add the entries for both nodes from that file here.

 

[screenshot]

 

6. Set eth0 as Public and eth1 as Private.

[screenshot]

 

[screenshot]

 

7. Remember the two directories we created on the shared disk? The shared_config directory is where the OCR file and the voting disk go.

[screenshot]

[screenshot]

 

 

8. Start the installation.

 

[screenshot]

 

9. You are prompted to run scripts as root.

[screenshot]

 

Run the scripts in this order:

 

[root@oracle1]/oracle/oraInventory/orainstRoot.sh

[root@oracle2]/oracle/oraInventory/orainstRoot.sh

[root@oracle1]/oracle/product/10.2.0/crs/root.sh

[root@oracle2]/oracle/product/10.2.0/crs/root.sh

When root.sh runs on oracle2, it reports an error about vipca failing to execute. So I ran /oracle/product/10.2.0/crs/bin/vipca by hand, only to be told libpthread.so.0 could not be found. Ugh~~

You probably hit this too!! Don't worry, it's a known bug; here is the fix:

[root@oracle2 ~]# vi /oracle/product/10.2.0/crs/bin/vipca

First find the JREDIR line and remove the trailing slash, changing:

JREDIR=/oracle/product/10.2.0/crs/jdk/jre/

to:

JREDIR=/oracle/product/10.2.0/crs/jdk/jre

Then find the block that exports LD_ASSUME_KERNEL:

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

fi

and add this line right after the fi:

unset LD_ASSUME_KERNEL
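The same edit can be applied with sed instead of vi. A sketch against a stand-in copy of the relevant lines (assuming GNU sed, as shipped with RHEL5; on the real node you would run it against the vipca script itself):

```shell
# Stand-in for the relevant lines of the vipca script
f=$(mktemp)
cat > "$f" <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
EOF

# Append "unset LD_ASSUME_KERNEL" right after the closing fi (GNU sed syntax)
sed -i '/^fi$/a unset LD_ASSUME_KERNEL' "$f"

cat "$f"
```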

Now try running vipca again. What, another error? Something like this:

Error 0(Native: listNetInterfaces:[3])

   [Error 0(Native: listNetInterfaces:[3])]

Take a deep breath; this one is solved as follows:

[root@oracle2 bin]# ./oifcfg iflist

eth1 10.0.0.0

eth0 192.168.162.0

[root@oracle2 bin]# ./oifcfg setif -global eth0/192.168.162.0:public

[root@oracle2 bin]# ./oifcfg setif -global eth1/10.0.0.0:cluster_interconnect

[root@oracle2 bin]# ./oifcfg getif

eth0 192.168.162.0 global public

eth1 10.0.0.0 global cluster_interconnect

Try vipca once more. There, it comes up! Configure vipca as in the screenshots below.

(1)

[screenshot]

[screenshot]

(2) Usually you only need to type in the VIP hostname and the IP address is filled in automatically.

[screenshot]

(3)

[screenshot]

[screenshot]

[screenshot]

 

With that, the VIP is configured on oracle2.

 

10. Back on oracle1's installer screen, click OK. The checks run once more; when they pass, the installation is finished.

[screenshot]

 

11. OK, CRS is now installed on oracle1. Log in to oracle2 and take a look: /oracle/product/10.2.0/crs exists there too. :)

 

 

III. Installing the Oracle Database Software

 

Again log in to oracle1.hrwang.com as the oracle user and go to the database directory, which is the unpacked 10201_database_linux32.zip.

You can follow http://blog.chinaunix.net/u/22677/showart_1205499.html for the installation.

 

Two things need attention here. First, when choosing the nodes to install the database on, select both machines, as below:

[screenshot]

 

Second, choose to install the database software only; we will create the database afterwards with dbca, as below:

[screenshot]

 

When the software install on oracle1 finishes, log in to oracle2 and look at the installation directory: the files are there too!

With that, the Oracle database software is installed as well.

 

 

IV. Creating the Database

 

I again create the database on oracle1.hrwang.com as the oracle user. Just follow the screenshots below; it is fairly straightforward:

[oracle@oracle1 ~]$dbca

[screenshots of each dbca step]

 

Good, the shared database is now built. One warning: don't choose the character set the way I did, or you will end up with garbled text!

 

V. Managing Oracle RAC

 

Now that the shared database is created, let's look at its status with srvctl (which node to run it on? Either!!)

[oracle@oracle1 admin]$ srvctl --help

/oracle/product/10.2.0/db_1/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

 

What happened????? Don't panic, it's that same bug again. Edit the file /oracle/product/10.2.0/db_1/bin/srvctl on both nodes.

 

Find the following section:

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

and change it to:

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

unset LD_ASSUME_KERNEL

1. Check the database status

[oracle@oracle2 admin]$ srvctl status database -d orcl

Instance orcl1 is running on node oracle1

Instance orcl2 is running on node oracle2

2. Check the instance status

[oracle@oracle1 bin]$ srvctl status instance -d orcl -i orcl1,orcl2

Instance orcl1 is running on node oracle1

Instance orcl2 is running on node oracle2

3. Check the node application status

[oracle@oracle1 bin]$ srvctl status nodeapps -n oracle1

VIP is running on node: oracle1

GSD is running on node: oracle1

Listener is running on node: oracle1

ONS daemon is running on node: oracle1

[oracle@oracle1 bin]$ srvctl status nodeapps -n oracle2

VIP is running on node: oracle2

GSD is running on node: oracle2

Listener is running on node: oracle2

ONS daemon is running on node: oracle2

4. The order for stopping and starting Oracle RAC

[oracle@oracle2 admin]$ emctl stop dbconsole

[oracle@oracle2 admin]$ srvctl stop instance -d orcl -i orcl2

[oracle@oracle2 admin]$ srvctl stop nodeapps -n oracle2

[oracle@oracle2 admin]$ srvctl start nodeapps -n oracle2

[oracle@oracle2 admin]$ srvctl start instance -d orcl -i orcl2

[oracle@oracle2 admin]$ emctl start dbconsole
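The ordering above matters: stop the instance before nodeapps, and bring nodeapps back before the instance. A tiny dry-run wrapper that just prints the commands in the correct order (the database, instance, and node names are arguments; on a real node you would drop the echo and let it call srvctl/emctl directly):

```shell
# Dry-run helpers: print the stop/start sequences in the correct order
stop_rac_node() {   # usage: stop_rac_node <db> <instance> <node>
  echo "emctl stop dbconsole"
  echo "srvctl stop instance -d $1 -i $2"
  echo "srvctl stop nodeapps -n $3"
}

start_rac_node() {  # usage: start_rac_node <db> <instance> <node>
  echo "srvctl start nodeapps -n $3"
  echo "srvctl start instance -d $1 -i $2"
  echo "emctl start dbconsole"
}

stop_rac_node orcl orcl2 oracle2
start_rac_node orcl orcl2 oracle2
```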

 

 

Misc: observant readers may have noticed that some screenshots show VMware Workstation while later ones show VMware Server. That's because I started the experiment in Workstation and migrated to Server partway through. It doesn't really matter, but I suggest you go straight to VMware Server.