8 Monitoring Performance and Troubleshooting

The Global Cache Block Access Latency chart shows data for two different types of data block requests: current and consistent-read (CR) blocks. When you update data in the database, Oracle Database must locate the most recent version of the data block that contains the data, which is called the current block. If you perform a query, then only data committed before the query began is visible to the query. Data blocks that were changed after the start of the query are reconstructed from data in the undo segments. The reconstructed data is made available to the query in the form of a consistent-read block.


----- Here the official documentation defines the current block and the CR block:
Current block: when you update data in the database, Oracle must locate the most recent version of the data block that contains that data; that version is the current block.
CR block: if you run a query, only data committed before the query began is visible to it. Blocks changed after the query started are reconstructed from undo data and served to the query as consistent-read (CR) blocks.



Concurrent read and write activity on shared data is common in a cluster and, depending on the service requirements, does not usually cause performance problems. When global cache requests do cause a performance problem, a successful tuning strategy is to optimize the SQL plans and the schema so that more data blocks are found in the local buffer cache, and to minimize I/O. If the latency for consistent-read and current block requests reaches 10 milliseconds, then your first step in resolving the problem should be to go to the Cluster Cache Coherency page for more detailed information.


----- If the latency for consistent-read or current block requests reaches 10 milliseconds, you need to investigate further, starting with the Cluster Cache Coherency page.
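A quick way to see where the latency actually stands is to query the global cache statistics directly. The following is only a sketch: the statistic names assume the 11g GV$SYSSTAT naming, and the receive-time statistics are reported in centiseconds, so dividing by the number of blocks received and multiplying by 10 gives an approximate average latency in milliseconds.

$ sqlplus -s / as sysdba <<'EOF'
-- Raw statistics for estimating average global cache block receive latency
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('gc cr block receive time', 'gc cr blocks received',
                'gc current block receive time', 'gc current blocks received')
 ORDER BY inst_id, name;
EOF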


Using CRSCTL to Diagnose Cluster Issues


About the Oracle Clusterware Alert Log
The Oracle Clusterware alert log should be the first place to look for serious errors. It often contains references to other diagnostic logs that provide detailed information on a specific component. The location of the Oracle Clusterware alert log is CRS_home/log/hostname/alerthostname.log, where CRS_home is the directory in which Oracle Clusterware was installed and hostname is the host name of the local node.


---- By default the CRS alert log is located at CRS_home/log/hostname/alerthostname.log, where hostname is the name of the local node.
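To follow the alert log in real time you can simply tail it; the Grid home path below is only an assumption for a typical 11.2 installation and must be adjusted to your environment:

# GRID_HOME=/u01/app/11.2.0/grid          # assumed location of the Grid Infrastructure home
# tail -f $GRID_HOME/log/$(hostname -s)/alert$(hostname -s).log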


Running the Oracle Clusterware Diagnostics Collection Script


To run the Oracle Clusterware Diagnostics Collection script:


In a command window, log in to the operating system as the root user.


Run the diagcollection.pl script from the operating system prompt as follows, where Grid_home is the home directory of your Oracle Grid Infrastructure for a cluster installation:


# Grid_home/bin/diagcollection.pl --collect


An example run looks like this:
[root@elvis111 bin]# ./diagcollection.pl --collect
Production Copyright 2004, 2010, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the local directory.
crsData_elvis111_20150504_0728.tar.gz -> logs,traces and cores from CRS home. Note: core files will be packaged only with the --core option. 
ocrData_elvis111_20150504_0728.tar.gz -> ocrdump, ocrcheck etc 
coreData_elvis111_20150504_0728.tar.gz -> contents of CRS core files in text format


osData_elvis111_20150504_0728.tar.gz -> logs from Operating System
lsInventory_elvis111_20150504_0728 ->Opatch lsinventory details
Collecting crs data
/bin/tar: log/elvis111/gipcd/gipcd.log: file changed as we read it
/bin/tar: log/elvis111/agent/crsd/orarootagent_root/orarootagent_root.log: file changed as we read it
Collecting OCR data 
Collecting information from core files


warning: exec file is newer than core file.
Cannot access memory at address 0x642520726f727265


warning: exec file is newer than core file.
Cannot access memory at address 0x642520726f727265
Collecting lsinventory details
The following diagnostic archives will be created in the local directory.
acfsData_elvis111_20150504_0728.tar.gz -> logs from acfs log.
Collecting acfs data
Collecting OS logs
Collecting sysconfig data


-rw-r--r-- 1 root root      11933011 May  4 07:29 crsData_elvis111_20150504_0728.tar.gz
-rw-r--r-- 1 root root         25331 May  4 07:30 ocrData_elvis111_20150504_0728.tar.gz
-rw-r--r-- 1 root root         58268 May  4 07:30 coreData_elvis111_20150504_0728.tar.gz
-rw-r--r-- 1 root root          7634 May  4 07:31 lsInventory_elvis111_20150504_0728
-rw-r--r-- 1 root root           847 May  4 07:31 acfsData_elvis111_20150504_0728.tar.gz
-rw-r--r-- 1 root root        133061 May  4 07:31 osData_elvis111_20150504_0728.tar.gz
-rw-r--r-- 1 root root         29113 May  4 07:31 sysconfig_elvis111_20150504_0728.txt


------------ As you can see, the script automatically packages a large amount of information: CRS logs, OS logs, system configuration, and so on. This is very useful when troubleshooting or when filing an SR with Oracle Support.
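The archives are ordinary gzipped tarballs, so if you want to inspect the collected data before uploading it to Oracle Support you can unpack them anywhere convenient (the target directory here is arbitrary):

# mkdir -p /tmp/crs_diag
# tar xzf crsData_elvis111_20150504_0728.tar.gz -C /tmp/crs_diag
# ls /tmp/crs_diag/log/elvis111          # the CRS logs keep their original directory layout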


------- The following shows how to enable debugging, which is very useful when trying to pinpoint CRS problems; it is worth studying carefully.
Enabling Debugging of Oracle Clusterware Components


To enable debugging of Oracle Clusterware components:


1.In a command window, log in to the operating system as the root user.


2.Use the following command to obtain the module names for a component, where component_name is crs, evm, css or the name of the component for which you want to enable debugging:


# crsctl lsmodules component_name
For example, viewing the modules of the css component might return the following results:


# crsctl lsmodules css
The following are the CSS modules :: 
CSSD
COMMCRS
COMMNS


Example output from this environment:
[root@elvis111 grid]# crsctl lsmodules css
List CSSD Debug Module: CLSF
List CSSD Debug Module: CSSD
List CSSD Debug Module: GIPCCM
List CSSD Debug Module: GIPCGM
List CSSD Debug Module: GIPCNM
List CSSD Debug Module: GPNP
List CSSD Debug Module: OLR
List CSSD Debug Module: SKGFD
[root@elvis111 grid]# crsctl lsmodules crs
List CRSD Debug Module: AGENT
List CRSD Debug Module: AGFW
List CRSD Debug Module: CLSFRAME
List CRSD Debug Module: CLSVER
List CRSD Debug Module: CLUCLS
List CRSD Debug Module: COMMCRS
List CRSD Debug Module: COMMNS
List CRSD Debug Module: CRSAPP
List CRSD Debug Module: CRSCCL
List CRSD Debug Module: CRSCEVT
List CRSD Debug Module: CRSCOMM
List CRSD Debug Module: CRSD
List CRSD Debug Module: CRSEVT
List CRSD Debug Module: CRSMAIN
List CRSD Debug Module: CRSOCR
List CRSD Debug Module: CRSPE
List CRSD Debug Module: CRSPLACE
List CRSD Debug Module: CRSRES
List CRSD Debug Module: CRSRPT
List CRSD Debug Module: CRSRTI
List CRSD Debug Module: CRSSE
List CRSD Debug Module: CRSSEC
List CRSD Debug Module: CRSTIMER
List CRSD Debug Module: CRSUI
List CRSD Debug Module: CSSCLNT
List CRSD Debug Module: OCRAPI
List CRSD Debug Module: OCRASM
List CRSD Debug Module: OCRCAC
List CRSD Debug Module: OCRCLI
List CRSD Debug Module: OCRMAS
List CRSD Debug Module: OCRMSG
List CRSD Debug Module: OCROSD
List CRSD Debug Module: OCRRAW
List CRSD Debug Module: OCRSRV
List CRSD Debug Module: OCRUTL
List CRSD Debug Module: SuiteTes
List CRSD Debug Module: UiServer
[root@elvis111 grid]# 




3.Use CRSCTL as follows, where component_name is the name of the Oracle Clusterware component for which you want to enable debugging, module is the name of the module, and debugging_level is a number from 1 to 5:


# crsctl debug log component module:debugging_level
For example, to enable the lowest level of tracing for the CSSD module of the css component, you would use the following command:


# crsctl debug log css CSSD:1
-------------- After enabling this debug level, the debug output appears in the corresponding component log.
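To confirm the new level took effect you can read it back; the get log syntax shown here is the 11.2 form and may differ in other releases. The debug output itself goes to the component's own log file, which for CSSD is Grid_home/log/hostname/cssd/ocssd.log:

# crsctl get log css CSSD
# tail -f /u01/app/11.2.0/grid/log/elvis111/cssd/ocssd.log     # path assumes this environment's Grid home

Remember to set the level back to its default afterwards, because higher debug levels can generate a large volume of log data.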



Enabling and Disabling Oracle Clusterware Daemons
When the Oracle Clusterware daemons are enabled, they start automatically when the node is started. To prevent the daemons from starting automatically, you can disable them using crsctl commands.


To enable automatic startup for all Oracle Clusterware daemons:
1.In a command window, log in to the operating system as the root user.
2.Run the following CRSCTL command:
# crsctl enable crs


To disable automatic startup for all Oracle Clusterware daemons:
1.In a command window, log in to the operating system as the root user.
2.Run the following CRSCTL command:
# crsctl disable crs


---- The commands above enable or disable automatic startup of CRS; by default automatic startup is enabled.
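To check the current setting, crsctl config crs reports whether Oracle High Availability Services autostart is enabled or disabled on the local node:

# crsctl config crs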


------- The Cluster Verification Utility (cluvfy) is very useful for diagnosing RAC configuration problems and is worth studying in detail.
Using the Cluster Verification Utility to Diagnose Problems
CVU can diagnose a wide variety of configuration problems.


This section contains the following topics:

Verifying the Existence of Node Applications
Verifying the Integrity of Oracle Clusterware Components
Verifying the Integrity of the Oracle Cluster Registry
Verifying the Integrity of Your Entire Cluster
Checking the Settings for the Interconnect
Enabling Tracing

In other words, CVU can check the following:
1. The existence of the node applications
2. The integrity of the Oracle Clusterware components
3. The integrity of the OCR
4. The integrity of the entire cluster
5. The settings for the interconnect
6. It can also enable tracing
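If you are not sure which component keyword to use, cluvfy can list the available components, and each component accepts -help for its detailed syntax:

[grid@elvis111 ~]$ cluvfy comp -list
[grid@elvis111 ~]$ cluvfy comp nodeapp -help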


To verify the existence of node applications:


1.In a command window, log in to the operating system as the user who owns the Oracle Clusterware software installation.
2.Use the comp nodeapp command of CVU, using the following syntax:
cluvfy comp nodeapp [ -n node_list] [-verbose]
where node_list represents the nodes to check.

Example output from this environment follows. If you do not know what to pass for node_list, you can list the cluster nodes with the olsnodes command, as shown first.
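In this two-node cluster olsnodes simply returns the node names used throughout these examples:

[grid@elvis111 ~]$ olsnodes
elvis111
elvis112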

[grid@elvis111 ~]$ cluvfy comp nodeapp -n elvis111,elvis112 -verbose


Verifying node application existence 


Checking node application existence...


Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  elvis112      yes                       yes                       passed    
  elvis111      yes                       yes                       passed    
VIP node application check passed


Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  elvis112      yes                       yes                       passed    
  elvis111      yes                       yes                       passed    
NETWORK node application check passed


Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  elvis112      no                        no                        exists    
  elvis111      no                        no                        exists    
GSD node application is offline on nodes "elvis112,elvis111"


Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  elvis112      no                        yes                       passed    
  elvis111      no                        yes                       passed    
ONS node application check passed




Verification of node application existence was successful. 
[grid@elvis111 ~]$ 


3.If the cluvfy command returns the value of UNKNOWN for a particular node, then CVU cannot determine whether a check passed or failed. Determine if the failure was caused by one of the following reasons:
The node is down.
Executable files that CVU requires are missing in the CRS_home/bin directory or the Oracle_home/bin directory.
The user account that ran CVU does not have permissions to run common operating system executable files on the node.
The node is missing an operating system patch or required package.
The kernel parameters on that node were not configured correctly and CVU cannot obtain the operating system resources required to perform its checks.
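One of the causes listed above is missing permissions or user equivalence between the nodes; cluvfy can check that as well. The following invocation is only a sketch (it was not run in this environment) using the comp admprv component with the user_equiv operation:

[grid@elvis111 ~]$ cluvfy comp admprv -n elvis111,elvis112 -o user_equiv -verbose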


Verifying the Integrity of Oracle Clusterware Components
You use the CVU comp crs command to verify the existence of all the Oracle Clusterware components.
To verify the integrity of Oracle Clusterware components:
1.In a command window, log in to the operating system as the user who owns the Oracle Clusterware software installation.
2.Use the comp crs command of CVU, using the following syntax:


cluvfy comp crs [ -n node_list] [-verbose]
where node_list represents the nodes to check.


Verifying the Integrity of the Oracle Cluster Registry
You use the CVU comp ocr command to verify the integrity of the Oracle Clusterware registry.


To verify the integrity of the Oracle Clusterware registry:
1.In a command window, log in to the operating system as the user who owns the Oracle Clusterware software installation.
2.Use the comp ocr command of CVU, using the following syntax:


cluvfy comp ocr [ -n node_list] [-verbose]
where node_list represents the nodes to check.
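A typical invocation, not captured in this environment, would check the OCR on every node with -n all:

[grid@elvis111 ~]$ cluvfy comp ocr -n all -verbose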
Example output (note that the run below actually uses comp crs, which verifies CRS integrity as described in the previous subsection):
[grid@elvis111 ~]$ cluvfy comp crs -n elvis111,elvis112 -verbose


Verifying CRS integrity 


Checking CRS integrity...


Clusterware version consistency passed
The Oracle Clusterware is healthy on node "elvis112"
The Oracle Clusterware is healthy on node "elvis111"


CRS integrity check passed


Verification of CRS integrity was successful. 


Verifying the Integrity of Your Entire Cluster
You use the CVU comp clu command to check that all nodes in the cluster have the same view of the cluster configuration.
To verify the integrity of your cluster:
1.In a command window, log in to the operating system as the user who owns the Oracle Clusterware software installation.
2.Use the comp clu command of CVU, using the following syntax:
cluvfy comp clu [-verbose]
Example output:
[grid@elvis111 ~]$ cluvfy comp clu -verbose


Verifying cluster integrity 


Checking cluster integrity...


  Node Name                           
  ------------------------------------
  elvis112                            
  elvis111                            


Cluster integrity check passed




Verification of cluster integrity was successful. 
[grid@elvis111 ~]$ 


Checking the Settings for the Interconnect
To check the settings for the interconnect:


1.In a command window, log in to the operating system as the user who owns the Oracle Clusterware software installation.


2.To verify the accessibility of the cluster nodes, specified by node_list, from the local node or from any other cluster node, specified by srcnode, use the component verification command nodereach as follows:


cluvfy comp nodereach -n node_list [ -srcnode node ] [-verbose]
Example output:
[grid@elvis111 ~]$ cluvfy comp nodereach -n elvis111,elvis112 -srcnode elvis112 -verbose


Verifying node reachability 


Checking node reachability...


Check: Node reachability from node "elvis112"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  elvis111                              yes                     
  elvis112                              yes                     
Result: Node reachability check passed from node "elvis112"




Verification of node reachability was successful. 


3.To verify the connectivity among the nodes, specified by node_list, through the available network interfaces from the local node or from any other cluster node, use the comp nodecon command as shown in the following example:


cluvfy comp nodecon -n node_list -verbose
Example output:
[grid@elvis111 ~]$ cluvfy comp nodecon -n elvis111,elvis112 -verbose


Verifying node connectivity 


Checking node connectivity...


Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  elvis112                              passed                  
  elvis111                              passed                  


Verification of the hosts config file successful




Interface information for node "elvis112"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.112  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.115  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.114  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth1   172.168.1.112   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  
 eth1   169.254.215.143 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  




Interface information for node "elvis111"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.111  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth0   192.168.56.113  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth1   172.168.1.111   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  
 eth1   169.254.250.101 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  




Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis112[192.168.56.112]        elvis112[192.168.56.115]        yes             
  elvis112[192.168.56.112]        elvis112[192.168.56.114]        yes             
  elvis112[192.168.56.112]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.112]        elvis111[192.168.56.113]        yes             
  elvis112[192.168.56.115]        elvis112[192.168.56.114]        yes             
  elvis112[192.168.56.115]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.115]        elvis111[192.168.56.113]        yes             
  elvis112[192.168.56.114]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.114]        elvis111[192.168.56.113]        yes             
  elvis111[192.168.56.111]        elvis111[192.168.56.113]        yes             
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) elvis112,elvis111




Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis111:192.168.56.111         elvis112:192.168.56.112         passed          
  elvis111:192.168.56.111         elvis112:192.168.56.115         passed          
  elvis111:192.168.56.111         elvis112:192.168.56.114         passed          
  elvis111:192.168.56.111         elvis111:192.168.56.113         passed          
Result: TCP connectivity check passed for subnet "192.168.56.0"




Check: Node connectivity of subnet "172.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis112[172.168.1.112]         elvis111[172.168.1.111]         yes             
Result: Node connectivity passed for subnet "172.168.1.0" with node(s) elvis112,elvis111




Check: TCP connectivity of subnet "172.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis111:172.168.1.111          elvis112:172.168.1.112          passed          
Result: TCP connectivity check passed for subnet "172.168.1.0"




Check: Node connectivity of subnet "169.254.0.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis112[169.254.215.143]       elvis111[169.254.250.101]       yes             
Result: Node connectivity passed for subnet "169.254.0.0" with node(s) elvis112,elvis111




Check: TCP connectivity of subnet "169.254.0.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis111:169.254.250.101        elvis112:169.254.215.143        passed          
Result: TCP connectivity check passed for subnet "169.254.0.0"




Interfaces found on subnet "192.168.56.0" that are likely candidates for VIP are:
elvis112 eth0:192.168.56.112 eth0:192.168.56.115 eth0:192.168.56.114
elvis111 eth0:192.168.56.111 eth0:192.168.56.113


Interfaces found on subnet "172.168.1.0" that are likely candidates for VIP are:
elvis112 eth1:172.168.1.112
elvis111 eth1:172.168.1.111


Interfaces found on subnet "169.254.0.0" that are likely candidates for VIP are:
elvis112 eth1:169.254.215.143
elvis111 eth1:169.254.250.101


WARNING: 
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.168.1.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.


Result: Node connectivity check passed




Verification of node connectivity was successful. 


When you issue the nodecon command as shown in the previous example, it instructs CVU to perform the following tasks:


1.Discover all the network interfaces that are available on the specified cluster nodes.
2.Review the corresponding IP addresses and subnets for the interfaces.
3.Obtain the list of interfaces that are suitable for use as VIPs and the list of interfaces to private interconnects.
4.Verify the connectivity among all the nodes through those interfaces.


When you run the nodecon command in verbose mode, it identifies the mappings between the interfaces, IP addresses, and subnets.


4.To verify the connectivity among the nodes through specific network interfaces, use the comp nodecon command with the -i option and specify the interfaces to be checked with the interface_list argument:


cluvfy comp nodecon -n node_list -i interface_list [-verbose]
Example (note that this run passes the literal placeholder interface_list as the interface name, which is why CVU reports PRVG-11049; a run with a real interface, eth0, follows further below):
[grid@elvis111 ~]$ cluvfy comp nodecon -n elvis111,elvis112 -i interface_list -verbose


Verifying node connectivity 


Checking node connectivity...


Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  elvis112                              passed                  
  elvis111                              passed                  


Verification of the hosts config file successful




Interface information for node "elvis112"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.112  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.115  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.114  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth1   172.168.1.112   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  
 eth1   169.254.215.143 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  




Interface information for node "elvis111"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.111  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth0   192.168.56.113  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth1   172.168.1.111   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  
 eth1   169.254.250.101 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  




ERROR: 
PRVG-11049 : Interface "interface_list" does not exist on nodes "elvis112,elvis111"


Check: Node connectivity for interface "interface_list"
Result: Node connectivity failed for interface "interface_list"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.168.1.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.


Result: Node connectivity check failed




Verification of node connectivity was unsuccessful on all the specified nodes. 
[grid@elvis111 ~]$ 




For example, you can verify the connectivity among the nodes racnode1, racnode2, and racnode3, through the specific network interface eth0 by running the following command:


cluvfy comp nodecon -n racnode1,racnode2,racnode3 -i eth0 -verbose
Example output from this environment:
[grid@elvis111 ~]$ cluvfy comp nodecon -n elvis111,elvis112 -i eth0 -verbose


Verifying node connectivity 


Checking node connectivity...


Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  elvis112                              passed                  
  elvis111                              passed                  


Verification of the hosts config file successful




Interface information for node "elvis112"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.112  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.115  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth0   192.168.56.114  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:72:C8:E4 1500  
 eth1   172.168.1.112   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  
 eth1   169.254.215.143 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:ED:96:2A 1500  




Interface information for node "elvis111"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.111  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth0   192.168.56.113  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:FA:68:C3 1500  
 eth1   172.168.1.111   172.168.1.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  
 eth1   169.254.250.101 169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:91:EA:DE 1500  




Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis112[192.168.56.112]        elvis112[192.168.56.115]        yes             
  elvis112[192.168.56.112]        elvis112[192.168.56.114]        yes             
  elvis112[192.168.56.112]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.112]        elvis111[192.168.56.113]        yes             
  elvis112[192.168.56.115]        elvis112[192.168.56.114]        yes             
  elvis112[192.168.56.115]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.115]        elvis111[192.168.56.113]        yes             
  elvis112[192.168.56.114]        elvis111[192.168.56.111]        yes             
  elvis112[192.168.56.114]        elvis111[192.168.56.113]        yes             
  elvis111[192.168.56.111]        elvis111[192.168.56.113]        yes             
Result: Node connectivity passed for interface "eth0"




Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  elvis111:192.168.56.111         elvis112:192.168.56.112         passed          
  elvis111:192.168.56.111         elvis112:192.168.56.115         passed          
  elvis111:192.168.56.111         elvis112:192.168.56.114         passed          
  elvis111:192.168.56.111         elvis111:192.168.56.113         passed          
Result: TCP connectivity check passed for subnet "192.168.56.0"


Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.168.1.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.


Result: Node connectivity check passed




Verification of node connectivity was successful. 


Enabling Tracing
CVU does not generate trace files unless you enable tracing. The CVU trace files are created in the CRS_home/cv/log directory. Oracle RAC automatically rotates the log files, and the most recently created log file has the name cvutrace.log.0. You should remove unwanted log files or archive them to reclaim disk space, if needed.




To enable tracing using CVU:
1.In a command window, log in to the operating system as the root user.
2.Set the environment variable SRVM_TRACE to true. In a Bourne-compatible shell:
# SRVM_TRACE=true; export SRVM_TRACE
3.Run the cluvfy command that you want to trace.
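Putting it together, a minimal tracing session might look like the following; the Grid home path is only an assumption for this environment, and the cluvfy command can be any of the checks shown earlier:

# export SRVM_TRACE=true
# cluvfy comp ocr -n all -verbose
# ls -lt /u01/app/11.2.0/grid/cv/log | head -5     # the newest trace file is cvutrace.log.0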