ORA-32701


1. Version Information

Operating system version: AIX 61009

Database version: 11.2.0.3.11 (RAC)


2. Error Description

  During a routine check, both instances of a RAC database using ASM were found to report an ORA-32701 error roughly once an hour. The relevant messages excerpted from the alert log are as follows:

Sat Dec 06 09:44:00 2014
Errors in file /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/trace/egmmdb2_dia0_13500888.trc  (incident=1041128):
ORA-32701: Possible hangs up to hang ID=0 detected
Incident details in: /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/incident/incdir_1041128/egmmdb2_dia0_13500888_i1041128.trc
DIA0 terminating blocker (ospid: 15335610 sid: 1299 ser#: 5849) of hang with ID = 3
    requested by master DIA0 process on instance 1
    Hang Resolution Reason: Although the number of affected sessions did not
    justify automatic hang resolution initially, this previously ignored
    hang was automatically resolved.
    by terminating session sid: 1299 ospid: 15335610
Sat Dec 06 09:44:01 2014
Sweep [inc][1041128]: completed
Sweep [inc2][1041128]: completed
DIA0 successfully terminated session sid:1299 ospid:15335610 with status 31.
Sat Dec 06 09:45:35 2014
Errors in file /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/trace/egmmdb2_dia0_13500888.trc  (incident=1041129):
ORA-32701: Possible hangs up to hang ID=0 detected
Incident details in: /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/incident/incdir_1041129/egmmdb2_dia0_13500888_i1041129.trc
DIA0 terminating blocker (ospid: 15335610 sid: 1299 ser#: 5849) of hang with ID = 3
    requested by master DIA0 process on instance 1
    Hang Resolution Reason: Although the number of affected sessions did not
    justify automatic hang resolution initially, this previously ignored
    hang was automatically resolved.
    by terminating the process
DIA0 successfully terminated process ospid:15335610.
Sat Dec 06 09:45:37 2014
Sweep [inc][1041129]: completed
Sweep [inc2][1041129]: completed
Sat Dec 06 10:45:12 2014
Errors in file /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/trace/egmmdb2_dia0_13500888.trc  (incident=1041130):
ORA-32701: Possible hangs up to hang ID=0 detected
Incident details in: /oracle/app/oracle/diag/rdbms/egmmdb/egmmdb2/incident/incdir_1041130/egmmdb2_dia0_13500888_i1041130.trc
Sat Dec 06 10:45:13 2014
Sweep [inc][1041130]: completed
Sweep [inc2][1041130]: completed
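The incidents recorded above can also be listed and drilled into with ADRCI instead of grepping the alert log directly. A minimal sketch, assuming the ADR homepath implied by the trace paths shown above:

$ adrci
adrci> set homepath diag/rdbms/egmmdb/egmmdb2
adrci> show incident -mode brief
adrci> show incident -mode detail -p "incident_id=1041129"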

The following is excerpted from egmmdb2_dia0_13500888_i1041129.trc:

<span style="color:#333333;">*** 2014-12-06 09:45:35.770Resolvable Hangs in the System                      Root       Chain Total               Hang                  Hang Hang          Inst Root  #hung #hung  Hang   Hang  Resolution              ID Type Status   Num  Sess   Sess  Sess  Conf   Span  Action               ----- ---- -------- ---- ----- ----- ----- ------ ------ -------------------      3 HANG RSLNPEND    2  1299     2     2   HIGH GLOBAL Terminate Process    Hang Resolution Reason: Although the number of affected sessions did not    justify automatic hang resolution initially, this previously ignored    hang was automatically resolved.       inst# SessId  Ser#     OSPID PrcNm Event      ----- ------ ----- --------- ----- -----          1   1444  7855  10420452  M000 enq: FU - contention          2   1299  5849  15335610  M000 not in wait     <span style="color:#ff0000;"><<<<====从这里我们可以看出一个M00*进程阻塞了另一个M00*进程(从上面的alert日志中我们可以看到,Hang Manager通过杀掉造成阻塞的1299会话解决Hang问题)</span> Dumping process info of pid[155.15335610] (sid:1299, ser#:5849)    requested by master DIA0 process on instance 1. *** 2014-12-06 09:45:35.770Process diagnostic dump for oracle@egmmdb2 (M000), OS id=15335610,pid: 155, proc_ser: 153, sid: 1299, sess_ser: 5849 -------------------------------------------------------------------------------os thread scheduling delay history: (sampling every 1.000000 secs)  0.000000 secs at [ 09:45:35 ]    NOTE: scheduling delay has not been sampled for 0.376554 secs  0.000000 secs from [ 09:45:31 - 09:45:36 ], 5 sec avg  0.000000 secs from [ 09:44:36 - 09:45:36 ], 1 min avg  0.000000 secs from [ 09:40:36 - 09:45:36 ], 5 min avgloadavg : 2.68 2.42 2.41swap info: free_mem = 19881.13M rsv = 256.00M            alloc = 138.07M avail = 65536.00M swap_free = 65397.93M       F S      UID      PID     PPID   C PRI NI ADDR    SZ    WCHAN    STIME    TTY  TIME CMD  240001 A   oracle 15335610        1   0  60 20 948d16590 209136 f1000a01500d48b0 08:37:22      -  0:01 ora_m000_egmmdb2Short stack dump: ksedsts()+360<-ksdxfstk()+44<-ksdxcb()+3384<-sspuser()+116<-49f0<-sntpread()+40<-ntpfprd()+116<-nsbasic_brc()+1116<-nsbrecv()+128<-nioqrc()+17480<-ttcdrv()+988<-nioqwa()+76<-upirtrc()+2428<-kpurcsc()+108<-kpuexec()+3088<-OCIStmtExecute()+32<-kfgGrpTableCbInternal()+2756<-kfgGrpTableCb()+48<-qerfxFetch()+3728<-qergsFetch()+6856<-opifch2()+6228<-opiefn0()+552<-opipls()+3492<-opiodr()+720<-rpidrus()+216<-skgmstack()+120<-rpidru()+132<-rpiswu2()+480<-rpidrv()+1060<-psddr0()+260<-psdnal()+472<-pevm_EXECC()+532<-pfrinstr_EXECC()+72<-pfrrun_no_tool()+96<-pfrrun()+1032<-plsql_run()+616<-peicnt()+320<-kkxexe()+508<-opiexe()+17224<-opiall0()+1296<-opikpr()+488<-opiodr()+720<-rpidrus()+216<-skgmstack()+120<-rpidru()+132<-rpiswu2()+480<-kprball()+1036<-kkxsexr()+380<-kkxsexe()+604<-pevm_icd_call_common()+896<-pfrinstr_ICAL()+156<-pfrrun_no_tool()+96<-pfrrun()+1032<-plsql_run()+616<-peicnt()+320<-kkxexe()+508<-opiexe()+17224<-kpoal8()+4616<-opiodr()+720<-kpoodrc()+36<-rpiswu2()+480<-kpoodr()+512<-upirtrc()+1328<-kpurcsc()+108<-kpuexec()+10448<-OCIStmtExecute()+32<-kewuxs_execute_statement()+676<-kewfeus_execute_usage_sampling()+268<-kewfads_auto_dbfus_slave()+1296<-kebm_slave_main()+492<-ksvrdp()+1708<-opirip()+724<-opidrv()+608<-sou2o()+136<-opimai_real()+188<-ssthrdmain()+268<-main()+204<-__start()+112 -------------------------------------------------------------------------------Process diagnostic dump actual duration=0.084000 sec  (max dump time=15.000000 sec)*** 2014-12-06 09:45:35.854current sql: SELECT 
MAX(TOTAL_MB), MIN(TOTAL_MB), SUM(TOTAL_MB), COUNT(*) FROM V$ASM_DISKGROUP <<span style="color:#ff0000;"><<<====M000的进程在进行磁盘组空间的统计。(相同的情况在节点2上也在发生,也同时在ON CPU上执行current sql: SELECT MAX(TOTAL_MB), MIN(TOTAL_MB), SUM(TOTAL_MB), COUNT(*) FROM V$ASM_DISKGROUP ,产生节点1和节点2之间M000进程的交叉等待,之后Hang manager介入杀掉进程,同时提示ORA-32701 ),关于这种相互等待我们可以参考enq: FU - contention' and ORA-32701 Warning Seen in Alert Log (Doc ID 1464844.1)                                                                     </span>***********中间部分内容省略 ***********************           
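While this cross-node wait is happening, the FU enqueue holder and waiter can usually be confirmed directly from gv$lock. A sketch; FU is the enqueue type named in the trace, and the columns are the standard gv$lock layout:

-- list holders (lmode > 0) and waiters (request > 0) of the FU enqueue across both instances
SELECT inst_id, sid, type, id1, id2, lmode, request, ctime, block
  FROM gv$lock
 WHERE type = 'FU';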
Next, let's look at the Current Wait Stack section of the trace file:
Current Wait Stack:
  Not in wait; last wait ended 58 min 27 sec ago

<<<<==== A new session needed the FU enqueue in exclusive mode and failed to get it: the enqueue was already held by another session (held for more than 58 minutes while "not in wait"), and that holder was itself stuck in the "not in wait" state, so the session holding the FU enqueue was eventually killed by the Hang Manager.

  There are 1 sessions blocked by this session.
  Dumping one waiter:
    inst: 1, sid: 1444, ser: 7855
    wait event: 'enq: FU - contention'
      p1: 'name|mode'=0x46550006
      p2: '0'=0x0
      p3: '0'=0x0
    row_wait_obj#: 4294967295, block#: 0, row#: 0, file# 0
    min_blocked_time: 534 secs, waiter_cache_ver: 31914
  Wait State:
    fixed_waits=0 flags=0x20 boundary=0x0/-1
  Session Wait History:
      elapsed time of 58 min 27 sec since last wait
   0: waited for 'latch free'
      address=0x7000000000356a0, number=0x13c, tries=0x0
      wait_id=12785 seq_num=20683 snap_id=1
      wait times: snap=0.001418 sec, exc=0.001418 sec, total=0.001418 sec
      wait times: max=infinite
      wait counts: calls=0 os=0
      occurred after 0.001614 sec of elapsed time
   1: waited for 'control file sequential read'
      file#=0x0, block#=0x1a, blocks=0x1
      wait_id=12784 seq_num=20682 snap_id=1
      wait times: snap=0.000258 sec, exc=0.000258 sec, total=0.000258 sec
      wait times: max=infinite
      wait counts: calls=0 os=0
      occurred after 0.000016 sec of elapsed time
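The blocker/waiter relationship shown in this wait stack can also be observed live from gv$session while the hang is ongoing (a sketch using the standard blocking_session columns):

-- sessions stuck on the FU enqueue, together with the instance/session blocking them
SELECT inst_id, sid, serial#, program, event, state,
       blocking_instance, blocking_session, seconds_in_wait
  FROM gv$session
 WHERE event = 'enq: FU - contention';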

3. Cause and Resolution

  To deal with this problem we first need to understand the Oracle 11g Hang Manager, a new feature in that release. For details on the Hang Manager, see my other blog post:

http://blog.csdn.net/shaochenshuo/article/details/41845079 (based on MOS material).

  With the Hang Manager in mind, and combining it with the alert log and trace analysis above, we know the error was caused by contention on the FU enqueue. But why did the FU enqueue contention arise in the first place, and why did the blocking session hold the FU enqueue for as long as 58 minutes?

  I tried running SELECT MAX(TOTAL_MB), MIN(TOTAL_MB), SUM(TOTAL_MB), COUNT(*) FROM V$ASM_DISKGROUP; manually, and the statement simply hung; after tens of minutes it had still not returned. I then tried querying v$asm_disk, and it hung as well. No wonder the database kept reporting ORA-32701: the MMON slave process hangs while collecting disk group usage information. (Presumably this information is collected about once an hour, since the error occurs roughly once an hour; when the next collection task starts, it finds the FU enqueue still held by the previous task, the Hang Manager then kills the previous session, and the cycle repeats.)
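If this guess about the hourly collection is right, the stuck M000 slaves and the statement they are running should be visible while the problem is reproducing. A hedged sketch:

-- MMON slave (M000) sessions on both nodes and the SQL they are currently executing
SELECT s.inst_id, s.sid, s.serial#, s.program, s.event, s.state, q.sql_text
  FROM gv$session s
  LEFT JOIN gv$sql q
    ON q.inst_id = s.inst_id
   AND q.sql_id = s.sql_id
   AND q.child_number = s.sql_child_number
 WHERE s.program LIKE '%M00%';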

   That raises the next question: why do queries against the disk-related views hang?

  I found a MOS note that seemed to match our error: 'enq: FU - contention' and ORA-32701 Warning Seen in Alert Log (Doc ID 1464844.1). It says the query may hang because of a statistics problem, but the fixed objects statistics in our database were fine.
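For reference, checking whether the fixed objects statistics are in place, and regathering them as the note suggests, looks roughly like this (a sketch; in our case they were already current):

-- when were the ASM-related fixed tables last analyzed?
SELECT owner, table_name, last_analyzed
  FROM dba_tab_statistics
 WHERE object_type = 'FIXED TABLE'
   AND table_name LIKE 'X$KF%';

-- regather fixed object statistics if they are missing or stale
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;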

  With no other option I opened an SR, but more than a month went by and Oracle still had not identified the cause, so we were left to rely on ourselves. In my experience an abnormal asm_diskstring often makes queries against the disk-related views slow; in our database this parameter was empty (normally that is not a problem: when the parameter is unset, ASM scans the operating-system-specific default directories for disks). So I suspected this parameter might be involved. Based on that guess I tried to reset it with alter system set asm_diskstring='' scope=both; and found that this statement hung as well. In the end I restarted the instance; after the restart the database no longer reported ORA-32701 and the disk-related views could be queried normally again.
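For completeness, the check and the attempted reset were along these lines (run against the ASM instance; the ALTER SYSTEM is the statement mentioned above that hung):

-- confirm the current asm_diskstring value
SELECT name, value, isdefault FROM v$parameter WHERE name = 'asm_diskstring';

-- attempted reset (this statement itself hung in our case)
ALTER SYSTEM SET asm_diskstring = '' SCOPE = BOTH;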

  The error is resolved, but the exact trigger is still unclear; I will update this post if there are any further findings.

## Related MOS notes

'enq: FU - contention' and ORA-32701 Warning Seen in Alert Log (Doc ID 1464844.1)

Queries On V$Asm_diskgroup Are Taking Long Time (Doc ID 1552549.1)

Bug 8786114 - V$asm_disk query takes way too long, kfod commands hang (Doc ID 8786114.8) 

Many sessions hang resolved by Hang Manager with ORA-32701 in alert log while the blocker is in 'Not in wait' status (Doc ID 1926781.1)






