Library Cache Miss


The following question came up on the comp.databases.oracle.server newsgroup a little while ago: 

What is the interpretation of the following:

  • Misses in library cache during parse
    • (Misses while looking for plan in library cache–Hard parse)
  • Misses in library cache during execute
    • (Misses while about to execute the plan and found it missing/invalid in library cache)
 call     count       cpu    elapsed       disk      query    current       rows
 ------- ------  -------- ---------- ---------- ---------- ---------- ----------
 Parse        2      0.02       0.02          0          0          0          0
 Execute      7      0.09       0.08          0          5          0          0
 Fetch     2017      0.63       0.75        203      47302          0      15000
 ------- ------  -------- ---------- ---------- ---------- ---------- ----------
 total     2026      0.75       0.86        203      47307          0      15000

 Misses in library cache during parse: 1
 Misses in library cache during execute: 5

As the poster points out, the statistics about “Misses in library cache” correspond to calls to the optimizer to produce an execution plan for the statement (the so-called “hard parse” – even though some such “parses” take place on the execute).
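
As a quick cross-check on how much hard parsing a session is doing, the two parse counters are exposed separately in the session statistics. Here's a minimal sketch, assuming the standard statistic names from v$statname (check your version's v$statname if they don't match):

 --  Compare total parse calls with hard parses for the current session.
 --  v$mystat holds the current session's statistics; join to v$statname for the names.
 select sn.name, ms.value
 from   v$mystat ms, v$statname sn
 where  ms.statistic# = sn.statistic#
 and    sn.name in ('parse count (total)', 'parse count (hard)')
 ;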

The real question, presumably, is “Why did Oracle have to re-optimise the statement so frequently prior to executing it?”

It’s hard to infer specific reasons, but we can take a couple of guesses. The statement was parsed twice (two parse calls) and executed seven times – so the client code seems to be holding the cursor open for at least some of the time. On one parse call and two execute calls the execution plan was found in the library cache, but on most occasions it could not be found. So either it had been invalidated, or it had been flushed from the library cache (and “reloaded”) fairly frequently.

Invalidation means that some significant change occurred to one of the objects referenced, making the current execution plan invalid. Flushes will occur if there is demand for free memory – execution plans are re-creatable, so they can be flushed from memory even while the cursor is being held open by the client.
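
If you want a quick overview before chasing a single statement, v$librarycache summarises reloads and invalidations by namespace (the SQL AREA line is the one of interest here). A minimal sketch:

 --  Aggregate reloads and invalidations per library cache namespace.
 --  High reloads with few invalidations suggests memory pressure;
 --  high invalidations suggests DDL or similar structural change.
 select namespace, gets, pins, reloads, invalidations
 from   v$librarycache
 order by namespace
 ;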

To investigate further, we need to know whether the problem was one of simple reloads or invalidations. The first step is to go back to the raw trace file (which I assume you have since you’ve just used tkprof on it) and find the “parsing” line for the guilty statement. It will look something like this:

 PARSING IN CURSOR #3 len=55 dep=0 uid=62 oct=3 lid=62 tim=117415948231 hv=1024979987 ad='1893bf40'
 select /*+ find this */ count(n2) from t1 where n1 = 15

The hv value is the hash_value (even in 10g) from v$sql, and the ad is the address. Once you have these values, you can query v$sql for the critical information:

 SQL> select
   2     child_number, loads, invalidations, parse_calls, executions
   3  from
   4     v$sql
   5  where
   6     hash_value = 1024979987
   7  ;

 CHILD_NUMBER      LOADS INVALIDATIONS PARSE_CALLS EXECUTIONS
 ------------ ---------- ------------- ----------- ----------
            0          6             5           1          1

I’ve only used the hash_value (that’s the efficient way into v$sql for versions before 10g – the address is useful only in the unlikely case of a hash collision occurring).
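
If you did happen to hit a collision, the ad value from the trace line lets you pin down the right parent. A minimal sketch, reusing the ad='1893bf40' from the example above:

 --  Only needed if two different statements share the same hash_value.
 --  v$sql.address is a RAW, so convert the trace file's ad value explicitly.
 select child_number, loads, invalidations, parse_calls, executions
 from   v$sql
 where  hash_value = 1024979987
 and    address = hextoraw('1893BF40')
 ;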

As you can see, the number of loads is just a tiny bit bigger than the number of invalidations. This tells me that most of my problem was due to a structural effect, rather than to losing the plan through memory starvation. If the number of loads had been much higher than the number of invalidations, that would have indicated a memory flushing problem.

As you can see, I’ve also reported the child_number – this isn’t information that makes a lot of sense in the trace file (it’s potentially transient and misleading) – but it is possible for the same SQL text to result in many child cursors, so it’s worth checking v$sql to make sure you can see them all.
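
If you do find several child cursors, v$sql_shared_cursor reports (one flag column per reason) why each child could not share an existing cursor. The set of flag columns varies a lot between versions, so a lazy select * is the safest sketch:

 --  One row per child cursor; a 'Y' in a flag column names the reason that
 --  child could not be shared. The address column (in recent versions) is the
 --  parent address, i.e. the ad value from the trace file.
 select *
 from   v$sql_shared_cursor
 where  address = hextoraw('1893BF40')
 ;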

Just for reference, one of the issues that could cause this type of “excess invalidation” (prior to 10gR2) was the use of global temporary tables (GTTs) defined as ‘on commit preserve rows’. To empty such a table you either had to terminate your session, delete all the rows (inefficiently), or truncate the table.

But the truncate command is DDL, so if you truncated your private copy of the table every cursor referencing that table would be invalidated. I’ve seen this cause massive contention in the SQL area when such tables have been used at high levels of concurrency.
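
If you want to see the effect for yourself on a pre-10gR2 system, a minimal sketch along the following lines should do it (the table name gtt_demo is made up for the demonstration):

 --  Hypothetical demo: watch invalidations climb when a session truncates its
 --  private copy of a GTT created with 'on commit preserve rows'.
 create global temporary table gtt_demo (n1 number) on commit preserve rows;

 insert into gtt_demo values (1);
 select count(*) from gtt_demo;           -- loads a cursor into the library cache

 select sql_text, loads, invalidations, parse_calls, executions
 from   v$sql
 where  sql_text like 'select count(*) from gtt_demo%';

 truncate table gtt_demo;                 -- DDL: invalidates every cursor referencing the table

 select sql_text, loads, invalidations, parse_calls, executions
 from   v$sql
 where  sql_text like 'select count(*) from gtt_demo%';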

