An Analysis of /dev/shm

Source: Internet · Editor: 程序博客网 · Date: 2024/06/07 14:35

A colleague emailed me about an Oracle host where /dev/shm held a lot of files and looked as if it were about to fill up; in fact only 62% of it was used.

1. /dev/shm theory
/dev/shm/ is a very useful directory on Linux, because it lives in memory rather than on disk. There is therefore no need to go to the trouble of building a ramdisk: using /dev/shm/ directly already gives a good optimization effect. One thing to watch with /dev/shm/ is capacity. On Linux its maximum size defaults to half of RAM, as the df -h command shows. It does not actually occupy that memory up front: if /dev/shm/ contains no files, it uses 0 bytes; if its maximum is 1 GB and it holds 100 MB of files, the remaining 900 MB is still available to other applications. The 100 MB that is in use, however, will never be reclaimed and repartitioned by the system while the files exist; otherwise, who would dare store files there?
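This pay-as-you-go behavior is easy to observe from the shell. A minimal sketch (the demo file name is arbitrary):

```shell
# The "Size" column df reports is only a cap; tmpfs consumes RAM lazily, per file.
df -h /dev/shm

# Write a 100 MB file: only now does tmpfs actually take 100 MB of memory.
dd if=/dev/zero of=/dev/shm/demo.dat bs=1M count=100

df -h /dev/shm        # "Used" has grown by 100M

# Removing the file returns the memory to the system immediately.
rm /dev/shm/demo.dat
df -h /dev/shm        # back to the original usage
```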

The system mounts /dev/shm by default; this is the so-called tmpfs. Some people equate it with a ramdisk (virtual disk), but it is not the same. Like a ramdisk, tmpfs can use your RAM, but it can also use your swap space for storage. Moreover, a traditional ramdisk is a block device and needs a command such as mkfs before it can actually be used; tmpfs is a file system, not a block device: you simply mount it and it is ready to use.
  tmpfs has the following advantages:
  1. Dynamic file system size.
  2. Speed. Because a typical tmpfs file system resides entirely in RAM, reads and writes are almost instantaneous.
  3. tmpfs data does not survive a reboot, because virtual memory is volatile by nature. It is therefore necessary to have boot scripts perform operations such as recreating directories and bind mounts.
 
2. Resizing /dev/shm
The default maximum of half of RAM may not be enough in some situations, and the default inode count is low and usually needs to be raised; both can be managed with the mount command.
#mount -o size=1500M -o nr_inodes=1000000 -o noatime,nodiratime -o remount /dev/shm
On a machine with 2 GB of RAM this raises the maximum capacity to 1.5 GB and the inode count to 1,000,000, which means roughly up to a million small files can be stored.
 
To make the /dev/shm size permanent, modify /etc/fstab (note that the tmpfs size option does not accept fractional values, so 1.5 GB is written as 1536m):
tmpfs /dev/shm tmpfs defaults,size=1536m 0 0
#mount -o remount /dev/shm
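After the remount, both the new cap and the raised inode limit can be read back to confirm they took effect (a quick check, no Oracle involved):

```shell
# Capacity: the Size column should now show the new cap (e.g. 1.5G)
df -h /dev/shm

# Inodes: the Inodes column should now show ~1000000
df -i /dev/shm

# The active mount options can also be inspected directly
mount | grep /dev/shm
```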
 
3. Using /dev/shm
  First create a tmp directory under /dev/shm, then bind it to the real /tmp:
  #mkdir /dev/shm/tmp
  #chmod 1777 /dev/shm/tmp
  #mount --bind /dev/shm/tmp /tmp
  After mount --bind olderdir newerdir is used to mount one directory onto another, the permissions, owner, and all other metadata seen at newerdir change: the mount point takes on every attribute of the directory bound onto it, except the name. Oracle 11g's AMM memory management mode uses /dev/shm, which is why modifying MEMORY_TARGET or MEMORY_MAX_TARGET can sometimes produce the ORA-00845 error.
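A simple pre-flight check for ORA-00845 is to compare the tmpfs cap against the planned memory target. A sketch; the TARGET_MB value here is an assumed planned MEMORY_TARGET, adjust it to your own setting:

```shell
# ORA-00845 is raised when MEMORY_TARGET cannot fit inside /dev/shm.
SHM_KB=$(df -k /dev/shm | awk 'NR==2 {print $2}')   # tmpfs cap in KB
TARGET_MB=1500                                      # planned MEMORY_TARGET in MB (assumption)

if [ "$SHM_KB" -lt $((TARGET_MB * 1024)) ]; then
  echo "ORA-00845 risk: /dev/shm caps at $((SHM_KB / 1024)) MB < ${TARGET_MB} MB target"
else
  echo "OK: /dev/shm can hold a ${TARGET_MB} MB MEMORY_TARGET"
fi
```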
 
So where do these Oracle files and ASM files come from? The MOS document puts it this way: Oracle is simply designed to work like this.
 
Oracle Server - Enterprise Edition - Version: 11.1.0.7 to 11.2.0.2 - Release: 11.1 to 11.2
Linux x86
Linux x86-64
 
Goal
 
Oracle is creating hundreds of thousands of open file descriptors in /dev/shm (open files)
 
$ lsof -n | grep /dev/shm | wc -l
247455
 


But there are only hundreds of files in /dev/shm
 
$ ls -l /dev/shm/* | wc -l
262
 


1. Why is Oracle keeping hundreds of thousands of open file descriptors in /dev/shm while there are just hundreds of files?

2. Is this a known issue (do any notes/documents/bug reports/fixes exist)?

3. Or is this expected behavior of Oracle?
 
Solution
 

1) Let's use a test database (11.1.0.7) to demonstrate how Automatic Memory Management uses file descriptors and why there are so many Open File descriptors.

    

A) Before starting the 11.1.0.7 database, /dev/shm is empty and there are no open files
$ ls -l /dev/shm
total 0
$ lsof -n | grep /dev/shm


B)  Let's start the database, then check /dev/shm

UNIX> sqlplus " / as sysdba"
       SQL*Plus: Release 11.1.0.7.0 - Production on Fri May 6 14:57:28 2011
       Copyright (c) 1982, 2008, Oracle. All rights reserved.
       Connected to an idle instance.

 SQL> startup
       ORACLE instance started.
       Total System Global Area 845348864 bytes
       Fixed Size 1316656 bytes
       Variable Size 578816208 bytes
       Database Buffers 260046848 bytes
       Redo Buffers 5169152 bytes
       Database mounted.
       Database opened.
 

SQL> show parameter memory_target
NAME           TYPE        VALUE
-------------- ----------- ------
memory_target  big integer 808M


SQL> show parameter memory_max_target
NAME               TYPE        VALUE
------------------ ----------- ------
memory_max_target  big integer 808M


C) let's check /dev/shm again

UNIX> ls -l /dev/shm/* | wc -l
           203
UNIX> lsof -n | grep /dev/shm | wc -l
            4872
 


Number of files in /dev/shm
-----------------------------------------
There are 203 files (each 4 MB in size, which is the granule size).
This is approximately MEMORY_TARGET/4 MB.
Since MEMORY_TARGET < 1 GB, 203 x 4 MB files are created.

For a larger MEMORY_TARGET,  the number of files in /dev/shm = MEMORY_TARGET/granule size

See Note 947152.1 How to determine granule size.
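The file-count formula can be checked with a line of shell arithmetic; 808 MB and the 4 MB granule are the values from this test instance, and the note above calls the result approximate:

```shell
# Number of /dev/shm files ~= MEMORY_TARGET / granule size (rounded up)
TARGET_MB=808     # from "show parameter memory_target"
GRANULE_MB=4      # granule size when MEMORY_TARGET < 1 GB

FILES=$(( (TARGET_MB + GRANULE_MB - 1) / GRANULE_MB ))
echo "$FILES"     # 202, close to the 203 files actually observed above
```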

Number of open files descriptors
------------------------------------------------
There are 4872 open file handles. Why?

After starting the 11.1.0.7 database, 24 background processes were created
UNIX> ps -ef | grep -i <database name> | wc -l
24


Each background process opens all 203 files: 203 x 24 = 4872
UNIX> lsof -n | grep /dev/shm | wc -l
4872
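The global lsof count is simply that product; a cross-check sketch (the commented lsof line is illustrative, with the PID left as a placeholder):

```shell
# Every one of the 24 background processes maps all 203 granule files,
# so the instance-wide descriptor count on /dev/shm is the product:
FILES_PER_PROC=203
BG_PROCS=24
echo $((FILES_PER_PROC * BG_PROCS))   # 4872

# Per-process view (substitute the PID of one background process, e.g. pmon):
# lsof -n -p <pid> | grep /dev/shm | wc -l
```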

 

Please note that there are no connections to the database yet. Oracle is not leaking file descriptors.

For each instance running on the server, each Oracle background process opens the files under /dev/shm that belong to that instance. Those files stay open for as long as the database is running; no connections are required.


In a dedicated server environment, a dedicated process is created for each database connection.
Each dedicated process needs to attach to the /dev/shm shared memory segments, so it opens MEMORY_TARGET/<granule size> file descriptors. In addition, it holds file descriptors for each datafile.

These file descriptors persist until the connections are terminated.
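Putting the pieces together, the total descriptor demand of a dedicated-server AMM instance can be estimated up front. All figures below except the 203/24 pair from the test above are assumptions for illustration:

```shell
GRANULES=203      # MEMORY_TARGET / granule size (808 MB / 4 MB granules)
BG_PROCS=24       # background processes at startup
SESSIONS=100      # expected dedicated connections   (assumption)
DATAFILES=10      # datafile descriptors per session (assumption)

# Every background and dedicated process maps all granule files;
# dedicated processes additionally hold one descriptor per datafile.
echo $(( (BG_PROCS + SESSIONS) * GRANULES + SESSIONS * DATAFILES ))   # 26172
```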


2) This is the expected behavior, and it is also documented in the Reference Manual:
 
Oracle Database Administrator's Reference        
11g Release 1 (11.1) for Linux and UNIX-Based Operating Systems
Part Number B32009-09

Appendix C
Administering Oracle Database on Linux

C.6 Allocating Shared Resources

"The number of file descriptors for each Oracle instance are increased by 512*PROCESSES. Therefore, the maximum number of file descriptors should be at least this value, plus some more for the operating system requirements. For example, if the cat /proc/sys/fs/file-max command returns 32768 and PROCESSES are 100, you can set it to 6815744 or higher as root, to have 51200 available for Oracle."