Linux Kernel Code Analysis: slab.c

slab.c is taken from the Linux 2.4.22 kernel; the file is released under the GNU GPL.
I. Background:
  1. The slab concept:
    • Why it was proposed: a running operating system constantly creates, uses, and
      releases large numbers of identical objects, so improving how such repeated
      objects are produced yields a large efficiency win.
    • It also addresses the memory waste caused by the buddy system.
    • It was first proposed by a Sun engineer (in 1994) and first deployed in SunOS 5.4.
  2. The basic idea of the slab algorithm (a toy C sketch follows this list):
    Allocation:
    if (the corresponding cache has a free slot)
        use that slot; no re-initialization needed;
    else {
        allocate memory;
        construct the object;
    }
    Release:
    mark the slot free in the cache; do not run the destructor;
    When memory runs short:
    find object space that is not in use;
    run destructors on some objects as required;
    release the space those objects occupied;
  3. Cache: every object type is kept in a cache of its own.
  4. slab: each slab block is an integer multiple of the page size (up to a limit).
  5. Colouring: aligning byte counts to what the hardware requires greatly improves
     hardware-cache utilization and efficiency.
  6. The two management modes of a slab block:
    • on-slab, for small objects (smaller than 1/8 page): the slab management structure
      is stored inside the slab block itself.
    • off-slab, for large objects (1/8 page or more): the objects and the slab block's
      management structure are both allocated from cache_slabp.
      According to the paper by slab's author, slab is not a good fit for large objects.
  7. The important operations slab involves:
    • cache creation, kmem_cache_create, and destruction, kmem_cache_destroy
    • cache shrinking, kmem_cache_shrink, and growing, kmem_cache_grow
    • object allocation, kmem_cache_alloc, and release, kmem_cache_free
    • kernel memory allocation, kmalloc, and release, kfree
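To make the allocate/release idea above concrete, here is a toy, user-space C sketch of an object cache. All names are invented for the example; the real kernel implementation follows in section III.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

struct obj { char buf[64]; };

#define CACHE_CAP 8
static struct obj *cache[CACHE_CAP];   /* freed-but-still-constructed objects */
static int cached;

static struct obj *obj_alloc(void)
{
    if (cached > 0)                     /* cache hit: skip construction */
        return cache[--cached];
    struct obj *o = malloc(sizeof(*o)); /* miss: allocate and construct */
    if (o)
        memset(o->buf, 0, sizeof(o->buf));
    return o;
}

static void obj_free(struct obj *o)
{
    if (cached < CACHE_CAP)             /* keep it constructed for reuse */
        cache[cached++] = o;
    else
        free(o);                        /* cache full: really release */
}

int main(void)
{
    struct obj *a = obj_alloc();
    obj_free(a);                        /* goes back into the cache... */
    struct obj *b = obj_alloc();        /* ...and is handed out again, no memset */
    printf("reused: %s\n", a == b ? "yes" : "no");
    free(b);
    return 0;
}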
II. Important data structures involved:
  1. typedef unsigned int kmem_bufctl_t: the management entry used inside a slab block.
  2. The cache_sizes table: for each size (a power of two), it stores two pointers
     (one DMA, one non-DMA) to the corresponding general cache descriptors.
  3. Lists: most important are the three lists kept by the slab management structure,
     holding the fully used, partially used, and completely unused slabs.
  4. Structs: see the code analysis below.
III. Code analysis:
In the original blog post, colors distinguished the parts of the listing: code comments, preprocessor material, C keywords and function definitions, macro definitions, plain code, output strings, and my own annotations. In this plain-text version, my annotations simply follow, or sit next to, the code they describe.
/*
* linux/mm/slab.c
* Written by Mark Hemment, 1996/97.
* (markhe@nextd.demon.co.uk)
*
* kmem_cache_destroy() + some cleanup - 1999 Andrea Arcangeli
*
* Major cleanup, different bufctl logic, per-cpu arrays
*(c) 2000 Manfred Spraul
*
(The lines above are copyright and attribution information.)
* An implementation of the Slab Allocator as described in outline in;
*UNIX Internals: The New Frontiers by Uresh Vahalia
*Pub: Prentice Hall  ISBN 0-13-101908-2
A book that covers the slab allocator.
* or with a little more detail in;
*The Slab Allocator: An Object-Caching Kernel Memory Allocator
*Jeff Bonwick (Sun Microsystems).
*Presented at: USENIX Summer 1994 Technical Conference

Jeff Bonwick first presented the slab (object-caching) concept at the USENIX Summer 1994 conference: www.usenix.org

*
*
* The memory is organized in caches, one cache for each object type.
* (e.g. inode_cache, dentry_cache, buffer_head, vm_area_struct)
* Each cache consists out of many slabs (they are small (usually one
* page long) and always contiguous), and each slab contains multiple
* initialized objects.
*

* Each cache can only support one memory type (GFP_DMA, GFP_HIGHMEM,
* normal). If you need a special memory type, then must create a new
* cache for that memory type.
*
(GFP_DMA, GFP_HIGHMEM, and normal are all defined as macros in include/linux/mm.h.)
* In order to reduce fragmentation, the slabs are sorted in 3 groups:
* full slabs with 0 free objects
* partial slabs
* empty slabs with no allocated objects
*

* If partial slabs exist, then new allocations come from these slabs,
* otherwise from empty slabs or new slabs are allocated.
*
* kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
* during kmem_cache_destroy(). The caller must prevent concurrent allocs.
*
* On SMP systems, each cache has a short per-cpu head array, most allocs
* and frees go into that array, and if that array overflows, then 1/2
* of the entries in the array are given back into the global cache.
* This reduces the number of spinlock operations.
*

* The c_cpuarray may not be read with enabled local interrupts.
*
* SMP synchronization:
* constructors and destructors are called without any locking.
* Several members in kmem_cache_t and slab_t never change, they
*are accessed without any locking.
* The per-cpu arrays are never accessed from the wrong cpu, no locking.
* The non-constant members are protected with a per-cache irq spinlock.
*

* Further notes from the original documentation:
*
* 11 April '97. Started multi-threading - markhe
*The global cache-chain is protected by the semaphore 'cache_chain_sem'.
*The sem is only needed when accessing/extending the cache-chain, which
*can never happen inside an interrupt (kmem_cache_create(),
*kmem_cache_shrink() and kmem_cache_reap()).
*

*To prevent kmem_cache_shrink() trying to shrink a 'growing' cache (which
*maybe be sleeping and therefore not holding the semaphore/lock), the
*growing field is used. This also prevents reaping from a cache.
*

*At present, each engine can be growing a cache. This should be blocked.
*
*/


#include <linux/config.h>	/* the autoconf.h generated at build time */
#include <linux/slab.h>		/* this subsystem's own header */
#include <linux/interrupt.h>	/* interrupt handling */
#include <linux/init.h>		/* initialization */
#include <linux/compiler.h>	/* compiler-specific helpers */
#include <linux/seq_file.h>	/* sequential /proc file support */
#include <asm/uaccess.h>	/* access to user-space memory */

/*
 * DEBUG	- 1 for kmem_cache_create() to honour; SLAB_DEBUG_INITIAL,
 *		  SLAB_RED_ZONE & SLAB_POISON.
 *		  0 for faster, smaller code (especially in the critical paths).
 *
 * STATS	- 1 to collect stats for /proc/slabinfo.
 *		  0 for faster, smaller code (especially in the critical paths).
 *
 * FORCED_DEBUG	- 1 enables SLAB_RED_ZONE and SLAB_POISON (if possible)
 */

#ifdef CONFIG_DEBUG_SLAB	/* if CONFIG_DEBUG_SLAB is configured, the three macros below become 1, otherwise 0 */
#define	DEBUG		1
#define	STATS		1
#define	FORCED_DEBUG	1
#else
#define	DEBUG		0
#define	STATS		0
#define	FORCED_DEBUG	0
#endif

/*
 * Parameters for kmem_cache_reap
 */
#define REAP_SCANLEN	10
#define REAP_PERFECT	10

/* Shouldn't this be in a header file somewhere? */
#define	BYTES_PER_WORD		sizeof(void *)

/* Legal flag mask for kmem_cache_create(). */
#if DEBUG	/* debug builds */
# define CREATE_MASK	(SLAB_DEBUG_INITIAL | SLAB_RED_ZONE | \
			 SLAB_POISON | SLAB_HWCACHE_ALIGN | \
			 SLAB_NO_REAP | SLAB_CACHE_DMA | \
			 SLAB_MUST_HWCACHE_ALIGN)
#else		/* non-debug builds */
# define CREATE_MASK	(SLAB_HWCACHE_ALIGN | SLAB_NO_REAP | \
			 SLAB_CACHE_DMA | SLAB_MUST_HWCACHE_ALIGN)
#endif

/*
 * kmem_bufctl_t:
 *
 * Bufctl's are used for linking objs within a slab
 * linked offsets.
 *
 * This implementation relies on "struct page" for locating the cache &
 * slab an object belongs to.
 * This allows the bufctl structure to be small (one int), but limits
 * the number of objects a slab (not a cache) can contain when off-slab
 * bufctls are used. The limit is the size of the largest general cache
 * that does not use off-slab slabs.
 * For 32bit archs with 4 kB pages, is this 56.
 * This is not serious, as it is only for large objects, when it is unwise
 * to have too many per slab.
 * Note: This limit can be raised by introducing a general cache whose size
 * is less than 512 (PAGE_SIZE<<3), but greater than 256.
 */

#define BUFCTL_END	0xffffFFFF	/* end-of-free-list marker */
#define	SLAB_LIMIT	0xffffFFFE	/* most objects a slab may ever hold */
typedef unsigned int kmem_bufctl_t;	/* kmem_bufctl_t is really just an unsigned int */

/* Max number of objs-per-slab for caches which use off-slab slabs.
 * Needed to avoid a possible looping condition in kmem_cache_grow().
 */
static unsigned long offslab_limit;	/* the per-slab object limit for off-slab caches */

/*
 * slab_t
 *
 * Manages the objs in a slab. Placed either at the beginning of mem allocated
 * for a slab, or allocated from an general cache.
 * Slabs are chained into three list: fully used, partial, fully free slabs.
 */
typedef struct slab_s {
	struct list_head	list;
	unsigned long		colouroff;	/* colour offset */
	void			*s_mem;		/* first object, including colour offset */
	unsigned int		inuse;		/* num of objs active in slab */
	kmem_bufctl_t		free;		/* index of the slab's first free object, relative to s_mem */
} slab_t;	/* the slab descriptor */

#define slab_bufctl(slabp) \
	((kmem_bufctl_t *)(((slab_t*)slabp)+1))
The slab_bufctl macro: the array of kmem_bufctl_t entries lives immediately after the slab_t header (a sketch follows).
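A short user-space sketch of how this bufctl array forms the slab's embedded free list. The names and sizes are simplified stand-ins; the alloc and free steps mirror kmem_cache_alloc_one_tail() and kmem_cache_free_one(), analyzed further below.

#include <stdio.h>

#define BUFCTL_END 0xffffFFFF
#define NUM 4
typedef unsigned int kmem_bufctl_t;

static kmem_bufctl_t bufctl[NUM];  /* plays the role of slab_bufctl(slabp) */
static unsigned int free_idx;      /* plays the role of slabp->free */

static void init_slab(void)
{
    for (unsigned int i = 0; i < NUM; i++)  /* as kmem_cache_init_objs does */
        bufctl[i] = i + 1;
    bufctl[NUM - 1] = BUFCTL_END;           /* last slot terminates the chain */
    free_idx = 0;
}

static int alloc_obj(void)                  /* cf. kmem_cache_alloc_one_tail */
{
    if (free_idx == BUFCTL_END)
        return -1;                          /* slab full */
    int n = free_idx;
    free_idx = bufctl[n];                   /* advance to the next free slot */
    return n;
}

static void free_obj(unsigned int n)        /* cf. kmem_cache_free_one */
{
    bufctl[n] = free_idx;                   /* push onto the free list */
    free_idx = n;
}

int main(void)
{
    init_slab();
    int a = alloc_obj(), b = alloc_obj();
    printf("allocated objects %d and %d\n", a, b);  /* 0 and 1 */
    free_obj(a);
    printf("next alloc reuses %d\n", alloc_obj());  /* 0 again */
    (void)b;
    return 0;
}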

/*
 * cpucache_t
 *
 * Per cpu structures
 * The limit is stored in the per-cpu structure to reduce the data cache
 * footprint.
 */
typedef struct cpucache_s {
	unsigned int avail;	/* objects currently held by this CPU */
	unsigned int limit;	/* capacity of this per-cpu array */
} cpucache_t;

#define cc_entry(cpucache) \
	((void **)(((cpucache_t*)(cpucache))+1))
The cc_entry macro yields the array of object pointers stored right behind the cpucache_t header (sketched below).
#define cc_data(cachep) \
	((cachep)->cpudata[smp_processor_id()])
The cc_data macro yields the cache's cpucache for the CPU we are currently running on.
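The layout trick behind cc_entry deserves a sketch: the object-pointer array is not a separate allocation, it lives directly after the cpucache_t header in the same block. A user-space approximation, sizing the block the way kmem_tune_cpucache() does later in this file:

#include <stdio.h>
#include <stdlib.h>

typedef struct cpucache_s {
    unsigned int avail;
    unsigned int limit;
} cpucache_t;

/* same pointer arithmetic as the kernel's cc_entry() */
#define cc_entry(cpucache) ((void **)(((cpucache_t *)(cpucache)) + 1))

int main(void)
{
    unsigned int limit = 4;
    /* one allocation holds the header plus 'limit' object pointers */
    cpucache_t *cc = malloc(sizeof(cpucache_t) + limit * sizeof(void *));
    if (!cc)
        return 1;
    cc->limit = limit;
    cc->avail = 0;

    int x = 42;
    cc_entry(cc)[cc->avail++] = &x;          /* park a "freed object" */
    void *obj = cc_entry(cc)[--cc->avail];   /* pop it on the next alloc */
    printf("got back %d\n", *(int *)obj);
    free(cc);
    return 0;
}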
/*
 * kmem_cache_t
 *
 * manages a cache.
 */


#define CACHE_NAMELEN	20	/* max name length for a slab cache */

struct kmem_cache_s {
/* 1) each alloc & free */	/* every allocation and free looks at the full and partial slabs first, then the free ones */
	/* full, partial first, then free */
	struct list_head	slabs_full;
	struct list_head	slabs_partial;
	struct list_head	slabs_free;	/* the three per-state slab lists described earlier */
	unsigned int		objsize;	/* object size */
	unsigned int		flags;		/* constant flags */
Some of the flags that may appear here (all defined in include/linux/slab.h):
SLAB_POISON: fill uninitialized space with 0xA5 (10100101) to mark it.
SLAB_RED_ZONE: place "red zones" around objects. The words at the start and end of
the red zone record the object's state: RED_MAGIC1 (0x5A2CF071) means active,
RED_MAGIC2 (0x170FC2A5) means inactive. The zone becomes active when the object is
allocated, and inactive when free objects are initialized or object space is
reclaimed. Red zones catch buffer overruns: they mark the boundary, so writes past
it are detected.
SLAB_NO_REAP: never shrink this cache automatically, even under memory pressure.
SLAB_HWCACHE_ALIGN: align objects to the hardware cache line.
CFLGS_OFF_SLAB: off-slab mode (used when managing large objects).

	unsigned int		num;	/* # of objs per slab */
	spinlock_t		spinlock;	/* the cache's spinlock */
#ifdef CONFIG_SMP	/* on SMP, also keep a batch count */
	unsigned int		batchcount;
#endif

/* 2) slab additions /removals */
	/* order of pgs per slab (2^n) */
	unsigned int		gfporder;	/* log2 of the number of pages per slab */

	/* force GFP flags, e.g. GFP_DMA */
	unsigned int		gfpflags;	/* page-allocation priority, defined in include/linux/mm.h */

	size_t			colour;		/* cache colouring range */
	unsigned int		colour_off;	/* colour offset */
	unsigned int		colour_next;	/* next colour to hand out */
	kmem_cache_t		*slabp_cache;	/* in off-slab mode, the cache the slab descriptors come from */
	unsigned int		growing;	/* set while the cache grows, so it is not shrunk at the same time */
	unsigned int		dflags;		/* dynamic flags */

	/* constructor func */
	void (*ctor)(void *, kmem_cache_t *, unsigned long);

	/* de-constructor func */
	void (*dtor)(void *, kmem_cache_t *, unsigned long);

	unsigned long		failures;	/* failure count */

/* 3) cache creation/removal */
	char			name[CACHE_NAMELEN];	/* the cache's name (as shown in /proc/slabinfo) */
	struct list_head	next;			/* link to the next cache descriptor */
#ifdef CONFIG_SMP	/* on symmetric multiprocessors: */
/* 4) per-cpu data */
	cpucache_t		*cpudata[NR_CPUS];	/* one cpucache pointer per CPU (NR_CPUS is defined in include/linux/threads.h) */
#endif
#if STATS	/* when statistics are collected: */
	unsigned long		num_active;		/* currently active objects */
	unsigned long		num_allocations;	/* total allocations */
	unsigned long		high_mark;		/* high-water mark of active objects */
	unsigned long		grown;			/* times the cache grew */
	unsigned long		reaped;			/* times the cache was reaped */
	unsigned long 		errors;			/* error count */
#ifdef CONFIG_SMP	/* and additionally on SMP: */
	atomic_t		allochit;		/* allocations served by the per-cpu array */
	atomic_t		allocmiss;		/* allocations that missed it */
	atomic_t		freehit;		/* frees absorbed by the per-cpu array */
	atomic_t		freemiss;		/* frees that missed it */
#endif
#endif
};

/* internal c_flags */
#define	CFLGS_OFF_SLAB	0x010000UL	/* slab management in own cache */
#define	CFLGS_OPTIMIZE	0x020000UL	/* optimized slab lookup */

/* c_dflags (dynamic flags). Need to hold the spinlock to access this member */
#define	DFLGS_GROWN	0x000001UL	/* don't reap a recently grown */

#define	OFF_SLAB(x)	((x)->flags & CFLGS_OFF_SLAB)	/* test: does this cache use off-slab management? */
#define	OPTIMIZE(x)	((x)->flags & CFLGS_OPTIMIZE)	/* test: is the optimized lookup enabled? */
#define	GROWN(x)	((x)->dlags & DFLGS_GROWN)	/* test: grew recently? ("dlags" is apparently a typo for dflags in the kernel source itself; GROWN() is never used in this file, so it still compiles) */

#if STATS	/* when collecting statistics */
#define	STATS_INC_ACTIVE(x)	((x)->num_active++)		/* one more active object */
#define	STATS_DEC_ACTIVE(x)	((x)->num_active--)		/* one less */
#define	STATS_INC_ALLOCED(x)	((x)->num_allocations++)	/* one more allocation */
#define	STATS_INC_GROWN(x)	((x)->grown++)			/* the cache grew */
#define	STATS_INC_REAPED(x)	((x)->reaped++)			/* the cache was reaped */
#define	STATS_SET_HIGH(x)	do { if ((x)->num_active > (x)->high_mark) \
					(x)->high_mark = (x)->num_active; \
				} while (0)	/* track the high-water mark */
#define	STATS_INC_ERR(x)	((x)->errors++)			/* one more error */
#else	/* when not collecting statistics, all of these are no-ops */
#define	STATS_INC_ACTIVE(x)	do { } while (0)
#define	STATS_DEC_ACTIVE(x)	do { } while (0)
#define	STATS_INC_ALLOCED(x)	do { } while (0)
#define	STATS_INC_GROWN(x)	do { } while (0)
#define	STATS_INC_REAPED(x)	do { } while (0)
#define	STATS_SET_HIGH(x)	do { } while (0)
#define	STATS_INC_ERR(x)	do { } while (0)

#endif

#if STATS && defined(CONFIG_SMP)	/* statistics on SMP */
#define STATS_INC_ALLOCHIT(x)	atomic_inc(&(x)->allochit)	/* atomically count an allocation hit */
#define STATS_INC_ALLOCMISS(x)	atomic_inc(&(x)->allocmiss)	/* an allocation miss */
#define STATS_INC_FREEHIT(x)	atomic_inc(&(x)->freehit)	/* a free hit */
#define STATS_INC_FREEMISS(x)	atomic_inc(&(x)->freemiss)	/* a free miss */
#else	/* otherwise no-ops */
#define STATS_INC_ALLOCHIT(x)	do { } while (0)
#define STATS_INC_ALLOCMISS(x)	do { } while (0)
#define STATS_INC_FREEHIT(x)	do { } while (0)
#define STATS_INC_FREEMISS(x)	do { } while (0)
#endif

#if DEBUG	/* debug builds only */
/* Magic nums for obj red zoning.
 * Placed in the first word before and the first word after an obj.
 */
The red-zone magic numbers mentioned earlier:
#define	RED_MAGIC1	0x5A2CF071UL	/* when obj is active */
#define	RED_MAGIC2	0x170FC2A5UL	/* when obj is inactive */

/* ...and for poisoning */
#define	POISON_BYTE	0x5a	/* byte value for poisoning: 01011010 */
#define	POISON_END	0xa5	/* end-byte of poisoning: 10100101 */
Extra background: why 0xA5 is used to fill uninitialized regions. 0xFF or 0x00 would
also work as fill bytes, but 0xA5 additionally exposes accidental shorts between
adjacent data lines. Suppose lines D0..D7 (numbered from the most significant bit)
have a short between D1 and D2:
  filling with 0x00 reads back as 00000000 (the fault is invisible);
  filling with 0xFF reads back as 11111111 (also invisible);
  filling with 0xA5 (10100101) reads back as, for example, 10000101,
so a hardware failure or a stray corruption is spotted immediately.
Reference: Software-Based Memory Testing, 1997, Michael Barr, http://www.netrino.com/Articles/MemoryTesting/paper.html
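A small demo of that argument. It simulates reading back a fill byte when two data lines are shorted, under the assumed (for illustration) wired-AND behaviour, numbering D0..D7 from the most significant bit as the table above does:

#include <stdio.h>

static unsigned char read_with_short(unsigned char v)
{
    /* D1 is bit 6, D2 is bit 5 when lines are numbered from the MSB */
    unsigned int d1 = (v >> 6) & 1, d2 = (v >> 5) & 1;
    unsigned int d  = d1 & d2;          /* the short drags both lines to one value */
    return (unsigned char)((v & ~0x60u) | (d << 6) | (d << 5));
}

int main(void)
{
    unsigned char fills[] = { 0x00, 0xFF, 0xA5 };
    for (int i = 0; i < 3; i++) {
        unsigned char w = fills[i], r = read_with_short(w);
        printf("wrote 0x%02X read 0x%02X -> %s\n", w, r,
               w == r ? "fault invisible" : "fault detected");
    }
    return 0;
}

With 0x00 and 0xFF the read-back is unchanged and the short stays hidden; with 0xA5 it comes back as 0x85 (10000101), exactly the discrepancy described above.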

#endif

/* maximum size of an obj (in 2^order pages) */
#define	MAX_OBJ_ORDER	5	/* 32 pages */

/*
 * Do not go above this order unless 0 objects fit into the slab.
 */
Unless no object would fit otherwise, a slab is limited to order 1 (2 pages),
raised to order 2 (4 pages) on machines with enough memory.
#define	BREAK_GFP_ORDER_HI	2
#define	BREAK_GFP_ORDER_LO	1
static int slab_break_gfp_order = BREAK_GFP_ORDER_LO;	/* start with the 2-page limit */

/*
 * Absolute limit for the gfp order
 */
The hard upper limit on slab order is 2^5 = 32 pages.

#define	MAX_GFP_ORDER	5	/* 32 pages */


/* Macros for storing/retrieving the cachep and or slab from the
 * global 'mem_map'. These are used to find the slab an obj belongs to.
 * With kfree(), these are used to find the cache which an obj belongs to.
 */
#define	SET_PAGE_CACHE(pg,x)  ((pg)->list.next = (struct list_head *)(x))
#define	GET_PAGE_CACHE(pg)    ((kmem_cache_t *)(pg)->list.next)
#define	SET_PAGE_SLAB(pg,x)   ((pg)->list.prev = (struct list_head *)(x))
#define	GET_PAGE_SLAB(pg)     ((slab_t *)(pg)->list.prev)
These macros work by borrowing the page structure's list pointers, as sketched below.
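A user-space mock of that back-pointer trick. struct page, the cache, and the slab here are simplified stand-ins; the point is how kfree() can later recover the owning cache from any object's page with no searching:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };
struct page { struct list_head list; };

struct kmem_cache { const char *name; };
struct slab { int dummy; };

#define SET_PAGE_CACHE(pg, x) ((pg)->list.next = (struct list_head *)(x))
#define GET_PAGE_CACHE(pg)    ((struct kmem_cache *)(pg)->list.next)
#define SET_PAGE_SLAB(pg, x)  ((pg)->list.prev = (struct list_head *)(x))
#define GET_PAGE_SLAB(pg)     ((struct slab *)(pg)->list.prev)

int main(void)
{
    struct kmem_cache inode_cache = { "inode_cache" };
    struct slab some_slab;
    struct page pg;

    SET_PAGE_CACHE(&pg, &inode_cache);   /* done once per page in kmem_cache_grow() */
    SET_PAGE_SLAB(&pg, &some_slab);

    /* a free path can now go object -> page -> cache/slab directly */
    printf("object's cache: %s\n", GET_PAGE_CACHE(&pg)->name);
    printf("same slab back: %s\n",
           GET_PAGE_SLAB(&pg) == &some_slab ? "yes" : "no");
    return 0;
}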
/* Size description struct for general caches. */
typedef struct cache_sizes {
	size_t		 cs_size;	/* size of this general cache */
	kmem_cache_t	*cs_cachep;	/* kmem_cache_s descriptor of the normal general cache of this size */
	kmem_cache_t	*cs_dmacachep;	/* the same, but for DMA-capable memory */
} cache_sizes_t;

static cache_sizes_t cache_sizes[] = {	/* the table of general cache sizes */
#if PAGE_SIZE == 4096	/* the 32-byte cache only exists with 4096-byte pages */
	{    32,	NULL, NULL},
#endif
	{    64,	NULL, NULL},
	{   128,	NULL, NULL},
	{   256,	NULL, NULL},
	{   512,	NULL, NULL},
	{  1024,	NULL, NULL},
	{  2048,	NULL, NULL},
	{  4096,	NULL, NULL},
	{  8192,	NULL, NULL},
	{ 16384,	NULL, NULL},
	{ 32768,	NULL, NULL},
	{ 65536,	NULL, NULL},
	{131072,	NULL, NULL},
	{     0,	NULL, NULL}
};	/* the NULLs are placeholders for cs_cachep and cs_dmacachep, filled in at init time */

/* internal cache of cache description objs */
static kmem_cache_t cache_cache = {
	slabs_full:	LIST_HEAD_INIT(cache_cache.slabs_full),
	slabs_partial:	LIST_HEAD_INIT(cache_cache.slabs_partial),
	slabs_free:	LIST_HEAD_INIT(cache_cache.slabs_free),	/* the three slab lists */
	objsize:	sizeof(kmem_cache_t),	/* object size */
	flags:		SLAB_NO_REAP,		/* never reap this cache automatically */
	spinlock:	SPIN_LOCK_UNLOCKED,	/* spinlock starts out unlocked */
	colour_off:	L1_CACHE_BYTES,		/* colour offset = one L1 cache line */
	name:		"kmem_cache",		/* its name */
};

/* Guard access to the cache-chain. */
static struct semaphore	cache_chain_sem;	/* the semaphore protecting the cache chain */

/* Place maintainer for reaping. */
static kmem_cache_t *clock_searchp = &cache_cache;	/* where the next reap scan resumes */

#define cache_chain (cache_cache.next)	/* the cache chain itself */

#ifdef CONFIG_SMP	/* on symmetric multiprocessors */
/*
 * chicken and egg problem: delay the per-cpu array allocation
 * until the general caches are up.
 */
static int g_cpucache_up;	/* are the general caches ready yet? */

static void enable_cpucache (kmem_cache_t *cachep);	/* enable one cache's per-cpu arrays */
static void enable_all_cpucaches (void);		/* enable them for every cache */
#endif

/* Cal the num objs, wastage, and bytes left over for a given slab size. */
static void kmem_cache_estimate (unsigned long gfporder, size_t size,
		 int flags, size_t *left_over, unsigned int *num)

{
int i;
size_t wastage = PAGE_SIZE<<gfporder;
size_t extra = 0;
size_t base = 0;

if (!(flags & CFLGS_OFF_SLAB)) {
base = sizeof(slab_t);
extra = sizeof(kmem_bufctl_t);
}
i = 0;
while (i*size + L1_CACHE_ALIGN(base+i*extra) <= wastage)
i++;
if (i > 0)
i--;

if (i > SLAB_LIMIT)
i = SLAB_LIMIT;

*num = i;
wastage -= i*size;
wastage -= L1_CACHE_ALIGN(base+i*extra);
*left_over = wastage;	/* the computed leftover (wasted) space */
}
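To see the arithmetic in action, here is a user-space rerun of the same loop for one concrete case. The constants are assumptions for the example: 4 kB pages, 32-byte L1 lines, and 32-bit type sizes giving sizeof(slab_t) == 28 and sizeof(kmem_bufctl_t) == 4 for an on-slab cache.

#include <stdio.h>

#define PAGE_SIZE 4096u
#define L1_CACHE_ALIGN(x) (((x) + 31u) & ~31u)

static void estimate(unsigned int gfporder, unsigned int size,
                     unsigned int *left_over, unsigned int *num)
{
    unsigned int wastage = PAGE_SIZE << gfporder;
    unsigned int base = 28, extra = 4;    /* assumed on-slab management costs */
    unsigned int i = 0;

    while (i * size + L1_CACHE_ALIGN(base + i * extra) <= wastage)
        i++;
    if (i > 0)
        i--;
    *num = i;
    wastage -= i * size;
    wastage -= L1_CACHE_ALIGN(base + i * extra);
    *left_over = wastage;
}

int main(void)
{
    unsigned int left, num;
    estimate(0, 256, &left, &num);   /* a one-page slab of 256-byte objects */
    printf("%u objects per slab, %u bytes left over\n", num, left); /* 15, 160 */
    /* with colour_off = 32 this leftover gives left/32 distinct colour
     * offsets, matching cachep->colour = left_over / colour_off below */
    printf("colour range: %u\n", left / 32);  /* 5 */
    return 0;
}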

/* Initialisation - setup the `cache' cache. */
This function initializes the cache of caches itself.
void __init kmem_cache_init(void)
{
size_t left_over;

init_MUTEX(&cache_chain_sem);
INIT_LIST_HEAD(&cache_chain);

kmem_cache_estimate(0, cache_cache.objsize, 0,
&left_over, &cache_cache.num);
if (!cache_cache.num)
BUG();

cache_cache.colour = left_over/cache_cache.colour_off;
cache_cache.colour_next = 0;
}


/* Initialisation - setup remaining internal and general caches.
 * Called after the gfp() functions have been enabled, and before smp_init().
 */
This fills in the cache_sizes table and sets up the remaining internal and general
caches. It runs after the gfp() (get free page) functions become usable, and before
smp_init() (symmetric multiprocessing initialization).

void __init kmem_cache_sizes_init(void)
{
	cache_sizes_t *sizes = cache_sizes;
	char name[20];	/* clearly questionable: CACHE_NAMELEN was defined earlier, yet it is not used here!
			   Presumably the result of uncoordinated development; a later change could easily be bitten by it. */

	/*
	 * Fragmentation resistance on low memory - only use bigger
	 * page orders on machines with more than 32MB of memory.
	 */
	if (num_physpages > (32 << 20) >> PAGE_SHIFT)
		slab_break_gfp_order = BREAK_GFP_ORDER_HI;
	do {
		/* For performance, all the general caches are L1 aligned.
		 * This should be particularly beneficial on SMP boxes, as it
		 * eliminates "false sharing".
		 * Note for systems short on memory removing the alignment will
		 * allow tighter packing of the smaller caches. */
		snprintf(name, sizeof(name), "size-%Zd",sizes->cs_size);	/* the name later shown in /proc/slabinfo */
		if (!(sizes->cs_cachep =
			kmem_cache_create(name, sizes->cs_size,
					0, SLAB_HWCACHE_ALIGN, NULL, NULL))) {
			BUG();	/* creating the cache failed: complain */
		}

		/* Inc off-slab bufctl limit until the ceiling is hit. */
		if (!(OFF_SLAB(sizes->cs_cachep))) {
			offslab_limit = sizes->cs_size-sizeof(slab_t);
			offslab_limit /= 2;
This line is actually wrong: it should read offslab_limit /= sizeof(kmem_bufctl_t).
Dividing by 2 means the ceiling can never be reached. The bug is fixed in the 2.6 kernels.
Reference: http://www.cs.helsinki.fi/linux/linux-kernel/2001-17/1193.html

		}
		snprintf(name, sizeof(name), "size-%Zd(DMA)",sizes->cs_size);	/* name of the DMA variant */
sizes->cs_dmacachep = kmem_cache_create(name, sizes->cs_size, 0,
SLAB_CACHE_DMA|SLAB_HWCACHE_ALIGN, NULL, NULL);
if (!sizes->cs_dmacachep)
BUG();
sizes++;
} while (sizes->cs_size);
}

int __init kmem_cpucache_init(void)
{
#ifdef CONFIG_SMP	/* on multiprocessors */
	g_cpucache_up = 1;	/* mark the general caches as ready */
	enable_all_cpucaches();	/* then enable every cache's per-cpu arrays - though I wonder: is it safe for this not to be atomic? And the flag is set before the enabling actually runs. */
#endif
return 0;
}

__initcall(kmem_cpucache_init);

/* Interface to system's page allocator. No need to hold the cache-lock.
 */
static inline void * kmem_getpages (kmem_cache_t *cachep, unsigned long flags)
{
void*addr;

	/*
	 * If we requested dmaable memory, we will get it. Even if we
	 * did not request dmaable memory, we might get it, but that
	 * would be relatively rare and ignorable.
	 */

flags |= cachep->gfpflags;
addr = (void*) __get_free_pages(flags, cachep->gfporder);
	/* Assume that now we have the pages no one else can legally
	 * messes with the 'struct page's.
	 * However vm_scan() might try to test the structure to see if
	 * it is a named-page or buffer-page. The members it tests are
	 * of no interest here.....
	 */

return addr;
}

/* Interface to system's page release. */
static inline void kmem_freepages (kmem_cache_t *cachep, void *addr)
{
unsigned long i = (1<<cachep->gfporder);
struct page *page = virt_to_page(addr);

	/* free_pages() does not clear the type bit - we do that.
	 * The pages have been unlinked from their cache-slab,
	 * but their 'struct page's might be accessed in
	 * vm_scan(). Shouldn't be a worry.
	 */
	while (i--) {
		PageClearSlab(page);	/* clear the slab bit */
		page++;
	}
	free_pages((unsigned long)addr, cachep->gfporder);	/* give the pages back */
}

#if DEBUG	/* debug builds only */
static inline void kmem_poison_obj (kmem_cache_t *cachep, void *addr)
{
	int size = cachep->objsize;
	if (cachep->flags & SLAB_RED_ZONE) {
		addr += BYTES_PER_WORD;
		size -= 2*BYTES_PER_WORD;
	}	/* skip the red-zone words */
	memset(addr, POISON_BYTE, size);	/* fill the uninitialized area with POISON_BYTE */
	*(unsigned char *)(addr+size-1) = POISON_END;	/* and terminate it with POISON_END */
}

static inline int kmem_check_poison_obj (kmem_cache_t *cachep, void *addr)	/* verify an object's poison pattern */
{
	int size = cachep->objsize;
	void *end;
	if (cachep->flags & SLAB_RED_ZONE) {
		addr += BYTES_PER_WORD;
		size -= 2*BYTES_PER_WORD;
	}	/* skip the red zone */
	end = memchr(addr, POISON_END, size);
	if (end != (addr+size-1))
		return 1;	/* pattern damaged: error */
	return 0;	/* pattern intact */
}
#endif

/* Destroy all the objs in a slab, and release the mem back to the system.
 * Before calling the slab must have been unlinked from the cache.
 * The cache-lock is not held/needed.
 */

static void kmem_slab_destroy (kmem_cache_t *cachep, slab_t *slabp)
{
	if (cachep->dtor
#if DEBUG
		|| cachep->flags & (SLAB_POISON | SLAB_RED_ZONE)	/* in debug builds, poison and red zones also need per-object handling */
#endif
	) {
		int i;
		for (i = 0; i < cachep->num; i++) {
			void* objp = slabp->s_mem+cachep->objsize*i;
#if DEBUG
			if (cachep->flags & SLAB_RED_ZONE) {
				if (*((unsigned long*)(objp)) != RED_MAGIC1)
					BUG();
				if (*((unsigned long*)(objp + cachep->objsize
						-BYTES_PER_WORD)) != RED_MAGIC1)
					BUG();	/* a damaged red-zone boundary is a bug */
				objp += BYTES_PER_WORD;
			}
#endif
			if (cachep->dtor)
				(cachep->dtor)(objp, cachep, 0);	/* run the destructor */
#if DEBUG
			if (cachep->flags & SLAB_RED_ZONE) {
				objp -= BYTES_PER_WORD;	/* step back over the red-zone word */
			}
			if ((cachep->flags & SLAB_POISON) &&
				kmem_check_poison_obj(cachep, objp))	/* verify the poison pattern; complain if damaged */
				BUG();
#endif
		}
	}

	kmem_freepages(cachep, slabp->s_mem-slabp->colouroff);	/* return the pages to the system */
	if (OFF_SLAB(cachep))	/* in off-slab mode, also free the slab descriptor */
		kmem_cache_free(cachep->slabp_cache, slabp);
}

/**
* kmem_cache_create - Create a cache.
* @name: A string which is used in /proc/slabinfo to identify this cache.
* @size: The size of objects to be created in this cache.
* @offset: The offset to use within the page.
* @flags: SLAB flags
* @ctor: A constructor for the objects.
* @dtor: A destructor for the objects.
*
* Returns a ptr to the cache on success, NULL on failure.
* Cannot be called within a int, but can be interrupted.
* The @ctor is run when new pages are allocated by the cache
* and the @dtor is run before the pages are handed back.
* The flags are
*
* %SLAB_POISON - Poison the slab with a known test pattern (a5a5a5a5)
* to catch references to uninitialised memory.
*
* %SLAB_RED_ZONE - Insert `Red' zones around the allocated memory to check
* for buffer overruns.
*
* %SLAB_NO_REAP - Don't automatically reap this cache when we're under
* memory pressure.
*
* %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
* cacheline. This can be beneficial if you're counting cycles as closely
* as davem.
*/

kmem_cache_t *
kmem_cache_create (const char *name, size_t size, size_t offset,
unsigned long flags, void (*ctor)(void*, kmem_cache_t *, unsigned long),
void (*dtor)(void*, kmem_cache_t *, unsigned long))

{
const char *func_nm = KERN_ERR "kmem_create: ";
size_t left_over, align, slab_size;
kmem_cache_t *cachep = NULL;

/*
* Sanity checks... these are all serious usage bugs.
*/
if ((!name) ||
((strlen(name) >= CACHE_NAMELEN - 1)) ||
in_interrupt() ||
(size < BYTES_PER_WORD) ||
(size > (1<<MAX_OBJ_ORDER)*PAGE_SIZE) ||
(dtor && !ctor) ||
(offset < 0 || offset > size))
BUG();

#if DEBUG	/* debug-build adjustments */
	if ((flags & SLAB_DEBUG_INITIAL) && !ctor) {
		/* No constructor, but inital state check requested */
		printk("%sNo con, but init state check requested - %s\n", func_nm, name);
		flags &= ~SLAB_DEBUG_INITIAL;
	}

	if ((flags & SLAB_POISON) && ctor) {	/* poisoning requested together with a constructor: the two are incompatible, so poisoning is dropped */
		/* request for poisoning, but we can't do that with a constructor */
		printk("%sPoisoning requested, but con given - %s\n", func_nm, name);
		flags &= ~SLAB_POISON;
	}
#if FORCED_DEBUG
	if ((size < (PAGE_SIZE>>3)) && !(flags & SLAB_MUST_HWCACHE_ALIGN))
		/*
		 * do not red zone large object, causes severe
		 * fragmentation.
		 */
		flags |= SLAB_RED_ZONE;	/* only small objects get forced red zones */
	if (!ctor)
		flags |= SLAB_POISON;	/* and only ctor-less caches get forced poisoning */
#endif
#endif

	/*
	 * Always checks flags, a caller might be expecting debug
	 * support which isn't available.
	 */
	BUG_ON(flags & ~CREATE_MASK);

	/* Get cache's description obj. */
	cachep = (kmem_cache_t *) kmem_cache_alloc(&cache_cache, SLAB_KERNEL);	/* the descriptor is itself an object from cache_cache */
	if (!cachep)
		goto opps;
	memset(cachep, 0, sizeof(kmem_cache_t));	/* zero the fresh descriptor */

	/* Check that size is in terms of words. This is needed to avoid
	 * unaligned accesses for some archs when redzoning is used, and makes
	 * sure any on-slab bufctl's are also correctly aligned.
	 */
	if (size & (BYTES_PER_WORD-1)) {
		size += (BYTES_PER_WORD-1);
		size &= ~(BYTES_PER_WORD-1);
		printk("%sForcing size word alignment - %s\n", func_nm, name);
	}

#if DEBUG
	if (flags & SLAB_RED_ZONE) {
		/*
		 * There is no point trying to honour cache alignment
		 * when redzoning.
		 */
		flags &= ~SLAB_HWCACHE_ALIGN;
		size += 2*BYTES_PER_WORD;	/* words for redzone */
	}
#endif
	align = BYTES_PER_WORD;
	if (flags & SLAB_HWCACHE_ALIGN)	/* hardware alignment uses the CPU's L1 line size; otherwise align to the word size */
		align = L1_CACHE_BYTES;

	/* Determine if the slab management is 'on' or 'off' slab. */
	if (size >= (PAGE_SIZE>>3))	/* on-slab or off-slab? */
		/*
		 * Size is large, assume best to place the slab management obj
		 * off-slab (should allow better packing of objs).
		 */
		flags |= CFLGS_OFF_SLAB;	/* large (512 bytes or more on 4 kB pages): go off-slab */

	if (flags & SLAB_HWCACHE_ALIGN) {
		/* Need to adjust size so that objs are cache aligned. */
		/* Small obj size, can get at least two per cache line. */
		/* FIXME: only power of 2 supported, was better */
		while (size < align/2)	/* shrink the alignment so small objects still pack tightly */
			align /= 2;
		size = (size+align-1)&(~(align-1));
	}

	/* Cal size (in pages) of slabs, and the num of objs per slab.
	 * This could be made much more intelligent. For now, try to avoid
	 * using high page-orders for slabs. When the gfp() funcs are more
	 * friendly towards high-order requests, this should be changed.
	 */
	do {
		unsigned int break_flag = 0;
cal_wastage:
		kmem_cache_estimate(cachep->gfporder, size, flags,
						&left_over, &cachep->num);
		/* left_over now holds the leftover space, cachep->num the objects per slab */

		if (break_flag)
			break;
		if (cachep->gfporder >= MAX_GFP_ORDER)	/* already at the 32-page ceiling: stop */
			break;
		if (!cachep->num)	/* not even one object fits: try a bigger slab */
			goto next;
		if (flags & CFLGS_OFF_SLAB && cachep->num > offslab_limit) {	/* past the off-slab limit: recompute one order lower, then stop */
			/* Oops, this num of objs will cause problems. */
			cachep->gfporder--;
			break_flag++;
			goto cal_wastage;
		}

		/*
		 * Large num of objs is good, but v. large slabs are currently
		 * bad for the gfp()s.
		 */
		if (cachep->gfporder >= slab_break_gfp_order)
			break;

		if ((left_over*8) <= (PAGE_SIZE<<cachep->gfporder))
		/* Anti-waste check: if a slab were only slightly larger than its objects,
		 * almost an object's worth of space could be wasted; growing the slab lets
		 * it hold more objects. Once the waste is at most 1/8 of the slab, stop. */
			break;	/* Acceptable internal fragmentation. */
next:
		cachep->gfporder++;
	} while (1);

	if (!cachep->num) {	/* still nothing fits: report, free the descriptor, and bail out */
		printk("kmem_cache_create: couldn't create cache %s.\n", name);
		kmem_cache_free(&cache_cache, cachep);
		cachep = NULL;
		goto opps;
	}
	slab_size = L1_CACHE_ALIGN(cachep->num*sizeof(kmem_bufctl_t)+sizeof(slab_t));	/* total size of the slab's management data, L1-aligned */

	/*
	 * If the slab has been placed off-slab, and we have enough space then
	 * move it on-slab. This is at the expense of any extra colouring.
	 */
	if (flags & CFLGS_OFF_SLAB && left_over >= slab_size) {	/* prefer on-slab whenever it fits */
		flags &= ~CFLGS_OFF_SLAB;
		left_over -= slab_size;
	}

	/* Offset must be a multiple of the alignment. */
	offset += (align-1);
	offset &= ~(align-1);	/* round the colour offset up to the alignment */
	if (!offset)
		offset = L1_CACHE_BYTES;	/* no offset given: colour by whole L1 lines */
	cachep->colour_off = offset;	/* bytes per colour step */
	cachep->colour = left_over/offset;	/* number of distinct colours */

	/* init remaining fields */
	if (!cachep->gfporder && !(flags & CFLGS_OFF_SLAB))
		flags |= CFLGS_OPTIMIZE;

	cachep->flags = flags;	/* record the flags */
	cachep->gfpflags = 0;
	if (flags & SLAB_CACHE_DMA)
		cachep->gfpflags |= GFP_DMA;
	spin_lock_init(&cachep->spinlock);	/* initialize the lock */
	cachep->objsize = size;	/* and the object size */
	INIT_LIST_HEAD(&cachep->slabs_full);
	INIT_LIST_HEAD(&cachep->slabs_partial);
	INIT_LIST_HEAD(&cachep->slabs_free);	/* the three slab lists start out empty */

	if (flags & CFLGS_OFF_SLAB)
		cachep->slabp_cache = kmem_find_general_cachep(slab_size,0);	/* the general cache the slab descriptors will come from */
	cachep->ctor = ctor;	/* constructor */
	cachep->dtor = dtor;	/* destructor */
	/* Copy name over so we don't have problems with unloaded modules */
	strcpy(cachep->name, name);	/* keep our own copy of the name in case the caller's module is unloaded */

#ifdef CONFIG_SMP	/* on SMP: if the general caches are up, enable this cache's per-cpu arrays */
	if (g_cpucache_up)
		enable_cpucache(cachep);
#endif
	/* Need the semaphore to access the chain. */
	down(&cache_chain_sem);	/* take the semaphore so we may touch the cache chain */
	{
		struct list_head *p;

		list_for_each(p, &cache_chain) {
			kmem_cache_t *pc = list_entry(p, kmem_cache_t, next);

			/* The name field is constant - no lock needed. */
			if (!strcmp(pc->name, name))	/* a duplicate name is a bug */
				BUG();
		}
	}

	/* There is no reason to lock our new cache before we
	 * link it in - no one knows about it yet...
	 */
	list_add(&cachep->next, &cache_chain);
	up(&cache_chain_sem);	/* release the semaphore */
opps:
	return cachep;	/* return the new cache descriptor */
}
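For context, this is roughly how a 2.4-era module would use the API just analyzed. A sketch only, with invented names (my_obj, my_cache, my_ctor) and error handling trimmed:

#include <linux/slab.h>
#include <linux/errno.h>

struct my_obj { int state; };

static kmem_cache_t *my_cache;

/* the constructor runs for each object when a new slab is populated */
static void my_ctor(void *p, kmem_cache_t *cachep, unsigned long flags)
{
	((struct my_obj *)p)->state = 0;
}

static int my_init(void)
{
	my_cache = kmem_cache_create("my_obj", sizeof(struct my_obj),
				     0, SLAB_HWCACHE_ALIGN, my_ctor, NULL);
	if (!my_cache)
		return -ENOMEM;
	return 0;
}

static void my_use(void)
{
	struct my_obj *o = kmem_cache_alloc(my_cache, SLAB_KERNEL);
	if (o)
		kmem_cache_free(my_cache, o);	/* back to the cache, still constructed */
}

static void my_exit(void)
{
	kmem_cache_destroy(my_cache);	/* the cache must be empty by now */
}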


#if DEBUG	/* debug builds only */
/*
 * This check if the kmem_cache_t pointer is chained in the cache_cache
 * list. -arca
 */
static int is_chained_kmem_cache(kmem_cache_t * cachep)
{
	struct list_head *p;
	int ret = 0;

	/* Find the cache in the chain of caches. */
	down(&cache_chain_sem);
	list_for_each(p, &cache_chain) {
		if (p == &cachep->next) {
			ret = 1;
			break;
		}
	}
	up(&cache_chain_sem);

	return ret;
}
#else	/* in non-debug builds the check is compiled away (always "yes") */
#define is_chained_kmem_cache(x) 1
#endif

#ifdef CONFIG_SMP	/* SMP support */
/*
 * Waits for all CPUs to execute func().
 */
static void smp_call_function_all_cpus(void (*func) (void *arg), void *arg)
{
	local_irq_disable();	/* run it locally with interrupts off */
	func(arg);
	local_irq_enable();

	if (smp_call_function(func, arg, 1, 1))	/* then on every other CPU; complain on failure */
		BUG();
}
typedef struct ccupdate_struct_s
{
kmem_cache_t *cachep;
cpucache_t *new[NR_CPUS];
} ccupdate_struct_t;

static void do_ccupdate_local(void *info)
{
	ccupdate_struct_t *new = (ccupdate_struct_t *)info;
	cpucache_t *old = cc_data(new->cachep);

	cc_data(new->cachep) = new->new[smp_processor_id()];
	new->new[smp_processor_id()] = old;
}	/* swap this CPU's cpucache with the new one */

static void free_block (kmem_cache_t* cachep, void** objpp, int len);

static void drain_cpu_caches(kmem_cache_t *cachep)
{	/* flush every CPU's cached objects back to the slabs */
ccupdate_struct_t new;
int i;

memset(&new.new,0,sizeof(new.new));

new.cachep = cachep;

down(&cache_chain_sem);
smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);

for (i = 0; i < smp_num_cpus; i++) {
cpucache_t* ccold = new.new[cpu_logical_map(i)];
if (!ccold || (ccold->avail == 0))
continue;
local_irq_disable();
free_block(cachep, cc_entry(ccold), ccold->avail);
local_irq_enable();
ccold->avail = 0;
}
smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
up(&cache_chain_sem);
}

#else	/* on uniprocessors this is a no-op */
#define drain_cpu_caches(cachep)	do { } while (0)
#endif

/*
 * Called with the &cachep->spinlock held, returns number of slabs released
 */
static int __kmem_cache_shrink_locked(kmem_cache_t *cachep)
{
slab_t *slabp;
int ret = 0;

	/* If the cache is growing, stop shrinking. */
	while (!cachep->growing) {
		struct list_head *p;

		p = cachep->slabs_free.prev;
		if (p == &cachep->slabs_free)
			break;

		slabp = list_entry(cachep->slabs_free.prev, slab_t, list);
#if DEBUG
		if (slabp->inuse)
			BUG();
#endif
		list_del(&slabp->list);	/* walk the free list tail-first, unlinking slabs */

		spin_unlock_irq(&cachep->spinlock);	/* drop the lock and re-enable interrupts */
		kmem_slab_destroy(cachep, slabp);	/* destroy the slab */
		ret++;
		spin_lock_irq(&cachep->spinlock);	/* retake the lock */
}
return ret;
}

static int __kmem_cache_shrink(kmem_cache_t *cachep)	/* the shrink operation */
{
	int ret;

	drain_cpu_caches(cachep);

	spin_lock_irq(&cachep->spinlock);	/* lock */
	__kmem_cache_shrink_locked(cachep);	/* do the shrinking */
	ret = !list_empty(&cachep->slabs_full) ||
		!list_empty(&cachep->slabs_partial);	/* 1 if any full or partial slabs remain (cache not empty), 0 otherwise */
	spin_unlock_irq(&cachep->spinlock);	/* unlock */
	return ret;
}

/**
 * kmem_cache_shrink - Shrink a cache.
 * @cachep: The cache to shrink.
 *
 * Releases as many slabs as possible for a cache.
 * Returns number of pages released.
 */
int kmem_cache_shrink(kmem_cache_t *cachep)
{
	int ret;

	if (!cachep || in_interrupt() || !is_chained_kmem_cache(cachep))	/* a NULL cache, interrupt context, or a cache not on the chain: bug */
		BUG();

	spin_lock_irq(&cachep->spinlock);	/* lock, disabling interrupts */
	ret = __kmem_cache_shrink_locked(cachep);	/* shrink */
	spin_unlock_irq(&cachep->spinlock);	/* unlock */

	return ret << cachep->gfporder;	/* slabs freed, converted to pages */
}

/**
 * kmem_cache_destroy - delete a cache
 * @cachep: the cache to destroy
 *
 * Remove a kmem_cache_t object from the slab cache.
 * Returns 0 on success.
 *
 * It is expected this function will be called by a module when it is
 * unloaded. This will remove the cache completely, and avoid a duplicate
 * cache being allocated each time a module is loaded and unloaded, if the
 * module doesn't have persistent in-kernel storage across loads and unloads.
 *
 * The cache must be empty before calling this function.
 *
 * The caller must guarantee that noone will allocate memory from the cache
 * during the kmem_cache_destroy().
 */
int kmem_cache_destroy (kmem_cache_t * cachep)
{
	if (!cachep || in_interrupt() || cachep->growing)	/* a NULL cache, interrupt context, or a still-growing cache: bug */
		BUG();

	/* Find the cache in the chain of caches. */
	down(&cache_chain_sem);	/* take the chain semaphore */
	/* the chain is never empty, cache_cache is never destroyed */
	if (clock_searchp == cachep)	/* if the reap pointer is on this cache, advance it past us first */
		clock_searchp = list_entry(cachep->next.next,
						kmem_cache_t, next);
	list_del(&cachep->next);
	up(&cache_chain_sem);

	if (__kmem_cache_shrink(cachep)) {	/* objects remain, so the cache cannot be destroyed: complain and relink it */
		printk(KERN_ERR "kmem_cache_destroy: Can't free all objects %p\n",
			cachep);
		down(&cache_chain_sem);
		list_add(&cachep->next,&cache_chain);
		up(&cache_chain_sem);
		return 1;
	}
#ifdef CONFIG_SMP	/* on SMP, free each CPU's array first */
	{
		int i;
		for (i = 0; i < NR_CPUS; i++)
			kfree(cachep->cpudata[i]);
	}
#endif
	kmem_cache_free(&cache_cache, cachep);	/* finally release the descriptor itself */

	return 0;
}

/* Get the memory for a slab management obj. */
static inline slab_t * kmem_cache_slabmgmt (kmem_cache_t *cachep,
			void *objp, int colour_off, int local_flags)
{
	slab_t *slabp;

	if (OFF_SLAB(cachep)) {
		/* Slab management obj is off-slab. */
		slabp = kmem_cache_alloc(cachep->slabp_cache, local_flags);	/* the descriptor comes from a general cache */
		if (!slabp)	/* allocation failed */
			return NULL;
	} else {	/* on-slab mode */
		/* FIXME: change to
			slabp = objp
		 * if you enable OPTIMIZE
		 */
		slabp = objp+colour_off;	/* the descriptor sits right after the colour padding */
		colour_off += L1_CACHE_ALIGN(cachep->num *
				sizeof(kmem_bufctl_t) + sizeof(slab_t));	/* objects start after the management data */
	}
	slabp->inuse = 0;	/* no active objects yet */
	slabp->colouroff = colour_off;	/* remember the offset */
	slabp->s_mem = objp+colour_off;	/* address of the slab's first object */

	return slabp;
}

static inline void kmem_cache_init_objs (kmem_cache_t * cachep,
			slab_t * slabp, unsigned long ctor_flags)
{
	int i;

	for (i = 0; i < cachep->num; i++) {	/* construct every object in the slab */
		void* objp = slabp->s_mem+cachep->objsize*i;
#if DEBUG
		if (cachep->flags & SLAB_RED_ZONE) {
			*((unsigned long*)(objp)) = RED_MAGIC1;
			*((unsigned long*)(objp + cachep->objsize -
					BYTES_PER_WORD)) = RED_MAGIC1;
			objp += BYTES_PER_WORD;
		}
#endif

		/*
		 * Constructors are not allowed to allocate memory from
		 * the same cache which they are a constructor for.
		 * Otherwise, deadlock. They must also be threaded.
		 */
		if (cachep->ctor)
			cachep->ctor(objp, cachep, ctor_flags);
#if DEBUG
		if (cachep->flags & SLAB_RED_ZONE)
			objp -= BYTES_PER_WORD;
		if (cachep->flags & SLAB_POISON)
			/* need to poison the objs */
			kmem_poison_obj(cachep, objp);
		if (cachep->flags & SLAB_RED_ZONE) {
			if (*((unsigned long*)(objp)) != RED_MAGIC1)
				BUG();
			if (*((unsigned long*)(objp + cachep->objsize -
					BYTES_PER_WORD)) != RED_MAGIC1)
				BUG();
		}
#endif
		slab_bufctl(slabp)[i] = i+1;	/* chain each slot to the next */
	}
	slab_bufctl(slabp)[i-1] = BUFCTL_END;	/* the last slot ends the chain (0xffffFFFF) */
	slabp->free = 0;	/* the first free object is slot 0 */
}

/*
 * Grow (by 1) the number of slabs within a cache. This is called by
 * kmem_cache_alloc() when there are no active objs left in a cache.
 */
static int kmem_cache_grow (kmem_cache_t * cachep, int flags)
{
slab_t*slabp;
struct page*page;
void*objp;
size_t offset;
unsigned int i, local_flags;
unsigned long ctor_flags;
unsigned long save_flags;

	/* Be lazy and only check for valid flags here,
	 * keeping it out of the critical path in kmem_cache_alloc().
	 */
	if (flags & ~(SLAB_DMA|SLAB_LEVEL_MASK|SLAB_NO_GROW))
		BUG();
	if (flags & SLAB_NO_GROW)	/* growing forbidden: just return */
		return 0;

	/*
	 * The test for missing atomic flag is performed here, rather than
	 * the more obvious place, simply to reduce the critical path length
	 * in kmem_cache_alloc(). If a caller is seriously mis-behaving they
	 * will eventually be caught here (where it matters).
	 */
	/* i.e. a caller in interrupt context that did not pass SLAB_ATOMIC is caught here */
	if (in_interrupt() && (flags & SLAB_LEVEL_MASK) != SLAB_ATOMIC)
		BUG();

	ctor_flags = SLAB_CTOR_CONSTRUCTOR;
	local_flags = (flags & SLAB_LEVEL_MASK);
	if (local_flags == SLAB_ATOMIC)
		/*
		 * Not allowed to sleep. Need to tell a constructor about
		 * this - it might need to know...
		 */
		ctor_flags |= SLAB_CTOR_ATOMIC;

	/* About to mess with non-constant members - lock. */
	spin_lock_irqsave(&cachep->spinlock, save_flags);	/* save interrupt state and lock */

	/* Get colour for the slab, and cal the next value. */
	offset = cachep->colour_next;	/* this slab's colour index */
	cachep->colour_next++;
	if (cachep->colour_next >= cachep->colour)	/* wrapped around the colour range */
		cachep->colour_next = 0;	/* start over */
	offset *= cachep->colour_off;	/* offset = colour index x bytes per colour step */
	cachep->dflags |= DFLGS_GROWN;	/* mark as recently grown */

	cachep->growing++;	/* and as currently growing */
	spin_unlock_irqrestore(&cachep->spinlock, save_flags);	/* unlock, restoring interrupt state */

	/* A series of memory allocations for a new slab.
	 * Neither the cache-chain semaphore, or cache-lock, are
	 * held, but the incrementing c_growing prevents this
	 * cache from being reaped or shrunk.
	 * Note: The cache could be selected in for reaping in
	 * kmem_cache_reap(), but when the final test is made the
	 * growing value will be seen.
	 */


	/* Get mem for the objs. */
	if (!(objp = kmem_getpages(cachep, flags)))	/* get pages for the objects */
		goto failed;

	/* Get slab management. */
	if (!(slabp = kmem_cache_slabmgmt(cachep, objp, offset, local_flags)))
		goto opps1;

	/* Nasty!!!!!! I hope this is OK. */
	i = 1 << cachep->gfporder;	/* i = pages in the slab */
	page = virt_to_page(objp);	/* page descriptor of the object's first page */
	do {
		SET_PAGE_CACHE(page, cachep);	/* i.e. page->list.next = (struct list_head *)cachep */
		SET_PAGE_SLAB(page, slabp);	/* i.e. page->list.prev = (struct list_head *)slabp */
		PageSetSlab(page);	/* mark the page as belonging to a slab */
		page++;	/* next page */
	} while (--i);	/* until every page of the slab is tagged */

	kmem_cache_init_objs(cachep, slabp, ctor_flags);	/* construct the objects */

	spin_lock_irqsave(&cachep->spinlock, save_flags);	/* lock again */
	cachep->growing--;	/* growing is over */

	/* Make slab active. */
	list_add_tail(&slabp->list, &cachep->slabs_free);	/* the new slab goes on the tail of the cache's free list */
	STATS_INC_GROWN(cachep);	/* statistics */
	cachep->failures = 0;	/* reset the failure count */

	spin_unlock_irqrestore(&cachep->spinlock, save_flags);	/* unlock, restore interrupts */
	return 1;	/* success */
opps1:
	kmem_freepages(cachep, objp);	/* give the pages back */
failed:
	spin_lock_irqsave(&cachep->spinlock, save_flags);
	cachep->growing--;	/* no longer growing */
	spin_unlock_irqrestore(&cachep->spinlock, save_flags);
	return 0;
}
}

/*
 * Perform extra freeing checks:
 * - detect double free
 * - detect bad pointers.
 * Called with the cache-lock held.
 */

#if DEBUG	/* debug builds only */
static int kmem_extra_free_checks (kmem_cache_t * cachep,
			slab_t *slabp, void * objp)
{
	int i;
	unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize;

	if (objnr >= cachep->num)
		BUG();
	if (objp != slabp->s_mem + objnr*cachep->objsize)
		BUG();

	/* Check slab's freelist to see if this obj is there. */
	for (i = slabp->free; i != BUFCTL_END; i = slab_bufctl(slabp)[i]) {
		if (i == objnr)	/* already on the free list: a double free */
			BUG();
	}
	return 0;
}
#endif

static inline void kmem_cache_alloc_head(kmem_cache_t *cachep, int flags)
{
	if (flags & SLAB_DMA) {
		if (!(cachep->gfpflags & GFP_DMA))	/* DMA memory requested from a non-DMA cache: bug */
			BUG();
	} else {
		if (cachep->gfpflags & GFP_DMA)	/* non-DMA memory requested from a DMA cache: bug */
			BUG();
	}
}

static inline void * kmem_cache_alloc_one_tail (kmem_cache_t *cachep,
							 slab_t *slabp)
{
	void *objp;

	STATS_INC_ALLOCED(cachep);
	STATS_INC_ACTIVE(cachep);
	STATS_SET_HIGH(cachep);	/* statistics (no-ops unless STATS is 1) */

	/* get obj pointer */
	slabp->inuse++;	/* one more active object in this slab */
	objp = slabp->s_mem + slabp->free*cachep->objsize;	/* address of the first free object = base + index * object size */

	slabp->free=slab_bufctl(slabp)[slabp->free];	/* advance to the index of the next free object */

	if (unlikely(slabp->free == BUFCTL_END)) {	/* the slab is now full: move it to the full list */
		list_del(&slabp->list);	/* off its current list */
		list_add(&slabp->list, &cachep->slabs_full);	/* onto slabs_full */
	}
#if DEBUG	/* debug checks */
	if (cachep->flags & SLAB_POISON)	/* poisoned cache */
		if (kmem_check_poison_obj(cachep, objp))	/* a damaged poison pattern is a bug */
			BUG();
	if (cachep->flags & SLAB_RED_ZONE) {	/* red-zoned cache */
		/* Set alloc red-zone, and check old one. */
		if (xchg((unsigned long *)objp, RED_MAGIC2) !=	/* a bad leading boundary is a bug */
							 RED_MAGIC1)
			BUG();
		if (xchg((unsigned long *)(objp+cachep->objsize -
			  BYTES_PER_WORD), RED_MAGIC2) != RED_MAGIC1)	/* as is a bad trailing boundary */
			BUG();
		objp += BYTES_PER_WORD;
	}
#endif
	return objp;	/* the freshly allocated object */
}

/*
 * Returns a ptr to an obj in the given cache.
 * caller must guarantee synchronization
 * #define for the goto optimization 8-)
 */
The macro form lets the goto below jump to the alloc_new_slab label in the caller
(see the sketch after this macro).
#define kmem_cache_alloc_one(cachep)				\
({								\
	struct list_head * slabs_partial, * entry;		\
	slab_t *slabp;						\
								\
	slabs_partial = &(cachep)->slabs_partial;		\
	entry = slabs_partial->next;				\
	if (unlikely(entry == slabs_partial)) {			\
		struct list_head * slabs_free;			\
		slabs_free = &(cachep)->slabs_free;		\
		entry = slabs_free->next;			\
		if (unlikely(entry == slabs_free))		\
			goto alloc_new_slab;			\
		list_del(entry);				\
		list_add(entry, slabs_partial);			\
	}							\
								\
	slabp = list_entry(entry, slab_t, list);		\
	kmem_cache_alloc_one_tail(cachep, slabp);		\
})
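Why this must be a macro rather than a function: the goto inside the GCC statement expression ({ ... }) jumps to a label in whichever function expands the macro. A user-space miniature of the same (GCC-specific) trick, with invented names throughout:

#include <stdio.h>

struct stack { int slots[4]; int top; };

/* expands inline, so the goto targets a label in the *caller* */
#define take_or_bail(stack)			\
({						\
	if ((stack)->top == 0)			\
		goto refill;			\
	(stack)->slots[--(stack)->top];		\
})

int main(void)
{
    struct stack s = { {10, 20, 30}, 3 };
    for (int i = 0; i < 5; i++) {
        int v = take_or_bail(&s);
        printf("got %d\n", v);
        continue;
refill:
        printf("empty - this is where the cache would grow\n");
        s.top = 1;
        s.slots[0] = 99;
    }
    return 0;
}

Jumping out of a statement expression is permitted by GCC, and __kmem_cache_alloc() below relies on exactly that to fall through to its slab-growing path.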


#ifdef CONFIG_SMP	/* SMP support: allocate a whole batch of objects into the per-cpu array */
void* kmem_cache_alloc_batch(kmem_cache_t* cachep, cpucache_t* cc, int flags)
{
int batchcount = cachep->batchcount;

spin_lock(&cachep->spinlock);
while (batchcount--) {
struct list_head * slabs_partial, * entry;
slab_t *slabp;
/* Get slab alloc is to come from. */
slabs_partial = &(cachep)->slabs_partial;
entry = slabs_partial->next;
if (unlikely(entry == slabs_partial)) {
struct list_head * slabs_free;
slabs_free = &(cachep)->slabs_free;
entry = slabs_free->next;
if (unlikely(entry == slabs_free))
break;
list_del(entry);
list_add(entry, slabs_partial);
}

slabp = list_entry(entry, slab_t, list);
cc_entry(cc)[cc->avail++] =
kmem_cache_alloc_one_tail(cachep, slabp);
}
spin_unlock(&cachep->spinlock);

if (cc->avail)
return cc_entry(cc)[--cc->avail];
return NULL;
}
#endif

static inline void * __kmem_cache_alloc (kmem_cache_t *cachep, int flags)
{
	unsigned long save_flags;
	void* objp;

	kmem_cache_alloc_head(cachep, flags);	/* sanity-check the request */
try_again:
	local_irq_save(save_flags);	/* save and disable interrupts */
#ifdef CONFIG_SMP	/* the SMP path */
	{
		cpucache_t *cc = cc_data(cachep);	/* this CPU's cpucache */

		if (cc) {	/* the per-cpu array exists */
			if (cc->avail) {	/* and holds a free object */
				STATS_INC_ALLOCHIT(cachep);	/* count the hit */
				objp = cc_entry(cc)[--cc->avail];	/* pop it */
			} else {	/* the array is empty */
				STATS_INC_ALLOCMISS(cachep);	/* count the miss */
				objp = kmem_cache_alloc_batch(cachep,cc,flags);	/* refill it in a batch */
				if (!objp)
					goto alloc_new_slab_nolock;	/* nothing to refill from: grow, without holding the lock */
			}
		} else {	/* no per-cpu array yet */
			spin_lock(&cachep->spinlock);	/* lock */
			objp = kmem_cache_alloc_one(cachep);	/* allocate straight from the slabs */
			spin_unlock(&cachep->spinlock);	/* unlock */
		}
	}
#else
	objp = kmem_cache_alloc_one(cachep);	/* uniprocessor: allocate one object directly */
#endif
	local_irq_restore(save_flags);	/* restore interrupts */
	return objp;	/* return the allocated object */
alloc_new_slab:
#ifdef CONFIG_SMP	/* on SMP we still hold the spinlock when arriving here */
	spin_unlock(&cachep->spinlock);
alloc_new_slab_nolock:
#endif
	local_irq_restore(save_flags);	/* restore interrupts */
	if (kmem_cache_grow(cachep, flags))	/* managed to add a slab to the cache */
		/* Someone may have stolen our objs. Doesn't matter, we'll
		 * just come back here again.
		 */
		goto try_again;	/* someone may grab the new objects first; then we simply land here again */
	return NULL;	/* could not allocate and could not grow: give up */
}

/*
 * Release an obj back to its cache. If the obj has a constructed
 * state, it should be in this state _before_ it is released.
 * - caller is responsible for the synchronization
 */


#if DEBUG
# define CHECK_NR(pg)						\
	do {							\
		if (!VALID_PAGE(pg)) {				\
			printk(KERN_ERR "kfree: out of range ptr %lxh.\n", \
				(unsigned long)objp);		\
			BUG();					\
		} 						\
	} while (0)
# define CHECK_PAGE(page)					\
	do {							\
		CHECK_NR(page);					\
		if (!PageSlab(page)) {				\
			printk(KERN_ERR "kfree: bad ptr %lxh.\n", \
				(unsigned long)objp);		\
			BUG();					\
		}						\
	} while (0)

#else
# define CHECK_PAGE(pg)	do { } while (0)
#endif
The do { } while (0) form guarantees a macro keeps the same semantics wherever it is
invoked (see the sketch below). Reference:
http://www.rtems.com/rtems/maillistArchives/rtems-users/2001/august/msg00056.html
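A minimal demonstration of the pitfall the idiom avoids. A braces-only macro breaks as soon as it meets if/else, because the caller's trailing semicolon terminates the if-body early:

#include <stdio.h>

#define CHECK_BAD(p)  { puts("checking"); if (!(p)) puts("bad"); }
#define CHECK_GOOD(p) do { puts("checking"); if (!(p)) puts("bad"); } while (0)

int main(void)
{
    int ok = 1;

    /* With CHECK_BAD this would not compile: the '};' produced by the
     * expansion ends the if-body before the 'else':
     *
     *     if (ok)
     *         CHECK_BAD(ok);
     *     else              <- syntax error here
     *         puts("skipped");
     */
    if (ok)
        CHECK_GOOD(ok);      /* expands to exactly one statement, the ';' fits */
    else
        puts("skipped");
    return 0;
}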

static inline void kmem_cache_free_one(kmem_cache_t *cachep, void *objp)
{
	slab_t* slabp;

	CHECK_PAGE(virt_to_page(objp));	/* validate the page the object lives in */
	/* reduces memory footprint
	 *
	if (OPTIMIZE(cachep))
		slabp = (void*)((unsigned long)objp&(~(PAGE_SIZE-1)));
	else
	 */
	slabp = GET_PAGE_SLAB(virt_to_page(objp));	/* find the slab the object belongs to */

#if DEBUG
	if (cachep->flags & SLAB_DEBUG_INITIAL)
		/* Need to call the slab's constructor so the
		 * caller can perform a verify of its state (debugging).
		 * Called without the cache-lock held.
		 */
		cachep->ctor(objp, cachep, SLAB_CTOR_CONSTRUCTOR|SLAB_CTOR_VERIFY);

	if (cachep->flags & SLAB_RED_ZONE) {
		objp -= BYTES_PER_WORD;
		if (xchg((unsigned long *)objp, RED_MAGIC1) != RED_MAGIC2)
			/* Either write before start, or a double free. */
			BUG();
		if (xchg((unsigned long *)(objp+cachep->objsize -
				BYTES_PER_WORD), RED_MAGIC1) != RED_MAGIC2)
			/* Either write past end, or a double free. */
			BUG();
	}
	if (cachep->flags & SLAB_POISON)
		kmem_poison_obj(cachep, objp);
	if (kmem_extra_free_checks(cachep, slabp, objp))
		return;
#endif
	{
		unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize;

		slab_bufctl(slabp)[objnr] = slabp->free;
		slabp->free = objnr;	/* this object becomes the head of the slab's free list */
	}
	STATS_DEC_ACTIVE(cachep);	/* one less active object */

	/* fixup slab chains */
	{
		int inuse = slabp->inuse;
		if (unlikely(!--slabp->inuse)) {
			/* Was partial or full, now empty. */
			list_del(&slabp->list);	/* the last object went away: move the slab to the free list */
			list_add(&slabp->list, &cachep->slabs_free);
		} else if (unlikely(inuse == cachep->num)) {
			/* Was full. */
			list_del(&slabp->list);	/* a full slab just lost an object: move it to the partial list */
			list_add(&slabp->list, &cachep->slabs_partial);
		}
	}
}

#ifdef CONFIG_SMP	/* SMP */
static inline void __free_block (kmem_cache_t* cachep,
			void** objpp, int len)
{
	for ( ; len > 0; len--, objpp++)
		kmem_cache_free_one(cachep, *objpp);	/* free objects one by one until none are left */
}

static void free_block (kmem_cache_t* cachep, void** objpp, int len)
{
	spin_lock(&cachep->spinlock);
	__free_block(cachep, objpp, len);	/* lock, free the block via __free_block(), unlock */
	spin_unlock(&cachep->spinlock);
}
#endif

/*
* __kmem_cache_free
* called with disabled ints
*/

static inline void __kmem_cache_free (kmem_cache_t *cachep, void* objp)
{
#ifdef CONFIG_SMP	/* SMP: free through this CPU's array, counting hits and misses (only when STATS is 1) */
	cpucache_t *cc = cc_data(cachep);

	CHECK_PAGE(virt_to_page(objp));
	if (cc) {
		int batchcount;
		if (cc->avail < cc->limit) {	/* room in the per-cpu array: just park the object there */
			STATS_INC_FREEHIT(cachep);
			cc_entry(cc)[cc->avail++] = objp;
			return;
		}
		STATS_INC_FREEMISS(cachep);	/* array full: push a whole batch back to the slabs */
		batchcount = cachep->batchcount;
		cc->avail -= batchcount;
		free_block(cachep,
					&cc_entry(cc)[cc->avail],batchcount);
		cc_entry(cc)[cc->avail++] = objp;	/* then park this object */
		return;
	} else {
		free_block(cachep, &objp, 1);	/* no per-cpu array: free directly */
	}
#else
	kmem_cache_free_one(cachep, objp);	/* uniprocessor: free directly */
#endif
}

/**
 * kmem_cache_alloc - Allocate an object
 * @cachep: The cache to allocate from.
 * @flags: See kmalloc().
 *
 * Allocate an object from this cache. The flags are only relevant
 * if the cache has no available objects.
 */
void * kmem_cache_alloc (kmem_cache_t *cachep, int flags)
{
	return __kmem_cache_alloc(cachep, flags);
}

/**
 * kmalloc - allocate memory
 * @size: how many bytes of memory are required.
 * @flags: the type of memory to allocate.
 *
 * kmalloc is the normal method of allocating memory
 * in the kernel.
 *
 * The @flags argument may be one of:
 *
 * %GFP_USER - Allocate memory on behalf of user. May sleep.
 *
 * %GFP_KERNEL - Allocate normal kernel ram. May sleep.
 *
 * %GFP_ATOMIC - Allocation will not sleep. Use inside interrupt handlers.
 *
 * Additionally, the %GFP_DMA flag may be set to indicate the memory
 * must be suitable for DMA. This can mean different things on different
 * platforms. For example, on i386, it means that the memory must come
 * from the first 16MB.
 */
void * kmalloc (size_t size, int flags)
{
	cache_sizes_t *csizep = cache_sizes;

	for (; csizep->cs_size; csizep++) {
		if (size > csizep->cs_size)	/* walk the table to the first general cache that fits */
			continue;
		return __kmem_cache_alloc(flags & GFP_DMA ?	/* pick the DMA or the normal cache accordingly */
			 csizep->cs_dmacachep : csizep->cs_cachep, flags);
	}
	return NULL;	/* nothing fits: fail */
}
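A user-space model of that size-class selection, using a shortened stand-in for the cache_sizes table. The loop is the same "first class that fits" walk as above:

#include <stdio.h>
#include <stddef.h>

struct cache_size { size_t cs_size; const char *cs_name; };

static const struct cache_size table[] = {
    { 32, "size-32" }, { 64, "size-64" }, { 128, "size-128" },
    { 256, "size-256" }, { 512, "size-512" }, { 1024, "size-1024" },
    { 0, NULL }   /* cs_size == 0 terminates the table */
};

static const char *pick_cache(size_t size)
{
    const struct cache_size *csizep = table;
    for (; csizep->cs_size; csizep++) {
        if (size > csizep->cs_size)
            continue;
        return csizep->cs_name;   /* first class large enough */
    }
    return NULL;   /* too big for every class: the real kmalloc also fails */
}

int main(void)
{
    printf("kmalloc(100)  -> %s\n", pick_cache(100));   /* size-128 */
    printf("kmalloc(128)  -> %s\n", pick_cache(128));   /* size-128 */
    printf("kmalloc(4000) -> %s\n",
           pick_cache(4000) ? pick_cache(4000) : "(none)");
    return 0;
}

Note the rounding cost this implies: a 100-byte request consumes a 128-byte object, which is part of why the kernel keeps dedicated caches for its hot object types.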

/**
 * kmem_cache_free - Deallocate an object
 * @cachep: The cache the allocation was from.
 * @objp: The previously allocated object.
 *
 * Free an object which was previously allocated from this
 * cache.
 */
void kmem_cache_free (kmem_cache_t *cachep, void *objp)
{
	unsigned long flags;
#if DEBUG	/* debug builds */
	CHECK_PAGE(virt_to_page(objp));
	if (cachep != GET_PAGE_CACHE(virt_to_page(objp)))	/* the object does not belong to this cache: bug */
		BUG();
#endif

	local_irq_save(flags);
	__kmem_cache_free(cachep, objp);	/* disable interrupts, free, restore */
	local_irq_restore(flags);
}

/**
 * kfree - free previously allocated memory
 * @objp: pointer returned by kmalloc.
 *
 * Don't free memory not originally allocated by kmalloc()
 * or you will run into trouble.
 */
void kfree (const void *objp)
{
	kmem_cache_t *c;
	unsigned long flags;

	if (!objp)	/* NULL cannot be freed */
		return;
	local_irq_save(flags);	/* disable interrupts */
	CHECK_PAGE(virt_to_page(objp));	/* validate the object's page */
	c = GET_PAGE_CACHE(virt_to_page(objp));	/* c is the cache the object belongs to */
	__kmem_cache_free(c, (void*)objp);	/* free the object */
	local_irq_restore(flags);	/* restore interrupts */
}

unsigned int kmem_cache_size(kmem_cache_t *cachep)
{
#if DEBUG	/* in debug builds */
	if (cachep->flags & SLAB_RED_ZONE)
		return (cachep->objsize - 2*BYTES_PER_WORD);	/* subtract the two red-zone boundary words */
#endif
	return cachep->objsize;	/* usable object size in this cache */
}

kmem_cache_t * kmem_find_general_cachep (size_t size, int gfpflags)
{	/* find the matching general cache */
	cache_sizes_t *csizep = cache_sizes;

	/* This function could be moved to the header file, and
	 * made inline so consumers can quickly determine what
	 * cache pointer they require.
	 */
	for ( ; csizep->cs_size; csizep++) {
		if (size > csizep->cs_size)
			continue;
		break;
	}
	return (gfpflags & GFP_DMA) ? csizep->cs_dmacachep : csizep->cs_cachep;	/* DMA or normal, as requested */
}

#ifdef CONFIG_SMP	/* SMP support: */

/* called with cache_chain_sem acquired. */
static int kmem_tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount)
{	/* (re)size a cache's per-cpu arrays */
	ccupdate_struct_t new;
	int i;

	/*
	 * These are admin-provided, so we are more graceful.
	 */
	if (limit < 0)
		return -EINVAL;
	if (batchcount < 0)
		return -EINVAL;
	if (batchcount > limit)
		return -EINVAL;
	if (limit != 0 && !batchcount)
		return -EINVAL;

	memset(&new.new,0,sizeof(new.new));	/* zero the update structure */
	if (limit) {
		for (i = 0; i< smp_num_cpus; i++) {	/* for each CPU, allocate header + 'limit' object pointers */
			cpucache_t* ccnew;

			ccnew = kmalloc(sizeof(void*)*limit+
					sizeof(cpucache_t), GFP_KERNEL);
			if (!ccnew)
				goto oom;	/* allocation failed */
			ccnew->limit = limit;	/* set the capacity */
			ccnew->avail = 0;	/* nothing cached yet */
			new.new[cpu_logical_map(i)] = ccnew;	/* slot it in for that CPU */
		}
	}
	new.cachep = cachep;	/* point at the cache being tuned */
	spin_lock_irq(&cachep->spinlock);	/* lock */
	cachep->batchcount = batchcount;	/* record the batch size */
	spin_unlock_irq(&cachep->spinlock);	/* unlock */

	smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);	/* swap in the new arrays on every CPU */

	for (i = 0; i < smp_num_cpus; i++) {	/* then drain and free each old array */
		cpucache_t* ccold = new.new[cpu_logical_map(i)];
		if (!ccold)
			continue;
		local_irq_disable();
		free_block(cachep, cc_entry(ccold), ccold->avail);
		local_irq_enable();
		kfree(ccold);
	}
	return 0;
oom:
	for (i--; i >= 0; i--)
		kfree(new.new[cpu_logical_map(i)]);	/* undo the allocations made so far */
	return -ENOMEM;	/* out of memory */
}

static void enable_cpucache (kmem_cache_t *cachep)	/* turn on a cache's per-cpu arrays */
{
	int err;
	int limit;

	/* FIXME: optimize */
	if (cachep->objsize > PAGE_SIZE)	/* pick a limit by object size */
		return;
	if (cachep->objsize > 1024)
		limit = 60;
	else if (cachep->objsize > 256)
		limit = 124;
	else
		limit = 252;

	err = kmem_tune_cpucache(cachep, limit, limit/2);
	if (err)
		printk(KERN_ERR "enable_cpucache failed for %s, error %d.\n",
					cachep->name, -err);
}

static void enable_all_cpucaches (void)
{
	struct list_head* p;

	down(&cache_chain_sem);	/* take the cache-chain semaphore */

	p = &cache_cache.next;
	do {
		kmem_cache_t* cachep = list_entry(p, kmem_cache_t, next);

		enable_cpucache(cachep);	/* enable them one by one */
		p = cachep->next.next;
	} while (p != &cache_cache.next);

	up(&cache_chain_sem);	/* release the semaphore */
}
#endif

/**
 * kmem_cache_reap - Reclaim memory from caches.
 * @gfp_mask: the type of memory required.
 *
 * Called from do_try_to_free_pages() and __alloc_pages()
 */
int kmem_cache_reap (int gfp_mask)
{
	slab_t *slabp;
	kmem_cache_t *searchp;
	kmem_cache_t *best_cachep;
	unsigned int best_pages;
	unsigned int best_len;
	unsigned int scan;
	int ret = 0;

	if (gfp_mask & __GFP_WAIT)	/* the caller may sleep */
		down(&cache_chain_sem);	/* so block on the semaphore */
	else
		if (down_trylock(&cache_chain_sem))	/* otherwise only try to take it */
			return 0;

	scan = REAP_SCANLEN;	/* scan at most REAP_SCANLEN (10) caches */
	best_len = 0;
	best_pages = 0;
	best_cachep = NULL;
	searchp = clock_searchp;	/* resume where the last reap left off */
	do {
		unsigned int pages;
		struct list_head* p;
		unsigned int full_free;

		/* It's safe to test this without holding the cache-lock. */
		if (searchp->flags & SLAB_NO_REAP)	/* reaping forbidden: move on */
			goto next;
		spin_lock_irq(&searchp->spinlock);	/* lock */
		if (searchp->growing)	/* growing right now: skip it (unlocking on the way) */
			goto next_unlock;
		if (searchp->dflags & DFLGS_GROWN) {	/* grew recently: clear the flag and skip it this time */
			searchp->dflags &= ~DFLGS_GROWN;
			goto next_unlock;
		}
#ifdef CONFIG_SMP	/* on SMP */
		{
			cpucache_t *cc = cc_data(searchp);
			if (cc && cc->avail) {	/* drain this CPU's array first */
				__free_block(searchp, cc_entry(cc), cc->avail);
				cc->avail = 0;
			}
		}
#endif

		full_free = 0;
		p = searchp->slabs_free.next;	/* walk the list of fully free slabs */
		while (p != &searchp->slabs_free) {
			slabp = list_entry(p, slab_t, list);
#if DEBUG	/* a "free" slab with objects in use is a bug */
			if (slabp->inuse)
				BUG();
#endif
			full_free++;
			p = p->next;
		}	/* full_free now counts the completely free slabs */

		/*
		 * Try to avoid slabs with constructors and/or
		 * more than one page per slab (as it can be difficult
		 * to get high orders from gfp()).
		 */
		pages = full_free * (1<<searchp->gfporder);	/* pages held by all the free slabs */
		if (searchp->ctor)	/* discount caches with constructors */
			pages = (pages*4+1)/5;
		if (searchp->gfporder)	/* and caches with multi-page slabs */
			pages = (pages*4+1)/5;
		if (pages > best_pages) {	/* a better victim than anything so far */
			best_cachep = searchp;
			best_len = full_free;
			best_pages = pages;
			if (pages >= REAP_PERFECT) {	/* good enough (>= 10): point clock_searchp past it and reap immediately */
				clock_searchp = list_entry(searchp->next.next,
							kmem_cache_t,next);
				goto perfect;
			}
		}
next_unlock:
		spin_unlock_irq(&searchp->spinlock);	/* unlock */
next:
		searchp = list_entry(searchp->next.next,kmem_cache_t,next);	/* move to the next cache on the chain */
	} while (--scan && searchp != clock_searchp);	/* until the scan budget is spent or every cache has been seen */

	clock_searchp = searchp;

	if (!best_cachep)	/* nothing reapable found */
		/* couldn't find anything to reap */
		goto out;	/* give up */

	spin_lock_irq(&best_cachep->spinlock);	/* lock the victim */
perfect:
	/* free only 50% of the free slabs */
	best_len = (best_len + 1)/2;	/* only half the free slabs are released */
	for (scan = 0; scan < best_len; scan++) {
		struct list_head *p;

		if (best_cachep->growing)	/* the cache started growing: stop */
			break;
		p = best_cachep->slabs_free.prev;
		if (p == &best_cachep->slabs_free)	/* reached the list head: no free slabs left */
			break;
		slabp = list_entry(p,slab_t,list);
#if DEBUG	/* a free slab with live objects is a bug */
		if (slabp->inuse)
			BUG();
#endif
		list_del(&slabp->list);	/* unlink the slab */
		STATS_INC_REAPED(best_cachep);	/* count the reap */

		/* Safe to drop the lock. The slab is no longer linked to the
		 * cache.
		 */
		spin_unlock_irq(&best_cachep->spinlock);	/* unlock: the slab is already off the list */
		kmem_slab_destroy(best_cachep, slabp);	/* destroy it, returning its memory to the system */
		spin_lock_irq(&best_cachep->spinlock);	/* relock */
	}
	spin_unlock_irq(&best_cachep->spinlock);	/* unlock */
	ret = scan * (1 << best_cachep->gfporder);	/* number of pages released */
out:
	up(&cache_chain_sem);	/* release the cache-chain semaphore */
	return ret;
}
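The victim-scoring heuristic above is easy to rerun in user space. Free pages are discounted by 4/5 once for a constructor and once more for multi-page slabs, so cheap-to-rebuild caches get reaped first:

#include <stdio.h>

static unsigned int score(unsigned int free_slabs, unsigned int gfporder,
                          int has_ctor)
{
    unsigned int pages = free_slabs * (1u << gfporder);
    if (has_ctor)                    /* discount caches with constructors */
        pages = (pages * 4 + 1) / 5;
    if (gfporder)                    /* and caches with multi-page slabs */
        pages = (pages * 4 + 1) / 5;
    return pages;
}

int main(void)
{
    printf("10 free pages, plain cache:    %u\n", score(10, 0, 0)); /* 10 */
    printf("10 free pages, with ctor:      %u\n", score(10, 0, 1)); /*  8 */
    printf("10 free pages, order-1 slabs:  %u\n", score(5, 1, 0));  /*  8 */
    return 0;
}

With equal amounts of free memory, the plain single-page cache wins the comparison against best_pages and is reaped first.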

#ifdef CONFIG_PROC_FS	/* everything below implements the /proc/slabinfo support */
static void *s_start(struct seq_file *m, loff_t *pos)
{
loff_t n = *pos;
struct list_head *p;

down(&cache_chain_sem);
if (!n)
return (void *)1;
p = &cache_cache.next;
while (--n) {
p = p->next;
if (p == &cache_cache.next)
return NULL;
}
return list_entry(p, kmem_cache_t, next);
}

static void *s_next(struct seq_file *m, void *p, loff_t *pos)
{
kmem_cache_t *cachep = p;
++*pos;
if (p == (void *)1)
return &cache_cache;
cachep = list_entry(cachep->next.next, kmem_cache_t, next);
return cachep == &cache_cache ? NULL : cachep;
}

static void s_stop(struct seq_file *m, void *p)
{
up(&cache_chain_sem);
}

static int s_show(struct seq_file *m, void *p)
{
kmem_cache_t *cachep = p;
struct list_head *q;
slab_t*slabp;
unsigned longactive_objs;
unsigned longnum_objs;
unsigned longactive_slabs = 0;
unsigned longnum_slabs;
const char *name;

if (p == (void*)1) {
/*
* Output format version, so at least we can change it
* without _too_ many complaints.
*/

seq_puts(m, "slabinfo - version: 1.1"
#if STATS
" (statistics)"
#endif
#ifdef CONFIG_SMP
" (SMP)"
#endif
"/n");
return 0;
}

spin_lock_irq(&cachep->spinlock);
active_objs = 0;
num_slabs = 0;
list_for_each(q,&cachep->slabs_full) {
slabp = list_entry(q, slab_t, list);
if (slabp->inuse != cachep->num)
BUG();
active_objs += cachep->num;
active_slabs++;
}
list_for_each(q,&cachep->slabs_partial) {
slabp = list_entry(q, slab_t, list);
if (slabp->inuse == cachep->num || !slabp->inuse)
BUG();
active_objs += slabp->inuse;
active_slabs++;
}
list_for_each(q,&cachep->slabs_free) {
slabp = list_entry(q, slab_t, list);
if (slabp->inuse)
BUG();
num_slabs++;
}
num_slabs+=active_slabs;
num_objs = num_slabs*cachep->num;

name = cachep->name;
{
char tmp;
mm_segment_t old_fs;
old_fs = get_fs();
set_fs(KERNEL_DS);
if (__get_user(tmp, name))
name = "broken";
set_fs(old_fs);
}

seq_printf(m, "%-17s %6lu %6lu %6u %4lu %4lu %4u",
name, active_objs, num_objs, cachep->objsize,
active_slabs, num_slabs, (1<<cachep->gfporder));

#if STATS
{
unsigned long errors = cachep->errors;
unsigned long high = cachep->high_mark;
unsigned long grown = cachep->grown;
unsigned long reaped = cachep->reaped;
unsigned long allocs = cachep->num_allocations;

seq_printf(m, " : %6lu %7lu %5lu %4lu %4lu",
high, allocs, grown, reaped, errors);
}
#endif
#ifdef CONFIG_SMP
{
cpucache_t *cc = cc_data(cachep);
unsigned int batchcount = cachep->batchcount;
unsigned int limit;

if (cc)
limit = cc->limit;
else
limit = 0;
seq_printf(m, " : %4u %4u",
limit, batchcount);
}
#endif
#if STATS && defined(CONFIG_SMP)
{
unsigned long allochit = atomic_read(&cachep->allochit);
unsigned long allocmiss = atomic_read(&cachep->allocmiss);
unsigned long freehit = atomic_read(&cachep->freehit);
unsigned long freemiss = atomic_read(&cachep->freemiss);
seq_printf(m, " : %6lu %6lu %6lu %6lu",
allochit, allocmiss, freehit, freemiss);
}
#endif
spin_unlock_irq(&cachep->spinlock);
seq_putc(m, '\n');
return 0;
}

/**
* slabinfo_op - iterator that generates /proc/slabinfo
*
* Output layout:
* cache-name
* num-active-objs
* total-objs
* object size
* num-active-slabs
* total-slabs
* num-pages-per-slab
* + further values on SMP and with statistics enabled
*/


struct seq_operations slabinfo_op = {
start:s_start,
next:s_next,
stop:s_stop,
show:s_show
};

#define MAX_SLABINFO_WRITE 128
/**
* slabinfo_write - SMP tuning for the slab allocator
* @file: unused
* @buffer: user buffer
* @count: data len
* @data: unused
*/

ssize_t slabinfo_write(struct file *file, const char *buffer,
size_t count, loff_t *ppos)

{
#ifdef CONFIG_SMP
char kbuf[MAX_SLABINFO_WRITE+1], *tmp;
int limit, batchcount, res;
struct list_head *p;

if (count > MAX_SLABINFO_WRITE)
return -EINVAL;
if (copy_from_user(&kbuf, buffer, count))
return -EFAULT;
kbuf[MAX_SLABINFO_WRITE] = '\0';

tmp = strchr(kbuf, ' ');
if (!tmp)
return -EINVAL;
*tmp = '\0';
tmp++;
limit = simple_strtol(tmp, &tmp, 10);
while (*tmp == ' ')
tmp++;
batchcount = simple_strtol(tmp, &tmp, 10);

/* Find the cache in the chain of caches. */
down(&cache_chain_sem);
res = -EINVAL;
list_for_each(p,&cache_chain) {
kmem_cache_t *cachep = list_entry(p, kmem_cache_t, next);

if (!strcmp(cachep->name, kbuf)) {
res = kmem_tune_cpucache(cachep, limit, batchcount);
break;
}
}
up(&cache_chain_sem);
if (res >= 0)
res = count;
return res;
#else
return -EINVAL;
#endif
}
#endif