memcached memory management (3) ---------------- assoc
assoc.{h,c}
This file implements lookup, insertion, and deletion of items by their hash values. Note that deletion here only means removing the item from the hash table, not actually freeing its memory.
Some important global variables:
```c
typedef unsigned long int ub4;   /* unsigned 4-byte quantities */
typedef unsigned char ub1;       /* unsigned 1-byte quantities */

/* how many powers of 2's worth of buckets we use */
static unsigned int hashpower = HASHPOWER_DEFAULT; /* table size is 2^hashpower */

#define hashsize(n) ((ub4)1<<(n))
#define hashmask(n) (hashsize(n)-1)

/* Main hash table. This is where we look except during expansion. */
static item** primary_hashtable = 0;

/*
 * Previous hash table. During expansion, we look here for keys that haven't
 * been moved over to the primary yet.
 */
static item** old_hashtable = 0; /* holds the old table while expanding; lookups still hit it
                                    until migration finishes, then it is freed */

/* Number of items in the hash table. */
static unsigned int hash_items = 0;

/* Flag: Are we in the middle of expanding now? */
static bool expanding = false;

/*
 * During expansion we migrate values with bucket granularity; this is how
 * far we've gotten so far. Ranges from 0 .. hashsize(hashpower - 1) - 1.
 */
static unsigned int expand_bucket = 0; /* expansion proceeds bucket by bucket;
                                          this records how far it has gotten */
```
Initializing the hash table
```c
void assoc_init(const int hashtable_init) {
    if (hashtable_init) {
        hashpower = hashtable_init;
    }
    primary_hashtable = calloc(hashsize(hashpower), sizeof(void *));
    if (! primary_hashtable) {
        fprintf(stderr, "Failed to init hashtable.\n");
        exit(EXIT_FAILURE);
    }
    STATS_LOCK();
    stats.hash_power_level = hashpower;
    stats.hash_bytes = hashsize(hashpower) * sizeof(void *);
    STATS_UNLOCK();
}
```
Very simple: allocate memory of the corresponding size for primary_hashtable and zero it (calloc zeroes the allocation).
primary_hashtable is an item **, i.e. an array of item * pointers: primary_hashtable[i] points to the first item in bucket i. The table itself does not store items; each slot plays the same role as an item's h_next pointer, heading a singly linked chain.
Lookup
```c
item *assoc_find(const char *key, const size_t nkey, const uint32_t hv) {
    item *it;
    unsigned int oldbucket;

    if (expanding &&
        (oldbucket = (hv & hashmask(hashpower - 1))) >= expand_bucket)
    {
        it = old_hashtable[oldbucket];
    } else {
        it = primary_hashtable[hv & hashmask(hashpower)];
    }

    item *ret = NULL;
    int depth = 0;
    while (it) {
        if ((nkey == it->nkey) && (memcmp(key, ITEM_key(it), nkey) == 0)) {
            ret = it;
            break;
        }
        it = it->h_next;
        ++depth;
    }
    MEMCACHED_ASSOC_FIND(key, nkey, depth);
    return ret;
}
```
The parameters: key is the key string, nkey its length, and hv the value produced by hashing the key.
It is called from thread.c like this:
```c
uint32_t hv;
hv = hash(key, nkey, 0);
```
As you can see, the hash value depends only on key and nkey, not on the stored data.
The lookup process is simple:
1. Pick the bucket: if the table is expanding and the index under the old mask is >= expand_bucket, that bucket has not been migrated yet, so look in old_hashtable; otherwise look in primary_hashtable.
2. Walk the chain in that bucket comparing keys; on a match, return that item.
There is also a depth counter recording how many chain hops were needed to reach the key. A large depth means too many items piled up in one bucket, i.e. the hash function may be distributing keys unevenly.
Insertion
```c
/* Note: this isn't an assoc_update.  The key must not already exist to call this */
int assoc_insert(item *it, const uint32_t hv) {
    unsigned int oldbucket;

//  assert(assoc_find(ITEM_key(it), it->nkey) == 0);  /* shouldn't have duplicately named things defined */

    if (expanding &&
        (oldbucket = (hv & hashmask(hashpower - 1))) >= expand_bucket)
    {
        it->h_next = old_hashtable[oldbucket];
        old_hashtable[oldbucket] = it;
    } else {
        it->h_next = primary_hashtable[hv & hashmask(hashpower)];
        primary_hashtable[hv & hashmask(hashpower)] = it;
    }

    hash_items++;
    if (! expanding && hash_items > (hashsize(hashpower) * 3) / 2) {
        assoc_expand();
    }

    MEMCACHED_ASSOC_INSERT(ITEM_key(it), it->nkey, hash_items);
    return 1;
}
```
The flow mirrors find: locate the bucket, then prepend the new item as the bucket's first element.
Note that when hash_items, the number of items in the table, exceeds 1.5 × hashsize(hashpower), the table is expanded via assoc_expand().
Deletion
```c
void assoc_delete(const char *key, const size_t nkey, const uint32_t hv) {
    /* before points at the previous item's h_next (if there is a previous item);
       *before is therefore the address of the item being deleted */
    item **before = _hashitem_before(key, nkey, hv);

    if (*before) {
        item *nxt;
        hash_items--;
        /* The DTrace probe cannot be triggered as the last instruction
         * due to possible tail-optimization by the compiler */
        MEMCACHED_ASSOC_DELETE(key, nkey, hash_items);
        nxt = (*before)->h_next; /* the item after the one being deleted, possibly NULL */
        (*before)->h_next = 0;   /* probably pointless, but whatever. */
        *before = nxt;
        return;
    }
    /* Note:  we never actually get here.  the callers don't delete things
       they can't find. */
    assert(*before != 0);
}
```
This is a standard singly-linked-list removal; the steps are:
1. Find the pointer that points at the item to be deleted, i.e. item **before. In a bucket with two or more elements this is the previous item's h_next; if the item is the bucket's only (or first) element it is &primary_hashtable[n]. This is what the call to _hashitem_before does.
2. Unlink the item as in any singly linked list.
As you can see, this only removes the item from the chain; no memory is freed here.
Expanding the hash table
```c
/* grows the hashtable to the next power of 2. */
static void assoc_expand(void) {
    old_hashtable = primary_hashtable;

    primary_hashtable = calloc(hashsize(hashpower + 1), sizeof(void *));
    if (primary_hashtable) {
        if (settings.verbose > 1)
            fprintf(stderr, "Hash table expansion starting\n");
        hashpower++;
        expanding = true;
        expand_bucket = 0;
        STATS_LOCK();
        stats.hash_power_level = hashpower;
        stats.hash_bytes += hashsize(hashpower) * sizeof(void *);
        stats.hash_is_expanding = 1;
        STATS_UNLOCK();
        pthread_cond_signal(&maintenance_cond);
    } else {
        primary_hashtable = old_hashtable;
        /* Bad news, but we can keep running. */
    }
}
```
On success this merely hands the current primary_hashtable over to old_hashtable and allocates fresh memory for the new table, which has twice as many buckets as the old one; the actual item migration happens later in the maintenance thread.
Finally, pthread_cond_signal(&maintenance_cond); wakes up the thread waiting on this condition variable.
```c
static pthread_t maintenance_tid;

int start_assoc_maintenance_thread() {
    int ret;
    char *env = getenv("MEMCACHED_HASH_BULK_MOVE");
    if (env != NULL) {
        hash_bulk_move = atoi(env);
        if (hash_bulk_move == 0) {
            hash_bulk_move = DEFAULT_HASH_BULK_MOVE;
        }
    }
    if ((ret = pthread_create(&maintenance_tid, NULL,
                              assoc_maintenance_thread, NULL)) != 0) {
        fprintf(stderr, "Can't create thread: %s\n", strerror(ret));
        return -1;
    }
    return 0;
}
```
This creates a new thread whose entry function is:
```c
static volatile int do_run_maintenance_thread = 1;

#define DEFAULT_HASH_BULK_MOVE 1
int hash_bulk_move = DEFAULT_HASH_BULK_MOVE;

static void *assoc_maintenance_thread(void *arg) {

    while (do_run_maintenance_thread) {
        int ii = 0;

        /* Lock the cache, and bulk move multiple buckets to the new
         * hash table. */
        mutex_lock(&cache_lock);

        for (ii = 0; ii < hash_bulk_move && expanding; ++ii) {
            item *it, *next;
            int bucket;

            /* rehash every item of the current old bucket into the new table
             * (hashpower was already incremented in assoc_expand) */
            for (it = old_hashtable[expand_bucket]; NULL != it; it = next) {
                next = it->h_next;

                bucket = hash(ITEM_key(it), it->nkey, 0) & hashmask(hashpower);
                it->h_next = primary_hashtable[bucket];
                primary_hashtable[bucket] = it;
            }

            old_hashtable[expand_bucket] = NULL;

            expand_bucket++;
            if (expand_bucket == hashsize(hashpower - 1)) {
                /* everything has been moved out of the old table:
                 * clear the expanding flag and free old_hashtable */
                expanding = false;
                free(old_hashtable);
                STATS_LOCK();
                stats.hash_bytes -= hashsize(hashpower - 1) * sizeof(void *);
                stats.hash_is_expanding = 0;
                STATS_UNLOCK();
                if (settings.verbose > 1)
                    fprintf(stderr, "Hash table expansion done\n");
            }
        }

        if (!expanding) {
            /* We are done expanding.. just wait for next invocation */
            pthread_cond_wait(&maintenance_cond, &cache_lock);
        }

        mutex_unlock(&cache_lock);
    }
    return NULL;
}
```
That's about it.