Linux Per-CPU Data

Reasons for Using Per-CPU Data

There are a couple of benefits to using per-CPU data. The first is the reduction in locking requirements. Depending on the semantics by which processors access the per-CPU data, you might not need any locking at all. Keep in mind that the "only this processor accesses this data" rule is only a programming convention. You need to ensure that the local processor accesses only its unique data. Nothing stops you from cheating.

Second, per-CPU data greatly reduces cache invalidation. This occurs as processors try to keep their caches in sync. If one processor manipulates data held in another processor's cache, that processor must flush or otherwise update its cache. Constant cache invalidation is called thrashing the cache and wreaks havoc on system performance. The use of per-CPU data keeps cache effects to a minimum because processors ideally access only their own data. The percpu interface cache-aligns all data to ensure that accessing one processor's data does not bring in another processor's data on the same cache line.

Consequently, the use of per-CPU data often removes (or at least minimizes) the need for locking. The only safety requirement for the use of per-CPU data is disabling kernel preemption, which is much cheaper than locking, and the interface does so automatically. Per-CPU data can safely be used from either interrupt or process context. Note, however, that you cannot sleep in the middle of accessing per-CPU data (or else you might end up on a different processor).

No one is currently required to use the new per-CPU interface. Doing things manually (with an array as originally discussed) is fine, as long as you disable kernel preemption. The new interface, however, is much easier to use and might gain additional optimizations in the future. If you do decide to use per-CPU data in your kernel code, consider the new interface. One caveat against its use is that it is not backward compatible with earlier kernels.

The New percpu Interface

The 2.6 kernel introduced a new interface, known as percpu, for creating and manipulating per-CPU data. This interface generalizes the previous example. Creation and manipulation of per-CPU data is simplified with this new approach.

The previously discussed method of creating and accessing per-CPU data is still valid and accepted. This new interface, however, grew out of the need for a simpler and more powerful method for manipulating per-CPU data on large symmetrical multiprocessing computers.

The header <linux/percpu.h> declares all the routines. You can find the actual definitions there, in mm/slab.c, and in <asm/percpu.h>.

The percpu_data structure, which holds one pointer per possible processor, is defined as follows:

struct percpu_data {
        void *ptrs[NR_CPUS];
};

Per-CPU Data at Compile-Time

Defining a per-CPU variable at compile time is quite easy:

DEFINE_PER_CPU(type, name);

This creates an instance of a variable of type type, named name, for each processor on the system. If you need a declaration of the variable elsewhere, to avoid compile warnings, the following macro is your friend:

DECLARE_PER_CPU(type, name);

You can manipulate the variables with the get_cpu_var() and put_cpu_var() routines. A call to get_cpu_var() returns an lvalue for the given variable on the current processor. It also disables preemption, which put_cpu_var() correspondingly enables.

Acquire this processor's copy of name, with preemption disabled, and increment it:

get_cpu_var(name)++;   /* increment name on this processor */

Then release the variable and allow kernel preemption again:

put_cpu_var(name);     /* done; enable kernel preemption */
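Putting these compile-time pieces together, a per-CPU hit counter might look like the following sketch. The variable hit_count and the function record_hit() are hypothetical names used only for illustration:

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, hit_count);

static void record_hit(void)
{
        /* get_cpu_var() disables preemption and yields this CPU's copy */
        get_cpu_var(hit_count)++;

        /* done; put_cpu_var() re-enables kernel preemption */
        put_cpu_var(hit_count);
}

Because each processor only ever increments its own copy, no lock is needed here.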

You can obtain the value of another processor's per-CPU data, too:

per_cpu(name, cpu)++;  /* increment name on the given processor */

You need to be careful with this approach because per_cpu() neither disables kernel preemption nor provides any sort of locking mechanism. The lockless nature of per-CPU data exists only if the current processor is the only manipulator of the data. If other processors touch other processors' data, you need locks. Be careful. Chapter 8, "Kernel Synchronization Introduction," and Chapter 9, "Kernel Synchronization Methods," discuss locking.
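As an illustration of read-only access to other processors' copies, the sketch below totals the hypothetical hit_count counter from the earlier example across every processor using the kernel's for_each_possible_cpu() iterator. It takes no lock, so the total may be slightly stale while other processors are still incrementing their own copies; if an exact, consistent figure were required, one of the locking schemes from Chapters 8 and 9 would be needed:

#include <linux/cpumask.h>
#include <linux/percpu.h>

static unsigned long total_hits(void)
{
        unsigned long sum = 0;
        int cpu;

        /* read, but never modify, every processor's copy */
        for_each_possible_cpu(cpu)
                sum += per_cpu(hit_count, cpu);

        return sum;
}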

Another subtle note: These compile-time per-CPU examples do not work for modules because the linker actually creates them in a unique executable section (for the curious, .data.percpu). If you need to access per-CPU data from modules, or if you need to create such data dynamically, there is hope.

Per-CPU Data at Runtime

The kernel implements a dynamic allocator, similar to kmalloc(), for creating per-CPU data. This routine creates an instance of the requested memory for each processor on the system. The prototypes are in <linux/percpu.h>:

void *alloc_percpu(type); /* a macro */

void *__alloc_percpu(size_t size, size_t align);

void free_percpu(const void *);

The alloc_percpu() macro allocates one instance of an object of the given type for every processor on the system. It is a wrapper around __alloc_percpu(), which takes the actual number of bytes to allocate as a parameter and the number of bytes on which to align the allocation. The alloc_percpu() macro aligns the allocation on a byte boundary that is the natural alignment of the given type. Such alignment is the usual behavior. For example,

struct rabid_cheetah *p = alloc_percpu(struct rabid_cheetah);

is the same as

struct rabid_cheetah *p = __alloc_percpu(sizeof (struct rabid_cheetah),
                                         __alignof__ (struct rabid_cheetah));

The __alignof__ construct is a gcc feature that returns the required (or recommended, in the case of weird architectures with no alignment requirements) alignment in bytes for a given type or lvalue. Its syntax is just like that of sizeof. For example,

__alignof__ (unsigned long)

would return four on x86. When given an lvalue, the return value is the largest alignment that the lvalue might have. For example, an lvalue inside a structure could have a greater alignment requirement than if an instance of the same type were created outside of the structure, because of structure alignment requirements. Issues of alignment are further discussed in Chapter 19, "Portability."

A corresponding call to free_percpu() frees the given data on all processors.

A call to alloc_percpu() or __alloc_percpu() returns a pointer, which is used to indirectly reference the dynamically created per-CPU data. The kernel provides two macros to make this easy:

get_cpu_ptr(ptr);   /* return a void pointer to this processor's copy of ptr */

put_cpu_ptr(ptr);   /* done; enable kernel preemption */

The get_cpu_ptr() macro returns a pointer to the specific instance of the current processor's data. It also disables kernel preemption, which a call to put_cpu_ptr() then enables.

Let's look at a full example of using these functions. Of course, this example is a bit silly because you would normally allocate the memory once (perhaps in some initialization function), use it in various places, and free it once (perhaps in some shutdown function). Nevertheless, this example should make usage quite clear:

void *percpu_ptr;
unsigned long *foo;

percpu_ptr = alloc_percpu(unsigned long);
if (!percpu_ptr)
        /* error allocating memory .. */

foo = get_cpu_ptr(percpu_ptr);
/* manipulate foo .. */
put_cpu_ptr(percpu_ptr);
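When the data is no longer needed, perhaps in the shutdown function mentioned above, every processor's instance is released with a single matching call:

free_percpu(percpu_ptr);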

Finally, the function per_cpu_ptr() returns a given processor's unique data:

per_cpu_ptr(ptr, cpu);

Again, it does not disable kernel preemption, and if you touch another processor's data, keep in mind that you probably need to implement locking.
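As a final sketch, the dynamically allocated counters from the earlier example could be totaled from a single processor as shown below. It reuses the hypothetical percpu_ptr allocated above and only reads the other processors' copies; any code that modified them from here would need the locking just mentioned:

#include <linux/cpumask.h>
#include <linux/percpu.h>

unsigned long sum = 0;
int cpu;

for_each_possible_cpu(cpu) {
        unsigned long *p = per_cpu_ptr(percpu_ptr, cpu);
        sum += *p;
}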
