Linux Kernel Notes (1)


  • Kernel
    • BPF
    • CGroup
    • Events
    • IRQ
      • IPI interrupts
      • spurious interrupt
    • RCU mechanism
      • Spinlocks
      • Mutex and Semaphore
      • MCS locks
  • MM
    • cma
    • bootmem memblock
    • buddy slab
    • ksm
    • compaction
    • highmem
    • LRU page reclaim
    • mmap
    • Memory synchronization and cache handling
  • IPC
  • Net
  • Security
  • Useful techniques
  • Reflections on reading the kernel source

Kernel

BPF

BPF stands for Berkeley Packet Filter [1]. It was first used on UNIX systems to filter and demultiplex network traffic, and libpcap relies on it when capturing packets; because it is highly extensible, it was later extended to performance monitoring as well. So how does BPF actually work?

First, if we want to capture and analyze traffic with libpcap, the most direct approach is:
1. Listen on the device and get hold of the data every time a packet is sent or received.
2. Analyze the packets inside our own program and adjust the program's variables according to the result.

In this process the kernel does not know how we intend to analyze the data, so it has to hand the raw data to user space and let the user analyze it. For a small number of packets this is workable, but for a device handling 1M events/sec the overhead is enormous. Yet for security reasons as well as practical ones (the kernel cannot keep changing its packet-dump path to suit every user), users cannot reach directly into the kernel to fetch the data they want.

BPF's design solves this problem. BPF defines a well-structured virtual machine (or sandbox) whose instruction set can express every operation needed to analyze a block of data; because the processing is simple, this machine needs no multi-level stack frames or other heavyweight structures. It collects its results into a data block and hands that block back to the user. The design avoids the security problem while letting custom processing be injected into the kernel dynamically, without modifying kernel code, which makes it very practical for high-volume, highly customizable data analysis. bcc is a compiler front end for BPF programs [2]. socket_filter and tracing_filter are the two categories of BPF program. The usual way to use BPF is to create a program through the VFS or a system-call interface; the kernel then manages these programs and runs them at the appropriate hook points. BPF results are stored in maps, which are implemented as array of map, hashtab map, stack map and so on. The concrete steps will be filled in later.
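As a small concrete illustration of the socket_filter category, a classic BPF program can be attached to a raw socket from user space with SO_ATTACH_FILTER. The sketch below is an assumed example (not taken from these notes): it keeps only IPv4 frames and truncates everything else to zero bytes inside the kernel, so user space never pays for the rejected packets. Opening an AF_PACKET socket requires CAP_NET_RAW.

#include <stdio.h>
#include <sys/socket.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

int main(void)
{
    /* cBPF program: load the EtherType halfword at offset 12, accept the
     * packet if it is ETH_P_IP, otherwise return 0 (drop). */
    struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, 0xffff),   /* accept up to 0xffff bytes */
        BPF_STMT(BPF_RET | BPF_K, 0),        /* drop */
    };
    struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* The kernel verifies the program and runs it for every packet; only
     * packets the filter accepted are ever copied to user space. */
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt");
        return 1;
    }
    return 0;
}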
1. Why is the bpf module placed under the kernel/ directory?
2. The BPF framework is in place: it can run sandboxed programs inside the kernel and return the results to user space. So when, where and by whom are BPF programs invoked, and how? There still seems to be some distance to kernel tracing and perf; how should those be implemented?
The following is part of the socket-filtering code. Once the structures above are maintained, all the kernel has to do is insert the corresponding hook at the right place:

static inline u32 bpf_prog_run_save_cb(const struct bpf_prog *prog,
                                       struct sk_buff *skb)
{
    u8 *cb_data = bpf_skb_cb(skb);
    u8 cb_saved[BPF_SKB_CB_LEN];
    u32 res;

    if (unlikely(prog->cb_access)) {
        memcpy(cb_saved, cb_data, sizeof(cb_saved));
        memset(cb_data, 0, sizeof(cb_saved));
    }

    res = BPF_PROG_RUN(prog, skb);

    if (unlikely(prog->cb_access))
        memcpy(cb_data, cb_saved, sizeof(cb_saved));

    return res;
}

[:linux4.12/include/linux/filter.h]

"One of the more interesting features in this cycle is the ability to attach eBPF programs (user-defined,  This    allows  userdefined instrumentabon on a live kernel kernel negatively."                            –Ingo Molnár (Linux developer)                        Source:https://lkml.org/lkml/2015/4/14/232

BPF applications

CGroup

Definitions:

  • CGroup: associates a set of tasks with the parameters of one or more subsystems.
  • SubSystem: a module that uses CGroup's task-grouping facility to treat groups of tasks in particular ways.
  • Hierarchy: a set of CGroups organized as a tree. Every task in the system belongs to exactly one CGroup in a hierarchy, and every subsystem is associated with a CGroup in the hierarchy. Each hierarchy is mounted through the VFS for management; several hierarchies may exist at the same time, each managing part of the tasks in the system.
From the user's point of view, CGroups are built and managed through the virtual filesystem, and tasks are added to the appropriate CGroup. From the kernel's point of view, each subsystem implements the interfaces of the generic CGroup framework and inserts the corresponding hooks, then controls its own behaviour according to the parameters stored in the CGroup; this is how resource control and monitoring are realized.
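From the user side this is just file I/O; for example, a task can be moved into an existing group by writing its PID into that group's cgroup.procs file. The sketch below is an assumed illustration (the /sys/fs/cgroup/cpu/demo path is hypothetical; it presumes a cgroup v1 cpu controller with a "demo" group already created):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("/sys/fs/cgroup/cpu/demo/cgroup.procs", "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Writing a PID into cgroup.procs migrates that task into the group;
     * from then on the controller's parameters apply to it. */
    fprintf(f, "%d\n", getpid());
    fclose(f);
    return 0;
}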

The kernel/cgroup directory implements the many-to-many relationship between cgroups and subsystems, together with CGroup's VFS interface. It also implements the subsystems that are tied directly to the core kernel, such as cpuset, namespace, pids, freezer and rdma [2].

Events

The two code fragments below come from the call-chain implementation in perf: callchain.c declares the functions as weak symbols with __weak, and each architecture provides the concrete implementation.

__weak void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
                                  struct pt_regs *regs)
{
}

__weak void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
                                struct pt_regs *regs)
{
}

[:linux/kernel/events/callchain.c]

void
perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
    struct stackframe fr;

    if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
        /* We don't support guest os callchain now */
        return;
    }

    arm_get_current_stackframe(regs, &fr);
    walk_stackframe(&fr, callchain_trace, entry);
}

[:linux/arch/arm/kernel/perf_callchain.c]

Call-chain support backtraces the call stack at the point where a perf sample is taken and lists the corresponding user-space call stack; perf itself provides per-task performance monitoring in a multi-CPU environment.

pmu: performance monitoring unit. By implementing a pmu, different kinds of performance monitoring can be provided. Below are the event sources registered in perf_event_init:

    perf_pmu_register(&perf_swevent, "software", PERF_TYPE_SOFTWARE);
    perf_pmu_register(&perf_cpu_clock, NULL, -1);
    perf_pmu_register(&perf_task_clock, NULL, -1);
    perf_tp_register();
        /* -> perf_pmu_register(&perf_tracepoint, "tracepoint", PERF_TYPE_TRACEPOINT); */
    ret = init_hw_breakpoint();
        /* -> perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); */

[:/linux4.13.12/kernel/events/core.c]
By maintaining the list of pmus, the events objects of each pmu can be initialized, and an event context represents an instance of a pmu. A task may be monitored by several pmus across several processors, so a per-task context has to be maintained; when a new task is created this context is inherited as appropriate, and the perf cgroup subsystem also has to manage task contexts.

/**
 * struct perf_event_context - event context structure
 *
 * Used as a container for task events and CPU events as well:
 */

struct cgroup_subsys perf_event_cgrp_subsys = {
    .css_alloc  = perf_cgroup_css_alloc,
    .css_free   = perf_cgroup_css_free,
    .attach     = perf_cgroup_attach,
    /*
     * Implicitly enable on dfl hierarchy so that perf events can
     * always be filtered by cgroup2 path as long as perf_event
     * controller is not mounted on a legacy hierarchy.
     */
    .implicit_on_dfl = true,
};

[:/linux4.13.12/kernel/events/core.c]
When event information is read, the registers, call stack and other state of the user task are collected. The results are stored in the corresponding buffers and returned to user space either through the file interface (using a user-space buffer) or through mmap (using the ring buffer).

static const struct vm_operations_struct perf_mmap_vmops = {
    .open       = perf_mmap_open,
    .close      = perf_mmap_close, /* non mergable */
    .fault      = perf_mmap_fault,
    .page_mkwrite   = perf_mmap_fault,
};

static const struct file_operations perf_fops = {
    .llseek         = no_llseek,
    .release        = perf_release,
    .read           = perf_read,
    .poll           = perf_poll,
    .unlocked_ioctl     = perf_ioctl,
    .compat_ioctl       = perf_compat_ioctl,
    .mmap           = perf_mmap,
    .fasync         = perf_fasync,
};

[:/linux4.13.12/kernel/events/core.c]
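From user space this read path is reached through the perf_event_open(2) system call; the sketch below is an assumed minimal example (not from the original notes) that counts retired instructions for a small busy loop and reads the count back through perf_read():

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

/* glibc provides no wrapper, so call the syscall directly. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    uint64_t count;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.type           = PERF_TYPE_HARDWARE;
    attr.size           = sizeof(attr);
    attr.config         = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled       = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv     = 1;

    fd = perf_event_open(&attr, 0 /* this task */, -1 /* any cpu */, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile int i = 0; i < 1000000; i++)
        ;                                   /* workload being measured */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    read(fd, &count, sizeof(count));        /* lands in perf_read() in core.c */
    printf("instructions: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}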
hw_breakpoint is the lower-level abstraction behind the hardware breakpoints mentioned above; it calls the architecture-specific perf functions to control the breakpoints.

uprobes is a user-task tracing facility implemented in the kernel and can be used to debug and trace user tasks. The basic approach is to register a uprobe on a task, obtain the task's execution environment, and insert software breakpoints at the right moments to implement single-stepping, stepping into calls, call-stack tracing and so on.

IRQ

A CPU is conventionally divided into three important parts: the CU, the ALU and the MMU. Normally the CU directs the MMU to fetch instructions and data into the registers and caches, and then drives the ALU to do the computation on each clock cycle. In practice, however, a processor does more than compute; especially on multi-core processors, control is also one of its main jobs. When you press a key the machine has to respond instead of stubbornly running its current code, and whenever internal or external events are signalled it has to stop and deal with them promptly. This is the processor's interrupt mechanism. Computer systems usually rely on an interrupt controller (from the early 8259 chip to the LAPIC + IO-APIC advanced programmable interrupt controllers) to manage interrupts. The controller records interrupts from the various sources and pushes them to the processors according to an arbitration policy; each processor then enters its interrupt handling path, handles the interrupt, and normally writes the appropriate register to tell the controller that handling is finished. Some interrupts take longer to handle; in that case only the fact that the interrupt occurred is recorded in interrupt context, the interrupt context is left immediately, and interrupts are re-enabled once the corresponding kernel thread has finished the real work.

For an interrupt controller, the most basic requirement is to trigger or distribute interrupts to the designated processor according to some rule, such as the 8259's arbitration scheme, which at any given moment delivers interrupts according to a fixed priority. It must also implement masking and unmasking, so that a given interrupt can be enabled or disabled at a chosen time. When distributing external interrupts on a multi-CPU system, the state of each CPU also has to be considered, i.e. interrupt affinity. The 8259 is controlled through I/O ports: sending I/O instructions to specific ports performs the corresponding operations. The APIC is managed through MMIO: a memory region is set aside in the MMU for talking to the APIC, and the controller is programmed by reading and writing that memory. The region maps a block of storage inside the interrupt controller that governs redirection, trigger modes and so on.

struct IO_APIC_route_entry {
    __u32   vector      :  8,
        delivery_mode   :  3,   /* 000: FIXED
                                 * 001: lowest prio
                                 * 111: ExtINT
                                 */
        dest_mode   :  1,   /* 0: physical, 1: logical */
        delivery_status :  1,
        polarity    :  1,
        irr     :  1,
        trigger     :  1,   /* 0: edge, 1: level */
        mask        :  1,   /* 0: enabled, 1: disabled */
        __reserved_2    : 15;

    __u32   __reserved_3    : 24,
        dest        :  8;
} __attribute__ ((packed));

struct IR_IO_APIC_route_entry {
    __u64   vector      : 8,
        zero        : 3,
        index2      : 1,
        delivery_status : 1,
        polarity    : 1,
        irr     : 1,
        trigger     : 1,
        mask        : 1,
        reserved    : 31,
        format      : 1,
        index       : 15;
} __attribute__ ((packed));

[:/linux4.13.12/arch/x86/include/asm/io_apic.h]
Because the various interrupt controllers are implemented differently, the way their interrupts are allocated and managed differs as well; irq_domain exists to manage this.

enum irq_alloc_type {
    X86_IRQ_ALLOC_TYPE_IOAPIC = 1,
    X86_IRQ_ALLOC_TYPE_HPET,
    X86_IRQ_ALLOC_TYPE_MSI,
    X86_IRQ_ALLOC_TYPE_MSIX,
    X86_IRQ_ALLOC_TYPE_DMAR,
    X86_IRQ_ALLOC_TYPE_UV,
};

[:/linux4.13.12/arch/x86/include/asm/hw_irq.h]
During system initialization all interrupt vectors are set up. On x86, apart from the vectors the system reserves for itself, every interrupt is routed through common_interrupt into do_IRQ, so the vectors can be handed out as resources.

/*
 * Build the entry stubs with some assembler magic.
 * We pack 1 stub into every 8-byte block.
 */
    .align 8
ENTRY(irq_entries_start)
    vector=FIRST_EXTERNAL_VECTOR
    .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
    pushl   $(~vector+0x80)           /* Note: always in signed byte range */
    vector=vector+1
    jmp common_interrupt
    .align  8
    .endr
END(irq_entries_start)

[:/linux4.13.12/arch/x86/entry/entry_32.S]

void __init native_init_IRQ(void)
{
    int i;

    /* Execute any quirks before the call gates are initialised: */
    x86_init.irqs.pre_vector_init();

    apic_intr_init();

    /*
     * Cover the whole vector space, no vector can escape
     * us. (some of these will be overridden and become
     * 'special' SMP interrupts)
     */
    i = FIRST_EXTERNAL_VECTOR;
#ifndef CONFIG_X86_LOCAL_APIC
#define first_system_vector NR_VECTORS
#endif
    for_each_clear_bit_from(i, used_vectors, first_system_vector) {
        /* IA32_SYSCALL_VECTOR could be used in trap_init already. */
        set_intr_gate(i, irq_entries_start +
                8 * (i - FIRST_EXTERNAL_VECTOR));
    }
#ifdef CONFIG_X86_LOCAL_APIC
    for_each_clear_bit_from(i, used_vectors, NR_VECTORS)
        set_intr_gate(i, spurious_interrupt);
#endif

    if (!acpi_ioapic && !of_ioapic && nr_legacy_irqs())
        setup_irq(2, &irq2);

    irq_ctx_init(smp_processor_id());
}

[:/linux4.13.12/arch/x86/kernel/irqinit.c]

On x86 the kernel simply uses the IRQ number to index the desc array and calls the corresponding handler. As you can see, this is how Linux solves almost everything: the upper layer maintains the data structures, and the lower layer consults those data structures to drive the control flow.

    x86 IRQ handling path in the kernel:
        do_IRQ
            -> handle_irq
                -> generic_handle_irq_desc
                    -> run the handler installed for this IRQ
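To see how a driver plugs into this path, here is a minimal self-contained module sketch (an assumed example: the IRQ number 16 and the "demo-device" name are placeholders, not from the original notes). request_irq() installs a handler on the desc for a Linux IRQ number, and the do_IRQ -> handle_irq path above eventually invokes it:

#include <linux/module.h>
#include <linux/interrupt.h>

static int demo_irq = 16;          /* placeholder Linux IRQ number */
static int demo_cookie;            /* dev_id cookie, required for shared IRQs */

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
    /* Runs in interrupt context via do_IRQ()/handle_irq(); keep it short
     * and defer heavy work to a threaded handler or workqueue. */
    return IRQ_HANDLED;
}

static int __init demo_init(void)
{
    return request_irq(demo_irq, demo_irq_handler, IRQF_SHARED,
                       "demo-device", &demo_cookie);
}

static void __exit demo_exit(void)
{
    free_irq(demo_irq, &demo_cookie);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

A real driver would obtain the IRQ number from its bus, ACPI or the device tree rather than hard-coding it.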

How an IRQ mapping is processed

(1) Find the irq domain corresponding to the root interrupt controller.
(2) Derive the HW interrupt ID from the hardware registers and the irq domain information.
(3) Call irq_find_mapping to find the irq number corresponding to the HW interrupt ID.
(4) Call handle_IRQ (on ARM) to handle that irq number.

The IRQ domain exists to solve the allocation and management of IRQs when several interrupt controllers are present; a passage from the original documentation is quoted here.

The current design of the Linux kernel uses a single large number space where each separate IRQ source is assigned a different number. This is simple when there is only one interrupt controller, but in systems with multiple interrupt controllers the kernel must ensure that each one gets assigned non-overlapping allocations of Linux IRQ numbers.

The number of interrupt controllers registered as unique irqchips show a rising tendency: for example subdrivers of different kinds such as GPIO controllers avoid reimplementing identical callback mechanisms as the IRQ core system by modelling their interrupt handlers as irqchips, i.e. in effect cascading interrupt controllers.

Here the interrupt number loose all kind of correspondence to hardware interrupt numbers: whereas in the past, IRQ numbers could be chosen so they matched the hardware IRQ line into the root interrupt controller (i.e. the component actually fireing the interrupt line to the CPU) nowadays this number is just a number.

For this reason we need a mechanism to separate controller-local interrupt numbers, called hardware irq's, from Linux IRQ numbers.

The irq_alloc_desc*() and irq_free_desc*() APIs provide allocation of irq numbers, but they don't provide any support for reverse mapping of the controller-local IRQ (hwirq) number into the Linux IRQ number space.

The irq_domain library adds mapping between hwirq and IRQ numbers on top of the irq_alloc_desc*() API. An irq_domain to manage mapping is preferred over interrupt controller drivers open coding their own reverse mapping scheme.

irq_domain also implements translation from an abstract irq_fwspec structure to hwirq numbers (Device Tree and ACPI GSI so far), and can be easily extended to support other IRQ topology data sources.

IPI interrupts

Once irq_chip is implemented, an interrupt controller becomes an abstraction to some degree and no longer has to be a physical controller. By allocating per-core IRQ lines in the IPI irq_domain and then having one processor send a request to the interrupt controller, an interrupt can be raised on a chosen core. IPIs can be used for inter-core communication and data transfer, and also to wake up a specific processor core after it has gone to sleep.
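One concrete user of this mechanism is the cross-CPU function-call API, which delivers the call request via an IPI. A minimal module sketch is shown below; it is an assumed illustration and the target CPU 1 is a placeholder (a real caller would first check the CPU is online):

#include <linux/module.h>
#include <linux/smp.h>

/* Runs on the target CPU, in the interrupt context raised by the IPI. */
static void demo_remote_fn(void *info)
{
    pr_info("demo: running on CPU %d\n", smp_processor_id());
}

static int __init demo_init(void)
{
    /* Ask CPU 1 to run demo_remote_fn and wait for it to complete. */
    return smp_call_function_single(1, demo_remote_fn, NULL, 1);
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");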

spurious interrupt

It seems I may have triggered a spurious interrupt...

  SPU -- a spurious interrupt is some interrupt that was raised then lowered
  by some IO device before it could be fully processed by the APIC.  Hence
  the APIC sees the interrupt but does not know what device it came from.
  For this case the APIC will generate the interrupt with a IRQ vector
  of 0xff. This might also be generated by chipset bugs.

[:/linux4.13.12/Documentation/filesystems/proc.txt]

RCU mechanism

RCU [3] is essentially a low-lock or lock-free way of synchronizing (or updating) data through versioning. In workloads where reads vastly outnumber writes it can dramatically reduce the time spent on locks (think of pub/sub patterns, or node synchronization in ZooKeeper).
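A minimal sketch of the classic pattern, under assumed names (cur_cfg, cfg_lock and the functions below are illustrative, not from these notes): readers run lock-free inside rcu_read_lock()/rcu_read_unlock(), while a writer publishes a new version with rcu_assign_pointer() and frees the old one only after synchronize_rcu() guarantees all pre-existing readers are gone.

#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/errno.h>

struct config {
    int value;
};

static struct config __rcu *cur_cfg;
static DEFINE_SPINLOCK(cfg_lock);       /* serializes writers only */

/* Reader: no lock taken, just a read-side critical section. */
static int read_value(void)
{
    struct config *c;
    int v = 0;

    rcu_read_lock();
    c = rcu_dereference(cur_cfg);
    if (c)
        v = c->value;
    rcu_read_unlock();
    return v;
}

/* Writer: copy/update a new version, publish it, then reclaim the old one. */
static int update_value(int value)
{
    struct config *newc, *oldc;

    newc = kmalloc(sizeof(*newc), GFP_KERNEL);
    if (!newc)
        return -ENOMEM;
    newc->value = value;

    spin_lock(&cfg_lock);
    oldc = rcu_dereference_protected(cur_cfg, lockdep_is_held(&cfg_lock));
    rcu_assign_pointer(cur_cfg, newc);  /* publish the new version */
    spin_unlock(&cfg_lock);

    synchronize_rcu();                  /* wait for readers of the old version */
    kfree(oldc);
    return 0;
}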

When multiple cores or threads execute concurrently, the data they share must be protected; otherwise several runtimes operating on the same memory in an unpredictable order will corrupt it. On a machine with multiple runtimes, protecting data means ruling out truly simultaneous access, and that is what the machine's atomic operations provide: they complete a modification of a memory word and write it back within the span of a single instruction. On top of this primitive one can build the slightly coarser spinlock, and from there the coarser-grained semaphore, mutex and so on.
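As a conceptual sketch only (this is not how the kernel's spinlocks are actually implemented), a test-and-set "lock" built directly on an atomic compare-and-exchange shows the step from a single atomic instruction to a slightly coarser locking primitive; demo_lock_word and the function names are assumptions for illustration:

#include <linux/atomic.h>
#include <asm/processor.h>

static atomic_t demo_lock_word = ATOMIC_INIT(0);

static void demo_lock(void)
{
    /* Atomically flip 0 -> 1; if someone else holds it, spin. A real
     * implementation also handles preemption, IRQs, memory ordering
     * and fairness, which this sketch ignores. */
    while (atomic_cmpxchg(&demo_lock_word, 0, 1) != 0)
        cpu_relax();            /* the "spin" the text describes */
}

static void demo_unlock(void)
{
    atomic_set(&demo_lock_word, 0);
}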

Spinlocks

When taking a spinlock, interrupts are saved and disabled and preemption is disabled so that the locking sequence cannot be interrupted. Locking and unlocking are themselves very fast operations, but if the holder were interrupted to run an IRQ and then preempted by a higher-priority runtime, the lock would not only lose its efficiency; worse, the inconsistent data could cause deadlocks and other errors. The lock guarantees that between lock and unlock the holder's view of the data cannot be disturbed by other runtimes, which wait by spinning on their processor (a stream of NOP-like instructions). Besides protecting data, a spinlock can also be used to synchronize two runtimes: one runtime holds the lock so that the other blocks when it requests it, and once the lock is released the two runtimes are aligned at that point, at a certain granularity.

#define BUILD_LOCK_OPS(op, locktype)                    \
void __lockfunc __raw_##op##_lock(locktype##_t *lock)          \
{                                   \
    for (;;) {                          \
        preempt_disable();                  \
        if (likely(do_raw_##op##_trylock(lock)))        \
            break;                      \
        preempt_enable();                   \
                                    \
        if (!(lock)->break_lock)                \
            (lock)->break_lock = 1;             \
        while (!raw_##op##_can_lock(lock) && (lock)->break_lock)\
            arch_##op##_relax(&lock->raw_lock);     \
    }                               \
    (lock)->break_lock = 0;                     \
}                                   \
                                    \
unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)  \
{                                   \
    unsigned long flags;                        \
                                    \
    for (;;) {                          \
        preempt_disable();                  \
        local_irq_save(flags);                  \
        if (likely(do_raw_##op##_trylock(lock)))        \
            break;                      \
        local_irq_restore(flags);               \
        preempt_enable();                   \
                                    \
        if (!(lock)->break_lock)                \
            (lock)->break_lock = 1;             \
        while (!raw_##op##_can_lock(lock) && (lock)->break_lock)\
            arch_##op##_relax(&lock->raw_lock);     \
    }                               \
    (lock)->break_lock = 0;                     \
    return flags;                           \
}                                   \
                                    \
void __lockfunc __raw_##op##_lock_irq(locktype##_t *lock)       \
{                                   \
    _raw_##op##_lock_irqsave(lock);                 \
}                                   \
                                    \
void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)        \
{                                   \
    unsigned long flags;                        \
                                    \
    /* Careful: we must exclude softirqs too, hence the */  \
    /* irq-disabling. We use the generic preemption-aware   */  \
    /* function:                        */  \
    flags = _raw_##op##_lock_irqsave(lock);             \
    local_bh_disable();                     \
    local_irq_restore(flags);                   \
}                                   \

[:/linux4.13.12/kernel/locking/spinlock.c]
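A typical usage sketch of the irqsave variant built by the macro above (demo_lock, shared_counter and bump_counter are illustrative names, not from the original notes):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static unsigned long shared_counter;

/* May be called from both process context and an interrupt handler: the
 * irqsave variant disables local interrupts, so a handler on the same CPU
 * can never deadlock against a holder it interrupted. */
static void bump_counter(void)
{
    unsigned long flags;

    spin_lock_irqsave(&demo_lock, flags);
    shared_counter++;
    spin_unlock_irqrestore(&demo_lock, flags);
}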

Mutex and Semaphore

On top of spinlocks and the atomic operations the hardware provides, more flexible locking schemes can be built as needed. After one thread takes a lock, the other threads do not have to burn machine time waiting: they can be put to sleep and released once they can acquire the lock, freeing the time for other threads. Introducing a semaphore limits how many runtimes can access the same data: acquiring it decrements the count by one, releasing it increments it, and once the count reaches zero any further requesters are put to sleep. Drawing water from a well is an intuitive analogy (one well, M buckets and one water vat): because the number of buckets is limited, a sudden burst of requests must be kept from blocking or breaking the threads' execution, just as the bucket resource itself is limited.
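Sticking with the well-and-buckets analogy, a counting semaphore initialized to the number of buckets is a direct translation of it; NR_BUCKETS and the function names below are assumptions for illustration:

#include <linux/semaphore.h>

#define NR_BUCKETS 4

static struct semaphore bucket_sem;

static void buckets_init(void)
{
    sema_init(&bucket_sem, NR_BUCKETS);
}

static void draw_water(void)
{
    down(&bucket_sem);          /* take a bucket; sleep if none is free */
    /* ... fill the bucket and pour it into the vat ... */
    up(&bucket_sem);            /* return the bucket, waking one waiter */
}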

static __always_inline bool __mutex_trylock_fast(struct mutex *lock){    unsigned long curr = (unsigned long)current;    if (!atomic_long_cmpxchg_acquire(&lock->owner, 0UL, curr))        return true;    return false;}static __always_inline bool __mutex_unlock_fast(struct mutex *lock){    unsigned long curr = (unsigned long)current;    if (atomic_long_cmpxchg_release(&lock->owner, curr, 0UL) == curr)        return true;    return false;}/* * Lock a mutex (possibly interruptible), slowpath: */static __always_inline int __sched__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,            struct lockdep_map *nest_lock, unsigned long ip,            struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx){    struct mutex_waiter waiter;    bool first = false;    struct ww_mutex *ww;    int ret;    might_sleep();    ww = container_of(lock, struct ww_mutex, base);    if (use_ww_ctx && ww_ctx) {        if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))            return -EALREADY;    }    preempt_disable();    mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);    if (__mutex_trylock(lock) ||        mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {        /* got the lock, yay! */        lock_acquired(&lock->dep_map, ip);        if (use_ww_ctx && ww_ctx)            ww_mutex_set_context_fastpath(ww, ww_ctx);        preempt_enable();        return 0;    }    spin_lock(&lock->wait_lock);    /*     * After waiting to acquire the wait_lock, try again.     */    if (__mutex_trylock(lock)) {        if (use_ww_ctx && ww_ctx)            __ww_mutex_wakeup_for_backoff(lock, ww_ctx);        goto skip_wait;    }    debug_mutex_lock_common(lock, &waiter);    debug_mutex_add_waiter(lock, &waiter, current);    lock_contended(&lock->dep_map, ip);    if (!use_ww_ctx) {        /* add waiting tasks to the end of the waitqueue (FIFO): */        list_add_tail(&waiter.list, &lock->wait_list);#ifdef CONFIG_DEBUG_MUTEXES        waiter.ww_ctx = MUTEX_POISON_WW_CTX;#endif    } else {        /* Add in stamp order, waking up waiters that must back off. */        ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);        if (ret)            goto err_early_backoff;        waiter.ww_ctx = ww_ctx;    }    waiter.task = current;    if (__mutex_waiter_is_first(lock, &waiter))        __mutex_set_flag(lock, MUTEX_FLAG_WAITERS);    set_current_state(state);    for (;;) {        /*         * Once we hold wait_lock, we're serialized against         * mutex_unlock() handing the lock off to us, do a trylock         * before testing the error conditions to make sure we pick up         * the handoff.         */        if (__mutex_trylock(lock))            goto acquired;        /*         * Check for signals and wound conditions while holding         * wait_lock. This ensures the lock cancellation is ordered         * against mutex_unlock() and wake-ups do not go missing.         */        if (unlikely(signal_pending_state(state, current))) {            ret = -EINTR;            goto err;        }        if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {            ret = __ww_mutex_lock_check_stamp(lock, &waiter, ww_ctx);            if (ret)                goto err;        }        spin_unlock(&lock->wait_lock);        schedule_preempt_disabled();        /*         * ww_mutex needs to always recheck its position since its waiter         * list is not FIFO ordered.         
*/        if ((use_ww_ctx && ww_ctx) || !first) {            first = __mutex_waiter_is_first(lock, &waiter);            if (first)                __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);        }        set_current_state(state);        /*         * Here we order against unlock; we must either see it change         * state back to RUNNING and fall through the next schedule(),         * or we must see its unlock and acquire.         */        if (__mutex_trylock(lock) ||            (first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter)))            break;        spin_lock(&lock->wait_lock);    }    spin_lock(&lock->wait_lock);acquired:    __set_current_state(TASK_RUNNING);    mutex_remove_waiter(lock, &waiter, current);    if (likely(list_empty(&lock->wait_list)))        __mutex_clear_flag(lock, MUTEX_FLAGS);    debug_mutex_free_waiter(&waiter);skip_wait:    /* got the lock - cleanup and rejoice! */    lock_acquired(&lock->dep_map, ip);    if (use_ww_ctx && ww_ctx)        ww_mutex_set_context_slowpath(ww, ww_ctx);    spin_unlock(&lock->wait_lock);    preempt_enable();    return 0;err:    __set_current_state(TASK_RUNNING);    mutex_remove_waiter(lock, &waiter, current);err_early_backoff:    spin_unlock(&lock->wait_lock);    debug_mutex_free_waiter(&waiter);    mutex_release(&lock->dep_map, 1, ip);    preempt_enable();    return ret;}static int __sched__mutex_lock(struct mutex *lock, long state, unsigned int subclass,         struct lockdep_map *nest_lock, unsigned long ip){    return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false);}void __sched mutex_lock(struct mutex *lock){    might_sleep();    if (!__mutex_trylock_fast(lock))        __mutex_lock_slowpath(lock);}/* * Release the lock, slowpath: */static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip){    struct task_struct *next = NULL;    DEFINE_WAKE_Q(wake_q);    unsigned long owner;    mutex_release(&lock->dep_map, 1, ip);    /*     * Release the lock before (potentially) taking the spinlock such that     * other contenders can get on with things ASAP.     *     * Except when HANDOFF, in that case we must not clear the owner field,     * but instead set it to the top waiter.     */    owner = atomic_long_read(&lock->owner);    for (;;) {        unsigned long old;#ifdef CONFIG_DEBUG_MUTEXES        DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current);        DEBUG_LOCKS_WARN_ON(owner & MUTEX_FLAG_PICKUP);#endif        if (owner & MUTEX_FLAG_HANDOFF)            break;        old = atomic_long_cmpxchg_release(&lock->owner, owner,                          __owner_flags(owner));        if (old == owner) {            if (owner & MUTEX_FLAG_WAITERS)                break;            return;        }        owner = old;    }    spin_lock(&lock->wait_lock);    debug_mutex_unlock(lock);    if (!list_empty(&lock->wait_list)) {        /* get the first entry from the wait-list: */        struct mutex_waiter *waiter =            list_first_entry(&lock->wait_list,                     struct mutex_waiter, list);        next = waiter->task;        debug_mutex_wake_waiter(lock, waiter);        wake_q_add(&wake_q, next);    }    if (owner & MUTEX_FLAG_HANDOFF)        __mutex_handoff(lock, next);    spin_unlock(&lock->wait_lock);    wake_up_q(&wake_q);}void __sched mutex_unlock(struct mutex *lock){#ifndef CONFIG_DEBUG_LOCK_ALLOC    if (__mutex_unlock_fast(lock))        return;#endif    __mutex_unlock_slowpath(lock, _RET_IP_);}

[:/linux4.13.12/kernel/locking/mutex.c]
The code above is the main part of the mutex implementation. The basic flow is to check whether the mutex is already held: if not, take the fast path; if it is, take the normal path and put the current task on the lock's wait list. Unlocking mirrors this.
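For comparison with the implementation above, this is all a user of the API sees (demo_mutex, shared_state and update_state are illustrative names); the uncontended case stays entirely in the fast path:

#include <linux/mutex.h>

static DEFINE_MUTEX(demo_mutex);
static int shared_state;

/* Uncontended: __mutex_trylock_fast() is one cmpxchg on ->owner.
 * Contended: the slow path queues us on wait_list and sleeps, so this
 * is only valid in process context. */
static void update_state(int v)
{
    mutex_lock(&demo_mutex);
    shared_state = v;
    mutex_unlock(&demo_mutex);
}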

MCS locks

MCS locks and qspinlocks

MM

cma

bootmem & memblock

buddy && slab

ksm

compaction

highmem

LRU page reclaim

mmap

Memory synchronization and cache handling

IPC

Net

Security

Useful techniques

  • Use ##_ops-style tables of function pointers to get the effect of C++ virtual functions.
static const struct vm_operations_struct perf_mmap_vmops = {
    .open       = perf_mmap_open,
    .close      = perf_mmap_close, /* non mergable */
    .fault      = perf_mmap_fault,
    .page_mkwrite   = perf_mmap_fault,
};
  • Use workqueues for asynchronous, batched processing of jobs.
  • Use iters to implement iterator state machines.
  • Use container_of for containers and intrusive data structures (see the sketch after this list).
  • Use __weak to implement patchable weak symbols.
/*
 * Function to perform processor-specific cleanup during unregistration
 */
__weak void arch_unregister_hw_breakpoint(struct perf_event *bp)
{
    /*
     * A weak stub function here for those archs that don't define
     * it inside arch/.../kernel/hw_breakpoint.c
     */
}
  • The layered, hook-sliced architecture used in the network stack, which supports complex security controls and customized use of the network.
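As an illustration of the container_of item above, a minimal sketch (names are hypothetical) of an intrusive list: the generic list code only ever sees the embedded list_head, and container_of recovers the enclosing object from it.

#include <linux/kernel.h>
#include <linux/list.h>

struct demo_item {
    int value;
    struct list_head node;      /* intrusive link embedded in the object */
};

static LIST_HEAD(demo_list);

static int demo_sum(void)
{
    struct list_head *pos;
    int sum = 0;

    list_for_each(pos, &demo_list) {
        /* pos points at the embedded node; recover the demo_item around it */
        struct demo_item *item = container_of(pos, struct demo_item, node);

        sum += item->value;
    }
    return sum;
}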

Reflections on reading the kernel source

  1. The kernel indexes data through linked lists a great deal, which is part of why Linux feels more "direct" to use than Windows, and perhaps it is also faster. By contrast, Windows, VMware Workstation and similar software may not feel as direct, but subjectively they feel more solid; those projects seem to use contiguous tables and chained blocks more than linked lists, though perhaps that is simply better QA in closed-source software.
  2. The kernel uses modular, pluggable designs everywhere; the overall pattern is:

    • build a data model
    • implement static data-manipulation functions and the business logic
    • add hooks at the relevant points in the kernel's processing, and let the data structures drive the control
    • expose the corresponding system calls or filesystem interfaces to user space

    This kind of structure suits a large team maintaining a complex system of independent modules: it is highly modular and quite object-oriented, but looked at closely it is not very abstract. Personally I do not find the code as pleasant to read as LLVM's: although LLVM is complex (it took me two months to get through), it is highly abstract and very disciplined, and I keep imagining how nice it would be if Linux were written like LLVM…

  3. Following on from the previous point, I have a somewhat odd idea (pointers from experts are welcome in the comments):
    Divide Linux memory into a few basic types according to how it is used and attach them to the VFS for management, allocating memory to the allocators through the VFS.

    • Pinned memory (Pin)
    • Movable memory (Movable)
    • Relocatable memory
    • Memory resizable within a certain range (Resizable)
    • Compressible memory (Compressable)
    • Swappable memory (Swap)
    • DMA memory
    • Mergeable memory (Mergeable)
    • Kernel memory versus user memory
    • Allocatable memory and holes (Hole)
    • Remote memory (file-based or remote-device memory)
      Once the memory is obtained, combine it with the allocator to build memory-backed files and bind them into a process's virtual address space. This would remove the need for shm, pipes and the other data-exchange mechanisms, with the properties of the memory controlled through ioctl. Techniques such as KSM could then be implemented in the filesystem via VFS, and memory-based CGroup resource control could also move into the filesystem. Looking further ahead, if computation could be done directly on the memory chips, this approach would be an even better fit for resource management.

latent_entropy: Mark functions with __latent_entropy

The __latent_entropy gcc attribute can be used only on functions and
variables.  If it is on a function then the plugin will instrument it for
gathering control-flow entropy. If the attribute is on a variable then
the plugin will initialize it with random contents.  The variable must
be an integer, an integer array type or a structure with integer fields.

These specific functions have been selected because they are init
functions (to help gather boot-time entropy), are called at unpredictable
times, or they have variable loops, each of which provide some level of
latent entropy.

Signed-off-by: Emese Revfy <re.emese@gmail.com>
[kees: expanded commit message]
Signed-off-by: Kees Cook <keescook@chromium.org>