Linux 2.6.32: a brief look at task_struct, the PCB


The PCB definition under Linux (CentOS)

Most posts in this category are drawn from the book 《深入分析Linux内核源代码》 (In-Depth Analysis of the Linux Kernel Source Code); download it if you want the full treatment. Readers not yet familiar with operating systems should brush up on the basics before reading this post. First, what is a task_struct? Each process in Linux is described by a task_struct data structure.

I. Description of the task_struct structure

1. Process state

The state field of task_struct records which of the following states the process is currently in; these are the letters shown by tools such as ps and top:
R running: runnable, i.e. running or ready to run
S sleeping: interruptible sleep
D disk sleep: uninterruptible sleep; it cannot be terminated by ordinary means (short of a reboot) and is usually waiting for I/O to finish
T stopped: stopped
t tracing stop: stopped by a tracer, e.g. a debugger
X dead: being destroyed
Z zombie: terminated but not yet reaped; it still holds resources, and its exit status is kept around until the parent collects it

(1) Runnable state

A process in this state is either running or ready to run. The running process is the current process (the one the current macro points to); a ready process is waiting only for the CPU, the single system resource it still needs, and can run as soon as it gets one. The system keeps a run queue (run_queue) holding all runnable processes, from which the scheduler picks the next process to run. The currently running process always remains on this queue; in other words, current always points to some element of the run queue, and which element it points to is up to the scheduler.

(2) Wait states

A process in this state is waiting for some event or resource, and sits on one of the system's wait queues (wait_queue). Linux distinguishes two kinds of wait state: interruptible and uninterruptible. A process in interruptible sleep can be woken by a signal; when one arrives, the process moves to the runnable state and joins the run queue to await scheduling. A process in uninterruptible sleep is waiting because some hardware condition is not satisfied, for example a specific system resource; it cannot be interrupted under any circumstances and can only be woken explicitly, e.g. by a wake-up function such as wake_up().

(3) Stopped state

The process is temporarily stopped to receive some special treatment. A process usually enters this state after receiving SIGSTOP (signal 19), SIGTSTP, SIGTTIN, or SIGTTOU; a process being debugged, for example, is in this state.
kill -SIGSTOP 7344   # stop the process with PID 7344
kill -SIGCONT 7344   # resume it

(4) Zombie state

The process has terminated, but for some reason its parent has not yet called the wait() system call, so the information about the terminated process has not been collected. As the name suggests, a process in this state is dead; it is effectively garbage in the system and must be dealt with so that the resources it still occupies are released.

2. Scheduling information

The scheduler uses this information to decide which process in the system most deserves to run, combining it with the state information to keep the system fair and efficient. It typically includes the process's class (ordinary or real-time) and its priority:
need_resched: when set, the scheduler schedule() is invoked at the next scheduling opportunity;
counter: the process's remaining time slice and the main basis for scheduling decisions; it can also be viewed as the process's dynamic priority, since it keeps decreasing as the process runs;
nice: the process's static priority, which also determines the time slice used to refill counter; it can be changed with the nice() system call;
policy: the scheduling policy applied to this process; real-time and ordinary processes are scheduled differently;
rt_priority: meaningful only for real-time processes, where it drives scheduling.

3. Identifiers

Every process carries several identifiers, among them the process ID (pid), the thread-group ID (tgid) shared by all threads in a group, and the user and group IDs that determine its access rights.

4. Inter-process communication information (IPC)

For processes to cooperate on the same task, they must be able to communicate, i.e. exchange data. Linux supports several different communication mechanisms: the classic UNIX IPC mechanisms, signals and pipes, as well as the System V / POSIX mechanisms: shared memory, semaphores, and message queues.

5. Virtual memory information

Except for kernel threads, every process owns its own address space (also called its virtual space), described by an mm_struct. Linux 2.4 added a second field, active_mm, precisely for kernel threads: since a kernel thread has no address space of its own, giving it an active_mm lets kernel threads and ordinary processes share one uniform context-switch path. When a kernel thread is switched in, its active_mm is pointed at the mm_struct of the process that was just scheduled out.

6. Kernel threads

Kernel threads are periodically woken and scheduled by the kernel to carry out background system work such as page swapping, flushing disk caches, and servicing network connections.
• A kernel thread executes kernel functions directly, whereas an ordinary process can execute kernel functions only through system calls.
• A kernel thread runs only in kernel mode, whereas an ordinary process runs both in user mode and in kernel mode.
• Because it runs only in kernel mode, a kernel thread can use only the address space above PAGE_OFFSET (3 GB on 32-bit x86). An ordinary process, by contrast, can use the full 4 GB address space across user and kernel mode.

7. Wait queues

A process must often wait for some event: the completion of a disk operation, the release of a system resource, or the passing of a fixed time interval. Wait queues implement conditional waits on events: a process that wants to wait for a particular event puts itself on the appropriate wait queue and gives up control. A wait queue therefore represents a set of sleeping processes, which the kernel wakes up when some condition becomes true. Wait queues are implemented as circular linked lists.
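In kernel code the pattern looks roughly like this sketch (2.6-era API; not runnable outside a kernel module, and the function names consumer/producer are illustrative only):

```c
/* Kernel-side sketch: conditional waiting on a wait queue. */
static DECLARE_WAIT_QUEUE_HEAD(my_wq);   /* head of the circular list */
static int condition_flag;

/* A task needing the event puts itself on the queue and sleeps
 * (in interruptible sleep, state S) until the condition holds: */
static void consumer(void)
{
    wait_event_interruptible(my_wq, condition_flag != 0);
    /* ... condition_flag is non-zero here ... */
}

/* Whoever makes the condition true wakes the sleepers with the
 * wake_up() function mentioned above: */
static void producer(void)
{
    condition_flag = 1;
    wake_up(&my_wq);
}
```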

8. The run queue

When the kernel looks for a new process to run on a CPU, it need only consider runnable processes (those in the TASK_RUNNING state). Because scanning the whole process list for them would be quite inefficient, the kernel keeps a doubly linked circular list of runnable processes, called the run queue.
Each process is linked into this queue through the two pointers of the run_list field in its task_struct. The queue has two notable markers: one is the "idle process" idle_task; the other is the queue length, i.e. the number of processes in the system in the runnable state (TASK_RUNNING), kept in the global integer variable nr_running.

Code listing

```c
struct task_struct {
    volatile long state;        /* -1 unrunnable, 0 runnable, >0 stopped */
    void *stack;                /* points to the thread_info struct */
    atomic_t usage;             /* how many users of this structure */
    unsigned int flags;         /* per-process flags, defined below;
                                 * status information, not the run state */
    unsigned int ptrace;
#ifdef CONFIG_SMP
    struct task_struct *wake_entry;
    int on_cpu;                 /* which CPU the task is running on */
#endif
    int on_rq;                  /* whether the entity is currently on a run queue */

    int prio, static_prio, normal_prio; /* dynamic and static priorities */
    /* The task structure employs three elements to denote the priority of a
     * process: prio and normal_prio indicate the dynamic priorities,
     * static_prio the static priority. The static priority is assigned when
     * the process starts; it can be modified with the nice() and
     * sched_setscheduler() system calls but otherwise remains constant.
     * normal_prio is computed from the static priority and the scheduling
     * policy, so identical static priorities yield different normal
     * priorities depending on whether a process is regular or real-time; a
     * forked child inherits normal_prio. The priority the scheduler actually
     * considers, however, is prio: a third element is required because the
     * kernel sometimes needs to boost a process's priority temporarily, and
     * such non-permanent changes must not affect the static and normal
     * priorities. */
    unsigned int rt_priority;               /* priority of a real-time task */
    const struct sched_class *sched_class;  /* scheduler operations */
    struct sched_entity se;                 /* scheduling entity */
    struct sched_rt_entity rt;              /* real-time scheduling entity */
#ifdef CONFIG_PREEMPT_NOTIFIERS
    struct hlist_head preempt_notifiers;    /* list of struct preempt_notifier */
#endif
    /* fpu_counter contains the number of consecutive context switches in
     * which the FPU was used. Above a threshold, lazy FPU saving becomes
     * unlazy to save the trap. It is an unsigned char so that after 256
     * times the counter wraps and the behaviour turns lazy again; this deals
     * with bursty apps that use the FPU only for a short time. */
    unsigned char fpu_counter;
#ifdef CONFIG_BLK_DEV_IO_TRACE
    unsigned int btrace_seq;
#endif
    unsigned int policy;        /* scheduling policy */
    cpumask_t cpus_allowed;     /* bitmap of the CPUs this task may run on,
                                 * one bit per CPU number; in general only
                                 * nr_cpu_ids (<= NR_CPUS) bits are valid */

#ifdef CONFIG_PREEMPT_RCU
    int rcu_read_lock_nesting;  /* RCU is a newer locking mechanism; see
                                 * http://blog.csdn.net/sunnybeike/article/details/6866473 */
    char rcu_read_unlock_special;
#if defined(CONFIG_RCU_BOOST) && defined(CONFIG_TREE_PREEMPT_RCU)
    int rcu_boosted;
#endif
    struct list_head rcu_node_entry;
#endif /* CONFIG_PREEMPT_RCU */
#ifdef CONFIG_TREE_PREEMPT_RCU
    struct rcu_node *rcu_blocked_node;
#endif
#ifdef CONFIG_RCU_BOOST
    struct rt_mutex *rcu_boost_mutex;
#endif
#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
    struct sched_info sched_info;   /* scheduling statistics: time on the CPU,
                                     * time spent waiting on a queue, etc. */
#endif
    struct list_head tasks;     /* links this task into the global task list */
#ifdef CONFIG_SMP
    struct plist_node pushable_tasks;
#endif

    struct mm_struct *mm, *active_mm;   /* mm: the process's memory-management info */
    /* On mm and active_mm (lazy TLB): if the next task to run will not touch
     * user space, there is no need to flush the TLB on the switch. A kernel
     * thread runs in kernel space and its mm pointer is NULL. In the context
     * switch, if (unlikely(!mm)) tests whether the incoming task is a kernel
     * thread; since the kernel requires every task to have an mm_struct, the
     * kernel thread borrows the outgoing task's mm (next->active_mm = oldmm),
     * creating an "anonymous user"; atomic_inc(&oldmm->mm_count) bumps the
     * reference count, and enter_lazy_tlb() enters lazy-TLB mode (a no-op on
     * UP). Conversely, if (unlikely(!prev->mm)) tests whether the outgoing
     * task is a kernel thread, in which case the borrowed mm_struct is
     * released. Both tests inspect the mm pointer, while the mm actually
     * switched to is active_mm. On SMP, a CPU running a kernel thread that
     * receives a TLB-flush request can stay in lazy-TLB mode and defer the
     * flush until it switches to a task with different page tables; a CPU
     * running an ordinary process must flush immediately. In most cases mm
     * and active_mm are identical; they differ exactly while a kernel thread
     * runs (mm == NULL, active_mm == the previous task's mm). See 深入分析
     * Linux内核源代码 p. 78 and http://www.linuxsir.org/bbs/thread166288.html */
#ifdef CONFIG_COMPAT_BRK
    unsigned brk_randomized:1;
#endif
#if defined(SPLIT_RSS_COUNTING)
    struct task_rss_stat rss_stat;  /* RSS: the total memory actually held in
                                     * RAM for the process; see
                                     * http://blog.csdn.net/sunnybeike/article/details/6867112 */
#endif
/* task state */
    int exit_state;             /* state of the process while it exits */
    int exit_code, exit_signal; /* exit code and the signal sent on exit */
    int pdeath_signal;          /* the signal sent when the parent dies */
    unsigned int group_stop;    /* GROUP_STOP_*, siglock protected */
    unsigned int personality;   /* since Unix has many versions and variants,
                                 * each process records which "personality" it
                                 * runs under; see the macros in personality.h */
    unsigned did_exec:1;        /* per POSIX: is the process running its
                                 * original code or code loaded by execve()? */
    unsigned in_execve:1;       /* tell the LSMs that the process is doing an execve */
    unsigned in_iowait:1;
    unsigned sched_reset_on_fork:1; /* revert to default priority/policy on fork */
    unsigned sched_contributes_to_load:1;

    pid_t pid;                  /* process ID */
    pid_t tgid;                 /* thread group ID */
#ifdef CONFIG_CC_STACKPROTECTOR
    unsigned long stack_canary; /* canary value for the -fstack-protector gcc feature */
#endif
    /* Pointers to the (original) parent process, youngest child, younger
     * sibling, older sibling, respectively. (p->father can be replaced
     * with p->real_parent->pid.) */
    struct task_struct *real_parent;    /* real parent process */
    struct task_struct *parent;         /* recipient of SIGCHLD, wait4() reports */
    struct list_head children;          /* list of my children */
    struct list_head sibling;           /* linkage in my parent's children list */
    struct task_struct *group_leader;   /* thread-group leader */
    /* ptraced is the list of tasks this task is using ptrace on, including
     * both natural children and PTRACE_ATTACH targets; p->ptrace_entry is
     * p's link on the p->parent->ptraced list. */
    struct list_head ptraced;
    struct list_head ptrace_entry;

    struct pid_link pids[PIDTYPE_MAX];  /* PID/PID hash table linkage */
    struct list_head thread_group;

    struct completion *vfork_done;      /* for vfork(): if the vfork mechanism
                                         * was used (the kernel recognizes this
                                         * by the CLONE_VFORK flag), the child's
                                         * completion is signalled through here */
    int __user *set_child_tid;          /* CLONE_CHILD_SETTID */
    int __user *clear_child_tid;        /* CLONE_CHILD_CLEARTID */

    cputime_t utime, stime, utimescaled, stimescaled;
                                /* utime: CPU time spent in user mode; stime:
                                 * CPU time spent in kernel mode; the *scaled
                                 * variants are the same times in a different
                                 * (frequency-scaled) unit */
    cputime_t gtime;            /* guest time */
#ifndef CONFIG_VIRT_CPU_ACCOUNTING
    cputime_t prev_utime, prev_stime;
#endif
    unsigned long nvcsw, nivcsw;        /* context switch counts */
    struct timespec start_time;         /* monotonic time */
    struct timespec real_start_time;    /* boot based time */
    /* mm fault and swap info: this can arguably be seen as either
     * mm-specific or thread-specific */
    unsigned long min_flt, maj_flt;     /* minor/major page fault counts */

    struct task_cputime cputime_expires;    /* expiry values for the process's CPU-time timers */
    struct list_head cpu_timers[3];         /* pending POSIX CPU timers, one list per clock */

/* process credentials -- see the comments in the cred structure's header */
    const struct cred __rcu *real_cred; /* objective and real subjective task credentials (COW) */
    const struct cred __rcu *cred;      /* effective (overridable) subjective task credentials (COW) */
    struct cred *replacement_session_keyring;   /* for KEYCTL_SESSION_TO_PARENT */

    char comm[TASK_COMM_LEN];   /* executable name excluding path
                                 * - access with [gs]et_task_comm (which locks
                                 *   it with task_lock())
                                 * - initialized normally by setup_new_exec */
/* file system info */
    int link_count, total_link_count;   /* counters used when following nested symlinks */
#ifdef CONFIG_SYSVIPC
/* ipc stuff */
    struct sysv_sem sysvsem;    /* System V semaphore undo state */
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
/* hung task detection */
    unsigned long last_switch_count;
#endif
/* CPU-specific state of this task */
    struct thread_struct thread;    /* task_struct itself is architecture-independent;
                                     * thread_struct holds the architecture-specific state */
/* filesystem information */
    struct fs_struct *fs;
/* open file information */
    struct files_struct *files;
/* namespaces: see Professional Linux Kernel Architecture section 2.3.2,
 * or http://book.51cto.com/art/201005/200881.htm */
    struct nsproxy *nsproxy;
/* signal handlers */
    struct signal_struct *signal;
    struct sighand_struct *sighand;
    sigset_t blocked, real_blocked;
    sigset_t saved_sigmask;     /* restored if set_restore_sigmask() was used */
    struct sigpending pending;  /* signals received but not yet handled */

    unsigned long sas_ss_sp;
    size_t sas_ss_size;
    /* Although signal handling takes place in the kernel, installed signal
     * handlers run in user mode -- otherwise it would be easy to introduce
     * malicious or faulty code into the kernel and undermine its security
     * mechanisms. Handlers generally use the process's user-mode stack, but
     * POSIX mandates the option of running them on a stack set up
     * specifically for this purpose (via the sigaltstack system call); the
     * address and size of that explicitly user-allocated stack are held in
     * sas_ss_sp and sas_ss_size, respectively. (Professional Linux Kernel
     * Architecture, p. 384) */
    int (*notifier)(void *priv);
    void *notifier_data;
    sigset_t *notifier_mask;

    struct audit_context *audit_context;    /* see Professional Linux Kernel Architecture, p. 1100 */
#ifdef CONFIG_AUDITSYSCALL
    uid_t loginuid;
    unsigned int sessionid;
#endif
    seccomp_t seccomp;

/* Thread group tracking */
    u32 parent_exec_id;
    u32 self_exec_id;
/* Protection of (de-)allocation: mm, files, fs, tty, keyrings,
 * mems_allowed, mempolicy */
    spinlock_t alloc_lock;
#ifdef CONFIG_GENERIC_HARDIRQS
/* IRQ handler threads */
    struct irqaction *irqaction;
#endif
/* Protection of the PI (priority inheritance) data structures: */
    raw_spinlock_t pi_lock;
#ifdef CONFIG_RT_MUTEXES   /* RT: real-time tasks */
    struct plist_head pi_waiters;           /* PI waiters blocked on an rt_mutex held by this task */
    struct rt_mutex_waiter *pi_blocked_on;  /* deadlock detection and PI handling */
#endif
#ifdef CONFIG_DEBUG_MUTEXES
    struct mutex_waiter *blocked_on;        /* mutex deadlock detection */
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
    unsigned int irq_events;
    unsigned long hardirq_enable_ip;
    unsigned long hardirq_disable_ip;
    unsigned int hardirq_enable_event;
    unsigned int hardirq_disable_event;
    int hardirqs_enabled;
    int hardirq_context;
    unsigned long softirq_disable_ip;
    unsigned long softirq_enable_ip;
    unsigned int softirq_disable_event;
    unsigned int softirq_enable_event;
    int softirqs_enabled;
    int softirq_context;
#endif
#ifdef CONFIG_LOCKDEP
# define MAX_LOCK_DEPTH 48UL
    u64 curr_chain_key;
    int lockdep_depth;          /* lock nesting depth */
    unsigned int lockdep_recursion;
    struct held_lock held_locks[MAX_LOCK_DEPTH];
    gfp_t lockdep_reclaim_gfp;
#endif
/* journalling filesystem info */
    void *journal_info;
/* stacked block device info */
    struct bio_list *bio_list;
#ifdef CONFIG_BLOCK
    struct blk_plug *plug;      /* stack plugging */
#endif
/* VM state */
    struct reclaim_state *reclaim_state;
    struct backing_dev_info *backing_dev_info;
    struct io_context *io_context;

    unsigned long ptrace_message;
    siginfo_t *last_siginfo;    /* for ptrace use */
    struct task_io_accounting ioac; /* records this task's I/O statistics */
#if defined(CONFIG_TASK_XACCT)
    u64 acct_rss_mem1;          /* accumulated rss usage */
    u64 acct_vm_mem1;           /* accumulated virtual memory usage */
    cputime_t acct_timexpd;     /* stime + utime since last update */
#endif
#ifdef CONFIG_CPUSETS
    nodemask_t mems_allowed;    /* protected by alloc_lock */
    int mems_allowed_change_disable;
    int cpuset_mem_spread_rotor;
    int cpuset_slab_spread_rotor;
#endif
#ifdef CONFIG_CGROUPS
    /* Control Group info protected by css_set_lock */
    struct css_set __rcu *cgroups;
    /* cg_list protected by css_set_lock and tsk->alloc_lock */
    struct list_head cg_list;
#endif
#ifdef CONFIG_FUTEX
    struct robust_list_head __user *robust_list;
#ifdef CONFIG_COMPAT
    struct compat_robust_list_head __user *compat_robust_list;
#endif
    struct list_head pi_state_list;
    struct futex_pi_state *pi_state_cache;
#endif
#ifdef CONFIG_PERF_EVENTS
    struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
    struct mutex perf_event_mutex;
    struct list_head perf_event_list;
#endif
#ifdef CONFIG_NUMA
    struct mempolicy *mempolicy;    /* protected by alloc_lock */
    short il_next;
    short pref_node_fork;
#endif
    atomic_t fs_excl;           /* holding fs exclusive resources; 0 means no */
    struct rcu_head rcu;

    struct pipe_inode_info *splice_pipe;    /* cache last used pipe for splice */
#ifdef CONFIG_TASK_DELAY_ACCT
    struct task_delay_info *delays;
#endif
#ifdef CONFIG_FAULT_INJECTION
    int make_it_fail;
#endif
    struct prop_local_single dirties;
#ifdef CONFIG_LATENCYTOP
    int latency_record_count;
    struct latency_record latency_record[LT_SAVECOUNT];
#endif
    /* Time slack values, in nanoseconds; used to round up poll() and
     * select() etc. timeout values. */
    unsigned long timer_slack_ns;
    unsigned long default_timer_slack_ns;

    struct list_head *scm_work_list;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
    int curr_ret_stack;         /* index of current stored address in ret_stack */
    struct ftrace_ret_stack *ret_stack; /* stack of return addresses for return function tracing */
    unsigned long long ftrace_timestamp;    /* time stamp of last schedule */
    atomic_t trace_overrun;     /* number of functions not traced because of depth overrun */
    atomic_t tracing_graph_pause;   /* pause for the tracing */
#endif
#ifdef CONFIG_TRACING
    unsigned long trace;            /* state flags for use by tracers */
    unsigned long trace_recursion;  /* bitmask and counter of trace recursion */
#endif /* CONFIG_TRACING */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
    struct memcg_batch_info {       /* memcg uses this to do batch jobs */
        int do_batch;               /* incremented when batch uncharge started */
        struct mem_cgroup *memcg;   /* target memcg of uncharge */
        unsigned long nr_pages;     /* uncharged usage */
        unsigned long memsw_nr_pages;   /* uncharged mem+swap usage */
    } memcg_batch;
#endif
#ifdef CONFIG_HAVE_HW_BREAKPOINT
    atomic_t ptrace_bp_refcnt;
#endif
};
```

2. Following an online guide to browsing the source: on my CentOS system, first switch to root with su, then cd to the / directory (note: /, not /root), and walk down the tree (/ → usr → include → linux → sched.h). There you will find a file like this:

```c
#ifndef _LINUX_SCHED_H
#define _LINUX_SCHED_H

/*
 * cloning flags:
 */
#define CSIGNAL         0x000000ff  /* signal mask to be sent at exit */
#define CLONE_VM        0x00000100  /* set if VM shared between processes */
#define CLONE_FS        0x00000200  /* set if fs info shared between processes */
#define CLONE_FILES     0x00000400  /* set if open files shared between processes */
#define CLONE_SIGHAND   0x00000800  /* set if signal handlers and blocked signals shared */
#define CLONE_PTRACE    0x00002000  /* set if we want to let tracing continue on the child too */
#define CLONE_VFORK     0x00004000  /* set if the parent wants the child to wake it up on mm_release */
#define CLONE_PARENT    0x00008000  /* set if we want to have the same parent as the cloner */
#define CLONE_THREAD    0x00010000  /* Same thread group? */
#define CLONE_NEWNS     0x00020000  /* New namespace group? */
#define CLONE_SYSVSEM   0x00040000  /* share system V SEM_UNDO semantics */
#define CLONE_SETTLS    0x00080000  /* create a new TLS for the child */
#define CLONE_PARENT_SETTID     0x00100000  /* set the TID in the parent */
#define CLONE_CHILD_CLEARTID    0x00200000  /* clear the TID in the child */
#define CLONE_DETACHED          0x00400000  /* Unused, ignored */
#define CLONE_UNTRACED          0x00800000  /* set if the tracing process can't force CLONE_PTRACE on this clone */
#define CLONE_CHILD_SETTID      0x01000000  /* set the TID in the child */
/* 0x02000000 was previously the unused CLONE_STOPPED (Start in stopped state)
   and is now available for re-use. */
#define CLONE_NEWUTS    0x04000000  /* New utsname group? */
#define CLONE_NEWIPC    0x08000000  /* New ipcs */
#define CLONE_NEWUSER   0x10000000  /* New user namespace */
#define CLONE_NEWPID    0x20000000  /* New pid namespace */
#define CLONE_NEWNET    0x40000000  /* New network namespace */
#define CLONE_IO        0x80000000  /* Clone io context */

/*
 * Scheduling policies
 */
#define SCHED_NORMAL    0
#define SCHED_FIFO      1
#define SCHED_RR        2
#define SCHED_BATCH     3
/* SCHED_ISO: reserved but not implemented yet */
#define SCHED_IDLE      5
/* Can be ORed in to make sure the process is reverted back to
 * SCHED_NORMAL on fork */
#define SCHED_RESET_ON_FORK     0x40000000

#endif /* _LINUX_SCHED_H */
```