Kernel stack space and user stack space

In short, a program has at least one process, and a process has at least one thread. A thread is a finer-grained unit of scheduling than a process, which is what gives multithreaded programs their higher concurrency.
In addition, a process owns an independent memory space while it runs, whereas the threads inside one process share that memory, which greatly improves the program's running efficiency.
The threads of a single process are each allocated an independent stack within the process's shared address space.
Stack: each thread has its own stack, which holds that thread's execution state and its local (automatic) variables. The stack is initialized when the thread starts, and the stacks of different threads are independent of each other, so data kept on the stack is thread safe. The data members of C++ objects created as local variables also live on the stack; every function call gets its own stack frame, and the stack is used to pass arguments between functions. When the operating system switches threads it switches stacks automatically, i.e. it reloads the stack registers (SS/ESP on x86). Stack space does not have to be explicitly allocated and freed in a high-level language.
On Linux, every thread actually has two separate stacks: a user stack and a kernel stack. (Note: one pair per thread, not per process.)
The only difference between processes and threads on Linux is that the threads of one process share that process's address space; that is all.
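The claim that automatic variables are thread safe because every thread has its own user stack can be checked with a small pthread program. This is a minimal sketch added for illustration (not part of the original post): each worker increments a local counter with no locking and prints a different address for it, because each counter lives on its own thread's stack.

/*
 * Minimal sketch (added for illustration): each thread's automatic
 * variables live on that thread's own user stack, so the two workers
 * below never interfere even without locking.
 * Build with: gcc stack_demo.c -pthread
 */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;   /* copied onto this thread's stack */
    int counter = 0;        /* automatic variable: one copy per thread */

    for (int i = 0; i < 1000000; i++)
        counter++;          /* no lock needed: the stack is private */

    printf("thread %d: counter=%d, &counter=%p\n",
           id, counter, (void *)&counter);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 1, 2 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Both threads print counter=1000000, and the two &counter addresses differ, because each thread's stack is a separate region within the shared address space.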

http://stackoverflow.com/questions/12911841/kernel-stack-and-user-space-stack

What’s the difference between kernel stack and user stack? In short,
nothing - apart from using a different location in memory (and hence a
different value for the stackpointer register), and usually different
memory access protections. I.e. when executing in user mode, kernel
memory (part of which is the kernel stack) will not be accessible even
if mapped. Vice versa, without explicitly being requested by the
kernel code (in Linux, through functions like copy_from_user()), user
memory (including the user stack) is not usually directly accessible.
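To make the copy_from_user() remark concrete, here is a hedged sketch of a character-device write handler (the name demo_write and the buffer size are invented for this example). Kernel code never dereferences a user-space pointer such as ubuf directly; it copies the data in and checks whether the copy failed.

/*
 * Illustrative sketch of a kernel-side write handler: user memory,
 * including the user stack, is copied in with copy_from_user()
 * rather than accessed directly. Names are made up for the example.
 */
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

#define DEMO_BUF_SIZE 128               /* arbitrary size for the sketch */

static char demo_buf[DEMO_BUF_SIZE];

static ssize_t demo_write(struct file *filp, const char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    size_t n = count < DEMO_BUF_SIZE ? count : DEMO_BUF_SIZE;

    /* copy_from_user() returns the number of bytes it could NOT copy */
    if (copy_from_user(demo_buf, ubuf, n))
        return -EFAULT;                 /* bad or unmapped user pointer */

    return n;                           /* bytes consumed */
}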

Why is [a separate] kernel stack used? Separation of privileges and
security. For one, userspace programs can make their stack(pointer)
anything they want, and there is usually no architectural requirement
to even have a valid one. The kernel therefore cannot trust the
userspace stackpointer to be valid nor usable, and therefore will
require one set under its own control. Different CPU architectures
implement this in different ways; x86 CPUs automatically switch
stackpointers when privilege mode switches occur, and the values to be
used for different privilege levels are configurable - by privileged
code (i.e. only the kernel).
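As a concrete illustration of the x86 behaviour described above: on 32-bit x86 the CPU fetches the ring-0 stack pointer from the Task State Segment when a user-to-kernel transition occurs, and only privileged code can write those fields. The struct below is a simplified sketch of the relevant TSS layout, not the Linux kernel's actual definition; the kernel points esp0 at the current thread's kernel stack on each context switch.

/*
 * Simplified sketch of the 32-bit x86 TSS fields involved in stack
 * switching (not the kernel's real struct; in hardware the segment
 * selector fields are 16 bits plus 16 reserved bits, shown here as
 * 32-bit slots). On a ring 3 -> ring 0 transition the CPU loads
 * SS:ESP from ss0/esp0.
 */
#include <stdint.h>

struct x86_tss_sketch {
    uint32_t prev_task_link;
    uint32_t esp0;    /* stack pointer used on entry to ring 0 */
    uint32_t ss0;     /* stack segment used on entry to ring 0 */
    uint32_t esp1;    /* rings 1 and 2 exist in hardware ...   */
    uint32_t ss1;
    uint32_t esp2;    /* ... but Linux does not use them       */
    uint32_t ss2;
    /* ... CR3, EIP, EFLAGS, general registers, segment selectors,
     *     LDT selector, I/O permission bitmap offset ... */
};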

If a local variable is declared in an ISR, where will it be stored?
On the kernel stack. The kernel (Linux kernel, that is) does not hook
ISRs directly to the x86 architecture’s interrupt gates but instead
delegates the interrupt dispatch to a common kernel interrupt
entry/exit mechanism which saves pre-interrupt register state before
calling the registered handler(s). The CPU itself when dispatching an
interrupt might execute a privilege and/or stack switch, and this is
used/set up by the kernel so that the common interrupt entry code can
already rely on a kernel stack being present. That said, interrupts
that occur while executing kernel code will simply (continue to) use
the kernel stack in place at that point. This can, if interrupt
handlers have deeply nested call paths, lead to stack overflows (if a
deep kernel call path is interrupted and the handler causes another
deep path; in Linux, filesystem / software RAID code being interrupted
by network code with iptables active is known to trigger such in
untuned older kernels … solution is to increase kernel stack sizes
for such workloads).
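A short hedged sketch of what "a local variable declared in an ISR" looks like in a Linux driver (the handler name is invented): the automatic variable inside the handler is stored on whatever kernel stack was in use when the interrupt was dispatched.

/*
 * Hedged sketch of a Linux interrupt handler; names are made up.
 * 'status' is an automatic variable, so it lives on the kernel stack.
 */
#include <linux/interrupt.h>

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
    unsigned int status = 0;   /* stored on the current kernel stack */

    /* a real driver would read the device's status register here */

    return status ? IRQ_HANDLED : IRQ_NONE;
}

/*
 * Registration, typically done in the driver's probe/init path:
 *     request_irq(irq, demo_irq_handler, IRQF_SHARED, "demo", dev);
 */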

Does each process have its own kernel stack? Not just each process -
each thread has its own kernel stack (and, in fact, its own user stack
as well). Remember the only difference between processes and threads
(to Linux) is the fact that multiple threads can share an address
space (forming a process).

How does the process coordinate between both these stacks? Not at all
- it doesn’t need to. Scheduling (how / when different threads are being run, how their state is saved and restored) is the operating
system’s task and processes don’t need to concern themselves with
this. As threads are created (and each process must have at least one
thread), the kernel creates kernel stacks for them, while userspace
stacks are either explicitly created/provided by whichever mechanism
is used to create a thread (functions like makecontext() or
pthread_create() allow the caller to specify a memory region to be
used for the “child” thread’s stack), or inherited (by on-access
memory cloning, usually called “copy on write” / COW, when creating a
new process). That said, the process can influence scheduling of its
threads and/or influence the context (state, amongst that is the
thread’s stackpointer). There are multiple ways for this: UNIX
signals, setcontext(), pthread_yield() / pthread_cancel(), … - but
this is digressing a bit from the original question.
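To make the pthread_create() point concrete, here is a hedged sketch (not part of the answer) that allocates a region with mmap() and hands it to the new thread as its user stack via pthread_attr_setstack(). The kernel stack for that thread is still created by the kernel on its own; user space never sees it.

/*
 * Sketch: supplying the user-space stack for a new thread ourselves.
 * The kernel still allocates that thread's kernel stack by itself.
 * Build with: gcc own_stack.c -pthread
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

#define STACK_SIZE (1024 * 1024)       /* 1 MiB, chosen arbitrarily */

static void *worker(void *arg)
{
    int local = 42;                    /* lives in the mmap()ed region below */
    printf("worker local at %p\n", (void *)&local);
    return NULL;
}

int main(void)
{
    void *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
    if (stack == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* Tell pthread_create() to use our region as the thread's user stack. */
    pthread_attr_setstack(&attr, stack, STACK_SIZE);

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);

    pthread_attr_destroy(&attr);
    printf("stack region: %p .. %p\n",
           stack, (void *)((char *)stack + STACK_SIZE));
    munmap(stack, STACK_SIZE);
    return 0;
}

If pthread_attr_setstack() is not used, pthread_create() simply allocates (or reuses) a stack region itself; either way each thread ends up with one user stack in the shared address space plus one kernel stack managed entirely by the kernel.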
