Let's Look at Virtualization: vTPM in Practice

Published: 2010-07-05 14:04:36

Category: LINUX

 http://blog.chinaunix.net/uid-20517852-id-1936376.html
Hi, I'm back. Today we go straight to the topic: vTPM, one of the main modules I am analyzing. In this series of articles I will study and explain how Xen works by reading the Xen source, focusing on Xen's security-related features. vTPM is a good place to start: it is a clean example of the split-driver mechanism, and it also prepares us for the next subject, Mini-OS. Not familiar with Mini-OS? Take a look at an introduction to it first.
QQ :   61304046
gmail: benbenhappy2008
我的百度空间:http://hi.baidu.com/from2_6_30_1/blog
Please credit the source when reposting :)

There are several architectural models for implementing a virtual TPM [1]. Xen implements the designs proposed by IBM [2] and Intel [3], providing both the front-end and back-end drivers of the split-driver model for vTPM. Unlike the IBM paper, however, Xen does not map the physical TPM's PCRs into the vTPM; that is, PCR0 through PCR8 are not kept identical between the two. The main point of disagreement is how to handle signing during a Quote: the vTPM signs the whole quote, but it is unclear how it should treat PCR values mapped in from the hardware TPM, since the TPM was originally designed for a single-system environment. Another open question is that in HVM mode the VM has its own BIOS/bootloader, so when an extend operation is performed, which measurement should go into which PCR? A further difference is the vTPM migration protocol, which the current Xen implementation does not appear to use (if anyone has studied this, please enlighten me).

In Xen, the vTPM subsystem consists of one or more vTPM instances, the vTPM manager tool, and hot-plug scripts. A vTPM instance is simply the TPM as seen from inside a VM; every VM that needs TPM functionality is associated with exactly one vTPM instance for its entire lifetime, a one-to-one mapping. Xen uses the tpm-emulator developed by Mario Strasser [4].
Figure 1: TPM emulator architecture (overview of the TPM emulator package)

Used as a vTPM instance, the tpm emulator needs no TPM device driver or kernel module support, but the named pipe has to be modified so that it can communicate with tpmd.
First, the readme in /usr/src/xen-unstable/tools/vtpm/ in the Xen source:
vtpmd Flow (for vtpm_manager; vtpmd is never run by default)
============================
- Launch the VTPM manager (vtpm_managerd), which begins listening to the BE with one thread and listens to a named fifo shared by the vtpms to communicate with the manager.
- VTPM Manager listens to TPM BE.
- When xend launches a tpm-frontend-equipped VM, it contacts the manager over the vtpm backend.
- When the manager receives the open message from the BE, it launches a vtpm.
- Xend allows the VM to continue booting.
- When a TPM request is issued to the front end, the front end transmits the TPM request to the backend.
- The manager receives the TPM requests and uses a named fifo to forward the request to the vtpm.
- The fifo listener begins listening for the reply from the vtpm for the request.
- The vtpm processes the request and replies to the manager over the shared named fifo.
- If needed, the vtpm may send a request to the vtpm_manager at any time to save its secrets to disk.
- The manager receives the response from the vtpm and passes it back to the backend for forwarding to the guest.
The Makefile and Rules.mk in this directory appear to pin the tpm emulator version:
13 TPM_EMULATOR_NAME = tpm_emulator-0.5.1

This is a bit rigid; a quick check of the project's website shows newer releases are already available. No matter: at build time we can point the build at a copy we download ourselves:

# cd /usr/src
# svn checkout http://svn.berlios.de/svnroot/repos/tpm-emulator/trunk tpm-emulator
# cd /usr/src/tpm-emulator
# make
# make install

Then change /usr/src/xen-unstable/Config.mk to:

165 VTPM_TOOLS         ?= y

and change /usr/src/xen-unstable/tools/vtpm/Rules.mk to:

26 BUILD_EMULATOR = n

Remember to install the gmp library :) The remaining files in this directory are all patches; nothing interesting there, so we skip them. Jumping to /usr/src/xen-unstable/tools/vtpm_manager/readme, it describes the directory layout and the basic flow; its "Single-VM Flow" section repeats, step for step, the vtpmd flow quoted above. Now let's look at the code in the manager directory. First, the Makefile tells us it builds:

4 BIN             = vtpm_managerd

which is the vTPM manager daemon. On to vtpmd.c:
262   // -------------------- Initialize Manager -----------------
263   if (VTPM_Init_Manager() != TPM_SUCCESS) {
264     vtpmlogerror(VTPM_LOG_VTPM, "Closing vtpmd due to error during startup.\n");
265     return -1;
266   }
The banner comment "Initialize Manager" makes its purpose obvious. Note the call to VTPM_Init_Manager(); let's follow it in:
Defined as a function in:
  • tools/vtpm_manager/manager/vtpm_manager.c, line 184
Let's see what it actually does:
202   // Create new TCS Object
207   // Create TCS Context for service
217   // Create OIAP session for service's authorized commands
224   // If fails, create new Manager.
225   serviceStatus = VTPM_LoadManagerData();
235   // Load Storage Key
245   // Create entry for Dom0 for control messages

I am being a bit lazy here and quoting only the comments, but they tell us enough: this initialization routine creates a new TCS object, a TCS context, and an OIAP session, and the call to VTPM_LoadManagerData() apparently loads the vTPM manager's persistent data. OK, back to vtpmd.c:

285   // ------------------- Set up file ipc structures ----------

This calls vtpm_ipc_init() to initialize the FIFO files.

313   // -------------------- Set up thread params -------------

This sets up the thread parameters.

339   // --------------------- Launch Threads -----------------

Next, three threads are launched: be_thread, dmi_thread, and hp_thread, which listen for commands and data from the back-end driver, the vTPM instances, and the hot-plug scripts respectively. The vtpm_manager_thread() they run calls the command handler VTPM_Manager_Handler() to process requests:

 83 void *vtpm_manager_thread(void *arg_void) {
 84   TPM_RESULT *status = (TPM_RESULT *) malloc(sizeof(TPM_RESULT) );
 85   struct vtpm_thread_params_s *arg = (struct vtpm_thread_params_s *) arg_void;
 86
 87   *status = VTPM_Manager_Handler(arg->tx_ipc_h, arg->rx_ipc_h,
 88                                  arg->fw_tpm, arg->fw_tx_ipc_h, arg->fw_rx_ipc_h,
 89                                  arg->is_priv, arg->thread_name);
 90
 91   return (status);
 92 }

Following it further:

 74 TPM_RESULT VTPM_Manager_Handler( vtpm_ipc_handle_t *tx_ipc_h,
 75                                  vtpm_ipc_handle_t *rx_ipc_h,
 76                                  BOOL fw_tpm,   // Forward TPM cmds?
 77                                  vtpm_ipc_handle_t *fw_tx_ipc_h,
 78                                  vtpm_ipc_handle_t *fw_rx_ipc_h,
 79                                  BOOL is_priv,
 80                                  char *thread_name) {
 95   // ------------------------ Main Loop --------------------------------
 96   while(1) {
 97
 98     vtpmhandlerloginfo(VTPM_LOG_VTPM, "%s waiting for messages.\n", thread_name);
 99
100     // --------------------- Read Cmd from Sender ----------------
101
102     // Read command header
103     size_read = vtpm_ipc_read(rx_ipc_h, NULL, cmd_header, VTPM_COMMAND_HEADER_SIZE_SRV);

239     size_write = vtpm_ipc_write(tx_ipc_h, (dmi_res ? dmi_res->tx_vtpm_ipc_h : NULL), reply, reply_size );
As you can see, the main loop of the function calls vtpm_ipc_read() to receive and parse command messages, and vtpm_ipc_write() to send the results back.
That's all for today; next time we will put this together and analyze the front-end and back-end drivers.
Note: the vTPM instance daemon is called vtpmd. This daemon is not started by the user; it is launched by the vTPM manager tool.

[1] about vTPM
[2] vTPM: Virtualizing the Trusted Platform Module
[3] TPM Virtualization: Building a General Framework
[4] Software-based TPM Emulator for Unix