Basic Data Structures in MPI
http://www.lam-mpi.org/tutorials/one-step/datatypes.php
Heterogeneous computing requires that the data constituting a message be typed or described somehow so that its machine representation can be converted between computer architectures. MPI can thoroughly describe message datatypes, from the simple primitive machine types to complex structures, arrays and indices.
MPI messaging functions accept a datatype parameter, whose C typedef is MPI_Datatype:
MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);
Basic Datatypes
Everybody uses the primitive machine datatypes. Some C examples are listed below (with the corresponding C datatype in parentheses):

MPI_CHAR (char)
MPI_INT (int)
MPI_FLOAT (float)
MPI_DOUBLE (double)

The count parameter in MPI_Send( ) refers to the number of elements of the given datatype, not the total number of bytes.
For messages consisting of a homogeneous, contiguous array of basic datatypes, this is the end of the datatype discussion. For messages that contain more than one datatype or whose elements are not stored contiguously in memory, something more is needed.
Strided Vector
Consider a mesh application with patches of a 2D array assigned to different processes. The internal boundary rows and columns are transferred between north/south and east/west processes in the overall mesh. In C, the transfer of a row in a 2D array is simple: a contiguous vector of elements equal in number to the number of columns in the 2D array. Conversely, the elements of a single column are dispersed in memory; each vector element is separated from its next and previous indices by the size of one entire row.

An MPI derived datatype is a good solution for a non-contiguous data structure. A code fragment to derive an appropriate datatype matching this strided vector and then transmit the last column is listed below:
#include <mpi.h>

{
    float        mesh[10][20];
    int          dest, tag;
    MPI_Datatype newtype;

    /*
     * Do this once.
     */
    MPI_Type_vector(10,           /* # column elements */
                    1,            /* 1 column only */
                    20,           /* skip 20 elements */
                    MPI_FLOAT,    /* elements are float */
                    &newtype);    /* MPI derived datatype */
    MPI_Type_commit(&newtype);

    /*
     * Do this for every new message.
     */
    MPI_Send(&mesh[0][19], 1, newtype, dest, tag, MPI_COMM_WORLD);
}
MPI_Type_commit( ) separates the datatypes you really want to save and use from the intermediate ones that are scaffolded on the way to some very complex datatype.
A nice feature of MPI derived datatypes is that once created, they can be used repeatedly with no further set-up code. MPI has many other derived datatype constructors.
C Structure
Consider an imaging application that is transferring fixed length scan lines of eight bit color pixels. Coupled with the pixel array is the scan line number, an integer. The message might be described in C as a structure:

struct {
    int  lineno;
    char pixels[1024];
} scanline;

In addition to a derived datatype, message packing is a useful method for sending non-contiguous and/or heterogeneous data. A code fragment to pack and send the above structure is listed below:
#include <mpi.h>
#include <stdlib.h>

{
    unsigned int membersize, maxsize;
    int          position;
    int          dest, tag;
    char         *buffer;

    /*
     * Do this once.
     */
    MPI_Pack_size(1,                /* one element */
                  MPI_INT,          /* datatype integer */
                  MPI_COMM_WORLD,   /* consistent comm. */
                  &membersize);     /* max packing space req'd */
    maxsize = membersize;
    MPI_Pack_size(1024, MPI_CHAR, MPI_COMM_WORLD, &membersize);
    maxsize += membersize;
    buffer = malloc(maxsize);

    /*
     * Do this for every new message.
     */
    position = 0;
    MPI_Pack(&scanline.lineno,   /* pack this element */
             1,                  /* one element */
             MPI_INT,            /* datatype int */
             buffer,             /* packing buffer */
             maxsize,            /* buffer size */
             &position,          /* next free byte offset */
             MPI_COMM_WORLD);    /* consistent comm. */
    MPI_Pack(scanline.pixels, 1024, MPI_CHAR, buffer, maxsize,
             &position, MPI_COMM_WORLD);
    MPI_Send(buffer, position, MPI_PACKED, dest, tag, MPI_COMM_WORLD);
}
A buffer is allocated once to contain the size of the packed structure. The size must be computed because of implementation dependent overhead in the message. Variable sized messages can be handled by allocating a buffer large enough for the largest possible message. The position parameter to MPI_Pack( ) always returns the current size of the packed buffer.
A code fragment to unpack the message, assuming a receive buffer has been allocated, is listed below:
{
    int        src;
    int        msgsize;
    MPI_Status status;

    MPI_Recv(buffer, maxsize, MPI_PACKED, src, tag, MPI_COMM_WORLD, &status);

    position = 0;
    MPI_Get_count(&status, MPI_PACKED, &msgsize);
    MPI_Unpack(buffer,             /* packing buffer */
               msgsize,            /* buffer size */
               &position,          /* next element byte offset */
               &scanline.lineno,   /* unpack this element */
               1,                  /* one element */
               MPI_INT,            /* datatype int */
               MPI_COMM_WORLD);    /* consistent comm. */
    MPI_Unpack(buffer, msgsize, &position, scanline.pixels, 1024,
               MPI_CHAR, MPI_COMM_WORLD);
}
You should be able to modify the above code fragments for any structure. It is completely possible to alter the number of elements to unpack based on application information unpacked previously in the same message.