VexCL: Vector expression template library for OpenCL
VexCL is a vector expression template library for OpenCL. It has been created for ease of OpenCL development with C++. VexCL strives to reduce the amount of boilerplate code needed to develop OpenCL applications. The library provides convenient and intuitive notation for vector arithmetic, reductions, sparse matrix-vector products, etc. Multi-device and even multi-platform computations are supported. The source code of the library is distributed under the permissive MIT license.
The code is available at https://github.com/ddemidov/vexcl.
Doxygen-generated documentation: http://ddemidov.github.io/vexcl.
Slides from VexCL-related talks:
- Meeting C++ 2012, Dusseldorf
- SIAM CSE 2013, Boston
- FOSDEM 2013, Brussels
The paper Programming CUDA and OpenCL: A Case Study Using Modern C++ Libraries compares both convenience and performance of several GPGPU libraries, including VexCL.
Table of contents
- Context initialization
- Memory allocation
- Copies between host and devices
- Vector expressions
- Builtin operations
- Element indices
- User-defined functions
- Random number generation
- Reductions
- Sparse matrix-vector products
- Stencil convolutions
- Fast Fourier Transform
- Multivectors
- Converting generic C++ algorithms to OpenCL
- Custom kernels
- Interoperability with other libraries
- Supported compilers
Context initialization
VexCL can transparently work with multiple compute devices present in the system. A VexCL context is initialized with a device filter, which is just a functor that takes a reference to cl::Device and returns a bool. Several standard filters are provided, but one can easily add a custom functor. Filters may be combined with logical operators. All compute devices that satisfy the provided filter are added to the created context. In the example below, all GPU devices that support double precision arithmetic are selected:
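The example itself was lost in extraction; a minimal sketch of what it shows, combining the Type and DoublePrecision filters with the logical-and operator:

```cpp
#include <iostream>
#include <vexcl/vexcl.hpp>

int main() {
    // Select all GPU devices that support double precision:
    vex::Context ctx(
            vex::Filter::Type(CL_DEVICE_TYPE_GPU) &&
            vex::Filter::DoublePrecision
            );

    if (!ctx) throw std::runtime_error("No compute devices found");

    std::cout << ctx << std::endl; // List the selected devices.
}
```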
One of the most convenient filters is vex::Filter::Env, which selects compute devices based on environment variables. It allows switching the compute device without the need to recompile the program.
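For instance (OCL_DEVICE and OCL_PLATFORM are among the environment variables the Env filter consults):

```cpp
vex::Context ctx( vex::Filter::Env );
```

Running the program with, e.g., OCL_DEVICE=Tesla set in the environment then restricts the context to matching devices without recompilation.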
Memory allocation
The vex::vector<T> class constructor accepts a const reference to std::vector<cl::CommandQueue>. A vex::Context instance may be conveniently converted to this type, but it is also possible to initialize the command queues elsewhere, thus completely eliminating the need to create a vex::Context. Each command queue in the list should uniquely identify a single compute device.
The contents of the created vector will be partitioned across all devices present in the queue list. The size of each partition will be proportional to the device bandwidth, which is measured the first time the device is used. All vectors of the same size are guaranteed to be partitioned consistently, which minimizes inter-device communication.
In the example below, three device vectors of the same size are allocated. Vector A is copied from host vector a, and the other vectors are created uninitialized:
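The allocation snippet did not survive extraction; it likely resembled the following, with ctx a vex::Context:

```cpp
const size_t n = 1024 * 1024;
std::vector<double> a(n, 1.0);  // host vector

vex::vector<double> A(ctx, a);  // A is copied from the host vector a
vex::vector<double> B(ctx, n);  // B and C are created uninitialized
vex::vector<double> C(ctx, n);
```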
Assuming that the current system has an NVIDIA GPU and an AMD GPU along with an Intel CPU installed, a possible partitioning may look as in the following figure:
Copies between host and devices
The function vex::copy() copies data between host and device memory. There are two forms of the function: a simple one and an STL-like one:
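The two forms may be sketched as follows, with a a host std::vector<double> and A a device vex::vector<double> of the same size:

```cpp
// Simple form:
vex::copy(a, A);  // host to device
vex::copy(A, a);  // device to host

// STL-like form; sub-ranges and raw host pointers also work:
vex::copy(a.begin(), a.end(), A.begin());
vex::copy(A.begin(), A.begin() + 16, a.begin());
```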
The STL-like variant allows copying sub-ranges of the vectors, or copying data from/to raw host pointers.
Vectors also overload the array subscript operator, so users may have direct read or write access to individual vector elements. This operation is highly inefficient, however, and should be used with caution. Iterators allow for element access as well, so STL algorithms may in principle be used with device vectors. This would be very slow but may serve as a temporary building block.
Vector expressions
VexCL provides convenient and intuitive notation for vector operations. In order to be used in the same expression, all vectors have to be compatible:
- they have the same size;
- they span the same set of compute devices.
If these conditions are satisfied, then vectors may be combined with a rich set of available expressions. Vector expressions are processed in parallel across all devices they were allocated on. Note that when several OpenCL command queues are involved, the queues of the vector being assigned to are employed. Each vector expression results in the launch of a single OpenCL kernel. The kernel is automatically generated and launched the first time the expression is encountered in the program. If the VEXCL_SHOW_KERNELS macro is defined, then the sources of all generated kernels are dumped to the standard output. For example, the expression:
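The expression itself was lost in extraction; a representative example would be:

```cpp
Z = sqrt(2 * X) + cos(Y);
```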
will lead to the launch of the following OpenCL kernel:
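The generated source was also lost. For an expression such as Z = sqrt(2 * X) + cos(Y), the kernel looks roughly like the following (parameter names and layout are chosen by VexCL's code generator and may differ):

```c
kernel void vexcl_vector_kernel(
    ulong n,
    global double *res,
    int    prm_1,
    global double *prm_2,
    global double *prm_3
    )
{
    for (size_t idx = get_global_id(0); idx < n; idx += get_global_size(0))
        res[idx] = sqrt(prm_1 * prm_2[idx]) + cos(prm_3[idx]);
}
```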
Here and in the rest of the examples, X, Y, and Z are compatible instances of vex::vector<double>.
Builtin operations
VexCL expressions may combine device vectors and scalars with arithmetic, logic, or bitwise operators, as well as with builtin OpenCL functions. If some builtin operator or function is unavailable, it should be considered a bug; please do not hesitate to open an issue in this case.
Element indices
The function vex::element_index(size_t offset = 0) allows the use of the index of each vector element inside vector expressions. The numbering is continuous across the compute devices and starts with an optional offset.
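For example, a sketch along these lines:

```cpp
// Fill X with the odd numbers 1, 3, 5, ...:
X = 2 * vex::element_index() + 1;
```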
User-defined functions
Users may define custom functions for use in vector expressions. One has to define the function signature and the function body. The body may contain any number of lines of valid OpenCL code. Function parameters are named prm1, prm2, etc. The most convenient way to define a function is the VEX_FUNCTION macro:
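The definition referenced here was lost in extraction; it presumably looked like:

```cpp
VEX_FUNCTION(squared_radius, double(double, double),
        "return prm1 * prm1 + prm2 * prm2;"
        );

Z = sqrt(squared_radius(X, Y));
```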
The resulting squared_radius function object is stateless; only its type is used for kernel generation. Hence, it is safe to put commonly used functions in the global scope.
Custom functions may be used not only for convenience, but also for performance reasons. The above example could in principle be rewritten as:
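That is, without the custom function:

```cpp
Z = sqrt(X * X + Y * Y);
```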
The drawback of the latter variant is that X and Y will be read twice.
Note that any valid vector expression may be passed as a function parameter:
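For example (with squared_radius the user-defined function from above):

```cpp
Z = squared_radius(sin(X + Y), cos(X - Y));
```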
Random number generation
VexCL provides counter-based random number generators from the Random123 suite, in which the Nth random number is obtained by applying a stateless mixing function to N, instead of the conventional approach of using N iterations of a stateful transformation. This technique is easily parallelizable and is well suited for use in GPGPU applications.
For integral types, the generated values span the complete range; for floating point types, the generated values lie in the [0,1] interval.
In order to use a random number sequence in a vector expression, the user has to declare an instance of either the vex::Random or the vex::RandomNormal class template, as in the following example:
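The example did not survive extraction; a sketch:

```cpp
vex::Random<double> rnd;

// X gets uniformly distributed numbers from [-1, 1]:
X = 2 * rnd(vex::element_index(), std::rand()) - 1;
```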
Note that vex::element_index() here provides the random number generator with the sequence position N.
Reductions
An instance of vex::Reductor<T, OP> allows an arbitrary vector expression to be reduced to a single value of type T. Supported reduction operations are SUM, MIN, and MAX. Reductor objects receive a list of command queues at construction and should only be applied to vectors residing on the same compute devices.
In the following example an inner product of two vectors is computed:
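The snippet was lost in extraction; it likely amounted to:

```cpp
vex::Reductor<double, vex::SUM> sum(ctx);

double inner_product = sum(X * Y);
```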
And here is an easy way to compute an approximate value of π with the Monte Carlo method:
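A sketch of the lost example, combining reduction, random numbers, and element indices:

```cpp
vex::Reductor<size_t, vex::SUM> sum(ctx);
vex::Random<double> rnd;

// Throw random points into the square [-1, 1] x [-1, 1]:
X = 2 * rnd(vex::element_index(), std::rand()) - 1;
Y = 2 * rnd(vex::element_index(), std::rand()) - 1;

// The fraction of points inside the unit circle tends to pi / 4:
double pi = 4.0 * sum((X * X + Y * Y) < 1) / X.size();
```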
Sparse matrix-vector products
One of the most common operations in linear algebra is matrix-vector multiplication. An instance of the vex::SpMat class holds a representation of a sparse matrix. Its constructor accepts a sparse matrix in the common CRS format. In the example below, a vex::SpMat is constructed from an Eigen sparse matrix:
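The construction probably proceeded along these lines. A row-major Eigen::SparseMatrix already stores its data in CRS order; the exact vex::SpMat template parameters and the assemble() helper are assumptions:

```cpp
// E is a compressed row-major Eigen sparse matrix:
Eigen::SparseMatrix<double, Eigen::RowMajor, int> E = assemble(); // assemble() is hypothetical

vex::SpMat<double, int, int> A(ctx, E.rows(), E.cols(),
        E.outerIndexPtr(), E.innerIndexPtr(), E.valuePtr());

// Matrix-vector products are allowed in additive expressions:
Z = Y - A * X;
```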
Matrix-vector products may be used in vector expressions. The only restriction is that the expressions have to be additive. This is due to the fact that the matrix representation may span several compute devices; hence, a matrix-vector product operation may require several kernel launches and inter-device communication.
Stencil convolutions
Stencil convolution is another common operation that may be used, for example, to represent a signal filter or a (one-dimensional) differential operator. VexCL implements two kinds of stencils. The first is a simple linear stencil that holds the coefficients of a linear combination. The example below computes the moving average of a vector with a 5-point window:
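The lost snippet likely resembled:

```cpp
// Convolve X with [0.2 0.2 0.2 0.2 0.2], centered at position 2:
std::vector<double> s(5, 0.2);
vex::stencil<double> S(ctx, s, /*center =*/ 2);

Y = X * S;
```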
Users may also define custom stencil operators. This may be of use if, for example, the operator is nonlinear. The definition of a stencil operator looks very similar to the definition of a custom function. The only difference is that the stencil operator constructor accepts a vector of OpenCL command queues. The following example implements the nonlinear operator y(i) = sin(x(i) - x(i-1)) + sin(x(i+1) - x(i)):
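A sketch of such an operator; the VEX_STENCIL_OPERATOR arguments are name, return type, window width, center, body, and queue list:

```cpp
VEX_STENCIL_OPERATOR(S, double, /*width =*/ 3, /*center =*/ 1,
        "return sin(X[0] - X[-1]) + sin(X[1] - X[0]);", ctx);

Y = S(X);
```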
The current window is available inside the body of the operator through the X array, which is indexed relative to the stencil center.
Stencil convolution operations, similar to the matrix-vector products, are only allowed in additive expressions.
Fast Fourier Transform
VexCL provides an implementation of the Fast Fourier Transform (FFT) that accepts arbitrary vector expressions as input, performs multidimensional transforms (of any number of dimensions), and supports arbitrarily sized vectors:
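The example was lost in extraction; it presumably showed a forward and inverse transform pair:

```cpp
vex::FFT<double, cl_double2> fft (ctx, n);                     // forward
vex::FFT<cl_double2, double> ifft(ctx, n, vex::fft::inverse);  // inverse

// FFT accepts arbitrary expressions as input:
Y = ifft( fft(X) );
```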
The FFT is another example of an operation that is only available in additive expressions. Another restriction is that the FFT currently only supports contexts with a single compute device.
Multivectors
The class template vex::multivector<T,N> allows several equally sized device vectors to be stored and computations to be performed on all components synchronously. Each operation is delegated to the underlying vectors, but usually results in the launch of a single fused kernel. Expressions may include values of std::array<T,N> type, where N is equal to the number of multivector components; each component gets the corresponding element of the std::array<> when the expression is applied. Similarly, the array subscript operator or a reduction of a multivector returns an std::array<T,N>. In order to access the k-th component of a multivector, one can use the overloaded operator():
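A sketch of what the lost example likely showed, with X and Y as before:

```cpp
vex::multivector<double, 2> M(ctx, n);

M(0) = X;             // assign to the first component only
M(1) = 2 * M(0) + Y;  // ...and to the second
M = cos(M);           // applied to both components at once
```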
Some operations cannot be expressed with simple multivector arithmetic. For example, a two-dimensional rotation mixes components in the right-hand side expressions: y(0) = x(0) cos(α) - x(1) sin(α); y(1) = x(0) sin(α) + x(1) cos(α).
This may in principle be implemented as:
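With X and Y two-component multivectors and alpha the rotation angle, the component-wise version reads:

```cpp
Y(0) = X(0) * cos(alpha) - X(1) * sin(alpha);
Y(1) = X(0) * sin(alpha) + X(1) * cos(alpha);
```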
But this would result in two kernel launches. VexCL therefore allows a tuple of expressions to be assigned to a multivector, which leads to the launch of a single fused kernel:
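The fused version was lost in extraction; it likely used std::tie:

```cpp
Y = std::tie(
        X(0) * cos(alpha) - X(1) * sin(alpha),
        X(0) * sin(alpha) + X(1) * cos(alpha)
        );
```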
Converting generic C++ algorithms to OpenCL
CUDA and OpenCL differ in their handling of compute kernel compilation. In NVIDIA's framework the compute kernels are compiled to PTX code together with the host program. In OpenCL the compute kernels are compiled at runtime from high-level C-like sources, adding an overhead which is particularly noticeable for smaller problems. This distinction leads to a higher initialization cost for OpenCL programs, but at the same time it allows better optimized kernels to be generated for the problem at hand. VexCL exploits this possibility with the help of its kernel generator mechanism.
An instance of vex::generator::symbolic<T> dumps to an output stream any arithmetic operations it is subjected to. For example, this code snippet:
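The snippet itself was lost; it presumably looked like:

```cpp
vex::generator::symbolic<double> x = 6, y = 7;
x = sin(x * y);
```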
results in the following output:
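The output was also lost; it has roughly this shape (the generated variable names are chosen by VexCL and may differ):

```
double var1 = 6;
double var2 = 7;
var1 = sin( ( var1 * var2 ) );
```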
The symbolic type allows a sequence of arithmetic operations made by a generic C++ algorithm to be recorded. To illustrate the idea, consider this generic implementation of a 4th order Runge-Kutta ODE stepper:
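The implementation was lost in extraction; the classical generic stepper looks like this:

```cpp
#include <cmath>

// Generic 4th order Runge-Kutta step. Works for any state type that
// supports the arithmetic used below (double, vex::vector, symbolic, ...).
template <class state_type, class SysFunction>
void runge_kutta_4(SysFunction sys, state_type &x, double dt) {
    state_type k1 = dt * sys(x);
    state_type k2 = dt * sys(x + 0.5 * k1);
    state_type k3 = dt * sys(x + 0.5 * k2);
    state_type k4 = dt * sys(x + k3);

    x += (k1 + 2 * k2 + 2 * k3 + k4) / 6;
}
```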
This function takes a system function sys and a state variable x, and advances x by the time step dt. For example, to model the equation dx/dt = sin(x), one has to provide the following system function:
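The system function for dx/dt = sin(x) is simply:

```cpp
#include <cmath>

// System function for dx/dt = sin(x):
double sys_func(double x) {
    return sin(x);
}
```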
The following code snippet makes a hundred RK4 iterations for a single double value on a CPU:
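The snippet did not survive extraction; it amounts to the following, wrapped in a function here so the result can be inspected:

```cpp
#include <cmath>

// Generic RK4 stepper and the sin(x) system function,
// repeated so the snippet is self-contained:
template <class state_type, class SysFunction>
void runge_kutta_4(SysFunction sys, state_type &x, double dt) {
    state_type k1 = dt * sys(x);
    state_type k2 = dt * sys(x + 0.5 * k1);
    state_type k3 = dt * sys(x + 0.5 * k2);
    state_type k4 = dt * sys(x + k3);
    x += (k1 + 2 * k2 + 2 * k3 + k4) / 6;
}

double sys_func(double x) { return sin(x); }

// A hundred RK4 iterations for a single double value:
double integrate() {
    double x = 1, dt = 0.01;

    for(int step = 0; step < 100; ++step)
        runge_kutta_4(sys_func, x, dt);

    return x; // approximates x(t = 1) for dx/dt = sin(x), x(0) = 1
}
```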
Let us now generate the kernel for a single RK4 step and apply it to a vex::vector<double> (by doing this we essentially solve a large number of identical ODEs with different initial conditions simultaneously):
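The code was lost; a sketch following VexCL's generator API (set_recorder, build_kernel, and the VectorParameter tag are the names used by the library; here sys_func is assumed to be templated on its argument type so that it accepts symbolic values):

```cpp
// Record the operations performed by the generic RK4 stepper on a
// symbolic state, then compile them into a single OpenCL kernel.
std::ostringstream body;
vex::generator::set_recorder(body);

typedef vex::generator::symbolic<double> sym_state;
sym_state sym_x(sym_state::VectorParameter);

double dt = 0.01;
runge_kutta_4(sys_func<sym_state>, sym_x, dt);

auto kernel = vex::generator::build_kernel(ctx, "rk4_stepper", body.str(), sym_x);

vex::vector<double> X(ctx, n);
X = vex::element_index() * (1.0 / n); // a range of initial conditions

for(int step = 0; step < 100; ++step)
    kernel(X); // one kernel launch advances every ODE instance by dt
```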
This approach has some obvious restrictions: the C++ code has to be embarrassingly parallel and may not contain any branching or data-dependent loops. Nevertheless, the kernel generation facility may save a substantial amount of both human and machine time when applicable.
Custom kernels
As Kozma Prutkov repeatedly said, "One cannot embrace the unembraceable". So in order to be usable, VexCL has to support custom kernels. vex::vector::operator()(uint k) returns the cl::Buffer that holds the vector's data on the k-th compute device. If the result depends on neighboring points, one has to keep in mind that these points may be located on a different compute device; in this case, the exchange of the halo points has to be arranged manually.
The following example builds and launches a custom kernel for each device in the context:
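The example did not survive extraction. A sketch of the pattern, using the standard OpenCL C++ bindings together with VexCL's per-device accessors (ctx.context(d), ctx.device(d), ctx.queue(d), X(d), X.part_size(d) — assumed here to follow the library's API):

```cpp
const char source[] =
    "kernel void twice(ulong n, global double *x) {\n"
    "    for (size_t i = get_global_id(0); i < n; i += get_global_size(0))\n"
    "        x[i] = 2 * x[i];\n"
    "}\n";

for(unsigned d = 0; d < ctx.size(); d++) {
    // Compile the kernel for the d-th device:
    cl::Program program(ctx.context(d),
            cl::Program::Sources(1, std::make_pair(source, sizeof(source))));
    program.build(std::vector<cl::Device>(1, ctx.device(d)));

    cl::Kernel twice(program, "twice");

    // X(d) is the cl::Buffer with X's data on the d-th device:
    twice.setArg(0, static_cast<cl_ulong>(X.part_size(d)));
    twice.setArg(1, X(d));

    ctx.queue(d).enqueueNDRangeKernel(twice,
            cl::NullRange, cl::NDRange(1024), cl::NullRange);
}
```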
Interoperability with other libraries
Since VexCL is built upon the standard Khronos OpenCL C++ bindings, it is easily interoperable with other OpenCL libraries. In particular, VexCL provides some glue code for the ViennaCL and Boost.compute libraries.
ViennaCL (The Vienna Computing Library) is a scientific computing library written in C++. It provides OpenCL, CUDA, and OpenMP compute backends. The programming interface is compatible with Boost.uBLAS and allows for simple, high-level access to the vast computing resources available on parallel architectures such as GPUs. The library's primary focus is on common linear algebra operations (BLAS levels 1, 2 and 3) and the solution of large sparse systems of equations by means of iterative methods with optional preconditioners.
It is possible to use ViennaCL's generic solvers with VexCL types. See examples/viennacl/solvers.cpp for an example.
Boost.compute is a GPU/parallel-computing library for C++ based on OpenCL. The core library is a thin C++ wrapper over the OpenCL C API and provides access to compute devices, contexts, command queues, and memory buffers. On top of the core library is a generic, STL-like interface providing common algorithms (e.g. transform(), accumulate(), sort()) along with common containers (e.g. vector<T>, flat_set<T>). It also features a number of extensions, including parallel-computing algorithms (e.g. exclusive_scan(), scatter(), reduce()) and a number of fancy iterators (e.g. transform_iterator<>, permutation_iterator<>, zip_iterator<>).
vexcl/external/boost_compute.hpp provides an example of using Boost.compute algorithms with VexCL vectors. Namely, it implements parallel sort and inclusive scan primitives on top of the corresponding Boost.compute algorithms.
Supported compilers
VexCL makes heavy use of C++11 features, so your compiler has to be modern enough. The following compilers have been tested and are supported:
- GCC v4.6 and higher.
- Clang v3.1 and higher.
- Microsoft Visual C++ 2010 and higher.
VexCL uses the standard OpenCL C++ bindings from the Khronos group. The cl.hpp file should be included with the OpenCL implementation on your system, but it is also provided with the library.
This work is a joint effort of Supercomputer Center of Russian Academy of Sciences (Kazan branch) and Kazan Federal University. It is partially supported by RFBR grants No 12-07-0007 and 12-01-00033.