On when to use cudaDeviceSynchronize


Reposted from: http://blog.csdn.net/mathgeophysics/article/details/19905935


When to call cudaDeviceSynchronize

Why do we need cudaDeviceSynchronize() when kernels use device-side printf?

Although CUDA kernel launches are asynchronous, all GPU-related tasks placed in one stream (which is the default behaviour) are executed sequentially.

So, for example,

kernel1<<<X,Y>>>(...); // kernel starts execution, CPU continues to next statement
kernel2<<<X,Y>>>(...); // kernel is placed in queue and will start after kernel1 finishes, CPU continues to next statement
cudaMemcpy(...); // CPU blocks until memory is copied, memory copy starts only after kernel2 finishes
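This serialization is also why device-side printf usually needs an explicit synchronization: the launch returns immediately, and if the host program exits before the kernel's output buffer is flushed, nothing appears on stdout. Below is a minimal sketch; the kernel name hello_kernel is made up for illustration.

#include <cstdio>

__global__ void hello_kernel()
{
    printf("hello from thread %d\n", threadIdx.x);
}

int main()
{
    hello_kernel<<<1, 4>>>();   // asynchronous: the host continues immediately

    // Block the host until the kernel has finished; this also flushes the
    // device-side printf buffer, so the output actually reaches stdout.
    cudaDeviceSynchronize();

    return 0;
}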


By contrast, the explanation ranked second in Google's Chinese-language results is incomplete:

In which cases should cudaDeviceSynchronize() be used? - CSDN Forum - CSDN


cudaStreamSynchronize vs cudaDeviceSynchronize vs cudaThreadSynchronize


These are all barriers. Barriers prevent code execution beyond the barrier until some condition is met.

  1. cudaDeviceSynchronize() halts execution in the CPU/host thread (the one in which cudaDeviceSynchronize was issued) until the GPU has finished processing all previously requested CUDA tasks (kernels, data copies, etc.).
  2. cudaThreadSynchronize(), as you've discovered, is just a deprecated version of cudaDeviceSynchronize(). "Deprecated" means it still works for now, but its use is discouraged (use cudaDeviceSynchronize() instead) and it may become unsupported in the future. Apart from that, cudaThreadSynchronize() and cudaDeviceSynchronize() are basically identical.
  3. cudaStreamSynchronize() is similar to the above two functions, but it blocks the CPU host thread only until the GPU has finished all previously requested CUDA tasks that were issued in the referenced stream; it takes a stream id as its only parameter. CUDA tasks issued in other streams may or may not be complete when CPU execution continues beyond this barrier (see the sketch after this list).
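A minimal sketch of the difference between cudaStreamSynchronize() and cudaDeviceSynchronize(), assuming two user-created streams s1 and s2 and a placeholder kernel work_kernel:

#include <cstdio>

__global__ void work_kernel(int tag)
{
    if (threadIdx.x == 0) printf("kernel %d finished\n", tag);
}

int main()
{
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    work_kernel<<<1, 32, 0, s1>>>(1);   // queued in stream s1
    work_kernel<<<1, 32, 0, s2>>>(2);   // queued in stream s2

    // Blocks the host only until the work issued in s1 has finished;
    // the kernel in s2 may still be running when this returns.
    cudaStreamSynchronize(s1);

    // Blocks the host until all previously issued GPU work, in every stream,
    // has finished.
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    return 0;
}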