Configuring TensorFlow compute devices


Note: this article is a translation of the official guide, Using GPUs.

TensorFlow operations can run on either the CPU or the GPU. To see which device each operation has been assigned to, set `log_device_placement` when creating the session:

```python
import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
```
This produces the following output, which shows that my operations were placed on the CPU:
```
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185055: I tensorflow/core/common_runtime/simple_placer.cc:834] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185445: I tensorflow/core/common_runtime/simple_placer.cc:834] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185854: I tensorflow/core/common_runtime/simple_placer.cc:834] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[22 28]
 [49 64]]
```

How do you assign operations to a device yourself? Use `with tf.device('...')`. Note that `'/cpu:0'` names a CPU device as a whole, not an individual CPU core.

```python
import tensorflow as tf

# Creates a graph.
with tf.device('/cpu:0'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
```
The output is:

```
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.835533: I tensorflow/core/common_runtime/simple_placer.cc:834] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.835888: I tensorflow/core/common_runtime/simple_placer.cc:834] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.836294: I tensorflow/core/common_runtime/simple_placer.cc:834] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[22 28]
 [49 64]]
```
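When a graph has many operations, these placement logs get long. As a minimal sketch (not part of the guide), a hypothetical helper `count_placements` can tally how many ops landed on each device, assuming the device-string format shown in the logs above:

```python
import re
from collections import Counter

def count_placements(log_text):
    """Count ops per device in a TF placement log.

    Assumes device strings of the form
    /job:<name>/replica:<n>/task:<n>/cpu:<n> (or gpu:<n>).
    """
    pattern = re.compile(r'(/job:\w+/replica:\d+/task:\d+/(?:cpu|gpu):\d+)')
    return Counter(pattern.findall(log_text))

# Example with three log lines in the format shown above.
log = """MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0"""
print(count_placements(log))
# -> Counter({'/job:localhost/replica:0/task:0/cpu:0': 3})
```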

If a GPU is used for computation, TensorFlow by default claims essentially all of the GPU's memory. How can the memory allocation be customized? (The CPU has no such option.) There are two ways:

1. Allocate a small amount first, then grow it as needed
```python
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
```

2. Set a fixed fraction of total GPU memory

```python
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
```
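The two options can also be combined. Assuming typical TF 1.x behavior (not stated in the original guide), `per_process_gpu_memory_fraction` then acts as an upper bound on how far `allow_growth` may expand:

```python
import tensorflow as tf

# Sketch: start with a small allocation and grow on demand,
# but never claim more than 40% of the GPU's total memory.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)
```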

When there are multiple GPUs, how do you designate a subset of them for computation?

```python
import tensorflow as tf

# Creates a graph.
c = []
for d in ['/gpu:2', '/gpu:3']:
  with tf.device(d):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    c.append(tf.matmul(a, b))
with tf.device('/cpu:0'):
  sum = tf.add_n(c)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(sum))
```
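A coarser alternative to `tf.device`, not covered in the guide above, is the `CUDA_VISIBLE_DEVICES` environment variable, which hides the other GPUs from the process entirely. A minimal sketch:

```python
import os

# Must be set before TensorFlow (or any CUDA library) initializes
# the GPU, e.g. before `import tensorflow`.
# Only physical GPUs 2 and 3 are visible to the process, and they
# are renumbered: inside TensorFlow they appear as /gpu:0 and /gpu:1.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"
print(os.environ["CUDA_VISIBLE_DEVICES"])
# -> 2,3
```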