TensorFlow Study Notes -- 8. How to Structure Clean TensorFlow Models


Original article:
Structuring Your TensorFlow Models

1 Defining a Compute Graph (the Traditional Way)

In general, a model needs to be wired to the input data and labels, and to provide training, evaluation, and inference operations. A straightforward way to build one looks like this:

```python
class Model:

    def __init__(self, data, target):
        data_size = int(data.get_shape()[1])
        target_size = int(target.get_shape()[1])
        weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
        bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
        incoming = tf.matmul(data, weight) + bias
        self._prediction = tf.nn.softmax(incoming)
        cross_entropy = -tf.reduce_sum(target * tf.log(self._prediction))
        self._optimize = tf.train.RMSPropOptimizer(0.03).minimize(cross_entropy)
        mistakes = tf.not_equal(
            tf.argmax(target, 1), tf.argmax(self._prediction, 1))
        self._error = tf.reduce_mean(tf.cast(mistakes, tf.float32))

    @property
    def prediction(self):
        return self._prediction

    @property
    def optimize(self):
        return self._optimize

    @property
    def error(self):
        return self._error

The class defines the data, the model parameters, and the prediction, optimize, and error operations, but cramming everything into the constructor is still messy. How can we improve it?

2 Using Properties (Python's @property)

For background on @property, see:
7 Python dynamic binding, using @property, restricting class attributes with __slots__, and multiple inheritance
The version above is still cluttered, so next we use the @property decorator to split the graph construction into separate methods. The improved model looks like this:

```python
class Model:

    def __init__(self, data, target):
        self.data = data
        self.target = target
        self._prediction = None
        self._optimize = None
        self._error = None

    @property
    def prediction(self):
        if self._prediction is None:
            data_size = int(self.data.get_shape()[1])
            target_size = int(self.target.get_shape()[1])
            weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
            bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
            incoming = tf.matmul(self.data, weight) + bias
            self._prediction = tf.nn.softmax(incoming)
        return self._prediction

    @property
    def optimize(self):
        if self._optimize is None:
            cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))
            optimizer = tf.train.RMSPropOptimizer(0.03)
            self._optimize = optimizer.minimize(cross_entropy)
        return self._optimize

    @property
    def error(self):
        if self._error is None:
            mistakes = tf.not_equal(
                tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))
            self._error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
        return self._error

Now prediction, optimize, and error are each defined in their own method. Accessing one of these properties builds the corresponding graph nodes on first access and returns the cached result on every later access, so each part of the graph is created exactly once.
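To see the build-once-then-cache behavior in isolation, here is a framework-free sketch of the same pattern (the `Report` class and its `calls` counter are made-up illustrations, not part of the original post):

```python
class Report:
    """Minimal sketch of the cached-@property pattern from section 2."""

    def __init__(self, numbers):
        self.numbers = numbers
        self._total = None
        self.calls = 0  # counts how often the expensive work actually runs

    @property
    def total(self):
        # Compute the result only on first access, then reuse the cache.
        if self._total is None:
            self.calls += 1
            self._total = sum(self.numbers)
        return self._total


report = Report([1, 2, 3])
print(report.total)  # 6
print(report.total)  # 6 again, served from the cache
print(report.calls)  # 1 -- the sum ran only once
```

The `is None` check plays the same role as the `self._prediction is None` checks above: repeated reads are cheap, and nothing is built until it is first needed.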

3 Streamlining with a Python Decorator

The prediction, optimize, and error methods above still share a repeated step: each one checks whether its attribute already exists and only creates it if it does not. How can we factor this out? With a Python decorator; for the basics, see:
13 Python decorators, function objects, and higher-order functions such as map/reduce, anonymous functions, returned functions, partial functions, etc.
The modified code is as follows.
First, define a function decorator that pre-processes a method and returns a new function; here we use it to wrap the prediction, optimize, and error methods:

```python
import functools

def lazy_property(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator
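To check that `lazy_property` really runs the wrapped method only once, here is a quick TensorFlow-free demonstration (the decorator is repeated so the snippet is self-contained; the `Squares` class is a made-up example):

```python
import functools

def lazy_property(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        # First access computes and stores the result; later
        # accesses return the cached attribute directly.
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator


class Squares:
    def __init__(self, n):
        self.n = n
        self.builds = 0  # how many times the body actually executed

    @lazy_property
    def values(self):
        self.builds += 1
        return [i * i for i in range(self.n)]


s = Squares(4)
print(s.values)  # [0, 1, 4, 9]
print(s.values)  # same list, served from the cache
print(s.builds)  # 1
```

This is exactly the behavior the model relies on: in TensorFlow, running the body twice would add duplicate nodes to the graph, so the single-execution guarantee matters.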

Next, the decorator lets us simplify the class from step 2 as follows:

```python
class Model:

    def __init__(self, data, target):
        self.data = data
        self.target = target
        self.prediction
        self.optimize
        self.error

    @lazy_property
    def prediction(self):
        data_size = int(self.data.get_shape()[1])
        target_size = int(self.target.get_shape()[1])
        weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
        bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
        incoming = tf.matmul(self.data, weight) + bias
        return tf.nn.softmax(incoming)

    @lazy_property
    def optimize(self):
        cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))
        optimizer = tf.train.RMSPropOptimizer(0.03)
        return optimizer.minimize(cross_entropy)

    @lazy_property
    def error(self):
        mistakes = tf.not_equal(
            tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))
        return tf.reduce_mean(tf.cast(mistakes, tf.float32))

4 Finally, Give Each Operation Its Own Scope (TensorFlow Graph Scopes)

Modify the decorator like this:

```python
import functools

def define_scope(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            # Build the nodes inside a variable scope named after the method.
            with tf.variable_scope(function.__name__):
                setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator

Now every operation method gets its own scope, which keeps the graph clean and tidy.
