Deep Learning: Logistic Regression


Classifying MNIST Digits with Logistic Regression

In this chapter we show how Theano can be used for the most basic classifier: logistic regression. We start by quickly prototyping the model, both to review earlier material and to fix notation, showing how mathematical expressions are mapped onto Theano graphs.

Following machine learning tradition, we start with MNIST digit classification.

The Model

Logistic regression is a probabilistic, linear classifier. Its parameters are a weight matrix W and a bias vector b. Classification is done by projecting the input vector onto a set of hyperplanes, each of which corresponds to a class. The distance between the input and a hyperplane reflects the probability that the input belongs to the corresponding class.

Mathematically, the probability that an input vector x is a member of class i, i.e. a value of the stochastic variable Y, can be written as:

P(Y=i|x, W, b) = \mathrm{softmax}_i(W x + b) = \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}

The model's prediction y_pred is the class whose probability is maximal, namely:

y_{pred} = {\rm argmax}_i P(Y=i|x,W,b)
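As a quick sanity check of the two formulas above, the following plain-numpy sketch (an illustration only, not part of the tutorial code; the toy shapes and values are assumptions) evaluates the softmax probabilities and the argmax prediction for a single input vector:

import numpy

def softmax_predict(x, W, b):
    # class scores; W has shape (n_in, n_out), b has shape (n_out,)
    scores = numpy.dot(x, W) + b
    # subtract the max before exponentiating for numerical stability
    e = numpy.exp(scores - scores.max())
    p_y_given_x = e / e.sum()
    return p_y_given_x, p_y_given_x.argmax()

# toy example: 3 input features, 2 classes, zero-initialized parameters
x = numpy.array([1.0, 0.5, -0.2])
W = numpy.zeros((3, 2))
b = numpy.zeros(2)
print(softmax_predict(x, W, b))  # uniform probabilities [0.5, 0.5], predicted class 0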

The corresponding Theano code looks like this:

  
        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
        self.W = theano.shared(
            value=numpy.zeros(
                (n_in, n_out),
                dtype=theano.config.floatX
            ),
            name='W',
            borrow=True
        )
        # initialize the biases b as a vector of n_out 0s
        self.b = theano.shared(
            value=numpy.zeros(
                (n_out,),
                dtype=theano.config.floatX
            ),
            name='b',
            borrow=True
        )

        # symbolic expression for computing the matrix of class-membership
        # probabilities
        # Where:
        # W is a matrix where column-k represent the separation hyperplane for
        # class-k
        # x is a matrix where row-j  represents input training sample-j
        # b is a vector where element-k represent the free parameter of
        # hyperplane-k
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # symbolic description of how to compute prediction as class whose
        # probability is maximal
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)

Since the model parameters must persist throughout training, we allocate shared variables for W and b. This declares them as symbolic Theano variables and also initializes their contents. The dot product and softmax operations are then used to compute the vector P(Y|x, W, b). The result p_y_given_x is a symbolic variable of vector type.
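What makes shared variables suitable for model parameters is that their contents persist between calls to compiled Theano functions. A minimal standalone sketch (for illustration only, not part of the tutorial code):

import numpy
import theano
import theano.tensor as T

state = theano.shared(numpy.float64(0.0), name='state')
inc = T.dscalar('inc')
# each call returns the current value and then adds `inc` to the stored state
accumulate = theano.function([inc], state, updates=[(state, state + inc)])
accumulate(1.0)
print(state.get_value())  # 1.0 -- the value persisted after the call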

To get the actual model prediction, we apply the T.argmax operator, which returns the index of the class with the highest probability in p_y_given_x.

The following sections show in detail how to optimize the parameters.

Defining a Loss Function

Learning optimal model parameters involves minimizing a loss function. For multi-class logistic regression, it is very common to use the negative log-likelihood as the loss. We first define the likelihood \mathcal{L} and the loss \ell as:

\mathcal{L}(\theta=\{W,b\}, \mathcal{D}) = \sum_{i=0}^{|\mathcal{D}|} \log(P(Y=y^{(i)}|x^{(i)}, W, b))

\ell(\theta=\{W,b\}, \mathcal{D}) = -\mathcal{L}(\theta=\{W,b\}, \mathcal{D})
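To make the loss concrete, here is a small numpy sketch (an illustration with made-up toy probabilities, not tutorial code) that computes the mean negative log-likelihood of a minibatch from a matrix of class probabilities; the same indexing trick reappears in the Theano code below:

import numpy

# p[i, k] = P(Y=k | x_i); y[i] is the correct class of example i
p = numpy.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
y = numpy.array([0, 1])

log_p = numpy.log(p)
# pick log P(Y=y_i | x_i) for every example, then average over the minibatch
nll = -numpy.mean(log_p[numpy.arange(y.shape[0]), y])
print(nll)  # about 0.29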

Although minimization is a broad topic in its own right, gradient descent is by far the simplest method for minimizing the loss of arbitrary non-linear functions. This tutorial uses minibatch stochastic gradient descent (MSGD); see Getting Started - DeepLearning 0.1 documentation for details.
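For reference, the overall structure of MSGD can be shown on a deliberately tiny toy problem. The sketch below fits a one-parameter least-squares model rather than logistic regression (the loss and data are assumptions chosen only to keep the example short); what matters is the minibatch loop and the per-minibatch gradient step:

import numpy

rng = numpy.random.RandomState(0)
# toy data: t is roughly 3 * x, with a little noise
X = rng.randn(600)
t = 3.0 * X + 0.1 * rng.randn(600)

w = 0.0
learning_rate = 0.1
batch_size = 60
for epoch in range(20):
    for i in range(X.shape[0] // batch_size):
        xb = X[i * batch_size:(i + 1) * batch_size]
        tb = t[i * batch_size:(i + 1) * batch_size]
        grad = numpy.mean(2.0 * (w * xb - tb) * xb)  # gradient of the mean squared error
        w -= learning_rate * grad                    # one MSGD step per minibatch
print(w)  # close to 3.0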

The following code defines the (symbolic) loss for a given minibatch:

        # y.shape[0] is (symbolically) the number of rows in y, i.e.,
        # number of examples (call it n) in the minibatch
        # T.arange(y.shape[0]) is a symbolic vector which will contain
        # [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
        # Log-Probabilities (call it LP) with one row per example and
        # one column per class LP[T.arange(y.shape[0]),y] is a vector
        # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
        # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
        # the mean (across minibatch examples) of the elements in v,
        # i.e., the mean log-likelihood across the minibatch.
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

Note: although the loss is formally defined as a sum, in practice we use the mean (T.mean). Doing so makes the choice of learning rate less dependent on the minibatch size.

Creating a LogisticRegression Class

We can now build a LogisticRegression class that encapsulates the basic behaviour of logistic regression.

The following code shows everything introduced so far:

class LogisticRegression(object):
    """Multi-class Logistic Regression Class

    The logistic regression is fully described by a weight matrix :math:`W`
    and bias vector :math:`b`. Classification is done by projecting data
    points onto a set of hyperplanes, the distance to which is used to
    determine a class membership probability.
    """

    def __init__(self, input, n_in, n_out):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the labels lie

        """
        # start-snippet-1
        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
        self.W = theano.shared(
            value=numpy.zeros(
                (n_in, n_out),
                dtype=theano.config.floatX
            ),
            name='W',
            borrow=True
        )
        # initialize the biases b as a vector of n_out 0s
        self.b = theano.shared(
            value=numpy.zeros(
                (n_out,),
                dtype=theano.config.floatX
            ),
            name='b',
            borrow=True
        )

        # symbolic expression for computing the matrix of class-membership
        # probabilities
        # Where:
        # W is a matrix where column-k represent the separation hyperplane for
        # class-k
        # x is a matrix where row-j  represents input training sample-j
        # b is a vector where element-k represent the free parameter of
        # hyperplane-k
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # symbolic description of how to compute prediction as class whose
        # probability is maximal
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # end-snippet-1

        # parameters of the model
        self.params = [self.W, self.b]

        # keep track of model input
        self.input = input

    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        .. math::

            \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
            \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
                \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
            \ell (\theta=\{W,b\}, \mathcal{D})

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that
              the learning rate is less dependent on the batch size
        """
        # start-snippet-2
        # y.shape[0] is (symbolically) the number of rows in y, i.e.,
        # number of examples (call it n) in the minibatch
        # T.arange(y.shape[0]) is a symbolic vector which will contain
        # [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
        # Log-Probabilities (call it LP) with one row per example and
        # one column per class LP[T.arange(y.shape[0]),y] is a vector
        # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
        # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
        # the mean (across minibatch examples) of the elements in v,
        # i.e., the mean log-likelihood across the minibatch.
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
        # end-snippet-2

    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        if y.ndim != self.y_pred.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.y_pred.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # the T.neq operator returns a vector of 0s and 1s, where 1
            # represents a mistake in prediction
            return T.mean(T.neq(self.y_pred, y))
        else:
            raise NotImplementedError()

We instantiate this class with the following code:

    # generate symbolic variables for input (x and y represent a
    # minibatch)
    x = T.matrix('x')  # data, presented as rasterized images
    y = T.ivector('y')  # labels, presented as 1D vector of [int] labels

    # construct the logistic regression class
    # Each MNIST image has size 28*28
    classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)

We start by allocating symbolic variables for the training inputs x and their corresponding classes y. Note that x and y are defined outside the scope of the LogisticRegression object. Since the class needs the input to build its graph, it is passed as a parameter of the __init__ function. This is useful when you want to connect several instances of this class to build a deep network: the output of one layer can be passed as the input of the layer above it. Finally, we define the cost variable by calling the classifier.negative_log_likelihood method of the instance:

    # the cost we minimize during training is the negative log likelihood of
    # the model in symbolic format
    cost = classifier.negative_log_likelihood(y)

Note that classifier was defined symbolically in terms of x at initialization, so x acts as an implicit symbolic input in the definition of cost.
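One way to see this implicit dependence (a hypothetical check, not part of the tutorial, assuming x, y, classifier and cost from the snippets above are already defined): any function compiled from cost needs values for both x and y, whether they are passed as inputs or substituted through givens.

    # `cost` was built from `x` (via `classifier`) and from `y`, so both are
    # free inputs of the graph and must be provided when compiling
    check_cost = theano.function(inputs=[x, y], outputs=cost)
    # check_cost(images_batch, labels_batch) would then return the mean NLL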

Learning the Model

To implement MSGD in most programming languages (C/C++, Matlab, Python), one would usually start by manually deriving the gradient expressions of the loss with respect to the parameters. For complex models this is far from trivial, especially once numerical stability issues are taken into account.

With Theano this work is greatly simplified: it performs automatic differentiation and applies mathematical transformations that improve numerical stability.
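As a quick standalone illustration of what T.grad does (not part of the tutorial code), differentiating a simple scalar expression and evaluating the result:

import theano
import theano.tensor as T

a = T.dscalar('a')
expr = a ** 2 + 3 * a
g = T.grad(expr, wrt=a)        # symbolic derivative: 2*a + 3
f = theano.function([a], g)
print(f(2.0))                  # 7.0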

To obtain the gradients in Theano:

    g_W = T.grad(cost=cost, wrt=classifier.W)
    g_b = T.grad(cost=cost, wrt=classifier.b)

g_W and g_b are symbolic variables that can be used as part of a computation graph. The function train_model, which performs one step of gradient descent, can then be defined as follows:

    # specify how to update the parameters of the model as a list of
    # (variable, update expression) pairs.
    updates = [(classifier.W, classifier.W - learning_rate * g_W),
               (classifier.b, classifier.b - learning_rate * g_b)]

    # compiling a Theano function `train_model` that returns the cost, but in
    # the same time updates the parameter of the model based on the rules
    # defined in `updates`
    train_model = theano.function(
        inputs=[index],
        outputs=cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )

updates is a list of pairs. In each pair, the first element is the symbolic variable to be updated in this step, and the second element is the symbolic expression that computes its new value. Similarly, givens is a dictionary whose keys are symbolic variables and whose values are the expressions substituted for them during this step. train_model is therefore defined such that:

the input is the mini-batch index index, which together with the batch size defines x and the corresponding labels y;

the return value is the cost/loss associated with the x, y defined by index.

Every time the function is called, x and y are replaced by the slices of the training set specified by index. It then evaluates the cost associated with that minibatch and applies the operations defined in updates.

Each call to train_model(index) thus computes and returns the cost of a minibatch while also performing a step of MSGD. The entire learning algorithm consists of looping over all examples in the training set, one minibatch at a time, repeatedly calling the train_model function.
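Ignoring early stopping for a moment, the bare learning loop is just the following outline (assuming train_model, n_train_batches and n_epochs are defined as above; the complete version, with validation and early stopping, appears in the full listing below):

    for epoch in range(n_epochs):
        for minibatch_index in range(n_train_batches):
            minibatch_avg_cost = train_model(minibatch_index)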

Testing the Model

As explained earlier when discussing how to learn a classifier, when testing the model we are interested in the number of misclassified examples. The LogisticRegression class therefore has an extra errors method, which builds the symbolic graph for retrieving the number of misclassified examples in each minibatch.

The code is as follows:

 
    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        if y.ndim != self.y_pred.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.y_pred.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # the T.neq operator returns a vector of 0s and 1s, where 1
            # represents a mistake in prediction
            return T.mean(T.neq(self.y_pred, y))
        else:
            raise NotImplementedError()

We then create the test_model and validate_model functions. As we will see shortly, validate_model is the key to our early-stopping implementation. Both functions take a minibatch and compute the number of examples in it that the model misclassifies; the only difference is that one draws its minibatches from the test set and the other from the validation set.

 
    # compiling a Theano function that computes the mistakes that are made by
    # the model on a minibatch
    test_model = theano.function(
        inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: test_set_x[index * batch_size: (index + 1) * batch_size],
            y: test_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )

    validate_model = theano.function(
        inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: valid_set_x[index * batch_size: (index + 1) * batch_size],
            y: valid_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )

The Complete Code for Training on MNIST with Theano

"""This tutorial introduces logistic regression using Theano and stochasticgradient descent.Logistic regression is a probabilistic, linear classifier. It is parametrizedby a weight matrix :math:`W` and a bias vector :math:`b`. Classification isdone by projecting data points onto a set of hyperplanes, the distance towhich is used to determine a class membership probability.Mathematically, this can be written as:.. math::  P(Y=i|x, W,b) &= softmax_i(W x + b) \\                &= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}The output of the model or prediction is then done by taking the argmax ofthe vector whose i'th element is P(Y=i|x)... math::  y_{pred} = argmax_i P(Y=i|x,W,b)This tutorial presents a stochastic gradient descent optimization methodsuitable for large datasets.References:    - textbooks: "Pattern Recognition and Machine Learning" -                 Christopher M. Bishop, section 4.3.2"""from __future__ import print_function__docformat__ = 'restructedtext en'import six.moves.cPickle as pickleimport gzipimport osimport sysimport timeitimport numpyimport theanoimport theano.tensor as Tclass LogisticRegression(object):    """Multi-class Logistic Regression Class    The logistic regression is fully described by a weight matrix :math:`W`    and bias vector :math:`b`. Classification is done by projecting data    points onto a set of hyperplanes, the distance to which is used to    determine a class membership probability.    """    def __init__(self, input, n_in, n_out):        """ Initialize the parameters of the logistic regression        :type input: theano.tensor.TensorType        :param input: symbolic variable that describes the input of the                      architecture (one minibatch)        :type n_in: int        :param n_in: number of input units, the dimension of the space in                     which the datapoints lie        :type n_out: int        :param n_out: number of output units, the dimension of the space in                      which the labels lie        """        # start-snippet-1        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)        self.W = theano.shared(            value=numpy.zeros(                (n_in, n_out),                dtype=theano.config.floatX            ),            name='W',            borrow=True        )        # initialize the biases b as a vector of n_out 0s        self.b = theano.shared(            value=numpy.zeros(                (n_out,),                dtype=theano.config.floatX            ),            name='b',            borrow=True        )        # symbolic expression for computing the matrix of class-membership        # probabilities        # Where:        # W is a matrix where column-k represent the separation hyperplane for        # class-k        # x is a matrix where row-j  represents input training sample-j        # b is a vector where element-k represent the free parameter of        # hyperplane-k        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)        # symbolic description of how to compute prediction as class whose        # probability is maximal        self.y_pred = T.argmax(self.p_y_given_x, axis=1)        # end-snippet-1        # parameters of the model        self.params = [self.W, self.b]        # keep track of model input        self.input = input    def negative_log_likelihood(self, y):        """Return the mean of the negative log-likelihood of the prediction        of this model under a given target distribution.        .. 
math::            \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =            \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}                \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\            \ell (\theta=\{W,b\}, \mathcal{D})        :type y: theano.tensor.TensorType        :param y: corresponds to a vector that gives for each example the                  correct label        Note: we use the mean instead of the sum so that              the learning rate is less dependent on the batch size        """        # start-snippet-2        # y.shape[0] is (symbolically) the number of rows in y, i.e.,        # number of examples (call it n) in the minibatch        # T.arange(y.shape[0]) is a symbolic vector which will contain        # [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of        # Log-Probabilities (call it LP) with one row per example and        # one column per class LP[T.arange(y.shape[0]),y] is a vector        # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,        # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is        # the mean (across minibatch examples) of the elements in v,        # i.e., the mean log-likelihood across the minibatch.        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])        # end-snippet-2    def errors(self, y):        """Return a float representing the number of errors in the minibatch        over the total number of examples of the minibatch ; zero one        loss over the size of the minibatch        :type y: theano.tensor.TensorType        :param y: corresponds to a vector that gives for each example the                  correct label        """        # check if y has same dimension of y_pred        if y.ndim != self.y_pred.ndim:            raise TypeError(                'y should have the same shape as self.y_pred',                ('y', y.type, 'y_pred', self.y_pred.type)            )        # check if y is of the correct datatype        if y.dtype.startswith('int'):            # the T.neq operator returns a vector of 0s and 1s, where 1            # represents a mistake in prediction            return T.mean(T.neq(self.y_pred, y))        else:            raise NotImplementedError()def load_data(dataset):    ''' Loads the dataset    :type dataset: string    :param dataset: the path to the dataset (here MNIST)    '''    #############    # LOAD DATA #    #############    # Download the MNIST dataset if it is not present    data_dir, data_file = os.path.split(dataset)    if data_dir == "" and not os.path.isfile(dataset):        # Check if dataset is in the data directory.        new_path = os.path.join(            os.path.split(__file__)[0],            "..",            "data",            dataset        )        if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':            dataset = new_path    if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':        from six.moves import urllib        origin = (            'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz'        )        print('Downloading data from %s' % origin)        urllib.request.urlretrieve(origin, dataset)    print('... 
loading data')    # Load the dataset    with gzip.open(dataset, 'rb') as f:        try:            train_set, valid_set, test_set = pickle.load(f, encoding='latin1')        except:            train_set, valid_set, test_set = pickle.load(f)    # train_set, valid_set, test_set format: tuple(input, target)    # input is a numpy.ndarray of 2 dimensions (a matrix)    # where each row corresponds to an example. target is a    # numpy.ndarray of 1 dimension (vector) that has the same length as    # the number of rows in the input. It should give the target    # to the example with the same index in the input.    def shared_dataset(data_xy, borrow=True):        """ Function that loads the dataset into shared variables        The reason we store our dataset in shared variables is to allow        Theano to copy it into the GPU memory (when code is run on GPU).        Since copying data into the GPU is slow, copying a minibatch everytime        is needed (the default behaviour if the data is not in a shared        variable) would lead to a large decrease in performance.        """        data_x, data_y = data_xy        shared_x = theano.shared(numpy.asarray(data_x,                                               dtype=theano.config.floatX),                                 borrow=borrow)        shared_y = theano.shared(numpy.asarray(data_y,                                               dtype=theano.config.floatX),                                 borrow=borrow)        # When storing data on the GPU it has to be stored as floats        # therefore we will store the labels as ``floatX`` as well        # (``shared_y`` does exactly that). But during our computations        # we need them as ints (we use labels as index, and if they are        # floats it doesn't make sense) therefore instead of returning        # ``shared_y`` we will have to cast it to int. This little hack        # lets ous get around this issue        return shared_x, T.cast(shared_y, 'int32')    test_set_x, test_set_y = shared_dataset(test_set)    valid_set_x, valid_set_y = shared_dataset(valid_set)    train_set_x, train_set_y = shared_dataset(train_set)    rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),            (test_set_x, test_set_y)]    return rvaldef sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000,                           dataset='mnist.pkl.gz',                           batch_size=600):    """    Demonstrate stochastic gradient descent optimization of a log-linear    model    This is demonstrated on MNIST.    :type learning_rate: float    :param learning_rate: learning rate used (factor for the stochastic                          gradient)    :type n_epochs: int    :param n_epochs: maximal number of epochs to run the optimizer    :type dataset: string    :param dataset: the path of the MNIST dataset file from                 http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz    """    datasets = load_data(dataset)    train_set_x, train_set_y = datasets[0]    valid_set_x, valid_set_y = datasets[1]    test_set_x, test_set_y = datasets[2]    # compute number of minibatches for training, validation and testing    n_train_batches = train_set_x.get_value(borrow=True).shape[0] // batch_size    n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] // batch_size    n_test_batches = test_set_x.get_value(borrow=True).shape[0] // batch_size    ######################    # BUILD ACTUAL MODEL #    ######################    print('... 
building the model')    # allocate symbolic variables for the data    index = T.lscalar()  # index to a [mini]batch    # generate symbolic variables for input (x and y represent a    # minibatch)    x = T.matrix('x')  # data, presented as rasterized images    y = T.ivector('y')  # labels, presented as 1D vector of [int] labels    # construct the logistic regression class    # Each MNIST image has size 28*28    classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)    # the cost we minimize during training is the negative log likelihood of    # the model in symbolic format    cost = classifier.negative_log_likelihood(y)    # compiling a Theano function that computes the mistakes that are made by    # the model on a minibatch    test_model = theano.function(        inputs=[index],        outputs=classifier.errors(y),        givens={            x: test_set_x[index * batch_size: (index + 1) * batch_size],            y: test_set_y[index * batch_size: (index + 1) * batch_size]        }    )    validate_model = theano.function(        inputs=[index],        outputs=classifier.errors(y),        givens={            x: valid_set_x[index * batch_size: (index + 1) * batch_size],            y: valid_set_y[index * batch_size: (index + 1) * batch_size]        }    )    # compute the gradient of cost with respect to theta = (W,b)    g_W = T.grad(cost=cost, wrt=classifier.W)    g_b = T.grad(cost=cost, wrt=classifier.b)    # start-snippet-3    # specify how to update the parameters of the model as a list of    # (variable, update expression) pairs.    updates = [(classifier.W, classifier.W - learning_rate * g_W),               (classifier.b, classifier.b - learning_rate * g_b)]    # compiling a Theano function `train_model` that returns the cost, but in    # the same time updates the parameter of the model based on the rules    # defined in `updates`    train_model = theano.function(        inputs=[index],        outputs=cost,        updates=updates,        givens={            x: train_set_x[index * batch_size: (index + 1) * batch_size],            y: train_set_y[index * batch_size: (index + 1) * batch_size]        }    )    # end-snippet-3    ###############    # TRAIN MODEL #    ###############    print('... training the model')    # early-stopping parameters    patience = 5000  # look as this many examples regardless    patience_increase = 2  # wait this much longer when a new best is                                  # found    improvement_threshold = 0.995  # a relative improvement of this much is                                  # considered significant    validation_frequency = min(n_train_batches, patience // 2)                                  # go through this many                                  # minibatche before checking the network                                  # on the validation set; in this case we                                  # check every epoch    best_validation_loss = numpy.inf    test_score = 0.    
start_time = timeit.default_timer()    done_looping = False    epoch = 0    while (epoch < n_epochs) and (not done_looping):        epoch = epoch + 1        for minibatch_index in range(n_train_batches):            minibatch_avg_cost = train_model(minibatch_index)            # iteration number            iter = (epoch - 1) * n_train_batches + minibatch_index            if (iter + 1) % validation_frequency == 0:                # compute zero-one loss on validation set                validation_losses = [validate_model(i)                                     for i in range(n_valid_batches)]                this_validation_loss = numpy.mean(validation_losses)                print(                    'epoch %i, minibatch %i/%i, validation error %f %%' %                    (                        epoch,                        minibatch_index + 1,                        n_train_batches,                        this_validation_loss * 100.                    )                )                # if we got the best validation score until now                if this_validation_loss < best_validation_loss:                    #improve patience if loss improvement is good enough                    if this_validation_loss < best_validation_loss *  \                       improvement_threshold:                        patience = max(patience, iter * patience_increase)                    best_validation_loss = this_validation_loss                    # test it on the test set                    test_losses = [test_model(i)                                   for i in range(n_test_batches)]                    test_score = numpy.mean(test_losses)                    print(                        (                            '     epoch %i, minibatch %i/%i, test error of'                            ' best model %f %%'                        ) %                        (                            epoch,                            minibatch_index + 1,                            n_train_batches,                            test_score * 100.                        )                    )                    # save the best model                    with open('best_model.pkl', 'wb') as f:                        pickle.dump(classifier, f)            if patience <= iter:                done_looping = True                break    end_time = timeit.default_timer()    print(        (            'Optimization complete with best validation score of %f %%,'            'with test performance %f %%'        )        % (best_validation_loss * 100., test_score * 100.)    )    print('The code run for %d epochs, with %f epochs/sec' % (        epoch, 1. * epoch / (end_time - start_time)))    print(('The code for file ' +           os.path.split(__file__)[1] +           ' ran for %.1fs' % ((end_time - start_time))), file=sys.stderr)def predict():    """    An example of how to load a trained model and use it    to predict labels.    """    # load the saved model    classifier = pickle.load(open('best_model.pkl'))    # compile a predictor function    predict_model = theano.function(        inputs=[classifier.input],        outputs=classifier.y_pred)    # We can test it on some examples from test test    dataset='mnist.pkl.gz'    datasets = load_data(dataset)    test_set_x, test_set_y = datasets[2]    test_set_x = test_set_x.get_value()    predicted_values = predict_model(test_set_x[:10])    print("Predicted values for the first 10 examples in test set:")    print(predicted_values)if __name__ == '__main__':    sgd_optimization_mnist()

The user can classify MNIST digits with SGD logistic regression by typing, from within the DeepLearning tutorials folder:

python code/logistic_sgd.py

The output should look something like this:

...
epoch 72, minibatch 83/83, validation error 7.510417 %
     epoch 72, minibatch 83/83, test error of best model 7.510417 %
epoch 73, minibatch 83/83, validation error 7.500000 %
     epoch 73, minibatch 83/83, test error of best model 7.489583 %
Optimization complete with best validation score of 7.500000 %,with test performance 7.489583 %
The code run for 74 epochs, with 1.936983 epochs/sec

Prediction Using a Trained Model

sgd_optimization_mnist saves the model each time the validation error improves. We can reload that model and use it to predict the labels of new data. The predict function below shows an example of how this can be done:

def predict():
    """
    An example of how to load a trained model and use it
    to predict labels.
    """

    # load the saved model (it was written in binary mode, so read it back
    # in binary mode as well)
    classifier = pickle.load(open('best_model.pkl', 'rb'))

    # compile a predictor function
    predict_model = theano.function(
        inputs=[classifier.input],
        outputs=classifier.y_pred)

    # We can test it on some examples from the test set
    dataset = 'mnist.pkl.gz'
    datasets = load_data(dataset)
    test_set_x, test_set_y = datasets[2]
    test_set_x = test_set_x.get_value()

    predicted_values = predict_model(test_set_x[:10])
    print("Predicted values for the first 10 examples in test set:")
    print(predicted_values)
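Under the assumptions of this tutorial (best_model.pkl has been written by sgd_optimization_mnist and mnist.pkl.gz is reachable by load_data), the prediction function can be exercised right after training, for example:

if __name__ == '__main__':
    sgd_optimization_mnist()   # trains and pickles the best model
    predict()                  # reloads it and prints predictions for 10 test images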




