Exploring Theano (2): Random Numbers and a Gradient-Descent Logistic Model

Random number generation in Theano:
A random stream defined with RandomStreams feeds random draws into compiled functions. If a function is compiled with no_default_updates=True, the random generator's RandomState is not updated between calls, so that function produces the same random numbers every time it is called.
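The effect of no_default_updates can be pictured with Python's standard random module alone (a minimal stdlib sketch, not Theano itself): if the generator's state is restored after every draw, the update is undone and each call returns the same number.

```python
import random

rng = random.Random(234)

# A normally updating generator: each call advances the state, so values differ.
a = rng.random()
b = rng.random()
assert a != b

# Mimic no_default_updates=True: snapshot the state and restore it after
# each draw, so the generator never effectively advances.
state = rng.getstate()
c = rng.random()
rng.setstate(state)   # undo the state update
d = rng.random()
assert c == d         # identical draws, like g() in the Theano example below
```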

The random-variable objects produced by a RandomStreams distribution expose the underlying generator through rng.get_value and rng.set_value. These methods take a borrow parameter which, together with return_internal_type, controls whether the returned RandomState is shared by reference or copied; the details are not important here.
As with a plain numpy RandomState, when several compiled functions draw from the same stream, calling one of them advances the generator and therefore changes the RandomState seen by the others. The code below also shows how to save, restore, and copy a RandomState so that two functions (here, two graphs) produce identical random streams.
from theano.tensor.shared_randomstreams import RandomStreams
from theano import function

srng = RandomStreams(seed=234)
rv_u = srng.uniform((2, 2))
rv_n = srng.normal((2, 2))

f = function([], rv_u)
g = function([], rv_n, no_default_updates=True)  # g's generator is never updated
nearly_zeros = function([], rv_u + rv_u - 2 * rv_u)  # rv_u is drawn only once per call

f_val0 = f()
f_val1 = f()  # different from f_val0
print(f_val0, f_val1)

g_val0 = g()
g_val1 = g()  # same as g_val0: no_default_updates=True freezes the RandomState
print(g_val0, g_val1)

# Reseed the generator behind rv_u.
rng_val = rv_u.rng.get_value(borrow=True)
rng_val.seed(89234)
rv_u.rng.set_value(rng_val, borrow=True)

# Save the generator state, advance it, then restore it.
state_after_v0 = rv_u.rng.get_value().get_state()
nearly_zeros()  # consumes one draw from rv_u
v1 = f()
rng = rv_u.rng.get_value()
rng.set_state(state_after_v0)
rv_u.rng.set_value(rng, borrow=True)
v2 = f()  # v2 != v1: v2 repeats the draw consumed by nearly_zeros()
v3 = f()  # v3 == v1
print(v1)
print(v2)
print(v3)

import theano
from theano.sandbox.rng_mrg import MRG_RandomStreams

class Graph():
    def __init__(self, seed=123):
        self.rng = RandomStreams(seed)
        self.y = self.rng.uniform(size=(1,))

g1 = Graph(seed=123)
f1 = theano.function([], g1.y)
g2 = Graph(seed=987)
f2 = theano.function([], g2.y)
print(f1())
print(f2())  # different: the two graphs were seeded differently

def copy_random_state(g1, g2):
    # Copy every generator state from g1's stream into g2's.
    if isinstance(g1.rng, MRG_RandomStreams):
        g2.rng.rstate = g1.rng.rstate
    for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
        su2[0].set_value(su1[0].get_value())

copy_random_state(g1, g2)
print(f1())
print(f2())  # now both streams produce identical draws
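The copy_random_state trick above has a direct analogue in the standard library's random.Random, where getstate/setstate play the role of the state_updates copy (a plain-Python sketch, not Theano):

```python
import random

g1 = random.Random(123)
g2 = random.Random(987)

# Different seeds, so different streams.
assert g1.random() != g2.random()

# Copy g1's state into g2; from this point both produce identical draws.
g2.setstate(g1.getstate())
v1 = g1.random()
v2 = g2.random()
assert v1 == v2
```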




Below is an example of logistic regression trained by gradient descent. The data are random, with no relationship between features and labels, so the model can do nothing but overfit; the example is purely illustrative:

import numpy
import theano
import theano.tensor as T

rng = numpy.random

N = 400       # number of samples
feats = 784   # number of features
# Random data: features and labels are independent, so fitting = overfitting.
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))
training_steps = 10000

x = T.dmatrix('x')
y = T.dvector('y')
w = theano.shared(rng.randn(feats), name='w')
b = theano.shared(0., name='b')
print("Initial model:")
print(w.get_value())
print(b.get_value())

p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))            # probability that target = 1
prediction = p_1 > 0.5
xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)  # cross-entropy loss
cost = xent.mean() + 0.01 * (w ** 2).sum()         # mean loss plus L2 penalty
gw, gb = T.grad(cost, [w, b])

train = theano.function(
    inputs=[x, y],
    outputs=[prediction, xent],
    updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)))
predict = theano.function(inputs=[x], outputs=prediction)

for i in range(training_steps):
    pred, err = train(D[0], D[1])

print("Final model:")
print(w.get_value())
print(b.get_value())
print("target values for D:")
print(D[1])
print("prediction on D:")
print(predict(D[0]))
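The symbolic gradients returned by T.grad can be sanity-checked by hand: for one sample, the derivative of the cross-entropy with respect to b is p_1 - y, and with respect to w it is (p_1 - y) * x. A small pure-Python finite-difference check, using made-up scalar values (an illustration, not part of the original post):

```python
import math

def loss(w, b, x, y):
    # Single-feature logistic cross-entropy for one sample.
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    return -y * math.log(p) - (1 - y) * math.log(1 - p)

w, b, x, y = 0.5, -0.2, 1.3, 1.0
p = 1.0 / (1.0 + math.exp(-(w * x + b)))

# Central finite differences should match the analytic gradients
# dL/db = p - y and dL/dw = (p - y) * x.
eps = 1e-6
num_gb = (loss(w, b + eps, x, y) - loss(w, b - eps, x, y)) / (2 * eps)
num_gw = (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
assert abs(num_gb - (p - y)) < 1e-6
assert abs(num_gw - (p - y) * x) < 1e-6
```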


