Winner-take-all Autoencoder


While experimenting with various unsupervised learning methods for extracting features from bearing vibration signals for fault diagnosis, I recently read the Winner-take-all Autoencoder paper and looked at how it is implemented.

References:

1. Paper: Winner-take-all Autoencoders

2. Code:

a. Fully Connected WTA

b. Convolutional WTA

In brief:

1. Spatial sparsity means that, for each sample, every feature map of the convolutional layer keeps only one active neuron: the neuron with the largest activation is retained and all the others are set to 0.

2. Lifetime sparsity is applied after spatial sparsity. Since each feature map then has exactly one active neuron per sample, a mini-batch of size m yields m active neurons (winners) for a given feature map, one per sample. Lifetime sparsity keeps only the k largest of these m winners and sets the rest to 0 (a toy numeric example follows this list).
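
To make the two constraints concrete, here is a small NumPy illustration of my own (not taken from the referenced repository), using a single feature map and a mini-batch of 4 samples:

    import numpy as np

    # Toy activations: mini-batch of 4 samples, one 2x2 feature map -> shape (n, h, w, c) = (4, 2, 2, 1)
    h = np.array([[[[1.], [5.]], [[2.], [3.]]],
                  [[[7.], [0.]], [[4.], [1.]]],
                  [[[2.], [2.]], [[9.], [6.]]],
                  [[[3.], [8.]], [[1.], [0.]]]], dtype=np.float32)

    # Spatial sparsity: for every sample, keep only the largest activation in the feature map.
    flat = h.reshape(4, -1)                    # (n, h*w), since c = 1
    winners = flat.max(axis=1)                 # per-sample winner values: [5, 7, 9, 8]
    spatial = np.where(flat == winners[:, None], flat, 0.).reshape(h.shape)

    # Lifetime sparsity with rate 0.5: of the 4 winners of this feature map,
    # keep only the top k = 2 (here 9 and 8) and zero the map for the other samples.
    k = 2
    kth = np.sort(winners)[-k]                 # k-th largest winner value
    lifetime = np.where(winners[:, None, None, None] >= kth, spatial, 0.)

    print(lifetime.squeeze(-1))                # only samples 3 and 4 keep a nonzero unit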

Main code:

    def _spatial_sparsity(self, h):
        shape = tf.shape(h)
        n = shape[0]
        c = shape[3]

        h_t = tf.transpose(h, [0, 3, 1, 2])  # n, c, h, w
        h_r = tf.reshape(h_t, tf.stack([n, c, -1]))  # n, c, h*w

        th, _ = tf.nn.top_k(h_r, 1)  # n, c, 1
        th_r = tf.reshape(th, tf.stack([n, 1, 1, c]))  # n, 1, 1, c
        drop = tf.where(h < th_r,
                        tf.zeros(shape, tf.float32), tf.ones(shape, tf.float32))

        # spatially dropped & winner
        return h * drop, tf.reshape(th, tf.stack([n, c]))  # n, c

    def _lifetime_sparsity(self, h, winner, rate):
        shape = tf.shape(winner)
        n = shape[0]
        c = shape[1]
        k = tf.cast(rate * tf.cast(n, tf.float32), tf.int32)

        winner = tf.transpose(winner)  # c, n
        th_k, _ = tf.nn.top_k(winner, k)  # c, k

        shape_t = tf.stack([c, n])
        drop = tf.where(winner < th_k[:, k-1:k],  # c, n
                        tf.zeros(shape_t, tf.float32), tf.ones(shape_t, tf.float32))
        drop = tf.transpose(drop)  # n, c
        return h * tf.reshape(drop, tf.stack([n, 1, 1, c]))
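
For context, these two functions are applied to the encoder's last convolutional feature map before decoding. The snippet below is only a rough usage sketch; the `_encode`/`_decode` helpers, the method name, and the default lifetime rate are my assumptions and not necessarily how the referenced repository wires things up:

    def _build_loss(self, x, lifetime_rate=0.05):
        # Hypothetical wiring; the referenced repository may differ in details.
        h = self._encode(x)                           # conv features, shape (n, h, w, c)
        h, winner = self._spatial_sparsity(h)         # one active unit per feature map
        h = self._lifetime_sparsity(h, winner, lifetime_rate)  # keep top-k winners across the batch
        y = self._decode(h)                           # reconstruct the input from the sparse code
        return tf.reduce_mean(tf.square(y - x))       # reconstruction (MSE) loss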