Chapter 10 Unsupervised Learning (2)


Continue


Representational Power, Layer Size and Depth


Most auto-encoders have only a single hidden layer, which is also the representation layer, or code.

  • A single, sufficiently large hidden layer can already represent any function to a given accuracy, e.g. Principal Components Analysis (PCA).

  • A deep auto-encoder is harder to train, but if trained properly it can yield a much better representation (see the sketch below).[1]
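
As a rough illustration of the shallow-vs-deep contrast above (a sketch only, with made-up layer sizes, not the chapter's setup), a single-hidden-layer auto-encoder and a deep auto-encoder could be written in PyTorch as:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
x_dim, h_dim, code_dim = 784, 256, 32

# Shallow auto-encoder: one hidden layer, which is also the code.
shallow = nn.Sequential(
    nn.Linear(x_dim, code_dim), nn.Sigmoid(),   # encoder = code layer
    nn.Linear(code_dim, x_dim),                 # decoder
)

# Deep auto-encoder: a deep encoder and a deep decoder around the same code size.
deep = nn.Sequential(
    nn.Linear(x_dim, h_dim), nn.ReLU(),
    nn.Linear(h_dim, code_dim), nn.ReLU(),      # code layer
    nn.Linear(code_dim, h_dim), nn.ReLU(),
    nn.Linear(h_dim, x_dim),
)

x = torch.rand(16, x_dim)                        # dummy input batch
recon_loss = nn.functional.mse_loss(deep(x), x)  # reconstruction error to minimize
```

Both are trained by minimizing the reconstruction error between the output and the input; the deep version simply stacks more nonlinear layers in the encoder and the decoder.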

Stochastic auto-encoder

x → Q(h|x) → h → P(x|h) → output

Basic scheme of a stochastic auto-encoder. Neither the encoder nor the decoder is a simple function; both inject some noise, which means their output can be seen as a sample from a distribution: Q(h|x) for the encoder and P(x|h) for the decoder. RBMs are a special case where P = Q.
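
A minimal NumPy sketch of this sampling view, assuming (purely for illustration) that both Q(h|x) and P(x|h) are linear-Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian encoder/decoder, for illustration only.
x_dim, h_dim = 6, 2
W_enc = rng.normal(size=(h_dim, x_dim))
W_dec = rng.normal(size=(x_dim, h_dim))
enc_noise, dec_noise = 0.1, 0.1

def sample_code(x):
    """Draw h ~ Q(h|x): deterministic encoding plus injected Gaussian noise."""
    return W_enc @ x + enc_noise * rng.normal(size=h_dim)

def sample_reconstruction(h):
    """Draw x_hat ~ P(x|h): deterministic decoding plus injected Gaussian noise."""
    return W_dec @ h + dec_noise * rng.normal(size=x_dim)

x = rng.normal(size=x_dim)        # an input
h = sample_code(x)                # stochastic code
x_hat = sample_reconstruction(h)  # stochastic reconstruction
```

Running the same input through the encoder twice gives different codes, which is exactly the "output as a sample from a distribution" reading of the diagram above.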

Linear Factor Models

Assumption about how the data was generated

  • sample the real-valued factors
    h ~ P(h)
  • sample the real-valued observable variables
    x = Wh + b + noise

Probabilistic PCA and Factor Analysis

Both are special cases of the equations above; they differ only in the choice of prior and noise distributions.

h ~ P(h) → P(x|h): x = Wh + b + noise

Basic scheme of a linear factor model: an observed data vector x is obtained as a linear combination of the latent factors h, plus some noise.

  • Factor analysis[2], prior:
    h ~ N(0, I)

    The observed variables xᵢ are assumed to be conditionally independent given h; the noise comes from a Gaussian with diagonal covariance matrix
    ψ = diag(σ²)
    where
    σ² = (σ₁², σ₂², ...)

    The role of h is to capture the dependence among the xᵢ:
    x ~ N(b, WWᵀ + ψ)
    where xᵢ influences ĥₖ = Wₖx via wᵢₖ (for every k), and in turn ĥₖ influences xⱼ via wₖⱼ.
  • To cast PCA in a probabilistic framework, make the conditional variances σᵢ all equal, so that the noise covariance becomes the isotropic σ²I.
    In that case
    x ~ N(b, WWᵀ + σ²I)

    or equivalently
    x = Wh + b + σz

    where z ~ N(0, I) is white noise.

Probabilistic PCA

The covariance is mostly captured by the latent variables h, up to some small residual reconstruction error σ².

  • If σ → 0, pPCA becomes PCA.

Continue


Representational Power, Layer Size and Depth


Generally, most trained auto-encoders have had a single hidden layer which is also the representation layer or code.

  • approximator abilities of single hidden-layer neural networks: a sufficiently large hidden layer can represent any function with a given accuracy, e.g. Principal Components Analysis (PCA)

  • training a deep neural network, and in particular a deep auto-encoder (i.e. with a deep encoder and a deep decoder), is more difficult than training a shallow one. If trained properly, such deep auto-encoders could yield much better compression than corresponding shallow or linear auto-encoders.[3]

Stochastic auto-encoder

x → Q(h|x) → h → P(x|h) → output

Basic scheme of a stochastic auto-encoder. Neither the encoder nor the decoder is a simple function; instead both involve some noise injection, meaning that their output can be seen as sampled from a distribution, Q(h|x) for the encoder and P(x|h) for the decoder. RBMs are a special case where P = Q, but in general these two distributions are not necessarily conditional distributions compatible with a unique joint distribution P(x, h).

Linear Factor Models

Assumption about how the data was generated

  • sample the real-valued factors
    h ~ P(h)
  • sample the real-valued observable variables (see the sampling sketch below)
    x = Wh + b + noise
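
A minimal NumPy sketch of this two-step generative story, assuming for illustration that the prior P(h) is the standard Gaussian and the noise is Gaussian (the general model only fixes the linear form x = Wh + b + noise):

```python
import numpy as np

rng = np.random.default_rng(0)

x_dim, h_dim, n = 5, 2, 10_000       # hypothetical sizes
W = rng.normal(size=(x_dim, h_dim))
b = rng.normal(size=x_dim)
noise_std = 0.1

# Step 1: sample the real-valued factors, h ~ P(h); here P(h) = N(0, I).
h = rng.normal(size=(n, h_dim))

# Step 2: sample the observable variables, x = W h + b + noise.
x = h @ W.T + b + noise_std * rng.normal(size=(n, x_dim))
```

Different choices for P(h) and for the noise distribution in this sketch give the different members of the family discussed next (probabilistic PCA, factor analysis, ICA).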

Probabilistic PCA and Factor Analysis

Both are special cases of the above equations and differ only in the choices made for the prior and noise distributions.

h ~ P(h) → P(x|h): x = Wh + b + noise

Basic scheme of a linear factors model, in which it is assumed that an observed data vector x is obtained by a linear combination of latent factors h, plus some noise. Different models, such as probabilistic PCA, factor analysis or ICA, make different choices about the form of the noise and of the prior P(h).

  • Factor analysis[4], latent variable prior:
    h ~ N(0, I)

    where the xᵢ are assumed to be conditionally independent given h, and the noise is assumed to come from a diagonal-covariance Gaussian distribution, with covariance matrix
    ψ = diag(σ²)

    where
    σ² = (σ₁², σ₂², ...)

    The role of the latent variables is to capture the dependence among the xᵢ:
    x ~ N(b, WWᵀ + ψ)

    where
    xᵢ influences ĥₖ = Wₖx via wᵢₖ (for every k) and ĥₖ influences xⱼ via wₖⱼ
  • In order to cast PCA in a probabilistic framework, make the conditional variances σᵢ equal to each other (see the sketch after this list).
    In that case
    x ~ N(b, WWᵀ + σ²I)

    or equivalently
    x = Wh + b + σz

    where z ~ N(0, I) is white noise.
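
The following NumPy sketch (toy sizes and noise levels assumed) samples from the factor-analysis model with diagonal noise ψ = diag(σ²) and checks that the empirical covariance of x approaches WWᵀ + ψ; making all σᵢ equal turns the same code into probabilistic PCA with covariance WWᵀ + σ²I:

```python
import numpy as np

rng = np.random.default_rng(0)

x_dim, h_dim, n = 4, 2, 200_000            # hypothetical sizes
W = rng.normal(size=(x_dim, h_dim))
b = rng.normal(size=x_dim)
sigma = np.array([0.3, 0.5, 0.2, 0.4])     # per-variable noise std (factor analysis)
# sigma = np.full(x_dim, 0.3)              # equal variances -> probabilistic PCA

h = rng.normal(size=(n, h_dim))                        # h ~ N(0, I)
x = h @ W.T + b + sigma * rng.normal(size=(n, x_dim))  # diagonal Gaussian noise

empirical_cov = np.cov(x, rowvar=False)
model_cov = W @ W.T + np.diag(sigma**2)                # W W^T + psi
print(np.max(np.abs(empirical_cov - model_cov)))       # small for large n
```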

Probabilistic PCA

The covariance is mostly captured by the latent variables h, up to some small residual reconstruction error σ².

  • If σ → 0, pPCA becomes PCA (see the numerical check below).
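
A rough numerical check of this limit (toy parameters assumed, not a derivation): with a very small σ, the top principal directions of the generated data span essentially the same subspace as the columns of W:

```python
import numpy as np

rng = np.random.default_rng(0)

x_dim, h_dim, n = 5, 2, 100_000
W = rng.normal(size=(x_dim, h_dim))
sigma = 1e-3                                   # nearly noise-free: pPCA -> PCA

h = rng.normal(size=(n, h_dim))
x = h @ W.T + sigma * rng.normal(size=(n, x_dim))

# Principal directions of the data (classical PCA via eigendecomposition).
evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
pca_dirs = evecs[:, -h_dim:]                   # top h_dim eigenvectors

# Compare the PCA subspace with the column space of W via principal angles.
W_basis, _ = np.linalg.qr(W)
overlap = np.linalg.svd(W_basis.T @ pca_dirs, compute_uv=False)
print(overlap)                                 # all close to 1 when sigma -> 0
```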

  1. Bartholomew, 1987; Basilevsky, 1994
  2. Bartholomew, 1987; Basilevsky, 1994
  3. Bartholomew, 1987; Basilevsky, 1994
  4. Bartholomew, 1987; Basilevsky, 1994