Adversarial Autoencoder
https://arxiv.org/abs/1511.05644
- A generative model aims to produce new data that follows a certain distribution, thereby capturing that distribution.
- An adversarial autoencoder (AAE) turns an autoencoder into a generative model.
- Matching the aggregated posterior to the prior ensures that generating from any part of the prior space yields meaningful samples.
Motivation
The adversarial autoencoder matches the aggregated posterior distribution of the latent representation of the autoencoder to an arbitrary prior distribution.
The result of the training is that the encoder learns to convert the data distribution to the prior distribution, while the decoder learns a deep generative model that maps the imposed prior to the data distribution.
In other words, the encoder maps the data distribution onto the prior distribution in the latent space, and the decoder maps it back to the data distribution. The rationale is that any sample drawn from the prior over the latent space should decode to something meaningful.
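As a rough illustration: once training has matched the aggregated posterior to the prior, generation only requires sampling from the prior and decoding. The sketch below assumes PyTorch and a stand-in MLP decoder; all sizes and names are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784          # assumed sizes (e.g. flattened MNIST)
decoder = nn.Sequential(               # stand-in decoder network
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)        # draw 16 codes from the prior N(0, I)
samples = decoder(z)                   # decode them into new data samples
```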
Method
The adversarial training procedure is used to shape the code distribution toward a chosen prior, or equivalently to constrain the representation to a particular form.
In the architecture figure of the paper, x is the data and z is the code in the latent space. The upper part is an autoencoder that builds a one-to-one mapping between x and z; the lower part is the discriminator, which forces the distribution of z to match the prior p(z).
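A minimal sketch of these three components, assuming PyTorch and plain MLPs; the layer sizes and variable names below are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

data_dim, latent_dim = 784, 8

encoder = nn.Sequential(                     # q(z|x): maps data x to code z
    nn.Linear(data_dim, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
decoder = nn.Sequential(                     # p(x|z): maps code z back to data
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)
discriminator = nn.Sequential(               # D(z): prior sample vs. encoder output
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
```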
GAN
The discriminator gradually pushes the distribution of generated samples to become indistinguishable from that of the real samples.
In an AAE, the adversarial training procedure instead makes the codes produced by the encoder match the prior distribution.
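Concretely, the discriminator is trained to score prior samples as real and encoder codes as fake, while the encoder is trained to fool it. A sketch of the two objectives, reusing the networks defined above (assumed PyTorch; the labels and batch size are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.rand(32, data_dim)              # a mini-batch of (dummy) data
z_fake = encoder(x)                       # codes produced by the encoder
z_real = torch.randn(32, latent_dim)      # samples drawn from the prior p(z)

# Discriminator objective: prior samples are labelled 1, encoder codes 0.
d_loss = F.binary_cross_entropy(discriminator(z_real), torch.ones(32, 1)) \
       + F.binary_cross_entropy(discriminator(z_fake.detach()), torch.zeros(32, 1))

# Encoder ("generator") objective: make its codes look like prior samples.
g_loss = F.binary_cross_entropy(discriminator(z_fake), torch.ones(32, 1))
```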
Training
- Both the adversarial network and the autoencoder are trained jointly with SGD in two phases – the reconstruction phase and the regularization phase – executed on each mini-batch.
- In the reconstruction phase, the autoencoder updates the encoder and the decoder to minimize the reconstruction error of the inputs.
- In the regularization phase, the adversarial network first updates its discriminative network to tell apart the true samples (generated using the prior) from the generated samples (the hidden codes computed by the autoencoder). The adversarial network then updates its generator (which is also the encoder of the autoencoder) to confuse the discriminative network. A sketch of one such training step follows this list.
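A minimal sketch of one training step, assuming PyTorch and the encoder, decoder and discriminator sketched in the Method section above; the optimizers, learning rates and the use of MSE as the reconstruction loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(x):
    batch = x.size(0)

    # Reconstruction phase: update encoder and decoder to reconstruct x.
    recon_loss = F.mse_loss(decoder(encoder(x)), x)
    opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

    # Regularization phase, step 1: update the discriminator to tell
    # prior samples ("true") apart from encoder codes ("generated").
    z_real = torch.randn(batch, latent_dim)
    z_fake = encoder(x).detach()
    d_loss = F.binary_cross_entropy(discriminator(z_real), torch.ones(batch, 1)) \
           + F.binary_cross_entropy(discriminator(z_fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Regularization phase, step 2: update the encoder (the generator)
    # so that its codes fool the discriminator.
    g_loss = F.binary_cross_entropy(discriminator(encoder(x)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```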
Unsupervised Clustering
- Using a categorical indicator variable y, the data can be divided into a predefined number of clusters.
- The label information y and the style information z are disentangled; a sketch of such an encoder follows this list.
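A rough sketch of what this looks like, assuming PyTorch; the two-headed encoder, layer sizes and names below are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

n_clusters, style_dim, data_dim = 10, 8, 784

class ClusteringEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU())
        self.head_y = nn.Linear(256, n_clusters)    # cluster indicator (logits)
        self.head_z = nn.Linear(256, style_dim)     # style code

    def forward(self, x):
        h = self.body(x)
        y = torch.softmax(self.head_y(h), dim=1)    # pushed toward one-hot by an
                                                    # adversary with a categorical prior
        z = self.head_z(h)                          # pushed toward a Gaussian prior
        return y, z

# The decoder reconstructs x from the concatenation [y, z], so y carries the
# cluster label while z carries the remaining (style) variation.
decoder_yz = nn.Sequential(
    nn.Linear(n_clusters + style_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)
```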
Example (figure omitted)
Example (mapping; figure omitted)