1014: Generative Autoencoder (GAE)

CATSMILE-1014
Preface


  • Goal: Re-organize the framework around generative autoencoders, viewing a series of statistical learning frameworks, from MLR, PCA and NMF to tSNE, LLE, NN-VAE, GAN and so on, through a single unified optimization lens, and thinking along the boundary between probabilistic modeling and optimization.
  • Background and motivation: While writing 1013 I got myself confused because the latent-variable framework was unclear. Having arrived at a rough picture, this note extracts and reorganizes the main framework.
  • Conclusions:
  • Remarks: An old topic that keeps renewing itself. I have a vague feeling this will become a milestone of CATSMILE.
  • Updates:

    • 20220629: added the marginal-distribution variational view
    • 20220619: added the variational view
    • 20220614

  • Related models:

    • PCA
    • LLGAE
    • Deep Image Prior

  • Keywords:

    • Autoencoder
    • Dimensionality Reduction
    • Unsupervised Learning
    • Compressed Sensing

  • Future directions:

    • Transformer Attention Variational Autoencoders
    • Prior works quoted by VAE

Autoencoders

An autoencoder is a convenient framework for studying data compression quantitatively, and it is a closely related angle of attack to compressed sensing. Considering all samples on a manifold, an autoencoder must apply the two steps (encode -> decode) to an arbitrary sample while preserving the information needed to recover the manifold structure. This preservation of information is usually measured with a metric function. In general, the manifold can be represented by a measure on a vector space.
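As a minimal sketch of this setup, using hypothetical symbols (enc and dec for the two maps, d for the metric function, and p_data for the measure representing the manifold), the generic objective can be written as:

$$
L(\mathrm{enc}, \mathrm{dec}) \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\Big[\, d\big(x,\ \mathrm{dec}(\mathrm{enc}(x))\big) \Big]
$$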
The negative loss from a variational viewpoint: from the self-mapping joint distribution to the marginal distribution

Suppose we give up an explicit encoder and cut the x -> z connection.

We find that a GAE can avoid introducing an explicit encoder altogether, and therefore does not need to approximate the KL divergence of the marginal distribution with the KL divergence of the joint distribution. The idea behind the GAE is in fact very simple: given a generative model, approximate the posterior distribution with a point estimate, and use it to estimate the marginal distribution. Consider the following decomposition of the KL loss, using a uniform prior.
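A minimal sketch of such a point-estimate bound, assuming a uniform prior $p(z) = 1/K$ over a finite code set of size $K$ and writing $m$ for the model parameters (maximizing $\mathbb{E}_x[\log p_m(x)]$ is, up to a constant, the same as minimizing the KL from the data distribution to the model):

$$
\log p_m(x) \;=\; \log \sum_{z} \tfrac{1}{K}\, p_m(x \mid z)
\;\ge\; \max_{z}\, \log p_m(x \mid z) \;-\; \log K
$$

Maximizing the right-hand side encodes each $x$ by the point estimate $\hat z(x) = \arg\max_z \log p_m(x \mid z)$ and fits the decoder at that point, which is exactly the encoder-free recipe described above.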

In practice, solving for this point estimate can itself be done by gradient ascent, which keeps everything differentiable in the continuous variables: write the result after t ascent steps as a function of the sample and the parameters, and evaluate the loss at that point. The benefit of doing so is that optimization tricks can be used to define the point estimate of the posterior. We may also approximate the original logsumexp integral with a logsumexp over multiple samples; the point estimate is simply the special case n = 1.

We thus obtain a differentiable objective: defining the t-step estimate induces a point-estimate approximation of the posterior, the loss is evaluated at that point, and this function provides gradients for the parameters m. Since the gradient-ascent iterate itself already contains first-order gradients in m, the actual optimization involves some second-order terms in m.
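A minimal PyTorch sketch of this unrolled point-estimate encoder (the helper names `decoder_logprob`, `inner_steps`, `inner_lr` and the linear-Gaussian decoder are hypothetical, chosen only for illustration): each sample gets a few gradient-ascent steps on z under the current decoder, the unrolled steps stay on the computation graph, and backpropagating the resulting loss into the decoder parameters produces the second-order terms in m mentioned above.

```python
import torch

def unrolled_point_estimate_loss(x, decoder_logprob, z_dim,
                                 inner_steps=5, inner_lr=0.1):
    """Differentiable point-estimate (MAP-style) encoding by unrolled
    gradient ascent on z. `decoder_logprob(x, z)` returns per-sample
    log p(x|z) under the current decoder parameters."""
    # Start the inner optimization from zero (any fixed init works).
    z = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    for _ in range(inner_steps):
        logp = decoder_logprob(x, z).sum()
        # create_graph=True keeps the unrolled steps differentiable in m,
        # which is where the second-order terms in m come from.
        (grad_z,) = torch.autograd.grad(logp, z, create_graph=True)
        z = z + inner_lr * grad_z          # gradient *ascent* on log p(x|z)
    # Negative log-likelihood at the t-step point estimate z_t(x, m).
    return -decoder_logprob(x, z).mean()

# Example with a hypothetical linear-Gaussian decoder p(x|z) = N(Wz, I):
W = torch.randn(784, 16, requires_grad=True)
def decoder_logprob(x, z):
    return -0.5 * ((x - z @ W.T) ** 2).sum(dim=-1)

x = torch.randn(32, 784)
loss = unrolled_point_estimate_loss(x, decoder_logprob, z_dim=16)
loss.backward()                            # gradients flow into W through z_t
```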
The negative loss from a variational viewpoint: self-mapping

The usual autoencoding problem can be expressed as approximating the conditional probability distribution near the data manifold; a generative autoencoder approximates this loss with a MAP lower bound. One can see that, ignoring the prior over the latent variable, a GAE is uniquely defined by a decoder alone.

Here a uniform prior is generally placed on the latent variable, and the maximization step corresponds to encoding each data point. The inference procedure can be viewed implicitly as using a Dirac distribution to approximate the natural posterior that the decoder induces in the Bayesian sense.
Original derivation of the loss function

In what follows, m denotes all model parameters.

Note that we can constrain either the mean of the autoencoding negative loss, or its minimum over samples. Since the mean corresponds to an expectation, every sample contributes to the gradient, which is more efficient, so this note focuses on that setting. In the long run, constraining the worst-case (minimum) negative loss is of greater engineering significance, but it is also harder. Note, however, that the minimum can be smoothed, which recovers an expectation form.
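As a sketch of that smoothing step (using $\ell_i$ for the per-sample negative loss and a temperature $\tau$, both hypothetical symbols), the per-sample minimum can be replaced by a soft-min:

$$
\min_i \ell_i \;\approx\; -\tau \log \sum_i \exp(-\ell_i / \tau)
$$

which tends to the hard minimum as $\tau \to 0$, and whose gradient is a Boltzmann-weighted average of the per-sample gradients, i.e. an expectation form.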

In what follows, we first treat the simplest case. Specifically, we require the generator to reconstruct the data in a probabilistic sense; although this introduces uncertainty, it connects directly to mixture models, and the LOE lower bound can later be tightened into a constraint by replacing LEOP with EOL. Following the variational idea, we introduce a mixing function to construct a mixture model.
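If LEOP and EOL are read in the usual CATSMILE sense, the log of an expectation of probabilities and the expectation of log-probabilities, the replacement above is Jensen's inequality applied to the mixing function $q(z \mid x)$ (both the reading and the symbols here are assumptions):

$$
\log \mathbb{E}_{q(z \mid x)}\big[\, p_m(x \mid z) \,\big] \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\, \log p_m(x \mid z) \,\big]
$$

so replacing the log-of-expectation by the expectation-of-log gives a lower bound on the mixture likelihood, with the mixing function playing the role of a variational encoder.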


We find that maximizing this negative loss in stages corresponds to the different phases of modeling.

The first inequality becomes tight at the optimal encoding, and the second at the optimal model. The optimal encoding is therefore the per-sample maximizer of the decoder likelihood.

We can therefore view a generative autoencoder as using the log-likelihood of a parametric model as the loss metric, which induces an autoencoder defined by maximum likelihood. The advantage is that both the encoder and the decoder are defined through the same distribution.
Special case: GAE on a discrete code space with a mixture model

If we decompose the decoder as a mixture model and place no constraint on the components, the optimal encoding is simply the component with the highest likelihood for each sample; K-means can be regarded as such an autoencoder.
In particular, if the number of components equals or exceeds the number of data points, perfect reconstruction can always be achieved. It is debatable whether this counts as overfitting: the model approaches the KNN algorithm, gradually collapsing onto the data itself, with no parameters beyond the data. The practical problem with such models is finding nearest neighbours efficiently in a large dataset.
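A minimal NumPy sketch of K-means written in this form (the function and variable names are hypothetical): encoding is the point estimate over the components of a fixed-variance Gaussian decoder, i.e. nearest-centroid assignment, and the decoder update refits each component mean.

```python
import numpy as np

def kmeans_as_gae(X, K, n_iters=20, seed=0):
    """K-means written as a GAE: decoder p(x|z=k) = N(mu_k, I),
    encoder = point estimate argmax_k log p(x|k) (nearest centroid)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), K, replace=False)]       # decoder parameters
    for _ in range(n_iters):
        # Encode: point-estimate posterior = nearest component.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        z = d2.argmin(axis=1)
        # Decode / M-step: maximize sum_i log p(x_i | z_i) over mu.
        for k in range(K):
            if (z == k).any():
                mu[k] = X[z == k].mean(axis=0)
    recon = mu[z]                                      # decoded reconstructions
    return z, mu, recon

# Usage: codes, centroids, recon = kmeans_as_gae(np.random.randn(500, 2), K=5)
```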
Special case: GAE with a Gaussian-kernel continuous code-space mixture model (GGAE)

In CATSMILE-1013 we constructed a regularized code space, used to mix the candidate components into a generative distribution.
Special case: GAE with a linear continuous code-space mixture model (LGAE)

If we let the decoding distribution be a Gaussian with fixed noise whose mean is free to move, and introduce a dimension-raising (linear decoding) matrix, we obtain a loss of the same form as linear regression.


Encoder:


Here the linear decoding matrix defines a low-dimensional linear subspace that approximates the original manifold. Note that if we explicitly run gradient descent on z, we obtain an update that can be viewed as a residual neural network with self-feedback.


If, as in PCA, we additionally require the decoding matrix to be orthonormal, the gradient-descent update simplifies: z is eliminated from the update, meaning that from any starting point a single step lands at one special position, which is also exactly where the gradient vanishes.

Note that this position is exactly the PCA encoding; in other words, under the orthonormality constraint the original loss can be solved in closed form by PCA/SVD. In practice, the difficulty lies in enforcing orthonormality during optimization.
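A small numerical check of this claim, as a sketch with hypothetical `W`, `x`, and step size: for the squared-error linear decoder the z-gradient is $W^\top W z - W^\top x$, so with an orthonormal $W$ and unit step size a single gradient step from any starting z lands on $z = W^\top x$, the PCA encoding, where the gradient also vanishes.

```python
import torch

d, h = 8, 3
# Orthonormal decoding matrix W (orthonormal columns), via QR.
W, _ = torch.linalg.qr(torch.randn(d, h))
x = torch.randn(d)

def grad_z(z):
    # d/dz of 0.5 * ||x - W z||^2  =  W^T W z - W^T x
    return W.T @ (W @ z - x)

z0 = torch.randn(h)            # arbitrary starting point
z1 = z0 - 1.0 * grad_z(z0)     # one gradient step with unit step size
print(torch.allclose(z1, W.T @ x, atol=1e-6))                       # True: PCA encoding
print(torch.allclose(grad_z(W.T @ x), torch.zeros(h), atol=1e-6))   # gradient vanishes there
```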
Special case: locally linear generative autoencoder (LLGAE)

Note a problem with the globally linear model: if the actual manifold is only locally linear, the global linearity assumption breaks down. The decoding matrix therefore also needs to become a function of a latent variable. A simple approach is to build an interpolation map over a discrete component index k; defining this map on k is more convenient than defining it on z, because z is a coordinate inside the linear space and in principle cannot, in turn, define the linear space itself. Taking a Gaussian-kernel form for the interpolation and adding an offset term, we obtain the locally linear decoder.

Note that smoothing is used here, which effectively marginalizes out a discrete latent variable. If the two latent variables z and k are kept jointly, a hierarchical structure appears.

Optimization over z can use gradient descent, while optimization over k can simply take the best value directly. For a fair comparison with other dimensionality-reduction algorithms, one can also consider parameterizing the components with a Gaussian-kernel embedding space.
Note that LLGAE must optimize over every candidate component simultaneously: for each sample it needs K*E gradient-descent steps on z, where K is the number of components and E the number of inner steps. This could perhaps be accelerated with sampling or embedding methods to reduce the number of components that must be searched.
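A minimal PyTorch sketch of this inference loop (names such as `llgae_encode`, `Ws`, `bs`, `inner_steps` are hypothetical): for each of the K candidate components, run E gradient-descent steps on z under that component's linear decoder, then keep the component and code with the lowest reconstruction loss, which is the K*E cost per sample noted above.

```python
import torch

def llgae_encode(x, Ws, bs, inner_steps=10, lr=0.1):
    """Point-estimate inference for a locally linear GAE (sketch).
    Ws: (K, d, h) per-component decoding matrices; bs: (K, d) offsets."""
    K, d, h = Ws.shape
    best = (None, None, float("inf"))          # (k, z, loss)
    for k in range(K):
        z = torch.zeros(h, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(inner_steps):           # E inner steps on z
            loss = 0.5 * ((x - (Ws[k] @ z + bs[k])) ** 2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            loss = 0.5 * ((x - (Ws[k] @ z + bs[k])) ** 2).sum().item()
        if loss < best[2]:
            best = (k, z.detach(), loss)
    return best                                 # K * E gradient steps per sample

# Usage with random parameters:
K, d, h = 4, 16, 2
k, z, loss = llgae_encode(torch.randn(d), torch.randn(K, d, h), torch.randn(K, d))
```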
Special case: Deep Image Prior

https://arxiv.org/pdf/1711.10925.pdf
Note that Deep Image Prior (DIP), in practice, has a loss of a form similar to a GAE, so we can write out its conditional generative distribution correspondingly. Here the inferred latent variables are the network weights, while z is generally a fixed Gaussian noise prior or some fixed vector that does not need to be inferred; the conditioning also includes everything that defines the network apart from its weights, such as connectivity and activation functions. In general, DIP has a large number of latent variables to infer by gradient descent, but because the interactions between these latent variables are local, some good properties remain.
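A minimal sketch of this setup (the tiny convolutional net and all names here are illustrative, not the architecture from the paper): the noise input z stays fixed and only the network weights are optimized to reconstruct a single target image, so the inferred latent variables are the weights themselves.

```python
import torch
import torch.nn as nn

# Deep-Image-Prior-style fit (sketch): infer the weights, keep z fixed.
target = torch.rand(1, 3, 64, 64)              # single image to reconstruct
z = torch.randn(1, 8, 64, 64)                  # fixed noise input, never optimized

net = nn.Sequential(                           # a small illustrative conv net
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):                        # early stopping acts as the prior
    recon = net(z)
    loss = ((recon - target) ** 2).mean()      # same squared-loss form as a GAE
    opt.zero_grad(); loss.backward(); opt.step()
```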

Special case: Maximally Activated Input

A common way to probe a hidden unit of a neural network is to look for the input image that maximizes the activation of a given neuron; this is the inverse form of a GAE. If we take the activation to be the log of some conditional probability, then we are inferring an input that fits that concept. However, this is closer to a sampling process than an inference process: if the optimization runs over the data vector it is a kind of sampling, whereas if it runs over the latent variables it is a kind of inference.
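A minimal PyTorch sketch of activation maximization (the network and names are illustrative): the optimization runs over the input image rather than over model parameters or latent codes, which is why the text reads it as a sampling-like procedure.

```python
import torch
import torch.nn as nn

# Activation maximization (sketch): optimize the *input*, not the weights.
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
unit = 3                                        # index of the neuron to maximize

x = torch.zeros(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    act = net(x)[0, unit]                       # treat the activation as a log-probability score
    loss = -act + 1e-3 * x.pow(2).sum()         # maximize activation, mild L2 prior on the input
    opt.zero_grad(); loss.backward(); opt.step()
# x now approximates an input "sampled" to fit the concept encoded by `unit`.
```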

Further reading: neural-network autoencoders

NN-VAE celebA https://github.com/AntixK/PyTorch-VAE
Comparative thesis on AE https://github.com/mainak-ghosh/AutoEncoder
Empirical comparison between autoencoders and traditional dimensionality reduction methods https://arxiv.org/pdf/2103.04874.pdf
SE post https://stats.stackexchange.com/questions/531706/autoencoders-as-dimensionality-reduction-tools
VAE https://arxiv.org/abs/1312.6114

  • Prior work cited by the VAE paper https://www.semanticscholar.org/paper/Auto-Encoding-Variational-Bayes-Kingma-Welling/5f5dc5b9a2ba710937e2c413b37b053cd673df02?sort=relevance&citedPapersSort=relevance&citedPapersLimit=10&citedPapersOffset=10
Zhihu post on VI https://www.zhihu.com/question/31032863/answer/315311293
Development of autoencoders

  • 2008 Sparse Feature Learning for Deep Belief Networks https://papers.nips.cc/paper/2007/hash/c60d060b946d6dd6145dcbad5c4ccf6f-Abstract.html
  • 2011 Contractive Auto-Encoders: Explicit Invariance During Feature Extraction https://icml.cc/2011/papers/455_icmlpaper.pdf
  • 2013 Generalized Denoising Auto-Encoders as Generative Models https://arxiv.org/abs/1305.6663
Quote from the 2013 DAE paper. Curiously, no one seems to mention that K-means is an autoencoding algorithm.
Auto-encoders learn an encoder function from input to representation and a decoder function back from representation to input space, such that the reconstruction (composition of encoder and decoder) is good for training examples. Regularized auto-encoders also involve some form of regularization that prevents the auto-encoder from simply learning the identity function, so that reconstruction error will be low at training examples (and hopefully at test examples) but high in general. Different variants of auto-encoders and sparse coding have been, along with RBMs, among the most successful building blocks in recent research in deep learning (Bengio et al., 2013b). Whereas the usefulness of auto-encoder variants as feature learners for supervised learning can directly be assessed by performing supervised learning experiments with unsupervised pre-training, what has remained until recently rather unclear is the interpretation of these algorithms in the context of pure unsupervised learning, as devices to capture the salient structure of the input data distribution. Whereas the answer is clear for RBMs, it is less obvious for regularized auto-encoders. Do they completely characterize the input distribution or only some aspect of it? For example, clustering algorithms such as k-means only capture the modes of the distribution, while manifold learning algorithms characterize the low-dimensional regions where the density concentrates.

Some of the first ideas about the probabilistic interpretation of auto-encoders were proposed by Ranzato et al. (2008): they were viewed as approximating an energy function through the reconstruction error, i.e., being trained to have low reconstruction error at the training examples and high reconstruction error elsewhere (through the regularizer, e.g., sparsity or otherwise, which prevents the auto-encoder from learning the identity function). An important breakthrough then came, yielding a first formal probabilistic interpretation of regularized auto-encoders as models of the input distribution, with the work of Vincent (2011). This work showed that some denoising auto-encoders (DAEs) correspond to a Gaussian RBM and that minimizing the denoising reconstruction error (as a squared error) estimates the energy function through a regularized form of score matching, with the regularization disappearing as the amount of corruption noise goes to 0, and then converging to the same solution as score matching (Hyvärinen, 2005). This connection and its generalization to other energy functions, giving rise to the general denoising score matching training criterion, is discussed in several other papers (Kingma and LeCun, 2010; Swersky et al., 2011; Alain and Bengio, 2013).
Another breakthrough has been the development of an empirically successful sampling algorithm for contractive auto-encoders (Rifai et al., 2012), which basically involves composing encoding, decoding, and noise addition steps. This algorithm is motivated by the observation that the Jacobian matrix (of derivatives) of the encoding function provides an estimator of a local Gaussian approximation of the density, i.e., the leading singular vectors of that matrix span the tangent plane of the manifold near which the data density concentrates. However, a formal justification for this algorithm remains an open problem.

The last step in this development (Alain and Bengio, 2013) generalized the result from Vincent (2011) by showing that when a DAE (or a contractive auto-encoder with the contraction on the whole encode/decode reconstruction function) is trained with small Gaussian corruption and squared error loss, it estimates the score (derivative of the log-density) of the underlying data-generating distribution, which is proportional to the difference between reconstruction and input. This result does not depend on the parametrization of the auto-encoder, but suffers from the following limitations: it applies to one kind of corruption (Gaussian), only to continuous-valued inputs, only for one kind of loss (squared error), and it becomes valid only in the limit of small noise (even though in practice, best results are obtained with large noise levels, comparable to the range of the input).
What we propose here is a different probabilistic interpretation of DAEs, which is valid for any data type, any corruption process (so long as it has broad enough support), and any reconstruction loss (so long as we can view it as a log-likelihood). The basic idea is that if we corrupt observed random variable X into X̃ using conditional distribution C(X̃|X), we are really training the DAE to estimate the reverse conditional P(X|X̃). Combining this estimator with the known C(X̃|X), we show that we can recover a consistent estimator of P(X) through a Markov chain that alternates between sampling from P(X|X̃) and sampling from C(X̃|X), i.e., encode/decode: sample from the reconstruction distribution model P(X|X̃), apply the stochastic corruption procedure C(X̃|X), and iterate. This theoretical result is validated through experiments on artificial data in a non-parametric setting and experiments on real data in a parametric setting (with neural net DAEs). We find that we can improve the sampling behavior by using the model itself to define the corruption process, yielding a training procedure that has some surface similarity to the contrastive divergence algorithm (Hinton, 1999; Hinton et al., 2006).
