A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAEs). Recent work demonstrates that disentanglement cannot be obtained in a fully unsupervised setting without inductive biases on models and data. Accordingly, Khemakhem et al. (2020) suggest employing a factorized prior distribution over the latent variables that is conditionally dependent on auxiliary observed variables complementing the input observations. While this is a remarkable advancement toward model identifiability, the learned conditional prior only targets sufficiency, giving no guarantees on a minimal representation. Motivated by information-theoretic principles, we propose a novel VAE-based generative model with theoretical guarantees on disentanglement. Our model learns a sufficient and compact, thus optimal, conditional prior, which serves as a regularizer for the latent space. Experimental results indicate superior performance with respect to state-of-the-art methods, according to several established disentanglement metrics proposed in the literature.
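The abstract describes a VAE whose latent prior is conditioned on auxiliary observed variables and learned jointly with the encoder and decoder. The following is a minimal sketch of that general idea in PyTorch, assuming a diagonal-Gaussian encoder q(z|x) regularized by a KL term toward a learned factorized Gaussian conditional prior p(z|u); it is not the authors' implementation, and all module names, dimensions, and architectural choices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ConditionalPriorVAE(nn.Module):
    """Hypothetical sketch: VAE with a learned factorized conditional prior p(z|u)."""

    def __init__(self, x_dim, u_dim, z_dim, hidden=256):
        super().__init__()
        # Encoder q(z|x): outputs mean and log-variance of a diagonal Gaussian.
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))
        # Decoder p(x|z): here a simple Gaussian decoder with fixed variance.
        self.decoder = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, x_dim))
        # Learned conditional prior p(z|u) = N(mu(u), diag(sigma(u)^2)),
        # conditioned on the auxiliary observed variables u.
        self.prior_net = nn.Sequential(nn.Linear(u_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 2 * z_dim))

    @staticmethod
    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians.
        return 0.5 * ((logvar_p - logvar_q)
                      + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                      - 1.0).sum(dim=-1)

    def forward(self, x, u):
        mu_q, logvar_q = self.encoder(x).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior_net(u).chunk(2, dim=-1)
        # Reparameterization trick.
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        x_hat = self.decoder(z)
        recon = ((x - x_hat) ** 2).sum(dim=-1)  # Gaussian log-likelihood up to a constant
        kl = self.gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
        return (recon + kl).mean()  # negative ELBO with the conditional prior
```

In this sketch the KL term pulls the approximate posterior toward the auxiliary-conditioned prior, which is one way such a prior can act as a latent-space regularizer; any additional terms enforcing compactness of the prior itself are omitted here.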
Learning optimal conditional priors for disentangled representations
Submitted on arXiv
  Type:
Conference
      Date:
        2020-10-19
      Department:
Communication Systems
      Eurecom Ref:
        6382
      Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted on arXiv.
      PERMALINK : https://www.eurecom.fr/publication/6382