Vector-Quantized Variational Autoencoders (VQ-VAEs) obtain a discrete latent representation by vector-quantizing the encoder output. They differ from standard VAEs in two ways: the encoder produces discrete rather than continuous codes, and the prior is learned rather than fixed. Vector quantization lets the model sidestep posterior collapse, the failure mode in the VAE framework where the latents are ignored once paired with a powerful autoregressive decoder. Combining these discrete representations with a learned prior, the model can generate high-fidelity outputs.
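
As a rough illustration of the quantization step described above, here is a minimal PyTorch sketch of a vector-quantization layer: each continuous encoder vector is snapped to its nearest codebook entry, and gradients are passed straight through. The class name, parameter values, and loss weighting below are illustrative assumptions, not the exact implementation in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: maps each encoder vector to its nearest codebook
    entry and copies gradients straight through the discretization."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight on the commitment term (assumed value)

    def forward(self, z_e):
        # z_e: (..., code_dim) continuous encoder outputs
        flat = z_e.reshape(-1, z_e.shape[-1])  # (N, D)

        # Squared L2 distance from each encoder vector to every codebook entry
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))  # (N, K)
        indices = dists.argmin(dim=1)          # discrete latent codes
        z_q = self.codebook(indices).view_as(z_e)

        # Codebook loss pulls code vectors toward encoder outputs;
        # commitment loss keeps encoder outputs close to their chosen codes.
        vq_loss = (F.mse_loss(z_q, z_e.detach())
                   + self.beta * F.mse_loss(z_e, z_q.detach()))

        # Straight-through estimator: gradients for z_q flow back into z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), vq_loss
```

The discrete `indices` returned here are what a learned autoregressive prior would be fit over; the decoder reconstructs from the quantized vectors `z_q`.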