Reducing Word Embedding Bias Using Learned Latent Structure

Citation:

Mishra H. Reducing Word Embedding Bias Using Learned Latent Structure. In: AI for Social Good Workshop; 2020.

Abstract:

Word embeddings learned from large collections of data exhibit significant biases, and when these embeddings are used in downstream machine learning tasks the bias is often amplified. We propose a debiasing method that uses a hybrid classification-variational autoencoder network (Figure 1). In this work, we develop a semi-supervised classification algorithm based on variational autoencoders that learns the latent structure within the dataset and then, based on this learned latent structure, adaptively re-weights the importance of certain data points during training. Experimental results show that the proposed approach outperforms existing state-of-the-art methods for debiasing word embeddings.
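The abstract describes a hybrid classifier plus variational autoencoder that re-weights training points according to the learned latent structure. The following is a minimal sketch of one way such a network and re-weighting step could look, assuming PyTorch; the layer sizes, the histogram-based latent-density estimate, and all names are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch only: an assumed PyTorch implementation of a classifier-VAE with
# latent-density-based sample re-weighting; details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierVAE(nn.Module):
    """Hybrid network: a VAE over word embeddings plus a classification head."""
    def __init__(self, embed_dim=300, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, embed_dim))
        self.classifier = nn.Linear(latent_dim, 1)  # e.g. a protected-attribute label

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), self.classifier(mu).squeeze(-1), mu, logvar

def sample_weights(mu, n_bins=30, alpha=0.01):
    """Adaptive re-weighting: up-weight embeddings whose latent codes fall in
    sparsely populated regions of the learned latent space (per-dimension
    histogram density estimate; one possible choice of re-weighting rule)."""
    with torch.no_grad():
        density = torch.ones(mu.size(0), device=mu.device)
        for d in range(mu.size(1)):
            col = mu[:, d]
            hist = torch.histc(col, bins=n_bins)
            edges = torch.linspace(col.min().item(), col.max().item(),
                                   n_bins + 1, device=mu.device)
            idx = (torch.bucketize(col, edges) - 1).clamp(0, n_bins - 1)
            density *= hist[idx] / hist.sum() + alpha
        weights = 1.0 / density
        return weights / weights.sum()

def training_step(model, x, y, optimizer):
    """One training step: weighted sum of reconstruction, KL, and classification losses."""
    recon, logits, mu, logvar = model(x)
    w = sample_weights(mu)  # importance of each point from the latent structure
    recon_loss = ((recon - x) ** 2).mean(dim=1)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
    cls = F.binary_cross_entropy_with_logits(logits, y.float(), reduction="none")
    loss = (w * (recon_loss + 1e-3 * kl + cls)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the VAE branch learns the latent structure of the embedding space, and rarer (under-represented) regions of that space receive larger weights during training, which is one way to realize the adaptive re-weighting the abstract describes.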

