Ethically Sourced Modeling: A Framework for Mitigating Bias in AI Projects within the US Government


As Natural Language Processing (NLP) becomes increasingly widespread in AI applications, models must be continually monitored for biases and false associations, especially those affecting protected or disadvantaged classes of people. We discuss methods and algorithms used to mitigate such biases, along with their weak points, drawing on real-world examples from civilian agencies of the US government.
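As one illustration of the kind of bias measurement such monitoring relies on, the sketch below computes a WEAT-style association score: how strongly a target word's embedding leans toward one attribute set versus another. The vectors and word choices here are hypothetical toy values, not drawn from the talk or from any real model.

```python
# Minimal sketch of a WEAT-style association score, assuming toy
# 3-dimensional embeddings (hypothetical, not from a real model).
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    # Positive => target leans toward A; negative => toward B.
    return (np.mean([cosine(target, a) for a in attrs_a])
            - np.mean([cosine(target, b) for b in attrs_b]))

# Hypothetical embeddings: a target word and two attribute sets.
target  = np.array([0.9, 0.1, 0.0])
attrs_a = [np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0])]
attrs_b = [np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.0])]

score = association(target, attrs_a, attrs_b)
# A score far from zero flags an association worth auditing
# before the model is deployed.
```

In practice, the same score would be computed over full sets of target words drawn from real embeddings, with a permutation test to judge whether the measured association is statistically meaningful.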

Presented at the AI for Social Good event.

Last updated on 07/01/2021