On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach

Citation:

Komiyama J, Noda S. On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach. In IJCAI 2021 Workshop on AI for Social Good; 2021.

Abstract:

We analyze statistical discrimination using a multi-armed bandit model where myopic firms face candidate workers arriving with heterogeneous observable characteristics. The association between the worker’s skill and characteristics is unknown ex ante; thus, firms need to learn it. In such an environment, laissez-faire may result in a highly unfair and inefficient outcome—myopic firms are reluctant to hire minority workers because the lack of data about minority workers prevents accurate estimation of their performance. Consequently, minority groups could be perpetually underestimated—they are never hired, and therefore, data about them is never accumulated. We prove that this problem becomes more serious when the population ratio is imbalanced, as is the case in many extant discrimination problems. We consider two affirmative-action policies for solving this dilemma: one is a subsidy rule based on the popular upper confidence bound (UCB) algorithm, and the other is the Rooney Rule, which requires firms to interview at least one minority worker for each hiring opportunity. Our results indicate that temporary affirmative actions are effective against statistical discrimination caused by data insufficiency.
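The perpetual-underestimation mechanism described above can be illustrated with a minimal two-group bandit simulation. This is a hedged sketch, not the paper's actual model: it assumes two groups of equal true skill, a single unlucky initial observation for the minority group, and compares a myopic (greedy) firm with a UCB-style rule whose exploration bonus plays the role of the subsidy.

```python
import math
import random


def simulate(policy, rounds=2000, seed=0):
    """Toy two-group hiring bandit (illustrative only).

    Each round the firm chooses one candidate: group 0 (majority)
    or group 1 (minority). True skill is identical (p = 0.5), but the
    minority group starts with one unlucky observation. A 'greedy'
    firm hires by empirical mean alone; 'ucb' adds an exploration
    bonus sqrt(2 ln t / n_g), standing in for the subsidy rule.
    """
    rng = random.Random(seed)
    p = [0.5, 0.5]              # equal true skill in both groups
    successes = [1.0, 0.0]      # minority's initial data looks bad
    counts = [2, 2]
    for t in range(1, rounds + 1):
        means = [successes[g] / counts[g] for g in range(2)]
        if policy == "ucb":
            scores = [means[g] + math.sqrt(2 * math.log(t + 4) / counts[g])
                      for g in range(2)]
        else:               # myopic firm: empirical mean only
            scores = means
        g = 0 if scores[0] >= scores[1] else 1
        successes[g] += 1.0 if rng.random() < p[g] else 0.0
        counts[g] += 1
    return counts               # observations accumulated per group


greedy_counts = simulate("greedy")
ucb_counts = simulate("ucb")
print("greedy:", greedy_counts)  # minority data never grows past its priors
print("ucb:   ", ucb_counts)     # exploration bonus forces minority hires
```

Under the greedy policy the minority group's empirical mean stays at zero because it is never sampled again, so the firm's belief never corrects; the UCB bonus, which grows for under-sampled groups, breaks this feedback loop.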

Last updated on 07/01/2021