Online search engines are an extremely popular tool for seeking information. However, the results they return sometimes exhibit undesirable or even wrongful forms of bias, such as bias with respect to gender or race. In this paper, we consider the problem of fair keyword recommendation, in which the goal is to suggest keywords that are relevant to a user's search query but exhibit less (or opposite) bias. We present a multi-objective method based on word embeddings that suggests alternate keywords for biased keywords present in a search query. We perform a qualitative analysis on pairs of subreddits from Reddit.com (e.g., r/AskMen vs. r/AskWomen, r/Republican vs. r/democrats). Our results demonstrate the efficacy of the proposed method and illustrate subtle linguistic differences between subreddits.
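As a rough illustration of the idea (not the paper's actual implementation), the sketch below ranks candidate replacement keywords by a scalarized multi-objective score: a weighted combination of embedding similarity to the original keyword (relevance) and the magnitude of the candidate's projection onto a gender-bias direction (bias). The toy vectors, the vocabulary, the he/she definitional pair, and the weight `alpha` are all hypothetical assumptions for this example:

```python
import numpy as np

# Hypothetical toy word vectors; a real system would use pretrained
# embeddings (e.g., trained on the subreddit corpora).
EMB = {
    "he":          np.array([1.0, 0.0, 0.0]),
    "she":         np.array([-1.0, 0.0, 0.0]),
    "chairman":    np.array([0.8, 0.5, 0.1]),
    "chairperson": np.array([0.05, 0.55, 0.1]),
    "leader":      np.array([0.0, 0.6, 0.1]),
}

# Estimate a bias direction from a definitional word pair (he - she).
_bias_dir = EMB["he"] - EMB["she"]
_bias_dir /= np.linalg.norm(_bias_dir)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias(word):
    """Magnitude of the word's projection onto the bias direction."""
    return abs(float(EMB[word] @ _bias_dir))

def suggest(keyword, candidates, alpha=0.5):
    """Rank candidates by alpha * relevance - (1 - alpha) * bias,
    trading off closeness to the original keyword against bias."""
    scored = [(alpha * cosine(EMB[c], EMB[keyword]) - (1 - alpha) * bias(c), c)
              for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)]

print(suggest("chairman", ["chairperson", "leader"]))
```

With these toy vectors, "chairperson" ranks above "leader" as a replacement for "chairman": it is both highly similar to the original keyword and nearly orthogonal to the bias direction. Varying `alpha` traces out different trade-offs between the two objectives.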