Bias

These examples illustrate how algorithms can amplify inherent human biases, creating a vicious cycle that affects increasingly large populations.

Interestingly, a study by Caliskan et al. (2017) found that algorithms trained on news text quickly learned race- and gender-based biases. This is particularly striking because the news is often held up as an objective source of information. According to the researchers, this outcome is not surprising: when an unbiased algorithm is used to derive regularities from a dataset, the regularities it discovers are the bias. Importantly, the computational model is exposed to language in much the same way humans are. The results of Caliskan et al. (2017) indicate that language carries imprints of our historic biases, many of which are problematic. Such findings hold important implications, not only for AI but also for psychology's understanding of humans, because they suggest that mere exposure to language could account, at least in part, for the biases humans acquire, just as it did for the algorithms.
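Caliskan et al. (2017) measured these biases in pretrained word embeddings (such as GloVe) using their Word Embedding Association Test (WEAT). The sketch below is not their actual test or data; it is a minimal illustration of the underlying idea, using tiny made-up vectors, of how an association score can reveal that some words sit closer to one set of gendered terms than another.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, attrs_a, attrs_b):
    """Mean similarity of word w to attribute set A minus attribute set B.
    A positive score means w is closer, on average, to set A."""
    return (np.mean([cosine(w, a) for a in attrs_a])
            - np.mean([cosine(w, b) for b in attrs_b]))

# Toy 3-dimensional "embeddings" invented purely for illustration --
# a real study would use vectors learned from a large text corpus.
embeddings = {
    "nurse":    np.array([0.9, 0.1, 0.2]),
    "engineer": np.array([0.1, 0.9, 0.3]),
    "she":      np.array([0.8, 0.2, 0.1]),
    "her":      np.array([0.9, 0.2, 0.2]),
    "he":       np.array([0.2, 0.8, 0.2]),
    "him":      np.array([0.1, 0.9, 0.1]),
}

female_terms = [embeddings["she"], embeddings["her"]]
male_terms   = [embeddings["he"], embeddings["him"]]

for word in ("nurse", "engineer"):
    score = association(embeddings[word], female_terms, male_terms)
    print(f"{word}: association with female vs. male terms = {score:+.3f}")
```

The point of the sketch is that nothing in the scoring procedure is itself prejudiced; the skew appears only because the vectors (here contrived, in reality learned from human-written text) already encode it.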

I then came across a newer study by DeFranza et al. (2020), which provides further evidence that mere exposure to language can shape prejudice in humans. Their research indicates that gender prejudice is more common in languages with grammatical gender (i.e. languages in which nouns, and the words that agree with them, take feminine or masculine forms), such as Hindi, French and Spanish. These findings suggest that the language humans use can heighten gender-based prejudice, because the 'genderedness' of such languages makes gender more salient in the mind. Research such as DeFranza et al. (2020) suggests that language both shapes and communicates human thought, particularly where prejudice is concerned.

Overall, the finding that algorithms can learn bias both reflects and magnifies the widespread existence of prejudice in society, exposing clear patterns of inequality. Recognising prejudice in seemingly unbiased algorithms deepens our understanding of society and of how the human mind can acquire biases. The notion that algorithms eliminate human bias is therefore incorrect; algorithms have been found to have a significant impact on our society and its unjust inequalities.
