Algorithmic Bias

Humans are error-prone and biased, but that doesn’t mean algorithms are necessarily better. Still, these systems are already making important decisions about your life: determining which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, and even your home’s predicted risk of fire.

Broadly speaking, AI is a set of tools and technologies assembled to mimic human behavior and to boost the capacity and efficiency of human tasks. ML is a subset of AI that adapts automatically over time based on data and end-user input. Bias can be introduced into AI and ML through human behavior and the data we generate.

Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. For instance, an ML model may be biased from the start if the assumptions behind it are skewed. Once built, the model is tested against a large data set; if that data set is not appropriate for the model’s intended use, the model can become biased. Bias can show up anywhere in an algorithm’s design: the types of data, how they’re collected, how they’re used, how the model is tested, who it’s intended for, or even the question it’s asking.
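
To make this concrete, a common first check is whether the groups an algorithm will serve are represented proportionately in its training data. The sketch below is a minimal, hypothetical illustration in Python; the record layout, the `group` key, the even-split baseline, and the tolerance value are all assumptions made for this example, not an established auditing standard.

```python
from collections import Counter

def audit_representation(records, group_key="group", tolerance=0.15):
    """Flag groups that are under-represented in a training set.

    `records` is a list of dicts; `group_key` names the demographic
    attribute. A group is flagged if its share falls more than
    `tolerance` below an even split across groups (both the baseline
    and the tolerance are illustrative choices, not standards).
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)  # naive even-split baseline
    flagged = {}
    for group, n in counts.items():
        share = n / total
        if share < expected_share - tolerance:
            flagged[group] = share
    return flagged

# Hypothetical training records: one group dominates the sample.
data = [{"group": "A"} for _ in range(900)] + [{"group": "B"} for _ in range(100)]
print(audit_representation(data))  # {'B': 0.1} -> under-represented
```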

As ML learns and adapts, it’s vulnerable to biased inputs and patterns. Existing prejudices – especially when they go unrecognized – and data that reflect societal or historical inequities can result in bias being baked into the data used to train an algorithm or ML model to predict outcomes.
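
One way to surface such baked-in bias is to compare a trained model’s positive-prediction rates across groups, often summarized as a disparate impact ratio. Below is a minimal sketch under stated assumptions: binary predictions, a single protected attribute, and the informal “four-fifths” (0.8) guideline used only as a reference point, not a legal test.

```python
def disparate_impact(predictions, groups, privileged):
    """Ratio of positive-outcome rates: unprivileged / privileged.

    `predictions` are 0/1 model outputs, `groups` labels each row,
    `privileged` names the reference group. A ratio well below 1.0
    suggests the model favors the privileged group.
    """
    def rate(g):
        subset = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(subset) / len(subset)

    unprivileged = {g for g in groups if g != privileged}
    return {g: rate(g) / rate(privileged) for g in unprivileged}

# Hypothetical screening outputs: group "B" is approved far less often.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(preds, groups, privileged="A"))
# {'B': 0.25} -- far below the informal 0.8 ("four-fifths") guideline
```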

As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated outputs and manipulation of data can affect the physical world. Because algorithms are often assumed to be neutral and unbiased, they can be granted greater authority than human expertise, an authority they may not deserve.

Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of Black men, an issue stemming from imbalanced training datasets.
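
Findings like the facial recognition disparity above typically come from disaggregated evaluation: measuring accuracy separately for each demographic group rather than in aggregate. Here is a minimal sketch of that idea; the labels and group names are fabricated purely for illustration.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each demographic group.

    Aggregate accuracy can look acceptable while hiding much worse
    performance on under-represented groups.
    """
    per_group = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        per_group[g] = sum(t == p for t, p in pairs) / len(pairs)
    return per_group

# Illustrative labels: overall accuracy is 0.7, but group "dark" fares far worse.
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
groups = ["light"] * 6 + ["dark"] * 4
print(accuracy_by_group(y_true, y_pred, groups))
# {'light': 1.0, 'dark': 0.25}
```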

If we can agree that bias in AI is a problem, then we can act with intention to reduce it. Though this is an emerging field, several approaches from research design suggest techniques we can apply to gender and racial bias in algorithms. To help the researchers and organizational leaders developing AI systems build gender- and racial-smart ML, social change leaders should encourage their ML development partners to pursue and advocate for the following:

Embed and advance gender and racial diversity, equity, and inclusion among teams developing and managing AI systems.

Recognize that data and algorithms are not neutral, and then audit and mitigate the biases they carry.

Center the voices of marginalized community members, including women and non-binary individuals, in the development of AI systems.

Establish gender- and racial-sensitive governance approaches for responsible AI.

These actions are not exhaustive, but they provide a starting point for building smart ML that advances equity. Let’s not miss this opportunity to revolutionize how we think about, design, and manage AI systems and thereby pursue a more just world today and for future generations.

Explainability is one thing; interpreting it rightly (for the good of society) is another.

Murat Durmus
Author of The AI Thought Book