New tech, old biases: a look into bigotry in AI

Are we teaching our biases to AI? If so, what needs to change?


20/06/2023

In 2015, Jacky Alciné, a black software developer, tweeted that he and his friends had been labeled as “gorillas” by Google Photos. Google Photos uses artificial intelligence to categorize images so that users can search and group them; in this case, the system backfired.

Even though Google released a statement apologizing and promising a quick solution, two years later WIRED magazine published an article showing that the issue had never truly been solved. Google had simply blocked Google Photos from applying labels like “gorilla,” “chimp,” “chimpanzee,” and “monkey” to any image, and when asked to search for “black man,” “black woman,” or “black person,” the service returned images of people of all skin complexions. Google’s “fix” was to force the AI model to ignore the aforementioned labels, making it impossible to use the service to search specifically for black people or for gorillas.

Google Photos’ approach and quality control seemed not to have accounted for Jacky’s skin tone, echoing an earlier chapter in color photography history, when photographic film was not designed to capture darker skin tones. What at first may seem like a mere technical problem in fact highlights a real danger in the use of artificial intelligence systems. And it’s not just Google.

In 2010, multiple news outlets reported on Joz Wang, a Taiwanese-American strategy consultant who noticed that her Nikon camera kept flagging her as blinking because of her eye shape. In 2016, the infamous Microsoft chatbot Tay had to be shut down hours after launching, having quickly learned racist language from Twitter users. More recently, in 2022, MIT Technology Review reporter Melissa Heikkilä discussed how Lensa, an app that generates fantasy portraits of users based on their selfies, produced oversexualized images of generic Asian women when she used it but perfectly normal portraits when her white male colleagues used it. Finally, as impressive as ChatGPT may be, users have been finding prompts that make the service output bigoted text since its release, despite OpenAI’s continuous work to set up guardrails against problematic topics.

These mistakes have, until now, been found mostly in apps used for entertainment or convenience, but unless we’re careful, similar problems may soon affect critical services we depend upon as a society.

Why do we have this problem in AI and how can we work towards solving it?

Developments in AI

Over the last decade, AI graduated from a niche research field in mathematics and computer science to the topic everyone is talking about. Sparked by the potential of tools like ChatGPT and DALL-E, the public discussion on AI has exploded, fueled by utopian and apocalyptic visions of a future where large parts of our lives are managed by artificial intelligence services.

In layman’s terms, when we refer to AI today, we generally mean mathematical models that “train” to perform a particular task by analyzing large quantities of data. Consequently, one of the core problems in developing AI models is: “Where do we get the data?”
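
As a toy illustration of what “training on data” means in practice, here is a minimal sketch using the scikit-learn library; the example sentences and labels are made up, and no real service is implied:

```python
# A toy illustration of "training on data": a tiny text classifier.
# The example sentences and labels are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-generated training data: the model only "knows" what these examples teach it.
texts = [
    "The staff was friendly and helpful",
    "Terrible service, I will never come back",
    "Absolutely loved the experience",
    "The product broke after one day",
]
labels = ["positive", "negative", "positive", "negative"]

# Turn text into numbers, then fit a simple model mapping those numbers to labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Whatever patterns (or biases) the data contains, the model inherits.
print(model.predict(["The staff was rude"]))
```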

Invariably, because we tend to want to solve human-related problems, we also tend to need human-generated data. Unfortunately, our collective digital footprint is far from equitable and hate-free, as anyone who has spent any amount of time on social media can attest. This is then reflected in the AI systems trained on that data. So we should be concerned not only with the new problems and benefits that advances in AI may bring, but also with the existing societal problems AI may exacerbate.

The Problem

There is a clear trend in AI development: AI models are not always designed with everyone in mind. Often the data used to train models does not account for minorities, and the quality control processes behind these services and applications do not catch the resulting failures.

But if we know there is a problem (as in the case of Jacky Alciné), we can fix it, right? The answer may be unexpected. Most modern AI models are not interpretable by humans, meaning it may be virtually impossible to track down which aspects of the data influence the results these systems produce. If an AI designed for text generation analyzes 50 million documents during the learning process and, when tested, outputs misogynistic text, we may not be able to know why, depending on how it was designed. We may suspect some of the input documents have misogynistic undertones, but it may be impossible to identify which ones in order to remove them. Critically, no human is going to read 50 million documents before they are fed to the model.
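
To make that opacity concrete, here is a hedged sketch (again with made-up stand-in documents, not any real production model): once trained, what a model “knows” is stored as plain numerical weights, with no record of which training document shaped which number:

```python
# A sketch of why trained models are hard to interpret: what is learned ends up
# as raw numbers, with no link back to individual training documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

documents = [  # stand-ins for the millions of documents a real model would ingest
    "example document one",
    "example document two",
    "another example document",
    "yet another example document",
]
labels = [0, 1, 0, 1]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(documents)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
model.fit(features, labels)

# The learned "knowledge" is just matrices of floats; nothing here records which
# document (problematic or not) pushed any given weight up or down.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
```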

Sadly, these problems are not restricted to services exclusively used for entertainment, as the examples above may suggest. In fact, these problems are of increasing concern as more critical components of our society start adopting AI tools. AI models are now being tested in financial, judicial, and medical settings, where the wrong decision can mean someone’s bankruptcy, incarceration, or even death.

A paper submitted to the 54th Session of the United Nations Human Rights Council in 2022 sheds light on the issue from a law enforcement surveillance point of view:

“Because the training data for facial recognition technologies in law enforcement context comes from photos relating to past criminal activity, people of colour are overrepresented in facial recognition technology training systems. In some jurisdictions, such as the United States, people of colour are at a much higher risk of being pulled over, searched, arrested, incarcerated, and wrongfully convicted than whites. Therefore, facial recognition technology produces many false positives because it is already functioning in a highly discriminatory environment. Law and border enforcement agencies around the world are experimenting with automated facial recognition technology with complete discretion and on ad hoc basis, without appropriate legal frameworks to govern their use nor sufficient oversight or public awareness.”

Though there are known biases against women and minorities imprinted everywhere from medical diagnostics data to loan default data, this data is still often used to build AI systems without prior scrutiny. The issue arises partly out of ignorance, partly from a lack of resources, and partly from how difficult it is to filter out these imbalances when dealing with such large quantities of data. Making mistakes is a core part of the scientific method, and research is always an iterative process. Still, it is paramount that we ensure critical AI systems are not deployed without a deep understanding of what might go wrong and how to fix it, lest we use real people as guinea pigs.
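
As one example of what “prior scrutiny” can look like in practice, here is a minimal sketch of a group-level audit on hypothetical loan data, checking whether historical approval rates differ sharply between groups before the data is ever used to train a model (all column names and numbers are invented for illustration):

```python
# A minimal sketch of auditing a dataset before training: compare outcome
# rates across demographic groups. Data and column names are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

# Historical approval rate per group: a large gap here will be learned and
# reproduced by any model trained on this data as-is.
rates = data.groupby("group")["approved"].mean()
print(rates)

# A rough "disparate impact"-style ratio; values far below 1.0 flag that the
# data itself encodes an imbalance worth investigating before any training.
print("ratio:", rates.min() / rates.max())
```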

Towards a Solution

So, is AI doomed to learn to replicate our mistakes? Yes. But researchers are also hard at work developing better ways of tracking how AI models make decisions and why they output certain results. This research field is called “Explainable AI,” and its goal is to provide humans with tools for interpreting these models. Explainable approaches are increasingly required by stakeholders in critical fields where the AI decision-making process cannot be completely opaque and definitive. In these instances, a human-in-the-loop approach can be taken, in which human experts work alongside explainable AI systems but retain the final say on critical decisions. Through this approach, practitioners in judicial, medical, and other critical fields could be informed by AI tools while minimizing the dangers of their use.
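
As a rough sketch of the kind of tooling this research produces, the example below uses scikit-learn’s permutation importance on synthetic data to surface which inputs a model relies on, so that a human expert can review them before acting; the feature names and data are invented, and real explainable AI systems are far more sophisticated:

```python
# A sketch of an "explainability" step: rank which input features a model
# relies on, so a human reviewer can sanity-check them. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 200
# Hypothetical features for a loan-style decision (names are illustrative).
feature_names = ["income", "debt", "zip_code_group"]
X = rng.normal(size=(n, 3))
# Synthetic labels that mostly depend on the first two features.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
# If a proxy for a protected attribute (e.g. zip_code_group) ranked highly,
# that would be a signal for the human in the loop to investigate further.
```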

Beyond these more technical solutions, governments are also working, with varying degrees of urgency, toward better legislation and regulation of AI applications and public data usage. News of companies being sued for using copyrighted artworks without the artists’ permission to build profitable image-generation models highlights only a fraction of the broader problem of unethical data use in tech.

Meanwhile, as individual consumers, we can increasingly opt for AI-based services that are more transparent about their decision-making processes and the data they consume. This means developers can more easily fix issues with the help of user reports, and users can double-check where the information they are given comes from. Many AI-based information summarization tools are leaning toward this approach, in which the summarized answers presented to users include sources for each statement.
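
As a sketch of what that design choice can look like at the data level (the structure and field names are hypothetical, not taken from any particular product), each statement in a summarized answer can carry its own source reference:

```python
# A sketch of a "summary with sources" structure: every statement carries a
# reference the user can check. Field names and content are hypothetical.
from dataclasses import dataclass

@dataclass
class CitedStatement:
    text: str
    source_url: str

@dataclass
class SummarizedAnswer:
    statements: list[CitedStatement]

    def render(self) -> str:
        # Number each statement and list its source so users can verify it.
        return "\n".join(
            f"[{i + 1}] {s.text} (source: {s.source_url})"
            for i, s in enumerate(self.statements)
        )

answer = SummarizedAnswer(statements=[
    CitedStatement("Example claim one.", "https://example.com/source-a"),
    CitedStatement("Example claim two.", "https://example.com/source-b"),
])
print(answer.render())
```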

As we slowly unravel the impact AI will have on our daily lives, it is important to be mindful of its direct influence on pre-existing, pressing social issues. Its impact, positive or negative, will solely be a reflection of its use. Yet regardless of the work we put into making AI a tool for everyone, the path toward more equitable, hate-free data starts with a change in mentality, not a change in technology. Sadly, when it comes to avoiding bigotry, we might be trying to teach AI what we collectively haven’t learned yet.