Fair AI: debiasing techniques that actually work

How can we rely on AI if AI is unreliable? That’s the big question at the centre of a range of multifaceted debates on the future of artificial intelligence...

Alongside hallucinatory output and security weaknesses, one key area where AI’s reliability is often called into question is bias – and the ongoing challenge of ensuring fair AI. 

After all, if you’re asking an AI tool to answer a question, but that answer is inherently skewed, what good is it? And more importantly: what damage could that answer do if taken at face value?  

The ubiquity of bias

Bias is everywhere, and it’s notoriously difficult to eradicate.

That’s as true in AI development as it is in society at large, where inherent biases fuel our personal decisions and systemic biases disadvantage or favour certain groups. 

On a base level, bias manifests in decisions and opinions based on the information we’ve absorbed – whether or not that information is accurate or a fair representation of the whole. And it’s exactly the same in AI. The models used in everything from ChatGPT to machine learning radiography tools are trained on finite datasets, and it’s rare for those datasets to be fully representative.   

Take Google’s Gemini, for example. A recent $60m-per-year deal means the GenAI chatbot has access to the entirety of Reddit’s user-generated content for training purposes. But there’s bias in ‘them there hills’. Reddit’s audience skews male, with college-educated Americans making up around half of its 500m user base. So those users are naturally going to represent (and produce) a relatively narrow set of opinions when compared to the global human experience.  

On the one hand, more data is often better. On the other, no dataset is entirely fair, which makes developing and debiasing AI models a tricky task. As the old saying goes, crap in, crap out. 

AI bias: feeding the fire 

Bias in generative AI is potentially dangerous, but LLMs are just one piece of the larger AI puzzle. Machine learning systems and foundation models are other branches of artificial intelligence that can produce problematic output if their training data isn’t adequately representative. 

An AI-powered tool designed to find cancerous tumours in X-rays, for instance, could be at risk of giving biased false negatives if its training data only represents one demographic. A tool designed to recommend drug dosages might put patients at risk if the information shaping those recommendations is limited in scope. 

Medical industries are at the sharp end of AI bias, then, but they’re not alone. The financial sector is another potential breeding ground for bias-driven risk. Algorithmic bias might result in discriminatory decisions on loans or credit ratings – and that has the potential to compound the systemic issues that contribute to biased datasets in the first place. 

The EU’s AI Act, as a potential yardstick of regulatory thinking on this topic, sets out to curb AI use for social scoring applications for exactly these reasons. But bias doesn’t need big decisions and high-risk output to cause problems; in a more everyday, mundane AI setting it can be just as problematic.  

If you want to post a job ad, for example, and an AI tool suggests that a given demographic is more likely to apply, that’ll likely affect your thinking, the ad’s wording, and the kind of applicants you shortlist. The result perpetuates the stereotype: biased outcomes fuelling bias at the input level.  

The issue is that bias can exist or be introduced at any stage of AI development and deployment, and there’s no ‘one size fits all’ solution to minimising it. In fact, debiasing AI models can even lead to reduced accuracy through blindness to determining factors – something we call the ‘bias-accuracy trade-off’. Imagine you’re marking exam papers, for example, and you decide to award everyone a ‘B’, regardless of their answers. That’s arguably the least biased way to mark the papers. But it’s also not very accurate.  
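To make that trade-off concrete, here’s a minimal sketch in Python (scikit-learn on synthetic data – the columns and numbers are illustrative assumptions, not a real pipeline). A constant ‘everyone gets a B’ predictor scores a perfect zero on a simple demographic parity gap but poorly on accuracy, while a trained model tends to do the reverse:

```python
# A minimal sketch of the bias-accuracy trade-off on synthetic data.
# Assumptions: a binary sensitive attribute and a label correlated with it.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
group = rng.integers(0, 2, n)                       # sensitive attribute (0/1)
signal = rng.normal(0, 1, n) + 0.5 * group          # outcome correlates with group
X = np.column_stack([signal + rng.normal(0, 0.5, n), group])
y = (signal > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def parity_gap(pred, grp):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(pred[grp == 0].mean() - pred[grp == 1].mean())

# 'Award everyone a B': perfectly fair on this metric, but not very accurate.
blunt = DummyClassifier(strategy="constant", constant=1).fit(X_tr, y_tr)
# A trained model: more accurate, but it picks up the group correlation.
model = LogisticRegression().fit(X_tr, y_tr)

for name, clf in [("constant 'B' grade", blunt), ("logistic regression", model)]:
    pred = clf.predict(X_te)
    print(f"{name:20s} accuracy={accuracy_score(y_te, pred):.2f} "
          f"parity gap={parity_gap(pred, g_te):.2f}")
```

The point isn’t the numbers themselves, but that any debiasing choice should be measured on both axes at once.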

All this poses the big, obvious question: how can we manage to build fair, accurate AI tools without inherent biases? 

A framework for fair AI 

…You can’t. Well, not entirely. And that’s an important starting point: realising that – unless you can sample and mine data from every single person on Earth – there will always be bias in your modelling.  

So the job becomes one of mitigation and minimisation on an ongoing basis. And, importantly, at every stage of an application’s development and deployment. From the outset, you need to factor in a cyclical, systematic process of iteration, measurement, and improvement of your input data, AI model, and metrics: 

[Venn diagram: fair AI’s foundations sit at the intersection of input data, AI models, and measurement]

1. Input data 

Systematically join datasets where possible to create reproducible versions of input data groups that can be tested against one another. 

2. AI models 

Trial a variety of approaches, using modularised code that can swap between models, parameters, and debiasing techniques. 

3. Measurement 

Test, measure, and visualise the results – then use this to inform further adjustments to your datasets. 

Do steps one and two right and you’ll generate a lot of data, based on a whole bunch of tested variables. To put that data to work, you’ll first need to create a baseline by bluntly debiasing your model and measuring for accuracy against fairness. Then you can map your more systematic techniques against that baseline to compare results – consistent performance will highlight the best debiasing methods. 
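As a rough illustration of how that loop might hang together (a sketch only – the dataset builder, technique names, and metrics below are assumptions, not our production tooling), steps one to three can be wired into a small grid: reproducible dataset versions × debiasing techniques, each scored for accuracy and a simple fairness gap against the blunt baseline:

```python
# A sketch of the iterate-and-measure loop: reproducible dataset versions x
# debiasing techniques, each scored for accuracy and a simple fairness gap.
# The dataset builder, technique names, and metrics are illustrative assumptions.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, group_skew):
    """Step 1: versioned input data – here we vary how skewed the sampling is."""
    group = (rng.random(n) < group_skew).astype(int)
    signal = rng.normal(0, 1, n) + 0.5 * group
    X = np.column_stack([signal + rng.normal(0, 0.5, n), group])
    y = (signal > 0).astype(int)
    return X, y, group

def parity_gap(pred, grp):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(pred[grp == 0].mean() - pred[grp == 1].mean())

def fit_technique(name, X, y, grp):
    """Step 2: modularised techniques, each returning a predict function."""
    if name == "blunt baseline":           # constant prediction: fair but inaccurate
        clf = DummyClassifier(strategy="most_frequent").fit(X, y)
        return clf.predict
    if name == "drop sensitive feature":   # 'fairness through unawareness'
        clf = LogisticRegression().fit(X[:, :-1], y)
        return lambda X_new: clf.predict(X_new[:, :-1])
    if name == "reweigh groups":           # weight samples inversely to group frequency
        freq = np.bincount(grp) / len(grp)
        clf = LogisticRegression().fit(X, y, sample_weight=1.0 / freq[grp])
        return clf.predict
    raise ValueError(name)

# Step 3: test, measure, and record, so datasets and techniques can be compared.
results = []
for skew in (0.5, 0.8):                    # two input-data versions
    X, y, grp = make_dataset(4_000, group_skew=skew)
    split = len(X) // 2
    for tech in ("blunt baseline", "drop sensitive feature", "reweigh groups"):
        predict = fit_technique(tech, X[:split], y[:split], grp[:split])
        pred = predict(X[split:])
        results.append((skew, tech,
                        accuracy_score(y[split:], pred),
                        parity_gap(pred, grp[split:])))

for skew, tech, acc, gap in results:
    print(f"skew={skew:.1f}  {tech:22s} accuracy={acc:.2f}  parity gap={gap:.2f}")
```

Adding a new technique or dataset version then means adding a branch or a parameter, and every combination lands in the same results table for comparison against the baseline.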

Our recommendations here rely on formalising a set of collaborative best practices based on several key principles:  

  • Systematic testing and measurement 
  • Diverse and representative data collection 
  • Continuous monitoring and iteration 
  • Shared responsibility across multiple stakeholders 
  • Transparency in decision-making 
  • Understanding and managing trade-offs 

AI fairness: an ongoing group effort

This isn’t a one-and-done process, and it’s not something that you can run in the background to check a ‘fair AI’ box. Instead, it’s a framework built around constant iteration to provide a holistic view of bias in the AI pipeline – one that requires ongoing human intervention and multidisciplinary buy-in.  

Debiasing, then, is a mission for everyone along the development pipeline, not just data scientists. By working together, data teams, product leaders, and regulatory bodies can enable transparent, auditable, and robust AI decision-making that keeps bias in check.  

Build beyond bias: Zühlke’s responsible AI framework provides a complete set of guidelines for designing truly safe, ethical, and sustainable AI tools.