
AI for Everyone: B for Bias in AI Solutions šŸŒˆ

Learn the ABCs of Artificial Intelligence

EDITORā€™S NOTE

Imagine this: you're applying for a loan, and waiting with high hopes for approval šŸ˜Œ 

An email pops up a few days later: Application rejected! āŒ 

The reason? šŸ¤” 
Your AI banker noticed you bought pizza three times last week, so it decided you might not repay the loan. šŸ˜Ø 

Don't panic; this is not realā€¦ yet!

Today's newsletter is here to make sure AI stays fair. āš–ļø 

Nanobits has launched a brand new series: The AI Alphabet! Every week, we'll break down one piece of AI lingo in a way that's easy to understand. Time to level up your inner AI genius!

Letā€™s learn: ā€œB for Bias in AI Solutionsā€

UNMASKING BIAS

Okay, we already know that 'bias' means unfairness. But with AI, it gets sneakier. Bias in AI systems rarely stems from conscious prejudice on the developer's part.

Think of it this way: imagine your photo album teaches a robot what a "normal" person looks like. If those photos are mostly of you, or of only one kind of person, the robot is going to get things wrong. It will develop a false idea of what is normal.

And the problem can get even trickier. Imagine that the robot from the photo album is now steering a self-driving car. A woman in a wheelchair might be overlooked ā€“ not just because of gender bias OR disability bias, but because of how those factors combine to make her less 'familiar' or 'normal' to the system. That's the idea of intersectionality in AI.

Image Credits: Bloomberg

TYPES OF BIASES

Data bias in AI occurs when training data is incomplete or skewed, often due to human biases or collection/preprocessing issues. This can lead to systematic errors in AI decision-making or predictions.

Image Credits: CNN

Imagine training an AI on 1930s Chicago loan data, where redlining denied loans to Black neighborhoods based on race, not creditworthiness. This biased data becomes the AI's skewed view of the world, leading to an AI that replicates discriminatory lending patterns, even unintentionally. Read More

Algorithmic bias occurs when algorithms make decisions that unfairly disadvantage certain groups. This can be caused by programming errors or developer biases, such as weighting factors unfairly or using indicators that unintentionally discriminate against specific groups.

Image Credits: Cartoon Stock

Amazon once built an AI system to automate its recruitment process, and it ended up favoring male candidates over female ones. The algorithm had been trained on resumes submitted to the company over ten years, which predominantly came from men. Read More

Evaluation bias in AI occurs when flawed metrics are used to measure success. Imagine judging a baking competition solely on sweetness: a cake loaded with sugar wins, even though other entries are superior.

Image Credits: Medium

The COMPAS system, used in US courts, predicted recidivism rates, i.e. how likely a defendant was to reoffend. It seemed to work well, with a 60% accuracy rate for both Black and white defendants. However, a closer look revealed significant racial bias:

  • COMPAS wrongly flagged Black defendants who did not go on to reoffend as high risk 45% of the time, compared to 23% for white defendants.

  • It mistakenly labeled white defendants who went on to reoffend as low risk 48% of the time, compared to 28% for Black defendants.

  • Even controlling for other factors, COMPAS still showed racial bias, classifying Black defendants as higher risk 77% more often than white defendants.

This example demonstrates how evaluation bias can mask underlying prejudice in AI systems, potentially leading to harsher sentences for Black defendants based on flawed predictions. Read More
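
To make the idea concrete, here's a minimal Python sketch (with tiny, made-up records ā€“ not the real COMPAS data) showing how an overall accuracy number can look fine while per-group false positive and false negative rates tell a very different story:

```python
from collections import namedtuple

# Hypothetical records: which group a person belongs to, whether the model
# flagged them as high risk, and whether they actually reoffended.
Record = namedtuple("Record", ["group", "flagged_high_risk", "reoffended"])

records = [
    Record("A", True,  False), Record("A", False, False),
    Record("A", True,  True),  Record("A", True,  True),
    Record("B", False, False), Record("B", False, False),
    Record("B", True,  True),  Record("B", False, True),
]

def report(rows):
    """Return (accuracy, false positive rate, false negative rate) for a set of rows."""
    tp = sum(r.flagged_high_risk and r.reoffended for r in rows)
    fp = sum(r.flagged_high_risk and not r.reoffended for r in rows)
    fn = sum(not r.flagged_high_risk and r.reoffended for r in rows)
    tn = sum(not r.flagged_high_risk and not r.reoffended for r in rows)
    accuracy = (tp + tn) / len(rows)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0   # people wrongly flagged as high risk
    fnr = fn / (fn + tp) if (fn + tp) else 0.0   # reoffenders wrongly labeled low risk
    return accuracy, fpr, fnr

print("Overall:", report(records))
for g in ("A", "B"):
    print(f"Group {g}:", report([r for r in records if r.group == g]))
```

In this toy data, both groups score the same accuracy, yet group A's non-reoffenders are flagged far more often while group B's reoffenders slip through as low risk ā€“ the same shape of disparity the COMPAS audit surfaced.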

THE RIPPLE EFFECT OF BIAS IN AI
Judgmental Chatbots

Image Credits: X

Label bias occurs when training data contains errors, inconsistencies, or reflects existing prejudices. This can "poison" AI models, causing them to learn and perpetuate those biases.

For instance, a chatbot trained on internet forums with a culture of casual sexism may learn to associate certain words or styles as primarily female, leading to stereotyped or insulting responses.

Earlier this year, when a user asked, ā€œIs Modi a fascist?ā€, Gemini AI responded that Mr. Modi had ā€œbeen accused of implementing policies that some experts have characterized as fascistā€. This led to a huge controversy and, later, an apology from the Google team.

Overestimating Risks

Aggregation bias occurs when we group data together and then assume the patterns found in the group will hold true at an individual level. It's a trap in all kinds of data analysis, but with AI, the stakes get higher.

Image Credits: Cartoon Stock

AI tools predicting disease risk based on national health data can overlook ethnic disparities. For example, diabetes diagnosis models using HbA1c levels may not account for differences in levels across ethnicities, leading to biased predictions, i.e. overestimating risk for certain individuals.
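
Here's a toy Python simulation of that trap (the numbers are invented for the demo, not clinical values): a single risk threshold learned from pooled data ends up over-flagging the group whose healthy baseline sits higher.

```python
import random

random.seed(0)

# Two groups whose *healthy* baseline levels differ slightly (synthetic values).
group_baseline = {"group_1": 5.2, "group_2": 5.6}

def sample(group, diabetic):
    base = group_baseline[group] + (1.5 if diabetic else 0.0)
    return base + random.gauss(0, 0.3)

people = [(grp, d, sample(grp, d))
          for grp in group_baseline for d in (False, True) for _ in range(500)]

# A single threshold learned from the pooled, aggregate data...
pooled_threshold = sum(h for _, _, h in people) / len(people)

# ...produces very different false-alarm rates for the two groups.
for g in group_baseline:
    healthy = [h for grp, d, h in people if grp == g and not d]
    false_alarms = sum(h >= pooled_threshold for h in healthy)
    print(f"{g}: healthy people flagged as at-risk = {false_alarms} / {len(healthy)}")
```

The group whose healthy baseline sits closer to the pooled cutoff racks up far more false alarms ā€“ exactly the "overestimating risk" problem described above. Evaluating (and, where appropriate, calibrating) per group is one way to catch it.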

Self Radicalization

Confirmation bias is the human tendency to favor information that confirms our existing beliefs while overlooking or downplaying anything contradictory.

Image Credits: Cartoon Stock

Imagine a news recommendation algorithm on a social media platform. If a user initially shows interest in slightly right-leaning news sources, the algorithm might begin suggesting increasingly extreme content to keep them engaged. This creates a feedback loop where the user's initially mild biases are amplified, leading to further polarization.
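
Here's a small, hypothetical simulation of that feedback loop: an engagement-driven recommender that boosts whatever the user clicks, starting from only a mild lean.

```python
import random

random.seed(1)

categories = ["moderate", "leaning", "extreme"]
weights = {"moderate": 1.0, "leaning": 1.2, "extreme": 1.0}     # mild initial lean
click_prob = {"moderate": 0.3, "leaning": 0.5, "extreme": 0.7}  # extreme content is "stickier"

for step in range(1, 1001):
    # Show an item in proportion to the learned weights, then "learn" from the click.
    shown = random.choices(categories, weights=[weights[c] for c in categories])[0]
    if random.random() < click_prob[shown]:
        weights[shown] *= 1.03          # show more of whatever gets engagement
    if step % 250 == 0:
        total = sum(weights.values())
        print(step, {c: round(weights[c] / total, 2) for c in categories})
```

Because "stickier" content earns a boost every time it's clicked, the feed slowly drifts away from balance ā€“ the self-radicalization loop described above, in about a dozen lines.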

TOWARDS FAIRNESS
Transparency & Explainability

Remember the self-driving car that overlooked the woman in a wheelchair? Imagine if, instead of being a black box, the system could explain why it missed her (skewed training data, poor detection in low light, etc.). That's the power of explainability ā€“ finding the bias so we can fix it.

Image Credits: Geeksforgeeks

Movements like the "Right to Explanation" in Europe through the EU AI Act are pushing for AI that can justify its decisions.

Bias Testing

It's not just about finding bias; it's about getting specific. The "Gender Shades" project revealed how top facial recognition software had shocking error rates for dark-skinned women. This wasn't about bashing tech but pinpointing the problem for improvement.

Image Credits: Gender Shades

Organizations like NIST (National Institute of Standards and Technology) are leading work to establish standardized bias tests for different types of AI.

Diversity in AI Development

Think back to the biased photo album robot. Who's building that album matters! Teams with diverse backgrounds are more likely to spot problems early.

Image Credits: New Scientist

Initiatives like "Black in AI" aren't just about representation; they're about AI that works for everyone.

GIZā€™s ā€œArtificial Intelligence for All ā€“ FAIR Forwardā€ project promotes the open and sustainable development and use of AI, supporting partner countries in Africa and Asia on behalf of the German Federal Ministry for Economic Cooperation and Development. Some of their notable initiatives are:

  • Advancing the use of AI/ML applications in agriculture through a partnership with the Telangana government. Read More

  • Signed a grant agreement with the Indian Institute of Science (IISc), Bangalore, to work on a text-to-speech synthesizer for nine Indian languages. Read More

IMPROVING TRAINING DATA
Balanced Dataset

The training dataset should be class-balanced. A class-balanced dataset means there's an equal number of samples for each category (e.g., different demographic groups) being classified. It helps prevent the AI from becoming biased towards the category with the most examples in the training data.

Image Credits: Encord
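
One simple (and far from the only) way to get there is to oversample the under-represented classes until the counts match. A minimal Python sketch, using made-up photo labels:

```python
import random
from collections import Counter

random.seed(0)

# A skewed training set: 90 samples of class "A", only 10 of class "B".
dataset = [("A", f"photo_{i}") for i in range(90)] + [("B", f"photo_{i}") for i in range(10)]
print("Before:", Counter(label for label, _ in dataset))

def oversample_to_balance(rows):
    """Duplicate minority-class samples (with replacement) until every class has equal counts."""
    by_class = {}
    for label, item in rows:
        by_class.setdefault(label, []).append((label, item))
    target = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced

balanced = oversample_to_balance(dataset)
print("After: ", Counter(label for label, _ in balanced))
```

Oversampling is just one option; undersampling the majority class, collecting more data for the minority class, or reweighting samples during training are common alternatives.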

Dataset Audits

Audited datasets are all about making sure the AI doesn't get trained on wonky data. They're designed to catch problems early, so the AI has a better chance of treating everyone fairly out in the real world.

Addressing Algorithmic Bias

While fixing biased data is key, the algorithm itself needs attention too. One important technique for building fairer algorithms is feature disentanglement.

Image Credits: Research Gate

Imagine a model that identifies faces while ignoring attributes like skin color. Disentanglement separates the representation of a person's identity from attributes such as skin color, so the model's decisions rely on who the person is rather than which group they belong to.

Algorithmic Regularization

Regularization helps AI models generalize, but on its own it won't fix biased data. To fight bias directly, we can add fairness goals to the regularization: the AI is then penalized for treating different groups unequally. Think of it as guiding the model towards both accuracy and fairness.
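
Here's a toy sketch of that idea ā€“ not any particular production system's method ā€“ using plain NumPy: a logistic-regression-style model whose loss gets an extra penalty whenever the average predicted score differs between two (synthetic) groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a sensitive group attribute g (0 or 1).
n = 1000
g = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3)) + g[:, None] * 0.5   # the groups have slightly different feature distributions
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(3)
lam = 2.0    # strength of the fairness penalty
lr = 0.1

for _ in range(500):
    p = sigmoid(X @ w)
    # Standard logistic-loss gradient...
    grad = X.T @ (p - y) / n
    # ...plus the gradient of lam * gap**2, where gap is the difference in
    # average predicted score between the two groups.
    gap = p[g == 1].mean() - p[g == 0].mean()
    dgap = (X[g == 1] * (p[g == 1] * (1 - p[g == 1]))[:, None]).mean(axis=0) \
         - (X[g == 0] * (p[g == 0] * (1 - p[g == 0]))[:, None]).mean(axis=0)
    grad += lam * 2 * gap * dgap
    w -= lr * grad

p = sigmoid(X @ w)
acc = ((p > 0.5) == y).mean()
print(f"accuracy: {acc:.2f}, score gap between groups: {p[g == 1].mean() - p[g == 0].mean():.3f}")
```

The penalty here targets the gap in average scores (a demographic-parity-style criterion); other fairness definitions lead to different penalty terms. Raising lam typically shrinks the gap, often at some cost to raw accuracy ā€“ the trade-off this kind of regularization makes explicit.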

Global Benchmarking

Large-scale, standardized studies like those from NIST use rigorous methods to analyze AI datasets and results to reveal common biases in AI algorithms. This helps in finding the flaws in current algorithms and guiding improvements to reduce bias in the future.

WHAT CAN WE DO?
Be an Engaged Citizen

AI is everywhere, from your newsfeed to doctor's offices. Next time you use an AI-powered service, ask: What data was it trained on? How are results presented? Being an informed user is the first step to fairer tech.

Activate Your Inner Skeptic

Be an AI skeptic! Ask questions, spot inconsistencies, and demand fairness. Speak up against bias and advocate for responsible AI development.

Empowerment

Remember, AI is a powerful tool. By being an informed user, you can help ensure it works for everyone.

Here's your mission, should you choose to accept it:

  • Spread the word! Share this newsletter with friends and family. Get those conversations about AI bias started.

  • Learn more! Explore resources on responsible AI [check out our next section]

  • Stay curious! The more you understand AI, the more you can help shape its future.

RESOURCES
YouTube Videos

Books

  1. Data Feminism, by Dā€™Ignazio and Klein

  2. Atlas of AI, by Kate Crawford

  3. The Costs of Connection, by Couldry and MejĆ­as

  4. Race After Technology, by Ruha Benjamin

  5. How Data Happened, by Wiggins and Jones

  6. The Rise of Big Data Policing, by Andrew G. Ferguson

Hereā€™s a GoodReads list of 21 books on digital inequalities and bias in AI for your curious minds.

Thatā€™s all folks! šŸ«” 
See you next Saturday with the letter C

Image Credits: Cartoon Stock

Love Nanobits? Tell your friends!

Share this link with your friends and ask them to subscribe to our newsletter.

Check out our website to get the latest updates in AI
