Artificial Intelligence isn’t just transforming the world: it’s deciding your future. The most alarming part? It may be doing so based on biased data, judging you by your skin color, gender, or social background. In this post, you’ll discover how algorithms are making decisions that affect lives, and how these decisions often reproduce old injustices with a veneer of innovation.
The COMPAS Case: When AI Decides Who Goes to Jail
COMPAS, an algorithmic system used in the US judicial system, was created to predict the risk of criminal recidivism. In theory, the intention was good: to help judges make more data-driven decisions. But in practice, it revealed one of the most concerning examples of algorithmic bias in the last decade. A 2016 ProPublica investigation found that COMPAS tends to overestimate the risk of recidivism for Black individuals while underestimating it for white individuals, even when their profiles are similar. The impact is direct: harsher sentences and an unfair increase in the Black prison population. The COMPAS case is emblematic because it shows how an algorithm, even a well-intentioned one, can reproduce and amplify deeply ingrained social prejudices, causing devastating impacts on already vulnerable communities.
Amazon and the Algorithm That Rejected Women
Amazon, one of the world’s most powerful companies, decided to use AI to automate its recruitment process. The idea was to create an efficient tool, capable of identifying the best talent without human bias. But what happened was the exact opposite.
By training the algorithm on resume data from the previous 10 years, the system learned that the ideal employee profile was… male. This was because, historically, the company’s technical positions were predominantly held by men, and the algorithm interpreted this as a standard of excellence. The result: the system began to automatically penalize any resume that mentioned activities associated with women, such as “women’s team captain” or “member of a women’s group.” The project was abandoned, but the symbolic damage was done. Amazon’s story showed how, even unintentionally, AI can become a tool for exclusion, in this case of highly qualified women who were eliminated before they even reached an interview.
The Problem Is in the Data: How AI Learns to Discriminate
The foundation of artificial intelligence is data. Everything it “knows” was learned from databases that reflect the real world. And the real world, unfortunately, is full of historical inequalities. When AI is trained with this information, it learns to reproduce the same discriminatory behavior.
If a credit system is trained with decades of data where minorities historically had less access to financing, what do you think it will do? It will learn that this group represents a “risk”—and deny credit, even when there’s no real justification. If an HR algorithm is trained with predominantly white, male leadership profiles, it will understand that this is the standard to follow—and discard anything that deviates from it. This is how prejudices gain new life in the digital universe. AI doesn’t invent inequalities, but it can amplify them with astonishing efficiency.
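To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (synthetic data and scikit-learn, not any real credit system): the historical approvals used as training labels were partly driven by group membership, so a model trained on them learns to penalize that group even when the legitimate signal is identical.

```python
# Illustrative sketch only: synthetic data, fictional feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority (fictional)
income = rng.normal(50, 10, n)           # same income distribution for both groups

# Historical decisions depended on income AND, unfairly, on group membership.
approved = (income + rng.normal(0, 5, n) - 8 * group) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical income but different group membership:
print(model.predict_proba([[50, 0], [50, 1]])[:, 1])
# The minority applicant gets a visibly lower approval probability,
# even though income, the only legitimate signal here, is the same.
```

The model never saw the word “discrimination”; it simply optimized for agreement with biased historical decisions, which is exactly how inequality gets automated.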

Types of Bias: The Different Faces of Algorithmic Discrimination
There are several types of bias that directly affect the reliability and fairness of artificial intelligence systems.
- Sampling bias: occurs when the data collected doesn’t adequately represent the diversity of the population, leaving groups underrepresented or absent.
- Algorithmic/process bias: arises when the system’s own logic is built in a biased way, reproducing unjust patterns even without direct human interference. Often, it’s a consequence of adjustments and prioritizations made during model development that favor certain groups.
- Human bias in data annotation: those who label the information used to train AI can, even unconsciously, transfer their own prejudices. And when this data is used, AI simply reproduces the error with mathematical precision.
These biases aren’t just technical slips. They shape decisions with real impact: who gets hired, who gets a correct diagnosis, who gets credit, and who goes to jail.
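Sampling bias, the first item above, is also the easiest to check for. Here is a minimal, hypothetical sketch (fictional numbers, no real dataset) of a representation check that compares group shares in the training data against reference population shares:

```python
# Hypothetical sketch: a quick representation check for sampling bias.
from collections import Counter

training_groups = ["A"] * 850 + ["B"] * 100 + ["C"] * 50   # fictional training sample
population_share = {"A": 0.60, "B": 0.25, "C": 0.15}       # fictional reference shares

counts = Counter(training_groups)
total = sum(counts.values())
for g, expected in population_share.items():
    observed = counts[g] / total
    print(f"group {g}: {observed:.0%} of training data vs {expected:.0%} of population")
# Groups B and C are heavily underrepresented: the model will see far fewer
# examples of them and will tend to perform worse on exactly those groups.
```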
The Impacts of Bias in Crucial Areas (Beyond Justice and HR)
The impacts of algorithmic bias go far beyond isolated cases. They create a ripple effect that can affect a person’s entire life trajectory—especially if they already belong to a historically marginalized group.
- Access to Credit: Risk assessment systems can reject loans or offer worse conditions for minorities, hindering their financial inclusion.
- Surveillance Systems: Facial recognition software can have significantly higher error rates for non-white individuals and other minorities, as studies such as the Gender Shades project have documented, increasing the risk of discrimination, wrongful identification, and judicial errors.
- Medical Diagnoses: The use of AI in diagnoses can be disastrous if models aren’t trained with diversity. Black individuals, for example, may have their symptoms underestimated, with serious consequences.
Algorithmic bias is a new form of social exclusion—silent, sophisticated, and invisible to many.
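For the surveillance example in particular, the harm shows up as unequal error rates. A minimal, hypothetical audit sketch (fictional decisions, fictional groups) compares the false positive rate, the share of people wrongly flagged, across groups:

```python
# Hypothetical sketch: per-group false positive rates for a flagging system.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0])   # 1 = genuine match (fictional)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0])   # 1 = system flagged a match
group = np.array(["white"] * 6 + ["non-white"] * 6)

for g in np.unique(group):
    mask = (group == g) & (y_true == 0)     # people who should NOT have been flagged
    fpr = y_pred[mask].mean()               # share of them wrongly flagged anyway
    print(f"{g}: false positive rate = {fpr:.0%}")
# A much higher false positive rate for one group means that group is
# disproportionately exposed to wrongful stops, arrests, and denials.
```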
AI in Healthcare: When an Algorithm Error Can Cost a Life
Few fields are as sensitive as medicine. Yet artificial intelligence is already being used to assist diagnoses and recommend treatments. The problem is that these algorithms have often been trained on data from predominantly white patients, drawn from narrow age ranges and health profiles.
Studies have revealed that symptoms of Black patients, for example, are frequently interpreted as less severe by AI systems. This happens because the data used to calibrate these algorithms don’t account for physiological and sociocultural nuances that affect symptoms and access to healthcare. The result can be tragic: incorrect diagnoses, delayed treatments, and even preventable deaths. The risk isn’t in using technology—it’s in using technology without responsibility, without diversity, and without ethical review.
Transparency, Explainability, and the Fight for Algorithm Control
Most AI systems operate like a black box: no one knows for sure how a decision was made, or based on what criteria. And this is one of the greatest dangers of our time. When you don’t understand how you were evaluated, you can’t challenge it. When there’s no transparency, there’s no justice.
That’s why there’s a growing movement for “explainable AI” (XAI), where algorithms need to justify their choices in a way that’s understandable to humans. There’s also increasing pressure for external, independent audits capable of assessing whether a system is producing discriminatory results. The responsibility for controlling algorithms doesn’t rest with companies alone. It belongs to public authorities, civil society, and users, who need to know they have rights and must demand clarity, accountability, and redress.
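One simple explainability technique is permutation importance: shuffle each input feature and measure how much the model’s performance drops, which hints at what the decision actually relied on. Here is a minimal sketch with synthetic data and invented feature names, not a full XAI audit:

```python
# Hypothetical sketch: permutation importance as a basic explainability check.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "age", "zip_code", "employment_years", "group"]  # fictional

model = LogisticRegression(max_iter=1000).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
# If a feature like "zip_code" or "group" dominates, the model may be leaning
# on a proxy for race or class, which is exactly what an audit should flag.
```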
The Farce of Neutrality: Why AI Can Also Be Prejudiced
Many still believe that artificial intelligence is an impartial tool: cold, calculating, and superior to human judgment. But this view is completely outdated. AI is a human creation, and as such it carries all the flaws and limitations of its creators.
The idea that algorithms are fair simply because they are digital is dangerous. The truth is that, without constant vigilance and ethics, artificial intelligence simply becomes a new way to maintain old privileges. What we are seeing is the automation of inequality.
Your Rights Against Algorithmic Decisions: How to Protect Yourself
Few people know this, but laws like the LGPD (in Brazil) and the GDPR (in Europe) give citizens the right to know when a decision has been made by an automated system. What’s more: you have the right to request an explanation, to challenge the decision, and, in certain situations, even to refuse to be subject to a decision made solely by an AI.
Algorithmic monitoring tools are becoming increasingly necessary, as are organizations that oversee the ethical use of data. Consumers and citizens can no longer be passive in the face of the technological avalanche. It’s essential to understand your rights, demand transparency, and insist that technology respects human dignity.
A Possible Future: AI That Includes and Protects
The good news is that a fairer artificial intelligence is possible. But it needs to be built on a foundation of diversity, responsibility, and a commitment to the common good. This includes:
- Frequent audits of algorithms (a minimal audit metric is sketched after this list).
- Diverse development teams.
- Human-centered design.
- Clear and rigorously applied public regulation.
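As a taste of what an audit can look like, here is a minimal, hypothetical sketch of one common fairness check, the disparate impact ratio: the selection rate of the disadvantaged group divided by that of the advantaged group, with ratios below 0.8 commonly flagged under the “four-fifths rule”.

```python
# Hypothetical sketch: the disparate impact ratio as a basic audit metric.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = hired/approved (fictional)
group = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

rate_m = decisions[group == "m"].mean()
rate_f = decisions[group == "f"].mean()
ratio = rate_f / rate_m

print(f"selection rate m: {rate_m:.0%}, f: {rate_f:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: the process deserves a closer, human review.")
```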
AI can be an ally for inclusion, justice, and social progress—provided that society participates in its construction and does not blindly accept its decisions.
Algorithmic bias is a real and urgent challenge. Technology itself isn’t evil, but its uncritical use can perpetuate inequalities. With transparency, accountability, and social participation, we can ensure that AI is a tool for inclusion and justice, not exclusion.
You can ignore it… or you can be part of the change.
And you? Have you ever felt the effects of an automated decision? Share your thoughts in the comments below. Let’s broaden this conversation and fight for technology that respects everyone—and not just the privileged.