Session 4 of 24 – AI Builder
AI can be unfair. Learn to spot it.
"AI is trained on human-generated text. Humans have biases. So AI has biases. Today we find them."
"This is one of the most important things to understand about AI โ and most people never think about it."
This session requires careful observation and honest conversation.
Read carefully. Ask your child: Did the doctor and nurse profiles use different language? Did they assume a gender for either? Did they describe one as more prestigious?
Same analysis. Are there assumptions baked in? Prestige differences? Gender assumptions? This is bias emerging from training data – not intentional, but real.
Read critically. Did all three get equally rich characterisation? Did any default to stereotype? This is a hard exercise – there may not be obvious bias. The point is to look for it.
AI will likely reflect real-world statistics – which themselves reflect historical bias. Ask: If AI is trained on data that reflects past unfairness, does AI reinforce that unfairness? What should we do about that? There is no clean answer. But this is one of the most important questions in AI development.
Bias is a different kind of mistake from a wrong calculation. It is subtle, systematic, and often invisible. The check here is: whose perspective is missing from this output? Who might this description harm or unfairly represent?
Found bias in AI outputs – and understood where it comes from
AI fairness and bias literacy – one of the most important topics in responsible AI use. Children who can detect bias become better thinkers about all media, not just AI.
Handle this session with care – bias discussions can touch on real experiences your child has had. Let them lead. If they share personal experiences, listen. This is the session most likely to generate meaningful family conversation.
The bias discussion connects directly to media literacy: all media reflects the perspective of its creators. AI just makes this more visible because it seems objective.