Session 2 of 8
You are the boss. AI does what you ask.
Ask your child: "If you were going to hire a helper, what would you tell them on the first day? What rules would you give them?" Chat about it for a minute.
Then say: "That's exactly what a system prompt is: instructions you give AI before it starts. Let's try it."
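For a parent curious what this looks like under the hood, here is a minimal sketch of how a system prompt is typically packaged with a question before it goes to a chat model. The role/content message format shown is a common convention across chat APIs; the helper function and the prompt wording are illustrative, not any specific vendor's API.

```python
def build_conversation(system_prompt, user_question):
    """Return the message list a chat model typically receives:
    the system prompt first, then the user's question."""
    return [
        # The system prompt: the "first day instructions" for the helper.
        {"role": "system", "content": system_prompt},
        # The child's actual question comes after the instructions.
        {"role": "user", "content": user_question},
    ]

messages = build_conversation(
    "You are a friendly helper for a seven-year-old. Keep answers to "
    "three short sentences and never ask for personal details.",
    "Why is the sky blue?",
)

for message in messages:
    print(message["role"] + ": " + message["content"])
```

The point to notice is simply the ordering: the instructions are sent before the question, every time, which is why they shape every answer that follows.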
Read the answer. Then ask your child: "Is this what you wanted? Is it too long? Too short? Did it cover what you were hoping for?"
Compare the two answers side by side. Ask: "Which answer was more useful? What made the difference?"
The key insight to draw out: you are always in charge. AI does what you tell it. The more clearly you tell it, the better it works.
Give AI a deliberately confusing instruction and see what it does:
AI will either ask for clarification or guess randomly. Either way, show your child: "See, if we don't tell it what we want clearly, it has to guess. That's why being specific matters."
Even when you give AI a clear job, it can get facts wrong. It sounds confident either way. The habit of checking, of asking 'is this actually true?', is a life skill, not just an AI skill.
Gave AI a job and made it do exactly what you wanted
The concept of prompt engineering, simplified. Your child is learning that instructions matter, specificity helps, and they are in control of what the AI does.
Some children become very directive at this stage ('do exactly what I say!'), which is great. Others get frustrated when AI doesn't do precisely what they imagined. Use frustration as a teaching moment: 'Let's make the instruction clearer.'
Reinforce today's safety rule in context: even when giving instructions, no personal details go in. If they want to ask about their own dog, 'my dog is called Biscuit' is fine. 'My dog lives at [address]' is never fine.