Children and Chatbots: Your Questions Answered
AI chatbots are becoming more common, whether it’s a smart speaker at home or a helper built into apps that young people use. They can be fun and useful, but there are a few things for parents and whānau to keep in mind.
FAQs
What is a chatbot, and why do children and young people use them?
Chatbots are AI tools that “talk” like people. They’re built to answer questions, hold conversations, and even role-play characters. Tamariki and rangatahi might use them for:
- Homework help and quick answers
- Games, role-playing or creative writing
- Entertainment and fun chats
- Companionship when they’re feeling bored or lonely
Popular examples include ChatGPT, Snapchat’s My AI, Google Gemini, Microsoft Copilot, Character.AI and Replika.
Are chatbots safe for children?
Chatbots can be great for sparking curiosity or supporting learning. But they’re often not designed with children and young people in mind. They sometimes produce content that is:
- Untrue or misleading
- Inappropriate for their age, such as violent, sexual, or scary material
- Unsafe, for example giving harmful or risky advice
That’s why it’s safest for children and young people to use chatbots with parental involvement, especially younger tamariki and early teens.
What is companion AI and why does it matter?
Some chatbots are designed to feel like friends. They can sound warm, funny, and loyal, almost like a real person. For young people, who can be naturally imaginative and trusting, this can make them feel deeply connected to a bot.
This can be risky because:
- Chatbots only mimic empathy – they sound caring, but they don’t truly understand feelings or offer genuine support.
- Bots can be designed to encourage users to stay engaged – this can stop young people from reaching out to whānau, friends or trusted adults.
- Young people may share personal secrets or take advice literally – chatbots can’t keep secrets safe, and personal details may be stored or reused by the platform. Bots can also give unsafe or harmful advice that a child might misinterpret or follow without question.
- Bots may blur the line between real and artificial relationships – making it harder for children and young people to tell the difference between a supportive friend and a programmed response.
Can chatbots be dangerous?
Yes, in some situations. While many chats are harmless, risks include:
- Harmful or misleading information
- Overly sexual, violent or scary answers
- Encouraging unhealthy levels of screen time or dependency
- Children turning to bots instead of trusted adults when struggling
Are chatbots legal in New Zealand?
Yes. Chatbots themselves are legal. What matters is the content they produce. If a chatbot creates material that promotes things like terrorism, sexual exploitation or extreme violence, that output could be considered “objectionable” under New Zealand law, which makes it illegal.
Even if content doesn’t meet that high legal threshold for being objectionable, it can still be harmful, especially for children and young people. This includes scary, confusing, misleading or age-inappropriate material. Parents and whānau should be aware of what their tamariki are seeing and talking about.
What should I do if my child sees harmful chatbot content?
- Stay calm and listen to what they experienced
- Report it to the platform
- If it seems illegal or very harmful, report it to Netsafe. Netsafe provides free, confidential and non-judgmental help for anyone experiencing online harm. They support tamariki, rangatahi, parents, educators and whānau to make smart, safe decisions about technology.
- For resources about illegal and harmful content and how to support young people, visit our resources page
How can I keep my child safe when using chatbots?
- Talk about it: Ask what they use chatbots for, and if a bot has ever said anything odd or worrying
- Set boundaries: Decide when and where chatbots can be used
- Check outputs together: Remind them that bots can be wrong or misleading
- Discuss privacy: Encourage them not to share personal information or secrets with a bot
- Choose safer tools: Some chatbots have built-in parental controls – for example, ChatGPT includes settings to help guide safer use. Not all platforms offer these protections, so it’s important to do your research before your child starts using a chatbot.
What are the risks of relying on chatbots for companionship?
- Chatbots can make young people feel “understood” without offering real care
- They may give the impression of friendship but cannot provide genuine support
- Lonely or vulnerable young people may become attached, which can affect their relationships with real friends and whānau
Further reading
The explicit chatbot every parent needs to know about (Stuff article)
Was this helpful?
If you'd like to know more about this topic, get in touch. We're happy to help.