Start with our AI Readiness Check
AI is already part of your child’s learning. In just a couple of minutes, discover where your family stands and what to do next.
- ✓ Your family’s AI Confidence Score
- ✓ What you’re already doing well
- ✓ Simple, practical next steps
What risks to privacy does AI pose to our children at home and at school? It is a key question because once our data is outside of our control, it can be impossible to regain control. This guide considers the different AI privacy risks and the questions parents have about them.
Most people only became aware of AI as ChatGPT became the fastest-growing app in history. However, research around AI and privacy predates this. Building on more general privacy research, Carnegie Mellon University and Oxford University categorised the risks AI poses for privacy.
We've applied our insights to these identified AI privacy risk categories. These are not exhaustive, but they highlight the risks AI may pose to privacy.
Could my child be tricked by fake voices, images, or videos that sound or look like people they trust?
Risk: Distortion
AI image-based generation is already way beyond where it was a year ago. OpenAI's Sora shows video of the same quality is getting near, and the same company has already stated it has the technology to generate facsimiles of anyone's voice from just a few seconds of recording.
Kids could be given false security by hearing the voices of trusted adults, prompting them to reveal information they would not normally divulge.
We can tell kids not to upload their images or voices to AI-based apps, but we have less control over others doing it. AI tools' terms and conditions might prohibit this but are toothless warnings.
If my child’s face is blurred or details are removed, could AI still “rebuild” what was meant to be private?
Risk: Exposure
AI can restore and recreate media even when it has lost sections. This accomplishment holds the potential to undo a school's existing privacy policies.
Redacting or blurring kids' faces in social media posts might be enough to preserve their identity, but not if AI can reconstruct it. Remember that AI may have access to sources other than the school’s own social feeds to gather the information it needs to rebuild an image accurately.
Could AI “judge” my child’s appearance and make unfair assumptions about who they are?
Risk: Physical Attributes
AI tools already have impressive capabilities for classifying images. Imagine if a chatbot could 'see' its users. AI vision could have benefits; for example, if it detects confusion, it might provide extra educational support or see a textbook problem and help learners work through it.
But it could also make unwanted classifications. It might classify the users as occupying certain social classes or holding some personality traits based on appearance.
Will AI-powered monitoring make my child feel constantly watched at home or at school?
AI Privacy Risk: Intrusion
We have already seen reports of the potential harm to kids' mental well-being from the always-available culture created by mobile devices and social media. We do not know the impact on kids from knowing that their privacy may always be at risk.
An AI-based parent control app may make it possible for parents to detect when their child is at risk, but their child will see it as another source of surveillance to accompany smart doorbells, home onitoring tools and, perhaps in the future, smart child-minders.
Who might end up buying, selling, or sharing my child’s data without us realising?
AI Privacy Risk: Increased Accessibility
The AI industry's rush to pay for data to train its models shows its value. There is a huge incentive for any data holder to sell it for this purpose. Most people do not scrutinise the privacy policies of services they or their kids subscribe to, so they do not know whether they consented.
As data passes through more and more hands, the risk of identifiable aspects of it being revealed grows. One company may promise with the best intentions that their data will always be anonymised, but mistakes can and do happen.
Could an AI chatbot leak what my child shares or make them feel safe confiding in ways that backfire?
Risk: Insecurity
AI not only gathers information, but it also uses it. A big concern of businesses using AI is whether the information they ask it to process will become part of its training data. There have already been instances of chatbots leaking information provided by one user to another.
Chatbots can appear to lend a risk-free and sensitive ear to anyone with problems. Kids may ask for advice on social and emotional issues from a chatbot, believing it to be immune to the risks of confiding in humans. Having this confidence broken could cause devastating harm to a child's mental well-being.
Could an AI tool casually collect personal details from my child that are later used for something else?
Risk: Secondary Use of Data
Chatbots hold the potential to subtly manipulate users into giving up information which has a secondary use.
Teachers might use an algebra trick that links with kids' birthdays, but when a chatbot does the same thing, it will have gathered identifiable personal information. A history bot could ask kids about their local history and start to home in on their location.
As these are part of an ongoing conversation, users may be less guarded against revealing this data. This slow timeline may also make it more difficult to detect when AI tool purchasers first assess the AI tools.
If my child’s data gets used to train AI, can it ever truly be removed?
Risk: Exclusion
Once your data is part of an AI’s training material, it is unlikely ever to be removed. The data used in training does not remain in a discrete unit that you can extract. The AI training process intertwines it into the AI model over a period currently measured in weeks.
Should kids inadvertently contribute their data to AI training, it will likely persist forever.
Could helpful classroom AI turn into a tool that records and analyses my child’s private conversations?
Risk: Surveillance
To some extent, with AI, we have to think about where the technology might go rather than where it is now. Teachers could facilitate small group discussions in classrooms with multiple groups but only one teacher. The AI could monitor the conversation, add talking points, and guide the group to remain on topic. This would be a powerful classroom tool.
However, the same technology would also facilitate mass surveillance of those students and hold the potential to monitor, record and analyze their personal conversations.
Could AI recognise my child across apps and devices even if we never share obvious identifiers like an email?
AI Privacy Risk: Identification
AI is adept at spotting patterns and using them to form conclusions. AI vision, audio analysis, and text analysis will reliably identify users across different AI tools even where the user does not use the same personal tokens, such as an email address.
This functionality will provide convenience. Young kids cannot reliably sign into their accounts to continue learning, but if the device can work out their ID from other means, that will facilitate the use of technology in schools. However, it will also contribute to each user's actions, contributing to more details linked to their identity.
Could small pieces of my child’s data be combined into a detailed profile about them over time?
Risk: Aggregation
Computers never forget and are adept at linking data points together. They may not come from what appears to be a single source to the user, but the AI owner could share data across first- and third-party platforms. Combined, this could form a very detailed informational picture of AI users.
These are areas in which we must carefully consider how AI will have privacy implications for us all but we must also consider the specific impacts on children and educators.
You may also be interested in: Simple Steps to Keep Your Child's Data Safe
Parent Conversation Guide
A short guide to help parents start calm, confident conversations about AI use at home.