At the Educational App Store, our AI Assessment Framework and support for AI in education provide us with a unique insight into how privacy concerns and AI apply to children, education, and families.
We know that a key question for parents is whether AI will collect their kids' data and expose it. Schools must also consider the privacy impact when their new or existing apps and tools build in AI functionality.
Most people only became aware of AI as ChatGPT became the fastest-growing app in history. However, research around AI and privacy predates this. Building on more general privacy research, Carnegie Mellon University and Oxford University categorised the risks AI poses for privacy.
We've applied our insights to the privacy risk categories they identified. The list below is not exhaustive, but it highlights the risks AI may pose to privacy.
AI Privacy Risk: Distortion
AI image generation is already far beyond where it was a year ago. OpenAI's Sora shows that video of the same quality is close behind, and the same company has already stated it has the technology to generate a facsimile of anyone's voice from just a few seconds of recording.
Kids could be lulled into a false sense of security by hearing the voices of trusted adults, prompting them to reveal information they would not normally divulge.
We can tell kids not to upload their images or voices to AI-based apps, but we have less control over others doing so. AI tools' terms and conditions might prohibit this, but in practice they are toothless warnings.
AI Privacy Risk: Exposure
AI can restore and recreate media even when sections of it are missing. This capability holds the potential to undo a school's existing privacy protections.
Redacting or blurring kids' faces in social media posts might once have been enough to protect their identity, but not if AI can reconstruct them. Remember that AI may have access to sources beyond the school's own social feeds to gather the information it needs to rebuild an image accurately.
AI Privacy Risk: Physical Attributes
AI tools already have impressive capabilities for classifying images. Imagine if a chatbot could 'see' its users. AI vision could have benefits; for example, if it detects confusion, it might provide extra educational support or see a textbook problem and help learners work through it.
But it could also make unwanted classifications. It might label users as belonging to certain social classes, or as having particular personality traits, based purely on their appearance.
AI Privacy Risk: Intrusion
We have already seen reports of the potential harm to kids' mental well-being from the always-available culture created by mobile devices and social media. We do not yet know the impact on kids of knowing that their privacy may always be at risk.
An AI-based parental control app may make it possible for parents to detect when their child is at risk, but the child will see it as another source of surveillance, alongside smart doorbells, home monitoring tools and, perhaps in the future, smart child-minders.
AI Privacy Risk: Increased Accessibility
The AI industry's rush to pay for data to train its models shows how valuable that data is. There is a huge incentive for any data holder to sell it for this purpose. Most people do not scrutinise the privacy policies of the services they or their kids subscribe to, so they do not know what they have consented to.
As data passes through more and more hands, the risk grows that identifiable aspects of it will be revealed. A company may promise, with the best intentions, that its data will always be anonymised, but mistakes can and do happen.
AI Privacy Risk: Insecurity
AI not only gathers information, but it also uses it. A big concern of businesses using AI is whether the information they ask it to process will become part of its training data. There have already been instances of chatbots leaking information provided by one user to another.
Chatbots can appear to lend a risk-free and sensitive ear to anyone with problems. Kids may ask for advice on social and emotional issues from a chatbot, believing it to be immune to the risks of confiding in humans. Having this confidence broken could cause devastating harm to a child's mental well-being.
AI Privacy Risk: Secondary Use of Data
Chatbots hold the potential to subtly manipulate users into giving up information which has a secondary use.
Teachers might use an algebra trick that appears to magically 'guess' kids' birthdays, but when a chatbot does the same thing, it has gathered identifiable personal information. A history bot could ask kids about their local history and gradually home in on their location.
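To see why this matters, here is a minimal sketch of one common version of the birthday trick; the exact constants vary between versions, and the code is purely illustrative. The point is that the 'fun' arithmetic encodes the full birth date in the final answer, so any chatbot that hears that answer can decode it.

```python
# One common version of the classroom birthday trick. The sequence of
# operations encodes the birthday as 100 * month + day plus a constant,
# so whoever receives the final number can recover the full birth date.

def birthday_trick(month: int, day: int) -> int:
    """The 'trick' a child is asked to perform on their own birthday."""
    result = month * 5       # e.g. March -> 15
    result += 6
    result *= 4
    result += 9
    result *= 5
    result += day            # the final answer the child reads out
    return result            # equals 100 * month + day + 165

def decode(answer: int) -> tuple[int, int]:
    """What a chatbot can do with the 'harmless' final answer."""
    value = answer - 165
    return value // 100, value % 100   # (month, day)

print(decode(birthday_trick(3, 14)))   # -> (3, 14): birthday recovered
```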
Because these questions form part of an ongoing conversation, users may be less guarded about revealing such data. The slow drip of disclosure also makes this behaviour harder to spot when purchasers first assess an AI tool.
AI Privacy Risk: Exclusion
Once your data is part of an AI's training material, it is unlikely ever to be removed. Training data does not remain a discrete unit that can later be extracted; the training process intertwines it into the model itself over a period currently measured in weeks.
Should kids inadvertently contribute their data to AI training, it will likely persist forever.
AI Privacy Risk: Surveillance
To some extent, with AI, we have to think about where the technology might go rather than where it is now. Imagine a classroom with multiple small discussion groups but only one teacher: AI could monitor each conversation, add talking points, and guide the groups to stay on topic. This would be a powerful classroom tool.
However, the same technology would also facilitate mass surveillance of those students and hold the potential to monitor, record and analyze their personal conversations.
AI Privacy Risk: Identification
AI is adept at spotting patterns and using them to draw conclusions. AI vision, audio analysis, and text analysis will reliably identify users across different AI tools, even when the user does not reuse the same personal tokens, such as an email address.
This functionality will provide convenience. Young kids cannot reliably sign into their accounts to continue learning, so if a device can work out who they are by other means, that will ease the use of technology in schools. However, it will also join up each user's actions across tools, linking ever more detail to their identity.
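As an illustration of how this kind of linkage could work, here is a minimal sketch. It assumes each tool reduces a user's writing style to a numeric feature vector (an 'embedding'); the vectors and the threshold below are invented for illustration, not taken from any real product.

```python
import math

# Hypothetical feature vectors that two different AI tools might derive
# from the same child's writing style. No shared account or email is
# involved; the vectors alone are enough to link the two sessions.
tool_a_user = [0.81, 0.12, 0.55, 0.33]   # session on a maths tutor app
tool_b_user = [0.79, 0.15, 0.52, 0.35]   # session on a history chatbot

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Standard cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The threshold is chosen for illustration: very similar vectors suggest
# the same person is behind both sessions.
if cosine_similarity(tool_a_user, tool_b_user) > 0.98:
    print("Likely the same user across both tools")
```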
AI Privacy Risk: Aggregation
Computers never forget and are adept at linking data points together. The data may not appear to the user to come from a single source, but an AI owner could share data across first- and third-party platforms. Combined, this could form a very detailed informational picture of each AI user.
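As a minimal sketch of how such aggregation can work, consider two hypothetical data sets about the same children held by different services; the fields and values are invented for illustration. Neither set contains an email address or account ID, yet a simple join on quasi-identifiers links them anyway.

```python
# Two hypothetical data sets held by different services. Joining on
# quasi-identifiers (year group + postcode area) merges them into a
# far more detailed profile than either service holds alone.
learning_app = [
    {"year_group": 5, "postcode_area": "LS1", "topics_struggled": ["fractions"]},
]
quiz_site = [
    {"year_group": 5, "postcode_area": "LS1", "favourite_team": "Leeds United"},
]

profiles = []
for a in learning_app:
    for b in quiz_site:
        if (a["year_group"], a["postcode_area"]) == (b["year_group"], b["postcode_area"]):
            profiles.append({**a, **b})   # merged, more detailed picture

print(profiles)
```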
These are areas in which we must carefully consider how AI will have privacy implications for us all, but we must also consider the specific impacts on children and educators.