18 March 2026

Deepfakes Targeting Children: What Parents Must Know

By Safe Child Guide Editorial Team

Deepfake technology has advanced rapidly, and law enforcement agencies across the UK are reporting a sharp increase in cases where children are the targets. The National Crime Agency has warned that AI-generated imagery involving children is one of the fastest-growing threats in the online safety landscape.

The most alarming development is the use of deepfake tools to create non-consensual intimate imagery of children and young people. Perpetrators take innocent photographs — often from social media profiles, school websites, or family accounts — and use freely available AI tools to generate sexualised images. These images are then shared on dark web forums, used for blackmail, or circulated among peers as a form of bullying.

Schools have reported cases where students have used deepfake apps to create fake images of classmates, sometimes without fully understanding the legal and emotional consequences. Under UK law, creating, possessing, or distributing sexualised deepfake images of children is a criminal offence, regardless of the age of the person who created them.

What parents can do: review the images of your child that are publicly available online and tighten privacy settings on all social media accounts. Discuss deepfakes with your child in an age-appropriate way — they need to understand that this technology exists and that they should tell a trusted adult immediately if they become aware of fake images involving themselves or a peer. Schools should include deepfake awareness in their online safety curriculum.
