Voice Cloning and Audio Deepfakes: A Family Guide
How to recognise AI-generated voice fraud, set up family verification measures, and respond if your child's voice has been cloned.
How AI voice cloning works
Modern artificial intelligence can create a convincing replica of a person's voice from as little as three seconds of audio. Free and low-cost tools available online allow anyone to upload a voice sample — taken from a social media video, podcast, or phone recording — and then generate new speech in that person's voice, saying anything the attacker types. The resulting audio can be indistinguishable from genuine speech to an untrained ear, and sometimes even to voice-recognition software. This technology, originally developed for accessibility and entertainment, is now routinely misused in financial fraud and, increasingly, in targeted attacks against families, including attempts to impersonate children in distress.
The grandparent scam pattern
One of the most common voice-cloning frauds involves a caller impersonating a grandchild or young relative who claims to be in urgent trouble: arrested abroad, involved in an accident, or hospitalised. The caller begs the recipient not to tell other family members and asks for an immediate bank transfer or gift card payment. The cloned voice adds emotional authenticity that written scam messages cannot match, and the urgency and secrecy are designed to bypass rational judgement. Older family members are most frequently targeted, but any relative can receive such a call. The scam exploits the deep instinct to protect a child, which is precisely why it succeeds even with educated, cautious victims.
Setting up family safe words
A family safe word is a simple but highly effective defence against voice-cloning fraud. Choose a word or short phrase that is memorable but not guessable from public information — avoid names, birthdays, or obvious words. Share it only with close family members and agree that anyone receiving a distressing call from a family member must ask for the safe word before taking any action, including sending money. A genuine caller will know it; a fraudster will not. Reinforce the rule that no emergency is so urgent that there is no time for the safe word. Practise the protocol as a family so it becomes instinctive, and update the word periodically or if you suspect it has been shared.
Protecting your child's voice online
Every video your child posts publicly on TikTok, YouTube, Instagram, or similar platforms is a potential voice sample for cloning tools. You do not need to prohibit all video content, but you should audit privacy settings so that videos are visible only to approved followers rather than the general public. Be particularly cautious about long, unedited videos in which your child speaks naturally at length, as these provide the highest-quality training data; gaming streams, vlogs, and recordings of school performances are common sources. Avoid posting content that includes your child's full name, school, or location alongside their voice. Even brief clips aggregated across platforms can provide sufficient audio for a usable clone.
What to do if a voice has been cloned
If you believe your child's or a family member's voice has been cloned and used in fraud, report it to Action Fraud on 0300 123 2040 or at actionfraud.police.uk. Preserve any recordings of the fraudulent call, along with any associated phone numbers, email addresses, or payment requests. If money has been transferred, contact your bank immediately — many UK banks can attempt a Faster Payments recall within 24 hours. If the cloned voice was used to generate threatening, abusive, or sexual content involving your child, this is a criminal matter to report directly to the police on 101, or 999 if there is an immediate risk. The Internet Watch Foundation (iwf.org.uk) can assist if the content has been distributed online.
This is practical educational content to support families. For case-specific concerns about a child's safety, contact the NSPCC helpline on 0808 800 5000 or your local safeguarding team.