Deepfake Defences: How AI Deception is Evolving and What We Can Do About It
- Katarzyna Celińska
Deepfakes are no longer just a curiosity; they are now a systemic cyber and societal risk. The Ofcom reports “Deepfake Defences: Mitigating the Harms of Deceptive Deepfakes” and “Deepfake Defences 2 – The Attribution Toolkit” provide one of the most comprehensive overviews to date of how deepfakes are created, how they cause harm, and what strategies regulators, tech companies, and governments can use to fight them.

Key findings
📌 Deepfakes are now mainstream.
📌 Three main harms:
➡️ Demeaning: used to humiliate or abuse victims, often through non-consensual sexual content.
➡️ Defrauding: used in fraudulent advertising, scams, and impersonations, including real-world financial theft.
➡️ Disinforming: used to spread political and social disinformation.
📌 The “deepfake economy.”
An entire ecosystem of creators, apps, and hosting platforms is fueling growth.
📌 Detection and response are complex.
Ofcom proposes a four-part defence model — Prevent, Embed, Detect, Enforce:
➡️ Prevent: Limit creation (prompt/output filters, NSFW dataset removal, red teaming).
➡️ Embed: Add watermarks, metadata, and labels to trace content origins.
➡️ Detect: Use forensic, hashing, and AI classification tools to identify manipulated content (a minimal hashing sketch follows this list).
➡️ Enforce: Set platform rules, remove harmful content, and sanction violators.
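To make the “Detect” step a bit more concrete, here is a minimal sketch of the hashing idea: comparing a perceptual hash of incoming media against hashes of previously identified deepfakes. It assumes the Pillow and imagehash Python libraries, and the hash database here is purely illustrative; real deployments combine this with forensic analysis and AI classifiers.

```python
# Minimal sketch: flag images whose perceptual hash is close to a known deepfake.
# Assumes the Pillow and imagehash libraries; the hash list below is illustrative only.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously identified deepfakes.
KNOWN_DEEPFAKE_HASHES = [
    imagehash.hex_to_hash("f0e4d2c1b3a59687"),
]

def looks_like_known_deepfake(image_path: str, max_distance: int = 8) -> bool:
    """Return True if the image is perceptually close to a known deepfake."""
    candidate = imagehash.phash(Image.open(image_path))
    # ImageHash subtraction returns the Hamming distance between the two hashes.
    return any(candidate - known <= max_distance for known in KNOWN_DEEPFAKE_HASHES)

if __name__ == "__main__":
    print(looks_like_known_deepfake("suspect_frame.jpg"))
```

Hash matching only catches re-uploads of known content, which is why the reports treat it as one tool among several rather than a complete defence.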
📌 The Attribution Toolkit
This second paper introduces a framework for content provenance: ensuring synthetic media can be traced to its source using metadata, digital signatures, and watermarking. It focuses on upstream accountability as well as downstream enforcement.
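The provenance idea is essentially cryptographic: the originator signs the media together with its metadata, and anyone downstream can verify both the source and that nothing was altered. Below is a minimal sketch, assuming the Python cryptography package and a simplified JSON metadata record rather than a full standards-based manifest; the field names and key handling are illustrative.

```python
# Minimal sketch of content provenance: sign media bytes + metadata at creation,
# verify the signature downstream. Metadata fields and key handling are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_content(media: bytes, metadata: dict,
                 private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign the media together with its provenance metadata."""
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    return private_key.sign(payload)

def verify_content(media: bytes, metadata: dict, signature: bytes,
                   public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the signature matches both the media and the metadata."""
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    media = b"...raw video bytes..."
    meta = {"creator": "Example Studio", "tool": "gen-model-x", "synthetic": True}
    sig = sign_content(media, meta, key)
    print(verify_content(media, meta, sig, key.public_key()))                      # True
    print(verify_content(media, {**meta, "synthetic": False}, sig, key.public_key()))  # False: metadata tampered
```

The point of the toolkit is that this kind of signature travels with the content, so platforms and users downstream can check provenance instead of guessing.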
As a cybersecurity professional passionate about my work, I find these reports very interesting. Deepfakes are increasingly used for fraud, manipulation, and reputational attacks, and yet we lack universal defensive standards. What is missing, and what I expected to find, are clear guidelines for organizations and users on how to prepare for and respond to deepfake threats. The reports focus primarily on history, mechanisms, and the technology supply chain, not on user-level defences.
Still, I strongly agree with one key point: "AI model developers and service providers must embed safeguards by design, not as an afterthought."
To protect themselves, organizations should implement:
➡️ Verification Protocols – Establish multi-factor verification for sensitive communications and require secondary confirmation (e.g., callback or secure channel) before acting on voice, video, or email requests; a minimal sketch of such a gate follows this list.
➡️ Detection Systems – Use AI-based tools to detect manipulated media.
➡️ Employee Awareness – Conduct regular training and simulation exercises to help staff recognize deepfakes, practice verification procedures, and strengthen organizational readiness.
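As an illustration of the verification-protocol idea, here is a minimal sketch of a “no single channel is trusted” gate. The thresholds, action names, and the confirm_via_callback placeholder are hypothetical; in practice they stand in for whatever callback or secure-channel process an organization actually uses.

```python
# Minimal sketch: never act on a high-risk request received through one channel alone.
# confirm_via_callback() is a placeholder for a real out-of-band verification step.
from dataclasses import dataclass

@dataclass
class Request:
    requester: str      # claimed identity, e.g. "CFO"
    channel: str        # "voice", "video", "email", ...
    action: str         # e.g. "wire_transfer"
    amount_usd: float

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

def confirm_via_callback(requester: str) -> bool:
    """Placeholder: call the requester back on a known-good number or secure channel."""
    raise NotImplementedError("Integrate with your organization's verification process")

def should_execute(request: Request) -> bool:
    """Allow high-risk or high-value requests only after secondary confirmation."""
    if request.action in HIGH_RISK_ACTIONS or request.amount_usd > 10_000:
        return confirm_via_callback(request.requester)
    return True
```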
Author: Sebastian Burgemejster