Cost of a Data Breach 2025 – Part 3
- Katarzyna Celińska

- Oct 14
- 2 min read
The third part of my analysis of the IBM–Ponemon Institute Cost of a Data Breach Report 2025 dives into one of the fastest-growing risks: AI in data breaches.
AI in Breaches
13% of organizations reported a breach linked to their AI models or apps — and shockingly, 97% lacked proper AI access controls.

Photo: https://pl.freepik.com/
The most common causes of AI security incidents:
☑️ Supply chain compromise (30%)
☑️ Model inversion (24%)
☑️ Model evasion (21%)
The consequences were severe: operational disruption (31%), unauthorized access to sensitive data (31%), and loss of data integrity (29%).
Shadow AI – The Bigger Problem
☑️ 20% of breaches were tied to shadow AI — unsanctioned use of AI by employees.
☑️ Shadow AI incidents most often compromised customer PII (65%) and intellectual property (40%).
☑️ Breaches involving shadow AI added $670K to the average breach cost compared to organizations with little or no shadow AI.
This shows that simply banning AI doesn’t work. Employees will use AI to work smarter — the key is enabling secure, governed use.
AI as an Attack Tool
Attackers are also weaponizing AI:
☑️ 16% of all breaches in 2025 involved attackers using AI.
☑️ Top techniques: AI-generated phishing (37%) and deepfakes (35%).
Almost every organization has now faced incidents involving AI. The most common risks come from supply chain exposure, model inversion, or evasion; the biggest consequences are service disruption, data loss, and compromised integrity.

Shadow AI is now a bigger problem than shadow IT — employees use AI even when it is banned. The solution isn’t prohibition, but helping employees use AI in a safe and secure way. The fact that 63% of organizations lack AI governance policies says everything about our preparedness. Meanwhile, threat actors are moving fast, weaponizing AI for phishing and deepfakes.
Author: Sebastian Burgemejster
