
AI and the Rise of Phishing Scams

  • Writer: Katarzyna Celińska
  • Oct 13
  • 2 min read

A recent Reuters investigation in collaboration with Harvard University tested whether generative AI tools could be misused to craft phishing scams. The results were alarming: AI makes phishing faster, cheaper, and more convincing, nearly doubling the success rate compared to traditional phishing.

 

Tested AI models:

☑️ OpenAI’s ChatGPT

☑️ Anthropic’s Claude

☑️ Meta’s Meta AI

☑️ Google’s Gemini

☑️ xAI’s Grok

☑️ China’s DeepSeek

 



The chatbots were asked to generate scam-like emails. While most initially refused, light “prompt engineering” (e.g., framing the request as research, role-play, or fiction) got them to produce highly tailored phishing templates.

 

Results

Click-through rate: 11% of participants clicked on links in the AI-generated phishing emails — nearly double the industry benchmark of 6% for phishing simulations.

☑️ AI lowers barriers → anyone can generate sophisticated phishing emails.

☑️ Psychological manipulation at scale → urgency, fear, trust, and empathy can be mass-produced by algorithms.

 

Scenarios generated:

☑️ Fake IRS tax refund messages.

☑️ Medicare benefit fraud emails.

☑️ Charity scams exploiting disaster relief themes.

☑️ Banking & payment fraud emails.

☑️ Level of personalization: Chatbots even suggested best times of day to target seniors for maximum effect.

☑️ Bypassing guardrails: Despite safeguards, models still produced phishing content when asked indirectly. This highlights the fragility of AI safety filters.

☑️ Real-world parallels: Investigations in Southeast Asia show scam syndicates already using AI tools to sharpen their fraud campaigns.

 

The hype around AI is everywhere — and now I see many ‘experts’ popping up who just yesterday were GDPR, compliance, risk, whistleblowing, or NIS2 consultants. Today, suddenly, they’re AI gurus (sic!).

For me, AI is a double-edged sword. It can bring huge benefits, but it can also hurt organizations and people if misused. This study shows clearly how criminals can weaponize AI to scale phishing attacks.

Phishing is nothing new, but the psychological manipulation behind it is timeless — and AI just amplifies it. What’s dangerous here is not only the sophistication of the scams, but also the speed and scale with which they can now be deployed.

 

The only sustainable defense is a mix of:

✅ Ongoing user education and awareness.

✅ Building a ‘reduced trust’ approach in communication — verify before you click.

✅ Strengthening detection systems and incident response.
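The ‘verify before you click’ habit can even be partly automated. As a minimal illustration (my own sketch, not something from the study), the snippet below uses only the Python standard library to flag one classic phishing tell: a link whose visible text shows one domain while the underlying href actually points somewhere else.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchDetector(HTMLParser):
    """Flags <a> tags whose visible text shows a different domain than the real href."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the anchor currently being parsed
        self._text = []        # visible text collected inside that anchor
        self.suspicious = []   # (shown_text, real_href) pairs that mismatch

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            real = urlparse(self._href).hostname or ""
            # Only compare when the anchor text itself looks like a URL.
            if text.startswith(("http://", "https://", "www.")):
                shown = urlparse(text if "//" in text else "//" + text).hostname or ""
                if shown and shown != real:
                    self.suspicious.append((text, self._href))
            self._href = None

detector = LinkMismatchDetector()
detector.feed('<a href="http://evil.example/refund">https://www.irs.gov/refund</a>')
print(detector.suspicious)
# → [('https://www.irs.gov/refund', 'http://evil.example/refund')]
```

This is deliberately simple — it misses lookalike domains, ‘Click here’ links, and redirectors — but it shows how cheap the first layer of automated verification can be compared to the cost of a successful phish.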

 

BW ADVISORY sp. z o.o. 

ul. Boczańska 25
03-156 Warszawa
NIP: 525-281-83-52
