NYDFS Issues Guidance on AI Cyber Risks

  • Writer: Katarzyna Celińska
  • Nov 9
  • 2 min read

The NYDFS has released guidance titled “Cyber Risks from Artificial Intelligence and Strategies to Combat Related Risks.”

 

This marks one of the first U.S. state-level regulatory efforts to explicitly connect AI governance with cybersecurity, outlining expectations for financial institutions, insurers, and other regulated entities operating in New York.

 

Key Highlights

1️⃣ AI-Driven Cyberattacks — AI can be weaponized by threat actors to automate phishing, develop polymorphic malware, and conduct large-scale reconnaissance.

2️⃣ Model Exploitation and Poisoning — Poorly secured AI systems are vulnerable to manipulation, leading to data corruption or biased outputs.

3️⃣ Third-Party Dependencies — The report underscores the “increased vulnerabilities due to third-party, vendor, and other supply chain dependencies,” highlighting the need for robust third-party risk management (TPRM) and continuous vendor oversight.

4️⃣ AI Misuse and Hallucination — Systems generating incorrect or misleading outputs can cause operational errors or regulatory breaches.

5️⃣ Governance Failures — Lack of clear accountability for AI systems introduces major legal, ethical, and cybersecurity exposures.

 


Recommendations

✅ Establish cybersecurity programs that define roles, responsibilities, and risk ownership.

✅ Conduct risk assessments as part of enterprise cybersecurity programs.

✅ Implement third-party service provider (TPSP) policies and procedures.

✅ Strengthen incident response and model monitoring processes.

✅ Require strong access controls, including MFA.

 

Honestly, I can’t find anything new in these guidelines that hasn’t already been discussed for years in cybersecurity standards and frameworks such as DORA, NIS2, and ISO 27001. What the NYDFS did well, however, is connect the dots between AI and the supply chain risk landscape, which is where most modern cyber incidents originate. And this is spot on, because the majority of organizations I’ve worked with still underestimate how dependent they are on third-party infrastructure, APIs, and AI-powered SaaS services. I’ve seen too many companies rely blindly on ISO 27001 certificates or SOC 2 reports as security assurance, and frankly, that’s not enough.

 

Recently, I’ve reviewed multiple SOC 2 reports that:

✅ Were issued by auditors lacking deep technical understanding,

✅ Contained auto-generated controls from GRC platforms,

✅ Often didn’t match the actual risk landscape of the organization,

✅ And, in some cases, described controls that existed only on paper, not in real operations.
