AEPD publishes AI Agents Guidelines
Quite literally a moment ago, I came across the new guidelines published by the AEPD on AI agents and personal data protection.
The timing could not have been better: I was preparing a mini-lecture for the Łódź group of Stowarzyszenie Praktyków Ochrony Danych on AI Security and Personal Data Protection – Understanding Risks in Gen AI Systems (LLM, RAG, Agents) and Technical Safeguards.

Graphic: Freepik
I recommend this publication to anyone working in cybersecurity, privacy, compliance, or AI governance.
What makes these guidelines particularly valuable is that they are not just legal commentary or high-level regulatory messaging. This is a technical document.
The Agencia Española de Protección de Datos (AEPD) takes the time to explain how agentic AI systems function from an architectural perspective. It addresses LLMs, memory mechanisms, orchestration layers, tool integrations, autonomy, and external system interactions. It clearly recognizes that AI agents are not simply “chatbots,” but systems capable of acting, making decisions, invoking tools, and interacting with multiple environments.
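To make that concrete, here is a minimal sketch of an agentic loop, with the LLM choosing tools and writing to persistent memory. It is not taken from the AEPD guidelines; every name in it (call_llm, TOOLS, Memory) is an illustrative placeholder, and real frameworks add orchestration, error handling, and guardrails on top of this pattern.

```python
# Minimal sketch of an agentic loop: an LLM plans, invokes tools,
# and persists memory. All names are illustrative placeholders.
import json

_SCRIPT = [
    {"action": "lookup_customer", "args": {"id": 42}},
    {"action": "finish", "answer": "Customer notified."},
]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; replays a canned script here
    # so the sketch runs without an API key.
    return json.dumps(_SCRIPT.pop(0))

# Tool registry: each tool can touch external systems and personal data.
TOOLS = {
    "lookup_customer": lambda args: {"name": "Jane Doe", "email": "jane@example.com"},
    "send_email": lambda args: {"status": "sent"},
}

class Memory:
    """Persistent memory: anything stored here may contain personal data."""
    def __init__(self):
        self.items: list[str] = []
    def add(self, item: str):
        self.items.append(item)

def run_agent(task: str, memory: Memory, max_steps: int = 5):
    for _ in range(max_steps):
        # The orchestration layer asks the model which tool to invoke next.
        decision = json.loads(call_llm(f"Task: {task}\nMemory: {memory.items}"))
        if decision["action"] == "finish":
            return decision["answer"]
        result = TOOLS[decision["action"]](decision.get("args", {}))
        memory.add(f"{decision['action']} -> {result}")  # data now persists

print(run_agent("Notify customer 42", Memory()))
```

Even this toy version shows why the AEPD treats memory and tool integrations as first-class concerns: the customer's personal data leaves the CRM tool and lands in persistent memory within a single loop iteration.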
Even traditional Gen AI systems based on LLM or RAG architectures already introduce significant risks. We are all familiar with prompt injection, hallucinations, data leakage, over-collection of personal data, and weak access controls around embeddings and vector databases.
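To illustrate just one of those risks: in a naive RAG setup, retrieval often ignores who is asking. The sketch below is hypothetical (vector_store, acl, and the filter syntax are stand-ins, not a specific library) and contrasts an unfiltered similarity search with one scoped to the requester's entitlements.

```python
# Illustrative sketch: access control on retrieval from a vector store.
# 'vector_store' and its methods are hypothetical, not a real library API.

def retrieve_naive(vector_store, query_embedding):
    # Risky: returns the nearest chunks regardless of who is asking,
    # so one user's personal data can leak into another user's context.
    return vector_store.search(query_embedding, top_k=5)

def retrieve_scoped(vector_store, query_embedding, user_id, acl):
    # Safer: restrict the search to documents the requester may access.
    allowed = acl.documents_for(user_id)
    return vector_store.search(
        query_embedding,
        top_k=5,
        filter={"document_id": {"$in": allowed}},  # metadata filter
    )
```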
AI agents represent a further step. They can trigger APIs, access databases, store persistent memory, coordinate with other agents, and execute automated workflows. In such environments, personal data may flow across systems in ways that are difficult to predict or fully map.
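One engineering answer to that unpredictability is to make every hop observable: log each tool invocation at the orchestration layer so data flows can be reconstructed afterwards. A minimal sketch (the wrapper and its fields are assumptions, not a prescribed format):

```python
# Sketch: an audit trail at the orchestration layer, so every hop of
# personal data across tools leaves a traceable record.
import datetime
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

def audited_call(tool_name, tool_fn, args, session_id):
    audit.info(
        "session=%s tool=%s args_keys=%s at=%s",
        session_id,
        tool_name,
        sorted(args.keys()),  # log field names, not raw values (data minimization)
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    result = tool_fn(args)
    audit.info("session=%s tool=%s returned=%s",
               session_id, tool_name, type(result).__name__)
    return result
```

Logging argument keys rather than values is itself a data-protection choice: the audit trail must not become yet another copy of the personal data it is meant to track.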
Engineering, not paperwork
There is, of course, a process and organizational section in the guidelines. But the real strength of the document lies in its focus on technology. It discusses system design choices, architectural safeguards, monitoring mechanisms, logging, access controls, isolation of components, and the importance of human oversight in agentic systems. And this is, in my view, where the future of privacy and AI governance is heading.
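Human oversight in particular can be an engineering control rather than a policy statement: high-risk actions are routed through an approval step before the agent executes them. A minimal sketch, assuming an illustrative set of high-risk actions and a console reviewer:

```python
# Sketch: human-in-the-loop gate for high-risk agent actions.
# Which actions count as high risk is a design decision; this set
# is purely illustrative.
HIGH_RISK_ACTIONS = {"send_email", "delete_record", "export_data"}

def request_human_approval(action, args) -> bool:
    # Placeholder: in practice this would go to a reviewer queue or ticket.
    answer = input(f"Approve {action} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_oversight(action, tool_fn, args):
    if action in HIGH_RISK_ACTIONS and not request_human_approval(action, args):
        return {"status": "blocked", "reason": "human reviewer declined"}
    return tool_fn(args)
```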
For years, compliance could often be treated as documentation: policies, registers, DPIAs, contractual clauses. Today, risks are increasingly embedded in code, pipelines, connectors, and system configurations. If you do not understand how an agent selects tools, how it stores memory, how it retrieves context, or how it logs user interactions, you cannot meaningfully assess personal data risk.
Compliance is moving from documents to design.
We are becoming more and more dependent on autonomous technologies. In such an environment, the era of purely “paper-based” compliance and lawyers focusing only on formal conformity may slowly be coming to an end.
Author: Sebastian Burgemejster