
Introduction
AI systems are gradually moving beyond generating responses and are beginning to take actions across systems. They plan tasks, interact with external systems, retrieve and process data, and carry out multi-step actions with limited human involvement. These systems, commonly referred to as ‘AI agents’, are already being used across sectors, from customer service and recruitment to finance and infrastructure management.
At first, this shift may not seem like a regulatory problem. After all, the EU AI Act already provides a structured compliance framework based on risk classification, transparency, and accountability. But that framework was designed with relatively static systems in mind: systems that produce outputs, not systems that act autonomously across environments.
That is where the real issue begins: once an AI system begins to act, the question is no longer just what it was designed to do, but what it actually does in real time.
How AI Agents Differ in Function and Risk
An AI agent is not a single, isolated system. It operates by combining multiple capabilities: reasoning, tool use, and environmental interaction. It can retrieve information from databases, trigger external APIs, modify records, send communications, or initiate transactions, all as part of a single workflow.
The same underlying model can therefore produce very different outcomes depending on how it is deployed. A system summarising internal documents carries minimal regulatory risk. The same system, if used to screen job applicants or process financial decisions, may fall within high-risk categories under the EU AI Act.
This is because the regulatory profile of an AI agent is not determined by its architecture, but by its actions, the data it handles, and the systems it interacts with. In practice, this creates a moving compliance target.
Compliance Challenges in Dynamic AI Systems
The EU AI Act follows a structured approach. Systems are classified based on risk, and obligations are applied accordingly. High-risk systems must comply with requirements relating to risk management, data governance, transparency, human oversight, and cybersecurity.
But this model assumes that the system being assessed is stable, that its behaviour can be tested, documented, and evaluated before deployment. AI agents challenge that assumption.
Once deployed, these systems do not operate in isolation. They interact with dynamic environments, respond to new inputs, and adapt their behaviour across tasks. More importantly, their actions can trigger multiple regulatory frameworks at once. An agent processing customer queries may engage data protection law. If it executes financial actions, financial regulations may apply. If it interacts with digital platforms, additional obligations may arise.
In practice, compliance is no longer a one-time exercise. It depends on how the system behaves after deployment.
Evolving System Behaviour and Compliance Risks
One of the most significant challenges in this space is what can be described as behavioural drift.
AI agents are designed to adapt. They may refine outputs based on feedback, change how they use tools, or develop new execution patterns within the boundaries of their programming. Over time, this can lead to behaviour that differs from what was originally tested during compliance assessment.
Not all adaptation is problematic. Systems that operate within clearly defined and documented parameters can still remain compliant. The difficulty arises when behaviour evolves in ways that were not anticipated, particularly in systems that learn from ongoing interactions or rely on external data sources.
In such cases, it becomes difficult to determine whether the system is still operating within the scope of its original conformity assessment, and to distinguish the behaviour that was actually tested from the behaviour that was not.
This creates a structural gap. The regulatory framework evaluates systems at the point of design and deployment. AI agents, however, continue to evolve beyond that point.
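As a purely illustrative sketch (the Act does not prescribe any such method, and the function names, action labels, and threshold below are hypothetical), a deployer might detect behavioural drift by comparing the distribution of actions an agent takes in production against the distribution recorded during its conformity assessment:

```python
from collections import Counter

def action_distribution(actions):
    """Normalise a log of agent actions into a frequency distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

def drift_score(baseline, observed):
    """Total variation distance between two action distributions.

    Ranges from 0.0 (identical behaviour) to 1.0 (no overlap at all).
    """
    actions = set(baseline) | set(observed)
    return 0.5 * sum(abs(baseline.get(a, 0.0) - observed.get(a, 0.0))
                     for a in actions)

# Behaviour recorded during testing (hypothetical action labels).
tested = action_distribution(
    ["summarise", "summarise", "lookup", "summarise", "lookup"])

# Behaviour observed in production: the agent has started sending emails,
# an action never exercised during the original assessment.
live = action_distribution(
    ["summarise", "lookup", "send_email", "send_email", "lookup"])

DRIFT_THRESHOLD = 0.3  # illustrative policy value, not a regulatory figure
if drift_score(tested, live) > DRIFT_THRESHOLD:
    print("behavioural drift detected: escalate for re-assessment")
```

The point of such a check is not to certify compliance, but to give the deployer an early, auditable signal that the system may have moved outside the envelope that was actually evaluated.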
Key Challenges
- Cybersecurity and System Control: AI agents typically operate with broad access to external tools and systems. Overly permissive configurations allow agents to act beyond their intended scope. Governing this requires more than internal system instructions. Access controls and monitoring must be enforced at the architectural level.
- Human Oversight: The Act requires effective human oversight for high-risk systems, but agents can execute sequences of consequential actions faster than any human reviewer can follow. When oversight is reduced to reviewing outcomes after the fact, the opportunity to prevent harm has already passed.
- Transparency: Disclosure requirements are straightforward when an agent interacts directly with an end user. They become considerably more complex when the agent acts on a user’s behalf, communicates with third parties, or influences backend systems without any visible interface. Determining who is affected and ensuring they are meaningfully informed is a genuine open problem.
- Regulatory Overlap: Few AI agents operate within a single legal regime. Their actions can simultaneously trigger obligations under the GDPR, the Digital Services Act, sector-specific financial or healthcare regulations, and cybersecurity frameworks. Coordinating compliance across these overlapping layers requires a level of cross-functional legal and technical infrastructure that most organisations are not yet structured to provide.
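To make the first two points above concrete, here is a minimal, hypothetical sketch of what “enforced at the architectural level” can mean: every tool call passes through a gateway that checks a role-based allowlist and writes an append-only audit record, so that neither a prompt nor the model itself can widen the agent’s permissions. All names, roles, and tools below are invented for illustration:

```python
class ToolAccessError(PermissionError):
    """Raised when an agent attempts a tool call outside its allowlist."""

# Hypothetical allowlist, maintained outside the model's instructions so
# that prompt-level manipulation cannot expand an agent's scope.
ALLOWLIST = {
    "support_agent": {"search_kb", "draft_reply"},
    "finance_agent": {"search_kb", "read_ledger"},
}

AUDIT_LOG = []  # append-only record supporting human oversight after the fact

def invoke_tool(role, tool, payload):
    """Gateway that every tool invocation must pass through before execution."""
    allowed = tool in ALLOWLIST.get(role, set())
    # Log the attempt whether or not it is permitted, so blocked actions
    # remain visible to reviewers.
    AUDIT_LOG.append({"role": role, "tool": tool, "allowed": allowed})
    if not allowed:
        raise ToolAccessError(f"{role!r} may not call {tool!r}")
    return f"executed {tool}"  # placeholder for the real tool dispatch
```

In this sketch, a `support_agent` calling `read_ledger` is refused before anything executes, and the refused attempt is still logged. This illustrates the shift the bullet describes: control and oversight live in the surrounding architecture, not in the agent’s instructions.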
AMLEGALS Remarks
The EU AI Act represents one of the most comprehensive attempts to regulate artificial intelligence, and its risk-based framework is both necessary and forward-looking. However, the rise of AI agents highlights a limitation that is difficult to ignore.
The current compliance structure is built around the assumption that systems can be assessed, categorised, and controlled at the point of deployment. The challenge going forward is not simply to regulate AI systems, but to account for how they behave once they are in use. Without that shift, there is a risk that compliance becomes a formal requirement rather than a functional safeguard.
For any queries or feedback, feel free to connect with mridusha.guha@amlegals.com or Khilansha.mukhija@amlegals.com
