Back to insights
Security

AI agents and system access: Three security risks — and how we solve them

Published 2026-02-10 · Emil Kanneworff

When an AI agent gets access to your systems, three critical risks emerge. We walk through them and show how Vertex Solutions builds agents with security as the foundation.

Vintage key on keyboard — symbolizing security and access control for AI agents

AI agents that can act autonomously in your systems are no longer the future — they are the present. They can read emails, update databases, send messages, and execute commands on behalf of your employees.

But with that capability comes a security risk that many organizations underestimate. Security researcher Simon Willison has described what he calls the 'lethal trifecta' for AI agents: a combination of three properties that together create a fundamental vulnerability.

In this article, we walk through the three risks, explain why they are relevant for companies under GDPR and NIS2, and show concretely how we at Vertex Solutions address them when building AI agents for our clients.

Risk 1: Access to private and sensitive data

For an AI agent to be useful, it needs access to data. A customer service agent needs to read inquiries. A legal assistant needs to access contracts. An internal knowledge agent needs to search documents.

The problem is that most AI agent platforms require broad system access to function. This is equivalent to giving a new employee the key to every cabinet from day one — an approach we warn against in our article on AI agents as digital workers.

For companies under GDPR, this is not just a technical risk — it is a legal one. If an agent accesses personal data without legal basis, or if data is sent to a third-party API without a data processing agreement, it can trigger fines and regulatory action.

  • Vertex Solutions' approach: Our agents operate on the principle of least privilege. Each agent gets precisely the access rights it needs — no more. Data access is defined explicitly per agent and per task, and all access is logged with full traceability.

Risk 2: Exposure to untrusted content (prompt injection)

The second risk is more subtle: prompt injection. When an AI agent reads content from external sources — emails, web pages, documents from partners — this content can contain hidden instructions that manipulate the agent's behavior.

An example: An agent processing incoming emails receives a message with hidden text instructing it to forward all contracts to an external address. The agent cannot distinguish between legitimate instructions and malicious content — because both are just text.

This is not a theoretical threat. Security researchers have demonstrated prompt injection attacks against all major language models, and no one has yet found a definitive solution to the problem.

  • Vertex Solutions' approach: We design agents with strict input validation and sandboxing. External content is processed in isolated contexts, separated from the agent's core instructions. And critically: we always build a human-in-the-loop step for high-consequence actions, so an agent can never perform sensitive operations without human approval.

Risk 3: The ability to communicate externally

The third component of the trifecta is the agent's ability to send data out of the organization. If an agent can send emails, call APIs, or upload files, an attacker — via prompt injection — potentially has the ability to exfiltrate sensitive data without anyone noticing.

The combination of the three risks is what makes it dangerous: The agent has access to data (risk 1), it can be manipulated via external content (risk 2), and it can send data out (risk 3). Together, they form an attack vector that is fundamentally different from traditional security threats.

For companies under the NIS2 directive, which requires cybersecurity in the supply chain, this is particularly relevant. An AI agent operating across systems is effectively part of your security perimeter.

  • Vertex Solutions' approach: Our agents have explicitly defined output permissions. An agent that analyzes documents cannot send emails. An agent that drafts responses cannot access the file system. Communication channels are whitelisted, and all outgoing communication is logged and auditable.

Our security architecture: Four layers of protection

At Vertex Solutions, we build AI agents with a four-layer security architecture that systematically addresses all three risks:

  • Layer 1 — Isolated data access: Each agent has its own restricted access profile. No agent has access to more than it needs. Data sources are connected via read-only APIs where possible.
  • Layer 2 — Input sandboxing: External content (emails, documents, API responses) is processed in isolated contexts with strict validation. The agent's core instructions are protected against manipulation.
  • Layer 3 — Output control: Actions with consequences (sending data, system changes, external communication) require either human approval or explicit whitelist authorization.
  • Layer 4 — Full audit trail: Every action, decision, and data exchange is logged with timestamp, context, and result. Logs are accessible to compliance teams and can be exported to your existing SIEM system.

GDPR and NIS2: What the law requires of your AI agents

Under GDPR, you must be able to document that personal data is processed lawfully, that access is limited to what is necessary, and that you have appropriate technical and organizational measures. An AI agent accessing personal data is a data processing activity — and must be treated as such.

The NIS2 directive, which came into force in October 2024, places additional cybersecurity requirements, including supply chain risk management and incident response. If your AI agent uses a third-party LLM via API, that LLM provider is effectively part of your supply chain.

The choice between no-code and code-based agents directly impacts your compliance options — we walk through that trade-off in our article on no-code vs. code for AI agents.

Conclusion: Security is not a feature — it is the foundation

AI agents deliver enormous value. But that value is only realized if the organization can trust that the agent will not expose data, be manipulated, or act outside its mandate.

At Vertex Solutions, we do not build AI agents and add security afterward. We design the security architecture first and build the agent within those boundaries. That is the difference between a prototype and a production solution.

If you are considering implementing AI agents in your organization, the conversation does not start with 'which model should we use?' — it starts with 'what data does the agent have access to, and who controls it?'

  • Map agent data access before you build — not after
  • Implement least privilege: the agent only gets access to what is necessary
  • Sandbox external content to protect against prompt injection
  • Require human approval for high-consequence actions
  • Log everything — and make logs accessible for compliance and audit
  • Treat your LLM provider as part of the supply chain under NIS2

Want to secure your AI agents against these risks?

We design the security architecture first — and build the agent within those boundaries.