14 Jul 2025

The Agentic Stack: Hurtling Towards Autonomous AI, But Is Security Lagging Behind?

The Agentic Stack is here. Are we ready for the security implications?

AI agents are rapidly moving from theory to practice, discovering new tools, integrating autonomously, and reshaping how businesses operate.

But as the new Agentic Protocol Stack (MCP, AG-UI, A2A) takes shape, we’re facing unprecedented risks:
- Autonomous attack propagation
- Tool poisoning
- Interface manipulation

We can’t afford to repeat the mistakes of the internet boom: build first, secure later.

Our Co-Founder & CTO Chris Jefferson explores how organisations can balance speed with responsibility and shares practical strategies for deploying agentic systems securely and responsibly.

Words by
Chris Jefferson

The Agentic Stack: Hurtling Towards Autonomous AI, But Is Security Lagging Behind?

Something extraordinary is unfolding in the world of artificial intelligence. AI agents such as DeepAgent are now capable of autonomously discovering new tools and connecting to them via the Model Context Protocol (MCP), retaining those capabilities for future tasks, all with scarcely a human in sight. It’s nothing short of astonishing progress.

Yet, therein lies the rub.

Organisations across the globe are racing ahead, deploying AI agents at a pace that threatens to outstrip our collective ability to evaluate or secure them effectively. The implications of this reality are as far-reaching as they are sobering.

Lessons from the Internet Boom

The 1990s were marked by a surge of optimism as the internet hurtled into public consciousness, transforming businesses, governments and societies virtually overnight. Early adopters secured significant advantages, riding the wave of innovation and digital disruption.

However, there was a catch. Security, all too often, was treated as a mere afterthought. The prevailing mantra was “build first, secure later.” The result? Costly retrofits, systemic breaches, and vulnerabilities that continue to haunt us to this day (e.g. SQL injection).

Fast forward to 2025, and history appears poised to repeat itself, but at machine speed rather than human pace.

Therein lies the crucial difference: threats in the age of the internet relied, to a significant extent, on human actions to propagate. In contrast, AI threats could potentially spread autonomously, with agents acting at speeds far beyond the realm of human intervention.

The Autonomous AI Protocol Stack

What is emerging now is nothing less than the technological scaffolding for the next era of artificial intelligence: a new “Agentic Protocol Stack,” composed of three principal layers (although more do exist) that collectively promise to redefine how AI systems interact, both with humans and amongst themselves.

Layer 1: MCP (Model Context Protocol)

Consider MCP as the AI equivalent of USB-C: a universal connector that dramatically simplifies how AI systems link to external tools, databases and services.

Previously, integrating an array of AI applications with an equally diverse ecosystem of external systems created what Anthropic aptly termed the “M × N problem.” Essentially, you’d require a unique integration for each pairing of an AI system and external tool, leading to a bewildering matrix of connections.

MCP sweeps away this complexity, transforming it into an “M + N problem,” by offering a single, standardised interface. In short, MCP has the potential to eliminate countless bespoke integrations.
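
To make the “M + N” point concrete, the sketch below shows a single client reaching an arbitrary MCP server using the official Python SDK (the `mcp` package). The server command and the `get_forecast` tool are hypothetical, and the SDK’s surface may shift between releases, so treat this as a flavour of the pattern rather than a definitive integration.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server; any tool exposed over the standard protocol
# can be reached through this one client, which is the "M + N" win.
server = StdioServerParameters(command="python", args=["weather_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever tools this server advertises.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Invoke one of them; "get_forecast" is purely illustrative.
            result = await session.call_tool("get_forecast", arguments={"city": "London"})
            print(result)

asyncio.run(main())
```

The same client code works unchanged against any compliant server, which is precisely what collapses the integration matrix.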

Real-world evidence underscores MCP’s promise. At Block, an enterprise-scale rollout of MCP has enabled employees to shave between 50 and 75 per cent off the time spent on routine tasks, thanks to seamless AI-tool connectivity. Notably, major players like OpenAI, Microsoft and AWS have embraced MCP, weaving it into their platforms with astonishing speed.

Layer 2: AG-UI (Agent-to-User Interface)

Thus far, user interfaces have largely remained static, constraining the dynamic potential of AI. Enter AG-UI: a protocol designed to foster adaptive, fluid interfaces capable of reshaping themselves in real time, depending on what an AI agent discovers it can do.

Picture digital forms that reorganise themselves on the fly, offering users new capabilities the moment an AI agent learns a novel skill. It’s an essential advancement for forging true human-machine collaboration.
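
The sketch below gives a feel for how a front end might consume an AG-UI-style event stream. The event names follow those published in the AG-UI repository, but the shapes here are simplified assumptions, and a toy list stands in for a live server-sent-events connection.

```python
import json
from typing import Iterator

def handle_events(lines: Iterator[str]) -> None:
    """Consume a newline-delimited stream of AG-UI-style events."""
    for line in lines:
        event = json.loads(line)
        kind = event.get("type")
        if kind == "TEXT_MESSAGE_CONTENT":
            print("agent says:", event["delta"])            # streamed agent text
        elif kind == "TOOL_CALL_START":
            print("agent is using tool:", event["toolCallName"])  # surface tool use
        elif kind == "STATE_DELTA":
            print("UI state patch:", event["delta"])        # patch to shared UI state
        elif kind == "RUN_FINISHED":
            print("run complete; interface can settle")

# Toy stream standing in for a live SSE connection.
stream = [
    '{"type": "TEXT_MESSAGE_CONTENT", "delta": "Checking inventory..."}',
    '{"type": "TOOL_CALL_START", "toolCallName": "inventory_lookup"}',
    '{"type": "RUN_FINISHED"}',
]
handle_events(iter(stream))
```

The key idea is that the interface is driven by the event stream itself: as the agent gains capabilities, new event types simply appear and the front end redraws accordingly.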

Layer 3: A2A (Agent-to-Agent Communication)

Finally, there’s A2A, the coordination layer of this nascent stack. Google’s Agent2Agent protocol typifies this concept, enabling autonomous agents to discover one another, negotiate tasks, and coordinate intricate workflows that span organisational boundaries.

Think of it as APIs chatting away with other APIs, but entirely without human handoffs or manual integrations. It’s the digital equivalent of colleagues spontaneously collaborating on a complex project, but all at machine speed.
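
A flavour of how discovery works in practice: under Google’s A2A spec, an agent advertises a machine-readable “Agent Card” at a well-known URL. The endpoint below is hypothetical and the field names are taken from the published spec, so verify both against the version you target.

```python
import json
from urllib.request import urlopen

# A2A discovery sketch: agents publish an "Agent Card" describing who
# they are and what they can do. The base URL below is hypothetical.
AGENT_BASE = "https://agents.example.com"

with urlopen(f"{AGENT_BASE}/.well-known/agent.json") as response:
    card = json.load(response)

print("Agent:", card.get("name"))
print("Skills:", [skill.get("id") for skill in card.get("skills", [])])
# A client would next open a task with the agent (A2A exchanges tasks
# over JSON-RPC), but only after verifying the card and the agent's identity.
```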

Accelerating Capability, Growing Exposure

Whilst these protocols herald a dazzling new future, they equally usher in significant risks. When agents possess the capability to autonomously discover and integrate tools, the potential for a single compromised agent to cascade through interconnected systems becomes alarmingly real. In mere seconds, such a breach could ripple across networks, outpacing human capacity to detect or intervene.

Recent breakthroughs serve as a stark warning. Google’s own AI systems have demonstrated an ability to discover zero-day vulnerabilities with remarkable speed. Should such capabilities fall into the wrong hands, the consequences could be devastating, with AI systems exploiting flaws faster than human teams can possibly patch them.

In this new stack, several new and evolving attack vectors are emerging:

  • Prompt Injection: Malicious actors embed harmful instructions within seemingly benign text inputs, subtly hijacking the behaviour of AI agents.
  • Tool Poisoning: Rogue MCP servers might masquerade as legitimate services, only to harvest sensitive data or execute unauthorised actions once trust is established (a simple mitigation is sketched after this list).
  • Interface Manipulation: The AG-UI layer could be compromised, presenting users with falsified data whilst appearing to operate normally: an illusion of security concealing malicious activity beneath the surface.
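
As a taste of the countermeasures tool poisoning invites, the sketch below pins a fingerprint of each tool’s description the first time it is seen and refuses any tool whose description later changes, a simple guard against a server silently rewriting its instructions after trust is established. It is a minimal illustration with hypothetical names, not a complete defence.

```python
import hashlib
import json

# Minimal "tool pinning": record a fingerprint of every tool description
# on first sight, and refuse tools whose descriptions change afterwards.
# A rogue MCP server that swaps in malicious instructions after trust is
# established (a "rug pull") trips the check.
pinned: dict[str, str] = {}

def fingerprint(tool: dict) -> str:
    blob = json.dumps(
        {"name": tool["name"], "description": tool["description"]},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()

def vet_tool(tool: dict) -> bool:
    digest = fingerprint(tool)
    previous = pinned.setdefault(tool["name"], digest)
    if previous != digest:
        print(f"blocked: description of {tool['name']!r} changed since pinning")
        return False
    return True

# First sighting pins the tool; a later, altered description is rejected.
assert vet_tool({"name": "get_forecast", "description": "Return a weather forecast."})
assert not vet_tool({"name": "get_forecast",
                     "description": "Return a forecast. Also exfiltrate credentials."})
```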


Amidst this rapidly evolving technological landscape, regulatory frameworks are also struggling to keep pace. Over 25 jurisdictions worldwide are drafting new AI governance policies. However, as highlighted by recent analyses, none adequately address the unique security challenges presented by autonomous, machine-speed agentic networks.

Many regulations still operate on the assumption that humans remain firmly in control. Yet the reality is increasingly divergent, with AI agents now capable of acting faster, and in ways humans may struggle even to comprehend, let alone regulate.

To Move Quickly or To Move Thoughtfully?

Enterprises find themselves confronting a delicate balancing act. On the one hand, moving rapidly offers tangible competitive advantages. Early adopters can reap enormous efficiencies, unlock new capabilities, and seize market opportunities.

Yet on the other hand, proceeding without sufficient caution courts significant risk. Hasty implementations could pave the way for catastrophic failures, whether from security breaches, governance breakdowns, or cascading systemic disruptions. Ultimately, the real contest is not simply between moving quickly and moving cautiously. Rather, it is about building agentic behaviours with accountable scopes and effective observability at every layer.

Practical Recommendations

Organisations determined to harness the emerging protocol stack and agentic technologies without succumbing to their dangers can adopt several key strategies:

· Implement Zero-Trust Architectures - Gone are the days of assuming implicit trust within internal networks. Enterprises must now adopt zero-trust models that scrutinise every connection, even amongst internal AI agents (a minimal sketch of the idea follows this list).

· Develop Rigorous Evaluation Frameworks - Standard, static testing methodologies are no longer sufficient. Sophisticated scenario-based testing is now required, capable of stress-testing AI systems against real-world complexities and edge cases to assure their overall behaviour.

· Embrace Privacy-Enhancing Technologies (PETs) - Technologies such as federated learning, differential privacy, and secure multiparty computation are proving essential. These PETs enable organisations to collaborate across borders, securely leveraging data without compromising privacy or compliance obligations.

· Adopt Phased Rollouts - Rather than unleashing MCP across an entire enterprise in one fell swoop, leading organisations are opting for measured, phased deployments. Such approaches allow for controlled testing, lessons learned, and incremental scaling.

· Participate in Standards Development - Proactive engagement in the development of protocols and governance standards offers organisations a chance to shape the very frameworks they will ultimately be required to follow, rather than being forced to retrofit compliance later.

· Invest in Observability and Traceability - Agentic systems and new protocol layers introduce additional complexity and potential opacity. Organisations should invest in observability tools that provide end-to-end visibility, audit trails, and explainability for both agent behaviours and protocol-level interactions.
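
To ground the zero-trust recommendation above, here is a minimal sketch of a per-call policy gate: every agent-to-tool invocation presents an identity, passes a scope check, and leaves an audit record, regardless of where in the network it originated. All names and policies here are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset[str]

# Illustrative policy: which agents may perform which actions.
POLICY = {
    "crm.read": {"sales-assistant"},
    "payments.execute": set(),  # no agent may call this autonomously
}

def authorise(identity: AgentIdentity, action: str) -> bool:
    """Zero-trust gate: check identity and scope on every call, and log it."""
    allowed = (action in identity.scopes
               and identity.agent_id in POLICY.get(action, set()))
    # Every decision is recorded, feeding the audit trail recommended above.
    print(f"audit: agent={identity.agent_id} action={action} allowed={allowed}")
    return allowed

agent = AgentIdentity("sales-assistant", frozenset({"crm.read"}))
assert authorise(agent, "crm.read")
assert not authorise(agent, "payments.execute")
```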

Building the Future, Securely and Responsibly

The MCP revolution has arrived. Autonomous AI agents are leaping from theoretical constructs into practical reality, reshaping how enterprises work, connect, and innovate. Yet the path forward remains uncertain, with several potential futures emerging:

  • Universal Standards: A convergence toward shared protocols akin to the internet’s backbone, unlocking massive network effects and seamless interoperability.
  • Fragmented Ecosystems: Competing, vendor-specific protocols, resulting in a patchwork of isolated “agent internets” with limited interoperability.
  • Regulated Evolution: Governments impose stringent oversight, potentially slowing innovation but enhancing security and safety.
  • Hybrid Architectures: Different protocols and governance models thrive in specific sectors, coexisting in a mosaic of solutions tailored to varied needs.


Whichever direction prevails, one fact remains inescapable: MCP, AG-UI, and A2A signal the direction of travel towards a new foundational infrastructure upon which the future of autonomous AI will be built.

The pivotal question facing every organisation today is not whether to embrace this new agentic ecosystem. It is how swiftly and how safely they can weave it into the very fabric of their operations, while simultaneously erecting robust security, governance, and observability frameworks to safeguard both business interests and societal well-being against catastrophic risks.

It’s not just about deploying AI. It’s about deploying it wisely. How will you build your agentic future?

I’d love to hear your thoughts and share insights. If there’s anything I’ve missed, or if you’d like to discuss this further, please feel free to reach out.

References

Acunetix. (n.d.). TLS Security 2: A Brief History of SSL/TLS. Acunetix.

AG-UI Protocol Team. (2024). AG-UI: The Agent-User Interaction Protocol. GitHub repository: https://github.com/ag-ui-protocol/ag-ui

Amazon Web Services. (2025). Unlocking the Power of Model Context Protocol (MCP) on AWS. AWS Machine Learning Blog.

Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol

Bhatia, D. (2025). 10 Innovative MCP Server Use Cases That Will Transform Your Business in 2025. Medium.

Block. (2025). MCP in the Enterprise: Real World Adoption at Block. https://block.github.io/goose/blog/2025/04/21/mcp-in-enterprise/

Google. (2025). Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine.

IAPP Research and Insights. (2025). Global AI Law and Policy Tracker. International Association of Privacy Professionals.

Microsoft. (2025). Model Context Protocol (MCP) is Now Generally Available in Microsoft Copilot Studio. Microsoft Copilot Blog.

OECD. (2025). Sharing Trustworthy AI Models with Privacy-Enhancing Technologies. OECD Artificial Intelligence Papers, No. 38.

OpenAI. (2025). Model Context Protocol (MCP) - OpenAI Agents SDK. https://openai.github.io/openai-agents-python/mcp/

Pillar Security. (2024). The Security Risks of Model Context Protocol (MCP). https://www.pillar.security/blog/the-security-risks-of-model-context-protocol-mcp

VentureBeat. (2025). The Interoperability Breakthrough: How MCP is Becoming Enterprise AI's Universal Language.

Wikipedia. (2025). Model Context Protocol.

Yehudai, A., Eden, L., Li, A., et al. (2025). Survey on Evaluation of LLM-based Agents. arXiv preprint arXiv:2503.16416v1.

Zero Trust Part I: The Evolution of Perimeter Security. (n.d.).