The Emerging Opportunity in AI Agent Infrastructure
AI agents are quickly becoming the next big thing in tech. After years of hype around chatbots and automation, we’re now seeing autonomous AI “agents” tackle complex tasks on their own. With two investments made in this emerging space in April, we think AI agent infrastructure is a promising frontier worth watching. Founders and investors are genuinely excited because the pieces are finally coming together to make AI agents truly useful.
What Are AI Agents, and Why Now?
AI agents are autonomous software actors, powered by AI models, that can plan and execute multi-step tasks. Unlike a simple chatbot that just responds to queries or a hard-coded script that follows a fixed procedure, an AI agent can figure out what needs to be done and then do it. Give an agent a high-level goal (“Help me prepare a sales report” or “Automate my scheduling”), and it can break the task into sub-tasks, call on the necessary tools or data, and carry out the steps – all with minimal human micromanagement. This matters now because recent advances in AI make it possible – today’s LLMs have better reasoning abilities, and new frameworks let them safely connect with the outside world.
While there was early interest in AI agent infrastructure, the ecosystem was fragmented and lacked a clear framework. Developers experimented with rudimentary tool use and orchestration, but the building blocks weren’t standardized or robust enough to support scale. This year feels fundamentally different. With more defined architectural patterns, like MCP and modern orchestration frameworks, the foundation is taking shape. Many are calling 2025 the “big year for agents,” yet the real traction has been limited to a few verticals, mainly coding, legal, customer service, and sales. Beyond those, most agent applications remain aspirational.
That said, the verticals where agents have landed are already generating meaningful revenue. As usage grows, so does the need for stronger infrastructure to support it. When agents are running thousands of tasks per day (accessing files, updating CRMs, generating documents), the expectations around uptime, security, observability, and interoperability rise sharply. This is where the production layer of agent infrastructure becomes critical. Just like in past platform shifts, the tools that ensure performance, reliability, and trust at scale are likely to become the backbone of the ecosystem, as well as a key opportunity for investment.
Inside the AI Agent Infrastructure Stack
Building useful AI agents requires a whole new infrastructure stack. This stack has a few key layers that work together to let agents sense, think, and act:
The “Model Context Protocol” (MCP) – At the core is a new open standard called MCP, which you can think of as a universal adapter for AI. Introduced by Anthropic in late 2024, MCP is essentially a common language that lets AI agents connect to external tools, databases, and services in a plug-and-play way (Anthropic likens it to a “USB-C for AI”). Instead of coding custom integrations for every app or API, developers can use MCP as a unified interface. This means an agent can talk to many systems (from Salesforce to Slack to your local file system) through a standardized protocol. In practice, MCP allows an AI agent to ask, “What tools and data can I access?” and get a list of available actions (for example, read email, search customer records, or open a webpage). It brings interoperability and consistency to agent development, which is crucial for scaling these systems.
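To make that discovery step concrete, here is an illustrative sketch of the JSON-RPC message shapes involved when an agent asks “what tools and data can I access?” via an MCP-style `tools/list` exchange. The tool names and schemas below are invented for this example, not taken from any real MCP server.

```python
import json

def make_tools_list_request(request_id: int) -> dict:
    """Build an MCP-style 'tools/list' request (JSON-RPC 2.0 framing)."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def make_tools_list_response(request_id: int) -> dict:
    """A server's reply advertising the actions an agent may invoke.
    The tools here (CRM search, email read) are illustrative only."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "search_customer_records",
                    "description": "Look up a customer in the CRM by name.",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
                {
                    "name": "read_email",
                    "description": "Fetch the latest messages from an inbox.",
                    "inputSchema": {"type": "object", "properties": {}},
                },
            ]
        },
    }

if __name__ == "__main__":
    req = make_tools_list_request(1)
    resp = make_tools_list_response(req["id"])
    print(json.dumps(req))
    print([t["name"] for t in resp["result"]["tools"]])
```

The point of the standard is exactly this shape: any client that speaks the protocol can discover any server’s capabilities without a custom integration.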
Tools and Actions (Browser-Based and Computer-Based) – With the protocol in place, an agent needs actual tools to use. Think of these as the “hands and eyes” of an AI agent. Some tools are browser-based, allowing an agent to navigate the web or a SaaS app just like a person would. For instance, an agent might use a headless browser to log into a website, click buttons, and scrape information – effectively turning any website into an API for the agent. Other tools are computer-based, letting agents interact with the operating system or local environment: reading and writing files, executing code, querying a database, or controlling an application. Through MCP connectors (often called MCP servers), a wide range of these capabilities is becoming available. This layer is what gives agents real-world powers – the ability to actually do things in your digital world.
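As a rough illustration of this “hands” layer, here is a toy tool registry that exposes computer-based actions (file read and write) under names an agent could invoke. The registry, the tool names, and the `invoke` helper are all hypothetical sketches, not a real MCP connector.

```python
import os
import tempfile

# Hypothetical registry mapping tool names to Python callables,
# the kind of dispatch an agent's connector layer performs.
TOOLS = {}

def tool(name):
    """Decorator: register a callable under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("write_file")
def write_file(path: str, text: str) -> str:
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return f"wrote {len(text)} bytes"

@tool("read_file")
def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

def invoke(name: str, **kwargs):
    """Dispatch an agent's tool call by name with keyword arguments."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    invoke("write_file", path=path, text="hello agent")
    print(invoke("read_file", path=path))  # prints "hello agent"
```

A real connector would add authentication, sandboxing, and schema validation around each call; the dispatch-by-name pattern is the part that carries over.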
Orchestration and Intelligence Layer – Above the tools, we need an orchestration layer to manage the agent’s reasoning and workflow. This is the “brain” that decides when to use which tool, how to sequence steps, and how to handle decisions or errors. Modern agent orchestration frameworks use the AI model’s reasoning to plan out actions, but they also provide structure: keeping track of state, handling memory of past steps, and ensuring the agent stays on track. For example, if an agent is tasked with planning a conference, the orchestration layer would help it break the job into parts (find venues, send invites, track RSVPs) and coordinate each step, possibly spinning up sub-agents for specialized tasks. Beyond the LLMs themselves, we’re seeing new platforms emerge here – from startups building “agent ops” tooling, to cloud platforms from Microsoft, Google, and Amazon offering managed agent services. They help developers deploy and monitor agents at scale, with features like evaluation tests (to keep agents from going off the rails), logging and debugging, and security guardrails.
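The plan-and-execute loop described above can be sketched in miniature. In this toy version a stub planner stands in for the LLM and returns a fixed two-step plan, so only the control flow is visible: plan, execute each step, record state, and keep going on errors. Every name here is illustrative, not from any real framework.

```python
def stub_planner(goal: str) -> list:
    """Stand-in for LLM planning: decompose a goal into tool calls."""
    return [
        {"tool": "find_venues", "args": {"city": "Austin"}},
        {"tool": "send_invites", "args": {"count": 3}},
    ]

# Illustrative tools the orchestrator can dispatch to.
TOOLS = {
    "find_venues": lambda city: [f"{city} Convention Center"],
    "send_invites": lambda count: f"{count} invites sent",
}

def run_agent(goal: str) -> dict:
    """Execute the plan step by step, logging state after each step."""
    state = {"goal": goal, "log": []}
    for step in stub_planner(goal):
        fn = TOOLS[step["tool"]]
        try:
            result = fn(**step["args"])
        except Exception as exc:  # record the error and continue
            result = f"error: {exc}"
        state["log"].append({"step": step["tool"], "result": result})
    return state

if __name__ == "__main__":
    final = run_agent("plan a conference")
    for entry in final["log"]:
        print(entry["step"], "->", entry["result"])
```

Production frameworks layer much more onto this loop (replanning, memory, sub-agents, evaluations), but the state-tracking execution loop is the common skeleton.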
Big Players Are On Board
One reason this space is heating up is that big tech companies are investing heavily to support AI agents. Anthropic kicked it off by open-sourcing the MCP standard, signaling that a common ecosystem benefits everyone. OpenAI soon after adopted MCP in its own developer stack, meaning agents built on OpenAI’s platform can readily use MCP-compatible tools. Microsoft’s AI platform has followed suit – its Copilot Studio now lets you plug in MCP connectors so your Copilot agents can securely tap into company data and services with just a few clicks. Google is exploring complementary protocols (like a system for agents to talk to each other directly) and will likely support these standards in its cloud offerings. In short, there’s a convergence happening: fierce competitors are aligning on open interfaces for AI agents. This validation from industry leaders not only accelerates development but also gives startups a stable foundation to build on. We suddenly have the makings of a rich agent ecosystem, with a growing library of tools and best practices that everyone can share.
An Investment Thesis in the AI Wave
From an investor’s perspective, AI agent infrastructure is shaping up to be a compelling investment thesis in the current wave of AI innovation. Why? We’re at a classic platform shift. Just as the rise of mobile apps or cloud computing created a need for new infrastructure (app stores, cloud DevOps tools, etc.), the rise of AI agents is creating demand for specialized infrastructure and “picks and shovels” services. Companies that provide the building blocks – whether it’s robust agent orchestration (“the Vercel for agents”), security and authentication layers for agents, or domain-specific MCP toolkits – stand to gain as agent-based applications proliferate. Moreover, AI agents have the potential to boost productivity dramatically, which means businesses will eagerly spend on technologies that make agents more reliable and powerful.
A notable signal of this momentum is the recent $75 million funding round secured by Chinese AI startup Butterfly Effect, creator of the autonomous agent Manus AI, led by the iconic venture firm Benchmark. The investment quintupled Manus’s valuation to approximately $500 million in a matter of months. This is a deeply contrarian bet by a leading US VC firm, given both the macroeconomic uncertainty and the complex geopolitical tension between the US and China. And unlike HeyGen, the firm’s previous investment in a leading Chinese-founded AI company in the avatar space, Manus will continue operating outside of the U.S. Benchmark has a history of contrarian bets, and they’ve often been right; we shouldn’t underestimate the insight the firm may have seen before the rest of the market. The bet on Manus suggests Benchmark believes deeply in the rise of agentic computing and sees Manus as a frontier player in this emerging wave, regardless of where the company is domiciled. It’s a strong signal: in a bifurcating global AI landscape, great teams building real technology can still attract bold capital if the vision is compelling enough. The next generation of category-defining tech companies may be founded outside the U.S., and the structure of the global technology order may be on the cusp of being redefined.
We’re still early, and many pieces of this stack are evolving, but that early-stage mix of rapid progress and open questions is exactly where smart money sees opportunity. In the coming months and years, as agents move from experiment to mainstream deployment, the infrastructure enabling them could become the backbone of a new era of software.