The Control Layer Of AI: Why Agentic AI Stacks Are The Next Big Thing
During the recently concluded NVIDIA GTC 2026 event, the chip giant announced NeMoClaw. Built on OpenClaw, the blockbuster agentic AI framework for building AI agents that run on personal devices, NeMoClaw adds a security and privacy layer to OpenClaw agents, or ‘Claws’, making them suitable for enterprise use.
While the jury is still out on whether NeMoClaw can deliver on its promise of enterprise-grade AI agents, the launch marks NVIDIA’s further move into the agentic AI stack as it looks to claw deeper into AI software and agent development. Notably, NVIDIA also introduced Nemotron, a family of open models for building specialised agentic AI systems.
NVIDIA joins companies like Google, Amazon, Microsoft, and Salesforce in building what are referred to as AI agent frameworks: tools that help in the creation, management, and orchestration of AI agents. The race is somewhat reminiscent of the early days of cloud computing, when firms competed to define the dominant platform layer that developers would build on.
The Agentic Transformation Wave
Today, enterprises can build AI agents from scratch using programming languages such as Python and JavaScript, but that would require significant time and effort, especially when building for scale.
This is where agentic frameworks come in. With built-in features and functions, they let teams build agents without starting from scratch, which is why AI agent frameworks are increasingly positioned as the “operating systems” for autonomous software.
Popular open-source AI agent frameworks include LangChain, LlamaIndex, and LangGraph for core development. Big tech and enterprise tech majors — Microsoft, Google, Amazon, OpenAI, and Salesforce — currently offer a slew of AI agent frameworks and platforms tightly integrated with their cloud and AI ecosystems.
These include tools like AutoGen, Vertex AI Agent Builder, Agents for Bedrock, and Agentforce, which have become the building blocks that other businesses are leveraging to enable the agentic transformation.
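To see why frameworks save so much effort, consider what even a bare-bones agent requires when written from scratch: a planning step, a tool registry, and a loop that feeds results back to the planner. The sketch below is a minimal, hypothetical version in plain Python; every function name is illustrative, and the “planner” is a stand-in for a real model call. Frameworks like the ones above bundle this plumbing, plus memory, retries, and observability, out of the box.

```python
# Minimal, hypothetical tool-calling agent loop. A "planner" (a stand-in
# for an LLM call) picks a tool, the runtime executes it, and the result
# is fed back until the planner decides to finish. Illustrative only.

def get_weather(city: str) -> str:
    # Stand-in for a real API call.
    return f"Sunny in {city}"

def calculator(expression: str) -> str:
    # Toy arithmetic tool; never eval untrusted input in real code.
    return str(eval(expression))

TOOLS = {"get_weather": get_weather, "calculator": calculator}

def fake_planner(task: str, history: list) -> dict:
    # Stand-in for a model call that returns a structured "next step".
    if not history:
        return {"action": "calculator", "input": "2 + 2"}
    return {"action": "finish", "input": f"Result: {history[-1]}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        step = fake_planner(task, history)
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append(observation)  # feed the tool output back in
    return "gave up"

print(run_agent("What is 2 + 2?"))  # → Result: 4
```

Everything a production framework adds, such as structured outputs, guardrails, and multi-agent handoffs, layers on top of a loop like this one.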

Real enterprise demand, a supply-side push from technology vendors, and a genuine market shift mark the rush among Big Tech firms to build the next big thing in AI agent frameworks, opines Ashvin Vellody, Partner, Deloitte India.
Enterprises’ access to powerful frontier models and the falling cost of inference have acted as incentives, he added. “Companies now recognise that value will not come from the model alone. It will come from making the technology easier to use for a much larger pool of developers and builders who can create product-grade solutions. Enterprises want a practical way to embed these capabilities into business workflows.”
Indian Players Crowd Orchestration, Application Layers
For companies, building agent frameworks is less about tooling and more about owning the next platform layer. Much like APIs and cloud platforms before them, these frameworks can lock developers into ecosystems, shape usage patterns, and unlock monetisation levers.
In India, this shift is already underway.
In March, fintech startup Razorpay introduced its Agentic AI Studio in partnership with Anthropic’s Claude model. The platform is being tested with partners such as Swiggy and Zomato, enabling AI agents to place orders and complete payments. It has also tied up with companies like PVR Inox, BigBasket, and LinkedIn.
Unlike foundational frameworks such as Google’s Vertex AI or OpenAI’s Agents SDK, which are built on their own underlying models, Razorpay operates at the orchestration and application layers, using Claude as the base model.
At the application layer, Agent Studio functions as both a marketplace and a builder platform, allowing businesses to deploy purpose-built AI agents for specific use cases. At the orchestration layer, it enables businesses to define multi-step workflows in plain English, connect agents to systems like Shopify, WhatsApp, Tally, QuickBooks, and Slack, and trigger actions based on real-time payment events.
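The orchestration pattern described above, multi-step workflows triggered by real-time payment events, can be sketched roughly as follows. This is not Razorpay’s actual API; every class, connector, and event name below is a hypothetical stand-in for how event-driven agent workflows are commonly wired.

```python
# Hypothetical sketch of event-driven orchestration: a payment event fires,
# and a declared sequence of steps runs against pluggable "connectors".
# None of these names correspond to a real product API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workflow:
    name: str
    steps: List[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, event: dict) -> dict:
        ctx = dict(event)
        for step in self.steps:
            ctx = step(ctx)  # each step enriches the shared context
        return ctx

# Toy connectors standing in for systems like Shopify or WhatsApp.
def update_inventory(ctx: dict) -> dict:
    ctx["inventory_updated"] = True
    return ctx

def send_receipt(ctx: dict) -> dict:
    ctx["message"] = f"Receipt sent for order {ctx['order_id']}"
    return ctx

# Event bus: payment events trigger any workflows registered for them.
registry: Dict[str, List[Workflow]] = {}

def on_event(kind: str, wf: Workflow) -> None:
    registry.setdefault(kind, []).append(wf)

def emit(kind: str, event: dict) -> List[dict]:
    return [wf.run(event) for wf in registry.get(kind, [])]

on_event("payment.captured",
         Workflow("post-payment", [update_inventory, send_receipt]))

results = emit("payment.captured", {"order_id": "ord_123", "amount": 499})
print(results[0]["message"])  # → Receipt sent for order ord_123
```

The “plain English” authoring the article describes would sit one layer above this: an agent translates a written instruction into a step list like the one registered here.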
“We deliberately don’t build foundational models. That’s Anthropic’s and OpenAI’s domain,” said Khilan Haria, chief product officer at Razorpay. “Instead, we focus on making that intelligence actionable within real-world commerce contexts, at scale.”
This positioning is reflective of a broader trend. Most Indian startups in the agentic AI space are focusing on the orchestration and application layers rather than building foundational models.

Voice AI startup Gnani.ai, for instance, recently launched Inya, a platform designed to help enterprises rapidly build and deploy voice agents. “It is a multi-agent platform that gives customers access to prebuilt workflows along with all the necessary configurations to quickly develop AI agents,” said Ganesh Gopalan, cofounder and CEO of Gnani.
The startup’s Inya platform also includes an orchestration layer that manages interactions at scale while minimising latency. “In many cases, partners and customers have been able to build and deploy voice AI agents within 30 minutes,” the CEO added.
Similarly, Bengaluru-based Bolna AI operates in the orchestration layer, enabling enterprises to deploy multilingual voice agents across different call scenarios. Noida-based Squadstack primarily operates at the application layer, while also building orchestration capabilities for production-grade deployments focused on revenue and customer experience workflows.
The concentration of Indian firms in these layers can be attributed to relatively lower entry barriers and faster paths to monetisation.
Experts believe that while foundational models and frameworks remain important, long-term value will be created at higher layers of the stack.
“Over time, models will become more available, and frameworks more standardised,” said Vellody. “The real differentiation will come from how effectively organisations use them to drive business outcomes.”
Answering Monetisation Questions
Ultimately, the race to build AI agent frameworks is less about enabling developers and more about owning the monetisation layer of AI. Much like cloud platforms transformed infrastructure into a recurring revenue business, companies such as Microsoft and Salesforce are positioning their frameworks as the gateway through which enterprise AI is built, deployed, and scaled.
This creates multiple revenue streams. Vendors can charge subscription fees for platform access, usage-based fees tied to agent activity, or outcome-linked pricing where they take a share of transactions executed by agents.
In many cases, frameworks themselves may not carry a direct cost. Tools like AutoGen or CrewAI, for instance, are often free to use. However, companies monetise the underlying infrastructure, models, or applications built on top of them.
Pricing models vary widely. Bolna AI, for example, charges customers on a per-minute basis, with entry-level pricing starting as low as ₹2.5 per minute. Razorpay’s Agent Studio, currently in early access, offers a 30-day free trial, after which pricing depends on the specific agent and use case, ranging from subscription fees to per-action or outcome-based charges.
“While frameworks can be monetised, the real monetisation will likely be indirect,” said Arun Chandrasekaran, VP analyst at Gartner. “Revenue will ultimately come through models, infrastructure, and applications.”
For markets like India, cost efficiency, multilingual capabilities, and pre-built agents are expected to play a critical role. Chandrasekaran noted that lightweight, modular, and open frameworks are likely to find greater traction than heavy enterprise stacks.
Ultimately, as enterprises move from experimentation to production, the real value will accrue to those who control how AI agents are orchestrated, integrated, and monetised at scale. Much like cloud and mobile before it, today’s fragmented landscape of AI agent frameworks is likely to consolidate over time into a handful of dominant platforms where control over context and governance will determine the winners.
The post The Control Layer Of AI: Why Agentic AI Stacks Are The Next Big Thing appeared first on Inc42 Media.