We are working on some innovative learning projects to help leaders understand what agentic AI could mean for their roles and their organisations, and to help with readiness for adoption; so I have been reading about overcoming barriers and blockers relating to organisational and technical architecture.
The diffusion and practical adoption of revolutionary general purpose technologies like electricity, the internet, and arguably AI, are often slowed down by the need for infrastructural adaptation. Plumbing, pipes, interoperability and architectural changes are typical limiting factors. It feels like this is where we are with agentic AI to some extent – the concept has exploded onto the scene, but to realise its promise beyond POCs and pilots, we have to take a step back and resolve some long-ignored issues around how our organisations are connected.
I think the arguments made by Arvind Narayanan and Sayash Kapoor in their widely-quoted paper AI as Normal Technology, published earlier this year, are broadly correct: at the point where enterprise AI becomes less novel and more part of the organisational fabric, we will have a better idea of what it can really do.
Small, specialist AI agents don’t need omnipotent AGI to be useful. As in the natural world, ecosystems of small agents in co-opetition are likely to evolve more rapidly than a single large LLM, and to produce results that are safer, more transparent, and more adaptive. This is partly why learning-capable, workflow-integrated systems that adapt to a firm’s processes were cited – alongside change management and learning – as key success factors among the 5% of firms that reported successful AI adoption, according to the recent MIT paper on the state of AI in business.
But it will take time to transform manual, process-driven ways of working into composable service modules; vertically-divided silos into connected networks; and, crucially, top-down management culture and systems into enablement and coordination systems that guide, navigate and support centaur teams or so-called high-agency generalists.
Designing and coding individual agents is actually the easiest challenge to overcome, and can be done today with existing tech. But the plumbing, the knowledge and data flows, the interoperability and the control and oversight systems are harder problems to solve.

What will agentic AI mean for enterprise software and architecture?
Constellation Research recently summarised how they see agentic AI disrupting enterprise software silos, and the shift they describe from ‘bought’ monoliths to ‘built’ agents and services seems like it is already underway:
Enterprise software will be reinvented with headless platforms, AI agents and enterprises that will aim to build their own systems. But a lot has to happen for enterprise software and SaaS incumbents to be disrupted. The storyline that’s emerging is one that revolves around enterprise software disruption due to AI agents that can traverse multiple systems and data silos. Today, enterprise software resides in buckets of acronyms–ERP, CRM, HCM–with multiple silos and data stores. AI agents threaten to break down that hierarchy.
In particular, Microsoft is betting heavily on agentic AI replacing big lumbering enterprise SaaS platforms in the next five years, as this piece on New Stack outlines:
Microsoft’s vision of agent-native business platforms represents either the most significant transformation in enterprise software since the advent of the internet, or an overly optimistic prediction that underestimates the inertia of enterprise IT.
The question is no longer whether AI will transform business applications, but how quickly and completely will this transformation occur. [Microsoft corporate VP for business apps and platforms Charles] Lamanna’s prediction that by 2030, agent-based systems will be the prevailing pattern may be optimistic, but the direction seems inevitable.
Tim O’Reilly recently shared a thoughtful piece looking back at the history of software development to better predict what comes next, writing about how the modularity of UNIX combined with version control helped define the modern software development stack, leading to GitHub, which in turn was a key enabler for AI changing the way we code.
I anticipate this approach of modularity, version control and collaborative shared repositories will also be applied to other forms of language and codification, such as rules, guidelines, process designs, cultural norms, and other key elements of the context within which we and AI agents operate, especially in non-technical enterprise domains.
Tim also talked about some of the dynamics of group-based collaboration and pondered how distributed AI could evolve this kind of architecture of participation.
In the enterprise world, there have long been products explicitly serving the needs of teams (i.e., groups), from Lotus Notes through SharePoint, Slack, and Microsoft Teams. 20 years ago, Google Docs kicked off a revolution that turned document creation into a powerful kind of group collaboration tool. Git and GitHub are also a powerful form of groupware, one so fundamental that software development as we know it could not operate without it. But so far, AI model and application developers largely seem to have ignored the needs of groups, despite their obvious importance. As Claire Vo put it to me in one recent conversation, “AI coding is still largely a single-player game.”
These ideas hark back to the early days of social computing, with its notions of small things loosely joined and Tim’s own writing on creating an architecture of participation. Perhaps these ideas are useful guides to designing the agentic AI architectures we will need to make the most of our new supertools inside the enterprise.
Vertical agents and World Foundational Models
The need for shared knowledge and collaborative architectures is just as important in the world of physical AI and robotics as it is in the virtual realm.
A comprehensive report released today by AI Supremacy on progress in robotics highlights the rapid evolution of World Foundational Models that give robots knowledge of the physical realm that surrounds them, and its author Diana Wolf Torres is bullish on AI’s rapid evolution in this field.
Jensen Huang said the ChatGPT moment for robotics was “just around the corner.”
He was wrong.
It wasn’t around the corner. We’re living in it…
This is an area of work that will benefit greatly from the same kind of source and version control, memory and collaboration that O’Reilly talked about in relation to software development more broadly. Most robots are highly specialised and require limited but very detailed world models, albeit with common features, as we see with things like physics models in world building for games.
But the same will also be true for most agents in a virtual agentic AI architecture inside our organisations. They might share general purpose LLMs, but overall there will be more vertical agents than general ones.
This piece by Stephen Bigelow provides a good overview and summary of the vertical agents that are emerging in fields such as health, finance, manufacturing and customer support, and identifies six characteristics they tend to have in common: expertise, optimisation, context and reasoning, native compliance, automation, and adaptability.
Small agents loosely joined within an architecture of collaboration and participation sounds a lot more realistic than waiting for AGI and asking it to run the company.
Interestingly, as the WSJ covered this weekend, China’s industrial policy is focused on developing lots of small, practical applications of AI – in many cases what we would call vertical agents – rather than chasing the dream of a universally applicable AGI.
… if AGI remains a distant dream, as more people in Silicon Valley now believe, China will be in position to steal a march on its global rival in wringing the most out of AI in its current form, and spread its applications worldwide. Already in China, domestic AI models similar to the one that powers ChatGPT are being used, with state approval, to grade high-school entrance exams, improve weather forecasts, dispatch police and advise farmers on crop rotation, say state media and government reports.
This focus, coupled with the rapid expansion of a hyper-competitive industrial ecosystem – as we have seen play out in electric vehicles and solar power – could mean that Chinese firms will be quicker to identify the architecture and interoperability challenges that agentic AI will need to solve in order to scale, and take a lead in collaborative AI architectures. Meanwhile, in the United States, there is so much riding on the continued inflation of the AI-driven stock market bubble that leading firms are committed to a strategy of go big or go home, which is not necessarily the best path for innovation and, more importantly, diffusion and adoption.
Some architectural considerations for agentic AI
So what should leaders focus on in terms of organisational architectures and AI readiness?
On the technical side, we have Model Context Protocol, which is a good step towards agent interoperability, and other elements of the coordination architecture are starting to emerge.
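To make that interoperability idea concrete, here is a minimal sketch of the tool-registry pattern that protocols like MCP standardise: an agent host discovers what a server can do, then calls a tool by name with structured arguments. Everything here (the `ToolRegistry` class, the invoice tool) is hypothetical illustration of the pattern, not the actual MCP SDK or wire format.

```python
from typing import Any, Callable

class ToolRegistry:
    """A registry an agent host can query for available tools, then invoke."""

    def __init__(self):
        self._tools: dict[str, tuple[Callable[..., Any], str]] = {}

    def register(self, name: str, description: str):
        # Decorator that exposes a plain function as a named, described tool.
        def decorator(fn):
            self._tools[name] = (fn, description)
            return fn
        return decorator

    def list_tools(self) -> dict[str, str]:
        # Discovery step: what can this 'server' do?
        return {name: desc for name, (_, desc) in self._tools.items()}

    def call(self, name: str, **kwargs) -> Any:
        # Invocation step: call a tool by name with structured arguments.
        fn, _ = self._tools[name]
        return fn(**kwargs)

registry = ToolRegistry()

@registry.register("get_invoice_status", "Look up an invoice by ID")
def get_invoice_status(invoice_id: str) -> dict:
    # Stand-in for a real ERP lookup behind the tool interface.
    return {"invoice_id": invoice_id, "status": "paid"}

print(registry.list_tools())
print(registry.call("get_invoice_status", invoice_id="INV-042"))
```

The point of the pattern is that the agent never touches the ERP directly; it sees only a discoverable, named capability, which is what makes tools swappable across systems and vendors.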
But we also need to think about how to map the organisation’s process landscape and identify those simple process steps that are best suited to automation, and re-bundle them into a modular system to support re-use and combination.
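A rough sketch of what that re-bundling can look like in code: small, single-purpose process steps composed into pipelines, so the same steps can be re-used and recombined across workflows. The step names (`validate`, `enrich`, `route`) and the onboarding/renewal flows are invented for illustration.

```python
from functools import reduce
from typing import Callable

# A process step takes a record and returns an updated record.
Step = Callable[[dict], dict]

def compose(*steps: Step) -> Step:
    """Chain small process steps into a reusable pipeline."""
    return lambda record: reduce(lambda acc, step: step(acc), steps, record)

def validate(record: dict) -> dict:
    if "customer" not in record:
        raise ValueError("missing customer")
    return record

def enrich(record: dict) -> dict:
    # Stand-in for a data lookup against a reference system.
    return {**record, "region": "EMEA"}

def route(record: dict) -> dict:
    queue = "priority" if record.get("value", 0) > 1000 else "standard"
    return {**record, "queue": queue}

# The same modules, re-bundled into two different workflows:
onboarding = compose(validate, enrich, route)
renewal = compose(validate, route)

print(onboarding({"customer": "Acme", "value": 5000}))
# → {'customer': 'Acme', 'value': 5000, 'region': 'EMEA', 'queue': 'priority'}
```

Once steps are this granular, deciding which of them to hand to an agent (and which to keep deterministic) becomes a per-step decision rather than an all-or-nothing platform bet.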
The idea of service composability is therefore an important one for AI readiness, as Holly Hall from the MACH Alliance argues:
The correlation between companies leveraging composable architectures and those that are successfully deploying AI solutions is undeniable. In a 2025 research report from the MACH Alliance, it was observed that companies with a mature MACH architecture [Composable, Connected, Incremental, Open, and Autonomous] are twice as likely to successfully deploy AI (77% vs. 36%).
We also need a strong knowledge and data architecture to inform agentic operations and underpin the context in which agents work. Real-time data platforms and flows are a key part of this, but so too is knowledge management.
We are noticing a shift away from RAG as the key method of contextualising AI agents and towards a broader approach of using knowledge bases and knowledge graphs. But for many firms that are meeting- and management-centric, a lot of useful context is not accessible in a clear written form, so they are at a disadvantage.
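As a toy illustration of the knowledge-graph approach, an agent can pull only the facts relevant to its task from a simple triple store, rather than retrieving chunks of raw documents. The entities and predicates below are made up for the example; real deployments would use a proper graph database and ontology.

```python
class KnowledgeGraph:
    """A minimal subject-predicate-object triple store."""

    def __init__(self):
        self.triples: set[tuple[str, str, str]] = set()

    def add(self, s: str, p: str, o: str):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard on that position.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

kg = KnowledgeGraph()
kg.add("refund_policy", "applies_to", "EU_customers")
kg.add("refund_policy", "max_days", "30")
kg.add("escalation", "owner", "support_team")

# An agent handling a refund query assembles context from relevant facts only:
context = kg.query(s="refund_policy")
print(context)
```

The advantage over plain RAG is that facts are explicit and queryable, so the context handed to the agent is structured and auditable rather than a similarity-ranked bag of text.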
This piece on powering enterprise AI with a knowledge base by Raghunandan Gupta, which I spotted last week, is a good basic introduction to the topic.
Plus, as we have written about in relation to context management for AI, memory (possibly with the addition of version control and rollbacks) is a big challenge for LLMs and an important factor in getting the best out of them. In principle, there should be relatively easy, if storage-intensive, solutions for enterprise AI, and these could be important for transparency and accountability as well as performance.
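A minimal sketch of what versioned agent memory with rollback might look like, assuming we are happy to trade storage for auditability by snapshotting state on every write. All names here are hypothetical.

```python
import copy

class VersionedMemory:
    """Agent memory with per-write snapshots and rollback."""

    def __init__(self):
        self.state: dict = {}
        self.history: list[dict] = []

    def write(self, key, value) -> int:
        # Snapshot the state before changing it; the version id doubles
        # as an audit-trail reference for the change.
        self.history.append(copy.deepcopy(self.state))
        self.state[key] = value
        return len(self.history) - 1

    def rollback(self, version: int):
        # Restore the state as it was just before the given write.
        self.state = copy.deepcopy(self.history[version])
        del self.history[version:]

mem = VersionedMemory()
mem.write("customer_tier", "gold")
bad = mem.write("customer_tier", "platinum")  # suppose this was a bad update
mem.rollback(bad)                             # undo it
print(mem.state)  # → {'customer_tier': 'gold'}
```

Storage-heavy, as noted above, but it makes every change to an agent's memory inspectable and reversible, which matters for accountability as much as for performance.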
One interesting concept sketch I saw recently on Reddit posited an event-driven gateway that ingests information into an immutable log and then runs it through agents that categorise and learn from it – what the OP called a ‘git for AI memory’.
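A hash-chained, append-only log is one simple way to realise that ‘git for AI memory’ idea: each entry commits to the hash of its predecessor, so any tampering with history is detectable. This is a sketch of the general technique, not the Reddit poster’s actual design.

```python
import hashlib
import json

class EventLog:
    """Append-only log where each entry is chained to the previous one."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Re-walk the chain: every entry must reference its predecessor
        # and hash to the value it claims.
        prev = self.GENESIS
        for e in self.entries:
            body = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = EventLog()
log.append({"type": "email_received", "subject": "Q3 forecast"})
log.append({"type": "categorised", "label": "finance"})
print(log.verify())                            # → True
log.entries[0]["event"]["label"] = "tampered"  # rewrite history...
print(log.verify())                            # → False
```

Downstream agents can then categorise and learn from the log without being able to silently rewrite what was ingested, which is exactly the transparency property the concept was after.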
Creating an architecture of collaboration for agents, people and machines is a fascinating multifaceted challenge, and one I expect we will return to; but for now here are some other pointers from previous editions: