In a week when more than one leading LLM seems to be creaking at the seams, suffering capacity issues or even going backwards in terms of output utility, it is worth restating why we focus on enterprise AI over consumer AI, and why we think it will ultimately produce greater economic returns.

Demis Hassabis this week made the bullish case for AI transformation in an interview with the Guardian newspaper:

“I think we’ll get this right. I think also, humans are infinitely adaptable. I mean, look where we are today. Our brains were evolved for a hunter-gatherer lifestyle and we’re in modern civilisation. The difference here is, it’s going to be 10 times bigger than the Industrial Revolution, and maybe 10 times faster.” The Industrial Revolution was not plain sailing for everyone, he admits, “but we wouldn’t wish it hadn’t happened. Obviously, we should try to minimise that disruption, but there is going to be change – hopefully for the better.”

Will AI really be 10x bigger and faster than the Industrial Revolution? Perhaps not; but it could have a major impact on our societies, economies and organisations.

It’s the last bit – organisations – that we are most interested in, because the industrial economy was kind of grafted onto a basically feudal structure of owners/managers versus workers/citizens. It led to great gains, but it also led to grinding urban poverty and environmental degradation. AI and automation hold the promise of finally breaking this mould and creating smart, human-centric organisations where talent, ideas and hard work are rewarded, and where we can work together without the equivalent of a caste system.

Andrew McAfee considers what kinds of organisations will thrive in this new era, and concludes it will be the same kind of “geek” organisations that emerged victorious from the dot-com boom:

The pattern from history is clear: General-purpose technologies – those rare ones with impact broad and deep enough to accelerate the normal march of economic progress – don’t maintain the status quo. They disrupt it. They create new winners and losers, enable the rise of new superstars and cause some (many? most?) established giants to stumble and fall. One well-documented example: the American manufacturers at the top of their industries when factories started electrifying early in the 20th century were usually not the ones on top once electrification had wrought its changes (conveyor belts, assembly lines, overhead cranes, individually-motorized machines, and so on)…

In The Geek Way and frequently here I’ve written about companies like Netflix, Anduril, Stripe, SpaceX, and Tesla, which are doing to the incumbents in their industries about what Alexander the Great’s army did to all the empires it faced.

Sure. Some of the big tech firms will thrive. Some enterprise software firms will adapt and continue to play a role in organisational life. And there will be whole new generations of startups and scale-ups that will find valuable niches using AI to get things done that were not possible – or perhaps just not economical – before.

But I don’t think that captures the real size of the opportunity. Anybody with an idea, a mission or a handful of insights that can become algorithms will be able to create their own organisational system – their own little world – and run with small team structures to scale a business without any of the wasteful, performative accoutrements of a Twentieth Century corporation. And every small product company, workshop or service business will be able to strip out costs thanks to the commoditisation of automation technologies and lower-cost robotics.

Like modular transport and supply chains, electrification, computing and the internet before it, AI seems to have the potential to reduce the cost of doing business and solving problems across almost all sectors and societies.


Cui Bono?

To whom will these benefits accrue? Big tech firms, a new wave of geeky firms, or the whole economy? In the United States, eye-watering sums are being pumped into data centres, compute and model training in the hope that one of the leading AI firms today will be the winner who takes it all. By contrast, in China, open models are catching up fast and there is a sense that the value will be in the application layer, not the LLMs themselves.

If AI is to be so transformative, then it is ultimately good for the global economy for it to be available to all, and not wholly owned by monopolists or oligarchs and their investors.

Digital sovereignty is often framed as every country or region needing to build their own tools, but as this thoughtful piece from Public Digital argues, there are important questions that organisations of all kinds – from small firms to national governments – should be asking themselves about what they control and what they depend on:

The goal of digital sovereignty is not an impossible ideal of “digital independence,” a state of complete self-sufficiency that would be incredibly expensive and cut an organisation off from global innovation. As we have said elsewhere, sovereignty, if defined as owning every layer of a technological stack, can quickly become counterproductive. It leads to balkanized systems, limited interoperability and a retrenchment from global cooperation. The incentives for creative development disappear. Competition and innovation subside.

The goal, instead, is intelligent dependence. This means making better decisions about what to depend on, under which conditions, and for how long. But to do that, leaders must first understand the whole picture…

The diffusion of economic benefits from general purpose technologies is a key determinant of their overall impact. Value capture by a few dominant firms is rarely the optimal outcome.

Electrification and the development of the internet were quite diffuse innovations, with both acting as a force multiplier for all kinds of industries and activities, both old and new. In contrast, the transistor and telephony both led to value capture by a handful of large firms, and only later became easily accessible general purpose technologies.

Right now, the battle for AI supremacy among big US tech firms looks less like electrification and more like telephony – if anything, even more concentrated. With such huge upfront costs, there is a tension between value capture by early movers and the bigger potential prize of diffuse innovation and returns. But competition and innovation will ultimately find a way, as we are seeing in China, and a whole new application layer could be where the economic diffusion (and arguably the biggest aggregate returns) will be found.

Enterprise AI is already a growing market, with Microsoft, Google, Amazon (and Anthropic?) well placed to take a slice of increasing AI spending. In the consumer space, LLMs will probably end up as free tools that build on or replace search engines, with new products and premium apps limited by consumer spending patterns. Perhaps licensing and APIs are a route to LLM providers taking a cut from the diffusion of economic benefits, or maybe they will just fall back on advertising.

Yeah, but does it even scale, bro?

Rohit Krishnan recently wrote about the inevitability of advertising as a monetisation model for AI and LLMs, which I find a truly alarming idea with little economic or social rationale for anybody except investors and ad-tech tycoons. So alarming in fact that I commented on the piece, which is a rarity for me these days. Given (waves hands frantically) everything that is happening right now with democracy, media and social cohesion, I can’t think of many worse ideas than to repeat the mistake of walled gardens and attention farming that turned the early social media ecosystem into the weaponised stupidity we see today.

Enterprise use cases might sound boring in comparison, but they really are not. It’s not about slightly better enterprise software, which has always been disappointing. The market for services-as-software and organisational components could be huge, and far more impactful.


Learning by doing is the key to adaptation

Christina Wodtke remarked the other day on how we old-timers from the Gen X era have been re-energised by AI:

The old timers who built the early web are coding with AI like it’s 1995.

Think about it: They gave blockchain the sniff test and walked away. Ignored crypto (and yeah, we’re not rich now). NFTs got a collective eye roll.

But AI? Different story. The same folks who hand-coded HTML while listening to dial-up modems sing are now vibe-coding with the kids. Building things. Breaking things. Giddy about it.

One thing that sets this older generation apart is that we grew up with computers that were less like hermetically sealed magical slabs and more like prototypes that you had to hack, coax and adapt to make them do anything interesting. We grew up on learning by doing, and we didn’t have Reddit or Stack Overflow to fall back on.

I have met plenty of business leaders from this generation who started out in creative or geeky fields and were then assimilated into the generic management borg to make a good living. I am willing to bet many of them are vibe-coding at home or itching to flex their creative muscles. Can we re-activate this desire to accelerate change?

On an organisational level, learning by doing is a better approach than asking consultants to write the perfect strategy and paying vendors to implement it. In fact, it is one of the characteristics that Andrew McAfee mentions (above) in predicting which companies will thrive with AI. And on an individual level, as AI and automation start ripping through white-collar jobs, the ability to learn fast by doing – and not wait to be told – will be a key factor in enabling people to adapt.

In some ways, we might be seeing a kind of reverse industrial revolution in terms of AI’s impact on mass employment, and with corresponding implications for the education system that was built to fuel the factory era. Just in the past two weeks, McKinsey are talking about an “existential” shift in consulting jobs. Other leading consulting firms are trying to navigate what we call the Centaur team challenge of optimising human+AI small team structures. And the FT is asking what this all means for entry level job training and education.

Most AI tools and models today are very much unfinished and anomalous, and maybe that is a feature not a bug, at least in the sense that we need to learn together, work out the kinks and take an active role in sense making, rather than trust the black box.

After a particularly frustrating recent AI-assisted coding project (note to self: the more obscure the language or coding framework, the less you should trust your LLM), I dug into other people’s fail conditions to learn from the experience. Some people report a phenomenon of debugging decay, where LLMs get significantly worse at debugging the more attempts they make; avoiding it demands workarounds and clever context management. In my case, the LLM promised me over and over again that it totally understood the error now and guaranteed that this final_final.final file would build correctly, until I called it out and it sounded almost relieved to admit it couldn’t make it work.
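The most common workaround people describe is brutally simple: don’t let failed fixes accumulate in one long conversation. Here is a minimal sketch of that idea in Python – `call_llm` and `run_tests` are hypothetical stand-ins for whichever model API and test harness you actually use:

```python
# A sketch of one "debugging decay" workaround: reset the conversation after
# a couple of failed attempts and re-seed it with only the current code and
# the latest error, instead of an ever-growing history of broken fixes.
# `call_llm` and `run_tests` are hypothetical stand-ins, not a real API.

MAX_ATTEMPTS_PER_CONTEXT = 2  # reset the context after this many failures

def debug_with_fresh_context(code, run_tests, call_llm, max_rounds=6):
    """Iteratively ask a model to fix `code`, resetting context regularly."""
    for _ in range(max_rounds):
        # Fresh start: the model sees only the current code, not past failures.
        messages = [{
            "role": "user",
            "content": f"Fix this code so the tests pass:\n\n{code}",
        }]
        for _ in range(MAX_ATTEMPTS_PER_CONTEXT):
            code = call_llm(messages)   # returns a candidate fix as a string
            error = run_tests(code)     # returns None on success, else the error
            if error is None:
                return code
            # Feed back only the latest failure, keeping the context short.
            messages.append({"role": "assistant", "content": code})
            messages.append({"role": "user", "content": f"Tests failed:\n{error}"})
    raise RuntimeError("No working fix found – time for a human to step in.")
```

The point is not this particular loop, but the discipline: a bounded number of attempts per context, then a clean slate.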


Embracing Imperfection

Learning how to get the most out of these boastful stochastic parrots is an art in itself, and goes way beyond writing a single clever prompt and hoping for the best, as Azeem Azhar shared recently in a very helpful guide to better prompting:

If you hired a new team member and gave them only vague tasks with no feedback, you wouldn’t be surprised if they struggled. The same goes for AI. Models need context. They need iteration. And they benefit from review, correction and calibration, just like people do. And as AI becomes one of the most important inputs in modern work, it’s time to start managing it as such.

The last of Azeem’s 7 rules for prompting is “context is king”, and context engineering is emerging as a key discipline for managing AI agents in particular, as this piece from Diginomica explores:

Because just like the person waking up daily to an increasingly messy pile of notes, pictures, and other fragments with no idea of why they are important, models also ‘wake up’ as a result of subsequent invocations to find data fragments stuffed into their short-term memory.

Without any context, therefore, the model can start to over-focus on words or phrases that appear frequently, or fixate on earlier parts of the conversation which are no longer relevant to the current task.
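To make that concrete, a common context-engineering tactic is to compress stale turns into a short summary before each new invocation, so the model isn’t dragging that messy pile of fragments into its short-term memory. A rough sketch, where `summarise` stands in for a call to whichever model you use:

```python
# A sketch of simple context pruning: keep the most recent turns verbatim and
# collapse everything older into a one-paragraph summary, so the model can't
# fixate on early fragments that are no longer relevant to the current task.
# `summarise` is a hypothetical helper, not a real library call.

def prune_context(messages, summarise, keep_recent=4):
    """Replace all but the last `keep_recent` turns with a summary message."""
    if len(messages) <= keep_recent:
        return messages  # short conversations don't need pruning
    stale, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarise(stale)  # e.g. "User is migrating a Flask app to Postgres…"
    return [{"role": "system",
             "content": f"Summary of the earlier conversation: {summary}"}] + recent
```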

And, of course, AI is a pretty great learning companion already. OpenAI’s study mode seems interesting, and once we get over the ‘AI is cheating!’ stage of embracing this technology in a learning context, I am sure we will find many creative uses for it in discovery, testing, learning consolidation, etc.

Forget about the AGI hype for a moment – it increasingly sounds like a fairy story told to credulous investors and governments to maintain their FOMO and keep them on the edge of their seats. Instead, focus on the incredible things we can do today with existing AI tools and systems to transform or even reinvent the organisations that govern our collective activity and work.

In an enterprise context, developers and technologists are already deep into self-managed learning and exploration, and the youngest emerging leaders are probably experimenting and growing with the technology. But as is so often the case, those with the wisdom and experience to help put this all into practice often lack the confidence and knowledge to move forward boldly. So executive education and leadership activation is a big task ahead of us. We are thinking about this a lot and will return to the topic soon.

But to close, here is a quote from an interesting response to Christina Wodtke by a recently re-activated (older) developer, which makes the point well:

When former prime ministers are writing op-eds about the AGI race, you know the fantasy has captured everyone – media, politicians, markets. They’re so busy staring at artificial general intelligence that they’re missing the actual revolution happening at ground level.

Two stories are unfolding simultaneously. One is a spectacular bubble built on geopolitical panic and sci-fi fantasies. The other is the quiet transformation of how we build everything. When the bubble narrative pops, the buildout accelerates.