Exciting and Stupid

Google has embarrassed itself once again with a hasty roll-out of generative AI features, serving up ‘AI overviews’ for search topics that include glue in pizza recipes, health suggestions such as ‘eat rocks’, and a whole host of confused, basic knowledge errors. What could possibly go wrong in treating the voluminous online output of Redditors as if it were the Library of Alexandria?

It turns out attention is not all you need to create intelligence, no matter how clever and fascinating the current generation of LLMs based on this approach has shown itself to be.

I think we will soon reach a consensus that GenAI based on transformers is not even close to being intelligence in the human or animal sense of the word, although it will be an important building block that more advanced multi-modal AI stacks can utilise.

AI pioneer Yann LeCun’s advice to young AI students and researchers at the recent Vivatech event was illustrative of this shift:

“Don’t work on LLM. This is in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

So much hope, money and computing energy have been invested in LLMs as solutions, rather than as components of new hybrid human+automation systems, that we run the risk of force-feeding them into areas where their weaknesses might outweigh their strengths.

Meme shared on LinkedIn by Alexander Chamandy

Boring and Impactful

As with so many other developments, consumer AI is exciting and trendy, but boring enterprise AI is where the transformative ‘thick value’ is waiting to be discovered – but only if we can accelerate organisational readiness.

McKinsey released an interesting report last week about one of these dimensions of readiness: re-skilling people and re-defining job roles. The report focuses on nine major economies in Europe, compares them with the United States, and predicts that technology will hugely change demand for labour by 2030, with other factors – an ageing population, infrastructure spending, and so on – also having an effect. It predicts that demand for health professionals and other STEM-related roles will grow by 17-30% over that period, and foresees a marked increase in automation:

“By 2030, our analysis finds that about 27 percent of current hours worked in Europe and 30 percent of hours worked in the United States could be automated, accelerated by gen AI. Our model suggests that roughly 20 percent of hours worked could still be automated even without gen AI, implying a significant acceleration.”
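Read literally, the gap between those figures is the gen AI effect. A quick back-of-the-envelope calculation – illustrative arithmetic only, using the percentages quoted above – shows what ‘significant acceleration’ means in concrete terms:

```python
# McKinsey's quoted shares of current hours worked that could be automated by 2030.
europe_with_genai = 0.27   # Europe, "accelerated by gen AI"
us_with_genai = 0.30       # United States, "accelerated by gen AI"
without_genai = 0.20       # "roughly 20 percent ... even without gen AI"

# Relative growth in the pool of automatable hours attributable to gen AI.
europe_uplift = (europe_with_genai - without_genai) / without_genai
us_uplift = (us_with_genai - without_genai) / without_genai

print(f"Europe: +{europe_uplift:.0%}, US: +{us_uplift:.0%}")
# → Europe: +35%, US: +50%
```

In other words, on this reading gen AI expands the automatable share of hours by roughly a third in Europe and a half in the US.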

It is great to see more and more companies committing to the skills shifts that this implies, such as Bonnie Cheuk’s announcement of a dedicated GenAI skills programme at AstraZeneca, supported by VP of Talent and Development Marc Howells – a welcome initiative that is part of their ‘being digital’ not just ‘doing digital’ approach to the transformation of work.

But the implications for jobs and work run deeper. As Sangeet Paul Choudary writes in his useful enterprise AI playbook, this is not just about skills: there will also be a significant unbundling and re-bundling of tasks which will change job roles and perhaps also what we regard as ‘work’.

This is a challenge that cannot be left to technology vendors alone – organisations should really be involving workers in this process, supported by the kind of multi-domain groups we talked about last week in our Digital Leadership Group playbook.

And of course, this will also include managers, who at the more political-bureaucratic end of the spectrum spend a lot of time on basic manual coordination – a time and resource cost we can do without, and one whose automation could help make organisations more agile and responsive.

0.48kWh should be enough for anybody

A pro-human argument for automating the kinds of tasks that LLMs and basic agents are good at is to free up the space and time for doing more of what people are good at, such as making mistakes and stumbling on breakthrough innovation. Fast Company recently used the example of the discovery of penicillin to make the point:

If the computational powers of generative AI continue to grow, where does the human brain maintain an enduring advantage? The answer may lie in the 1928 story of Fleming’s discovery: The human capacity to make and learn from mistakes is a fountain of innovation. “Do not be afraid of mistakes,” Fleming once said. “For without them, there can be no learning.”

Instead of spending a projected 10% of global energy on brute-force compute to achieve slightly less wrong LLMs, we should be using this technology to augment, connect and exploit the vast amounts of under-utilised human creativity and real intelligence that runs at about 20W, or less than half a kilowatt hour per day. Real intelligence is an integral part of the evolutionary urge to survive and thrive, and it is all about constraints, clever hacks and contextual adaptation.
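The section heading’s 0.48kWh comes straight from that 20W figure – a one-line sanity check, using the commonly cited estimate of the brain’s power draw rather than any precise measurement:

```python
BRAIN_POWER_WATTS = 20   # widely cited estimate of the human brain's power draw
HOURS_PER_DAY = 24

# watts × hours ÷ 1000 = kilowatt-hours
kwh_per_day = BRAIN_POWER_WATTS * HOURS_PER_DAY / 1000
print(f"{kwh_per_day:.2f} kWh per day")  # → 0.48 kWh per day
```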

The magic – the force multiplier – is what people do with their tools and how they find novel ways to combine, compose or create with them, and that is a very human, social process. I love looking back at previous periods of social and technological innovation, such as the eighteenth-century correspondence networks studied by Stanford’s wonderful Mapping the Republic of Letters project, to see how connections, coincidence and what bloggers used to call ‘provisional ideas’ were shaped and used in new ways.

Every layer of our biological tech stack from neurons up to communication and reproduction is driven by an innate, hard-wired urge to connect. This is the renewable energy source for change.

So how can we do a better job of connecting technologies born of abundant money and compute with the low-fi, constrained, contextual human uses to which they could be put?

The iconic Shard building in London was an insanely difficult project that demanded new building methods, because its tiny site footprint was hemmed in by key rail routes, a hospital, and so on. So the engineers opted for top-down construction, alternating between deepening the foundations and building more floors on top.

Despite the limitations of LLMs as a model, we have enough foundational components to power decades of building on top. But right now there is a big imbalance between the money and attention deployed to improve the underlying technology and that deployed in using it to accelerate the future of work – how we design, build and run the organisations we need for the future. Let’s build up as well as down.