Two very impressive demonstrations of GenAI capabilities have caused a stir in recent days, and they both challenge our assumptions and precepts about the art of the possible.

Playing the classic video game DOOM on unlikely devices is the new ‘Hello World’ of programming, and we have seen it implemented on everything from kitchen appliances to gym equipment. But a recent project by a mostly-Google research team went further, creating a game engine powered entirely by a neural model: a playable version of DOOM with no game engine whatsoever, just a diffusion model that predicts the next frame based on keyboard inputs and its training data. It is one of those affordances of GenAI systems that makes sense once you see it and think about it, but that few would have predicted in advance.
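To make the idea concrete, here is a minimal sketch of the autoregressive loop such a system runs: a model predicts each new frame from recent frames plus the player’s input, and that prediction feeds the next step. This is purely illustrative – the real project uses a trained diffusion model, whereas `predict_next_frame` below is a hypothetical stand-in with toy dynamics.

```python
import numpy as np

FRAME_SHAPE = (4, 4)  # a tiny "screen" for illustration

def predict_next_frame(history, action):
    """Stand-in for a diffusion model: the next frame is conditioned
    on recent frames plus the player's keyboard action."""
    last = history[-1]
    # Hypothetical dynamics: shift pixels by the action, then decay.
    shifted = np.roll(last, shift=action, axis=1)
    return 0.9 * shifted + 0.1

def play(actions, context=4):
    """Autoregressive loop: each predicted frame is appended to the
    history that conditions the next prediction."""
    history = [np.zeros(FRAME_SHAPE)]
    for action in actions:
        frame = predict_next_frame(history[-context:], action)
        history.append(frame)
    return history

frames = play([1, 0, -1, 1])
print(len(frames))  # initial frame plus one per keyboard action
```

The striking part is what is absent: there is no game state, physics or level geometry anywhere in the loop – the “world” exists only as the model’s learned mapping from history and input to the next image.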

Another exciting recent release, arguably more relevant to the future of organisations and software development, is Cursor AI – a software development environment (actually a fork of the popular VSCode editor) that lets you pair program with a specialist AI. Whilst this is similar to GitHub Copilot – also a hugely impressive capability – early users report better contextual understanding in many cases, and it seems to be very well received.

New affordances of AI tools are emerging so rapidly that they are overwhelming organisations’ ability to assimilate them, especially as they make many of our assumptions about constraints obsolete. In a world of super-charged command line interfaces, it looks like the only limit is our imagination and the questions we ask.

So how do we pose better questions as leaders seeking to use these powers?

Slide from a recent corporate keynote on enterprise AI

 

Are we seeking marginal gains or transformation?

As with any new wave of technology adoption, the first instinct of organisations is to apply it to the way they do things today. Sometimes that can be a stepping stone towards ‘doing things differently’, but often it is as far as organisations go.

A lot of enterprise AI discussion has been about reducing costs in today’s operations by automating, simplifying or replacing labour, but as James Manyika argues in an interview with the Financial Times, this focus can cause us to miss a bigger, more important picture:

“You don’t win by cutting costs. You win by creating more valuable outputs. So I would hope that … firms think about, ‘OK, now we have this new productive capacity, what additional value-added activities do we need to be doing to capitalise on what is now possible?’ Those are going to be the winning firms.”

And of course, as with the rise of the internet, we cannot take predictions of productivity improvements for granted:

“Right now, everyone from my old colleagues at McKinsey Global Institute to Goldman Sachs are putting out these extraordinary economic potential numbers — in the trillions — [but] it’s going to take a whole bunch of actions, innovations, investments, even enabling policy . . . The productivity gains are not guaranteed. They’re going to take a lot of work.”

 

What assumptions are holding us back?

Azeem Azhar shared an excellent piece on the imagination challenge this poses for leaders and managers last month. Based on his conversations with business leaders, he concluded that most are still at the stage of doing what they already do, but slightly cheaper or slightly better:

“They’re using AI to shave costs or incrementally improve processes, missing the opportunity to strategically rethink what their business could look like and how they approach the rethinking itself!”

“Why? … There are hidden assumptions in organisations and processes we already have. These assumptions about processes are based on things like:

  • The collective intelligence of the organisation
  • The speed with which it can make decisions
  • The types of decisions that can be made at each level of the organisation

These assumptions have been baked into the processes developed over the years.”

 

How can we get to grips with the art of the possible?

Azeem’s main advice to deal with this challenge is pretty simple:

“The first prescription is that you need to get hands-on with the technology yourself. There is no escaping it … The second is that… Actually, that’s all I recommend.”

This brings to mind a key difference I see in leaders who come from the generic management tradition and those who have some technical knowledge of computing or engineering. The latter group often demonstrate a better balance between explore and exploit management modes, and tend to engage in more continuous learning. Having even a basic conceptual understanding of systems, modularity, integration and how things work gives them the confidence and mental toolkit to re-imagine organisational structures and processes in a way the former group struggle with because all they understand is the old system.

If you can’t see the wood for the trees, and you have no concept of the art of the possible, then it can be hard to even find the right questions to ask.

For a long time in my own executive education work I have advocated for leaders to think like organisational architects and experience designers for their teams, because this enables them to think more broadly about how work is orchestrated and perhaps challenge some assumptions about ‘how’ work gets done.

Those focused only on using automation to cut costs will often miss clear opportunities to re-factor or re-design workflows and processes, and skip straight to automating what they already do, which is essentially what the pre-AI approach of Robotic Process Automation (RPA) has been doing for many years.

 

What tasks do we not want to automate?

Salesforce’s Chief Digital Evangelist Vala Afshar wrote a piece recently that sets out a maximalist approach to automation using the analogy of autonomous driving with its various levels of autonomy, from driver assistance to full autonomy:

“Research shows that AI has the potential to automate most [Level 1-3] tasks in knowledge-based professions by 2030, dramatically increasing the average worker’s productivity. Humans will be elevated by AI, freed from manual, repetitive, and boring tasks — and empowered to focus on strategic and creative activities. AI also may create new opportunities for humans in this phase. This may, however, obscure the reality of what’s going to happen next. Once AI reaches level 4, we will enter the Replacement phase … It will, sooner or later, replace them, and this replacement, when it happens, will happen rapidly. Current HR and Change leaders need to start planning for this now.”

This argument is almost certainly overstated, and the view overly simplistic. But it does help focus the mind on the question of which tasks we might not want to automate, which is more nuanced than it first appears.

For example, as I mentioned recently, if you automate the kinds of training wheels tasks that are often a rite of passage for young professionals, how will they learn the ropes?

And if you routinely automate creative tasks like visual design or other crafts, you can eliminate the enjoyment and sense of purpose that creative work provides, and risk reducing creativity overall as new outputs become amalgams of old outputs and everything starts to look the same.

Ted Chiang argues in last weekend’s New Yorker that AI can’t make art, and this has stirred a debate about the labour theory of value versus the intrinsic value of craft and process. The value placed on abstract or modern art in New York, for example, might be more about the output and its current cultural relevance, but in other places such as Japan, more emphasis might be placed on the value of craft.

It is easy to forget that humans are evolving much slower than their tools, and they can derive pleasure and meaning from feeling grounded and connected to the world, which is why we walk in forests, do gardening and make things with our hands. There are areas of our work that are personally enriching – even ritualistic – that we might want to protect from the efficiency of automation or GenAI.

 

Asking better questions

Leaders who lack a fundamental understanding of computing and technology architectures will struggle to move beyond marginal gain questions such as ‘how can we reduce costs by 15%?’ or ‘where can we find new markets for our widget business?’, which is why Azeem’s advice about getting your hands dirty is so important.

In Dave Snowden’s Cynefin framework, he emphasises three thinking traps to avoid:

  1. Retrospective coherence: we should learn from the past, but not assume that what happened will repeat, or that it had linear causality
  2. Premature convergence: coming too quickly to a single solution rather than keeping our options open (although moving quickly to parallel safe-to-fail experiments is a good thing)
  3. Pattern entrainment: the patterns of past success tend to be repeated, and will entrain future failure unless you actively work to prevent it

Right now, with such a growing gulf between the way leaders conceptualise their value chains and the art of the possible with smart technology, this is a good place to start in avoiding the wrong questions. But we need real vision and imagination to identify better questions to ask.

If the only limit is our imagination and the questions we ask our tools to help us answer, then we need leaders who can see beyond yesterday’s assumptions and goals. There are some existing methods widely taught in leadership development, such as systems thinking, the 5 Whys, and first principles thinking, all of which seem useful mindsets for this challenge.

Prompting and conversing with GenAI tools feels like a new form of literacy, where almost anything is possible if you ask the right questions. Building on this, the ability to conceptualise chains of AI agents interacting and cooperating based on these prompts is perhaps closer to writing a symphony.
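A minimal sketch of that idea – agents as a chain, where each one’s output becomes the next one’s input – might look like the following. This is illustrative only: `call_model` is a hypothetical stand-in for a real LLM API call, and the roles are invented for the example.

```python
def call_model(prompt):
    # Stub: a real implementation would call an LLM API here.
    return f"[response to: {prompt}]"

def make_agent(role):
    """Create a simple agent that frames its task from a given role."""
    def agent(task):
        return call_model(f"As the {role}, handle: {task}")
    return agent

def chain(agents, task):
    """Run agents in sequence, passing each output to the next."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

pipeline = [make_agent("analyst"), make_agent("strategist"), make_agent("writer")]
print(chain(pipeline, "map our customer-onboarding process"))
```

Even in this toy form, the leadership question is visible in the code: someone has to decide which roles exist, in what order they hand off work, and what the opening task should be – which is the orchestration skill the symphony analogy points at.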

For leaders wondering where to start, my advice would be to ask a question I sometimes pose in talks about enterprise AI: if you had a magical command line interface for developing your organisation, what exactly would you write? If you can answer that question, you are on the right path.

If you would like us to engage or inspire leaders in your organisation and help make sense of enterprise AI use cases and adoption, get in touch.