Agentic AI is starting to demonstrate real capabilities and promise. In the consumer space, OpenAI seems to be moving towards specific agentic apps as a way to popularise and monetise its underlying models, as Nathan Lambert covers today.
But in the enterprise, Agentic AI is still mostly at the stage of proofs of concept and simple stand-alone apps, and not yet ‘in production’ as part of the fabric of the digital workplace. Without more focus on readiness and work transformation, the technology may not reach its potential.
Ethan Mollick shared an article this week discussing OpenAI’s new paper Evaluating AI Model Performance On Real-World Economically Valuable Tasks, which compared AI and expert human performance on complex, real-world, multi-stage tasks. Spoiler alert: the humans won, but not by much.
He went on to run similar tests on tasks that are part of his own role as a professor, and concluded that we need to think about why we do some of these tasks in the first place.
If we don’t think hard about WHY we are doing work, and what work should look like, we are all going to drown in a wave of AI content. What is the alternative? The OpenAI paper suggests that experts can work with AI by delegating tasks to it as a first pass and reviewing the output. If it isn’t good enough, they should make one or two further attempts with corrections or better instructions. If that still doesn’t work, they should just do the work themselves. If experts followed this workflow, the paper estimates they would get work done forty percent faster and sixty percent cheaper and, even more importantly, retain control over the AI.
Returning to the Sorcerer’s Apprentice analogy I used a couple of weeks ago, it might be tempting to use GenAI to write our corporate communications, but like the brooms in the original story, we risk being deluged with pointless PowerPoints and annoying emails.
If we want to embrace the complex but transformative opportunity of agentic AI rather than just allow Generative AI to run amok in the workplace, then we need to think about what we are doing, why, and how to support people in using technology more productively, rather than just as a lazy shortcut.
There is a big opportunity here for learning and organisational development, but we also need leaders and managers to focus on setting the right goals and context for the substantial change management challenges that lie ahead.
Workslop vs Doing the Work
Last week, researchers from BetterUp Labs and Stanford University (including our former colleague Kate Niederhoffer) published an article in HBR about the problem of workslop – sub-par, clearly AI-generated work outputs received from colleagues – and what leaders can do to stop it dragging down productivity.
When organizational leaders advocate for AI everywhere all the time, they model a lack of discernment in how to apply the technology. It’s easy to see how this translates into employees thoughtlessly copying and pasting AI responses into documents, even when AI isn’t suited to the job at hand.
In addition to arguing for better guidelines and a clearer sense of purpose in use cases, the piece also reinforces the need for more human collaboration as a solution to the workslop problem, echoing what Cerys wrote last week about the need for co-op mode in AI learning.
So many of the tasks required to work well with AI—giving prompts, offering feedback, describing context—are collaborative. Today’s work requires more and more collaboration, not only with humans but also, now, with AI. The complexity of collaboration has only deepened. Workslop is an excellent example of new collaborative dynamics introduced by AI that can drain productivity rather than enhance it. Our interactions with AI have implications for our colleagues, and leaders need to promote human-AI dynamics that support collaboration.
But workslop is really just an amplification of the existing problem of corporate spam. Reducing ideas to primary-coloured slides in the expectation that this is all leaders can understand already creates a ton of workslop. GenAI just makes it easier.
Part of the problem here is that GenAI seems easier to grasp, and also easier to teach, than transforming the way we work. It is the path of least resistance in terms of AI adoption, so it is natural that people will overuse it and take shortcuts when they are urged to adopt AI without a clear purpose, guidelines or context.
Doing the work of process mapping, value chain analysis and other aspects of AI readiness is a lot harder than just issuing vague instructions, giving people co-pilot licences and warning them their jobs are easily replaceable.
In another recent HBR piece, Melissa Valentine, Daniel J. Politzer and Thomas H. Davenport make the case that it is time to move on from experimenting with GenAI and start work on agentic AI readiness, if organisations are to see meaningful productivity gains:
Organizational readiness for gen AI starts with making unstructured data and its flows more visible, structured, and strategically prioritized.
This work proceeds best when led at the functional group level. Groups should map their data flows and work processes and identify current data being generated. Assess whether new data assets will be needed and if so, which ones.
If this is too much of a perceived distraction for leaders focused only on running the existing machine, then we can expect more failures of enterprise AI adoption.
Some leaders are notoriously short-term in focus, with compensation models that disincentivise tackling necessary organisational improvements that look like long-term projects. And whilst they can be quick to embrace AI automation of lower-level roles, as Dan Davies suggested recently, they might not be so keen to embrace automation of even the most repetitive tasks that they themselves are paid for:
if you believe most articles in the business press about AI, we’re on the verge of creating a godlike superintelligence which we will use to send more emails. The problem is not so much that sceptics aren’t taking AI seriously, but that too many people want a revolutionary, disruptive technology which doesn’t really change anything.

I find it amusing that this AI image has text on the back of the monitor and the mouse is backwards – it nailed the workslop brief 😉
Processes as software
If we want everybody, from leaders downwards, to let go of the workplace theatre that masquerades as real work and actually think about how our organisations operate and could be improved, then there is a lot of ‘eating your greens’ readiness work to do.
There are lots of talented and committed learning and organisational development teams working hard to support the needs of the business – but at least in terms of agentic AI, the difficult question is: does the business really know what it wants?
In a sense, today’s managers face a challenge similar to the one software architects and developers have faced at various times over the past five decades. Andrew Stellman recently joined the dots between the history of requirements engineering and today’s focus on prompt and context engineering:
Prompt engineering and requirements engineering are literally the same skill—using clarity, context, and intentionality to communicate your intent and ensure what gets built matches what you actually need.
Perhaps today’s challenge is even greater. Yes, clarity of requirements and context are vital, but for agentic AI to make our organisations programmable, we also need to understand the service building blocks and objects we can work with, and the processes – similar to functions in software – that use them. Knowledge and process geeks have talked about the value of mapping and cataloguing these for years, but I am willing to bet many leaders zoned out in these meetings.
In simple terms, we can think of agents as small, dedicated, tightly scoped AI models that are given very specific instructions, context and training to manage an automated process – one that might need some limited intelligence to deal with exceptions, and some level of self-awareness and goal-seeking based on a clear fitness function (e.g. minimise revenue loss from billing being out of sync with contracts). By chaining agents together in teams or groups with different roles and specialisations, we can build agentic capabilities that could usefully run and manage whole areas of an organisation’s process work.
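To make the shape of this concrete, here is a minimal Python sketch of the idea – purely illustrative, not a real framework. The `Agent` class, the `reconcile` process step and the `revenue_at_risk` fitness function are all hypothetical names invented for this example; in practice the process step would call an AI model rather than a simple function.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A tightly scoped agent: one process, one fitness function (illustrative)."""
    name: str
    handle: Callable[[dict], dict]    # the automated process step (here, plain code;
                                      # in reality, a scoped AI model with instructions)
    fitness: Callable[[dict], float]  # lower is better, e.g. revenue at risk
    log: list = field(default_factory=list)

    def run(self, case: dict) -> dict:
        result = self.handle(case)
        self.log.append((self.name, self.fitness(result)))  # track goal-seeking
        return result

# Hypothetical billing-sync process: bring the billed amount back in line
# with the contracted amount whenever they have drifted apart.
def reconcile(case: dict) -> dict:
    drift = case["contracted"] - case["billed"]
    case["billed"] += drift
    case["corrected"] = drift != 0
    return case

def revenue_at_risk(case: dict) -> float:
    return abs(case["contracted"] - case["billed"])

billing_agent = Agent("billing-sync", reconcile, revenue_at_risk)

# Chaining: a pipeline of specialised agents, each handling one stage
def run_pipeline(agents: list[Agent], case: dict) -> dict:
    for agent in agents:
        case = agent.run(case)
    return case

fixed = run_pipeline([billing_agent], {"contracted": 1200, "billed": 1100})
```

The point of the sketch is the shape, not the code: each agent has a narrow remit, a measurable goal, and a place in a chain – which is what makes whole areas of process work composable and manageable.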
Consider the evolution of home automation systems. We lived with manually controlled lighting and heating for a long time, with a stereotypical Dad (manager?) telling us to turn the lights off and the thermostat down. Then we developed slightly better controllers, such as motion sensors connected to the lights. But when we added simple actuators to each unit, we could suddenly control things in new ways, such as through an app on our phone or voice controls. And once each object was addressable, it became easily programmable. Home automation software is now widely and cheaply available and uses process automation to optimise things like solar power, batteries and HVAC systems. In our organisations, once we do something similar with our basic processes and services, we are on the way to smart, programmable management of the organisation.
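The step change in that analogy is addressability: once every object sits in a registry with a uniform interface, automation becomes a small program written over it. A toy Python sketch (all names here are invented for illustration, and the "devices" could equally be business processes or services):

```python
# Illustrative only: a registry of addressable objects, each exposing the
# same simple interface so higher-level automation can be written over them.
class Device:
    def __init__(self, name: str):
        self.name = name
        self.state = "off"

    def set(self, state: str) -> None:
        self.state = state

# Once objects are addressable by name, they become programmable as a group.
registry = {name: Device(name) for name in ("hall_light", "kitchen_light", "thermostat")}

def scene(registry: dict, settings: dict) -> None:
    """A 'program' over addressable objects: apply each setting by address."""
    for address, state in settings.items():
        registry[address].set(state)

scene(registry, {"hall_light": "off", "kitchen_light": "on", "thermostat": "18C"})
```

Swap devices for catalogued processes and services, and `scene` for an agent's plan, and you have the organisational equivalent: programmable management built on top of an addressable process landscape.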
If we want to free people to do the real work of value creation, then orchestrating and automating the underlying processes and platforms that support this work is a good goal to have in mind. Machines doing machine things so that people can do people things, rather than the organisation as a fragile, manually-run contraption that treats people as fungible process workers.
Instead of using GenAI within an existing manual process management system to create more workslop, we can work on automating the boring, repetitive process work as far as possible and free people to do real, value-creating work.
Learning & development stepping up
Where does learning fit in all of this?
L&D functions have been asked to quickly respond to the rise of GenAI and help people develop the skills needed to use it productively and safely. It is quite challenging to develop learning programmes for a topic that seems to evolve every week, but many firms are now doing a decent job of supporting GenAI adoption in the enterprise.
However, agentic AI is not an easily defined skillset that can be taught like prompting. Much of the field’s development is still quite technical, but as it becomes commoditised into software and systems that enterprises can deploy more easily, we will probably find that the mindsets, skills and competencies we need to cultivate are more holistic, and point to us needing to see the organisation in a different way – more like software and less like a factory.
At the moment, it feels a bit like the tail is wagging the dog in terms of enterprise AI adoption, and we need to get a better leadership handle on what we are trying to achieve and how to ensure the technology is deployed in service of advancing business capabilities.
But rather than wait for leadership to define the organisation’s needs, in a period where we are all exploring new territory, perhaps L&D can take the lead in engaging with people and teams on some of the skills and mindsets needed to use agentic AI well.
If we want people to think about their work differently, so they are not just following a process but managing towards an outcome, then there are some useful skills and ways of seeing that we can borrow from executive education, such as:
- team capability mapping and analysis
- process and value chain mapping
- requirements and context management
- test plans, observability and monitoring
- knowledge and data management
- combinatorial innovation and design thinking
These are probably of more long-term use right now than getting everybody to dive into the technical domain of agent prompting, guardrails, MCP and interoperability; those details will continue to evolve rapidly, whereas the foundational business skills needed to use agentic AI well will still be relevant in the future.
But there are also bigger questions that we are trying to get our heads round in thinking about the future products, services and engagement methods of L&D functions when it comes to Agentic AI:
- How can L&D facilitate continuous learning for people and agents, ideally together, without resorting to above-the-flow courses, workshops and so on?
- Can we help people imagine and create a digital twin of their teams and functions, including both the process landscape and the capabilities of the people within them?
- How can we help people understand and build new process systems run by agentic AI and then explore what they can do with them once they are running successfully?
- Can we realistically use personal learning agents to help people find content and tools to further their development whilst in-the-flow of work, and then experience it however they wish?
We will return to this topic soon, but in the meantime we would love to hear about your experiences of making sense of agentic AI in the enterprise, or how you are helping people learn about it.