Amid the AI hype and fear-mongering, there is value today in creating augmentative little helpers in the workplace, rather than trying to simply replace people.
We make our tools, and our tools ~~destroy us in a robot uprising~~ make us
Amid reports of ChatGPT being used by a company to replace copywriters (which then tried to re-hire them for less money as AI babysitters), or to write college essays for students, and with GPT-3 apparently being used to feign empathy and give mental health advice to vulnerable people, deep-seated fears of AI replacing workers are bubbling to the surface again. But perhaps the most obvious value in AI and automation lies not in becoming a ‘big brother’ that replaces humans, but rather in lots of ‘little helpers’ that augment, extend or improve their work.
We should not be so alarmist about co-evolving with technology. It is no coincidence that historians mark breakthrough moments in our species’ story with terms like the Beaker people, the Stone Age, the Bronze Age or even the Anthropocene – we have always used our environment and new tools to develop and evolve, for better or for worse.
Even spiders use their environment as an extension of their mind, offloading cognitive tasks to their webs – a reminder that this evolutionary reliance on tools and our environment is not limited to humans. As Kevin Kelly argues in What Technology Wants, technological evolution is not separate from human evolution, but an extension of it. I like one of the principles he shares about adopting technology:
“As a practical matter I’ve learned to seek the minimum amount of technology for myself that will create the maximum amount of choices for myself and others. The cybernetician Heinz von Foerster called this approach the Ethical Imperative, and he put it this way: “Always act to increase the number of choices.” The way we can use technologies to increase choices for others is by encouraging science, innovation, education, literacies, and pluralism. In my own experience this principle has never failed: In any game, increase your options.”
Little Helpers, not Big Brother
Every hype cycle seems to undergo a maximalist phase where pioneers try to find ways to replace people with technology, whether it is the idea of trustless systems in blockchain, military robots instead of soldiers, using ChatGPT for plagiarism, or just fake-ass online influencers using social media augmentation to portray a false identity for monetisation purposes. But in each of these examples there are alternative approaches that follow Kelly’s maxim: using tech to connect us more easily and build trustful human networks, augmentative military tech that combines with human ingenuity (e.g. Ukraine’s use of drones or Chinese exoskeleton infantry suits), using ChatGPT to research and think through a topic before writing, or simply being your real self and using social media to find more people to connect with.
There are so many areas in business that are ripe for assistive AI tools to reduce inefficiencies or help people connect better, and in most cases the technology required is far simpler than something like ChatGPT.
In an interview this week, Tyler Cowen talks about the potential for using AI to personalise your education:
I think the education each individual child gets will be more different than has been the case as of late.
That’s already a trend with the internet: you can customize what you read now. You’ll literally be able to design your own education by speaking to your AI. Your AI can be trained on data sets that you, or the parents, want. So you’ll have your own personalized “familiar” —to refer to the old world of witchcraft — and it will do things for you.
Pandemic-induced remote learning showed up the inadequacy of factory-style mass education for under-16s in most countries, and my own experience with executive education suggests to me that even at that high-priced level, far too much learning is generic, one-size-fits-all. So I find this a tantalising proposition, and it is something we are working on.
Another area with huge potential for improvement is the mess of process- and dumb KPI-driven bureaucratic work that is performed so poorly in large organisations. Much of this could and should be automated on shared service platforms, with simple AIs doing the water carrying that middle managers are now tasked with, freeing up people to think and work towards better outcomes, rather than bureaucratic compliance.
If you dig into large pharma companies, finance or telecoms firms, you often find a core of technologically-advanced specialists (e.g. scientists, quants, infrastructure engineers) who rely on “the business” to organise the value chain and deal with customers. But in most cases, “the business” has not evolved beyond the memo and the factory model – they just do it with email, ERP, HR tech and MS Teams meetings. As ex-EE CEO Olaf Swantee told the FT last week in a pitch for private equity to shake up public firms in his sector, “incumbent telcos have too many layers [and] too much bureaucracy”.
A few years ago we helped with an AI project at a telecoms firm to reconcile contracts against actual billing, which identified large amounts of revenue leakage caused by a very manual, siloed operation where the left hand didn’t know what the right hand was doing. The potential for smart telecoms firms to use lightweight augmentative AI is huge.
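To make the idea concrete, here is a minimal sketch of that kind of reconciliation check. All field names, data structures and the tolerance threshold are hypothetical illustrations, not the actual project’s data model: join contracted rates to billed amounts per account and flag discrepancies for a human to chase.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    account_id: str
    monthly_rate: float  # the price agreed in the signed contract

@dataclass
class Invoice:
    account_id: str
    billed_amount: float  # what the billing system actually charged

def find_leakage(contracts, invoices, tolerance=0.01):
    """Flag accounts where billing has drifted from the contracted rate,
    or where an invoice has no matching contract at all."""
    contracted = {c.account_id: c.monthly_rate for c in contracts}
    discrepancies = []
    for inv in invoices:
        rate = contracted.get(inv.account_id)
        if rate is None:
            discrepancies.append((inv.account_id, "no matching contract", inv.billed_amount))
        elif abs(inv.billed_amount - rate) > tolerance:
            discrepancies.append((inv.account_id, "rate mismatch", inv.billed_amount - rate))
    return discrepancies
```

In practice the hard part is the join itself – contracts and billing often live in separate silos with inconsistent identifiers, which is where the AI earns its keep – and the value lies in surfacing mismatches for people to investigate, not in auto-correcting bills.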
I recently played the old ‘retention team’ game to lower my own bills, and was pleased to achieve about 25% in savings. But my provider made a manual error in the contract, and multiple errors in each of their three subsequent attempts to correct it, and I received a blizzard of automated texts asking me to formally accept the new terms, without knowing which version they were referring to. I won this game of Russian Roulette and accepted the correct contract, but predictably enough they just continued billing at the old rate anyway 🤷‍♂️. This kind of thing is a key reason why they risk being relegated to the role of ‘dumb pipes’ while legacy-free newcomers skim off the cream in the industry.
Applying simple AI to improve the digital workplace
The Shopify story about banning meetings got a lot of attention recently. It certainly hit two of my hot buttons: meeting reduction and use of software bots to improve employee experience.
“This is in response to the fact that we want to be a better operating company,” says [Kaz] Nejatian, especially as it enters a year when many are predicting a recession, noting “our customers are incredibly judicious about their use of resources.”
I like the use of a bot to find rogue meetings in shared calendars, and I hope it can learn to handle exceptions without being too draconian. But the real key to the success of something like this is to encourage or teach people viable alternatives to meetings, such as better real-time and asynchronous collaboration behaviours, and shared documentation of issues and decisions. Only by doing this can you identify the residual scenarios and use cases where large meetings really are the right answer, which in some cases they are.
The reactions on Hacker News are fascinating, and go to show that even within the same company culture, and among people in the same technical domains, preferences for how to collaborate can differ widely – we should not dismiss pro-meeting views entirely.
There are many other uses of lightweight AI and bots in improving employee experience, from concierge services and room finders all the way to personal search agents and workflow builders. If we can get the UX and conversational flow right (e.g. not be too intrusive or ‘icky’) then this remains an area of potential despite some early mis-steps.
Yes, we need to be careful how we apply AI in the workplace, but if we pursue the approach of little helpers rather than big brother, this should not be unsolvable. The WEF blog published some guidelines on responsible use of AI in the workplace that seem vaguely sensible as a starting point:
- Know your data sources
- AI’s decisions should be explainable
- Employees own their data
- Share the uses and benefits (WIIFM)
100% Human, organic hipster work outputs
Despite the maximalist dreams of AI proponents, just as with “the metaverse” I suspect the best use cases will be in relatively low-scale enterprise or industrial domains (e.g. AI for actuaries or VR headsets for factory-based digital twin applications) rather than becoming the next global mega-business. And that is very much a good thing, given the potential for social harms if scaled or used inappropriately. Just as the metaverse will probably not become your future workplace, AIs will probably not become your boss either, however much you might sometimes wish for more sentient management in the meatspace.
I recall a conversation 5+ years ago with Saim Alkan in Germany, who told me about his new start-up AX Semantics and how it would revolutionise simple, formulaic copywriting for things like e-commerce catalogues, SEO optimisation and so on. I was sceptical, as usual, but that case is now pretty much proven and unremarkable, and they seem to be doing well (kudos, Saim!).
But over the longer term, I think we will also see more value placed on 100% human, organic outputs for more meaningful use cases (e.g. writing, music, design and perhaps also in corporate culture), just as we have seen a backlash against mass industrialised products and especially food. I fully expect to go down to my local store and buy really expensive Substacks that are certified AI-free.
I wonder if we will start to see a lot more authors and paid publications add the disclaimer that they do *not* use generative AI like ChatGPT at all.
As a differentiator & response to more free sites to use these AI generators over time, to churn out written content.
— Gergely Orosz (@GergelyOrosz) January 8, 2023