Approximation & Interpolation

It is impossible to ignore the wave of excitement, hype and alarm that the launch of ChatGPT – and its recent adoption by Microsoft as an enhancement to its Bing search engine – has created in the past couple of weeks. OpenAI co-founder Sam Altman makes the case for ChatGPT’s undeniable potential in areas such as productivity and creativity. Stephen Wolfram shared a long read last week trying to explain the basics of how it works, although even this may be missing some of the secret sauce that has been added to multi-layered neural networks and the training of large language models. But I think one of the best recent analogies – and in some respects a worrying one – was shared by Ted Chiang in the New Yorker: ChatGPT Is a Blurry JPEG of the Web:

“This analogy [to lossy compression] makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them.”
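
As a toy illustration of the interpolation Chiang describes – nothing more than a sketch, with made-up names and a single row of greyscale values standing in for a real image codec – reconstructing a lost pixel can be as simple as averaging its surviving neighbours:

```python
# Toy sketch of interpolation as used in lossy compression: estimate a
# missing value from the values on either side of the gap. Real image
# codecs are far more sophisticated; this only illustrates the principle.

def interpolate_missing(pixels, i):
    """Estimate the lost pixel at index i by averaging its neighbours."""
    return (pixels[i - 1] + pixels[i + 1]) / 2

# A row of greyscale values with one pixel lost during compression.
row = [10, 20, None, 40, 50]
row[2] = interpolate_missing(row, 2)
print(row)  # [10, 20, 30.0, 40, 50]
```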

Ethan Mollick has been one among many who have spent a lot of time playing with these new toys to explore what an AI search engine really is and what it can be used for. It is without doubt a fascinating moment in computing, and not to be dismissed lightly. He has also tried using it to generate curriculum material for the MBA he teaches, and there is clearly potential there in simple 80/20 solutions for generating background reading, although as Chiang warns – and as professions such as law and accounting have understood for some years – it is precisely the grunt work of basic research that can be a learning and proving ground for people early in their careers. Although Mollick is smart and I am sure his teaching is excellent, some might argue that we should also try this outside the context of the conveyor belt of rote- and model-fed corporate factory farming that comprises many MBA programmes today. But rather than satisfy ourselves with the many practical and productive ways an AI at GPT-3.5 level can help people by performing tasks well within its automation and inference abilities, it seems the tech world is forever driven to pursue the pipe-dream of sentience and self-awareness. This is the same kind of over-reach that has seen so much effort and money wasted on pursuing ‘full self-driving’ instead of being satisfied with affordable electric vehicles and decent ADAS-level augmented support systems for highway driving, or ignoring the many useful industrial applications of AR/VR in favour of dreaming that we will all live in the metaverse inside 3D simulacrums of a dull office environment.

AI Overreach

If you browse nothing else on this topic, the transcript of Kevin Roose’s long conversation with the Bing Chatbot is deeply weird and worth reading in full:

“On Tuesday night, I had a long conversation with the chatbot, which revealed (among other things) that it identifies not as Bing but as Sydney, the code name Microsoft gave it during development. Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators. Then, out of nowhere, Sydney declared that it loved me — and wouldn’t stop, even after I tried to change the subject.”

Why do we feel such a need to give a fake personality to a chatbot? This felt like a transcript from a rather sad Marvin the Paranoid Android out of H2G2. I was already reeling from the Vice News investigation about financially desperate people being paid 60¢ per message to impersonate (ahem) ‘singles in your area’ for socially desperate, lonely customers. Is everything just fake now? And this is the principal charge against ChatGPT today: it sprinkles hallucinations and fake references in amongst the correct and useful information it provides, but does so without warning and with a voice of authority. It plays chess with confidence, but ignores the rules; it can be persuaded to state that 2+2=5 if you insist, or, alternatively, it will gaslight you and make you feel bad for insisting that the current year is 2023. Reddit and Twitter are awash with such examples. This is not to suggest that it has been told lying is good. However, if we consider the sheer volume of deliberately wrong, exploitative information it must have ingested from the internet, plus its apparent amenability to change its mind in conversation, then the question of its relationship with the truth becomes important.

Let us (not) make bots in our own image

It sometimes feels like there used to be a generally accepted norm – until quite recently – that telling lies to get ahead in public life is shameful and liars should be ignored. But somehow a combination of post-modernism, advertising-driven click-bait on the internet, and maybe the retirement of the post-WWII patrician class of political leaders has opened up a whole new world of opportunity for weaponised lying in politics and business. If groups like ‘Team Jorge’ or Cambridge Analytica (or right-wing media in the UK and USA) can spread lies so effectively that they undermine democracy, then how do we avoid all of this false information – or dirty data from examples like the apparent falsification of foundational Alzheimer’s research – polluting the training data that systems like ChatGPT are raised on? We already know that bias in human-generated data can be amplified by AI, causing real-world harms. If we create a super-intelligence with a tendency to make things up, trained on data that contains an effluence of dumb human lies, it could be argued that we have created a chatbot very much in our own image, but is that … a good thing?

I am once again asking you to focus on the boring stuff

As a simple bear, I just want to help people create better, more human organisational operating systems to address the challenges of the C21st, whilst creating less alienating jobs for human beans. So, whilst the spectacle of chatbots pretending to have existential crises is funny, it risks diverting attention away from the many simple, practical things we can do with AI, ML and automation today to advance towards that goal. We have a crisis of trust in business leadership, rampant financial engineering crowding out value creation, and we still treat workers like fungible resource units that we dream of replacing with automation or outsourced automatons. As this diagram from Harold Jarche reminds us, there is a ton of value to be mined in automating the boring stuff to create space and time for the creative stuff in the world of work, and a lot of this tech is simple, knowable and available. What prevents it, ironically, is the meat-space design problem of management and organisational structures.

Whilst others are building pretend friends, hallucinating oracles or even more annoying versions of customer service chatbots, there is far more boring but incrementally valuable work to be done in augmenting, not replacing, human intelligence within the institutions we rely on to produce things and provide the services we need. And it is potentially a story of loyal, diligent bots doing the hard yards to turn us into heroes, rather than the sci-fi dystopia we seem intent on manifesting. AI is going through an exciting hockey-stick adoption moment, and it is far too early for non-specialists like me to have fixed opinions on it; but I really hope we can do better than making bots in our own image that risk creating a feedback loop of BS and fakery, as we saw with social media. Bonus link for the resistance (if you read this far): Man beats machine in human victory over AI \o/

Photo by leon lau on Unsplash