One theme common to both events was how the role of trusted professionals and professional values can survive and thrive in an increasingly technology-dominated future, and how human-scale trust can win out over the idea of technology-mediated trustless systems like blockchain.
The first event brought together audit professionals from various firms to discuss how they might collaborate to improve audit quality. The consensus was that there is a lot of work to be done in developing better data platforms and apps, but that the experienced audit professional remains important in interpreting and sense-checking data. Progress towards a more technology-rich ‘big data + AI’ audit approach is constrained not just by a lack of technology capabilities inside the firms, but also by a lack of data readiness and interoperable platforms on the client side. This suggests that mid-tier and larger firms need to do more to help their clients onto modern data platforms before any smart technology can make sense of the numbers, which looks like a win-win opportunity.
I think the profession as a whole has more to gain than to lose from collaborating to develop the basic components needed, both for the firms and their clients. It makes little sense for every firm to build its own version of every basic algorithm, calculator or component, given they all operate within the same legal and regulatory environment. The larger firms spending a great deal of money on technology to address this challenge have an interest in more rapid adoption of new technology and data standards across the profession, while the smaller firms may have specialist knowledge to contribute but often lack the resources to develop their own technology stack from scratch. So, despite the natural inclination of firms to hold their cards close to their chests, it might in fact benefit them to create or join open source initiatives at the component and platform level. This could enable the profession to reach agreed data and interoperability standards more quickly, and speed up the testing and refinement of emerging tech tools, in the spirit of the old adage “given enough eyeballs, all bugs are shallow”. The added value and competitive advantage lie in how the components are assembled and used, and further up the stack, for example in how AI models are trained.
Another challenge for the profession is public perception (and misconceptions) of the role and scope of auditors. The profession has arguably suffered greater public opprobrium for failing to spot corporate malfeasance in a series of high-profile cases than the perpetrators themselves, who managed to hide their misdeeds from the auditors. There is probably a need to communicate the role of auditors better, in a way that takes into account the increasingly complex financialisation of firms’ operations coupled with greater reliance on technology. One idea that emerged in support of this was for firms to work together to pool anonymised historical audit data so that the profession as a whole can learn what good and bad audit looks like, train models on that data to help spot future anomalies, and perhaps identify patterns or correlations associated with bad practice. Perhaps this data will also show the unremarkable successes of the audit process that are not so newsworthy. This seems like another area where the profession as a whole and individual firms have more to gain from pooling non-secret, non-proprietary resources than from waiting for native ‘big data’ technology firms, which take a bigger-picture view, to enter the market and potentially eat all their lunches.
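To make the data-pooling idea slightly more concrete, here is a minimal sketch of how pooled, anonymised engagement-level features might feed an off-the-shelf anomaly detector. The feature names, the numbers and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not a description of any firm’s actual tooling.

```python
# Illustrative sketch only: an anomaly detector trained on pooled,
# anonymised audit data. All features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is an anonymised engagement-level feature vector, e.g.
# [late_adjustments, round_number_ratio, weekend_postings, revenue_volatility]
pooled_features = np.array([
    [2, 0.05, 1, 0.10],
    [0, 0.02, 0, 0.08],
    [15, 0.40, 9, 0.55],   # unusual profile
    [1, 0.04, 2, 0.12],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(pooled_features)

# -1 flags engagements worth a closer human look; the model suggests,
# the professional judges.
print(model.predict(pooled_features))   # 1 = typical, -1 = anomalous
```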
The second ICAEW event I took part in last week was about ethics in AI, and what the profession needs to consider in its journey towards more high-tech practice. The common assumption is that AI and related technologies will significantly impact the practice of accounting and the shape and nature of the firms, but huge questions remain about exactly how this will play out, and also whether the technology and data platforms inside firms are yet ready to support it.
My view of the optimal relationship between people and technology in the professions is one where firms develop native technology, data and algorithmic capabilities, assembled into their own internal platforms made up of individual micro-service components (e.g. accounting calculators, data analysers, online services, etc) that can automate much of the basic repeatable process work that is now done manually. But rather than automate the whole practice of accounting, these platforms will be used to free up professional accountants to focus on their real value proposition, which is to act a trusted business advisors, albeit supported by much more powerful tech and data. Not just artificial intelligence, but AI in service of augmented human intelligence.
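As a hypothetical illustration of what one such micro-service component might look like, the sketch below wraps a simple accounting calculation (straight-line depreciation) behind a small, self-describing service interface. The names and structure are assumptions for the sake of the example, not a reference design.

```python
# Hypothetical micro-service component: a small, self-contained accounting
# calculator that a firm's internal platform could expose alongside many others.
from dataclasses import dataclass

@dataclass
class DepreciationRequest:
    cost: float            # original asset cost
    residual_value: float  # estimated value at end of useful life
    useful_life_years: int

class StraightLineDepreciationService:
    """One reusable component among many (calculators, analysers, etc.)."""

    name = "straight-line-depreciation"
    version = "0.1"

    def calculate(self, req: DepreciationRequest) -> float:
        """Annual depreciation charge under the straight-line method."""
        if req.useful_life_years <= 0:
            raise ValueError("useful life must be positive")
        return (req.cost - req.residual_value) / req.useful_life_years

# The same component can be called by an automated workflow or a human advisor.
service = StraightLineDepreciationService()
print(service.calculate(DepreciationRequest(10_000, 1_000, 5)))  # 1800.0
```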
In the remarks I shared at the event, I reflected on areas of technology development that have created unanticipated social or economic harms, such as the role of Facebook in undermining democracy, the manipulation of social networks by bad actors spreading hate, and the tragicomic unregulated world of cryptocurrencies and ICO scams. What these examples have in common is a drive for monetisable scale at the expense of human trust. I shared some reflections from the Copenhagen Catalog process, where early pioneers and advocates of social technologies came together to analyse how and why these empowering technologies had been exploited by bad actors, and suggested that we should think carefully about how to prevent the same thing happening with AI.
To take a current example, many technologists believe the solution to scaling up trust relationships is to replace trust with code. The blockchain and related initiatives such as the Ethereum platform and Decentralised Autonomous Organisations (DAOs) seek to remove inter-personal trust entirely from transactional relationships, replacing it with an immutable ledger governed by perfectly codified rules. In my opinion, trustless systems of this kind are an open invitation to exploits and misbehaviour, and the opposite of what we need technology to do, which is to increase the surface area for human trust. Blockchain makes sense if you believe nobody, least of all governance bodies and institutions, can be trusted. But the history of the accounting profession shows how professionals can cultivate a mostly-trustful business environment by applying consistent standards, ethics and values. We should not throw that away unless we have something much better to replace it with.
It might seem paradoxical, but I think accounting firms need to become more like machines at the back end, managing a platform of services, automation, data analysis and so on, in order to be more human at the front end, working as networked, trusted professionals who uphold their own ethical standards and reputations. The current model of bureaucratic process management is neither organised enough nor flexible and agile enough to meet the challenges of the Twenty-First Century professional. Our own experience of working with internal practice areas and business support departments to help them transition to a service-oriented way of working has demonstrated real potential for efficiencies, and it has also created a lot of excitement and engagement around the idea of owning and continuously improving a service the firm relies on, rather than just complying with an agreed process. It puts people back in touch with their core value proposition and creates incentives to improve, connect and automate services to free up more time for innovation. I think the service platforms that leading firms are starting to assemble will become the embodiment of the core IP of the future firm.
So, in my view, the accounting profession needs to be working on two levels at the same time in terms of its digital transformation. At the technology level, it should be transforming manual work and compliance processes into micro-services and assembling these into platforms, whilst developing the new algorithms and data capabilities needed for the future. Key principles here are standardisation and interoperability, which are necessary features for a system to evolve over time, and also algorithmic transparency, so that software can be queried about its assumptions and its compliance with regulations and law. But at the same time, on the human level, we should be developing the skills and capabilities trusted professionals need to use their imagination in composing these building blocks into bespoke client services. That will require greater fluency in technology (not necessarily coding skills per se), hybrid skills that go beyond accounting, and the ability to earn and retain trust to guide people and businesses through the complex web of accounting and business practice, using technology to light the way.
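To illustrate what algorithmic transparency might mean in practice, here is a minimal sketch of a component that can be queried about the rules and assumptions it applies, so a reviewer or regulator can inspect them alongside the result. The benchmark, threshold and rule wording are invented for the example.

```python
# Minimal sketch of "algorithmic transparency": a component that can report
# the assumptions behind its output. The rule and threshold are hypothetical.
class MaterialityCheck:
    # Assumptions are held as data, not buried in logic, so they can be queried.
    assumptions = {
        "benchmark": "profit before tax",
        "threshold_pct": 5.0,
        "basis": "illustrative planning-materiality heuristic",
    }

    def flag(self, misstatement: float, profit_before_tax: float) -> bool:
        """Return True if the misstatement exceeds the stated threshold."""
        threshold = profit_before_tax * self.assumptions["threshold_pct"] / 100
        return misstatement > threshold

    def explain(self) -> dict:
        """Expose the assumptions used, for an audit trail or regulatory query."""
        return dict(self.assumptions)

check = MaterialityCheck()
print(check.flag(misstatement=60_000, profit_before_tax=1_000_000))  # True
print(check.explain())
```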
In a complex world where few of us have time to understand the intricate detail of data, accounting, law and perhaps even code, we will need trusted professionals to help us navigate complexity. So whilst automation and technology will probably reduce headcount in the profession overall, the prize for those who can use these new technology superpowers to create value for their clients will be ever greater. Algorithmic transparency and code standards can help us trust the underlying machinery, but I predict we will still place our trust in humans to make the final judgement.