In 2012, Google launched Project Aristotle – an in-depth analysis of the conditions needed for a team to be successful. The results, published in 2015, have led team leaders to re-evaluate the culture and environment they create. Organisations are discussing how to leverage the findings to create high-performing teams. How can leaders enable a change in team dynamics? How can such a shift be measured? How can they reap the benefits of such high performance? The research summarises five attributes that create the conditions for a team to hyper-perform:
- Psychological Safety: team members feel safe to take risks and be vulnerable in front of one another.
- Dependability: team members get things done on time and meet a high bar for excellence.
- Structure & Clarity: team members have clear roles, plans and goals.
- Meaning: work is personally important to team members.
- Impact: team members think their work matters and creates change.
With psychological safety being the most important attribute highlighted by the research, teams should not be surprised to find that the introduction of automation, AI and machine learning into processes and practices can disrupt the flow, and cause a drop in creativity, quality and accountability.
Bringing AI to the team
The current fear running amok around AI and the future of the job market stems from the black-box nature of most “AI” in the public domain, combined with the unpredictability of future employment and hype-driven journalism. The lack of transparency in most AI interactions drives a level of discomfort:
- Why did the AI do that?
- Why not something else?
- When does the AI succeed?
- When does the AI fail?
- When can I trust the AI?
- How do I correct an error?
Within teams, this level of discomfort and uncertainty destroys trust: decisions rest on opaque processes, teams cannot explain successes or failures in black-box systems and, most importantly, they cannot innovate or re-create results from scratch if needed. If we look at the future of work, there will be areas of the organisation where AI and machine learning will perceive, learn, decide and act on their own. For team psychological safety to survive the introduction of AI and machine learning, all members must be able to understand, appropriately trust and effectively manage the inputs, outputs and resulting decisions taken by humans and robots alike.
How to restore trust?
Over the last month, we have begun to uncover how little we understand about some of the internet technologies that have pervaded our lives. This is raising questions about how internal systems use behaviour data and algorithms to decide what we should see or respond to. Adding decision-making and predictive algorithms to the mix risks a bigger disconnect between employees and AI opportunities. There is some interesting early-stage research beginning to address these issues, and it is certainly worth keeping an eye on.
Coming out of the military (DARPA specifically) is the evolution of explainable AI (xAI), which tackles the challenges of creating an explainable model and an accompanying explanation interface. Teams would have access to human-readable information and could therefore interrogate the decisions and predictions made, to ensure they can say:
- I understand why
- I understand why not
- I know when the AI fails
- I know when the AI succeeds
- I know when to trust the AI
- I know why the AI has erred
Although still conceptual, xAI is a crucial step towards ensuring psychological safety for teams working in co-operation and collaboration with AI. By making explicit the actions taken to arrive at decisions and predictions, it restores trust in them.
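To make the idea of an "explanation interface" concrete, here is a minimal sketch in Python. The model, feature names and weights are all invented for illustration; real xAI work targets far more complex models, but even this toy shows how per-feature contributions let a team member ask "why did the model do that?":

```python
# Toy explanation interface for a linear scoring model.
# Feature names and weights are hypothetical, chosen only to illustrate
# how a prediction can be decomposed into human-readable contributions.

WEIGHTS = {"tenure_years": 0.4, "training_hours": 0.3, "error_rate": -0.8}
BIAS = 0.1

def predict_with_explanation(features):
    """Return a score plus a per-feature contribution breakdown."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"tenure_years": 3, "training_hours": 10, "error_rate": 2})
print(f"score = {score:.1f}")
# List contributions, largest influence first, so the "why" is visible.
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contrib:+.1f}")
```

A linear model is the simplest case, where explanation falls out of the arithmetic; the open research question xAI addresses is producing equally legible breakdowns for opaque models.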
Another area of promising research is in fundamental principles such as Trusted Autonomy – an emerging discipline for designing human and machine interactions. The primary focus of Trusted Autonomy research is the “design of algorithms and methodologies to simulate, optimise, data mine and implement systems to make their interaction with other systems and humans happens naturally including computational models to analyse and understand this interaction.” These concepts will become critical to org design at scale. The focus on interaction between multiple systems and humans mirrors much of the early automation and AI work being undertaken in organisations to join up and gain efficiencies out of legacy systems.
How can teams begin to build their own future?
Teams need some new competencies and skills, and a new mindset, when it comes to team-working. What practical steps can you take to begin de-mystifying AI?
- Raise levels of psychological safety now: read the original research, understand and apply the lessons to your teams now. Higher levels of psychological safety within teams also bring resilience. This will allow teams to respond rationally to discussions about the future of work, rather than engaging from a place of fear.
- Basic Data Literacy: What does the data tell us? What is the difference between correlation and causality (and why does this matter)? How do you judge a source? You need to know how to answer these questions if you are to interpret and guide algorithms responsibly. At its essence, a basic understanding of data and how to interrogate it is the heart of critical thinking.
- Access to examples and success stories: We cannot be what we cannot see, and we cannot achieve what we cannot imagine. Provide access to sandboxes, highlight interesting freely available AI tools and toys to help team members understand how far there is to go in the AI journey. Teams with early exposure to AI can more easily calibrate their expectations.
- New types of leadership: Leaders will need to engender trust in hybrid teams by training people on how AI and machine learning work, and providing team members with the information to anticipate the machine’s responsibilities. Situational leadership takes on a new meaning!
- Story-telling: Leaders need to inspire a shared creative vision for people to trust their machine co-workers. One of the most effective ways to inspire remains a classic, well-constructed narrative.
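The correlation-versus-causality point in the data literacy step above can be demonstrated in a few lines of code. This sketch uses entirely synthetic data: two series that do not influence each other at all, but are both driven by a hidden common cause, end up strongly correlated:

```python
# Illustration of correlation without causation: ice-cream sales and
# sunburn rates (synthetic, invented data) are both driven by a hidden
# "season" variable, so they correlate even though neither causes the other.
import random
import statistics

random.seed(42)
season = [random.gauss(0, 1) for _ in range(1000)]   # hidden common cause
ice_cream = [s + random.gauss(0, 0.3) for s in season]
sunburn = [s + random.gauss(0, 0.3) for s in season]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) ** 0.5
                  * sum((y - my) ** 2 for y in ys) ** 0.5)

r = pearson(ice_cream, sunburn)
print(f"correlation = {r:.2f}")  # strong, despite no causal link
```

Acting on this correlation (say, banning ice cream to reduce sunburn) would fail; the same trap awaits teams who let algorithms act on correlations without interrogating the underlying data.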
With this in mind, here are five links to further your thinking on automation and AI in the workplace:
- AI plus Human Intelligence equals Intelligence Automation – a more positive vision for the future of organisations.
- How babies learn, and why robots can’t compete.
- Building resilience for an automated future – how can employees protect themselves against the growing avalanche of automation?
- What happens when you connect unthinking computer programmes to a culture of obedience and compliance?
- From the PostShift archives: Building, managing and leading a hybrid workforce.