To Build Trust in AI, Focus on These Three Questions

December 4, 2020

By Ritika Gunnar 

Before the term “COVID-19” entered our lexicon, I was speaking with the Chief Information Officer of a large financial institution about the future of artificial intelligence. “I have hundreds of AI experiments underway,” he lamented, “but only a handful of models in production.” 

As companies around the world grapple with the global pandemic and face a future riddled with challenges but filled with possibility, the promise of deploying AI at scale has never been greater. So why does this CIO’s comment still resonate?

In a word, trust. Trust, or the lack of it, continues to be the biggest obstacle to the widespread deployment of AI. As my team of over 1,200 highly technical experts collaborates with clients worldwide, we prioritize three key questions: Is my AI fair? Is my AI explainable? Is my AI protected?

We ask these questions throughout the AI lifecycle—from preparing data and building models to deploying those models and managing them as they evolve. Because many organizations’ data lives in multiple locations, a hybrid cloud architecture is critical to this end-to-end approach. 

Question 1: Is My AI Fair?

To ensure fair AI, we must make certain that the data the models are built upon is fair, and that the models themselves are designed to detect and mitigate bias as new data is introduced. 

Many companies are exploring how AI can augment their hiring decisions, for example. With more men than women in the workforce historically, it wouldn’t be surprising for the data to show that men were more likely than women to be hired in the past. A model trained naively on that history could learn to favor male candidates. So when preparing their historical hiring data, companies need to remove this bias or adjust the model’s parameters, then continually watch for “model drift,” where shifts in incoming data introduce new biases over time.
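To make this concrete, below is a minimal sketch using AI Fairness 360 (AIF360), IBM’s open-source fairness toolkit, to measure disparate impact in historical hiring data and reweight it before model training. The file name, column names and group encodings are hypothetical placeholders, and the data is assumed to be already numerically encoded.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical historical hiring data, numerically encoded:
# 'gender' is the protected attribute (1 = male, 0 = female),
# 'hired' is the binary outcome label.
df = pd.read_csv("hiring_history.csv")  # placeholder path

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact is the ratio of favorable outcomes for the
# unprivileged group to the privileged group; 1.0 is parity, and
# the common "four-fifths rule" flags values below 0.8.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print(f"Disparate impact before mitigation: {metric.disparate_impact():.2f}")

# Reweighing adjusts instance weights so the outcome is independent
# of the protected attribute before any model is trained on the data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

metric_rw = BinaryLabelDatasetMetric(
    dataset_rw, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print(f"Disparate impact after reweighing:  {metric_rw.disparate_impact():.2f}")
```

Re-running the same metric on a rolling window of production data is one straightforward way to watch for drift after deployment.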

The mandate to remove bias has become more urgent amid our intensifying global conversation around racial and economic justice. When we ensure that AI is fair, it can be an excellent tool for mitigating human bias. 

Question 2: Is My AI Explainable?

If we can’t explain why AI is making certain decisions, fears of a “black box” of mysterious algorithms can make it impossible to engender trust. 

In industries like financial services, healthcare and insurance, there’s enormous potential for using AI at scale. But realizing that potential means using highly sensitive data to make decisions that significantly impact people’s lives. It’s critical that customers understand how these decisions are being made, and why. In highly regulated industries, explainability is also important for auditing and regulatory compliance.

Explainability requires visibility into how an AI model was built, who owns it and who validated it. What metrics were captured along the way? Does the model follow company policies? Are those policies upheld as new data comes in?  

If a job applicant is not advanced to the next stage of a hiring process, the reason can’t be “our model told us so.” Using the right data management tools, we can log the reasons behind this decision and pinpoint what data attributes would need to change for a different outcome. To ensure that the best and most equitable hiring decisions are made, and to satisfy regulators, we need an auditable data trail that gives accurate answers.
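To illustrate what such an auditable trail might contain, here is a minimal, self-contained sketch (not any particular IBM product) that logs a decision record for a simple linear hiring model and computes how much a single attribute would have to change to flip the outcome. The feature names, decision threshold and synthetic training data are all hypothetical.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names for an illustrative hiring model.
FEATURES = ["years_experience", "num_certifications", "interview_score"]

@dataclass
class DecisionRecord:
    """One auditable record per model decision."""
    timestamp: str
    model_version: str
    inputs: dict
    decision: str
    score: float
    contributions: dict  # per-feature contribution to the logit

def explain_and_log(model, x, model_version="hiring-v1"):
    score = float(model.predict_proba(x.reshape(1, -1))[0, 1])
    # For a linear model, each feature's contribution to the logit is
    # simply coefficient * value, so the explanation is exact.
    contributions = {
        name: float(w * v) for name, w, v in zip(FEATURES, model.coef_[0], x)
    }
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs=dict(zip(FEATURES, map(float, x))),
        decision="advance" if score >= 0.5 else "do not advance",
        score=score,
        contributions=contributions,
    )
    # In production this would be written to an append-only audit store.
    print(json.dumps(asdict(record), indent=2))
    return record

def counterfactual_delta(model, x, feature_idx):
    """How much feature `feature_idx` must change to flip the decision,
    holding everything else fixed (valid for linear models only)."""
    logit = model.decision_function(x.reshape(1, -1))[0]
    w = model.coef_[0][feature_idx]
    return -logit / w if w != 0 else float("inf")

# Train on synthetic data purely so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 0.3, 1.2]) + rng.normal(scale=0.5, size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

explain_and_log(model, X[0])
print("Change in interview_score needed to flip the decision:",
      round(counterfactual_delta(model, X[0], feature_idx=2), 2))
```

A record like this answers both the applicant’s question (“which attributes drove the decision?”) and the regulator’s (“which model version made it, and when?”).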

Question 3: Is My AI Protected?

As social distancing has dramatically increased the volume of work and personal business taking place online, cyberattacks have skyrocketed. Defending AI systems from malicious attacks is more complicated now than ever—and crucial to ensuring trust.

Companies must develop their AI with security built in from the start, then remain vigilant about keeping those systems protected through implementation and into production.

More broadly, traditional security mechanisms are incapable of keeping pace with the complexity of the modern enterprise, with its multiple vendors and frequent security threat alerts. Once we ensure that AI systems are themselves protected, we can capitalize on the strengths of AI to identify and combat adversarial attacks across an organization. 
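As a concrete illustration of what “adversarial” means here, the sketch below mounts a fast-gradient-sign-style evasion attack against a simple logistic regression model, using only numpy and scikit-learn on synthetic data. Toolkits such as IBM’s open-source Adversarial Robustness Toolbox implement these attacks, and defenses against them, at production scale; this from-scratch version just shows how little perturbation it takes to degrade an unprotected model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a deployed classifier; purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def fgsm_perturb(model, X, y, eps=0.3):
    """Fast-gradient-sign-style evasion: nudge each input in the
    direction that most increases the model's loss on the true label."""
    logits = X @ model.coef_[0] + model.intercept_[0]
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Gradient of the log-loss with respect to the input is (p - y) * w.
    grad = (probs - y)[:, None] * model.coef_[0][None, :]
    return X + eps * np.sign(grad)

X_adv = fgsm_perturb(model, X, y)
print("Accuracy on clean inputs:    ", round(model.score(X, y), 3))
print("Accuracy on perturbed inputs:", round(model.score(X_adv, y), 3))
```

Testing deployed models against exactly this kind of perturbation, and retraining on the adversarial examples it produces, is one practical way to build that security in from the start.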

Moving AI From Experimentation to Transformation 

AI is not an end in and of itself, and it does not exist in opposition to the human beings who make up our workforces. Its potential is unleashed when it’s part of an application or infused into a workflow, and when it’s used to augment and amplify human decision-making. 

Ensuring that AI is fair, explainable and protected is not about checking boxes. It is the lens through which we need to view the entire AI lifecycle, to establish that critical bedrock of trust. When we get there, CIOs and CEOs worldwide will be able to move from AI experimentation to AI-driven transformation.

Ritika Gunnar
Vice President for IBM Data and AI Expert Labs and Learning