Explainable AI (XAI) is meant to make decision paths visible. But trust in AI does not come from explanations alone. Only when quality, governance, and clear accountability come together can companies truly rely on AI.
How reliable are AI results, really?
The more business-critical tasks we hand over to artificial intelligence, the more pressing the question becomes: How much can companies trust the output of their AI systems? And how trustworthy can it be if an AI itself explains why it chose one decision over another? How do we get to so-called Explainable AI (XAI)?
XAI promises to open the black box of algorithms. With methods like LIME or SHAP, the influence of individual factors on a decision can be visualized. That sounds like transparency – but it is still only a statistical approximation. These methods do not deliver an absolute truth about the inner logic of a model. They generate second-order explanations: in effect, the model describes its own behavior.
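To make this concrete, here is a minimal sketch of such a post-hoc attribution with the SHAP library; the regression model and the public diabetes dataset are illustrative placeholders, not taken from any real project:

```python
# Minimal sketch of a post-hoc explanation with SHAP (illustrative model and data).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer approximates per-feature contributions (Shapley values) for each
# prediction -- a statistical attribution, not the ground truth of the model's logic.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Aggregate view: which features influence the model most on this sample.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {value:.3f}")
```

Such an attribution shows which inputs weighed most heavily, but it remains an approximation of the model's behavior, not proof of its correctness.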
For users, such explanations can be helpful, because they make decision paths more tangible and reduce the perceived distance to the system. For quality assurance, however, they are not enough. In the context of AI, the quality of the entire pipeline determines whether people can – or cannot – trust the output.
Trust does not arise from transparency alone, but from the interplay of several factors. Consistently correct results in real-world scenarios are indispensable (practical relevance). Independent audits, benchmarks, and external testing create objective points of reference (evidence). But the biggest lever for building trust in AI results is this: humans in the company must carry the ultimate responsibility – not the machine. Roles, processes, and accountability must be clearly defined.
The quality of an AI result is not only about how often it is correct. What really matters is how it was produced. The entire process – from the input data to its processing – determines the quality of the output:
Data quality: Biased, incomplete, or false data inevitably lead to questionable results.
Model quality: Validations, stress tests, and safeguards against manipulation are essential (see the sketch after this list).
Context: A model that successfully generates recommendations in e-commerce is not automatically suited for finance or healthcare.
Governance: Clear standards, audits, and defined responsibilities are the foundation for meeting regulatory requirements – and maintaining customer trust.
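As an illustration of what one such stress test might look like, here is a minimal robustness check: do predictions stay stable when the inputs are slightly perturbed? Model, dataset, and noise level are illustrative assumptions, not a complete validation suite.

```python
# Minimal robustness check (illustrative): share of predictions that flip
# when every feature is jittered by roughly 1% of its standard deviation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01 * X.std(axis=0), size=X.shape)
flip_rate = np.mean(model.predict(X) != model.predict(X + noise))
print(f"Predictions changed under perturbation: {flip_rate:.1%}")
```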
Explainable AI is an important tool for building trust in AI. It can make decision processes more understandable and give users the sense that they better grasp how the machine works. But CIOs should not be misled: explanations are useful, but they are no guarantee. Real trust only emerges when quality, governance, and accountability come together.
And even then, there’s a limit: Generative AI rarely delivers true “aha” moments – no sudden sparks of genius, no radically new ideas. Its strength lies in recognizing patterns, finding averages, and recombining what has already been said or written. For genuine innovation, the human spark is still needed. AI can support it – but never replace it.
We’re currently researching exactly this question. What’s your take?
Owner, Expert Consultant
Dr. Christopher H. Müller, founder and owner of Ergonomen Usability AG, earned his PhD from the Institute for Hygiene and Applied Physiology at ETH Zurich. With over 22 years of experience, he is an expert in usability and user experience. His strong sense of empathy allows him to quickly understand the needs and perspectives of his clients. With creativity and courage, he supports his clients in their digitalization projects and the optimization of products, services, and processes. He takes a practical approach, developing tailored solutions that can be effectively implemented. Dr. Christopher H. Müller is a columnist for Netzwoche. He also serves as a board member for the Zugang für alle Foundation, and is a member of two Swico advisory boards and co-president of the Regional Conference Nördlich Lägern.