Discussions about artificial intelligence often revolve around technology, performance, and potential. But an equally central question remains: When—and under what conditions—do we trust machines, and what does it take to build that trust? In our joint webinar with Netzwoche, we explored exactly this question—using solid research, real-world scenarios, and a focus on usability as the key to trust.
The study by Samuel Huber (Forventis) examined the use of robots and drones in safety-critical military environments. The findings were unambiguous: Trust emerges when communication is clear, understandable, and framed against human benchmarks.
Typical probability statements—“85% detection probability”—aren’t enough. These abstract values don’t help users decide whether to rely on the system or take action themselves.
Huber therefore proposes a human-referenced approach: Instead of reporting a number, the robot communicates its reliability relative to a human observer—for example:
“In this situation, I am more reliable than a human observer.”
In simulations with 20 participants, the difference was striking: Human-referenced feedback produced the highest trust, percentage values were less convincing, and systems without status messages performed worst.
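The human-referenced approach can be sketched as a small message-composition function. This is a minimal illustration, not code from the study; the function name, the human baseline, and the comparison margin are all illustrative assumptions:

```python
def human_referenced_feedback(system_reliability: float,
                              human_baseline: float,
                              margin: float = 0.05) -> str:
    """Translate a raw reliability score into a human-referenced message.

    Both inputs are probabilities in [0, 1]. The margin (an assumed value)
    keeps the system from claiming superiority on negligible differences.
    """
    if system_reliability >= human_baseline + margin:
        return "In this situation, I am more reliable than a human observer."
    if system_reliability <= human_baseline - margin:
        return "In this situation, a human observer is more reliable than I am."
    return "In this situation, my reliability is comparable to a human observer's."

# Example: an 85% detection probability vs. an assumed 75% human baseline
print(human_referenced_feedback(0.85, 0.75))
```

The point of the sketch is that the user never sees the raw percentage: the system does the comparison and reports only the actionable conclusion.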
Feedback must be self-explanatory rather than abstract.
Humans must be able to intuitively assess: trust or verify.
Under stress or time pressure, systems must support fast, safe decisions.
This makes one thing clear: Usability is not just “ease of use”—it is a critical factor for safety and trust, especially when lives are at stake.
In the second part, Mi Xue Tan (Die Ergonomen Usability) presented a study on how different explanation models influence trust in medical AI chatbots. Four variants were tested:
a certified bot
a confidence-displaying bot
a reasoning bot
a sources bot
At first glance, the results seemed modest: statistically, there was no major difference in overall trustworthiness; most models scored between 70% and 80%.
But a closer look revealed more: The reasoning bot, which explains how it arrived at its answer, inspired the most trust—closely followed by the certified bot. And the more complex or critical the topic (like health), the stronger the users’ need for understandable explanations. People do not just want an answer—they want to know how and why it was generated, and how confident the system is.
Explanations must be understandable—everyday language, not technical jargon.
Transparency is not “nice to have”; it is essential for responsibility and trust.
Users must be able to evaluate reliability: Is this advice solid? Should I act immediately or double-check?
For us at Die Ergonomen Usability, the conclusion is clear: Whether we’re dealing with robotics, drones, or chatbots—clarity and interpretability are the common denominator.
Usability and explainable AI share the same purpose: making technology meaningful and accessible for humans. Technical complexity belongs in the background. What matters is what the user understands.
The clearer a system communicates its status, uncertainty, recommendations, and limits, the safer and more confident people feel, and the more readily genuine trust can emerge.
The use of AI and autonomous systems is growing rapidly—in security, healthcare, infrastructure, and customer service. But potential and technical performance alone are not enough. Anyone building or deploying such systems must ensure that users understand what the machine is doing and why.
This requires:
Communication on a human level
Transparent feedback instead of abstract metrics
Visible boundaries, uncertainties, and quality indicators
Space for human judgment and verification
Only then do AI systems not just function—they become trustworthy.
Trust between humans and machines does not arise from statistics or brilliant algorithms. It comes from clear, understandable communication.
Usability is not an add-on—it is the foundation of every trustworthy AI application.
Anyone investing seriously in AI must first invest in human understanding. Only then can technology truly connect.
Owner, Expert Consultant
Dr. Christopher H. Müller, founder and owner of Ergonomen Usability AG, earned his PhD from the Institute for Hygiene and Applied Physiology at ETH Zurich. With over 22 years of experience, he is an expert in usability and user experience. His strong sense of empathy allows him to quickly understand the needs and perspectives of his clients. With creativity and courage, he supports his clients in their digitalization projects and the optimization of products, services, and processes. He takes a practical approach, developing tailored solutions that can be effectively implemented. Dr. Christopher H. Müller is a columnist for Netzwoche. He also serves as a board member for the Zugang für alle Foundation, and is a member of two Swico advisory boards and co-president of the Regional Conference Nördlich Lägern.