Usability clarifies the how; AI raises questions about the why. Only when systems make their decisions understandable does trust emerge. Good UX combines clarity with transparency.
Usability makes systems usable. AI makes them unpredictable – at least it feels that way. Yet both share the same goal: meaningful interactions between humans and machines. And still, we experience them very differently.
While usability is tangible – clickable, testable, optimizable – AI often remains diffuse. You see the result, but not the path that led there. This is exactly where it’s worth taking a closer look. Usability asks everyday questions:
How do I navigate a system?
How do I find what I’m looking for?
How do I complete my task without fear of doing something wrong?
It’s about clarity, understandable feedback, and controllability. In short: digital hospitality.
A well-designed interface guides, explains itself, and takes users by the hand without patronizing them. Working with such a system gives the feeling: I’m in control. That creates security – and with it, the foundation for efficiency and satisfaction.
Trust in AI begins elsewhere. It asks:
Why does the system suggest this job, this route, or this diagnosis?
Which data was used?
How reliable is the result – and what does it mean for me specifically?
This is also UX.
Because both usability and trustworthiness pursue the same goal: enabling people to orient themselves, think along, and ultimately say: “That made sense.”
The difference lies in where it happens.
Usability plays out on the surface.
AI decides in the background.
And that’s where it gets challenging.
Today, interacting with AI systems usually yields a result. Sometimes impressive, sometimes confusing, but often simply inexplicable.
The surface may be clean, the button works – but what happens behind it?
Was it a recommendation or already a decision?
Did the system check options, compare alternatives, weigh uncertainties?
Or did it just guess?
Many AI applications also communicate in codes few understand. Error messages read like lectures, probabilities are presented as facts. And perhaps most importantly: there is no back button.
That doesn’t feel like collaboration.
It feels like loss of control.
That’s why the interplay is crucial. Good usability shows the way; trustworthy AI explains why it recommends that path. Only together do they create a digital experience that not only works but also supports the user.
No system is all-knowing. Trust emerges where interfaces reveal uncertainties or trade-offs. Where they make transparent: “This is a probability,” or “Here are alternatives.”
This shows integrity – and prevents blind faith.
Good UX for AI considers the human: their curiosity, skepticism, and responsibility. It explains rather than hides. It offers rather than overwhelms. And it marks the difference between a machine that can do something and one we can trust to do it.
What this means from an ergonomic perspective
From an ergonomic standpoint, it’s often simple but consistently applied principles that help:
Clearly show what a system is meant for – and what it is not.
Name probabilities instead of hiding them.
Label suggestions as suggestions, not as truth.
Use language people understand – not just models.
Always provide the option to question, correct, or opt out.
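The principles above can be made concrete in how an AI output is modeled and presented. As a minimal, purely illustrative sketch (the `Suggestion` class and all its field names are assumptions, not taken from any real system), an AI recommendation could carry its confidence, its rationale, and an override hook alongside the recommendation itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Suggestion:
    """An AI output presented as a suggestion, never as a verdict."""
    text: str                   # the recommendation itself
    confidence: float           # probability named, not hidden (0.0 to 1.0)
    rationale: str              # plain-language explanation of the "why"
    alternatives: List[str] = field(default_factory=list)  # options the system weighed
    user_override: Optional[str] = None  # the user can always question or correct

    def render(self) -> str:
        """Present the output the way the principles above demand."""
        if self.user_override is not None:
            # The user's correction takes precedence over the machine's guess.
            return f"Your choice: {self.user_override}"
        lines = [
            f"Suggestion (not a decision): {self.text}",
            f"Estimated confidence: {self.confidence:.0%}",
            f"Why: {self.rationale}",
        ]
        if self.alternatives:
            lines.append("Alternatives considered: " + ", ".join(self.alternatives))
        lines.append("You can accept, correct, or dismiss this suggestion.")
        return "\n".join(lines)


# Hypothetical usage: a route recommendation that names its uncertainty.
s = Suggestion(
    text="Route via the A1 motorway",
    confidence=0.78,
    rationale="Shorter travel time under current traffic data",
    alternatives=["Route via country roads"],
)
print(s.render())
```

The point is not this particular schema but the contract it encodes: the probability is visible, the suggestion is labeled as a suggestion, and the opt-out is built in rather than bolted on.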
Usability doesn’t end at the surface. It starts there – and continues inside.
Or as we ergonomists say: If the machine thinks, it should at least say what it was thinking.
(Wild Card in Netzwoche No. 12/2025)
Owner, Expert Consultant
Dr. Christopher H. Müller, founder and owner of Ergonomen Usability AG, earned his PhD from the Institute for Hygiene and Applied Physiology at ETH Zurich. With over 22 years of experience, he is an expert in usability and user experience. His strong sense of empathy allows him to quickly understand the needs and perspectives of his clients. With creativity and courage, he supports his clients in their digitalization projects and the optimization of products, services, and processes. He takes a practical approach, developing tailored solutions that can be effectively implemented. Dr. Christopher H. Müller is a columnist for Netzwoche. He also serves as a board member for the Zugang für alle Foundation, and is a member of two Swico advisory boards and co-president of the Regional Conference Nördlich Lägern.