eXplainable Artificial Intelligence (XAI) models
What seems impossible today may well be reality tomorrow. Digital innovation is moving fast – and at Deltares, we are keeping up. Not just by adopting new technologies, but by truly understanding them. One of the most promising directions is building explainable AI models – systems that can make complex predictions and shed light on their own reasoning. Machine Learning specialist and hydrologist Hans Korving takes us into the world of explainable AI (XAI) and explains why strong domain knowledge is essential.
From XAI to true understanding
XAI – eXplainable Artificial Intelligence – helps us open up the algorithmic ‘black box’. What began as a way to explain models after prediction is evolving into a broader approach: understanding why a model makes a prediction, when it can be trusted, and how it relates to the real world.
“With XAI, we can visualise models, check predictions and compare them with what we already know,” Hans explains. “But explainability is only the starting point. Ultimately, we want models to reason in ways that reflect reality.”
At Deltares, we’re already going beyond “classical” XAI: combining explainability with causal reasoning, reliability and accountability. Since the term XAI has become widely recognised, we use it as a familiar label for this broader field.
Domain knowledge as the foundation
What sets our AI work apart is the link between data and physical knowledge. Hans stresses that models cannot rely on patterns alone:
“The physical principles of the system must always be part of the data-driven model. Domain knowledge isn’t a nice-to-have – it’s essential.”
In recent years, a new insight has gained ground: physics matters, but so does understanding causes and context. “We increasingly look at how a model reasons: which variables truly explain what’s happening, which provide context, and which might mislead,” Hans says. “That forces us to model more consciously – and leads to more robust outcomes.”
This isn’t just theory. During a session with Rijkswaterstaat and the Water Boards on the KRW-Verkenner, a complex ecological AI model was made accessible to non-AI experts thanks to XAI. The result: clearer insights and stronger decisions.
Trust and co-creation
XAI helps build trust in AI models. By making models transparent, clients and colleagues can judge whether a model does what it is supposed to do. “Sometimes a model behaves unexpectedly. Explainable AI makes that visible – and allows us to correct it together with domain experts,” Hans notes. “That enables true co-design – co-creation at its best.”
What’s new is that this transparency is now measurable. Deltares uses techniques such as conformal prediction to show how confident a model is in its forecast, without making assumptions about the underlying data distribution. “This way, we know not only what the model predicts, but also how reliable that prediction is – and why that confidence changes.”
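To make the idea concrete, here is a minimal sketch of split conformal prediction on a hypothetical discharge-forecasting setup. This is an illustration of the technique only, not Deltares’ operational code; the data, model choice and variable names are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical data: forcing features (e.g. rainfall, upstream levels) and observed discharge.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=1000)

# Split into a proper training set and a separate calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# For 90% coverage, take the appropriate empirical quantile of the scores.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new case: point prediction plus/minus q.
x_new = rng.normal(size=(1, 4))
pred = model.predict(x_new)[0]
print(f"prediction: {pred:.2f}, 90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

The appeal of the method is exactly what the text describes: under the exchangeability assumption, roughly 90% of such intervals will contain the true value, regardless of what the data distribution looks like.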
From prediction to explanation
The next step is models that don’t just predict, but explain. Techniques like SHAP analysis reveal which factors influence changes in model confidence – for example wind, water levels or wave heights. In this way, uncertainty becomes a diagnostic tool, showing when system behaviour shifts.
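As a rough sketch of this kind of diagnosis (illustrative only, with hypothetical data and feature names rather than Deltares’ actual workflow), SHAP values from the shap library can be used to rank which drivers contribute most to a modelled quantity – which could just as well be the width of an uncertainty band as the forecast itself:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical forcing data; the feature names echo the examples in the text.
rng = np.random.default_rng(7)
X = pd.DataFrame({
    "wind_speed":  rng.gamma(2.0, 4.0, size=500),
    "water_level": rng.normal(0.5, 0.4, size=500),
    "wave_height": rng.gamma(1.5, 0.8, size=500),
})
# Toy target: in practice this could be the model's prediction-interval width,
# so that SHAP explains changes in confidence rather than the forecast itself.
y = 0.6 * X["wind_speed"] + 1.2 * X["water_level"] + rng.normal(0, 0.5, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature: a simple global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:12s} {value:.3f}")
```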
“If the model suddenly becomes less certain, we want to know why,” Hans explains. “It may be that system dynamics are changing, or a sensor is out of sync. These signals help us respond faster and use models in a more scientific way.”
Scientific relevance
The scientific value lies in modelling causal relationships. Rather than describing correlations, we map cause-and-effect – for example to explore realistic scenarios in climate change or water management. This makes models not only useful, but testable and scientifically meaningful.
There is also growing interest in uncertainty estimation: models that not only predict, but indicate how reliable those predictions are. This is a major theme in academia, and Deltares plays a leading role, with researchers and students actively contributing.
Reliability and accountability
Our new generation of models should be designed not only for performance, but also for responsibility: explainable, verifiable and defensible. Deltares is developing methods to assess whether a variable in an AI model is causal, contextual or merely a shortcut to improve accuracy. Such shortcuts may reduce error margins, but they don’t build insight or trust, nor do they support extrapolation.
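A toy example of the shortcut problem, with purely synthetic data: a proxy variable can look highly predictive during training, yet lose all skill once the regime shifts and the proxy decouples from the true driver.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical illustration of a "shortcut": a proxy that tracks the true driver
# during training but is not itself causal for the target.
rng = np.random.default_rng(0)

cause = rng.normal(size=2000)                         # true (unobserved) driver
proxy = cause + rng.normal(scale=0.1, size=2000)      # shortcut: correlated proxy
y = 2.0 * cause + rng.normal(scale=0.5, size=2000)

# The model only sees the proxy and looks excellent in-sample.
model = LinearRegression().fit(proxy.reshape(-1, 1), y)
print("training R^2:", round(model.score(proxy.reshape(-1, 1), y), 2))

# Under a regime shift (sensor drift, changed system behaviour) the proxy
# decouples from the driver and the apparent skill evaporates.
cause_new = rng.normal(size=2000)
proxy_new = rng.normal(size=2000)                     # no longer tracks the cause
y_new = 2.0 * cause_new + rng.normal(scale=0.5, size=2000)
print("after shift R^2:", round(model.score(proxy_new.reshape(-1, 1), y_new), 2))
```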
“A good model isn’t the one with the lowest error – it’s the one that can explain its logic,” says Hans. “That makes it possible to justify decisions – in science, policy and even law.”
The future of ML at Deltares
Although many applications are still in prototype phase, real-world use is growing rapidly. For example, Deltares’ ML model for discharge forecasting at Lobith now runs as a shadow system alongside Rijkswaterstaat’s current model – a key step towards operational deployment, a practice known as ML Ops.
The role of the ML expert is shifting: from model builder to model thinker – someone who understands how algorithms reason, where their assumptions lie and how these relate to physical reality.
A strong example is a new project in which Deltares and partners are redesigning the KRW-Verkenner models. Traditionally, these models predict ecological water quality from correlations. The new approach focuses on causal relationships – on causes and mechanisms rather than data patterns. This requires a different modelling mindset, one that reflects how ecosystems actually function, resonates with experts and strengthens the link between data and policy.
Deltares as expert partner
With XAI and interpretable AI, Deltares shows we are not just following digital trends – we are helping to shape them. Our strength lies in combining technological innovation, domain knowledge and responsibility. As Hans puts it:
“ML models have almost limitless applications. But only when we understand why a model predicts something can we truly trust it. That takes more than data – it takes knowledge, collaboration and common sense.”