Interpretability in deep learning for finance: A case study for the Heston model

29/01/2026 TranSpread

Deep learning has become a powerful tool in quantitative finance, with applications ranging from option pricing to model calibration. However, despite its accuracy and speed, one major concern remains: neural networks often behave like “black boxes”, making it difficult to understand how they reach their conclusions. This opacity undermines validation, accountability, and risk management in financial decision-making.

In a new study published in Risk Sciences, a team of researchers from Italy and the UK investigates how deep learning models can be made interpretable in a financial setting. Their goal was to understand whether interpretability tools can genuinely explain what a neural network has learned, rather than just producing visually appealing but potentially misleading explanations.

The researchers focused on the calibration of the Heston model, one of the most widely used stochastic volatility models in option pricing, whose mathematical and financial properties are well understood. This makes it an ideal benchmark for testing whether interpretability methods provide explanations that align with established financial intuition.
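
For readers less familiar with the model: the Heston model describes the joint evolution of an asset price S_t and its instantaneous variance v_t through two coupled stochastic differential equations. In standard notation (under a pricing measure with rate r):

```latex
% Heston stochastic volatility dynamics
\begin{align}
  dS_t &= r\,S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \\
  dv_t &= \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{v}, \qquad
  d\langle W^{S}, W^{v}\rangle_t = \rho\,dt.
\end{align}
```

Calibration then means recovering the parameter vector (κ, θ, ξ, ρ, v0), i.e. the mean-reversion speed, long-run variance, volatility of volatility, correlation, and initial variance, from observed option quotes.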

“We trained neural networks to learn the relationship between volatility smiles and the underlying parameters of the Heston model, using synthetic data generated from the model itself,” shares lead author Damiano Brigo, a professor of mathematical finance at Imperial College London. “We then applied a range of interpretability techniques to explain how the networks mapped inputs to outputs.”
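
The paper's own code is not reproduced here, but the setup can be illustrated with a minimal sketch. In the following PyTorch snippet, the grid dimensions and layer sizes are arbitrary choices, and random tensors stand in for the Heston-generated volatility surfaces and parameters; it trains a fully connected network to map a strike-by-maturity implied-volatility grid to the five Heston parameters:

```python
# Minimal sketch (not the authors' code): learn the map from implied-volatility
# grids to Heston parameters (kappa, theta, xi, rho, v0) on synthetic data.
import torch
import torch.nn as nn

N_STRIKES, N_MATURITIES, N_PARAMS = 8, 8, 5  # illustrative grid size

class CalibrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STRIKES * N_MATURITIES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),
        )

    def forward(self, x):
        # Flatten the (strike, maturity) grid into one feature vector.
        return self.net(x.flatten(start_dim=1))

# Random stand-ins: in the study, each input surface is generated by pricing
# under the Heston model, and the target is the parameter set that produced it.
smiles = torch.rand(4096, N_STRIKES, N_MATURITIES)
params = torch.rand(4096, N_PARAMS)

model = CalibrationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):  # full-batch training, purely illustrative
    opt.zero_grad()
    loss = loss_fn(model(smiles), params)
    loss.backward()
    opt.step()
```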

These techniques included local methods—such as LIME, DeepLIFT, and Layer-wise Relevance Propagation—as well as global methods based on Shapley values, originally developed in cooperative game theory.

The results showed a clear distinction between local and global interpretability approaches. “Local methods, which explain individual predictions by approximating the model locally, often produced unstable or financially unintuitive explanations,” says Brigo. “In contrast, global methods based on Shapley values consistently highlighted input features—such as option maturities and strikes—in ways that aligned with the known behavior of the Heston model.”
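
To make the global approach concrete, here is a sketch of how Shapley attributions could be computed and aggregated with the open-source shap package, continuing the previous sketch; this illustrates the general technique, not necessarily the authors' toolchain:

```python
# Global Shapley attributions for the calibration network above (illustrative).
import numpy as np
import shap
import torch

def predict(x_np):
    """numpy-in / numpy-out wrapper around the trained network, as shap expects."""
    with torch.no_grad():
        x = torch.tensor(x_np, dtype=torch.float32).view(-1, N_STRIKES, N_MATURITIES)
        return model(x).numpy()

flat = smiles.flatten(start_dim=1).numpy()
# Model-agnostic Shapley estimates; a k-means summary keeps the cost manageable.
explainer = shap.KernelExplainer(predict, shap.kmeans(flat[:200], 10))
# Classic shap API: one attribution array per output (i.e. per Heston parameter);
# newer shap versions may instead return a single array with an extra dimension.
shap_values = explainer.shap_values(flat[200:210])

# Averaging |attributions| over samples gives a global importance map over the
# (strike, maturity) grid for, say, the first parameter: the kind of map that
# can be checked against the known behavior of the Heston model.
importance = np.abs(shap_values[0]).mean(axis=0).reshape(N_STRIKES, N_MATURITIES)
print(importance)
```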

The team’s analysis also revealed that Shapley values can be used as a practical diagnostic tool for model design. By comparing different neural network architectures, the researchers found that fully connected neural networks outperformed convolutional neural networks for this calibration task, in both accuracy and interpretability, which runs contrary to what is commonly observed in image recognition.

“Shapley values not only help explain model predictions, but also help us choose better neural network architectures that reflect the true financial structure of the problem,” explains co-author Xiaoshan Huang, a quantitative analyst at Barclays.

By demonstrating that global interpretability methods can meaningfully reduce the black-box nature of deep learning in finance, the study provides a pathway toward more transparent, trustworthy, and robust machine-learning tools for financial modeling.

###

References

Paper title: Interpretability in deep learning for finance: A case study for the Heston model
Journal: Risk Sciences
DOI: 10.1016/j.risk.2025.100030
Original source URL: https://doi.org/10.1016/j.risk.2025.100030

Attachments:
  • CNN architecture summary

Regions: North America, United States, Europe, Italy, United Kingdom
Keywords: Applied science, Computing, Business, Financial services
