arXiv:2104.09476 [q-fin.PR]

Interpretability in deep learning for finance: a case study for the Heston model

Damiano Brigo, Xiaoshan Huang, Andrea Pallavicini, Haitz Saez de Ocariz Borde

Published 2021-04-19 (Version 1)

Deep learning is a powerful tool whose applications in quantitative finance are growing every day. Yet, artificial neural networks behave as black boxes, and this hinders validation and accountability processes. Being able to interpret the inner functioning and the input-output relationship of these networks has become key for the acceptance of such tools. In this paper we focus on the calibration process of a stochastic volatility model, a subject recently tackled by deep learning algorithms. We analyze the Heston model in particular, since its properties are well known, making it an ideal benchmark case. We investigate the capability of local strategies and of global strategies from cooperative game theory to explain the trained neural networks, and we find that global strategies such as Shapley values can be effectively used in practice. Our analysis also highlights that Shapley values may help choose the network architecture, as we find that fully-connected neural networks perform better than convolutional neural networks in predicting and interpreting the Heston prices-to-parameters relationship.
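To make the Shapley-value approach described in the abstract concrete, the sketch below shows one way to explain a calibration-style network with a model-agnostic Shapley estimator. This is not the authors' code: the network, the parameter ranges, and the `heston_price_proxy` target are illustrative assumptions standing in for a genuine Heston pricer, and the `shap` package's `KernelExplainer` is used as a generic Shapley-value estimator.

```python
# Hedged sketch: explaining a network that maps Heston parameters to a price-like
# output using Shapley values. The pricing function here is a stand-in, not a real
# Heston pricer; only the overall workflow mirrors the paper's setting.
import numpy as np
from sklearn.neural_network import MLPRegressor
import shap

rng = np.random.default_rng(0)
feature_names = ["kappa", "theta", "xi", "rho", "v0"]  # usual Heston parameters

def heston_price_proxy(params):
    # Placeholder target: a smooth nonlinear function of the parameters,
    # standing in for the output of an actual Heston pricing routine.
    kappa, theta, xi, rho, v0 = params.T
    return np.sqrt(theta + v0) + 0.1 * xi * rho - 0.05 * np.exp(-kappa)

# Synthetic training set of parameter vectors drawn from plausible ranges
X = rng.uniform([0.5, 0.01, 0.1, -0.9, 0.01],
                [5.0, 0.20, 1.0, -0.1, 0.20], size=(2000, 5))
y = heston_price_proxy(X)

# Fully-connected network, the architecture favoured by the paper's findings
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Global explanation via Shapley values (model-agnostic KernelExplainer)
background = shap.sample(X, 100)                 # background distribution
explainer = shap.KernelExplainer(net.predict, background)
shap_values = explainer.shap_values(X[:50], nsamples=200)

# Mean absolute Shapley value per parameter gives a global importance ranking
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```

In this toy setup the ranking of mean absolute Shapley values plays the role of the global interpretability diagnostic discussed in the abstract; with a real Heston pricer and the paper's trained networks, the same procedure would attribute the predicted prices to the individual model parameters.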

Related articles:
arXiv:2404.02343 [q-fin.PR] (Published 2024-04-02)
Improved model-free bounds for multi-asset options using option-implied information and deep learning
arXiv:2008.08576 [q-fin.PR] (Published 2020-08-11)
Series expansions and direct inversion for the Heston model
arXiv:1001.3003 [q-fin.PR] (Published 2010-01-18, updated 2010-11-12)
On refined volatility smile expansion in the Heston model