arXiv:2005.00130 [cs.LG]
Hide-and-Seek: A Template for Explainable AI
Thanos Tagaris, Andreas Stafylopatis
Published 2020-04-30 (Version 1)
Lack of transparency has been the Achilles' heel of Neural Networks and a barrier to their wider adoption in industry. Despite significant interest, this shortcoming has not been adequately addressed. This study proposes a novel framework called Hide-and-Seek (HnS) for training Interpretable Neural Networks and establishes a theoretical foundation for exploring and comparing similar ideas. Extensive experimentation indicates that a high degree of interpretability can be imparted to Neural Networks without sacrificing their predictive power.
Comments: 24 pages, 14 figures. Submitted to a special issue on Explainable AI in Elsevier's "Artificial Intelligence"
Related articles:
arXiv:2208.11404 [cs.LG] (Published 2022-08-24)
Augmented cross-selling through explainable AI -- a case from energy retailing
arXiv:2410.02970 [cs.LG] (Published 2024-10-03)
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
arXiv:2308.08407 [cs.LG] (Published 2023-08-16)
Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities