arXiv:2210.06650 [cs.LG]

Interpreting Neural Policies with Disentangled Tree Representations

Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus

Published 2022-10-13 (Version 1)

Compact neural networks used in policy learning and closed-loop end-to-end control learn representations from data that encapsulate agent dynamics and, potentially, the factors of variation of the agent-environment interaction. A formal and quantitative understanding of these explanatory factors in neural representations is difficult to achieve because neural activities correspond to emergent behaviors in complex, intertwined ways. In this paper, we design a new algorithm that programmatically extracts tree representations from compact neural policies, in the form of a set of logic programs grounded by the world state. To assess how well a network uncovers the dynamics of the task and its factors of variation, we introduce interpretability metrics that measure the disentanglement of learned neural dynamics from the perspectives of decision concentration, mutual information, and modularity. Moreover, our method quantifies how accurate the extracted decision paths (explanations) are and computes cross-neuron logic conflicts. We demonstrate the effectiveness of our approach with several types of compact network architectures on a series of end-to-end learned control tasks.
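To make the mutual-information and modularity perspectives concrete, here is a minimal sketch (not the authors' implementation) of how one might score how disentangled neuron activations are with respect to known factors of variation: estimate the mutual information (MI) between each neuron and each factor, then check how concentrated each neuron's MI is on a single factor. The toy "policy" activations and the two-factor setup below are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate between two 1-D continuous arrays."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0); where pxy > 0, the marginals are > 0 too
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def modularity_score(mi_matrix):
    """Average, over neurons, of the fraction of a neuron's total MI that is
    carried by its single most-informative factor; 1.0 means every neuron
    encodes exactly one factor (perfectly modular)."""
    totals = np.maximum(mi_matrix.sum(axis=1), 1e-12)
    return float(np.mean(mi_matrix.max(axis=1) / totals))

rng = np.random.default_rng(0)
factors = rng.uniform(-1.0, 1.0, size=(1000, 2))  # two ground-truth factors

# Toy "compact policy" activations: neuron 0 tracks factor 0, neuron 1 tracks
# factor 1, and neuron 2 mixes both (i.e., it is entangled).
acts = np.stack([np.tanh(3 * factors[:, 0]),
                 np.tanh(3 * factors[:, 1]),
                 np.tanh(factors[:, 0] + factors[:, 1])], axis=1)

# MI matrix: rows are neurons, columns are factors.
mi = np.array([[mutual_information(acts[:, i], factors[:, j])
                for j in range(factors.shape[1])]
               for i in range(acts.shape[1])])
print(round(modularity_score(mi), 3))
```

In this setup, the first two rows of the MI matrix each peak on their own factor while the third spreads across both, which pulls the modularity score below 1; the paper's metrics additionally cover decision concentration and the accuracy of extracted decision paths, which this sketch does not attempt.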
