A Survey of Interpretability Methods for Deep Neural Networks, with TensorFlow Implementations
Understanding neural networks: deep learning is often said to be weakly interpretable, yet research on understanding neural networks has never stopped. This article introduces several interpretability methods for deep neural networks, each accompanied by a link to code that can be run in Jupyter.

1. Activation Maximization

There are two methods that explain a deep neural network through activation maximization:

1.1 Activation Maximization (AM)

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.1%20Activation%20Maximization.ipynb

1.2 Performing AM in Code Space

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.3%20Performing%20AM%20in%20Code%20Space.ipynb

2. Layer-wise Relevance Propagation

This family contains five interpretability methods: Sensitivity Analysis, Simple Taylor Decomposition, Layer-wise Relevance Propagation, Deep Taylor Decomposition, and DeepLIFT. The overall approach is to first introduce the notion of a relevance score through sensitivity analysis, then explore basic relevance decomposition with Simple Taylor Decomposition, and from there build up the various layer-wise relevance propagation methods:

2.1 Sensitivity Analysis

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.1%20Sensitivity%20Analysis.ipynb

2.2 Simple Taylor Decomposition

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.2%20Simple%20Taylor%20Decomposition.ipynb

2.3 Layer-wise Relevance Propagation

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%282%29.ipynb

2.4 Deep Taylor Decomposition

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%282%29.ipynb

2.5 DeepLIFT

Code:
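To make the idea in 1.1 concrete: activation maximization searches for an input that maximally excites a chosen unit, typically by gradient ascent on the input with a regularizer. Below is a minimal NumPy sketch on a toy one-layer "network"; the weight matrix, unit index, and hyperparameters are all invented for illustration and are not taken from the article's notebooks (which use TensorFlow).

```python
import numpy as np

# Toy setup: a single linear layer with 4 inputs and 3 units.
# W is a random illustrative weight matrix, not from the notebooks.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

def activation(x, unit):
    # Pre-activation of the chosen unit.
    return W[:, unit] @ x

def activation_maximization(unit, steps=200, lr=0.1, l2=0.01):
    # Gradient ascent on the input x to maximize
    # activation(x, unit) - l2 * ||x||^2.
    x = np.zeros(4)
    for _ in range(steps):
        grad = W[:, unit] - 2 * l2 * x  # gradient of the objective w.r.t. x
        x = x + lr * grad
    return x

x_star = activation_maximization(unit=0)
```

For this linear toy case the optimum is simply proportional to the unit's weight vector, which is why AM on real networks produces inputs that reveal what a unit "looks for".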
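For 2.1, sensitivity analysis assigns each input a relevance score equal to the squared partial derivative of the output with respect to that input. A minimal NumPy sketch on an invented two-layer ReLU network (weights and shapes are illustrative assumptions, not the notebooks' model):

```python
import numpy as np

# Illustrative two-layer ReLU network: 4 inputs -> 5 hidden -> 1 output.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5,))

def forward(x):
    h = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return h @ W2

def sensitivity(x):
    # Relevance R_i = (df/dx_i)^2, with the gradient written out by hand.
    mask = (x @ W1 > 0).astype(float)  # ReLU derivative
    grad = W1 @ (mask * W2)            # chain rule: df/dx
    return grad ** 2

x = rng.normal(size=4)
R = sensitivity(x)
```

In the notebooks the same gradient would come from TensorFlow's automatic differentiation; here it is derived by hand so the sketch stays dependency-free.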
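For 2.2, Simple Taylor Decomposition expands the output around a root point; for a bias-free ReLU network the function is piecewise linear, so choosing the root at zero gives the exact decomposition R_i = x_i * df/dx_i with the relevances summing to the output. A NumPy sketch (the network is again an invented toy, not the notebooks' model):

```python
import numpy as np

# Bias-free ReLU network: f is piecewise linear in x, so a first-order
# Taylor expansion around the root point 0 is exact.
rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5,))

def forward(x):
    return np.maximum(0.0, x @ W1) @ W2

def taylor_relevance(x):
    # R_i = x_i * df/dx_i; these relevances sum exactly to f(x)
    # for this bias-free ReLU network (conservation).
    mask = (x @ W1 > 0).astype(float)
    grad = W1 @ (mask * W2)
    return grad * x

x = rng.normal(size=4)
R = taylor_relevance(x)
```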
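For 2.3, LRP propagates the output score backwards layer by layer: each neuron's relevance is redistributed to its inputs in proportion to their contributions z_ij = x_i * w_ij, with a small epsilon stabilizing the denominator. A minimal NumPy sketch of the epsilon-rule on an invented positive-weight toy network (the notebooks implement the same rules in TensorFlow):

```python
import numpy as np

# Toy network with positive weights and inputs, so ReLU is inactive
# and conservation of relevance is easy to check.
rng = np.random.default_rng(2)
W1 = rng.uniform(0, 1, size=(4, 5))
W2 = rng.uniform(0, 1, size=(5,))

def lrp(x, eps=1e-6):
    h = np.maximum(0.0, x @ W1)
    y = h @ W2
    # Top layer: redistribute the output y onto the hidden units
    # in proportion to their contributions z_j = h_j * w_j.
    z2 = h * W2
    R_h = z2 / (z2.sum() + eps) * y
    # First layer: redistribute each hidden relevance onto the inputs
    # in proportion to z_ij = x_i * w_ij.
    z1 = x[:, None] * W1
    R_x = (z1 / (z1.sum(axis=0) + eps)) @ R_h
    return R_x, y

x = np.abs(rng.normal(size=4))
R, y = lrp(x)
```

The defining property checked below is conservation: the input relevances sum (up to epsilon) to the network output being explained.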
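For 2.5, DeepLIFT differs from the gradient-based methods above by scoring each input against a reference input: contributions are differences-from-reference scaled by a multiplier, and they satisfy "summation to delta" (they add up to the change in output). A deliberately tiny sketch of the rescale rule on a single ReLU neuron; the weights, input, and reference are invented for illustration:

```python
import numpy as np

# One ReLU neuron with illustrative weights.
W = np.array([1.0, -2.0, 0.5])

def neuron(x):
    return max(0.0, float(x @ W))

def deeplift(x, ref):
    # Rescale rule: the ReLU's multiplier is (delta output)/(delta input),
    # applied to each input's linear contribution difference.
    dx = x - ref                      # differences-from-reference
    dz = float(x @ W - ref @ W)       # pre-activation difference
    dy = neuron(x) - neuron(ref)      # output difference
    m = dy / dz if dz != 0 else 0.0   # rescale multiplier through ReLU
    return dx * W * m                 # per-input contribution scores

x = np.array([1.0, 0.5, 2.0])
ref = np.zeros(3)
C = deeplift(x, ref)
```

Unlike sensitivity analysis, this assigns nonzero credit even where the local gradient is zero, because it compares against the reference rather than differentiating at x.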