Sci论文网: ultimate complexity returns to ultimate simplicity



Learning to Explain: An Information-Theoretic Perspective on Model Interpretation(附PDF版原文下载)

Published: 2018-07-15 23:22:49 | Source: SCI论文网
SCI论文(www.scipaper.net):

Editor's note: This is the first paper in a series of ICML papers contributed by researchers from the AI department of Ant Financial. Below is an excerpt of the paper's English text; the full English PDF can be downloaded at the bottom of this page for research and study.

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation


We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
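As a toy illustration of the objective in the abstract (not the paper's learned explainer network or its variational approximation), the sketch below scores candidate features by a plug-in estimate of the mutual information between each feature and the response, then keeps the highest-scoring one. The data and all names are invented for illustration:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (xv, yv), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts substituted in
        mi += (c / n) * math.log(c * n / (px[xv] * py[yv]))
    return mi

random.seed(0)
n = 5000
x1 = [random.randint(0, 1) for _ in range(n)]
x2 = [random.randint(0, 1) for _ in range(n)]    # independent of the label
y = [a ^ (random.random() < 0.1) for a in x1]    # label: noisy copy of x1

scores = [mutual_information(x1, y), mutual_information(x2, y)]
best = scores.index(max(scores))                 # selects x1, the informative feature
```

The paper's method replaces this exhaustive scoring with a learned selector and a variational lower bound on the mutual information, which is what makes the approach scale to per-instance subsets.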



1. Introduction
Interpretability is an extremely important criterion when a machine learning model is applied in areas such as medicine, financial markets, and criminal justice (e.g., see the discussion paper by Lipton (Lipton, 2016), as well as references therein). Many complex models, such as random forests, kernel methods, and deep neural networks, have been developed and employed to optimize prediction accuracy, which can compromise their ease of interpretation. In this paper, we focus on instancewise feature selection as a specific approach for model interpretation. Given a machine learning model, instancewise feature selection asks for the importance score of each feature on the prediction of a given instance, and the relative importance of each feature is allowed to vary across instances. Thus, the importance scores can act as an explanation for the specific instance, indicating which features are the key for the model to make its prediction on that instance.

Affiliations: 1 University of California, Berkeley; 2 work done partially during an internship at Ant Financial; 3 Georgia Institute of Technology; 4 Ant Financial. Correspondence to: Jianbo Chen. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).
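A minimal sketch of why importance must be allowed to vary across instances, using a simple occlusion heuristic rather than the paper's learned method; the "switch" model and all names below are invented for illustration:

```python
def switch_model(x):
    """Toy model: feature 0 decides which other feature drives the output."""
    return x[1] if x[0] > 0 else x[2]

def occlusion_scores(f, x, baseline=0.0):
    """Score each feature by how much replacing it with a baseline value
    changes the model output. A simple occlusion heuristic, not the
    learned explainer proposed in the paper."""
    ref = f(x)
    scores = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline      # occlude one feature at a time
        scores.append(abs(f(masked) - ref))
    return scores

s_a = occlusion_scores(switch_model, [1.0, 5.0, -3.0])   # feature 1 is active here
s_b = occlusion_scores(switch_model, [-1.0, 5.0, -3.0])  # feature 2 is active here
```

On the first instance, feature 1 (and the switch, feature 0) receives a nonzero score while feature 2 scores zero; on the second instance, only feature 2 matters. Any fixed, global ranking of features would misdescribe at least one of the two instances, which is exactly the motivation for instancewise selection.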



Only a small portion of the paper is excerpted above; please download the full text below for research and study.


Paper 1 in the series of ICML papers contributed by researchers from the AI department of Ant Financial:

《Learning to Explain: An Information-Theoretic Perspective on Model Interpretation》
Download: http://www.scipaper.net/uploadfile/2018/0715/20180715112949666.pdf


This article is from SCI论文网; please cite the source when reposting: http://www.scipaper.net/jisuanjilunwen/158.html
