
Feature Propagation on Graph: A New Perspective to Graph Representation Learning

Published: 2018-07-16 16:49:38 · Source: SCI论文网 (scipaper.net)

SCI论文网 (www.scipaper.net):

       The editors have compiled the sixth paper in the series of ICML papers contributed by researchers from the AI department of Ant Financial. Below are excerpts of the English text from the paper; the full English version can be downloaded at the bottom of this page for study and research.

        Feature Propagation on Graph: A New Perspective to Graph Representation Learning

Biao Xiang, Ziqi Liu, Jun Zhou, Xiaolong Li
Ant Financial
{xiangbiao.xb, ziqiliu, jun.zhoujun, xl.li}@antfin.com

       Abstract
       We study feature propagation on graph, an inference process involved in graph representation learning tasks. It spreads features over the whole graph up to the t-th order, thereby expanding each node's features. The process has been successfully adopted in graph embedding and graph neural networks; however, few works have studied the convergence of feature propagation. Without convergence guarantees, it may lead to unexpected numerical overflows and task failures. In this paper, we first define the concept of feature propagation on graph formally, and then study the conditions under which it converges to equilibrium states. We further link feature propagation to several established approaches such as node2vec and structure2vec. At the end of this paper, we extend existing approaches from representing nodes to representing edges (edge2vec) and demonstrate its application to fraud transaction detection in a real-world scenario. Experiments show that it is quite competitive.


      1 Introduction
       In this paper, we study feature propagation on graph, which forms the building block of many graph representation learning tasks. Typically, graph representation learning tasks aim to learn a function f(X, G) that somehow utilizes the additional graph structure in space G, in contrast with traditional learning tasks f(X) that consider each sample independently. Graph representation approaches [Grover and Leskovec, 2016; Dai et al., 2016; Kipf and Welling, 2016; Hamilton et al., 2017] have proven successful on citation networks [Sen et al., 2008], biological networks [Zitnik and Leskovec, 2017], and transaction networks [Liu et al., 2017], all of which can be formulated as graph structures.
       One major process in graph representation learning tasks involves feature propagation over the graph up to the t-th order. These approaches define various propagation manners based on, for example, adjacency matrices [Belkin and Niyogi, 2002], t-order adjacency matrices [Cao et al., 2015], or expected co-occurrence matrices [Perozzi et al., 2014; Grover and Leskovec, 2016] obtained by conducting random walks. Recently, graph convolutional networks have shown promising results on various datasets. They rely either on graph Laplacians [Kipf and Welling, 2016] or on carefully designed operators, such as mean and max operators over the adjacency matrix [Hamilton et al., 2017].
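As a concrete illustration of the random-walk family mentioned above, an "expected co-occurrence" style matrix can be approximated in closed form as an average of t-step transition matrices, rather than by sampling walks. This is a minimal NumPy sketch; the toy graph and the helper `expected_cooccurrence` are hypothetical, not taken from the paper:

```python
import numpy as np

# Toy path graph 0 - 1 - 2 (hypothetical example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # 1-step random-walk transition matrix

def expected_cooccurrence(P, window):
    """Average of P^1 ... P^window: entry (i, j) is the average
    probability of being at node j within `window` steps of a
    random walk started at node i."""
    out = np.zeros_like(P)
    Pt = np.eye(P.shape[0])
    for _ in range(window):
        Pt = Pt @ P          # advance one more step: Pt = P^t
        out += Pt
    return out / window

M = expected_cooccurrence(P, window=3)
print(M)
```

Since each P^t is row-stochastic, the averaged matrix M is row-stochastic as well, which makes it usable as a propagation operator in its own right.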
       However, few graph representation learning approaches study the propagation process used in their inference procedures. For instance, GCN [Kipf and Welling, 2016] and structure2vec [Dai et al., 2016] implicitly involve this procedure in the form H^(t+1) = φ(A) H^(t) W, where H ∈ R^{N×K} denotes the learned embeddings of the N nodes in vector space R^K, t denotes the t-th iteration, and φ(·) defines an operator on the adjacency matrix A ∈ {0, 1}^{N×N} of the graph G = {V, E}. The propagation process is parameterized by W ∈ R^{K×K}. This iterative process essentially propagates and spreads each node i's signals to i's T-th-step neighborhood over the graph. Without careful design of the process under certain conditions, the propagation runs the risk of numeric issues.
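The numeric risk can be made concrete with a minimal NumPy sketch of the iteration H^(t+1) = φ(A) H^(t) W. Everything here (the ring graph, the choice of W, the helper `propagate`) is a hypothetical illustration, not the paper's setup; φ(A) is taken either as the raw adjacency matrix or as its row-normalized (random-walk) version:

```python
import numpy as np

# Toy setting (hypothetical): a 4-node ring graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix, rho(A) = 2

N, K = 4, 3
rng = np.random.default_rng(0)
H0 = rng.standard_normal((N, K))   # initial embeddings H^(0)
W = 0.8 * np.eye(K)                # propagation parameters, rho(W) = 0.8

def propagate(phi_A, H, W, T):
    """Iterate H^(t+1) = phi(A) @ H^(t) @ W for T steps."""
    for _ in range(T):
        H = phi_A @ H @ W
    return H

# phi(A) = A (unnormalized): growth factor rho(A) * rho(W) = 1.6 > 1,
# so feature norms blow up exponentially with T.
H_raw = propagate(A, H0, W, T=20)

# phi(A) = D^{-1} A (random-walk normalization): rho = 1, so the
# combined factor is 0.8 < 1 and the iteration stays bounded.
P = A / A.sum(axis=1, keepdims=True)
H_norm = propagate(P, H0, W, T=20)

print(np.linalg.norm(H_raw), np.linalg.norm(H_norm))
```

With the same initial features and the same W, only the choice of operator φ separates exponential blow-up from bounded behavior, which is exactly why convergence conditions on the propagation matter.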
       In this paper, we are interested in the conditions under which the propagation process converges to an equilibrium state [Langville and Meyer, 2006], which hopefully helps the understanding of the existing literature in this domain: (1) we first formulate a generic framework of feature propagation on graphs; (2) we connect existing classic approaches, such as node2vec [Grover and Leskovec, 2016], a random-walk-based graph embedding approach, and structure2vec [Dai et al., 2016], a graph-convolution-based approach, to our feature propagation framework; (3) we study the condition under which feature propagation over the graph converges to an equilibrium state as T → ∞ by using the theory of M-matrices [Plemmons, 1977], which is quite simple and easy to implement by gradient projection; (4) we further extend existing node representation approaches to edge representation, i.e., we propose "edge2vec" and show its application to fraud transaction detection in a real-world transaction network, which is essential in any financial system. More importantly, "edge2vec" can deal with multiple links between two nodes (transactions between two accounts over a time period), which is essentially different from traditional settings such as recommender systems (user i can have only one rating r_ij on item j, i.e., only one link between two nodes).
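The paper's own convergence analysis goes through M-matrix theory; as a rough, standard substitute (not the paper's derivation), one sufficient condition is easy to check numerically. Vectorizing the update gives vec(H^(t+1)) = (W^T ⊗ φ(A)) vec(H^(t)), and since the eigenvalues of a Kronecker product are the pairwise products of eigenvalues, this linear map is a contraction whenever ρ(φ(A)) · ρ(W) < 1. A hypothetical helper:

```python
import numpy as np

def converges(phi_A, W):
    """Sufficient condition sketch: the vectorized update
    vec(H^(t+1)) = (W^T kron phi(A)) vec(H^(t)) contracts to an
    equilibrium as T -> infinity when rho(phi(A)) * rho(W) < 1."""
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(phi_A) * rho(W) < 1.0

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)      # triangle graph, rho(A) = 2
P = A / A.sum(axis=1, keepdims=True)        # random-walk normalization, rho(P) = 1

W_small = 0.5 * np.eye(2)   # rho(W) = 0.5
W_big   = 2.0 * np.eye(2)   # rho(W) = 2.0

print(converges(P, W_small))  # contraction: 1 * 0.5 < 1
print(converges(A, W_big))    # not a contraction: 2 * 2 >= 1
```

This spectral-radius test is only a sketch of the general idea; the M-matrix machinery cited in the paper yields conditions that are checkable and enforceable (e.g., by gradient projection) during training.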
       This paper is organized as follows. In Section 2, we set up the preliminaries of this paper and propose a pair of general definitions for feature expansion and feature propagation in a unified learning framework. In Section 3, we discuss a typical feature propagation scheme and propose sufficient conditions for its convergence. In Section 4, we explore the connection between feature propagation and two types of graph representation approaches. We finally extend node embedding to edge embedding, and demonstrate its effectiveness by conducting experiments on fraud transaction detection, in Sections 5 and 6 respectively.


       Only a small portion of the paper is excerpted above; for the full text, use the download link below for research and study.
       Series of ICML papers contributed by researchers from the AI department of Ant Financial — seventh paper

     Full PDF download link for "Feature Propagation on Graph: A New Perspective to Graph Representation Learning": http://www.scipaper.net/uploadfile/2018/0716/20180716045729803.pdf


This article comes from SCI论文网. When reposting, please cite the source: http://www.scipaper.net/jisuanjilunwen/164.html