Article

Exploiting Weak Ties in Incomplete Network Datasets Using Simplified Graph Convolutional Neural Networks

1 Department of Computer Science, University of Central Florida (UCF), Orlando, FL 32816, USA
2 Department of Statistics and Data Science, University of Central Florida (UCF), Orlando, FL 32816, USA
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2020, 2(2), 125-146; https://doi.org/10.3390/make2020008
Submission received: 20 April 2020 / Revised: 19 May 2020 / Accepted: 20 May 2020 / Published: 21 May 2020
(This article belongs to the Section Network)

Abstract

This paper explores the value of weak-ties in classifying academic literature using graph convolutional neural networks. Our experiments look at the results of treating weak-ties as if they were strong-ties to determine if that assumption improves performance. This is done by applying the methodological framework of the Simplified Graph Convolutional Neural Network (SGC) to two academic publication datasets: Cora and Citeseer. The performance of SGC is compared to the original Graph Convolutional Network (GCN) framework. We also examine how node removal affects prediction accuracy by selecting nodes according to different centrality measures. These experiments provide insight into which nodes are most important for the performance of SGC. When removal is based on a more localized selection of nodes, augmenting the network with both strong-ties and weak-ties provides a benefit, indicating that SGC successfully leverages local information of network nodes.

1. Introduction

In addition to providing entertainment and social engagement, social networks also serve the important function of rapidly disseminating scientific information to the research community. Social media platforms such as ResearchGate and Academia.edu help authors rapidly find related work and supplement standard library searches. Twitter not only serves as an important purveyor of standard news [1] but also disseminates specialty news in fields such as neuroradiology [2]. Venerable academic societies such as the Royal Society (@royalsociety) now have official Twitter accounts. Shuai et al. [3] discuss the role of Twitter mentions within the scientific community and the citations that create a topological arrangement between scientific publications. Given that scientific articles possess the potential to change the landscape of technology, it is important to understand the information transference properties of academic networks: can techniques originally developed for social networks yield insights about scientific networks as well?
Pagerank [4] revolutionized the ability to search through the myriad of web pages by examining the network structure for relevance. This concept has been applied to citation networks in the academic literature, which conceptually have many overlaps with the interlinking of websites; Ding et al. [5] apply the Pagerank algorithm to citations, and the same first author extends this work to investigate endorsement in the network as well [6]. Endorsement as a process not only allows readers to navigate essential information, find relevant overlaps, or assign appropriate credit, but also amplifies readership, a dynamic seen frequently in online social processes. These citation links are direct links and are also referred to as strong-ties [7].
Indirect or weak-ties have also been identified as important in social networks, most notably in Granovetter’s seminal work: “The strength of weak ties” [8]. These indirect edges can be created through edges that span different communities (clusters of nodes), acting as ‘bridges’. They can also be the result of ‘triangulation’, where ‘friends-of-friends’ produce a link due to the common connection they share. Figure 1 shows an example of a weak-tie connection between nodes A and C, with a dotted edge representing connectivity due to a shared connection with node B; A and C are connected as friends-of-friends, i.e., through triangulation, creating a weak-tie. The work of [9] applies this concept to predicting edge production in a social network of professional profiles and shows that the explicit modeling of this triangulation dynamic leads to improved performance.
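As a concrete illustration of this triangulation idea (a hypothetical three-node example mirroring Figure 1, not code from the original study), the weak-tie between A and C can be read off the square of the adjacency matrix: a positive off-diagonal entry of A squared counts the walks of length two between two nodes.

```python
import numpy as np

# Hypothetical 3-node network: A-B and B-C are direct (strong) ties.
# Nodes are indexed 0=A, 1=B, 2=C.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Entry (i, j) of A @ A counts walks of length two between i and j,
# so a non-zero off-diagonal entry reveals a 'friend-of-a-friend' pair.
walks2 = A @ A
weak_tie_AC = walks2[0, 2] > 0   # True: A and C share the common neighbour B
print(weak_tie_AC)
```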
For our study on academic connections, we used two datasets, Cora [10] and Citeseer [11], which are discussed in more detail in Section 3. In addition to network structure, these datasets contain class labels, relating to the publication venues, which can be used to test machine learning prediction algorithms. In datasets that exhibit homophily, utilizing the features of nodes within a topological arrangement (an adjacency matrix) produces improved classification performance over an instance-only framework. Our investigation focuses on whether the explicit addition of weak ties will assist the inference process, since they provide assistance in other network-based processes. For instance, Roux et al. [12] investigate whether expertise within groups can cross the ‘boundaries’ of communities which are cohesive due to strong-tie connections. The work of [13] considers how differentiation of the edge types can improve the accuracy of social recommendations.
Determining the effect of interactions between nodes can be a time-consuming process, which requires computational resources to analyze large networks. With the goal of understanding how node connections influence labels, various methodologies, such as [14], have been proposed where nodes iteratively propagate information throughout the network until convergence is achieved. A notable example is DeepWalk [15], which uses local information from truncated random walks to learn latent node representations. Relational neighbor classifiers such as the Social Context Relational Neighbor (SCRN) have been shown to achieve good performance at inferring the labels of citation networks [14]. Graph Convolutional Networks (GCNs) [16] extended the methodology of Convolutional Neural Networks (CNNs) from images to graphs. GCNs, like CNNs, are built from multiple neural network layers, which makes them less amenable to interpretation [17,18,19].
This paper uses the approach of the Simplified Graph Convolutional Neural Network (SGC) [20] to investigate the importance of strong vs. weak ties. The methodological framework, discussed in more detail in Section 4, provides a reduction in the complexity of the model and the computation time required. It produces feature projections in an intuitive manner and generates the non-linearity over the different classes. Even though it is a deep learning approach that accounts for multiple layers, the simplification allows a single parameter matrix to be produced that can more easily be interpreted if desired. An SGC implementation can be found in the DGL library [21,22].
The SGC uses the features of the nodes and the connectivity in the adjacency matrix to infer the class labels. In this paper we augment this adjacency matrix so that weak-ties are included as well. This produces a matrix in which the strong-ties and the weak-ties are treated equally at the start of the inference procedure. Results using this augmented adjacency matrix are compared to the label produced using the original adjacency matrix. Our experiments also consider the possibility of missing nodes. Obtaining complete datasets of networks is a challenge for a wide range of reasons; for instance, online platforms limit the API calls from developer accounts to reduce the website loads. It is then a crucial question as to whether the results of the investigation are sensitive to missing nodes [23]. Therefore, our experiments remove a range of pre-selected percentages of the network to compare the results. Nodes are ranked for removal based on three different centrality algorithms: betweenness, closeness, and VoteRank [24]. These algorithms sort the nodes in descending order and remove the top percentage chosen (e.g., 20%) so that the inference is performed without the influence of these nodes. We seek to answer the question: which gaps in the data are most likely to affect the SGC? The results help provide another piece of evidence towards the utility of weak-ties in sociological processes.
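A minimal sketch of the centrality-ranked removal step described above, written with NetworkX and illustrative (hypothetical) function and argument names rather than the study's own code:

```python
import networkx as nx

def nodes_to_remove(G, measure, fraction):
    """Rank nodes by a centrality measure and return the top `fraction` to drop."""
    k = int(fraction * G.number_of_nodes())
    if measure == "betweenness":
        scores = nx.betweenness_centrality(G)
        ranked = sorted(scores, key=scores.get, reverse=True)
    elif measure == "closeness":
        scores = nx.closeness_centrality(G)
        ranked = sorted(scores, key=scores.get, reverse=True)
    elif measure == "voterank":
        # VoteRank already returns nodes in descending order of influence;
        # it may return fewer nodes than the whole graph if votes are exhausted.
        ranked = nx.voterank(G)
    else:
        raise ValueError(measure)
    return ranked[:k]

# Example on a small random graph (a stand-in for Cora/Citeseer).
G = nx.erdos_renyi_graph(100, 0.05, seed=1)
drop = nodes_to_remove(G, "voterank", 0.20)
G_reduced = G.copy()
G_reduced.remove_nodes_from(drop)
print(G.number_of_nodes(), "->", G_reduced.number_of_nodes())
```

Label inference is then performed on the reduced graph, so that the selected central nodes contribute neither edges nor features.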
An added incentive for exploring the use of the SGC is that it addresses an issue with the application of GCNs (graph convolutional neural networks) [16], where increasing the number of layers beyond 2–3 can produce a degradation in the results. The number of layers employed by the GCN corresponds to the Kth-order neighborhood used in the SGC, and the results of both methodologies are compared in Section 5. Although the GCN [16] displays this degradation as the number of layers L increases (corresponding to the Kth-order neighborhood), the SGC does not display it with an increase in K. The authors of [20] describe how the non-linearity may introduce unnecessary complexity for applications such as social networks.
The next section presents key work in the development of graph CNNs and other scientific explorations attesting to the power of weak-ties. Section 3 describes the datasets used in this study, and Section 4 outlines the methodology of the SGC. Then, in Section 5, we present results on the effects of augmenting the adjacency matrix with weak ties and removing nodes ranked on a selection of centrality measures. Within the results section is a subsection which compares the performance of the SGC with that of the GCN. Lastly, the conclusion is presented in Section 6, which summarizes the outcomes, outlines future work, and discusses the application to other datasets.

2. Related Work

This section presents an overview of related work on graph convolutional neural networks and weak ties.

2.1. Graph Convolutional Neural Networks

The work of [25] introduces how graph-based methods can be used with convolutional neural networks (CNNs). These graph-based methods are spectrally defined, and their use within a spatial application utilizes recursive polynomials on the graph Laplacian. This enables spectrally motivated approaches to handle heterogeneous graphs. Convolutional neural networks [26] are employed because they provide an efficient model architecture for extracting meaningful statistical patterns from high-dimensional datasets that can also be large (big data applications [27]). The ability of CNNs to learn local stationary structures and compose them into hierarchical patterns has driven many advancements in ML tasks [28]. A key contribution of [25] is that the extension of the model to graphs is founded upon localized graph filters instead of the CNN’s localized convolution filter (or kernel). It presents a spectral graph formulation and shows how filters can be defined with respect to individual nodes in the graph within a certain number of ‘hops’. An introduction to the field can be found in [29], where the reader can find the motivation for extending the fundamental analysis operations of signals from regular grids (lattice structures) to more general graphs. The authors of [30] build upon the work on signals on graphs and show that a shift-invariant convolution filter can be formulated as a polynomial of adjacency matrices. These filters are defined as polynomials of functions of the graph adjacency matrix, which provides an intuitive spatial formulation of the graph convolutional neural network. The adjacency matrix is also utilized in the approach described in Section 4. The filter uses powers of the adjacency matrix as the terms of the polynomial; effectively, the exponents in the polynomial represent the number of edges (‘hops’) from any node.
The approach used here follows the work of [31], which relies on the adjacency matrix for filtering. It is worth emphasizing that this graph-based approach, for which the authors provide code, shows performance similar to CNNs on the CIFAR-10 and ImageNet datasets. This generalization could help in understanding how signals produced in datasets can be handled whether they are image, document, sound, or brain-region based. Allowing for heterogeneous graph data is a flexibility that can produce more interesting applications. The model employed takes the single-hop (defined edges) and the ‘2-hop’ edges to be filtered upon, which allows an extended radius of feature influence to be introduced for classification. This concept of using the adjacency matrix powers is explored in Section 5, where the exponent is taken over a range of values representing the number of ‘hops’. The authors of [16] also use this concept of the hop number, which corresponds to the number of hidden layers in the neural network and is the basis for the comparison approach employed in this work (described in Section 4).
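The following NumPy sketch (an illustration of the polynomial-filter idea discussed above, not code from [30] or [31]) makes the hop interpretation explicit: the exponent of the adjacency matrix in each polynomial term sets the radius over which a node signal is mixed.

```python
import numpy as np

def polynomial_graph_filter(A, x, theta):
    """Apply h(A) x = (theta_0 I + theta_1 A + ... + theta_K A^K) x.

    A is the (n x n) adjacency matrix, x an (n,) signal on the nodes, and
    theta the filter coefficients; the exponent of A is the 'hop' distance
    over which node values are combined.
    """
    n = A.shape[0]
    out = np.zeros(n)
    A_power = np.eye(n)          # A^0
    for coeff in theta:
        out += coeff * (A_power @ x)
        A_power = A_power @ A    # next hop: A^(k+1)
    return out

# Toy example: a 4-node path graph and a one-hot signal on node 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(polynomial_graph_filter(A, x, theta=[0.5, 0.3, 0.2]))
```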

2.2. Weak-Ties

After the publication of Granovetter’s seminal work on the importance of weak ties [8] there have been multiple follow-up studies exploring how social links affect a member’s ability to interact with others in the network and how different types of edges serve disparate roles in transmission and collaboration. Weak ties are important in many types of organizational structures; for instance, Patacchini et al. [32] studied the role of weak ties in criminal collusion. Since innovation requires teamwork and collaboration, organizations need to empower their workers to leverage their weak ties as described in [33]. The work of [34] looks at the effect of weak-ties on the job search process. Extracting information from the network of interactions is not a straightforward process and the work of [35] examines how this can be performed. Given recent evidence for increased social contention [36,37], the research of [38] considers the important question of how weak-ties can facilitate the increase of emotions such as anger on social media. Although weak-ties can be found to play a role in negative situations, there are other contexts where they play an important positive role, such as psychological well being where casual friendships add to happiness (strong ties plus weak ties) [39].

3. Data

Two datasets, Cora [10] and Citeseer [11], were used in this study. The Cora dataset is a citation network where the nodes refer to unique authors and the edges represent a weighted value for the mean citation relationship (from scientific publications). These scientific publications are classified and labeled into seven categories. The data were divided into a separate training and test set in order to provide consistent benchmarks between methodologies. The Citeseer dataset is another network dataset based upon citations where the nodes are also authors and edge values represent the mean citation relationship; it includes six different class labels. The SGC methodology described in Section 4 was used to infer the correct labels for a subset of the nodes in these datasets using both the original connectivity matrix and an augmented one. Table 1 provides an overview of some of the basic information of the datasets.
Figure 2 shows the degree distribution for the Cora dataset and how the distribution changes when different percentages of the network were removed. Those percentages of the network were removed according to different network centrality measures: betweenness in Figure 2a, closeness in Figure 2b, and VoteRank in Figure 2c. Figure 3 shows the same operation but using the Citeseer dataset. It is interesting to note how the Cora and Citeseer plots differ between their equivalent subfigures. The plots for betweenness and closeness change much more than those for VoteRank, which provides evidence that VoteRank is more robust against choosing nodes with many edges as a measure of centrality in different datasets.

4. Methodology

For a graph $G = (V, A)$, $V$ is the set of $N$ nodes, $V = \{v_1, v_2, \ldots, v_N\}$, and the adjacency matrix is a symmetric matrix $A \in \mathbb{R}^{N \times N}$. Each element of $A$, $a_{i,j}$, holds the value of the weighted edge between two nodes $v_i$ and $v_j$ (an absence of an edge is represented by $a_{ij} = 0$). The degree matrix $D = \mathrm{diag}(d_1, d_2, \ldots, d_N)$ is a diagonal matrix with zero off-diagonal entries, where each diagonal entry is the row sum of $A$, $d_i = \sum_j A_{ij}$. There is a feature vector $x_i$ for each node $i$, so that the set of features in the network of nodes is an $N \times d$ matrix, $X \in \mathbb{R}^{N \times d}$, where $d$ is the dimensionality of the feature vector. Each node is assigned a class label from the set of classes $C$; for each node we wish to utilize both $A$ and $X$ to infer $y_i \in \{0,1\}^{C}$, ideally a one-hot encoded vector, which can be supplied as data to assist the parameter estimation.
The normalized adjacency matrix with self-loops included is defined as
$$S = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}},$$
where $\tilde{A} = A + I$ and $\tilde{D} = \mathrm{diag}(\tilde{A})$ is the corresponding degree matrix. The classifier employed by the SGC is:
$$\hat{Y} = \mathrm{softmax}\left(S^{K} X \Theta\right).$$
Here, the softmax can be replaced with the sigmoid $\sigma$ used in binary logistic regression when $C = 2$; for the softmax over multiple categories we have $\mathrm{softmax}(x)_c = \exp(x_c) / \sum_{c'=1}^{C} \exp(x_{c'})$. The component $\Theta$ is the matrix of parameter values for the projections of the feature vectors, so that it is of dimensionality $d \times C$, $\Theta \in \mathbb{R}^{d \times C}$. Intuitively, the parameter matrix holds a single vector of parameters of length equal to that of the feature vector, and as many of these vectors as there are class labels. This linearization derives from the general deep learning construction of sequential affine transforms applied in successive layers,
$$\hat{Y} = \mathrm{softmax}\left(S \cdots S \, S \, X \, \Theta^{(1)} \Theta^{(2)} \cdots \Theta^{(K)}\right).$$
It can then be seen how the chosen value of K represents the number of layers in the network employed. More details can be found in [20], where the methodological derivation is elaborated upon. A key requirement in this framework is the setting of the parameter value k, which can be considered a tuning parameter that varies the number of propagation steps taken. This relates to the matrix powers of an adjacency matrix, which produce in each entry the number of ‘walks’ between nodes [40,41].
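A minimal NumPy sketch of this classifier on a toy network (an illustration of the formulas above, not the authors' implementation; in practice $\Theta$ is fitted by training rather than set at random):

```python
import numpy as np

def sgc_probabilities(A, X, Theta, K):
    """Compute softmax(S^K X Theta), with S the normalized adjacency with self-loops."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                        # add self-loops
    d_tilde = A_tilde.sum(axis=1)                  # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt          # S = D~^-1/2 A~ D~^-1/2
    H = X
    for _ in range(K):                             # K propagation steps: S^K X
        H = S @ H
    Z = H @ Theta                                  # linear projection with d x C parameters
    Z = Z - Z.max(axis=1, keepdims=True)           # numerical stability
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)  # row-wise softmax over classes

# Toy example: 4 nodes, 3 features, 2 classes, random (untrained) Theta.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
Theta = rng.normal(size=(3, 2))
print(sgc_probabilities(A, X, Theta, K=2).round(3))
```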
From the adjacency matrix, the matrix including the weak-ties produced through ‘triangulation’ ([9]) can be found via the walks of length two, $A^2$. The original adjacency matrix is said to contain the strong-ties, and there is considerable sociological research into the value of each type of connectivity [8]. In this work we explore the use of an adjacency matrix $A'$ which contains both the strong-ties and the weak-ties via
$$A' = A^{2} + A.$$
Figure 4 demonstrates this visually in its subfigures: Figure 4a shows a hypothetical network with 4 nodes connected in a chain, and Figure 4b shows how those nodes are connected when $A'$ is produced by including both the strong-ties and the weak-ties.
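A short sketch of this augmentation step (illustrative only; whether the resulting entries are kept as walk counts or binarized is a design choice made here as an assumption):

```python
import numpy as np

def augment_with_weak_ties(A, binarize=True):
    """Return A' = A^2 + A, i.e. the strong ties plus the triangulated weak ties.

    With binarize=True the entries are reduced to 0/1 so that weak ties are
    treated exactly like strong ties (an assumption of this sketch); with
    binarize=False the raw walk counts are kept.
    """
    A_aug = A @ A + A
    np.fill_diagonal(A_aug, 0)          # drop the self-walks produced by A^2
    if binarize:
        A_aug = (A_aug > 0).astype(float)
    return A_aug

# The 4-node chain of Figure 4a gains the two 'friend-of-a-friend' edges of Figure 4b.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(augment_with_weak_ties(A))
```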
Figure 5 shows a demonstration of the SGC methodology in its ability to accurately predict class labels on the Cora and Citeseer datasets. To explore how robust the methodology is, different percentages of the network were removed; nodes were selected for removal based on their rank calculated from different centrality measures: betweenness, closeness, and VoteRank. Each network measure expresses a different aspect of a node’s position in the network, and the resulting changes in prediction accuracy therefore assist in understanding empirically which node placements contribute the most to correct label prediction. The VoteRank algorithm considers local node influences more than betweenness or closeness do. Figure 5a shows results obtained from running the model on the Cora dataset, and the Citeseer results are shown in Figure 5b.

5. Results

This section explores how the class label prediction accuracy is affected by different removal strategies when the connectivity matrix contains both the strong-tie and the weak-tie links. The results also show how the parameter k can affect the accuracy of the prediction of class labels. The SGC methodology described in Section 4 is applied to the Cora and Citeseer datasets when different percentages of the nodes are removed. The nodes are removed according to their rank in terms of network centrality: betweenness, closeness, and VoteRank. For example, if 20% are removed using closeness as a measure, the nodes are ordered according to their closeness from largest to smallest, and the top 20% of the nodes are removed. The purpose of this manipulation is to explore how robust the methodology is to the removal of central nodes, whose influence on class labels can extend beyond their immediate vicinity.
As shown in Figure 5, we explore how the accuracy is affected by the different network measures used to rank nodes for removal, but with a modified adjacency matrix that defines the connectivity for each node. This modification incorporates direct edges (links), called ‘strong ties’, as well as links between nodes that have a common friend. These newly introduced edges are the ‘weak ties’ that result from ‘triangulation’, as shown in Figure 4. The changes in the results due to the inclusion of the weak-ties can assist in establishing their importance for such classification efforts. A set of plots compares the accuracy of the SGC prediction of class labels with different network removal rankings given the addition of weak ties. The effect of the parameter value of k on accuracy is also explored to understand the sensitivity of the results to the only parameter that requires tuning in SGC; a sketch of this experimental loop is given below.
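As a hedged illustration of the experimental loop (synthetic stand-in data in place of Cora/Citeseer, and a logistic regression fit standing in for the SGC's trained softmax weights; none of the names below come from the paper's code):

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

def normalized_adjacency(A):
    """S = D~^-1/2 (A + I) D~^-1/2, as in Section 4."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def sgc_accuracy(A, X, y, train_idx, test_idx, k):
    """Propagate features k steps, then fit a logistic regression (the linear SGC classifier)."""
    S = normalized_adjacency(A)
    H = X.copy()
    for _ in range(k):
        H = S @ H
    clf = LogisticRegression(max_iter=1000).fit(H[train_idx], y[train_idx])
    return clf.score(H[test_idx], y[test_idx])

# Toy stand-in data: two noisy communities with weakly informative features.
G = nx.planted_partition_graph(2, 50, 0.15, 0.02, seed=0)
A = nx.to_numpy_array(G)
y = np.array([0] * 50 + [1] * 50)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)) + y[:, None] * 0.5
idx = rng.permutation(100)
train_idx, test_idx = idx[:60], idx[60:]

# Sweep the number of propagation steps, as in the k-sweeps of Figures 6-8.
for k in range(8):
    print(k, round(sgc_accuracy(A, X, y, train_idx, test_idx, k), 3))
```

The same loop can be run with the augmented matrix A' and with nodes removed by a chosen centrality ranking before the sweep.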
Figure 6 shows the results of applying the SGC with different values of k for predicting class labels. The horizontal axis shows the value of k and the vertical axis the accuracy, as the percentage of test class labels predicted correctly. The betweenness metric is used to rank the nodes, and different percentages of the network’s nodes are removed; the percentage values for each line are indicated in the legend. Figure 6a,b shows the results obtained from the Cora and Citeseer datasets where the adjacency matrix used contains the direct links between nodes (strong-ties) as well as their weak-ties, as described in Section 4. Figure 6c,d shows the results when the original adjacency matrix containing only the strong-ties is used. For k = 0 and for the final value, k = 7, similar results are obtained, but the progression between them differs. The difference in progression is evident for the Cora dataset from k = 1 up to k = 4, where the predictive accuracy in Figure 6a is reduced. This also applies to the Citeseer dataset, especially in the scenario where 20% or 30% of the nodes have been removed. When k = 0 the SGC effectively operates in a manner similar to logistic regression, where the network information is not used and inference is conducted using only the features of the node in question. These results support the conclusion that the augmented network topology of the strong-ties and the weak-ties does not facilitate improved accuracy of label prediction.
Figure 7 also shows the results of applying the SGC with different values of k for predicting class labels. The value of k is shown on the horizontal axis and the accuracy, as a percentage of test class labels predicted correctly, on the vertical axis. Here the closeness metric is used to rank the nodes for removal. The different percentages of network nodes removed for each line are shown in the plot legends. Figure 7a,b shows the results obtained from the Cora and Citeseer datasets where the adjacency matrix used contains direct links between nodes and their immediate neighbors (strong-ties) as well as their weak-ties (edges obtained via triangulation), as described in Section 4. Figure 7c,d shows the results when the original adjacency matrix containing only the strong-ties is used. For k = 0, similar results are obtained between the different pairs since the connectivity of the adjacency matrix is not incorporated and node inference looks only at the features of the node of concern. For k = 7, similar values are obtained through the extended radius of the adjacency power, but the progression of the trace differs between pairs of the plots. The difference between the pairs of traces can be easily seen by inspecting the application to the Cora dataset from k = 1 up to k = 4, where the predictive accuracy in Figure 7a is reduced. This also applies to the Citeseer dataset, and is attenuated when 20% or 30% of the nodes have been removed. These results also support the conclusion that the augmented network topology of the strong-ties and the weak-ties does not facilitate improved accuracy of label prediction, and show that these conclusions are robust to removal with a different network centrality ranking.
Figure 8 also shows the results of applying the SGC with different values of k for predicting class labels, but uses the VoteRank centrality metric to rank the nodes for removal. The different percentages of node removal for each line are shown in the plot legends. Figure 8a,b shows the results obtained from the Cora and Citeseer datasets where the adjacency matrix used contains direct links between nodes and their immediate neighbors (strong-ties) as well as their weak-ties (edges obtained via triangulation), as described in Section 4. Figure 8c,d shows the results when the original adjacency matrix containing only the strong-ties is used. When k = 0, similar results are obtained between the different pairs since the connectivity of the adjacency matrix is not incorporated and node inference looks only at the features of the node of concern. The application of VoteRank changes the interpretation of the previous results: for both Cora and Citeseer, the augmented adjacency matrix (strong-ties and weak-ties) gives improved results from k = 3 upwards.
The set of results shows that for k < 3 the adjacency matrix containing the original strong-tie edges suffices to produce the best results. For larger values of k the augmented adjacency matrix, which contains both the strong-ties and the weak-ties, can show improved performance when nodes are removed according to the VoteRank algorithm, but not according to betweenness or closeness. This emphasizes that there is a complex interplay between how node centrality is measured and the manner in which the inference methodology operates. It cannot be taken as an a priori principle that weak-ties provide increased predictive power simply because of their support in the social science domain. On the contrary, they induce a requirement for larger values of k to reach the maximum accuracy, implying that the SGC requires more ‘layers’, which effectively aggregate information from more distant nodes, in order to counterbalance the introduction of weak-ties as strong-ties. This provides anecdotal evidence that the two types of edges may require separate treatment. Further experiments, working with a starting network of only the weak-ties, produced networks with an increased number of disconnected components.
These results also support the claims of the authors of ‘VoteRank’, who state that the methodology identifies a set of decentralized spreaders as opposed to focusing on a group of spreaders which overlap in their spheres of influence. This is why VoteRank-targeted node removal was more effective in reducing accurate label inference: more locally influential nodes for classification were identified, and the weak-ties provided extra information about local labels in the absence of these essential strong-tie connected nodes.

Comparison to GCN

This section compares results from applying the SGC [20] with those of the original GCN framework [16]. Appendix B of [16] discusses the effect of adding more network layers on accuracy: the best choice is 2–3 layers, and after 7 layers there is a steep degradation of accuracy. The number of layers corresponds to the number of ‘K’ hops explored with the SGC previously. The SGC methodology encapsulates the K-hop neighborhood without the non-linearity and therefore avoids the degradation of accuracy with increased K (or L). Figure 9, Figure 10 and Figure 11 present the results of applying the GCN in the same set of situations that we evaluated with the SGC. The number of layers L is on the x-axis (corresponding analogously to K in the SGC) and the accuracy is on the y-axis. In each of these figures, the Cora and Citeseer datasets are examined both with the augmented network of strong and weak ties and with the original network containing only the strong ties. Each figure removes a percentage of the nodes based upon their rank under the centrality measures betweenness, closeness, and VoteRank, respectively. Each plot has three lines, for the different percentages (10%, 20%, and 30%) of nodes removed based upon the centrality measure. In each of Figure 9, Figure 10 and Figure 11 it can be seen how the accuracy degrades after L = 1, showing how the SGC is able to include more network information about each node without introducing unnecessary complexity that degrades accuracy. The degradation of the accuracy in relation to the choice of centrality measure is comparable between the results, showing that the GCN is less specific to the node network positions than the SGC, which can be attributed to the non-linearity the GCN introduces via the layers.
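For contrast with the linear SGC sketch given earlier, the GCN propagation rule of [16] interleaves the same normalized adjacency with a non-linearity at every layer. The following NumPy sketch (illustrative only, with random untrained weights) shows that rule; stacking more such layers is what corresponds to increasing L in the plots.

```python
import numpy as np

def gcn_forward(S, X, weights):
    """One forward pass of an L-layer GCN: H <- ReLU(S H W) per layer,
    with a softmax readout on the final layer (untrained sketch)."""
    H = X
    for l, W in enumerate(weights):
        H = S @ H @ W
        if l < len(weights) - 1:
            H = np.maximum(H, 0.0)        # ReLU non-linearity between layers
    H = H - H.max(axis=1, keepdims=True)
    expH = np.exp(H)
    return expH / expH.sum(axis=1, keepdims=True)

# Normalized adjacency of a 4-node chain, as in the earlier SGC sketch.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_t = A + np.eye(4)
d = A_t.sum(axis=1)
S = np.diag(d ** -0.5) @ A_t @ np.diag(d ** -0.5)

# Three layers (L = 3) mapping 8 input features to 2 classes via 16 hidden units.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
weights = [rng.normal(size=(8, 16)) * 0.1,
           rng.normal(size=(16, 16)) * 0.1,
           rng.normal(size=(16, 2)) * 0.1]
print(gcn_forward(S, X, weights).round(3))
```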
In Appendix A, an additional set of tables is provided to compare the effectiveness of the SGC and the GCN over the values of K and L, respectively. The two datasets and the centrality measures with the different edge sets are examined.

6. Conclusions

This paper explores the uses of the recently introduced methodology, the Simplified Graph Convolutional Neural Network (SGC); class label inferences are produced based on the network structure, represented by an adjacency matrix, in combination with node feature vectors. There is interest in exploring this model in more depth since it provides a succinct yet expressive formulation for describing how nodes can influence class label prediction within a network. Besides the parameters fitted in order to optimize the target label prediction, there is only a single parameter value, k, which requires manual tuning. This parameter is related to the number of layers through the power of the normalized adjacency matrix, $S^k$ (described in Section 4).
The exploration conducted here investigates the degree to which the accurate prediction of class labels is reduced by removing percentages of the network ranked by centrality metrics. This provides evidence for the practitioner who collects data that may contain gaps in the network and needs to know whether the SGC is sensitive to missing data on key nodes, which could drastically affect the conclusions. Three different network centrality measures are used to select nodes for removal: betweenness, closeness, and VoteRank. We find that the methodology does manage to produce analogous predictions for the different percentages of removal (10/20/30%). The largest observed changes were when the nodes were selected for removal with the VoteRank algorithm, and not with betweenness or closeness. This shows that the SGC label assignments are more sensitive to the local label information derived from the features of local nodes than to well connected groups of nodes in the center of the network. This also explains why it has displayed the ability to be robust in its predictions.
The other question explored is whether the results would change if the SGC was supplied an adjacency matrix that contained the ‘triangulated edges’ to begin with. The existing edges in the adjacency matrix can be referred to as strong-ties as they are direct links; the edges that connect friends-of-friends (produced from triangulation A 2 ), can be referred to as weak-ties. A matrix with both of these edge sets was supplied to the SGC to compare the accuracy predictions. There is considerable sociological literature discussing the importance of these edges in helping to discover important connections. Our results show a degraded outcome with the exception of when nodes were removed with the VoteRank algorithm. This indicates that the inclusion of the weak-ties provides a more robust edge set when important local nodes are removed. The results do not show an ability to improve the prediction of class labels for low removal percentages when weak-ties are included.
The datasets used in this study contained monolithic graphs, where every node is reachable from any other node. There are many datasets where the data contains disjoint graphs, and this can be particularly common when the observational capabilities are limited in comparison to the process. A notable example is with protein interaction graphs. Applying the investigation taken here with such data would alter the adjacency matrix but not in a way that would cause a failure in its ability to follow the procedures described. Since the exploration did not depend upon a small fraction of the number of nodes, the study could continue with such data as long as the distribution of the relative betweenness and closeness is not excessively skewed for the subgraphs. The investigation therefore can be conducted on a wide range of datasets to explore the role of weak ties in the networks. Corporate networks are an interesting avenue for extensions as the nodes would be more ‘complex’ entities which may rely on their network connections in different ways. A key aspect of the extendibility is the overhead of the approach. Since the parameter, feature and adjacency matrix are combined with linear operators with a non-nested set of intermediate features, inferences are relatively cheaper than other approaches that build deeper trees and introduce further non-linearities.

Author Contributions

Conceptualization, N.H.B. and A.V.M.; formal analysis, N.H.B. and A.V.M.; investigation, N.H.B. and A.V.M.; methodology, N.H.B. and A.V.M.; resources, G.S.; software, N.H.B.; supervision, A.V.M. and G.S.; validation, N.H.B. and A.V.M.; visualization, N.H.B.; writing—original draft, N.H.B.; writing—review and editing, A.V.M. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by grant FA8650-18-C-7823 from the Defense Advanced Research Projects Agency (DARPA). The views and opinions contained in this article are the authors’ and should not be construed as official or as reflecting the views of the University of Central Florida, DARPA, or the U.S. Department of Defense.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Comparison of Results between the SGC and GCN

The following tables present the comparison of the results between the SGC and the GCN (shown separately in the Results section). Each table lists the centrality metric used to rank and remove nodes: betweenness, closeness, and VoteRank. The column ‘P’ identifies the percentage of network nodes removed based upon that centrality metric. The column ‘L’ gives the number of layers used by the GCN and the column ‘K’ the exponent of the normalized adjacency matrix in the SGC. Under the columns ‘GCN’ and ‘SGC’, which refer to the graph convolutional neural network and the simplified graph convolutional neural network respectively, the column ‘S’ denotes the strong-ties only and ‘SW’ the strong-ties and weak-ties aggregated. In Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 the cell entries show the accuracy; each centrality measure is reported for the two datasets Cora and Citeseer.
Table A1. The GCN and SGC methodologies were applied to predicting the class labels of the Cora dataset. The betweenness metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| Betweenness | 10 | 0 | 0.79 ± 0.03 | 0.84 ± 0.03 | 0.76 ± 0.02 | 0.74 ± 0.01 |
| | 10 | 1 | 0.79 ± 0.03 | 0.84 ± 0.03 | 0.86 ± 0.02 | 0.75 ± 0.01 |
| | 10 | 2 | 0.73 ± 0.05 | 0.82 ± 0.04 | 0.88 ± 0.01 | 0.76 ± 0.01 |
| | 10 | 3 | 0.63 ± 0.08 | 0.77 ± 0.06 | 0.87 ± 0.02 | 0.76 ± 0.01 |
| | 10 | 4 | 0.54 ± 0.08 | 0.69 ± 0.11 | 0.87 ± 0.02 | 0.77 ± 0.01 |
| | 10 | 5 | 0.50 ± 0.07 | 0.65 ± 0.11 | 0.87 ± 0.02 | 0.77 ± 0.01 |
| | 10 | 6 | 0.45 ± 0.11 | 0.47 ± 0.19 | 0.87 ± 0.02 | 0.77 ± 0.01 |
| | 10 | 7 | 0.38 ± 0.13 | 0.46 ± 0.22 | 0.87 ± 0.01 | 0.77 ± 0.01 |
| | 20 | 0 | 0.79 ± 0.03 | 0.83 ± 0.04 | 0.76 ± 0.02 | 0.72 ± 0.02 |
| | 20 | 1 | 0.79 ± 0.03 | 0.84 ± 0.04 | 0.85 ± 0.02 | 0.73 ± 0.02 |
| | 20 | 2 | 0.72 ± 0.06 | 0.81 ± 0.05 | 0.86 ± 0.02 | 0.74 ± 0.02 |
| | 20 | 3 | 0.63 ± 0.09 | 0.77 ± 0.06 | 0.86 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 4 | 0.54 ± 0.08 | 0.64 ± 0.11 | 0.86 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 5 | 0.50 ± 0.11 | 0.57 ± 0.16 | 0.86 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 6 | 0.43 ± 0.12 | 0.58 ± 0.14 | 0.86 ± 0.02 | 0.76 ± 0.02 |
| | 20 | 7 | 0.37 ± 0.11 | 0.46 ± 0.22 | 0.85 ± 0.02 | 0.76 ± 0.02 |
| | 30 | 0 | 0.77 ± 0.03 | 0.83 ± 0.04 | 0.76 ± 0.02 | 0.70 ± 0.02 |
| | 30 | 1 | 0.77 ± 0.03 | 0.83 ± 0.04 | 0.85 ± 0.02 | 0.72 ± 0.03 |
| | 30 | 2 | 0.71 ± 0.05 | 0.79 ± 0.07 | 0.85 ± 0.02 | 0.73 ± 0.03 |
| | 30 | 3 | 0.64 ± 0.09 | 0.72 ± 0.09 | 0.85 ± 0.02 | 0.74 ± 0.03 |
| | 30 | 4 | 0.54 ± 0.11 | 0.64 ± 0.11 | 0.85 ± 0.02 | 0.74 ± 0.02 |
| | 30 | 5 | 0.50 ± 0.10 | 0.57 ± 0.14 | 0.85 ± 0.02 | 0.74 ± 0.02 |
| | 30 | 6 | 0.42 ± 0.11 | 0.49 ± 0.17 | 0.85 ± 0.02 | 0.75 ± 0.02 |
| | 30 | 7 | 0.39 ± 0.12 | 0.43 ± 0.19 | 0.86 ± 0.02 | 0.75 ± 0.02 |
Table A2. The GCN and SGC methodologies were applied to predicting the class labels of the Cora dataset. The closeness metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| Closeness | 10 | 0 | 0.80 ± 0.04 | 0.83 ± 0.03 | 0.77 ± 0.02 | 0.76 ± 0.01 |
| | 10 | 1 | 0.80 ± 0.04 | 0.83 ± 0.03 | 0.85 ± 0.02 | 0.83 ± 0.03 |
| | 10 | 2 | 0.75 ± 0.06 | 0.82 ± 0.03 | 0.87 ± 0.01 | 0.85 ± 0.02 |
| | 10 | 3 | 0.67 ± 0.08 | 0.78 ± 0.05 | 0.87 ± 0.01 | 0.85 ± 0.01 |
| | 10 | 4 | 0.55 ± 0.12 | 0.68 ± 0.09 | 0.87 ± 0.02 | 0.86 ± 0.01 |
| | 10 | 5 | 0.49 ± 0.10 | 0.62 ± 0.14 | 0.88 ± 0.01 | 0.86 ± 0.01 |
| | 10 | 6 | 0.44 ± 0.12 | 0.57 ± 0.16 | 0.86 ± 0.02 | 0.86 ± 0.02 |
| | 10 | 7 | 0.40 ± 0.12 | 0.48 ± 0.18 | 0.87 ± 0.02 | 0.86 ± 0.02 |
| | 20 | 0 | 0.79 ± 0.03 | 0.84 ± 0.04 | 0.76 ± 0.02 | 0.77 ± 0.02 |
| | 20 | 1 | 0.79 ± 0.03 | 0.84 ± 0.03 | 0.86 ± 0.01 | 0.84 ± 0.04 |
| | 20 | 2 | 0.72 ± 0.07 | 0.81 ± 0.05 | 0.88 ± 0.02 | 0.86 ± 0.02 |
| | 20 | 3 | 0.65 ± 0.10 | 0.77 ± 0.07 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 20 | 4 | 0.56 ± 0.13 | 0.68 ± 0.13 | 0.88 ± 0.02 | 0.87 ± 0.02 |
| | 20 | 5 | 0.51 ± 0.10 | 0.59 ± 0.13 | 0.87 ± 0.01 | 0.87 ± 0.02 |
| | 20 | 6 | 0.44 ± 0.12 | 0.56 ± 0.15 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 20 | 7 | 0.43 ± 0.12 | 0.47 ± 0.16 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 30 | 0 | 0.78 ± 0.04 | 0.83 ± 0.04 | 0.75 ± 0.02 | 0.75 ± 0.03 |
| | 30 | 1 | 0.78 ± 0.04 | 0.83 ± 0.04 | 0.86 ± 0.02 | 0.82 ± 0.03 |
| | 30 | 2 | 0.72 ± 0.06 | 0.81 ± 0.05 | 0.87 ± 0.02 | 0.85 ± 0.03 |
| | 30 | 3 | 0.65 ± 0.11 | 0.74 ± 0.09 | 0.87 ± 0.02 | 0.86 ± 0.02 |
| | 30 | 4 | 0.56 ± 0.12 | 0.66 ± 0.13 | 0.87 ± 0.03 | 0.87 ± 0.02 |
| | 30 | 5 | 0.50 ± 0.10 | 0.61 ± 0.12 | 0.87 ± 0.03 | 0.87 ± 0.02 |
| | 30 | 6 | 0.46 ± 0.12 | 0.57 ± 0.14 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 30 | 7 | 0.36 ± 0.10 | 0.49 ± 0.17 | 0.87 ± 0.02 | 0.88 ± 0.03 |
Table A3. The GCN and SGC methodologies were applied to predicting the class labels of the Cora dataset. The VoteRank metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| VoteRank | 10 | 0 | 0.78 ± 0.04 | 0.83 ± 0.03 | 0.75 ± 0.02 | 0.76 ± 0.02 |
| | 10 | 1 | 0.78 ± 0.04 | 0.83 ± 0.03 | 0.86 ± 0.01 | 0.83 ± 0.03 |
| | 10 | 2 | 0.71 ± 0.07 | 0.81 ± 0.05 | 0.87 ± 0.02 | 0.85 ± 0.02 |
| | 10 | 3 | 0.61 ± 0.12 | 0.75 ± 0.08 | 0.87 ± 0.02 | 0.86 ± 0.02 |
| | 10 | 4 | 0.56 ± 0.10 | 0.63 ± 0.13 | 0.87 ± 0.02 | 0.86 ± 0.02 |
| | 10 | 5 | 0.49 ± 0.09 | 0.61 ± 0.12 | 0.87 ± 0.01 | 0.86 ± 0.03 |
| | 10 | 6 | 0.41 ± 0.10 | 0.49 ± 0.17 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 10 | 7 | 0.40 ± 0.11 | 0.39 ± 0.21 | 0.87 ± 0.02 | 0.87 ± 0.02 |
| | 20 | 0 | 0.77 ± 0.03 | 0.82 ± 0.03 | 0.75 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 1 | 0.77 ± 0.04 | 0.82 ± 0.04 | 0.85 ± 0.02 | 0.81 ± 0.02 |
| | 20 | 2 | 0.69 ± 0.06 | 0.78 ± 0.06 | 0.86 ± 0.02 | 0.84 ± 0.02 |
| | 20 | 3 | 0.62 ± 0.08 | 0.73 ± 0.09 | 0.86 ± 0.02 | 0.85 ± 0.02 |
| | 20 | 4 | 0.53 ± 0.08 | 0.63 ± 0.11 | 0.86 ± 0.03 | 0.86 ± 0.02 |
| | 20 | 5 | 0.49 ± 0.08 | 0.57 ± 0.14 | 0.85 ± 0.03 | 0.86 ± 0.02 |
| | 20 | 6 | 0.43 ± 0.09 | 0.46 ± 0.14 | 0.85 ± 0.02 | 0.86 ± 0.01 |
| | 20 | 7 | 0.34 ± 0.07 | 0.42 ± 0.18 | 0.85 ± 0.02 | 0.86 ± 0.01 |
| | 30 | 0 | 0.76 ± 0.03 | 0.81 ± 0.04 | 0.76 ± 0.02 | 0.75 ± 0.02 |
| | 30 | 1 | 0.76 ± 0.03 | 0.81 ± 0.04 | 0.84 ± 0.02 | 0.81 ± 0.03 |
| | 30 | 2 | 0.69 ± 0.05 | 0.77 ± 0.05 | 0.85 ± 0.01 | 0.83 ± 0.03 |
| | 30 | 3 | 0.59 ± 0.07 | 0.69 ± 0.11 | 0.84 ± 0.02 | 0.84 ± 0.03 |
| | 30 | 4 | 0.53 ± 0.08 | 0.60 ± 0.11 | 0.85 ± 0.02 | 0.84 ± 0.03 |
| | 30 | 5 | 0.48 ± 0.06 | 0.57 ± 0.15 | 0.85 ± 0.02 | 0.85 ± 0.03 |
| | 30 | 6 | 0.43 ± 0.10 | 0.49 ± 0.14 | 0.85 ± 0.02 | 0.85 ± 0.02 |
| | 30 | 7 | 0.34 ± 0.07 | 0.34 ± 0.19 | 0.84 ± 0.01 | 0.85 ± 0.02 |
Table A4. The GCN and SGC methodologies were applied to predicting the class labels of the Citeseer dataset. The betweenness metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| Betweenness | 10 | 0 | 0.73 ± 0.02 | 0.75 ± 0.02 | 0.72 ± 0.02 | 0.74 ± 0.01 |
| | 10 | 1 | 0.73 ± 0.02 | 0.75 ± 0.02 | 0.76 ± 0.02 | 0.75 ± 0.01 |
| | 10 | 2 | 0.66 ± 0.06 | 0.71 ± 0.04 | 0.78 ± 0.02 | 0.76 ± 0.01 |
| | 10 | 3 | 0.60 ± 0.07 | 0.66 ± 0.08 | 0.76 ± 0.01 | 0.76 ± 0.01 |
| | 10 | 4 | 0.52 ± 0.08 | 0.61 ± 0.08 | 0.77 ± 0.01 | 0.77 ± 0.01 |
| | 10 | 5 | 0.42 ± 0.15 | 0.54 ± 0.14 | 0.77 ± 0.02 | 0.77 ± 0.01 |
| | 10 | 6 | 0.31 ± 0.16 | 0.33 ± 0.18 | 0.76 ± 0.02 | 0.77 ± 0.01 |
| | 10 | 7 | 0.21 ± 0.02 | 0.25 ± 0.13 | 0.77 ± 0.02 | 0.77 ± 0.01 |
| | 20 | 0 | 0.72 ± 0.02 | 0.74 ± 0.03 | 0.71 ± 0.01 | 0.72 ± 0.02 |
| | 20 | 1 | 0.73 ± 0.02 | 0.74 ± 0.03 | 0.76 ± 0.02 | 0.73 ± 0.02 |
| | 20 | 2 | 0.65 ± 0.04 | 0.70 ± 0.04 | 0.76 ± 0.02 | 0.74 ± 0.02 |
| | 20 | 3 | 0.57 ± 0.08 | 0.63 ± 0.06 | 0.77 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 4 | 0.54 ± 0.08 | 0.58 ± 0.07 | 0.75 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 5 | 0.36 ± 0.12 | 0.48 ± 0.16 | 0.76 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 6 | 0.22 ± 0.05 | 0.30 ± 0.16 | 0.76 ± 0.02 | 0.76 ± 0.02 |
| | 20 | 7 | 0.23 ± 0.06 | 0.21 ± 0.02 | 0.77 ± 0.02 | 0.76 ± 0.02 |
| | 30 | 0 | 0.73 ± 0.02 | 0.72 ± 0.03 | 0.71 ± 0.02 | 0.70 ± 0.02 |
| | 30 | 1 | 0.73 ± 0.02 | 0.72 ± 0.03 | 0.74 ± 0.02 | 0.72 ± 0.03 |
| | 30 | 2 | 0.65 ± 0.05 | 0.67 ± 0.04 | 0.75 ± 0.02 | 0.73 ± 0.03 |
| | 30 | 3 | 0.58 ± 0.04 | 0.59 ± 0.07 | 0.75 ± 0.02 | 0.74 ± 0.03 |
| | 30 | 4 | 0.54 ± 0.07 | 0.54 ± 0.10 | 0.75 ± 0.01 | 0.74 ± 0.02 |
| | 30 | 5 | 0.41 ± 0.13 | 0.34 ± 0.14 | 0.76 ± 0.02 | 0.74 ± 0.02 |
| | 30 | 6 | 0.26 ± 0.11 | 0.31 ± 0.12 | 0.74 ± 0.03 | 0.75 ± 0.02 |
| | 30 | 7 | 0.23 ± 0.08 | 0.23 ± 0.06 | 0.75 ± 0.02 | 0.75 ± 0.02 |
Table A5. The GCN and SGC methodologies were applied to predicting the class labels of the Citeseer dataset. The closeness metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| Closeness | 10 | 0 | 0.74 ± 0.02 | 0.74 ± 0.02 | 0.72 ± 0.02 | 0.72 ± 0.02 |
| | 10 | 1 | 0.74 ± 0.02 | 0.74 ± 0.02 | 0.76 ± 0.02 | 0.74 ± 0.02 |
| | 10 | 2 | 0.69 ± 0.05 | 0.70 ± 0.04 | 0.76 ± 0.02 | 0.75 ± 0.02 |
| | 10 | 3 | 0.62 ± 0.06 | 0.65 ± 0.07 | 0.76 ± 0.01 | 0.75 ± 0.02 |
| | 10 | 4 | 0.56 ± 0.09 | 0.54 ± 0.12 | 0.76 ± 0.01 | 0.75 ± 0.02 |
| | 10 | 5 | 0.39 ± 0.16 | 0.46 ± 0.16 | 0.76 ± 0.02 | 0.75 ± 0.02 |
| | 10 | 6 | 0.30 ± 0.11 | 0.34 ± 0.17 | 0.74 ± 0.02 | 0.75 ± 0.02 |
| | 10 | 7 | 0.24 ± 0.04 | 0.27 ± 0.11 | 0.74 ± 0.02 | 0.75 ± 0.02 |
| | 20 | 0 | 0.72 ± 0.02 | 0.74 ± 0.03 | 0.72 ± 0.02 | 0.72 ± 0.02 |
| | 20 | 1 | 0.72 ± 0.03 | 0.74 ± 0.03 | 0.76 ± 0.03 | 0.73 ± 0.02 |
| | 20 | 2 | 0.66 ± 0.06 | 0.69 ± 0.05 | 0.76 ± 0.01 | 0.73 ± 0.02 |
| | 20 | 3 | 0.59 ± 0.07 | 0.63 ± 0.08 | 0.77 ± 0.02 | 0.74 ± 0.02 |
| | 20 | 4 | 0.53 ± 0.07 | 0.54 ± 0.11 | 0.75 ± 0.02 | 0.74 ± 0.02 |
| | 20 | 5 | 0.44 ± 0.12 | 0.46 ± 0.16 | 0.75 ± 0.01 | 0.74 ± 0.02 |
| | 20 | 6 | 0.30 ± 0.12 | 0.31 ± 0.11 | 0.74 ± 0.02 | 0.74 ± 0.02 |
| | 20 | 7 | 0.27 ± 0.08 | 0.28 ± 0.09 | 0.74 ± 0.03 | 0.74 ± 0.02 |
| | 30 | 0 | 0.71 ± 0.02 | 0.74 ± 0.03 | 0.71 ± 0.03 | 0.73 ± 0.02 |
| | 30 | 1 | 0.72 ± 0.02 | 0.73 ± 0.03 | 0.75 ± 0.02 | 0.74 ± 0.03 |
| | 30 | 2 | 0.66 ± 0.04 | 0.68 ± 0.05 | 0.76 ± 0.03 | 0.75 ± 0.03 |
| | 30 | 3 | 0.59 ± 0.07 | 0.64 ± 0.06 | 0.77 ± 0.01 | 0.76 ± 0.03 |
| | 30 | 4 | 0.53 ± 0.07 | 0.55 ± 0.11 | 0.76 ± 0.03 | 0.76 ± 0.03 |
| | 30 | 5 | 0.38 ± 0.13 | 0.45 ± 0.16 | 0.75 ± 0.02 | 0.76 ± 0.03 |
| | 30 | 6 | 0.26 ± 0.03 | 0.40 ± 0.16 | 0.76 ± 0.02 | 0.76 ± 0.02 |
| | 30 | 7 | 0.25 ± 0.02 | 0.26 ± 0.06 | 0.76 ± 0.02 | 0.76 ± 0.02 |
Table A6. The GCN and SGC methodologies were applied to predicting the class labels of the Citeseer dataset. The VoteRank metric is used to rank and remove different percentages of the network. S stands for when the network has the initial strong connections only and SW for when the network is augmented with weak ties alongside the strong ties. L and K denote the number of layers and the power in the GCN and SGC frameworks, respectively.
| Metric | P | L / K | GCN S (mean ± std) | GCN SW (mean ± std) | SGC S (mean ± std) | SGC SW (mean ± std) |
|---|---|---|---|---|---|---|
| VoteRank | 10 | 0 | 0.73 ± 0.03 | 0.74 ± 0.02 | 0.72 ± 0.02 | 0.73 ± 0.03 |
| | 10 | 1 | 0.73 ± 0.03 | 0.74 ± 0.02 | 0.76 ± 0.01 | 0.75 ± 0.03 |
| | 10 | 2 | 0.66 ± 0.05 | 0.70 ± 0.04 | 0.76 ± 0.01 | 0.76 ± 0.03 |
| | 10 | 3 | 0.59 ± 0.07 | 0.65 ± 0.05 | 0.74 ± 0.01 | 0.76 ± 0.02 |
| | 10 | 4 | 0.49 ± 0.15 | 0.58 ± 0.09 | 0.76 ± 0.01 | 0.76 ± 0.02 |
| | 10 | 5 | 0.43 ± 0.13 | 0.45 ± 0.19 | 0.75 ± 0.02 | 0.76 ± 0.02 |
| | 10 | 6 | 0.25 ± 0.09 | 0.34 ± 0.19 | 0.76 ± 0.02 | 0.76 ± 0.02 |
| | 10 | 7 | 0.22 ± 0.07 | 0.26 ± 0.16 | 0.75 ± 0.02 | 0.76 ± 0.02 |
| | 20 | 0 | 0.73 ± 0.02 | 0.73 ± 0.03 | 0.73 ± 0.01 | 0.71 ± 0.03 |
| | 20 | 1 | 0.73 ± 0.02 | 0.73 ± 0.03 | 0.71 ± 0.01 | 0.73 ± 0.04 |
| | 20 | 2 | 0.66 ± 0.04 | 0.68 ± 0.04 | 0.75 ± 0.02 | 0.74 ± 0.04 |
| | 20 | 3 | 0.59 ± 0.03 | 0.64 ± 0.05 | 0.75 ± 0.01 | 0.74 ± 0.03 |
| | 20 | 4 | 0.54 ± 0.04 | 0.58 ± 0.09 | 0.77 ± 0.02 | 0.75 ± 0.04 |
| | 20 | 5 | 0.39 ± 0.13 | 0.50 ± 0.17 | 0.75 ± 0.02 | 0.75 ± 0.03 |
| | 20 | 6 | 0.21 ± 0.02 | 0.38 ± 0.17 | 0.76 ± 0.02 | 0.75 ± 0.03 |
| | 20 | 7 | 0.21 ± 0.02 | 0.25 ± 0.12 | 0.75 ± 0.04 | 0.75 ± 0.03 |
| | 30 | 0 | 0.72 ± 0.03 | 0.74 ± 0.03 | 0.73 ± 0.01 | 0.70 ± 0.03 |
| | 30 | 1 | 0.72 ± 0.02 | 0.73 ± 0.03 | 0.71 ± 0.03 | 0.72 ± 0.03 |
| | 30 | 2 | 0.65 ± 0.03 | 0.68 ± 0.04 | 0.75 ± 0.01 | 0.73 ± 0.03 |
| | 30 | 3 | 0.58 ± 0.06 | 0.63 ± 0.07 | 0.72 ± 0.03 | 0.73 ± 0.02 |
| | 30 | 4 | 0.52 ± 0.08 | 0.57 ± 0.08 | 0.73 ± 0.01 | 0.73 ± 0.02 |
| | 30 | 5 | 0.41 ± 0.15 | 0.51 ± 0.16 | 0.74 ± 0.03 | 0.74 ± 0.03 |
| | 30 | 6 | 0.28 ± 0.12 | 0.31 ± 0.14 | 0.74 ± 0.04 | 0.74 ± 0.03 |
| | 30 | 7 | 0.22 ± 0.08 | 0.27 ± 0.13 | 0.74 ± 0.02 | 0.74 ± 0.03 |

References

1. Kwak, H.; Lee, C.; Park, H.; Moon, S. What is Twitter, a social network or a news media? In Proceedings of the International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 591–600.
2. Wadhwa, V.; Latimer, E.; Chatterjee, K.; McCarty, J.; Fitzgerald, R. Maximizing the tweet engagement rate in academia: Analysis of the AJNR Twitter feed. Am. J. Neuroradiol. 2017, 38, 1866–1868.
3. Shuai, X.; Pepe, A.; Bollen, J. How the scientific community reacts to newly submitted preprints: Article downloads, Twitter mentions, and citations. PLoS ONE 2012, 7, e47523.
4. Page, L.; Brin, S.; Motwani, R.; Winograd, T. The Pagerank Citation Ranking: Bringing Order to the Web; Technical Report; Stanford InfoLab: Stanford, CA, USA, 1999.
5. Ding, Y.; Yan, E.; Frazho, A.; Caverlee, J. PageRank for ranking authors in co-citation networks. J. Am. Soc. Inf. Sci. Technol. 2009, 60, 2229–2243.
6. Ding, Y. Scientific collaboration and endorsement: Network analysis of coauthorship and citation networks. J. Inf. 2011, 5, 187–203.
7. Bian, Y. Bringing strong ties back in: Indirect ties, network bridges, and job searches in China. Am. Sociol. Rev. 1997, 62, 366–385.
8. Granovetter, M.S. The strength of weak ties. In Social Networks; Elsevier: Amsterdam, The Netherlands, 1977; pp. 347–367.
9. Mantzaris, A.V.; Higham, D.J. Infering and calibrating triadic closure in a dynamic network. In Temporal Networks; Springer: Berlin/Heidelberg, Germany, 2013; pp. 265–282.
10. Mccallum, A. CORA Research Paper Classification Dataset. 2001. Available online: people.cs.umass.edu/mccallum/data.html.KDD (accessed on 20 May 2020).
11. Caragea, C.; Wu, J.; Ciobanu, A.; Williams, K.; Fernández-Ramírez, J.; Chen, H.H.; Wu, Z.; Giles, L. Citeseer x: A scholarly big dataset. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2014; pp. 311–322.
12. Roux, V.; Bril, B.; Karasik, A. Weak ties and expertise: Crossing technological boundaries. J. Archaeol. Method Theory 2018, 25, 1024–1050.
13. Ghaffar, F.; Buda, T.S.; Assem, H.; Afsharinejad, A.; Hurley, N. A framework for enterprise social network assessment and weak ties recommendation. In Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Barcelona, Spain, 28–31 August 2018; pp. 678–685.
14. Wang, X.; Sukthankar, G. Multi-Label Relational Neighbor Classification using Social Context Features. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 464–472.
15. Perozzi, B.; Al-Rfou, R.; Skiena, S. Deepwalk: Online learning of social representations. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 701–710.
16. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:cs.LG/1609.02907.
17. Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296.
18. Samek, W. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer Nature: Berlin/Heidelberg, Germany, 2019; Volume 11700.
19. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: San Diego, CA, USA, 2017; pp. 4765–4774.
20. Wu, F.; Zhang, T.; Souza, A.H.d., Jr.; Fifty, C.; Yu, T.; Weinberger, K.Q. Simplifying graph convolutional networks. arXiv 2019, arXiv:1902.07153.
21. Zhang, Z.; Cui, P.; Zhu, W. Deep learning on graphs: A survey. arXiv 2018, arXiv:1812.04202.
22. Wang, M.; Yu, L.; Zheng, D.; Gan, Q.; Gai, Y.; Ye, Z.; Li, M.; Zhou, J.; Huang, Q.; Ma, C.; et al. Deep graph library: Towards efficient and scalable deep learning on graphs. arXiv 2019, arXiv:1909.01315.
23. Angulo, M.T.; Lippner, G.; Liu, Y.Y.; Barabási, A.L. Sensitivity of complex networks. arXiv 2016, arXiv:1610.05264.
24. Zhang, J.X.; Chen, D.B.; Dong, Q.; Zhao, Z.D. Identifying a set of influential spreaders in complex networks. Sci. Rep. 2016, 6, 27823.
25. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: San Diego, CA, USA, 2016; pp. 3844–3852.
26. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
27. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Gener. Comput. Syst. 2018, 87, 290–297.
28. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
29. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98.
30. Sandryhaila, A.; Moura, J.M. Discrete signal processing on graphs. IEEE Trans. Signal Process. 2013, 61, 1644–1656.
31. Such, F.P.; Sah, S.; Dominguez, M.A.; Pillai, S.; Zhang, C.; Michael, A.; Cahill, N.D.; Ptucha, R. Robust spatial filtering with graph convolutional neural networks. IEEE J. Sel. Top. Signal Process. 2017, 11, 884–896.
32. Patacchini, E.; Zenou, Y. The strength of weak ties in crime. Eur. Econ. Rev. 2008, 52, 209–236.
33. Ruef, M. Strong ties, weak ties and islands: Structural and cultural predictors of organizational innovation. Ind. Corp. Chang. 2002, 11, 427–449.
34. Montgomery, J.D. Job search and network composition: Implications of the strength-of-weak-ties hypothesis. Am. Sociol. Rev. 1992, 57, 586–596.
35. Ryan, L. Looking for weak ties: Using a mixed methods approach to capture elusive connections. Sociol. Rev. 2016, 64, 951–969.
36. Conover, M.D.; Ratkiewicz, J.; Francisco, M.; Gonçalves, B.; Menczer, F.; Flammini, A. Political polarization on twitter. In Proceedings of the International Conference on Weblogs and Social Media, Barcelona, Spain, 17–21 July 2011.
37. Fiorina, M.P.; Abrams, S.J. Political polarization in the American public. Annu. Rev. Political Sci. 2008, 11, 563–588.
38. Fan, R.; Xu, K.; Zhao, J. Weak ties strengthen anger contagion in social media. arXiv 2020, arXiv:2005.01924.
39. Sandstrom, G.M.; Dunn, E.W. Social interactions and well-being: The surprising power of weak ties. Personal. Soc. Psychol. Bull. 2014, 40, 910–922.
40. Katz, L. A new status index derived from sociometric analysis. Psychometrika 1953, 18, 39–43.
41. Grindrod, P.; Parsons, M.C.; Higham, D.J.; Estrada, E. Communicability across evolving networks. Phys. Rev. E 2011, 83, 046120.
Figure 1. This figure illustrates how weak-ties can be produced through triangulation.
Figure 2. These plots show the degree distributions for the Cora network of publications [10] and how those distributions are altered when a certain percentage of the nodes are removed based upon a metric. Each subfigure shows the results of applying a different metric to sort and remove nodes: (a) node ‘closeness’; (b) node ‘betweenness’; (c) node ‘VoteRank’ [24].
Figure 3. These plots show the degree distributions for the Citeseer network of publications [11] and how those distributions are altered when a certain percentage of the nodes are removed based upon a metric. Each subfigure shows the results of using a different metric to sort and remove nodes: (a) node ‘closeness’; (b) node ‘betweenness’; (c) node ‘VoteRank’ [24].
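For readers who wish to reproduce the node-removal experiments summarized in Figures 2 and 3, the ranking-and-removal step can be approximated with networkx. The sketch below is illustrative only: the file name, the 10% removal fraction, and the function name are assumptions, not taken from the paper.

```python
# Sketch (not the authors' exact code): remove the top fraction of nodes
# ranked by a centrality metric, then inspect the remaining degree distribution.
import networkx as nx
from collections import Counter

def remove_top_nodes(G, metric="betweenness", fraction=0.10):
    """Return a copy of G with the highest-ranked nodes removed."""
    if metric == "closeness":
        scores = nx.closeness_centrality(G)
        ranked = sorted(scores, key=scores.get, reverse=True)
    elif metric == "betweenness":
        scores = nx.betweenness_centrality(G)
        ranked = sorted(scores, key=scores.get, reverse=True)
    elif metric == "voterank":
        ranked = nx.voterank(G)  # already ordered by decreasing influence
    else:
        raise ValueError(f"unknown metric: {metric}")
    k = int(fraction * G.number_of_nodes())
    H = G.copy()
    H.remove_nodes_from(ranked[:k])
    return H

# Hypothetical usage: degree histogram after removing 10% of nodes by betweenness.
# G = nx.read_edgelist("cora.cites")   # placeholder file name
# H = remove_top_nodes(G, "betweenness", 0.10)
# print(Counter(dict(H.degree()).values()))
```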
Figure 4. Illustration of how weak-ties produced through triangulation can affect a small network: (a) shows a hypothetical network; (b) shows the result of adding the weak-ties to the network alongside the original strong-ties, which are direct links.
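One plausible way to realize the triangulation rule illustrated in Figures 1 and 4 is to connect every pair of nodes that share a common neighbour, i.e., to add two-hop links as weak-ties. The sketch below is an interpretation offered for illustration, not the authors' released implementation.

```python
# Augment a strong-tie adjacency matrix with weak ties between two-hop
# neighbours (assumed interpretation of the triangulation rule).
import numpy as np

def add_weak_ties(A):
    """A: binary symmetric adjacency matrix (numpy array) of strong ties."""
    two_hop = (A @ A) > 0                     # pairs reachable in two steps
    augmented = ((A + two_hop) > 0).astype(int)
    np.fill_diagonal(augmented, 0)            # no self-loops
    return augmented

# Toy example: the path a-b-c gains the weak tie a-c.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(add_weak_ties(A))
```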
Figure 5. The Simplified Graph Convolutional Neural Network (SGC) methodology was applied to predicting the test-case labels when a certain percentage of the nodes was removed based upon the closeness, betweenness, and VoteRank metrics; prediction accuracy is shown on the y-axis. Subfigures (a,b) show the results for the Citeseer and Cora network datasets.
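In Wu et al.'s formulation, the SGC pipeline used throughout Figures 5–8 reduces to smoothing node features with the k-th power of the normalized adjacency matrix (with self-loops) followed by a linear classifier. The minimal sketch below assumes dense numpy arrays and placeholder variable names (A_aug, X, labels, train_idx, test_idx); it is not the authors' exact code.

```python
# Minimal SGC sketch: propagate features through S^k, then fit a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(A, X, k=2):
    """Return S^k X, with S = D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt
    out = X.copy()
    for _ in range(k):
        out = S @ out
    return out

# Hypothetical usage (A_aug may include the weak ties added above):
# X_prop = sgc_features(A_aug, X, k=2)
# clf = LogisticRegression(max_iter=1000).fit(X_prop[train_idx], labels[train_idx])
# accuracy = clf.score(X_prop[test_idx], labels[test_idx])
```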
Figure 6. The SGC methodology was applied to predicting the class labels of the datasets Cora and Citeseer where the accuracy was plotted against the parameter k. The betweenness metric is used to rank and remove different percentages of the network: (a,b) show how the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
Figure 7. The SGC methodology was applied to predicting the class labels of the Cora and Citeseer datasets where the accuracy was plotted against the parameter k. The closeness metric is used to rank and remove different percentages of the network: (a,b) show the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
Figure 8. The SGC methodology was applied to predicting the class labels of the Cora and Citeseer datasets where the accuracy is plotted against the parameter k. The VoteRank metric is used to rank and remove different percentages of the network: (a,b) show the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
Figure 9. The GCN methodology was applied to predicting the class labels of the Cora and Citeseer datasets where the accuracy is plotted against the parameter l. The betweenness metric is used to rank and remove different percentages of the network: (a,b) show the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
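For contrast with SGC, a two-layer GCN in the spirit of Kipf and Welling can be sketched as below; the depth of the model corresponds to the layer parameter varied in Figures 9–11. The hidden size and variable names are illustrative assumptions rather than the configuration reported in the experiments.

```python
# Rough two-layer GCN sketch: S is the dense normalized adjacency
# D^-1/2 (A + I) D^-1/2 as a torch tensor, X the node feature matrix.
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, n_classes, bias=False)

    def forward(self, S, X):
        h = F.relu(S @ self.w1(X))    # first propagation + nonlinearity
        return S @ self.w2(h)         # raw logits for each node

# Hypothetical training step on the labelled nodes only:
# model = GCN(X.shape[1], 16, n_classes)
# loss = F.cross_entropy(model(S, X)[train_idx], y[train_idx])
```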
Figure 10. The Graph Convolutional Network (GCN) methodology was applied to predicting the class labels of the Cora and Citeseer datasets where the accuracy is plotted against the parameter l. The closeness metric is used to rank and remove different percentages of the network: (a,b) show the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
Figure 11. The GCN methodology was applied to predicting the class labels of the Cora and Citeseer datasets where the accuracy is plotted against the parameter l. The VoteRank metric is used to rank and remove different percentages of the network: (a,b) show the prediction changes when the network consists of strong-ties and weak-ties, and (c,d) show the results when the original adjacency matrix containing only strong-ties is used.
Table 1. Summary statistics of the networks from the datasets used in this study: Cora [10] and Citeseer [11]. Each dataset provides a set of class labels used to identify groups of publications.
                    Cora      Citeseer
# of Nodes          2708      3327
# of Edges          10,556    9228
# of Classes        7         6
Average degree      3.8981    2.8109
Density             0.00143   0.00084
Triadic closure     0.0934    0.13006
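The statistics in Table 1 can be recomputed with networkx along the following lines. This is a sketch rather than the authors' exact bookkeeping: the reported edge and average-degree figures depend on whether each citation is counted as a directed link or an undirected edge, and the graph variable names are placeholders.

```python
# Summary statistics for a citation graph loaded as a networkx graph.
import networkx as nx

def summarize(G):
    n, m = G.number_of_nodes(), G.number_of_edges()
    return {
        "# of Nodes": n,
        "# of Edges": m,
        "Average degree": sum(d for _, d in G.degree()) / n,
        "Density": nx.density(G),
        "Triadic closure": nx.transitivity(G),
    }

# print(summarize(cora_graph))      # placeholder graph objects
# print(summarize(citeseer_graph))
```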

