
Artificial neural networks for predicting social comparison effects among female Instagram users

  • Marta R. Jabłońska ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    marta.jablonska@uni.lodz.pl

    Affiliation Department of Computer Science in Economics, Faculty of Economics and Sociology, Institute of Logistics and Informatics, University of Lodz, Lodz, Poland

  • Radosław Zajdel

    Roles Conceptualization

    Affiliation Department of Computer Science in Economics, Faculty of Economics and Sociology, Institute of Logistics and Informatics, University of Lodz, Lodz, Poland

Abstract

Systematic exposure to social media causes social comparisons, especially among women, who compare their image to others and are therefore particularly vulnerable to mood decrease, self-objectification, body concerns, and a lower perception of themselves. This study first investigates the possible links between life satisfaction, self-esteem, anxiety, depression, and the intensity of Instagram use within a social comparison model. In the study, 974 female Instagram users aged 18–49 voluntarily completed a questionnaire. The results suggest associations between the analyzed psychological data and social comparison types. Artificial neural network models were then implemented to predict the type of such comparison (positive, negative, equal) based on the aforementioned psychological traits. The models were able to correctly predict between 71% and 82% of cases. As human behavior analysis has been a subject of study in various fields of science, this paper contributes towards understanding the role of artificial intelligence methods for analyzing behavioral data in psychology.

Introduction

Social media are significantly changing the way people interact, increasing the role of user-generated content that is also subject to feedback from other users [1,2]. Given that social networking sites (SNSs) provide rich opportunities for social comparison [3], researchers have begun to explore the consequences for psychological well-being [4]. By spending a vast amount of time viewing others’ posts, users are inevitably drawn into the process of social comparison, especially when using SNSs dedicated to visual content, such as Instagram [5]. Social comparison research examines how individuals react when comparing themselves with others. It distinguishes two types of comparison: upward and downward. People make an upward comparison when comparing themselves unfavorably to others. By contrasting themselves with those they perceive as somehow better, individuals may experience unpleasant and painful emotions, or the comparison may form part of the drive to self-improvement. Downward comparison takes place when people compare themselves favorably to others whom they perceive as worse off. It can help to restore threatened self-esteem, as well as happiness and pride [5,6]. Thus, neither type of comparison is always straightforward, and both can have a positive or negative effect on the development of self-evaluation and identity [1,6,7].

The effects of regular exposure to social media posts causing social comparisons are an important area of research [8,9]. Women, in particular, compare their image to others, so they are especially vulnerable to self-objectification, anxiety, body dissatisfaction, and lower perceptions of themselves [9,10]. In this paper, social comparisons made by young women as a result of Instagram usage were researched. The links between types of social comparisons, the intensity of Instagram use, and selected psychological traits were studied using the comparison-identification-contrast model [11,12]. According to this model, people make comparisons by identifying or contrasting themselves with others, where ‘identification’ refers to closeness to the target and ‘contrast’ to distance. By differentiating the upward and downward interpretation of each comparison, four possible feelings may arise while making social comparisons: upward-contrast indicates inferiority, downward-contrast superiority, upward-identification hope, and downward-identification disappointment [12]. The results were used to construct a set of artificial neural networks (ANNs) that can diagnose the intensity of each type of social comparison based on the derived psychological traits. As predicting behavior is a classical problem in psychology [13], human behavior analysis has become a subject of study for artificial intelligence methods that analyze behavioral data [14,15]. ANNs are able to analyze data, learn from them, and then make classifications, diagnoses, or predictions [16]. They are often called “black boxes”: tools in which a researcher has little insight into what is going on inside. Although this may appear to be a flaw, it is actually an advantage, because ANNs do not require prior information about the modeled system, and they are used when complex answers are needed and the possible relations between data are unknown. The wide range of functionality provided by these algorithms makes them promising for prediction in psychology, including mental health, behavior, emotions, and personality traits [17–21]. ANN approaches explicitly concentrate on statistical learning of nonlinear functions from multidimensional data sets in order to make generalized predictions about data vectors not seen during training. Thus, they have the potential to improve decisions associated with diagnosis, prognosis, and treatment in psychology and psychiatry [22]. Still, despite various promising studies, the automated understanding of the behavior, emotions, or personality traits of both individuals and groups remains a challenging issue for artificial intelligence techniques [23]. Because current psychological studies concentrate largely on explaining the roots of behavior using complex psychological mechanism theories with poor predictive ability, ANN implementations could help psychology become a more predictive science and induce a better understanding of human behavior [24].

The paper aimed to: 1) study whether selected psychological characteristics (self-esteem, life satisfaction, depression, anxiety) and the intensity of Instagram usage are related to social comparisons made by young women; and 2) test the possibility of applying artificial neural networks to psychological data to predict the intensity of social comparisons. The paper is structured as follows. Section 2 starts with a description of the current literature on online social comparisons, which was used to identify the psychological characteristics included in the study. Next, all details concerning the questionnaire survey and the analysis of the resulting data are provided. Finally, it describes the methods used to construct the artificial neural networks. Section 3 provides the results of the questionnaire survey as well as the performance of the artificial neural networks. The paper ends with a discussion and conclusions presenting the limitations of the study and avenues for future research.

Materials and methods

Selection of the psychological traits

To define which characteristics should be included in the study, an analysis of the current literature was performed using CiteSpace software. CiteSpace is a freely available tool for interactive and exploratory analysis of the evolution of a scientific domain, ranging from a single specialty to multiple interrelated scientific frontiers, and it was instrumental in revealing insightful patterns in the set of relevant publications [25,26]. The analysis was based on 305 articles on online social comparisons from the Web of Science database published between 2014 and 2019 (details provided in S1 Dataset: CiteSpace input text file and S2 Appendix: CiteSpace analysis criteria). The articles were analyzed, clustered, and labeled, with each set representing an underlying theme, topic, or line of research. Detailed cluster descriptions extracted from CiteSpace are presented in S1 Table. The final citation network created by CiteSpace comprised five clusters, including 126 papers. Among them, 49 were selected by CiteSpace as the most influential in the main areas of online social comparison studies (S1 Appendix). The first and biggest cluster was dedicated mainly to works on body dissatisfaction caused by Instagram usage; it also referred to terms including social comparison, Instagrammers’ body image, ideal appearance, thinspiration, fitspiration communities, and young women. The second cluster was defined by the terms social comparison and subjective well-being; among the papers classified in this set, the following topics were most popular: the impact and motives of social media use, personality traits, communication type, envy, and depression. The third cluster was described by the millennial population and specific social media behaviors, and was also dedicated to studies on depressive disorder, identity distress, social comparison, and psychological well-being. The fourth cluster emphasized the role of self-comparison and body image dissatisfaction. Finally, the last cluster consisted of research on internet support groups, predictors of anxiety and depression, young women, and anonymous online settings. The CiteSpace analysis helped us to find the most influential works on the subject of online social comparisons. Manuscripts from the clusters were searched for studies describing possible links between online social comparisons and psychological characteristics, and a set of traits that were mentioned repeatedly was selected for the questionnaire study. Other psychological traits present in the clusters included adult attachment, narcissism, envy, pride, self-objectification, loneliness, and happiness; as they appeared rarely and in a limited number of papers, we excluded them from our study, concentrating on the most frequently occurring traits presented in Table 1.

Table 1. Works describing psychological traits possibly related to online social comparisons.

https://doi.org/10.1371/journal.pone.0229354.t001

Measures

To collect data, a questionnaire survey was conducted (S3 Appendix). The psychological traits described in the previous section (Table 2) were used in the questionnaire. Anxiety and depression were assessed using the 14-item Hospital Anxiety and Depression Scale (HADS) [42]. The HADS is a frequently used self-rating scale, comprised of anxiety and depression subscales, developed to assess psychological distress in non-psychiatric patients. It is attractive to clinicians and researchers who need a rapid, efficient assessment [43] and has demonstrated satisfactory psychometric properties in assessing the symptom severity and caseness of anxiety disorders and depression in somatic, psychiatric, and primary care patients, as well as in the general population and in specific groups, including cognitively intact nursing home patients and cancer inpatients [44–50]. Internal consistency was high in the present sample (Cronbach’s α = 0.81). Self-esteem was measured with the 10-item Rosenberg scale [51], which is widely used in social-science research and measures global self-worth by assessing both positive and negative feelings about the self, such as: “On the whole, I am satisfied with myself”, “I feel that I have a number of good qualities”, or “I feel I do not have much to be proud of”. Positive and negative items were presented alternately in order to reduce the effect of respondent set. While the reader may question one or another item, there is little doubt that the items generally deal with a favorable or unfavorable attitude toward oneself [51]. Internal consistency in the present sample was Cronbach’s α = 0.84. To evaluate life satisfaction, the Satisfaction With Life Scale (SWLS) [52] was used. It has been deployed heavily as a measure of the life satisfaction component of subjective well-being. In the area of health psychology, the SWLS has been used to measure the subjective quality of life of people experiencing serious health concerns [52]. The SWLS is a five-item instrument consisting of the statements: “In most ways my life is close to my ideal”, “The conditions of my life are excellent”, “I am satisfied with my life”, “So far I have gotten the important things I want in life”, and “If I could live my life over, I would change almost nothing”. It has been shown to correlate with measures of mental health and to be predictive of future behaviors [53]. For the analyzed sample, internal consistency was high (Cronbach’s α = 0.82). To measure the intensity of Instagram usage, the 13-item Facebook Intensity Scale [54] was adopted. This scale measures Facebook usage beyond simple measures of frequency and duration, incorporating emotional connectedness to the site and its integration into individuals’ daily activities; it can separate problematic and non-problematic aspects of Facebook use [55]. Internal consistency in the present sample was Cronbach’s α = 0.79. Finally, a 12-item social comparison scale [56] was used after being reconfigured for Instagram usage. It was developed to measure identification and contrast with upward and downward comparison targets. An upward comparison occurs when someone compares themselves unfavorably to others (e.g. “When I compare my Instagram profile to people who have better profiles, I feel frustrated with the level of my own profile”), while a downward comparison takes place when someone compares themselves favorably to others whom they perceive as worse off (e.g. “When I compare my Instagram profile to people who have weaker profiles, I feel how good I am doing”). Internal consistency was high in the present sample (Cronbach’s α = 0.86).
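Each of the internal-consistency values reported above can be reproduced from the raw item scores. The following is a minimal Python sketch of Cronbach’s alpha, not the authors’ code: it computes α = k/(k−1)·(1 − Σ item variances / variance of the scale total), and the five-item example data are fictitious.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Fictitious 1-7 answers of four respondents to the five SWLS items
swls = np.array([
    [6, 5, 6, 5, 4],
    [3, 4, 3, 2, 3],
    [7, 6, 7, 6, 6],
    [4, 4, 5, 4, 3],
])
print(round(cronbach_alpha(swls), 2))
```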

The total number of questions was 54. In their original versions, the measures used different Likert scales: the HADS and the Rosenberg self-esteem scale used a 4-point scale, the Facebook Intensity Scale a 5-point scale, and the SWLS and the social comparison scale a 7-point scale. To make the questionnaire clearer to respondents, all items were modified to use a 7-point Likert scale (from strongly agree to strongly disagree). Likert scales are the most common approach in opinion surveys for several reasons, including that they are relatively easy to write and respondents are familiar with such questions [57]. The use of Likert scales is a common means of assessing people’s attitudes, values, internal states, and judgments about their own or others’ behaviors in both research and clinical practice [58]. Respondents are asked to rate their agreement (from strongly disagree to strongly agree) with a set of items on a scale that has a limited number of possible responses presented in sequence for a real or hypothetical situation under study [57]. The resulting data can then be statistically treated with Pearson’s correlation coefficient, ANOVA, and regression analysis [59]. We can distinguish symmetric and asymmetric Likert scales. In the first type, the position of neutrality (e.g. “I don’t know”) lies exactly between the two extremes of strongly disagree and strongly agree, allowing a participant to choose any response in a balanced and symmetric way in either direction. An asymmetric Likert scale offers fewer choices on one side of neutrality than on the other, which can force choices when the researcher sees no value in a neutral response [59,60]. The Likert scale originally consisted of five points, but variations such as seven- or ten-point scales also occur. In such cases adjacent options differ less radically from each other than on a 5-point scale, which may help participants to pick the most preferable option rather than a “nearby” one [61]. A seven-point Likert scale generates data that can be used as interval data, with a lower measurement error and a correspondingly higher precision compared with the five-point original scale [62].

Respondents filled in the questionnaire using text values: strongly agree, agree, rather agree, neither agree nor disagree, rather disagree, disagree, and strongly disagree. During data preprocessing, these values were converted to numbers ranging from 1 (strongly disagree) to 7 (strongly agree), so the higher the score a respondent obtained on a particular measure, the higher the level of the corresponding psychological trait. Some questions (e.g. in the HADS) were reverse-keyed, meaning that selecting “strongly agree” indicated an activity or belief that lowers the level of the measured trait; the scales of these questions were reversed during data preprocessing so that the calculations were correct. The total sum for each trait was an input for a neural network. That is why our networks had 5 neurons in the input layer, one for each psychological trait (anxiety, depression, self-esteem, life satisfaction, and Instagram usage intensity).
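As a sketch of this preprocessing step (not the authors’ code; the column names and the choice of reverse-keyed items are hypothetical), the text answers can be mapped to 1–7, reverse-keyed items flipped, and each scale summed into one of the five network inputs:

```python
import pandas as pd

# 7-point Likert mapping used in the survey (1 = strongly disagree ... 7 = strongly agree)
LIKERT = {
    "strongly disagree": 1, "disagree": 2, "rather disagree": 3,
    "neither agree nor disagree": 4,
    "rather agree": 5, "agree": 6, "strongly agree": 7,
}

def score_scale(df: pd.DataFrame, items: list[str], reversed_items: list[str]) -> pd.Series:
    """Convert text answers to 1-7, reverse-score flagged items, and sum per respondent."""
    numeric = df[items].apply(lambda col: col.str.lower().map(LIKERT))
    numeric[reversed_items] = 8 - numeric[reversed_items]  # flip 1<->7, 2<->6, ...
    return numeric.sum(axis=1)

# Hypothetical raw answers for two anxiety items, the second reverse-keyed
raw = pd.DataFrame({
    "hads_1": ["agree", "strongly disagree"],
    "hads_2": ["rather disagree", "strongly agree"],
})
anxiety_total = score_scale(raw, ["hads_1", "hads_2"], reversed_items=["hads_2"])
print(anxiety_total.tolist())  # one summed score per respondent, used as a network input
```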

Participants and data collection

The sample comprised 974 women aged 18–49 who were Instagram users. Of the participants, 20.8 percent were aged 18–20, 68.5 percent were in their twenties, 7.6 percent were aged 30–39, and 3.1 percent were 40–49. All participants were Caucasian women of Polish origin. The respondents voluntarily accessed an online questionnaire through a link advertised on social media (Facebook and Instagram). The study was advertised as an anonymous, confidential questionnaire investigating Instagram’s impact on well-being among women. It was approved by the University Committee for bioethics research (no. 1/KBBN-UŁ/IV/2018). After gathering the 974 responses, we screened them for abnormal values. Because the questionnaire was constructed entirely on a 7-point Likert scale and contained no open questions, there was a risk that a respondent would mark all answers in the same way, which may indicate a lack of interest in completing the questionnaire. However, no such observations appeared in the data set. We therefore assumed that our collection contained no abnormal data to reject, and the final set comprised 974 records (S2 Dataset).
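A screening check of this kind takes only a few lines; the sketch below (with made-up answers rather than the study data) flags respondents who chose the same option for every item:

```python
import pandas as pd

def straight_liners(responses: pd.DataFrame) -> pd.Series:
    """Flag respondents who gave the same answer to every item (possible careless responding)."""
    return responses.nunique(axis=1) == 1

# Tiny illustration: 3 hypothetical respondents, 4 items, values 1-7
answers = pd.DataFrame([[4, 4, 4, 4], [2, 5, 6, 3], [7, 7, 6, 7]])
print(straight_liners(answers).tolist())  # [True, False, False]
```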

Data analysis

For each respondent, levels of self-esteem, life satisfaction, anxiety, depression, and the intensity of Instagram use were calculated. As social comparisons may have positive or negative effects, they were grouped in the following way: for each record, the values for upward identification and downward contrast were added up, as were those for downward identification and upward contrast. The higher of the two scores determined whether a respondent was assigned to the “Positive” or “Negative” group; if both scores were the same, the record was labeled “Equal.” The results before and after aggregation are presented in Table 2.
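A minimal sketch of this aggregation rule, assuming hypothetical column names for the four comparison subscale totals, could look as follows:

```python
import pandas as pd

def comparison_label(row: pd.Series) -> str:
    """Aggregate the four comparison subscales into a single label, as described above."""
    positive = row["upward_identification"] + row["downward_contrast"]
    negative = row["downward_identification"] + row["upward_contrast"]
    if positive > negative:
        return "Positive"
    if negative > positive:
        return "Negative"
    return "Equal"

# Hypothetical subscale totals for two respondents
scores = pd.DataFrame({
    "upward_identification":   [18, 9],
    "downward_contrast":       [15, 10],
    "upward_contrast":         [10, 12],
    "downward_identification": [8, 7],
})
print(scores.apply(comparison_label, axis=1).tolist())  # ['Positive', 'Equal']
```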

As empirical research has drawn attention to the issue of common method variance (CMV) and the potential bias it may cause, we decided to include this verification in our data analysis. Sources of CMV include the use of only one type of item context, respondent, measurement context, and item characteristics; it may also result from certain tendencies while answering a survey on different measures [62–64]. Social desirability is a well-known response tendency in which respondents give answers that present them in a better light [65]. One of the statistical remedies recommended in the existing literature is Harman’s single-factor test, a post hoc statistical test conducted to investigate the presence of a common method effect. In Harman’s single-factor test, all items are loaded onto one common factor; if the total variance explained by this single factor is less than 50%, it suggests that common method bias does not affect the data [66]. We calculated Harman’s single-factor score for our data and obtained a result of 32.78%.
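Harman’s single-factor test is commonly implemented by extracting a single unrotated factor and checking the share of variance it explains. The Python sketch below uses the first principal component as the single factor and placeholder data instead of the survey responses; it only illustrates the idea, not the authors’ exact procedure:

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor_variance(items: np.ndarray) -> float:
    """Share of total variance captured by one unrotated factor (first principal component)."""
    standardized = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    pca = PCA(n_components=1)
    pca.fit(standardized)
    return float(pca.explained_variance_ratio_[0])

# Placeholder data: 100 respondents x 54 items scored 1-7 (the paper reports 32.78% on real data)
rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(100, 54)).astype(float)
print(f"{harman_single_factor_variance(items):.2%}")  # values below 50% suggest CMV is not dominant
```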

The data were first analyzed in search of possible links in order to justify implementing ANNs. The Shapiro–Wilk test rejected the assumption of normality, so instead of ANOVA, the Kruskal–Wallis test (one-way ANOVA on ranks) was performed. In the next step, a data mining technique, affinity analysis, was used to check co-occurrence relationships. Affinity analysis (association rule mining) is a powerful data mining tool used to identify correlations or patterns [67]. In the final step we implemented and trained a set of ANNs; a classification ANN model was selected. Table 3 presents all variants included in the implementation process. The data were divided into subsets in the following manner: training (70%), testing (15%), and validation (15%). We performed several iterations with varying configurations of these settings and saved the best results.
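Both tests are available in standard statistical libraries; the sketch below (with simulated placeholder scores, not the survey data) shows the normality check followed by the Kruskal–Wallis test across the three comparison groups for a single trait:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder self-esteem totals for the three comparison groups (the real data come from the survey)
positive = rng.normal(52, 8, 300)
negative = rng.normal(45, 9, 300)
equal = rng.normal(49, 8, 100)

# Shapiro-Wilk: a small p-value leads to rejecting normality, motivating a rank-based test
w, p_normal = stats.shapiro(np.concatenate([positive, negative, equal]))

# Kruskal-Wallis (one-way ANOVA on ranks) across the three comparison types
h, p_kw = stats.kruskal(positive, negative, equal)
print(f"Shapiro-Wilk p = {p_normal:.3g}, Kruskal-Wallis H = {h:.2f}, p = {p_kw:.3g}")
```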

Table 3. Variants of ANNs components used in the implementation.

https://doi.org/10.1371/journal.pone.0229354.t003

In the initial phase of network learning, weights and thresholds are given small random values. The stopping conditions included the maximum number of epochs, the allowable error rate, and the minimum error reduction required in a given learning period; they were set to default values. The number of generated networks in each configuration was set to 1,000, which means that a thousand random network configurations out of all possible ones were built automatically. As for the generator settings for initialization, no initial value was set, so it was provided automatically by Statistica. The network architecture was defined as follows. The number of neurons in the input layer was set to 5, matching the number of input variables: anxiety, depression, self-esteem, life satisfaction, and Instagram usage intensity. The possible number of neurons in the hidden layer depended on the network type: from 3 to 500 for MLP and from 21 to 500 for RBF (minimum values set automatically, maximum values estimated experimentally in subsequent iterations). Finally, the number of neurons in the output layer was either 2 or 3, depending on the classified comparison type. All calculations were made using Statistica v.13.3.
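Statistica’s automatic network search is proprietary, but the general procedure of drawing random configurations and keeping the best-performing one can be sketched with scikit-learn’s MLPClassifier. This is only an analogy: scikit-learn has no RBF network and offers a different set of activation functions, and the data below are placeholders rather than the survey records.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(974, 5))          # placeholder: 5 trait totals per respondent
y = rng.integers(0, 3, size=974)       # placeholder: comparison type (0/1/2)

# 70 / 15 / 15 split into training, test, and validation subsets
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=42)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, random_state=42)

best_score, best_model = -np.inf, None
for _ in range(20):  # the paper generated 1,000 random configurations per setting
    hidden = int(rng.integers(3, 101))                          # the paper searched 3-500 hidden neurons
    activation = str(rng.choice(["logistic", "tanh", "relu"]))  # candidate activation functions
    model = MLPClassifier(hidden_layer_sizes=(hidden,), activation=activation,
                          max_iter=500).fit(X_train, y_train)
    score = model.score(X_test, y_test)   # quality of testing = share of cases classified correctly
    if score > best_score:
        best_score, best_model = score, model
print(f"best testing accuracy: {best_score:.2%}")
```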

This paper uses two powerful, intelligent calculation models, namely two different types of ANN algorithms: the multilayer perceptron (MLP) and the radial basis function (RBF) network. An MLP consists of basically three layers: an input layer, one or more hidden layers, and an output layer [68–70]. Each neuron receives input data from other neurons and passes it through the hidden layers to the output layer. In an MLP, neurons are interconnected processing nodes that form a network. During training, the MLP starts with small random initial weights and then applies the backpropagation algorithm, which includes a forward pass propagating the input vector through the network layer by layer and a backward pass updating the weights by the gradient descent rule [71]. The output of each neuron is the result of a weighted sum of its inputs passed through an activation function. An RBF network consists of the same three layers, but it has only a single hidden layer. The input signals are each assigned to a node in the input layer and then passed directly to the hidden layer without weights [71]. After processing in the hidden layer, the signal passes through the output layer, which generates the output data [69,72].
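The difference between the two forward passes can be illustrated in a few lines of NumPy; the weights and centers below are random placeholders, not trained values, and the layer sizes are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)  # one respondent: 5 standardized trait totals (placeholder values)

# --- MLP forward pass: weighted sums followed by nonlinear activations ---
W1, b1 = rng.normal(size=(25, 5)), rng.normal(size=25)   # input -> hidden weights
W2, b2 = rng.normal(size=(3, 25)), rng.normal(size=3)    # hidden -> output weights
hidden_mlp = np.tanh(W1 @ x + b1)                        # hidden activations
logits_mlp = W2 @ hidden_mlp + b2                        # one raw output per comparison class

# --- RBF forward pass: Gaussian distances to centers, then a weighted sum ---
centers = rng.normal(size=(25, 5))                       # one center per hidden neuron
widths = np.full(25, 1.0)
hidden_rbf = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
W_out, b_out = rng.normal(size=(3, 25)), rng.normal(size=3)
logits_rbf = W_out @ hidden_rbf + b_out

print(logits_mlp, logits_rbf)
```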

We used the Statistica ANN module in automatic mode; thus, the parameters available for training the ANNs in this mode were limited. We defined the following sets of values: the function describing the structure (RBF, MLP), the error function (sum of squares and cross entropy), the MLP activation function (linear, logistic, hyperbolic tangent “tanh”, exponential, and sine), the output neuron function (linear, logistic, hyperbolic tangent “tanh”, exponential, and sine), no weight reduction, the number of neurons in the hidden layer (MLP from 3 to 500, RBF from 21 to 500), the number of neurons in the output layer (2 or 3), and the number of generated networks (1,000). Stopping conditions were set to default values, which the user cannot modify in automatic mode. Finally, the initial weights were chosen randomly, so we cannot present them explicitly. This is important to highlight, as the results obtained in our analysis may differ when the analysis is re-run, even on the same data set; this results from Statistica’s use of a random number generator, for example when determining the initial weights of neurons. Also, if the initial value of the random number generator is changed (which we did not do), it is possible to obtain a different division of the available data into training, test, and validation sets, which to some extent affects the result of the analysis.
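This reproducibility caveat can be demonstrated directly: with a fixed seed a data split is repeatable, while a different seed generally yields a different partition. The snippet below is an illustrative sketch, not the Statistica procedure:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(974 * 5).reshape(974, 5)  # placeholder matrix: 974 respondents x 5 trait totals

# The same seed always reproduces the same partition ...
a1, _ = train_test_split(X, test_size=0.30, random_state=42)
a2, _ = train_test_split(X, test_size=0.30, random_state=42)
# ... while a different seed assigns different cases to the training subset.
b1, _ = train_test_split(X, test_size=0.30, random_state=7)

print(np.array_equal(a1, a2))  # True: same seed, same split
print(np.array_equal(a1, b1))  # almost certainly False: different seed, different split
```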

Results

Descriptive statistics for each type of online social comparison are presented in Table 4. Respondents labeled as positive (with high levels of upward identification and downward contrast) had the highest scores on Instagram usage intensity. Those marked as negative (downward identification and upward contrast) showed the highest levels of depression and anxiety and the lowest levels of self-esteem and life satisfaction. Participants who scored equally on the positive and negative scales displayed exactly the opposite tendencies to the negative group and showed the lowest Instagram usage intensity.

The Kruskal–Wallis test was performed with the social comparison type (positive, equal, negative) as the independent (grouping) variable and the psychological traits (self-esteem, depression, anxiety, life satisfaction, and the intensity of Instagram use) as the dependent variables (Table 5). All p values were lower than 0.05, which gives grounds to reject the null hypothesis that all of the population distribution functions are identical. The one-way ANOVA on ranks is an omnibus test and cannot tell which specific groups were statistically significantly different from each other, only that at least two groups were. In our case, this means that the levels of the analyzed psychological traits are not equal across comparison types, so the levels of these traits differ among the social comparison types.

Table 5. The Kruskal-Wallis test (one-way ANOVA on ranks) results.

Independent (grouping) variable: Social comparison type (positive, equal, negative).

https://doi.org/10.1371/journal.pone.0229354.t005

To learn more about the associations between social comparison types and the analyzed psychological traits, an affinity analysis was performed as the next step. As this technique requires dichotomous data (1 if present, 0 if not), the database was recalculated so that each psychological trait (self-esteem, depression, anxiety, life satisfaction, and the intensity of Instagram use) was given a value of “1” if the score exceeded the sample mean, i.e. was “above average”, and “0” otherwise. The results are described with three coefficients: support, confidence, and lift. “Support” is the probability of the simultaneous occurrence of both traits in the rule “if A then B.” “Confidence” is the conditional probability that an occurrence of one trait (the predecessor) will be accompanied by an occurrence of the other trait (the successor). The third measure, the lift (or lift ratio), is the ratio of confidence to expected confidence; it is computed as the quotient of the support of both traits and the product of the individual support values of A and B. To present the core association rules, only those with a confidence level above 50% are reported in the results (Table 6).
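For a rule “if A then B” over the dichotomized indicators, the three coefficients can be computed as follows; the indicator vectors in the sketch are made up, not the study data:

```python
import numpy as np

def rule_metrics(A: np.ndarray, B: np.ndarray) -> dict:
    """Support, confidence, and lift for the rule A => B over binary (0/1) indicators."""
    support_a = A.mean()
    support_b = B.mean()
    support_ab = (A & B).mean()                   # P(A and B)
    confidence = support_ab / support_a           # P(B | A)
    lift = support_ab / (support_a * support_b)   # equivalently: confidence / expected confidence
    return {"support": support_ab, "confidence": confidence, "lift": lift}

# Hypothetical indicators: "self-esteem above average" and "positive comparison"
high_self_esteem = np.array([1, 1, 0, 1, 0, 1, 1, 0])
positive_comparison = np.array([1, 1, 0, 1, 0, 0, 1, 0])
print(rule_metrics(high_self_esteem, positive_comparison))
```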

Table 6. Affinity analysis results (with a confidence level above 50%).

https://doi.org/10.1371/journal.pone.0229354.t006

In essence, the above rules show that respondents with high self-esteem were more likely to experience positive comparisons than negative ones. High levels of depression or anxiety reduced the equal comparison type, although people scoring high on the latter also showed a tendency towards positive comparisons. Being satisfied with one’s life implied positive comparisons and reduced the possibility of negative ones. Intensive Instagram usage was associated with positive comparisons as well. Positive comparison, as a predecessor variable, also implied low depression and anxiety as well as high self-esteem, life satisfaction, and Instagram usage intensity.

The one-way ANOVA on ranks and association rule mining gave grounds to state that there are relationships between the analyzed psychological features and social comparison types, which justifies the construction of ANNs on the basis of the collected data. The first ANN model had a single output variable covering all social comparison types; in this case, the ANN predicted whether a respondent would indicate a positive, negative, or equal style of social comparison on Instagram. Unfortunately, this model achieved only average quality of testing, amounting to 61.64%. Quality of testing for classification networks is calculated as the number of cases correctly classified relative to the total number of cases, so this value means that less than two-thirds of all cases were classified correctly. We concluded that these results were unsatisfactory, and another approach was implemented. The second model comprised three independent ANNs, each predicting the possibility of a positive, negative, or equal social comparison. In this model, the results were more satisfactory, with quality of testing varying from 71% to 82%. One ANN model in particular proved to be the best fit: an RBF 5-100-2 network, with the RBFT learning algorithm, a cross-entropy error function, a Gaussian activation function in the hidden layer, and a Softmax activation function in the output layer. The values 5, 100, and 2 are the numbers of neurons in the input, hidden, and output layers, respectively. The best network predicting all three types of comparison was an RBF 5-25-3, meaning it possessed 25 neurons in its hidden layer, while the new network structures were RBF/MLP 5-100-2; these sizes were estimated automatically by the software. A very large number of hidden neurons may cause network overfitting, which means that some nodes are unnecessary [73]. Overfitting occurs when performance on the test set is much lower than performance on the learning set because the model fits the seen data too closely and does not generalize well. As our testing results were higher than the learning results, overfitting was not observed in our networks. This is coherent with the current trend of designing artificial neural networks with more hidden neurons, which depends solely on the particularities of the data [74]. RBFT is the default learning algorithm for RBF networks in Statistica. The cross-entropy error function assumes that the data come from a family of exponential distributions and provides a direct probabilistic interpretation of the network outputs; cross entropy is used only for classification networks. If this type of error function is selected, the activation function in the output layer is always set to Softmax, a specialized activation function adapted to classification problems in which a one-out-of-N representation of the output variable is used. Softmax normalizes the exponentials of the outputs so that they add up to unity. In conjunction with the cross-entropy error function, it is possible to estimate the likelihood of belonging to particular classes by means of a perceptron [75]. Finally, for an RBF network, the activation function of the hidden layer is automatically set to a Gaussian function.
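The Softmax output and the cross-entropy error described above can be written out explicitly; the raw output values below are arbitrary and only illustrate the calculation:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Normalized exponentials: positive values that sum to one, interpretable as class probabilities."""
    e = np.exp(z - z.max())       # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(probs: np.ndarray, true_class: int) -> float:
    """Cross-entropy error for a one-out-of-N target: -log of the probability of the true class."""
    return float(-np.log(probs[true_class]))

logits = np.array([2.0, 0.5, -1.0])   # raw outputs of the 3 output neurons (positive, negative, equal)
probs = softmax(logits)
print(probs, probs.sum())             # the probabilities add up to unity
print(cross_entropy(probs, true_class=0))
```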

For the model predicting the equal social comparison type, an MLP model also proved to be the most powerful. The network was an MLP 5-100-2 trained with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) 8 learning algorithm, meaning that this algorithm was used to optimize the weights of the network and that the learning process required 8 epochs (learning cycles). BFGS is one of the most recommended techniques used by Statistica for training neural networks. It performs significantly better than more traditional algorithms such as gradient descent, but it is more memory intensive and computationally demanding. Nonetheless, BFGS may require a smaller number of iterations to train a neural network, given its fast convergence rate and more intelligent search criterion. This model used a logistic activation function in the hidden layer and tanh in the output layer, and sum-of-squares was selected as the error function during the network training process. Still, the second-best model for the equal type of social comparison, with only slightly lower quality of testing (80.14% compared to 80.82% for the MLP), was the aforementioned RBF network.
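An approximate analogue of such a BFGS-trained MLP can be set up in scikit-learn with the ‘lbfgs’ solver (a limited-memory relative of BFGS). This is only a sketch: scikit-learn does not expose a tanh output activation or a sum-of-squares error for classifiers, and the data below are placeholders rather than the survey records.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(974, 5))             # placeholder: 5 trait totals per respondent
y = (rng.random(974) < 0.3).astype(int)   # placeholder binary target: equal comparison yes/no

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

# 'lbfgs' is a quasi-Newton method: fast convergence but higher memory cost than plain gradient descent
clf = MLPClassifier(hidden_layer_sizes=(100,), activation="logistic",
                    solver="lbfgs", max_iter=500, random_state=7)
clf.fit(X_train, y_train)
print(f"testing accuracy: {clf.score(X_test, y_test):.2%}")
```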

To summarize this part on ANN performance, both MLP and RBF belong to the general class of feed-forward neural networks, in which information processing follows one direction from the input to the output neurons. Despite their structural similarities, there are several substantial differences. RBF networks with a single hidden layer (the structure used by Statistica) are usually simpler and easier to train than MLPs due to their three-layer structure. In MLPs, the outputs are determined by all the neurons. Since RBF networks use only fixed basis functions, their representational power can be largely restricted; when multilayer RBF networks are used, more efficient local approximation is possible, along with other advantages in generalization and training, but unfortunately this option is not offered by Statistica so far [76]. MLP and RBF networks also use different classification techniques, namely hypersurfaces and hyperspheres, respectively [70,77]. RBF networks are widely used in various fields such as function approximation and pattern classification (the latter being important for our ANN design). Among their most important advantages are easy design, very strong tolerance to input noise, fast and comprehensive training, and responding well to new patterns absent from learning [77–81]. However, RBF networks are usually slower and larger than MLPs and often have weaker performance, especially with a large number of input variables, as they are more sensitive to the inclusion of unnecessary inputs. In our study, RBF networks performed better in most cases; the only case where an MLP scored better was the network with the equal comparison output. As RBF networks perform well in classification tasks, such results seem reasonable (Table 7).

Discussion

Social media may negatively affect women’s mood and body image, raising concerns about appearance and lowering self-esteem, well-being, and life satisfaction [82]. This paper aimed to study whether psychological characteristics, including self-esteem, life satisfaction, depression, anxiety, and the intensity of Instagram use, could be related to social comparisons made by young women on Instagram. The one-way ANOVA on ranks showed that there is a significant difference between the studied groups (positive, negative, and equal social comparison types); however, it does not provide any information about which groups differ. To learn more about the possible directions of the associations between the social comparison types and the analyzed psychological traits, an affinity analysis was performed. Among the links with the highest confidence level, the following can be mentioned. The occurrence of high self-esteem and life satisfaction, as well as low levels of anxiety and depression, all resulted in the absence of a negative social comparison. The equal type of social comparison was described by dissociation rules concerning high anxiety and depression and low self-esteem and life satisfaction, meaning that these traits also did not produce this comparison type. The positive comparison type was associated with high Instagram usage intensity, low depression, and high life satisfaction and self-esteem. Concerning the direction of association between the psychological traits and social comparison types, these results suggest that having different levels of these psychological traits may cause different social comparison types rather than vice versa.

The second aim was to test the possibility of applying ANNs to psychological data to predict the type of social comparison. The results show that the three models using binary classification achieved prediction quality ranging from 71% to 82% of all analyzed cases. This result may be considered acceptable, taking into consideration that an ANN will only operate as well as the strength of the associations between the variables allows. The type of ANN that turned out best (RBF), however, was quite surprising. MLP is the most common type of network; training it can be time-consuming, but the resulting networks are small and fast and give better results than other types of networks. RBF networks are usually slower and larger than MLPs, and they often have weaker performance, especially with a large number of input variables. A possible explanation for the better results of the RBF networks is that the input layer consisted of only five neurons, so their performance was not weakened significantly.

The study has several limitations. First, all the data were gathered by questionnaires completed by the respondents; it should be emphasized that self-assessment scales are only valid for screening purposes, and a definitive diagnosis must rest on a clinical examination. Second, associations between the variables existed, but in several cases the affinity analysis showed that opposite association rules scored similar confidence levels. One such example was the influence of life satisfaction on the positive social comparison type: a high level of life satisfaction indicated a positive social comparison with a confidence level of 58.65%, while a low level of this trait did so at 57.54%. Finally, the strength of the associations limited the performance of the ANNs, which scored a maximum of 82% effectiveness.

Despite these limitations, the study showed that psychological traits such as self-esteem, life satisfaction, depression, anxiety, and the intensity of Instagram use may influence the way young women from the studied sample compared themselves while using this social medium. The results also support the implementation of ANNs in psychological studies. As human behavior analysis has been a subject of study in various fields of science, including sociology, psychology, and computer science, increased interest in artificial intelligence methods for analyzing behavioral data in psychology can also be noticed [14,15]. Among the avenues for future research, the set of analyzed psychological traits could be broadened, and male respondents could be included as a control group to investigate whether these associations are typical for women or independent of gender.

Supporting information

S1 Table. CiteSpace clusters on social comparisons online research from 2014–2019 (Web of Science).

https://doi.org/10.1371/journal.pone.0229354.s001

(DOCX)

S1 Appendix. Manuscripts in clusters (created automatically by CiteSpace).

https://doi.org/10.1371/journal.pone.0229354.s002

(DOCX)

S3 Appendix. Questionnaire (in English and Polish).

https://doi.org/10.1371/journal.pone.0229354.s004

(DOCX)

References

1. Verduyn P, Ybarra O, Résibois M, Jonides J, Kross E. Do Social Network Sites Enhance or Undermine Subjective Well-Being? A Critical Review. Soc Iss Policy Rev. 2017;11: 274–302.
2. Mingoia J, Hutchinson AD, Wilson C, Gleaves DH. The Relationship between Social Networking Site Use and the Internalization of a Thin Ideal in Females: A Meta-Analytic Review. Front Psychol. 2017;8: 1351. pmid:28824519
3. Yang CC. Instagram Use, Loneliness, and Social Comparison Orientation: Interact and Browse on Social Media, But Don’t Compare. Cyberpsych Beh Soc N. 2016;19(12): 703–708. pmid:27855266
4. Lup K, Trub L, Rosenthal L. Instagram #Instasad?: Exploring Associations Among Instagram Use, Depressive Symptoms, Negative Social Comparison, and Strangers Followed. Cyberpsych Beh Soc N. 2015;18(5): 247–252. pmid:25965859
5. Thomas L, Briggs P, Hart A, Kerrigan F. Understanding social media and identity work in young people transitioning to university. Comput Hum Behav. 2017;76: 541–553.
6. Utz S, Muscanell NL. Your Co-author Received 150 Citations: Pride, but Not Envy, Mediates the Effect of System-Generated Achievement Messages on Motivation. Front Psychol. 2018;9: 628. pmid:29780339
7. Yang CC, Holden SM, Carter MDK, Webb JJ. Social media social comparison and identity distress at the college transition: A dual-path model. J Adolescence. 2018;69: 92–102. pmid:30278321
8. Jin SV, Ryu E, Muqaddam A. Dieting 2.0!: Moderating effects of Instagrammers’ body image and Instafame on other Instagrammers’ dieting intention. Comput Hum Behav. 2018;87: 224–237.
9. Powell E, Wang-Hall J, Bannister JA, Colera E, Lopez FG. Attachment security and social comparisons as predictors of Pinterest users’ body image concerns. Comput Hum Behav. 2018;83: 221–229.
10. Fardouly J, Diedrichs PC, Vartanian LR, Halliwell E. The Mediating Role of Appearance Comparisons in the Relationship Between Media Usage and Self-Objectification in Young Women. Psychol Women Quart. 2015;39(4): 447–457.
11. Taylor SE, Lobel M. Social comparison activity under threat: downward evaluation and upward contacts. Psychol Rev. 1989;96: 569–575. pmid:2678204
12. Kang S, Chung W, Mora AR, Chung Y. Facebook comparisons among adolescents: how do identification and contrast relate to wellbeing? Asian Journal of Information and Communications 2013;5(2): 1–21.
13. Gupta U, Chatterjee N. Personality Traits Identification Using Rough Sets Based Machine Learning. In: 2013 International Symposium on Computational and Business Intelligence, New Delhi; 2013. p. 182–185.
14. Borja-Borja LF, Saval-Calvo M, Azorin-Lopez J. Machine Learning Methods from Group to Crowd Behaviour Analysis. In: Rojas I, Joya G, Catala A, editors. Advances in Computational Intelligence, IWANN 2017, Lecture Notes in Computer Science. Cham: Springer; 2017;10306. p. 294–305. https://doi.org/10.1007/978-3-319-59147-6_26
15. Koul A, Becchio C, Cavallo A. PredPsych: A toolbox for predictive machine learning-based approach in experimental psychology research. Behav Res Methods. 2018;50(4): 1657–1672. pmid:29235070
16. Srividya M, Mohanavalli S, Bhalaji NJ. Behavioral Modeling for Mental Health using Machine Learning Algorithms. J Med Syst. 2018;42: 88. pmid:29610979
17. Kuzma M, Andrejková G. Predicting user’s preferences using neural networks and psychology models. Appl Intell 2016;44(3): 526–538.
18. Sommer M, Olbrich A, Arendasy M. Improvements in Personnel Selection With Neural Networks: A Pilot Study in the Field of Aviation Psychology. The International Journal of Aviation Psychology 2004;14(1): 103–115.
19. Srividya M, Mohanavalli S, Bhalaji NJ. Behavioral Modeling for Mental Health using Machine Learning Algorithms. J Med Syst 2018;42: 88. pmid:29610979
20. Bleidorn W, Hopwood CJ. Using Machine Learning to Advance Personality Assessment and Theory. Pers Soc Psychol Rev 2019;23(2): 190–203. pmid:29792115
21. Devillers L, Vidrascu L, Lamel L. Challenges in real-life emotion annotation and machine learning based detection. Neural Networks. 2005;18(4): 407–422. pmid:15993746
22. Dwyer DB, Falkai P, Koutsouleris N. Machine Learning Approaches for Clinical Psychology and Psychiatry. Annu Rev Clin Psycho. 2018;7(14): 91–118. pmid:29401044
23. Borja-Borja LF, Saval-Calvo M, Azorin-Lopez J. Machine Learning Methods from Group to Crowd Behaviour Analysis. In: Rojas I, Joya G, Catala A, editors. Advances in Computational Intelligence, IWANN 2017, Lecture Notes in Computer Science. Cham: Springer; 2017;10306. p. 294–305. https://doi.org/10.1007/978-3-319-59147-6_26
24. Yarkoni T, Westfall J. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspect Psychol Sci 2017;12(6): 1100–1122. pmid:28841086
25. Chen C. Visualizing and exploring scientific literature with CiteSpace: An introduction to a half-day tutorial. In: Proceedings of ACM CHIIR conference, New Brunswick, NJ, USA; 2018. p. 369–370.
26. Chen C. The CiteSpace Manual. [Internet]. 2014 [cited 2019 Aug 10]. http://cluster.ischool.drexel.edu/~cchen/citespace/CiteSpaceManual.pdf
27. Powell E. Attachment security and social comparisons as predictors of Pinterest users’ body image concerns. Comput Hum Behav. 2018;83: 221–229.
28. Alberga AS, Withnell SJ, von Ranson KM. Fitspiration and thinspiration: a comparison across three social networking sites. Journal of Eating Disorders 2018;6: 39. pmid:30534376
29. Fardouly J, Willburger BK, Vartanian LR. Instagram use and young women’s body image concerns and self-objectification: Testing mediational pathways. New Media Soc. 2018;20(4): 1380–1395.
30. Weinstein E. Adolescents’ differential responses to social media browsing: Exploring causes and consequences for intervention. Comput Hum Behav. 2017;76: 396–405.
31. Turner PG, Lefevre CE. Instagram use is linked to increased symptoms of orthorexia nervosa. Eat Weight Disord-St. 2017;22(2): 277–284. pmid:28251592
32. Gerson J, Plagnol AC, Corr PJ. Subjective well-being and social media use: Do personality traits moderate the impact of social comparison on Facebook? Comput Hum Behav. 2016;63: 813–822.
33. Chow TS, Wan HY. Is there any ‘Facebook Depression’? Exploring the moderating roles of neuroticism, Facebook social comparison and envy. Pers Indiv Differ. 2017;119: 277–282.
34. Chae J. Reexamining the relationship between social media and happiness: The effects of various social media platforms on reconceptualized happiness. Telemat Inform. 2018;35(6): 1656–1664.
35. Meier A, Schäfer S. The Positive Side of Social Comparison on Social Network Sites: How Envy Can Drive Inspiration on Instagram. Cyberpsych Beh Soc N. 2018;21(7): 411–417. pmid:29995526
36. Yang CC, Holden SM, Carter MDK. Social Media Social Comparison of Ability (but not Opinion) Predicts Lower Identity Clarity: Identity Processing Style as a Mediator. J Youth Adolescence. 2018;47(10): 2114–2128. pmid:29327168
37. Robinson A, Bonnette A, Howard K, Ceballos N, Dailey S, Lu Y, Grimes T. Social comparisons, social media addiction, and social interaction: An examination of specific social media behaviors related to major depressive disorder in a millennial population. Journal of Applied Biobehavioral Research 2019;24: e12158.
38. Yang CC, Robinson A. Not necessarily detrimental: Two social comparison orientations and their associations with social media use and college social adjustment. Comput Hum Behav. 2018;84: 49–57.
39. Tiggemann M, Zaccardo M. “Exercise to be fit, not skinny”: The effect of fitspiration imagery on women’s body image. Body Image 2015;15: 61–67. pmid:26176993
40. Walker M, Thornton L, De Choudhury M, Teevan J, Bulik CM, Levinson CA, Zerwas S. Facebook Use and Disordered Eating in College-Aged Women. J Adolescent Health. 2015;57(2): 157–163. pmid:26206436
41. Kim JW, Chock TM. Body image 2.0: Associations between social grooming on Facebook and body image concerns. Comput Hum Behav. 2015;48: 331–339.
42. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiat Scand. 1983;67(6): 361–370. pmid:6880820
43. Boxley L, Flaherty JM, Spencer RJ, Drag LL, Pangilinan PH, Bieliauskas LA. Reliability and factor structure of the Hospital Anxiety and Depression Scale in a polytrauma clinic. J Rehabil Res Dev. 2016;53(6): 873–880. pmid:28273327
44. Bjelland I, Dahl AA, Haug TT, Neckelmann D. The validity of the Hospital Anxiety and Depression Scale: An updated literature review. J Psychosom Res. 2002;52(2): 69–77. pmid:11832252
45. Djukanovic I, Carlsson J, Årestedt K. Is the Hospital Anxiety and Depression Scale (HADS) a valid measure in a general population 65–80 years old? A psychometric evaluation study. Health Qual Life Out. 2017;15: 193. pmid:28978356
46. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983;67: 361–70. pmid:6880820
47. Gale CR, Allerhand M, Sayer AA, Cooper C, Dennison EM, Starr JM, et al. The structure of the hospital anxiety and depression scale in four cohorts of community-based, healthy older people: the HALCyon program. Int Psychogeriatr. 2010;22: 559–571. pmid:20214846
48. Drageset J, Eide GE, Ranhoff AH. Anxiety and depression among nursing home residents without cognitive impairment. Scand J Caring Sci. 2013;27: 872–881. pmid:23072281
49. Annunziata M, Muzzatti B, Altoe G. Defining hospital anxiety and depression scale (HADS) structure by confirmatory factor analysis: a contribution to validation for oncological settings. Ann Oncol 2011;22: 2330–2333. pmid:21339383
50. Iani L, Lauriola M, Costantini M. A confirmatory bifactor analysis of the hospital anxiety and depression scale in an Italian community sample. Health Qual Life Outcomes. 2014;12: 1.
51. Rosenberg M. Society and the adolescent self-image. Princeton: University Press; 1965.
52. Diener E, Emmons RA, Larsen RJ, Griffin S. The Satisfaction with Life Scale. J Pers Assess. 1985;49: 71–75. pmid:16367493
53. Pavot W, Diener E. The Satisfaction With Life Scale and the emerging construct of life satisfaction. The Journal of Positive Psychology 2008;3(2): 137–152.
54. Orosz G, Tóth-Király I, Bőthe B. Four facets of Facebook intensity—The development of the Multidimensional Facebook Intensity Scale. Pers Indiv Differ. 2016;100: 95–104.
55. Ellison NB, Steinfield C, Lampe C. The benefits of Facebook “friends:” Social capital and college students’ use of online social network sites. J Comput-Mediat Comm. 2007;12: 1143–1168.
56. Van der Zee K, Buunk B, Sanderman R, Botke G, van den Bergh F. Social comparison and coping with cancer treatment. Pers Indiv Differ. 2000;28: 17–34.
57. Claveria O. A New Metric of Consensus for Likert Scales. SSRN Electronic Journal [Internet]. 2018 Sep [cited 2019 Nov 21]. https://ssrn.com/abstract=3255555
58. Mellor D, Moore KA. The Use of Likert Scales With Children. J Pediatr Psychol. 2014;39(3): 369–379. pmid:24163438
59. Joshi A, Kale S, Chandel S, Pal DK. Likert Scale: Explored and Explained. British Journal of Applied Science & Technology. 2015;7(4): 396–403.
60. Tsang KK. The use of midpoint on Likert scale: The implications for educational research. Hong Kong Teachers Centre Journal. 2012;11: 121–130.
61. Dawes J. Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales. Int J Market Res. 2008;50(1): 61–77.
62. Liang H, Saraf N, Hu Q, Xue Y. Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Quart. 2007;31(1): 59–87.
63. Reio G. The threat of common method variance bias to theory building. Hum Resour Dev Rev. 2010;9(4): 405–411.
64. Podsakoff PM, MacKenzie SB, Lee JY, Podsakoff NP. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J Appl Psychol. 2003;88(5): 879–903. pmid:14516251
65. Paulhus DL. Measurement and control of response bias. USA: Academic Press; 1991.
66. Tehseen S, Ramayah T, Sajilan S. Testing and Controlling for Common Method Variance: A Review of Available Methods. Journal of Management Sciences. 2017;4(2): 142–168.
67. Karthiyayini R, Balasubramanian R. Affinity Analysis and Association Rule Mining using Apriori Algorithm in Market Basket Analysis. International Journal of Advanced Research in Computer Science and Software Engineering. 2016;6(10): 241–246.
68. Kalogirou SA. Applications of artificial neural-networks for energy systems. Appl Energy. 2000;67(1–2): 17–35.
69. Haykin S. Neural Networks, a Comprehensive Foundation. New Jersey: Prentice-Hall; 1994.
70. Fath AH, Madanifar F, Abbasi M. Implementation of multilayer perceptron (MLP) and radial basis function (RBF) neural networks to predict solution gas-oil ratio of crude oil systems. Petroleum. 2018; Forthcoming.
71. Park JW, Venayagamoorthy GK, Harley RG. MLP/RBF neural-networks-based online global model identification of synchronous generator. IEEE T Ind Electron. 2005;52(6): 1685–1695.
72. Wang L, Kisi O, Kermani MZ, Salazar GA, Zhu Z, Gong W. Solar radiation prediction using different techniques: model evaluation and comparison. Renew Sustain Energy Rev. 2016;61: 384–397.
73. Foram S, Panchal M. Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network. International Journal of Computer Science and Mobile Computing 2014;3(11): 455–464.
74. Hagan MT, Demuth HB, Beale MH, De Jesús O. Neural Network Design. 2nd Edition. Oklahoma: Martin Hagan; 2014.
75. Bishop CM. Neural Networks for Pattern Recognition. 1st edition. Oxford: Oxford University Press; 1995.
76. Chao J, Hoshino M, Kitamura T, Masuda T. A multilayer RBF network and its supervised learning. In: Proceedings of IJCNN'01 International Joint Conference on Neural Networks. Washington: IEEE; 2001. p. 1995–2000.
77. Yu H, Xie T, Paszczynski S, Wilamowski BM. Advantages of radial basis function networks for dynamic system design. IEEE Trans Ind Electron. 2011;58: 5438–5450.
78. Fu X, Wang L. Data dimensionality reduction with application to simplifying RBF network structure and improving classification performance. IEEE Trans Syst Man Cybern B Cybern. 2003;33: 399–409. pmid:18238187
79. Moody J, Darken C. Fast learning in networks of locally-tuned processing units. Neural Comput. 1989;1(2): 281–294.
80. Nabney IT. Efficient training of RBF networks for classification. Int J Neural Syst. 2004;14(3): 201–208. pmid:15243952
81. Pochmuller W, Halgamuge SK, Glesner M, Schweikert P, Pfeffermann A. RBF and CBF neural network learning procedures. In: IEEE World Congress on Computational Intelligence; Orlando: IEEE; 1994. p. 407–412.
82. Fardouly J, Diedrichs PC, Vartanian LR, Halliwell E. Social comparisons on social media: The impact of Facebook on young women’s body image concerns and mood. Body Image 2015;13: 38–45. pmid:25615425