Article

The Impact of Fake News on Traveling and Antisocial Behavior in Online Communities: Overview

by Igor Stupavský 1,*,†, Pavle Dakić 1,2,† and Valentino Vranić 1

1 Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava, 842 16 Bratislava, Slovakia
2 Faculty of Informatics and Computing, Singidunum University, Danijelova 32, 842 16 Belgrade, Serbia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(21), 11719; https://doi.org/10.3390/app132111719
Submission received: 25 September 2023 / Revised: 21 October 2023 / Accepted: 23 October 2023 / Published: 26 October 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
The concept of “fake news” has become widespread in recent years, especially with the rise of the Internet. Fake news has become a worldwide phenomenon in the consumption of online information, as it is often designed to look like real news and is widely shared on social networks. The rise of social networks has raised concerns about the possible detrimental effects of fake news on the public’s knowledge of events and topics, as well as on democracy and public discourse in general. This article provides a summary of a scientific investigation of antisocial behavior based on historical research, conceptual analysis, and qualitative research in the form of a case study. By analyzing online forums and the concept of disinformation through fake news, we examine its implications and consequences in order to provoke reflection on this phenomenon. In the results, we propose a framework for investigating and evaluating the concepts of fake news and its interaction with other forms of antisocial behavior, including whether satisfactory results can be achieved with a reduced amount of searched text. We also examine whether our proposed procedure can be applied using artificial intelligence, combining the VADER and BERT models with the intensity of individual types of sentiment.

1. Introduction

People engage in antisocial behavior online for a variety of reasons: some do it to draw attention to themselves or to feel powerful, while others do it to vent their rage or hatred at a certain travel company or another individual. Some may indulge in it out of resentment toward their community or a lack of interpersonal interaction. Others may not understand the norms of online behavior and behave improperly by posting offensive comments or sharing harmful material.
Ultimately, some people might behave antisocially in online chats because they believe they can remain anonymous and will not be held accountable for their actions. It is challenging to pinpoint the moment or place where community abuse first occurred because it is a global problem that has appeared in many different ways across many groups and nations. However, as researchers, analysts, and policymakers have worked in recent decades to more fully understand and address these complicated challenges, the study of community abuse and antisocial behavior as social and scientific issues has received increasing attention.
Throughout human history, antisocial behavior has defied society’s norms and values. Although specific types of antisocial behavior may have varied over time, there have always been individuals who have chosen to act in ways that are harmful or disruptive to their society. Antisocial behavior may have manifested differently in close-knit, smaller societies, where people had closer ties to one another on a human level. In such communities, the consequences of engaging in antisocial behavior could be more severe, since it could result in exclusion or other forms of social control.
Antisocial behavior is a broad term that encompasses a wide range of conduct harmful to the well-being of individuals and communities. This can include both non-criminal acts and criminal ones such as theft and vandalism, along with littering, graffiti, and excessive noise. Community abuse and antisocial behavior can both have a significant negative effect on the individuals involved and on the larger community. They can weaken the social cohesion of society and foster a feeling of fear and instability. Social services, community organizations, and law enforcement agencies often work together to combat community abuse and antisocial behavior. The goal is to intervene quickly and effectively to stop these acts and help in the healing and recovery of those who have been harmed.
When writing our article, we followed a structure that defines the analysis of individual research articles and related forms of antisocial behavior. The proposed solution is then tested in practice using specific statistical techniques that are covered in more detail in the sections that follow. These statistical techniques include the essential verification of the data and their evaluation.
The research structure is organized into the following logical units that cover specific aspects: traveling and community abuse, antisocial behavior, results, discussion, etc. The presentation of the research is divided into the appropriate sections below.

1.1. Literature Review

As there are many types of antisocial behavior, there are also a large number of secondary investigations depending on specific circumstances and situations. Some potential investigation methods will include the following:
  • Evidence Gathering from various articles: This may include gathering witness statements, surveillance footage, or other types of evidence that may be relevant to the case and help shed light on the conduct in question.
  • Finding studies interviewing witnesses or suspects: Interviewing people who may have witnessed the behavior or are suspected of engaging in it can provide valuable information about the motivations and circumstances surrounding the behavior.
  • Use of psychological or sociological theories: Understanding the underlying factors can help investigators better understand and address the behavior of related individuals.
  • Collecting papers that cover work with experts: Depending on the nature of the behavior, it may be helpful to consult with experts in fields such as psychology, sociology, or criminology to better understand the behavior and how to address it.
Ultimately, the most effective approach to investigating antisocial behavior will often involve a combination of these methods, tailored to the specific circumstances of the situation.
Within this unit, the first research question is addressed by covering the historical development of antisocial behavior on maps and literature that can be related to this research question. In modern times, with the advent of larger, more anonymous societies, it may be easier for some people to engage in antisocial behavior without facing the same consequences as in a smaller community. However, there are still social norms and values that govern behavior in modern societies, and people who engage in antisocial behavior can still face consequences such as legal sanctions or social ostracism.

1.2. Evidence Gathering from Various Articles

Some research studies examine the issue of collecting evidence in online discussions; [1] divides the task into two parts: (1) given a series of neutral remarks in a conversation, identify a hostility factor that may increase if some subsequent comments are hostile, and (2) given the first hostile comment in a discussion, determine whether it will lead to increasing hostility in subsequent comments. A corpus of more than 30,000 annotated Instagram comments from over 1100 posts was used to assess the strategy [1].
There is an interesting study in this area that compares the prevalence of young adult homelessness in Washington State, USA, with Victoria, Australia, using state representative samples from the International Youth Development Study (IYDS; n = 1945, 53% female). Based on the insights gained from this work, [2] discuss the implications and considerations for the OMHC proposal.

1.3. Finding Studies Interviewing Witnesses or Suspects

In addition, marked antisocial behavior creates opposite emotional poles in people. A study published in [3] found that a mass shooting had a positive and statistically significant effect on prosocial behavior, measured mainly by monetary contributions. As a social system model, ref. [4] examines targeted interventions to combat cheating and antisocial behavior in online communities, schools, organizations, and sports. As online information is consumed more widely, fake news has become a worldwide phenomenon. Thus, ref. [5] aims to analyze the concept of disinformation through fake news and its implications and consequences, in order to provoke reflection on it and its discursive appropriation.
The paper [6] proposes a demonstration of fake news detection using content-based features and machine learning (ML) algorithms. Likewise, ref. [7] presents an automated detection of fake news based on deep geometric models. Meanwhile, [8] presents a controllable content creation demo called “Grover”. The review article [9] presents a comprehensive overview of the latest findings on fake news. Based on these findings, ref. [10] recommends preventive measures (that is, awareness methodologies) to combat the spread of fake news in Nigeria. Motivated by the success of causal inference, [11] design a new evidence-based detection system for fake news.
According to [12], the cyberbullying of other users is one of the frequent motives for the dissemination of fake news. Early diagnosis is crucial from a prevention perspective, since different antisocial behaviors complement each other.
Therefore, we deal with the detection that we defined in the third research question. Here, we use the latest knowledge from various scientific fields and transform it into a functional model.
In [13], the authors used a natural experiment occurring in 200,000 messages from 7000 online mental health conversations to evaluate the effects of moderation on online mental health discussions. For illustration, the article [14] fills a research gap by collecting and annotating a huge dataset of more than 40 million tweets related to COVID-19. Through an examination of collected data against established classifications of trolling-like behavior, [15] presents a conversational analysis of trolling-like interaction techniques that disrupt online discourses. Since users in these communities often remain on both mainstream and fringe platforms, antisocial behavior can carry over to most platforms. Online platforms have a responsibility to keep their communities civil and aware.
The authors of [16] examine this possible spillover by analyzing about 70,000 users from three banned communities that moved to fringe platforms. A direct response was tested in [17], where the authors considered a specific kind of trolling conducted by asking a provocative question on a community question-answer site. At present, Wikipedia has a human-driven process in place to identify online abuse. Here, however, ref. [18] proposes a system for understanding and uncovering this abuse within the English Wikipedia community.
All of this adds up to the findings of [19] that the spread of fake news is more likely to depend on emotions that can be measured. The main reason for analyzing emotions is to gain a better understanding of the state and situation within which certain reactions can be expected. The typical reaction of an ordinary user is to respond to another user’s provocative message and copy the behavior of the crowd. In the end, this can manifest itself in the emergence of conflicting opinions, leading to mutual polemics and a deepening of the conflict of opinion.
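Emotion intensity of the kind discussed here is measurable with lexicon-based tools such as VADER, which the proposed procedure applies later in this work. As a rough illustration only, the sketch below implements a tiny VADER-style compound score in plain Python; the mini lexicon, the booster and negation handling, and the alpha = 15 normalization constant follow VADER's general scheme but are simplified assumptions, not the real library (`nltk.sentiment.vader`).

```python
# Minimal lexicon-based sentiment-intensity sketch in the spirit of VADER.
# This is NOT the real VADER library; the tiny lexicon and the
# booster/negation handling below are illustrative assumptions only.

LEXICON = {          # word -> valence, mimicking VADER's [-4, 4] scale
    "good": 1.9, "great": 3.1, "awful": -2.0, "hate": -2.7,
    "fake": -1.8, "love": 3.2, "terrible": -2.1,
}
BOOSTERS = {"very": 0.3, "extremely": 0.5}   # intensity modifiers
NEGATIONS = {"not", "never", "no"}

def sentiment_intensity(text: str) -> float:
    """Return a compound-style score in [-1, 1] for a short text."""
    words = text.lower().split()
    total = 0.0
    for i, w in enumerate(words):
        if w not in LEXICON:
            continue
        valence = LEXICON[w]
        if i > 0 and words[i - 1] in BOOSTERS:    # "very good" > "good"
            valence += BOOSTERS[words[i - 1]] * (1 if valence > 0 else -1)
        if i > 0 and words[i - 1] in NEGATIONS:   # "not good" flips sign
            valence *= -0.74
        total += valence
    # Normalize like VADER's compound score: x / sqrt(x^2 + alpha), alpha = 15.
    return total / (total * total + 15) ** 0.5

print(round(sentiment_intensity("this news is very good"), 3))
print(round(sentiment_intensity("i hate this fake terrible story"), 3))
```

In practice, such per-post scores could be aggregated over a discussion thread to quantify the emotional escalation described above.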

1.4. Collecting Papers That Cover Work with Experts

On the basis of the second defined question and the literature research, we collected the following information about the influence of emotions on the emergence of antisocial behavior. We can observe the presentation [20] of the findings of a national online survey of image-based sexual abuse (IBSA) across the lifetime in Australia (n = 4053), focusing on its extent, nature, and indicators. The authors of [21] argue that the community should make three essential changes: (1) expand the scope of problems to handle more subtle and real forms of mistreatment, (2) create proactive advances that prevent mistreatment before it causes harm, and (3) reframe the effort beyond the justice system toward developing healthy communities.
The work [22] presents a system that integrates best-practice approaches from design science and community-based participatory and human-centered user involvement design principles for the purpose of (1) guiding community-based, goal-directed software design and (2) building and evaluating a substance use, abuse, and recovery program application. In [23], the interrelationships between political issues and religion, resistance, and the social media community are investigated through the case study #EmptyThePews. In inquiries centered on the field of well-being, ref. [24] sought to characterize the intersections between the concept of digital enclaves and online alcohol abuse support communities. The discussion in [25] concerns the role of modeling the user and the online community in the detection of abuse. This work highlights how the community context can improve classification in the detection of abusive language [26,27].

1.5. Contributions and Novelty

Science can be used to study antisocial behavior to gain a deeper understanding and develop more effective interventions to address it. These contributions to a scientific understanding of antisocial behavior can come from a variety of sources, including research studies, theoretical papers, and case reports. Antisocial behavior research can be considered novel if it advances our understanding of the causes or consequences of such behavior.
An example of a novel study would be one that identifies a previously unknown risk factor for antisocial behavior. In the same way, if a new intervention reduces antisocial behavior significantly more effectively than existing ones, it could be considered a novel intervention. Another way to consider antisocial behavior research as a novel approach is to adopt a fresh or innovative approach to studying the subject. For example, a study that employs a new research design or data analysis method could be considered novel. In general, contributions and novelty in the scientific study of antisocial behavior can originate from various sources and may involve advances in our understanding of the behavior or novel approaches to its investigation.
What we consider to be a certain new contribution and novelty within this are the following:
  • One of the contributions we consider relevant is presented in the form of processed and existing datasets, which lays a better foundation for future researchers in this field.
  • A contribution describing the production process and the output of a fake news factory. Since this world is hidden, it is advisable to learn from its workings to enable more effective detection methods.
  • Starting a discussion in society and creating a good basis for the identification of potential distortions and ways to solve them.

2. Materials and Methods

The foundation for the research approach is the application of the research questions and the processing of the obtained data, which are public in nature and accessible for research purposes under applicable licenses. The FakeNewsNet source dataset is used under the CC-BY license, with credit given to the original authors of the cited sources. In accordance with the “open source” concept, the authors share the source code for updating the data, which can then be utilized with the original source cited. The included sample uses experimental studies that test factors influencing users’ ability to recognize fake news, their likelihood of trusting it, or their intention to engage with such content. Based on the scoping review methodology, the authors then collated and summarized the available evidence using a conceptual analysis method. There are a number of criteria that can be used to identify fake news or other forms of misleading or false information. Some potential criteria include the following (schematic representation in Figure 1):
  • Conceptual analysis: This can be a useful method for examining and evaluating the concepts of fake news and antisocial behavior. The first step is to identify the concept or concepts to be analyzed; in this case, the concepts of fake news and antisocial behavior are the focus of the analysis.
  • Define the concepts: This would involve providing clear and precise definitions of fake news and antisocial behavior, taking into account the various ways in which these concepts are used and the different contexts in which they may be applied.
  • Examine the relationships of concepts with other concepts: This would involve exploring how fake news and antisocial behavior relate to other concepts such as media literacy, critical thinking, and social norms.
  • Evaluate the concepts: This would involve evaluating the strengths and limitations of the concepts of fake news and antisocial behavior, considering whether they are useful or appropriate in the context of understanding and addressing these phenomena.
  • Lack of credibility: Fake news is often published by sources that lack credibility, such as websites that have a history of publishing false or misleading information.
  • Lack of evidence: Fake news is often not supported by evidence or relies on weak or fabricated evidence to support its claim.
  • Biased or slanted: Fake news is often biased or slanted in a way that is designed to appeal to the emotions or biases of the reader.
  • Disinformation: Fake news may be intentionally created and disseminated to mislead or manipulate public opinion.
  • Inconsistencies: Fake news may contain inconsistencies or contradictions that call its veracity into question. It is essential to be critical of the news and to fact-check the information before sharing it to help reduce the spread of fake news.
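Since the methodology above relies on the FakeNewsNet dataset, a minimal loading sketch may be useful. It assumes the CSV layout distributed in the FakeNewsNet repository (columns `id`, `news_url`, `title`, and tab-separated `tweet_ids`); treat this schema as an assumption to be verified against the actual files, and note that the sample rows below are made-up placeholders, not real dataset entries.

```python
# Hedged sketch: parsing FakeNewsNet-style metadata. The column layout is
# an assumption based on the CSV files in the FakeNewsNet repository; the
# rows below are illustrative placeholders only.
import csv
import io

SAMPLE_CSV = """id,news_url,title,tweet_ids
politifact001,example.com/a,Claim about topic A,111\t222
politifact002,example.com/b,Claim about topic B,333
"""

def load_records(text: str):
    """Parse FakeNewsNet-style CSV, splitting tweet_ids into a list."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        row["tweet_ids"] = row["tweet_ids"].split("\t") if row["tweet_ids"] else []
        records.append(row)
    return records

records = load_records(SAMPLE_CSV)
print(len(records), records[0]["title"])
```

The same parser would apply to the real `politifact_fake.csv`-style files after downloading them with the repository's update scripts.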

2.1. Selection Criteria

We utilized numerous databases and search engines on the Internet to obtain better results in finding scholarly sources using keywords; Google Scholar, JSTOR, Scopus, Web of Science, and ProQuest are just a few examples. These resources allow us to search for and filter papers, books, conference proceedings, and other sources on a wide range of subjects.
The conducted research will be within these research types:
  • Historical research: This involves the study of past events or phenomena using sources such as primary research documents or newspapers.
  • Case study: This will involve an in-depth examination of a specific existing case or instance, often with the goal of understanding a broader phenomenon.
  • Qualitative research: This involves collecting and analyzing data in the form of words, images, or sounds, often with the goal of understanding the meaning or experiences of individuals or groups.

2.2. Keywords

During our investigation, we have used some of the following keywords:
  • Antisocial behavior and chemical influences;
  • Community abuse and misuse of fake news that manifests itself in antisocial behavior;
  • Fake news and its appearance in different industries;
  • Online communities and the most frequent abuses.

2.3. Questions

There are numerous different methodologies that can be used to collect specific information on the research topic itself. In this regard, we want to use research questions to clearly guide the course of this study. As a result, we have defined specific questions as a distinct direction for our research. In light of this, we want to seek clarification on the following questions:
  • What are the historical aspects of the development of news and solving problems in the past with antisocial behavior?
  • What are the current and historical problems related to the emergence of fake news?
  • How are fake news and antisocial behavior identified in the data collected from the dataset in modern times?
During the research, the limits of the research questions may become apparent as direct omissions due to complexity not covered by the currently defined research questions. The main reason for this limiting factor is the interconnectedness of the research topics with other scientific fields, which may exceed our professional qualifications. The realization of this research nevertheless enables further research and connections with colleagues in other fields.

3. Traveling and Community Abuse

Within this unit, the second research question is addressed by discussing the intersection of the issue of fake news with fake reviews. Before most public communication moved to digital means, people were subjected to rumors, political misinformation, and tactical misinformation [28]. While traveling, people can still address and prevent social injustice through preparation. We carefully research the laws, traditions, and potential safety issues associated with abuse in the community before traveling, and we make every effort to observe local laws, customs, and standards while traveling. Keeping cultural differences in mind helps us avoid taking any actions that could be interpreted as harsh or harmful.
Assisting others: If we witness community abuse, we should consider speaking out against it and assisting those affected. If we are mistreated while traveling, we will seek assistance by contacting local authorities, embassies, or tourist support organizations [28,29]. We should report incidents of community abuse to the appropriate authorities to bring attention to the problem and prevent it from happening to others. By following these steps, individuals can help ensure a safe and respectful environment while traveling and contribute to the resolution of community abuse. Traveling can present new challenges, but by being aware and taking steps to prevent and address community abuse, people can help promote a safer and more respectful environment for everyone [29].

3.1. Fake Reviews

Reviews are becoming an increasingly popular source of information for consumers in every business. However, fake customer reviews that paint a false impression of a product’s quality limit the usefulness of online ratings. Numerous businesses that operate with this type of message can be found, mainly in states with lower enforcement rates for statutory legal protection. Fake reviews can be harmful for a variety of reasons. They can make it difficult for people to form reliable judgments about other people’s experiences, which makes choosing products or services difficult. False or fraudulent reviews can damage a company’s reputation and cost it sales. Therefore, it is critical to track down fake reviews, and there are many techniques to spot and prevent them. Some tactics include letting customers report suspicious reviews and using algorithms to spot patterns in reviews that might indicate that they are fraudulent. Other tactics include requiring reviewers to confirm that they have actually used the product or service they are evaluating [29].
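One of the algorithmic tactics mentioned above, spotting patterns that suggest fraud, can be sketched as near-duplicate detection: coordinated fake reviews often reuse the same text with small edits. The sketch below flags suspiciously similar review pairs using Jaccard similarity over word shingles; the 0.6 threshold and the shingle size are illustrative assumptions, not validated values.

```python
# Sketch of one pattern-based tactic: flagging near-duplicate reviews,
# a common signal of coordinated fake reviewing. Threshold and shingle
# size are illustrative assumptions.

def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles (overlapping word n-grams) from a review."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_near_duplicates(reviews, threshold=0.6):
    """Return index pairs of reviews whose shingle sets overlap suspiciously."""
    sets = [shingles(r) for r in reviews]
    return [(i, j)
            for i in range(len(reviews))
            for j in range(i + 1, len(reviews))
            if jaccard(sets[i], sets[j]) >= threshold]

reviews = [
    "great hotel amazing staff would stay again for sure",
    "great hotel amazing staff would stay again for sure definitely",
    "the room was small and the breakfast was mediocre",
]
print(flag_near_duplicates(reviews))  # → [(0, 1)]
```

Production systems would combine such text signals with reviewer metadata (account age, purchase verification, posting bursts) rather than rely on similarity alone.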
Booking and travel websites such as rurAllure, Booking.com, and Airbnb, for example, can stop fraudulent reviews by implementing verification procedures, algorithmic monitoring, manual reviews, promoting honest evaluations, and swiftly resolving phony reviews. This safeguards the credibility of their review systems and gives customers reliable information [29,30].

3.2. Trolling

An antisocial online behavior known as trolling is actively agitating people by inciting conflict for the “troll’s” own amusement. The purpose of the study in [31] is to examine the usefulness of narcissistic traits (agentic, communal, antagonistic, and neurotic) in predicting the act of trolling, beyond the variance explained by gender, psychopathy, and sadism.
The act of trolling is the deliberate disruption or annoyance of others online, typically by posting unpleasant or irrelevant remarks or engaging in other disruptive behaviors. Trolling can take many different forms, such as posting offensive or controversial content, starting discussions or flame wars, and harassing or threatening other users. Trolling can have significant effects, as it can inhibit natural dialogue, develop a hostile online atmosphere, and spread false or inaccurate information. Different forms of trolling include spamming (sending a lot of unsolicited or unwelcome messages or content), doxing (publishing personal information about a person online without their knowledge), and swatting (making false allegations to law enforcement). Baiting is the practice of publishing something that is offensive or provocative in order to garner attention or elicit reactions. These tactics are regularly used by trolls to irritate or bother people online.
Since the Internet has been around for a while and people have probably been engaged in disruptive or obnoxious conduct online for almost as long, it is difficult to pinpoint the precise moment that trolling started. However, the term “trolling” was first introduced to characterize this kind of activity in the early 1990s and has since gained in popularity [31].

4. Antisocial Behavior

The third research question is answered by discussing the use of our suggested method, which is based on the reviewed literature, to identify fake news and antisocial behavior in a sample dataset.
Antisocial behavior generally refers to behavior that does not follow the conventions of acceptable behavior in a given population; in other words, behavior that most of society considers unacceptable [32]. Since this issue is multidisciplinary, researchers from different fields define it differently.
In the work of [33], they examined individual forms of antisocial behavior to find out if there are any correlations between them. Using a mind map (Figure 2), they were able to identify individual relationships between forms. Their study explained why a large number of authors confuse certain forms with others. Schematically, these forms are close to each other. An important part of this tool is that it helps with basic visualization to help navigate such a broad topic.
Through gradual research, the definition of antisocial behavior is also being developed [34]. It also varies depending on the region (with regional changes [35]) and cultural customs, and a very common component is the influence of the political level and moods in society. Internal and external factors also influence the development of antisocial behavior. Among the internal ones [36,37], we can include the influence of the psychological state, mainly at the age of adolescence [38]; external influences include the social status of the individual in the social structure of society [39,40].
As shown in the mind map (Figure 2), but also by our analysis, there is no uniform definition of terms in the issue. Individual definitions are often confused in the industry and often by other researchers. This inconsistency makes it more difficult to compare individual research in the field.
For example, the authors of [41] have collected at least 12 different definitions of fake news, which makes it even more difficult to single out just one. Comparing these definitions reveals a common feature: fake news gives the impression of real news, is demonstrably false, and is spread with a certain goal, frequently the monetary gain of a particular party. This is in contrast to misinformation, which is described as news that appears to be true, is based on true news, but has been adjusted to deceive, while taking into account the cultural customs of each distribution location. If disinformation is disseminated with a defined goal in mind, it may grow and become fake news. On this basis, definitions were gathered from pertinent scientific journals, providing a guarantee of professional evaluation in the comprehension of these ideas and distinctions.
Today, as the popularity of social networks continues to grow, the age structure [42] of potential perpetrators of antisocial acts [39] is also an important influence on development. Research has generally estimated correlations between age and the tendency to believe and act on fake news. The research results [43] show that moderated discussions can detect at most one post out of 20 from a user that in various ways violates the community rules of the social network on which it appears. Realistically, in 2020, 4.28% of posts with antisocial content were identified in discussions that are moderated in some way. However, most of the content available in the online space undergoes no control, or only minimal control, by an actual moderator. Thus, it is likely that the real proportion of various types of antisocial behavior in the online space is much higher than research shows.
The current knowledge collected from [44] suggests that mathematical models of pandemic propagation could be used as auxiliary models for the spread of antisocial behavior. Taking a closer look at the spread of fake news, as well as the spread of a pandemic, we can observe commonalities. Each has its source and spreads through contact, in the case of a fake message through the contact of users in the online space.
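The pandemic analogy from [44] can be made concrete with a minimal SIR-style compartment model, reading S as users not yet exposed to a fake story, I as users actively sharing it, and R as users who have stopped sharing. The sketch below only illustrates the analogy; beta and gamma are hypothetical values, not parameters fitted to any dataset.

```python
# Minimal SIR-style sketch of the pandemic analogy for fake news spread:
# S = users not yet exposed, I = users actively sharing the story,
# R = users who have stopped sharing. beta and gamma are illustrative
# assumptions, not fitted parameters.

def simulate(s0=9990.0, i0=10.0, r0=0.0, beta=0.3, gamma=0.1,
             dt=0.1, steps=1000):
    """Forward-Euler integration of dS/dt = -beta*S*I/N,
    dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    peak = i
    for _ in range(steps):
        new_shares = beta * s * i / n * dt   # contact-driven exposure
        new_stops = gamma * i * dt           # sharers losing interest
        s -= new_shares
        i += new_shares - new_stops
        r += new_stops
        peak = max(peak, i)                  # track the sharing peak
    return s, i, r, peak

s, i, r, peak = simulate()
print(f"final susceptible: {s:.0f}, peak sharers: {peak:.0f}")
```

With beta/gamma = 3, the model reproduces the familiar epidemic curve: a rapid rise in active sharers, a peak, and a decline as attention fades, mirroring the life cycle of a viral fake story.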

4.1. Community Abuse

A variety of damaging and antisocial acts that negatively affect a community are called “community abuse”. Violence, vandalism, harassment, bullying, and other aggressive behaviors fall into this category. The idea of community abuse and antisocial behavior has probably been around for as long as human communities have existed. Throughout history, people have battled with the negative and destructive conduct of some members of their societies and have sought solutions. The complicated and multifaceted issue of community abuse and antisocial behavior requires a comprehensive and well-coordinated approach. Some of the techniques that have been used to address the issue are the following. Law enforcement involves the employment of the criminal justice system and police to hold offenders responsible for their deeds and discourage others from repeating them. Social assistance entails offering assistance and services to people in danger of misbehavior, as well as to those in the community who have been harmed by it [26].
Economic and environmental design involves taking into account how physical environments and communities can exacerbate or lessen the likelihood of abuse and antisocial behavior in the community. It is crucial to remember that no one approach will be sufficient to deal with all instances of antisocial and community abuse. The most efficient method of minimizing and eliminating these hazardous habits is probably to take a thorough and individualized approach that takes into account the unique requirements and conditions of each community [26,27].

4.2. Fake and Biased News

Historically, the concept of fake news has evolved [45]. Its origin can be traced back to the distant past, but scientists began to deal with it only in the 20th century. Observing the rapid development of computing technology, we can see its connection with the increase in the number of fake news reports. A social investigation [46] was conducted among students (15–18 years of age), in which the level of willingness to believe fake news was determined. The belief was tested using the Altemeyer test. Research [47] showed a possible correlation between education and the willingness to verify a false report. The authors argue that there is a relationship between the quality of education in individual countries and the willingness to believe and spread fake news. The division (into five groups or clusters) can be seen in Figure 3.
The propagation of incorrect or deceptive material masquerading as legitimate news or fake news has probably existed throughout human history. However, the phrase “fake news” has been recently used more frequently, especially in light of the development of social networks and the proliferation of online news sources. Email chain letters and other online disinformation were the primary means by which fake news was disseminated in the early days of the Internet. The growth of social networks has made it easier for fake news to spread quickly and widely, as individuals can easily share information with the networks of their followers with only a few clicks. This has raised concerns about the potential harm that false news could have on how the public perceives events and topics, as well as on democracy and public dialogue in general [5].
Fake news can be difficult to detect, as it is often designed to look like real news and to be widely shared on social media and other platforms. One way to identify fake news is fact-checking, in which information is carefully checked against multiple reliable sources to determine its accuracy. Media literacy education, which helps people become more critical consumers of information, can also help reduce its spread. Fake or biased news can be produced and spread for a variety of reasons. The following are some potential explanations [8,10,11]:
  • Profit: In some cases, fake or biased news may be created and distributed to generate profits through advertising or other means.
  • Influence: Fake or biased news can be created and disseminated to influence public opinion or decision making.
  • Misinformation: Fake or biased news can be created and disseminated accidentally, due to a lack of fact checking or the spread of misinformation.
  • Personal or political gain: Fake, false, or biased news can be created and spread to advance personal or political agendas.
  • Attention: Fake or biased news may be created and disseminated to generate attention or create a sensation.
Fake or biased news can have serious consequences, as it can distort the public’s understanding of events and issues and undermine trust in journalism and other sources of information. Several strategies can help stop its spread:
  • Fact-checking: Careful checking of information against multiple reliable sources can help identify fake news and reduce its spread.
  • Media literacy education: Teaching people how to critically evaluate the information they encounter can help reduce the spread of fake news.
  • Encouraging critical thinking: Encouraging people to think critically about the information they encounter, rather than blindly accepting it, can help reduce the spread of fake news.
  • Promoting credible sources: Highlighting and promoting credible sources of information can help counteract the spread of fake news.
  • Addressing underlying factors: Addressing the underlying factors that contribute to the spread of fake news, such as a lack of trust in journalism or a desire for sensationalized or biased information, can also help prevent its spread.
Understanding the context of the text in question is also crucial to spotting fake news [48]. When users share false information together with a warning about its harmfulness, a detection method should not mark this text as spreading fake news, since the intent is the opposite. When identifying a fake message, we must also take into account the context of the message, the spreader’s knowledge of its falsity, and the goal that they want to achieve by spreading it. This creates a more complex picture of a multidisciplinary problem that requires a solution combining scientific forces from areas such as computer science, mathematics, sociology, psychology, medicine, forensic analysis, and education.

5. Results

The results collected within this work should show the process of collecting and processing data for detection purposes, including the relationships and links within the data. The detection of such messages is important in every area of human life; however, its greatest societal impact is in the area of political struggle. The well-timed use of false news by a politician can reverse the result of an election and thus affect the overall social mood.

5.1. Detection Methods Currently Available

Historically, each site that published textual content had its own moderator. At first, this was most often the author of the website, who could delete any content, even that of other users. However, with growing volumes of content, such interventions are beyond what individuals can realistically process. Sites therefore began to rely on cooperating moderators or on the application of artificial intelligence to detect various forms of antisocial behavior. Currently, we encounter three basic forms of detection from the point of view of their processing:
  • Administrator/moderator—human moderation of content on social networks is the most reliable way to detect inappropriate content, as automatic detection is not 100% successful. However, the pressure exerted on the moderator’s psyche remains a problem: moderators develop psychological problems, as they encounter the worst content on these networks. One example is the case study [49], in which the authors examine the impact of harmful content on social network moderators. According to other studies, the content is so harmful that the psychological damage to moderators can be compared to post-traumatic stress in military personnel deployed in real combat.
  • Semi-automatic—this approach combines the use of a human moderator with a fully automated method of detection. Because human resources are limited, an optimal balance must be found between the number of moderators of potentially objectionable content and the degree of automation, as [50] also notes. Another problem is that, according to psychologists, exposure to such a large amount of potentially harmful content can lead to post-traumatic stress disorder, as identified in soldiers deployed in combat conditions. A combination of the two approaches is therefore recommended, although finding the optimal combination of these two worlds remains the subject of further research.
  • Automatic checking using algorithms—finding the ideal detection algorithm has been the subject of extensive research by a large number of scientists. Detection often appears successful in terms of raw accuracy; however, the problem arises when the other evaluation metrics are calculated. A representative work with correctly reported detection statistics is [51]. Other works try to approach the issue more comprehensively by also covering the retrieval of content from the online space in the form of unstructured text and its subsequent detection; a representative of such work is [52], in which the authors created the MonAnt system.
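As an illustration of the semi-automatic approach above, the routing logic can be sketched as follows; the harmfulness score and both thresholds are hypothetical placeholders, not part of the systems cited:

```python
# Illustrative sketch of a semi-automatic moderation flow: confident automatic
# decisions are executed, and only the uncertain middle band is escalated to a
# human moderator, limiting their exposure to harmful content.

def route_post(harm_score: float,
               auto_remove_at: float = 0.9,
               auto_approve_below: float = 0.3) -> str:
    """Route a post based on an automatic harmfulness score in [0, 1]."""
    if harm_score >= auto_remove_at:
        return "auto-remove"
    if harm_score < auto_approve_below:
        return "auto-approve"
    return "human-review"

print(route_post(0.95))  # auto-remove
print(route_post(0.10))  # auto-approve
print(route_post(0.55))  # human-review
```

The thresholds determine the trade-off between moderator workload and the risk of harmful content slipping through.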

5.2. Dataset Collection and Analysis

The identification of the political domain was a key criterion for selecting an appropriate dataset for the study, as this is where various types of antisocial conduct occur. Community abuse occurs most often in this area, but so does fake news, i.e., the two basic phenomena on which our analysis focuses. The second criterion was that the dataset come from social networks, preferably from several different ones. A further criterion was that the dataset be labeled.
Subsequently, we decided on the FakeNewsNet dataset also used in [53]. The dataset contains fake and real news data obtained from the political fact-checking site PolitiFact [54] and the site GossipCop. The dataset contains more than 23,000 messages evaluated according to the binary classification of fake and real news.
As we can observe in Table 1, the overall representation of the individual types is largely unbalanced. In our research, we focus on analyzing headlines and comparing them with the individual types of news. Previous work [55] formulated the hypothesis that, in the case of fake news, the headline of the article is more important than its content; as an example, the authors cited a case where only the title of an article was changed while the text remained exactly the same. Therefore, in the experimental part, we focus on title analysis and on the possibility of detecting fake news from titles alone. Confirming this would reduce the amount of text needed to identify a suspicious message, resulting in significantly faster detection thanks to the smaller amount of data to test, as well as financial savings. With the same hardware, it would be possible to process a larger amount of data and make more efficient use of the available raw computing capacity.
Of all the records in the FakeNewsNet dataset, we identified only 1536 messages (6.62%) that did not contain a URL link to the source data on which the messages’ authors drew.
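A hedged sketch of how such a filter might look; the field names ("title", "news_url") are assumptions modeled on the FakeNewsNet CSV layout, not the authors' actual code:

```python
import re

# Keep only records that have no URL pointing to source material,
# either in a dedicated field or embedded in the title text.
URL_RE = re.compile(r"https?://\S+")

def without_source_url(records):
    """Filter records lacking any URL link to source data."""
    return [r for r in records
            if not r.get("news_url") and not URL_RE.search(r.get("title", ""))]

posts = [
    {"title": "Senate passes bill", "news_url": "https://example.com/a"},
    {"title": "Shocking claim goes viral", "news_url": ""},
]
print(len(without_source_url(posts)))  # 1
```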

5.3. Data Preprocessing

The initial stage of our processing in Table 2 was to eliminate stop words, which are words that do not carry relevant information that we need to process. We used a list of English stop words from the standard Python library corpus NLTK [56].
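The stop-word removal step can be sketched as follows; for a self-contained example, a small sample set of stop words is inlined here, whereas the study uses the full English list from nltk.corpus.stopwords:

```python
# Minimal sketch of the stop-word removal step. A handful of English stop
# words stand in for NLTK's full list so the example runs without downloads.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "on", "and"}

def remove_stop_words(title: str) -> list:
    """Drop words that carry no relevant information for the analysis."""
    return [w for w in title.lower().split() if w not in STOP_WORDS]

print(remove_stop_words("The senator is on a mission to change the law"))
# ['senator', 'mission', 'change', 'law']
```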
The text of the post title is made up of meaningful words that carry information. However, these words appear in the various forms used in the source text, and therefore it is necessary to find the stem of each word for its subsequent analysis. This step in NLP analysis is called stemming, for which we used the “PorterStemmer()” function from the NLTK library. An example of stemming for the word stem “like” is given in [57]:
{likes, liked, likely, lik(e)ing} → like
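The stemming step itself can be reproduced directly with NLTK's PorterStemmer (no corpus download is required for the stemmer):

```python
from nltk.stem import PorterStemmer

# Porter stemming reduces inflected word forms to a common stem,
# as in the {likes, liked, liking} -> like example above.
stemmer = PorterStemmer()
forms = ["likes", "liked", "liking"]
print([stemmer.stem(w) for w in forms])  # ['like', 'like', 'like']
```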
Subsequently, we used elements of artificial intelligence inspired by [58,59,60]. The BERT model was trained on a specially modified text dataset from CNN/Daily Mail. We employ the trained BERT model, which extracts data from the text under analysis and condenses its content to the required number of words; in this manner, we condensed each summary to a maximum of 16 words. When a title was longer, we used this summarized form; when a title was shorter than 16 words, we kept the original title text. We translated all the analyzed titles using the “GoogleTranslator()” function [61], as we found that some messages were written in Chinese and other languages.
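The title-shortening rule can be sketched as follows; the `summarize` callback stands in for the extractive BERT summarizer, and plain word-level truncation is used only as a self-contained fallback, not as the method actually applied in the study:

```python
MAX_WORDS = 16

def shorten_title(title: str, summarize=None) -> str:
    """Apply the 16-word rule: keep short titles as-is, condense long ones.

    `summarize` is a placeholder for an extractive summarizer (e.g., a
    BERT-based one); a simple truncation is used when none is supplied.
    """
    words = title.split()
    if len(words) <= MAX_WORDS:          # short titles are kept unchanged
        return title
    if summarize is not None:            # otherwise condense to <= 16 words
        return summarize(title, max_words=MAX_WORDS)
    return " ".join(words[:MAX_WORDS])   # self-contained fallback

print(shorten_title("Short headline"))  # Short headline
```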

5.4. Classification

In the subsequent analysis, we selected the part of the dataset that comes from the PolitiFact pages, because it meets the original assumptions we chose for the study. For calculating the sentiment rating, we used the VADER (Valence Aware Dictionary and sEntiment Reasoner) lexicon, used, for example, in [62]. The sentiment analysis was based on the functions “SentimentIntensityAnalyzer()” and “polarity_scores()”, which gave us the overall polarity of each heading. We then visualized this polarity as a projection into a ternary graph.
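VADER's polarity_scores() returns a dictionary with 'neg', 'neu', 'pos', and 'compound' keys; the flagging rule below, based on negative-sentiment intensity, is an illustrative assumption rather than the exact decision threshold used in the study:

```python
# polarity_scores() yields a dict like the example below: 'neg', 'neu', and
# 'pos' sum to roughly 1.0, and 'compound' is a normalized score in [-1, 1].
# The threshold value here is a hypothetical placeholder.

def flag_headline(scores: dict, neg_threshold: float = 0.3) -> str:
    """Flag a headline whose negative-sentiment intensity is high."""
    return "suspicious" if scores["neg"] >= neg_threshold else "neutral"

example = {"neg": 0.42, "neu": 0.58, "pos": 0.0, "compound": -0.68}
print(flag_headline(example))  # suspicious
```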
Real news from the FakeNewsNet PolitiFact dataset: From the graph of real news from the PolitiFact site included in the FakeNewsNet dataset, which we can observe below, we may infer that the projection of the different sentiment components lies along the axis of high neutral-sentiment values. This demonstrates that real political news is presented in a neutral emotional tone, a finding consistent with a number of papers we came across while conducting our investigation. The ternary graph displays the three normalized basic sentiment components of each post, which together sum to one. Individual values range from 0 to 1, where 1 denotes the full presence of the normalized sentiment component and 0 denotes its minimal presence or absence in the message title. The resulting point in the ternary graph represents the projected normalized sentiment values of each title (Figure 4).
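The normalization behind the ternary projection can be sketched as follows (an assumption about the plotting step, since VADER's three components already sum approximately to one):

```python
def ternary_point(scores: dict) -> tuple:
    """Normalize (neg, neu, pos) so the components sum to 1, giving the
    coordinates of the post's point in the ternary graph."""
    total = scores["neg"] + scores["neu"] + scores["pos"]
    return tuple(round(scores[k] / total, 3) for k in ("neg", "neu", "pos"))

# A fully neutral headline projects onto the neutral vertex of the triangle.
print(ternary_point({"neg": 0.0, "neu": 1.0, "pos": 0.0}))  # (0.0, 1.0, 0.0)
```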
However, if we take the last component from the sentiment analysis (compound), we get slightly more imprecise results, which are specifically presented in Table 3:
Fake news from the FakeNewsNet PolitiFact dataset: When analyzing the intensity of the sentiment using its positive, neutral, and negative components, we obtained the following graph, from which it can be observed that practically every headline of a fake political news report has an emotional component with some intensity of negative emotion.
However, if we use the last component of the sentiment analysis (compound), we obtain somewhat less accurate results, presented in Table 4:

5.5. Evaluation Matrix

For the purposes of calculating the Evaluation Matrix, we used the following metrics, presented in Table 5: Accuracy, Precision, Recall, and F1-score. We define the calculation of these metrics in the equations below, using the counts True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). If the classification correctly identified the type of result given in the input dataset, we considered the message title a True Positive. If the calculated result lay on the opposite side of the sentiment spectrum, we considered the headline misidentified and flagged it as a False Positive.
accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 · (precision · recall) / (precision + recall)
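These metrics can be computed from the confusion-matrix counts in a few lines:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example with illustrative counts (not the study's actual confusion matrix):
m = metrics(tp=80, tn=70, fp=20, fn=30)
print(round(m["accuracy"], 2), round(m["precision"], 2))  # 0.75 0.8
```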

5.6. Impact of Fake News and Statistical NLP

According to [63], annual spending on NLP analytics is projected to grow to USD 3.7 billion by 2027. With such large expenditure, the funds must be used optimally. This can be achieved, for example, by using a multilingual language model that does not require a repeated optimization phase. An important part is also the collection and evaluation of sensitive data, which can partially or completely prevent the correct statistical evaluation of the texts in a dataset. To reduce the costs of the statistical evaluation of NLP methods, it would be appropriate to create a uniform structure and a methodologically correct evaluation procedure that is implemented once for all processes. In this way, costs can be reduced across the methods used and across individual users, while also making an objective comparison of individual research possible.
The results of this study’s experimental portion still need to be confirmed on considerably larger data samples collected from various social networking sites, for various languages, and for regions within specific countries, to check the influence of specific factors on the proposed method. It would also be prudent to verify the proposed method at various time intervals, to reduce the likelihood that ongoing political disputes affect the news discourse.
The objective of this research is to highlight the historical characteristics, issues, and potential of recognizing fake news as one type of antisocial behavior, in light of how it interacts with other types. This problem becomes more relevant as the time spent on social networks increases. At the same time, social networks are becoming the exclusive source of news for a certain group of the population.

6. Limitations of the Study

Every research study has its limits, and when they are reached, its results become debatable. In the case of our study, the main methodological limitation is the size of the examined sample. To draw universally valid conclusions, it is necessary to transfer the method proposed in this article to other datasets, verify the procedure on them, and evaluate the results. The final test is real deployment and verification in practice under real conditions. A limitation of much research in the field of NLP is the use of multiple languages in the analyzed data. We minimized this limitation with a single machine translation into English, for which most available tools are optimized. We hope that by gradually removing these limits, we will achieve even more robust research and better results according to the individual metrics in the future.

7. Discussion

Community abuse can be addressed through a variety of individual and communal efforts. Education and raising awareness about the dangers of abuse are critical to preventing it. Strong policies must be in place, with consequences for those who violate them, and must be rigorously enforced. Options such as counseling and legal assistance can aid victim recovery and a sense of security. Empowering marginalized groups by tackling the main causes of abuse, such as prejudice and discrimination, can minimize their vulnerability to mistreatment. Combating community abuse is an ongoing process that requires the commitment and work of all those affected.
Based on the defined questions, we obtained the following results:

7.1. RQ 1

From the overview of the different forms of antisocial behavior, we identified and analyzed an interesting correlation between the extent of the impact and the rate at which news of the individual forms of behavior spreads. Previous research focused primarily on how antisocial behavior spreads over time. Most people feel that today is the age of misinformation and other forms of such behavior. From our analysis, we found that this feeling among news recipients is amplified by the total amount of information received. In the past, the source of information was mostly newspapers or oral communication; today, information can be obtained via the Internet practically in real time. There is an information overload, and thus also a subjective feeling of a greater amount of antisocial content. Regarding the originally sought reason for the increase in the number of types of antisocial behavior, we found that the method of spreading has changed (new forms of spreading have been added), but its amount has remained practically constant. In future research, we will focus on higher detection success for this type of behavior. This information can be used to understand the spread of false or misleading content, but also to help experts understand new forms of addiction to social networks in combination with their content, and thus to design more appropriate treatment.

7.2. RQ 2

According to the mind map in [33], the relationship between abused communities and the spread of fake news is viewed predominantly from the point of view of obtaining a personal benefit (most often financial). Our analysis of this relationship yielded significant new knowledge: other motives appear more often when fake news is transmitted on social networks. These include, for example, gaining a political advantage at a key moment (often before an important election), instilling fear in the general population, or ridiculing it. In the analyzed cases, obtaining a financial benefit was only a secondary motive for spreading fake news (which falls under “antisocial behavior”). However, this finding requires verification in other areas of social topics, since we focused mainly on the political area.

7.3. RQ 3

According to an earlier analysis by [55], a common way to spread fake news is simply by changing the title of the article. The content of the article is relatively irrelevant. In the experimental part of our paper, we aimed to verify this claim and analyzed fake news headlines from the FakeNewsNet dataset [53,64,65]. The result of our analysis is the finding that if we use the proposed procedure with the application of artificial intelligence using the VADER BERT model in combination with the intensity of individual types of sentiment, we will achieve good results. However, if we also apply their common compound value, we obtain worse results. This fact deserves further investigation.

7.4. Open Questions

The research field of antisocial behavior is quite extensive; therefore, it was not possible to capture the entire essence of this scientific field in this article. Although the detection of this type of behavior is extremely important, many aspects remain open from the point of view of our recent research [66]; for example, the influence of this type of behavior on chemical processes in the human brain is also part of our open questions related to chemical processes and reactions. Even after narrowing the search to articles from the last five years on this specific topic, we obtained more than the maximum retrievable 1000 articles. Based on what we could observe, there are many open questions; we single out some that we consider interesting for further research:
  • Is the spread of fake news a conscious process or is it influenced by subconscious behavior caused by an imbalance of dopamine in the human body?
  • Once a fake news spreader is identified, is it possible to use the evidence found as forensic evidence to provide admissible evidence for law enforcement?
Other research indicates that even after being trained about the harmfulness of such behavior, users return to it, owing to the lack of attention reflected in a reduced number of interactions in the online world and the absence of treatment for the resulting addiction. The research area of this article is highly interconnected with other research areas; for this reason, further research is important and would help answer the individual open questions in the future.

8. Conclusions

In our initial investigation, we discovered that community abuse can be particularly harmful to younger individuals. It can manifest itself in a variety of ways, including bullying, hate speech, exclusion, and harassment, and it can have a significant impact on people whose mental and physical health is at risk. To stop it and help those it affects, communities must adopt a “zero tolerance” approach to abuse and act decisively. It is impossible to arrive at a general judgment that applies to all instances of misleading or biased news, because each occurrence is distinct and may have various sources and results. However, it is undeniable that distorted or false news can have significant effects, such as changing how the public views particular events and ideas and undermining trust in journalism and other information sources. Therefore, people must be informed and critically involved before participating in the generation of information. The media and other information providers must strive for accuracy and impartiality in their reporting and take steps to stop the spread of inaccurate or biased information.
In future research, we recommend checking the degree of correlation between the basic emotions contained in the text of a post and in the title of the article. The extent to which they correspond would allow future researchers to confirm or disprove our theory that, on social networks, the title is more important than the content of the article. Future studies should focus on improving the suggested method for detecting fake news by identifying fundamental emotions in the headlines of stories shared on social networks. We investigate the possibility of summarizing the content of the title in a different way, as a function with an element of artificial intelligence used for maximum simplification and the reduction of potential misunderstandings of the text and the associated false detections. On the other hand, given the prevalence of false news in the online environment, action must be taken against those who distribute it, and forensic proof must be provided to support a just punishment within the bounds of the law and of globally recognized norms. It will also be necessary to redefine the methodology for interpreting Big Data obtained using OSINT methods. For the analysis of these data, it is appropriate to create a data lake for the further joint collection of knowledge about fake news and other forms of antisocial behavior.
One of the conclusions is the successful application of our proposed approach with the VADER BERT model, reaching a success rate of 85%. Specific sentiment types were used to optimize the use of financial and computational resources in detecting elements of antisocial behavior on social networks, with a positive detection result of 79% recall. On this basis, it can be concluded that the proposed procedure was successfully verified. Perhaps the main outcome of this study is the need to initiate a discussion and an open discourse on a number of levels. Although face-to-face interaction and interactive learning remain the best forms of communication, the rise of social networks has raised concerns about the potential harm that fake news could do to democracy and public discourse in general, as well as to the general public’s awareness of events and issues. We anticipate that young people will engage in more public discourse in the future, and we believe that inspiration can be obtained by reading this article and continuing further research in this area.

Author Contributions

Conceptualization, Data Curation, Formal and Data analysis, Investigation, Methodology, Project administration, Validation, Visualization, and Writing—Original draft and editing: I.S.; Methodology, Conceptualization, Visualization, Data Curation, Review—Writing, Editing, Proofreading, Project Administration, and Supervision: P.D.; Funding acquisition and Project Administration: V.V. All authors have read and agreed to the published version of the manuscript.

Funding

The work reported here was supported by the Slovak national project Increasing Slovakia’s Resilience Against Hybrid Threats by Strengthening Public Administration Capacities (Zvýšenie odolnosti Slovenska voči hybridným hrozbám pomocou posilnenia kapacít verejnej správy) (ITMS code: 314011CDW7), co-funded by the European Regional Development Fund (ERDF), the Operational Programme Integrated Infrastructure for the project: Research in the SANET network, and possibilities of its further use and development (ITMS code: 313011W988), Advancing University Capacity and Competence in Research, Development, and Innovation (ACCORD) (ITMS code 313021X329), co-funded by the ERDF, rurALLURE project—European Union’s Horizon 2020 Research and Innovation program under grant agreement number: 101004887 H2020-SC6-TRANSFORMATIONS-2018-2019-2020/H2020-SC6-TRANSFORMATIONS-2020 and by the Slovak Research and Development Agency under the contract No. APVV-15-0508.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used are available as a dataset at reference [67]; the authors’ condition for free use of the data is a reference to the source articles [53,68,69]. The source dataset is licensed under the CC-BY license, and, in keeping with the open-source concept, the author shares the source code for updating the data, which can then be used with the original source cited. The code we used will be provided upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, P.; Guberman, J.; Hemphill, L.; Culotta, A. Forecasting the presence and intensity of hostility on Instagram using linguistic and social features. In Proceedings of the International AAAI Conference on Web and Social Media, Palo Alto, CA, USA, 25–28 June 2018. [Google Scholar] [CrossRef]
  2. Saha, K.; Ernala, S.K.; Dutta, S.; Sharma, E.; Choudhury, M.D. Understanding Moderation in Online Mental Health Communities. In Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 87–107. [Google Scholar] [CrossRef]
  3. Berrebi, C.; Yonah, H. Crime and Philanthropy: Prosocial and Antisocial Responses to Mass Shootings. Vict. Offenders 2020, 16, 99–125. [Google Scholar] [CrossRef]
  4. Kim, J.E.; Tsvetkova, M. Cheating in online gaming spreads through observation and victimization. Netw. Sci. 2020, 9, 425–442. [Google Scholar] [CrossRef]
  5. Pérez, C.R. No diga fake news, di desinformación: Una revisión sobre el fenómeno de las noticias falsas y sus implicaciones. Comunicación 2019, 1, 65–74. [Google Scholar] [CrossRef]
  6. Gravanis, G.; Vakali, A.; Diamantaras, K.; Karadais, P. Behind the cues: A benchmarking study for fake news detection. Expert Syst. Appl. 2019, 128, 201–213. [Google Scholar] [CrossRef]
  7. Monti, F.; Frasca, F.; Eynard, D.; Mannion, D.; Bronstein, M.M. Fake News Detection on Social Media using Geometric Deep Learning. arXiv 2019, arXiv:1902.06673. [Google Scholar] [CrossRef]
  8. Zellers, R.; Holtzman, A.; Rashkin, H.; Bisk, Y.; Farhadi, A.; Roesner, F.; Choi, Y. Defending Against Neural Fake News. Adv. Neural Inf. Process. Syst. 2019. [Google Scholar] [CrossRef]
  9. Zhang, X.; Ghorbani, A.A. An overview of online fake news: Characterization, detection, and discussion. Inf. Process. Manag. 2020, 57, 102025. [Google Scholar] [CrossRef]
  10. Apuke, O.D.; Omar, B. Fake news proliferation in Nigeria: Consequences, motivations, and prevention through awareness strategies. Humanit. Soc. Sci. Rev. 2020, 8, 318–327. [Google Scholar] [CrossRef]
  11. Wu, J.; Liu, Q.; Xu, W.; Wu, S. Bias Mitigation for Evidence-aware Fake News Detection by Causal Intervention. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11 July 2022. [Google Scholar] [CrossRef]
  12. Maftei, A.; Holman, A.C.; Merlici, I.A. Using fake news as means of cyber-bullying: The link with compulsive internet use and online moral disengagement. Comput. Hum. Behav. 2022, 127, 107032. [Google Scholar] [CrossRef]
  13. Wadden, D.; August, T.; Li, Q.; Althoff, T. The Effect of Moderation on Online Mental Health Conversations. In Proceedings of the International AAAI Conference on Web and Social Media, Atlanta, GA, USA, 8 June 2020. [Google Scholar] [CrossRef]
  14. Awal, M.R.; Cao, R.; Mitrovic, S.; Lee, R.K.W. On Analyzing Antisocial Behaviors Amid COVID-19 Pandemic. arXiv 2020, arXiv:2007.10712. [Google Scholar] [CrossRef]
  15. Paakki, H.; Vepsäläinen, H.; Salovaara, A. Disruptive online communication: How asymmetric trolling-like response strategies steer conversation off the track. Comput. Support. Coop. Work (CSCW) 2021, 30, 425–461. [Google Scholar] [CrossRef]
  16. Russo, G.; Verginer, L.; Ribeiro, M.H.; Casiraghi, G. Spillover of Antisocial Behavior from Fringe Platforms: The Unintended Consequences of Community Banning. In Proceedings of the International AAAI Conference on Web and Social Media, Atlanta, GA, USA, 6–9 June 2022. [Google Scholar] [CrossRef]
  17. Guy, I.; Shapira, B. From Royals To Vegans: Characterizing Question Trolling On A Community Question Answering Website. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018. [Google Scholar] [CrossRef]
  18. Rawat, C.; Sarkar, A.; Singh, S.; Alvarado, R.; Rasberry, L. Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia. In Proceedings of the 2019 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 26 April 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  19. Hamed, S.K.; Ab Aziz, M.J.; Yaakub, M.R. Fake News Detection Model on Social Media by Leveraging Sentiment Analysis of News Content and Emotion Analysis of Users’ Comments. Sensors 2023, 23, 1748. [Google Scholar] [CrossRef]
  20. Powell, A.; Henry, N.; Flynn, A.; Scott, A.J. Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian residents. Comput. Hum. Behav. 2019, 92, 393–402. [Google Scholar] [CrossRef]
  21. Jurgens, D.; Chandrasekharan, E.; Hemphill, L. A Just and Comprehensive Strategy for Using NLP to Address Online Abuse. arXiv 2019, arXiv:1906.01738. [Google Scholar] [CrossRef]
  22. Schooley, B.; Feldman, S.; Tipper, B. A Unified Framework for Human Centered Design of a Substance Use, Abuse, and Recovery Support System. In Advances in Intelligent Systems and Computing; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 175–182. [Google Scholar] [CrossRef]
  23. Tsuria, R. Get out of Church! The Case of #EmptyThePews: Twitter Hashtag between Resistance and Community. Information 2020, 11, 335. [Google Scholar] [CrossRef]
  24. Shafer, L. Substance Abuse: Avenues for Identity Articulation, Coalition Building, and Support During COVID-19. In Proceedings of the 39th ACM International Conference on Design of Communication, Virtual, 12–14 October 2021. [Google Scholar] [CrossRef]
  25. Mishra, P.; Yannakoudakis, H.; Shutova, E. Modeling Users and Online Communities for Abuse Detection: A Position on Ethics and Explainability. arXiv 2021, arXiv:2103.17191. [Google Scholar] [CrossRef]
  26. Kurrek, J.; Saleem, H.M.; Ruths, D. Enriching Abusive Language Detection with Community Context. arXiv 2022, arXiv:2206.08445. [Google Scholar] [CrossRef]
  27. Banko, M.; MacKeen, B.; Ray, L. A Unified Taxonomy of Harmful Content. In Proceedings of the Fourth Workshop on Online Abuse and Harms, Online, 17 April 2020; Association for Computational Linguistics: Kerrville, TX, USA, 2020. [Google Scholar] [CrossRef]
  28. Bryanov, K.; Vziatysheva, V. Determinants of individuals’ belief in fake news: A scoping review determinants of belief in fake news. PLoS ONE 2021, 16, e0253717. [Google Scholar] [CrossRef]
  29. Salminen, J.; Kandpal, C.; Kamel, A.M.; Jung, S.G.; Jansen, B.J. Creating and detecting fake reviews of online products. J. Retail. Consum. Serv. 2022, 64, 102771. [Google Scholar] [CrossRef]
  30. rurAllure Consortium. Route Selection: Promotion of Rural Museums and Heritage Sites in the Vicinity of European Pilgrimage Routes. rurAllure, 2023. Available online: https://ways.rurallure.eu/european-pilgrimage-routes (accessed on 13 August 2023).
  31. Furian, L.; March, E. Trolling, the Dark Tetrad, and the four-facet spectrum of narcissism. Personal. Individ. Differ. 2023, 208, 112169. [Google Scholar] [CrossRef]
  32. Santos, W.S.D.; Holanda, L.C.; Meneses, G.D.O.; Luengo, M.A.; Gomez-Fraguela, J.A. Antisocial behaviour: A unidimensional or multidimensional construct? Av. Psicol. Latinoam. 2019, 37, 13–27. [Google Scholar] [CrossRef]
  33. Hrčková, A.; Srba, I.; Móro, R.; Blaho, R.; Šimko, J.; Návrat, P.; Bieliková, M. Unravelling the basic concepts and intents of misbehavior in post-truth society. Bibl. An. Investig. 2019, 15, 421–428. [Google Scholar]
  34. González, C.; Varela, J.; Sánchez, P.A.; Venegas, F.; De Tezanos-Pinto, P. Students’ Participation in School and its Relationship with Antisocial Behavior, Academic Performance and Adolescent Well-Being. Child Indic. Res. 2021, 14, 269–282. [Google Scholar] [CrossRef]
  35. Moqadam, S.; Nubani, L. The Impact of Spatial Changes of Shiraz’s Historic District on Perceived Anti-Social Behavior. Sustainability 2022, 14, 8446. [Google Scholar] [CrossRef]
  36. Kolla, N.J.; Wang, C.C. Alcohol and Violence in Psychopathy and Antisocial Personality Disorder: Neural Mechanisms. In Neuroscience of Alcohol; Elsevier: Amsterdam, The Netherlands, 2019; pp. 277–285. [Google Scholar] [CrossRef]
  37. Peng, S.X.; Wang, Y.Y.; Zhang, M.; Zang, Y.Y.; Wu, D.; Pei, J.; Li, Y.; Dai, J.; Guo, X.; Luo, X.; et al. SNP rs10420324 in the AMPA receptor auxiliary subunit TARP γ-8 regulates the susceptibility to antisocial personality disorder. Sci. Rep. 2021, 1, 11997. [Google Scholar] [CrossRef] [PubMed]
  38. van Lith, K.; Veltman, D.J.; Cohn, M.D.; Pape, L.E.; van den Akker-Nijdam, M.E.; van Loon, A.W.G.; Bet, P.; van Wingen, G.A.; van den Brink, W.; Doreleijers, T.; et al. Effects of Methylphenidate During Fear Learning in Antisocial Adolescents: A Randomized Controlled fMRI Trial. J. Am. Acad. Child Adolesc. Psychiatry 2018, 57, 934–943. [Google Scholar] [CrossRef]
  39. O’Malley, L.; Grace, S. Social capital and co-location: A case study of policing anti-social behaviour. Int. J. Police Sci. Manag. 2021, 23, 306–316. [Google Scholar] [CrossRef]
  40. Gorsane, M.; Kebir, O.; Salmona, I.; Rahioui, H.; Laqueille, X. Jeu d’argent problématique et responsabilité pénale [Problem gambling and criminal liability]. L’Encéphale 2021, 47, 43–48. [Google Scholar] [CrossRef]
  41. Baptista, J.P.; Gradim, A. A Working Definition of Fake News. Encyclopedia 2022, 2, 632–645. [Google Scholar] [CrossRef]
  42. Plavén-Sigray, P.; Matheson, G.J.; Gustavsson, P.; Stenkrona, P.; Halldin, C.; Farde, L.; Cervenka, S. Is dopamine D1 receptor availability related to social behavior? A positron emission tomography replication study. PLoS ONE 2018, 13, e0193770. [Google Scholar] [CrossRef]
  43. Park, J.S.; Seering, J.; Bernstein, M.S. Measuring the Prevalence of Anti-Social Behavior in Online Communities. Proc. ACM Hum.-Comput. Interact. 2022, 6, 451. [Google Scholar] [CrossRef]
  44. Govindankutty, S.; Gopalan, S.P. From Fake Reviews to Fake News: A Novel Pandemic Model of Misinformation in Digital Networks. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 1069–1085. [Google Scholar] [CrossRef]
  45. Gelfert, A. Fake news: A definition. Informal Log. 2018, 38, 84–117. [Google Scholar] [CrossRef]
  46. Vajdová, D.; Masaryk, R.; Kostovičová, L. Intervention focused on discerning trustworthy and untrustworthy news in secondary school students. Nekonečno v Psychológii 2018, 87–96. [Google Scholar] [CrossRef]
  47. Lessenski, M. Common Sense Wanted Resilience to `Post-Truth’ and Its Predictors in the New Media Literacy Index 2018. Open Society Institute. (Report March 2018). Available online: https://www.rcc.int/p-cve/download/docs/medialiteracyindex2018_publisheng.pdf/86b2a49b8e61264e22c5f27798b1905b.pdf (accessed on 13 August 2023).
  48. Alghamdi, J.; Lin, Y.; Luo, S. Does Context Matter? Effective Deep Learning Approaches to Curb Fake News Dissemination on Social Media. Appl. Sci. 2023, 13, 3345. [Google Scholar] [CrossRef]
  49. Benjelloun, R.; Otheman, Y. Psychological distress in a social media content moderator: A case report. Arch. Psychiatry Ment. Health 2020, 4, 10. [Google Scholar]
  50. Bhaveeasheshwar, E.; Deepak, G.; Mala, C. ASocTweetPred: Mining and Prediction of Anti-social and Abusive Tweets for Anti-social Behavior Detection Using Selective Preferential Learning. In Innovations in Bio-Inspired Computing and Applications; Abraham, A., Bajaj, A., Gandhi, N., Madureira, A.M., Kahraman, C., Eds.; Springer: Cham, Switzerland, 2023; pp. 552–562. [Google Scholar]
  51. Chalás, F.; Stupavský, I.; Vranić, V. Discussion Manipulation, Language and Domain Dependent Models: An Overview. In Proceedings of the 2023 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 29–31 May 2023; pp. 136–141. [Google Scholar] [CrossRef]
  52. Srba, I.; Moro, R.; Simko, J.; Sevcech, J.; Chuda, D.; Navrat, P.; Bielikova, M. Monant: Universal and extensible platform for monitoring, detection and mitigation of antisocial behaviour. Behaviour 2019, 10, 17. [Google Scholar]
  53. Shu, K.; Mahudeswaran, D.; Wang, S.; Lee, D.; Liu, H. FakeNewsNet: A Data Repository with News Content, Social Context and Dynamic Information for Studying Fake News on Social Media. arXiv 2019, arXiv:1809.01286. [Google Scholar] [CrossRef]
  54. Politifact. PolitiFact—The Poynter Institute. 2023. Available online: https://www.politifact.com/ (accessed on 6 August 2023).
  55. Higgins, A.; McIntire, M.; Dance, G.J. Inside a fake news sausage factory: ‘This Is All About Income’. New York Times. 2016. Available online: https://www.nytimes.com/2016/11/25/world/europe/fake-news-donald-trump-hillary-clinton-georgia.html (accessed on 13 August 2023).
  56. NLTK Project. NLTK Documentation. 2023. Available online: https://buildmedia.readthedocs.org/media/pdf/nltk/latest/nltk.pdf (accessed on 1 August 2023).
  57. Khyani, D.; Siddhartha, B.; Niveditha, N.; Divya, B. An interpretation of lemmatization and stemming in natural language processing. J. Univ. Shanghai Sci. Technol. 2021, 22, 350–357. [Google Scholar]
  58. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. arXiv 2019, arXiv:1910.13461. [Google Scholar]
  59. Gullbadhar, A. Summarizing Wikipedia Pages Using Facebook’s BART Model in Python. 2022. Available online: https://levelup.gitconnected.com/summarizing-wikipedia-pages-using-facebooks-bart-model-in-python-e9d9d88f51f9 (accessed on 13 August 2023).
  60. Raval, P. Transformers BART Model Explained for Text Summarization. 2023. Available online: https://www.projectpro.io/article/transformers-bart-model-explained/553 (accessed on 13 August 2023).
  61. Baccouri, N. Deep-Translator 1.11.4. Available online: https://pypi.org/project/deep-translator/ (accessed on 13 August 2023).
  62. Wang, Y.; Shen, G.; Hu, L. Importance Evaluation of Movie Aspects: Aspect-Based Sentiment Analysis. In Proceedings of the 2020 5th International Conference on Mechanical, Control and Computer Engineering (ICMCCE), Harbin, China, 25–27 December 2020; pp. 2444–2448. [Google Scholar] [CrossRef]
  63. Proxet. Fundamentals of Statistical Natural Language Processing; MIT Press: Cambridge, MA, USA, 2021. [Google Scholar]
  64. Shu, K.; Mahudeswaran, D.; Wang, S.; Lee, D.; Liu, H. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. Big Data 2020, 8, 171–188. [Google Scholar] [CrossRef]
  65. Oriola, O. Exploring N-gram, word embedding and topic models for content-based fake news detection in FakeNewsNet evaluation. Int. J. Comput. Appl. 2021, 975, 8887. [Google Scholar] [CrossRef]
  66. Stupavský, I.; Dakić, P. Antisocial Behavior and the Dopamine Loop on Different Technological Platforms and Industries: An Overview. In Proceedings of the Eighth International Congress on Information and Communication Technology, London, UK, 24–26 February 2023; pp. 471–481. [Google Scholar]
  67. Shu, K. FakeNewsNet. 2021. Available online: https://www.kaggle.com/datasets/mdepak/fakenewsnet (accessed on 13 August 2023).
  68. Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake News Detection on Social Media: A Data Mining Perspective. ACM Sigkdd Explor. Newsl. 2017, 19, 22–36. [Google Scholar] [CrossRef]
  69. Shu, K.; Wang, S.; Liu, H. Exploiting Tri-Relationship for Fake News Detection. arXiv 2017, arXiv:1712.07709. [Google Scholar]
Figure 1. Methodology of research. Source: author’s contribution.
Figure 2. Antisocial behavior diagram. The presented figure is used with the consent of the original author from the source [33].
Figure 3. Cluster division according to the Open Society Institute. The presented figure is used with the consent of the original author from the source: [47].
Figure 4. The result of using sentiment intensity values in FakeNewsNet PolitiFact by emotion. (a) Real news in FakeNewsNet PolitiFact: the distribution of basic emotions in real news. (b) Fake news in FakeNewsNet PolitiFact: the distribution of basic emotions in fake news from the field of political news. Source: author’s contribution.
Table 1. Number of reports of real and fake news in the FakeNewsNet repository. Source: author’s contribution.

            | Real   | Fake
PolitiFact  | 624    | 432
GossipCop   | 16,817 | 5323
Table 2. The procedure for analyzing the text of headings. Source: author’s contribution.

Procedure           | Example Real                                                                       | Example Fake
Original title text | "Donald Trump exaggerates when he says China has 'total control' over North Korea" | "Barack Obama Tweets SICK Attack On John McCain, Says He Should Have Died"
Removing stop words | "Donald Trump exaggerates says China 'total control' North Korea"                  | "Barack Obama Tweets SICK Attack On John McCain, Says He Should Have Died"
Stemming            | "donald trump exagger say china 'total control' north korea"                       | "barack obama tweet sick attack on john mccain, say he should have die"
Summary             | "says china has 'total control' of North Korea"                                    | "Barack Obama tweets sick attack on john mccain, says"
Rating              | 'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0                                | 'neg': 0.507, 'neu': 0.493, 'pos': 0.0, 'compound': -0.8834
The result          | Neutral Sentence                                                                   | Negative Sentence
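The headline-analysis steps in Table 2 can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' exact code: the stopword set is a toy subset standing in for NLTK's list, and the compound-score cutoffs (±0.05) are VADER's conventional thresholds for labeling a sentence.

```python
# Toy sketch of the Table 2 pipeline (assumption: simplified stand-in for
# the NLTK stopword list and the VADER sentiment analyzer).

STOP_WORDS = {"when", "he", "has", "over"}  # toy subset of NLTK's stopwords

def remove_stop_words(title: str) -> str:
    # Case-sensitive matching, which is why the Title-Case words in the
    # fake headline ("He", "Should", "Have") survive, as seen in Table 2.
    return " ".join(w for w in title.split() if w not in STOP_WORDS)

def label_from_compound(compound: float) -> str:
    """Map a VADER compound score to a label using the usual +/-0.05 cutoffs."""
    if compound >= 0.05:
        return "Positive Sentence"
    if compound <= -0.05:
        return "Negative Sentence"
    return "Neutral Sentence"

real = "Donald Trump exaggerates when he says China has 'total control' over North Korea"
print(remove_stop_words(real))
# -> "Donald Trump exaggerates says China 'total control' North Korea"

print(label_from_compound(0.0))      # real headline rating  -> Neutral Sentence
print(label_from_compound(-0.8834))  # fake headline rating  -> Negative Sentence
```

The last two calls reproduce the "Rating" to "The result" step of Table 2: a compound value of 0.0 falls inside the neutral band, while −0.8834 is strongly negative.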
Table 3. Result using the compound value in Real News FakeNewsNet PolitiFact.

                | Positive | Neutral | Negative
Real PolitiFact | 72       | 462     | 90
Table 4. Result using the compound value in Fake News FakeNewsNet PolitiFact. Source: author’s contribution.

                | Positive | Neutral | Negative
Fake PolitiFact | 76       | 165     | 191
Table 5. Evaluation matrix values obtained using metrics for Real and Fake PolitiFact.

                | Accuracy | Precision | Recall | F1-Score
Real PolitiFact | 0.74038  | 0.87      | 0.85   | 0.86
Fake PolitiFact | 0.44213  | 0.54      | 0.79   | 0.64
