Introduction

Web 2.0 is a participatory platform whereby information and its dissemination are no longer in the hands of a few. This indiscriminate liberty regarding the dissemination of information has led to the circulation of a plethora of authentic content, but it has also opened the door to ‘questionable content’ such as fake news, misinformation and disinformation. Over the past few years, there has been a significant rise in the circulation of misinformation, disinformation, fake news and other problematic content, driven by the meteoric rise of social media platforms.

Web 2.0 not only saw the rise of social media, but also of blogs, online news portals and media sharing applications, and it coincided with the widespread availability of cheap SIM cards and low-cost smartphones. This led to a paradigm shift in the individual’s role in information dissemination. Individuals, who traditionally played a passive role as consumers of information rather than as producers or circulators of content, can now actively create and circulate information.

With this paradigm shift, the risk of abuse increased manyfold. Indiscriminate access and power brought a significant rise in misinformation, disinformation, propaganda and other problematic content. The Compact Oxford English Dictionary defines misinformation as false or inaccurate information given by someone. Disinformation is defined as “information intended to mislead”. Propaganda is defined as “information that is often biased or misleading, used to promote a political cause or point of view”, while satire is defined as “use of humour, irony, or exaggeration as a form of mockery or criticism”. In this article, all the above-mentioned kinds of information are grouped together under the umbrella term ‘questionable information’ or ‘questionable content’.

The abuse of technology to create and disseminate questionable information is producing a new form of “collective violence” and “collective victimisation”. The World Health Organisation has defined collective violence as “the instrumental use of violence by people who identify themselves as members of a group—whether this group is transitory or has a more permanent identity—against another group or set of individuals, in order to achieve political, economic or social objectives” (Zwi et al. 2002, p. 215), and the groups suffering from collective violence are collective victims (Vollhardt 2012). Current research on collective violence/victimisation is concerned with the experience, denial/recognition of victimisation, victim identity and collective memories, and includes violence caused by war, terrorism, state-perpetrated violence and organised violent crime (e.g. Bagci et al. 2018; Littman and Paluck 2015; Vollhardt 2020).

While research has shown that people see fake news as a bigger threat than violent crime, illegal immigration and even terrorism (Mitchell et al. 2019), there is still no research discussing how the abuse of technology in the form of questionable information is causing a new form of collective victimisation. Although questionable information might seem relatively harmless at the individual level, it can play a significant role in shaping the thinking of a large segment of society and influence decision making. When it concerns political content or sensitive issues, it can cause serious harm to society, making everyone a victim.

This article focuses on the significance of information in a democratic system and on the scope and nature of questionable content. It proposes that, to successfully address questionable information and collective victimisation, we need to consider its rationale and modus operandi (both methods and types). It also describes approaches undertaken by countries to meet the challenge of questionable information and their efficacy from the perspective of collective victimisation.

Role of Information in a Democratic Society

Democratic rights, which the citizens of a State acquire by virtue of being citizens, are considered paramount for any democratic system to survive. A true democracy not only ensures that people are aware of their democratic rights but also that they are correctly informed about the obligations and duties which democracy entails (Kuklinski et al. 2000). Only when individuals have the tool of information can they judiciously and appropriately exercise their democratic rights, including but not limited to voting rights.

In the nineteenth century and into the twentieth century, citizens consumed political news primarily through newspapers. Politicians and other political actors relied on newspapers as their medium to propagate their ideologies or defend their actions. With the development of electronic media in the form of radio and television, political news found a faster and more attractive medium through which to reach most members of society (Lazer et al. 2018). While society was still grappling with the challenges posed by televised journalism, the Internet age dawned, further amplified by the introduction of the Web 2.0 platform upon which social media thrived. Online news consumption is reaching new heights due to the analytics and algorithms of social media, to the extent that this form of media is well on track to eventually replace television (Nguyen and Western 2006). Social media plays a significant role in the personal, social, economic and political transformation of individuals and can influence people’s mental health and decision-making capacity.

In the recent past, several existing forms of crime have been facilitated through social media, and new forms of crime have been created which depend upon technology, such as tampering with computer source documents, identity theft, phishing, online lottery scams, illegal access, data interference and child pornography. Many of these crimes are technological extensions of existing crimes such as stealing computer resources, cheating by impersonation, terrorism and sexual crimes (Broadhurst and Chang 2013; Chang 2017). The theory of victimisation, developed to address the concerns and issues of victims of crime, has remained static in scope, limiting itself to individuals and predominantly to victims of conventional crimes.

Questionable content appearing on web-based platforms differs from content in traditional media in two primary ways: (a) the traceability of the source of information and (b) the limit and extent of circulation. For instance, a news piece which is incorrect and falsified, when circulated through traditional media such as a newspaper or telecast, is easily traced, and suitable action can be taken directly against the perpetrators. But in the case of similar misinformation on social media, the original source of the story often remains unknown. This added protection shields perpetrators and creates a more favourable environment for those wishing to circulate questionable content. Secondly, the reach of content distributed on social media is less certain than that of content distributed through traditional media, where TV ratings and print circulation are approximately known. Content on social media has the potential to “go viral” and reach many more people than traditional media. Thus, even though the consumption of questionable content on social media is predominantly individualistic, the ultimate impact is on society as a whole and can cause collective victimisation.

Questionable Content

Questionable content in this context can be crystallised as politically or ideologically motivated online disinformation, fake news, hate speech, online misinformation, foreign encroachment in the domestic affairs of the State, misreporting and misconstrued satire (Shin et al. 2018; Tenove et al. 2018). Such content has the potential to impact individuals and the population collectively by changing the attitude of consumers, creating scepticism towards the electoral process, blocking educated political decision-making, causing political unrest, communal riots and violence, sabotaging free and fair electoral processes, altering the political landscape, marginalising certain classes or communities and damaging the economy (Brown 2018). The threats to the collective are not theoretical, as the world has already witnessed events such as the Pizzagate incident (Persily 2017), Russian interference in the 2016 U.S. Presidential election (Marvel 2019) and a wave of disinformation originating in China fed onto Taiwanese Internet domains, seeking to interfere in local and national elections (Wong and Wen 2020).

India, although a country with relatively limited Internet penetration, has a significant number of social media users, and the spread of misinformation is extensive. The dissemination of questionable content has caused communal violence, lynching and innumerable incidents of violence against particular groups of people in India, as well as influencing the 2019 election (Arun 2019; Roozenbeek and van Der Linden 2019). Fake news, online misinformation and disinformation regarding COVID-19 (see below) occurred to such an extent that the issue moved from being a health pandemic alone to one of communal tension and religious conflict (Ellis-Petersen and Rahman 2020). Some of the questionable content was, on its face, absurd, and yet many people believed it (Sengupta 2020). The nature of the platform makes it extremely difficult to curb. These forms of victimisation, viewed through the prism of standard principles embodied in constitutional law, human rights law, criminal law, the basic tenets of democracy, the UN Charter and international law, constitute violations of India’s domestic law as well as international law.

In the space of a few months in early 2020 in India, there were a number of cases of questionable content regarding COVID-19, targeting different political parties and religions, which had an impact on the collective even though they were accessed individually:

  (a)

    An audio clip claimed that a vendor of a certain religious background was spreading COVID-19. The perpetrator produced a 6 min and 42 s audio clip in which he suggested that a vendor was selling vegetables at a low price with the ulterior objective of spreading COVID-19. The audio clip was examined by the fact-checking organisation ‘Boom’ and found to be fake (Alphonso 2020). The perpetrator apparently intended to use the audio clip to create communal hatred at a time of pandemic. Individuals received the message in their personal space, but the insecurity it created could have given rise to collective distrust of vendors based on their religious background, and even violence, if not tackled in time. Many would not be aware of the follow-up fact-checking and would continue to live with the misinformation and the prejudice it fuelled.

  (b)

    During COVID-19, a photo of Switzerland’s Matterhorn mountain lit with the Indian flag was tweeted, claiming that it was lit as a sign of hope after the leader of the government supplied hydroxychloroquine tablets (HCQ, a drug for the treatment of rheumatoid arthritis, believed by some scientists to lessen the symptoms of COVID-19). The tweet went viral and was shared widely via Twitter and Facebook. A fact-checking organisation in India, altnews.in, found that, although the image of the Indian national flag had indeed been projected on the Matterhorn, the claim that this happened because India had supplied HCQ tablets to the country was false. The projection on that day was to express solidarity with Indians in the fight against COVID-19 and was one of a series of flag projections intended as a sign of hope as the world battled the novel coronavirus (Kinjal 2020a, b). While the tweet in itself seems harmless, it had the intended effect of encouraging Indians to feel an ‘exclusive’ and positive emotion about the nation and the performance of the political party in government.

  (c)

    On Twitter, a picture purporting to show social distancing practice during COVID-19 in Mizoram, a state in northeastern India, received many likes. However, Boom (India) and Boom Myanmar verified the picture and found that it was not of Mizoram but of Kalaw, a hill town in Myanmar’s Shan State (Nabodita 2020). Some would argue that, because it encouraged social distancing, its accuracy was unimportant as it had a positive influence over people. However, it was fake news, and it might have had the perverse effect of causing additional COVID-19 cases through a false sense of security based on the belief that safe practices were protecting the community.

  (d)

    The impact of questionable content is felt most strongly when celebrities, political personalities or other influential individuals circulate and share the information. For example, a video from Bijnor, a district in the Indian state of Uttar Pradesh, depicted an elderly fruit seller belonging to a minority faith who was accused of sprinkling urine over bananas to be sold. Eminent political figures and media personnel fuelled the spread by sharing the video, and it quickly went viral. It was later found, and verified by Bijnor police, that the elderly fruit seller had only washed his hands with water from the bottle and did not sprinkle urine as claimed (Jha 2020).

The above incidents illustrate the power of questionable content and the damage it can cause to individuals and societal harmony. This damage can be amplified very quickly by the participation of entertainment or sporting celebrities and political leaders as potential ‘super spreaders’. The rationale for targeting such individuals is simple: their re-posting is understood by their supporters as an endorsement, giving a significant boost to the circulated content, even when it is questionable.

Gossip and rumours have undoubtedly existed since the invention of languages; however, the invention of the Gutenberg printing press in 1440 enabled precise and rapid reproduction of books, dramatically reducing their cost and increasing their availability, and thereby also increasing the scope for the circulation of misinformation and disinformation (Posetti and Matthews 2018). The advent of the participatory web interface, with its indiscriminate access to information, has again significantly increased the opportunity for misinformation and disinformation and the speed at which they circulate. To access social media, all an individual needs is a working network connection and a device supporting social media applications or websites. This indiscriminate access has spread far faster than the population’s understanding of the nature of the technology. With no qualification necessary to use and access social media, the perpetrators and victims of questionable content are separated by no more than a click (Chang 2012).

We argue that the industry of misinformation and its ancillary activities can be regarded as a new type of collective violence, generating a new form of collective victimisation in which individuals are not even aware that they are victimised. We consider here the rationale and motivation behind questionable content, and then its characteristics.

Rationale and Motivation Behind Questionable Content

While any individual with a device supporting social media platforms can potentially be a perpetrator, perpetrators tend to be entities with particular objectives, such as political entities, extra-political entities, extremists (Ben-David and Matamoros Fernández 2016) and individuals or groups of individuals with nefarious motives. There are websites and portals entirely dedicated to the production of fabricated and manipulated information that operate under a name deceptively similar to that of a legitimate news organisation (Allcott and Gentzkow 2017). India is periodically a victim of such websites, as evidenced by the website “viralinindia”, which was shut down on account of abusing information prior to the 2019 general election. However, such entities are hard to shut down permanently, as they can easily re-emerge in some other form (Usha 2019).

While perpetrators capitalise on the insecurities, prejudices and limited education of victims, and on platform algorithms, the victims are unaware of and not alert to the perpetrators’ motives. Unwittingly, they aid the perpetrators to (i) polarise the population for or against a particular cause, (ii) evoke emotions among the population and cloud independent and rational judgement, (iii) spread conspiracy theories and infuse distrust in the existing knowledge base, (iv) troll and provoke an existential crisis in an individual or even a group, (v) deflect blame onto another and create a parallel narrative and (vi) impersonate (Roozenbeek and van Der Linden 2019). Perpetrators engage in all the above strategies either for pecuniary benefit or for ideological validation (Allcott and Gentzkow 2017; Silverman and Singer-Vine 2016). Most media attention has focused on the use of these strategies by foreign agencies interested in domestic politics. During the 2020 Taiwanese Presidential election, questionable content was put into circulation; it did not stop at fake news but sought to manipulate public opinion by spreading misinformation (Kuo 2019; Lee and Blanchard 2019). Having regard to China’s claim that Taiwan is part of China, such interference by an international entity must be viewed as potentially threatening Taiwan’s security and the coveted principle of self-determination.

Characteristics of Questionable Content

The modus operandi, together with the content itself, helps us to determine and identify the actor, the rationale and the intended target of questionable content. Various forms of questionable content are crafted to attain different objectives and to impact individual behaviours or attitudes differently. Collectively, they are often termed a semantic attack, as questionable content tends to adversely impact the semantics of information (Kumar and Geethakumari 2014). The factors which differentiate legitimate information from questionable information are certainty, accuracy, comprehensiveness and deceptiveness.

A semantic attack is directed towards individual users of social media and is crafted in such a manner as to awaken the insecurities of individuals or reaffirm their existing beliefs (Allcott and Gentzkow 2017). In a nation with a heterogeneous and diverse population, the impact of miscommunication and similar events can be tremendous. A nation-state with diverse religious practices, cultural heritage, socio-economic standards and educational disparity, and with social-network-enabled devices in the hands of much of such a diverse population, is prone to becoming a victim of insecurities. Social media provides an insight into the lives of others, and individuals who previously had limited knowledge about the lives or thoughts of others can now access them with the click of a button. Social media can have a significant detrimental psychological effect due to self-comparison with other participants (Vogel et al. 2014). The comparisons which stem from insecurities are not limited to lifestyle comparisons but extend to intellectual comparisons. People seek to influence others by showcasing their intellectual abilities and ideological bent, which is often guided by intolerance and a rigidity towards diverse outlooks, leading to extremism and polarisation (Jost et al. 2018). The propagation of questionable content is further aided by the speed of its circulation and the uncertain geographical location of its source.

The styling and text of questionable content are crafted in a particular manner depending on the type of questionable content, such as satire, fake news, disinformation, propaganda or misinformation, and on its audience. The style of questionable content differs from real news on several counts. Fake news is often crafted with a longer, striking title or heading which attracts immediate attention. The vocabulary used is simpler, with limited use of technical words, so that even a reader with limited education or intellectual abilities is not discouraged from reading it. Furthermore, the presentation of the content is colourful, capitalised and dramatised to grab the attention of the potential target (Horne and Adali 2017).

When it comes to content, articles are shorter than real news and contain fewer punctuation marks and quotes, which lowers the possibility of tracing the content back to an authoritative figure. There is greater use of adverbs, pronouns and redundancies. The content also favours self-referential terms such as ‘I’, ‘We’, ‘You’ and ‘Us’ (Horne and Adali 2017). Such self-referential terminology makes the content appear to speak directly to the reader, or on the reader’s behalf; consequently, the reader feels connected with both the message and the messenger.
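The stylistic cues described above lend themselves to simple automated measurement. The Python sketch below computes a handful of illustrative surface features (title length, capitalised title words, body length, punctuation, self-referential pronoun rate). The feature set is loosely inspired by the kinds of signals Horne and Adali (2017) report, but the function and its details are a hypothetical simplification for illustration, not their actual model.

```python
import re

# Illustrative sketch only: the features below are inspired by, but not taken
# from, the stylistic cues reported by Horne and Adali (2017).
def style_features(title: str, body: str) -> dict:
    """Extract simple stylistic signals from a news item."""
    words = re.findall(r"[A-Za-z']+", body)
    self_refs = {"i", "we", "you", "us"}
    return {
        "title_length": len(title.split()),                   # fake news: longer titles
        "title_all_caps_words": sum(w.isupper() for w in title.split()),
        "body_length": len(words),                            # fake news: shorter bodies
        "punctuation_count": len(re.findall(r"[.,;:!?\"']", body)),
        "self_reference_rate": sum(w.lower() in self_refs for w in words)
                               / max(len(words), 1),          # more 'I', 'we', 'you', 'us'
    }

features = style_features(
    "YOU Won't BELIEVE What They Are Hiding From Us",
    "We all know the truth. You and I deserve answers and they will not give us any.",
)
```

In a real detection pipeline, features of this kind would feed a trained classifier; on their own they are only weak heuristics.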

Satirical websites produce content, some of which can cause damage similar to fake news and misinformation when it is received without context. The wide range of actors also includes websites producing a mixed format of news, with a certain portion true and the other portion fabricated, thereby clouding the judgement of people at large (Allcott and Gentzkow 2017). India has seen satire quoted by eminent political figures, lending authenticity to the satire, at least in the minds of some. For example, a leading politician resorted to quoting from “fakingnews”, a portal which declares itself a satire and humour website, and was subsequently re-tweeted by the official handle of the political party forming government (Chaudhuri 2020). The device of questionable content is equally exploited by the opposition in the Indian parliament, which uses this mechanism to question the credibility of the governing political party. Opposition parties have often attempted to malign the image of the Prime Minister by questioning his lifestyle and using visuals which are either wrongly dated or wrongly contextualised (Kinjal 2020a, b).

Collective Victimisation

Questionable content in India frequently features elements such as religious intolerance, people with a certain political affiliation violating the law, photos and news related to celebrities, outrageous claims regarding the performance of government and international accolades received by India. Most of the content is supported with photographs from unconnected events (see examples from BoomLive, https://www.boomlive.in/fake-news). Questionable content capitalises on the strong religious sentiments of the majority; on political sentiment, which is itself connected to religious affinity; on brief descriptions paired with an image, which aid interpretation by people with limited education; or on celebrities, who are often revered as gods by a large section of India’s population.

Not every individual accessing social media is bound to fall into the trap laid by questionable content. The most vulnerable are individuals with limited education or awareness about the medium and its scope, those who lack an objective outlook, or those whose experiences have affected their psychological condition, making them insecure or prone to seeking validation of an existing prejudiced ideology (Allcott and Gentzkow 2017; Silverman and Singer-Vine 2016). Having an education does not guarantee protection from questionable content. However, it can be argued that limited education (including digital literacy) plays a significant role in the victimisation of individuals: it enables them to access social media but does not enable them to discern or make rational choices in favour of real information. A lack of awareness and of skills to distinguish fake news from real news enhances the circulation of questionable material. The insecurities upon which questionable content capitalises were also in evidence in Taiwan during the elections in 2018 and 2020, as well as during the COVID-19 pandemic.

The Individual as Unwitting Perpetrator

To socialise, individuals no longer need to leave their homes; they can reach out to anyone and everyone through the algorithms of social networking. An individual sitting in the comfort of their home can spend hours on a social network, reading, watching, writing and contributing content. The significant change, however, is that they are now also engaged in sharing the content they consume. While the individual presumes they are acting on their own in their private domain and not involving others, the moment their activities involve sharing, the impact goes beyond them individually. With the act of sharing, an individual actively enters the realm of the social network’s algorithms, and the shared content can have an impact beyond the individual, extending to the individual’s social network and then to the social networks of that network, growing exponentially until it eventually impacts society collectively, or at least a portion of it. When the information circulating is real and genuine, the impact is not necessarily adverse; however, unchecked questionable content undoubtedly has an adverse impact on people collectively. Except for the perpetrators who introduced the questionable material, all the other individuals who consumed it and actively shared it without verification can be considered collective victims of questionable content.
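The exponential dynamic described above can be illustrated with a toy simulation. The model below is entirely hypothetical: a uniform follower count and a fixed re-share probability are gross simplifications of real social networks. It serves only to show how a single act of sharing can cascade far beyond the original individual.

```python
import random

# Toy cascade model with assumed parameters (uniform follower count, fixed
# re-share probability); not a model of any real platform.
def simulate_cascade(followers_per_user: int, share_prob: float,
                     max_rounds: int, seed: int = 42) -> list[int]:
    """Return the number of people exposed in each round of sharing."""
    rng = random.Random(seed)
    reached_per_round = [1]  # round 0: the original sharer
    sharers = 1
    for _ in range(max_rounds):
        # each current sharer exposes their followers; a fraction re-share
        exposed = sharers * followers_per_user
        sharers = sum(rng.random() < share_prob for _ in range(exposed))
        reached_per_round.append(exposed)
        if sharers == 0:
            break
    return reached_per_round

rounds = simulate_cascade(followers_per_user=50, share_prob=0.1, max_rounds=4)
total_reach = sum(rounds)
```

With 50 followers per user and a 10% re-share rate, each round exposes on average roughly five times as many people as the last, which is the sense in which individual sharing scales into collective exposure.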

Often, when questionable content comes from someone in a recipient’s contact list, or from friends and family, the propensity is to believe the content; it is misconstrued as authentic and genuine, in contrast to content received from an unknown person. Recipients of questionable content often do not undertake a fact-finding exercise or analyse the information from an objective standpoint to ascertain its reliability; they are prone to trust their friends and family over social media unless there is an ideological difference with the actor. With this trust reposed in the contact and the network, the recipient turns into an actor who feels the need to share and inform others in their contact group, thus becoming a victim as a recipient while also inflicting injury on others.

Acting Individually, Impacting Collectively

An individual sharing questionable content has an impact on the collective by changing consumers’ attitudes towards a particular issue or by creating general indifference towards an election through scepticism and distrust (Persily 2017). Irrespective of the end goal, whether pecuniary benefit or ideological reach, questionable content creates a “blanket of fog” which conceals authentic information and sows confusion over what to believe, leading people to fall prey to the circulated content. The idea of the “educated political decision” is then lost, producing a situation where voters have been exposed to incorrect information that influences how they cast their votes, or whether they vote at all. When this results in the election of individuals who would not otherwise have been elected, it can be regarded as collective victimisation, as it engulfs the majority of the population (Persily 2017).

The experience of the USA in 2016 suggests that social media can play a vital but not decisive role in communicating electoral news and that the average American voter did not believe just any fake news; rather, they were likely to believe stories that favoured their preferred candidate (Allcott and Gentzkow 2017). India, the world’s largest democracy, has also fallen victim to questionable content circulated over web-based social media platforms. The Oxford Internet Institute (2019) suggested that “the proportion of polarizing political news and information in circulation over social media in India is worse than all of the other country case studies we have analysed, except the US Presidential election in 2016”. Data collected from February to April 2019, in the months immediately before the 2019 general election, showed that both the Bharatiya Janata Party (BJP) and the Indian National Congress (INC) shared a substantial amount of news on Facebook that the Institute classified as “divisive and conspiratorial content”, i.e. junk news and information (Oxford Internet Institute 2019). The potential of questionable content to impact a collective is not limited to political events, as was seen during the COVID-19 pandemic.

Regulating ‘Questionable Content’ in the Digital Age

Crimes and forms of crime have been ever-evolving, and the Internet age has created both new opportunities for crime and new crimes. The big question before the government of each state is how to regulate information systems, and specifically questionable content. Government-imposed regulation can be a double-edged sword: regulations can eliminate or restrict the flow of questionable content, while at the same time potentially acting as a legally sanctioned mechanism to gag real news and ultimately violate media independence, freedom of information and the right to free speech. The balance is difficult to strike, as India saw when the Indian Government was about to enforce a rigorous law suspending the accreditation of journalists propagating questionable content but soon froze it owing to protests from the media. In India, questionable material (rumours) circulated deliberately over WhatsApp, a popular platform for text and media exchange over a smartphone, has resulted in several incidents of lynching (Arun 2019). The Indian Government felt that the onus was on WhatsApp to stop the acts of lynching. Through notices to WhatsApp, the government put pressure on the company to address the problem by preventing the messages from spreading. WhatsApp responded by introducing a “forwarded” label to tell the reader that the individual sharing the content did not create it. While web-based platforms such as WhatsApp are undoubtedly being used to amplify and target hate speech, if the aim is to limit incitement to violence, then other factors that contribute to the production and promotion of questionable content, such as the context and the roles of leaders and local police, need to be addressed (Arun 2019). A holistic approach to regulation is needed, including the identification and application of penal laws against perpetrators. Equally, those vulnerable to spreading questionable content need to be made cyber-security aware, to protect themselves both from the content and from spreading it (Chang and Coppel 2020).
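The “forwarded” label mentioned above can be sketched as a small piece of client-side logic. WhatsApp’s actual implementation is proprietary and end-to-end encrypted; the field names, the five-chat forwarding limit and the “Forwarded many times” threshold below are assumptions based on public descriptions of the feature, not its real code.

```python
from dataclasses import dataclass

# Hypothetical cap on how many chats a message can be forwarded to at once.
FORWARD_LIMIT = 5

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many hops this copy has travelled

    @property
    def label(self) -> str:
        # Assumed thresholds for illustration only.
        if self.forward_count >= 5:
            return "Forwarded many times"
        if self.forward_count >= 1:
            return "Forwarded"
        return ""

def forward(message: Message, chat_ids: list[str]) -> list[Message]:
    """Forward a message to several chats, propagating the forward count."""
    if len(chat_ids) > FORWARD_LIMIT:
        raise ValueError(f"Cannot forward to more than {FORWARD_LIMIT} chats")
    return [Message(message.text, message.forward_count + 1) for _ in chat_ids]

original = Message("Check this out!")
copies = forward(original, ["chat-a", "chat-b"])
```

Because the forward count travels with the message itself rather than with any central record, a label of this kind can work even when the platform cannot read message contents.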

In April 2019, the U.S. Law Library of Congress published a report on initiatives taken by selected countries from different regions to counter the menace of fake news. The common concern across the study was the role and impact of questionable content in the free and fair election process (The Law Library of Congress 2019). A similar study was done by www.poynter.org, with selected countries representing various regions (see https://www.poynter.org/ifcn/anti-misinformation-actions/). Both studies reveal that countries have undertaken three approaches: (a) steps by government to monitor, assess and assist in the reduction of questionable material over social networking sites; (b) steps imposing sanctions and strict measures against questionable content and (c) awareness-raising steps to control collective victimisation. However, there is no uniform approach among nation states.

Countries are attempting to address the growing challenge of questionable content through different measures, including monitoring, imposing sanctions, conducting awareness programs and demanding accountability. However, regulation is not a simple answer, as was seen in Malaysia, where legislation passed in 2018 faced heavy criticism for its broad definition of ‘fake news’ and was scrutinised as potentially oppressive and regressive (The Law Library of Congress 2019). A similar stance was taken in Israel, where there has been growing apprehension within the political opposition that excessive governmental control over information systems could lead to violations of the essential right to information and freedom of speech (The Law Library of Congress 2019).

In both India and Myanmar, the Internet in parts of the country was shut down to stop the dissemination of information, ostensibly for security reasons but also to limit awareness of the situation on the ground. This can be seen as a gagging of the right to free speech and an excessive imposition that not only violates human rights but also adversely impacts the economy (Aung and Moon 2020; Kiran 2020). In 2018, the Indian government proposed to penalise journalists for publishing and propagating fake news; however, the proposal was withdrawn amid protest and claims of interference by the Prime Minister’s Office (Dutta 2018; Khalid 2018).

Rather than impose a repressive regulatory approach, the Taiwanese Government has adopted a "humour over rumour" strategy to counter questionable content. To provide timely and correct information, the government uses humorous memes, either mocking government officials themselves (e.g. the meme with the Premier showing his rear saying "We only have one butt" (see Image 1) to discourage panic buying of toilet paper during the COVID-19 pandemic) or using a "spokesdog" (rather than a spokesperson) to communicate its public messages (see Image 2). These messages successfully attract people’s attention in a timely manner and effectively cut back the dissemination of questionable content.

Image 1: We only have one butt. Source: Ministry of Economic Affairs, Taiwan

Image 2: Timing to wash hands by the Spokesdog. Source: Ministry of Health and Welfare, Taiwan

In another alternative to government regulation, fact-checking organisations, such as altnews.in (India), boomlive.in (India) and mygopen.com (Taiwan), investigate and identify questionable content. However, they are not always perceived as independent. Furthermore, social media platforms and applications that curb questionable content not only assist governments but also run the risk of becoming, or appearing to be, unaccountable agents of the government in determining what is acceptable content.

In addition to the above, other innovative measures have been developed by participants of the dotcom world (Chang and Grabosky 2017). One such innovation is an online game in which players take on the role of a producer of questionable content; through this role-playing, players receive psychological training in identifying the techniques used to produce such content (Roozenbeek and van der Linden 2019). Regulation will never be enough to protect the population from questionable content, and there also needs to be a focus on "hardening" the target. Cybersecurity awareness training programs that equip individuals with the ability to discriminate between real news and questionable content form part of the armoury. One example of such an effort is CyberBaykin (see https://www.facebook.com/CyberBayKin/), which was created to raise awareness about cyber safety and risk in Myanmar (Chang and Coppel 2020).

Conclusion

It is evident that questionable content over social media, in the form of fake news, misinformation, disinformation, propaganda and misconstrued satire, has become a menace to reckon with. It is also acknowledged that human rights relating to freedom of speech and the right to information are threatened. Effective regulation of the world of questionable content will not be possible unless measures such as monitoring and sanctions are aided by awareness and accountability measures.

One of the key barriers to successfully regulating questionable content in the information system is the lack of acknowledgment that questionable content is collectively victimising the nation’s population. The primary challenge to such an acknowledgement lies in the limited construct of the ‘victim’. Traditionally, the subject of a criminal offence is considered the victim and, barring a few circumstances such as war, genocide and similar acts, it is an individual who forms the subject matter of victimisation. Often, psychological damage is not considered victimisation because evidencing the criminal act is difficult. Extending the idea of victimisation to a collective is more difficult still.

A question may arise: even after acknowledging that questionable content plays an adverse role in the electoral process, why is this not categorised as collective victimisation of the nation’s population? The answer might lie in the lack of scope to provide a compensatory privilege to a collective, which is the essence of the study of victimology. There is also another significant rationale for not categorising a nation’s population as a collective victim: the authority or political force shouldering the responsibility for regulating and addressing the collective victimisation may itself have taken advantage of the menace of questionable content. However, that remains a subject for later study. Nonetheless, to make progress, we as a community need to appreciate and accept at the outset the concept of collective victimisation of a nation’s population resulting from questionable content before we try to make inroads to resolve and address the problem.