Taiwan’s Public Discourse About Disinformation: The Role of Journalism, Academia, and Politics

ABSTRACT Experts have ranked Taiwan as the country most exposed to disinformation. This assessment is not surprising, as many exposed disinformation cases can be linked to Chinese state-aligned actors as well as to domestic political actors. Academic researchers, journalists, and the civic tech community have played an essential role in the fight against disinformation in Taiwan and in the emergence of misinformation studies as a new research field. While disinformation is a major recurring issue in Taiwan, the “Western” debate within academia and journalism has taken a critical turn regarding the assumed effects of disinformation. Our study focuses on this potential disconnect between the international and the Taiwanese debate about disinformation. Combining automated and manual content analysis, we evaluate what role academics and journalists play in the public discourse and what part of this debate reaches the largest audience. We show how Taiwan’s public misinformation discourse has evolved vis-à-vis the international discourse, what role misinformation studies play in this discourse, what part of the discourse reaches the widest audience, and what parts of the discourse could be problematic.


Introduction
When the 2019 Varieties of Democracy report was published, it ranked Taiwan as the democracy most exposed to disinformation from foreign governments (Walsh 2020). This conclusion is not surprising, as Taiwan's sometimes dysfunctional media system, in combination with the so-called China factor, is a relatively long-standing problem (Fong, Wu, and Nathan 2021; Wu 2016). The V-Dem project's ranking, based on expert evaluations, is also, to a certain extent, supported by the exposed cases of disinformation that can be linked to Chinese state-aligned actors as well as domestic political actors (Lin and Wu 2019). Researchers and journalists have uncovered many instances of dis- and misinformation that involve content farms abroad, possible Chinese interference, and domestic astroturfing attempts in the form of paid online commenters on social media (Kuo 2019; Lee et al. 2020; Lin and Wu 2019; Liu, Ke, and Xu 2019).
And while for most uncovered disinformation it is impossible to determine whether official Chinese state-affiliated groups or amateurs are behind these efforts, there is general agreement among experts that China has increased its influence operations in recent years (Lee et al. 2020; Templeman 2020). Due to these specific circumstances, Taiwan has been identified as a particular case by a delegation from the European Parliament, which visited Taiwan in November 2021 and described Taiwan as a "testing ground for how China will operate elsewhere" (European Parliament 2021, 2). Domestically, various actors such as academic researchers, journalists, and the civic tech community (Yen 2020) have played an essential role in the fight against disinformation and in the emergence of misinformation studies as a new research field (Hu 2018, 2021).
While disinformation, especially in the form of foreign interference, has been an international focal point for researchers since the Ukraine crisis in 2014 and the US presidential election in 2016 (Khaldarova and Pantti 2016; Saurwein and Spencer-Smith 2020), there has been a critical turn in the debates about the impact of disinformation campaigns. This critical turn is the starting point for our study. While disinformation is a recurring issue in Taiwan, the "Western" debate within academia and journalism has taken a critical turn (Bernstein 2021). Scholars such as Jungherr and Schroeder (2021) have called the prominence of disinformation in the public discourse a "moral panic," as empirical findings have shown that disinformation is a somewhat limited problem in the US context. Claire Wardle (2020) echoes this assessment and describes how a so-called "disinformation industrial complex" was established after the US election, arguing that journalism might have "overcorrected on foreign influence." Furthermore, fake news accusations as a political practice were used by former president Trump against different media outlets and political opponents (Lischka 2019), and by covering them, the media might help to undermine trust in institutions (Scheufele and Krause 2019). We are thus interested in whether such a critical turn is also warranted in the Taiwanese context. There seems to be a disconnect between the international and the Taiwanese debate about disinformation. Our study specifically focuses on domestic debates about disinformation in the media. We do this by analyzing the public debates about disinformation in 25,740 Taiwanese news articles published between 2016 and mid-2021 in four major media outlets in Taiwan, covering the political spectrum and different types of outlets. With an automated content analysis combined with manual coding and a qualitative analysis of representative cases, we identify the issues around disinformation and the actors involved in these debates. Eventually, we answer the question of whether the discussion about disinformation can best be described as "moral panic" and how this puzzle of the disconnect between the international and the domestic debate can be explained. We will first briefly discuss the international academic literature focusing on disinformation and the role of the above-described critical turn. We will then introduce the local context and briefly summarize the existing local scholarship before presenting our analysis of Taiwan's public disinformation discourse.

International Misinformation Research
While the role of disinformation in journalism and politics is not a new phenomenon (Mejia, Beckermann, and Sullivan 2018), it gained a lot of public and scholarly attention in the wake of the 2016 US presidential election (Allcott and Gentzkow 2017; Farkas and Schou 2018). Overall, the research focusing on disinformation is interdisciplinary. For example, much conceptual work has been done in journalism studies and communication science (Egelhofer and Lecheler 2019; Tandoc, Lim, and Ling 2018; Wardle and Derakhshan 2017). Computer scientists, on the other hand, have mainly contributed new methods that should help to automatically identify "fake news" (Shu et al. 2017) or so-called bots that automatically spread disinformation on social media platforms (Cresci 2020). Another strand of research, especially within the social sciences, employs experimental designs to test the effectiveness of fact-checks (Margolin, Hannak, and Weber 2018) or media literacy interventions (Guess et al. 2020). With the Harvard Misinformation Review, there now even exists a journal specifically focusing on the issue of mis- and disinformation.
The term "fake news" has been used by many researchers in the aftermath of the 2016 US election (e.g., Allcott and Gentzkow 2017; Guess, Nagler, and Tucker 2019), as this research mainly focused on a subset of media outlets that were classified as untrustworthy and responsible for spreading false information. However, it is challenging to define fake news clearly (Tandoc, Jenkins, and Craft 2019). Especially in public debates, the term is often not used to describe false content as a journalistic genre but rather as a label used by political actors to discredit the media or attack political opponents (Egelhofer et al. 2020; Egelhofer and Lecheler 2019; Farhall et al. 2019; Lischka 2019). Scholars recognized this issue early and developed different conceptual clarifications. For example, Wardle and Derakhshan (2017) introduced a framework that distinguishes between mis-, dis-, and mal-information. Furthermore, they propose distinguishing between the intentional and unintentional sharing of false information, an approach also proposed in a similar fashion by other scholars (Egelhofer and Lecheler 2019; Freelon and Wells 2020; Tandoc, Lim, and Ling 2018). We therefore use misinformation as an umbrella term in the empirical part of the paper, as in most cases we do not have sufficient information to clearly label cases as disinformation, that is, the intentional dissemination of false information. Still, if cases can be clearly described as disinformation or any other more specific form, we will explicitly mention it.

Critical Turn
While there seems to be general agreement in the literature that disinformation, thus the sharing of false information to cause harm, is a crucial issue nowadays, we argue that there has been a critical turn in the academic and public debate. The critical turn is less about disinformation in general and centers more on the question of what role social media plays and what effect disinformation has. The most critical voice in this debate is Joseph Bernstein (2021), who concludes, referring to the assumed effects of disinformation, "that the Establishment needs the theater of social-media persuasion to build a political world that still makes sense, to explain Brexit and Trump and the loss of faith in the decaying institutions of the West." Jungherr and Schroeder (2021) have a similar perspective on the issue of digital disinformation and describe the prominence of the issue in public as well as academic discourse potentially as a moral panic, "given the limited empirical evidence for the actual reach and effects of disinformation in high-income democracies" (4). Claire Wardle (2020) also has a critical perspective on the debate about disinformation in the US context and describes the creation of a so-called "disinformation industrial complex" in the aftermath of the 2016 US election. She argues that the media overreacted regarding the foreign interference threat and the impact of Russian disinformation during the election. However, in contrast to Jungherr and Schroeder (2021), she still sees disinformation as a major issue in the US, just with a different focus on the role of domestic actors. David Karpf (2019) also presents a critical assessment of the effects of disinformation during the election and describes the conflicting debate between "digital media researchers who see genuine cause for alarm" and "political science researchers who see a hype bubble forming and want no part of it." Overall, while there is agreement that Russia did try to influence the US election in 2016 and that operations such as the Russian-sponsored Internet Research Agency (IRA) reached a diverse political audience in the US with their sockpuppet accounts on Twitter (Freelon and Lokot 2020), authors such as Benkler, Faris, and Roberts (2018) see the impact of these operations as limited. This conclusion is also supported by Bail et al.'s (2020) study assessing the influence on users exposed to IRA accounts on Twitter. Regarding reach, research also presents the problem as limited in the US context (Guess, Nagler, and Tucker 2019). All these researchers have an overall critical perspective on the current debate about disinformation.
Another strand of research mainly focuses on the public debates about disinformation and how labels such as "fake news" are used to accuse opponents or to dismiss opposing opinions as wrong or false. Farkas and Schou (2018) discuss the term "fake news" as a floating signifier that is used in public "discourses to critique, delegitimise and exclude opposing political projects" (303). Different studies support their argument. For example, politicians such as Donald Trump use fake news as an accusation to attack traditional media (Lischka 2019). Using "fake news" discourse to discredit political opponents or the media has also been observed among Australian politicians (Farhall et al. 2019) and is a common occurrence in the European context (Egelhofer et al. 2020).
Lastly, there is also scholarship with a more general critical perspective on journalism that argues along the same lines as Bernstein (2021). For example, Tischauser and Benn (2019) critically discuss journalism's notion of objectivity and highlight how there have always been legitimate truth claims from marginalized communities competing with mainstream journalism's truth claims. Gutsche (2018) also echoes this criticism of mainstream journalism and shows in his analysis that journalism in the UK and US has used the "fake news" crisis to strengthen its authority and, by doing so, distract from journalism's underlying problems and defend its position against the so-called "Fifth Estate media," which is mainly "comprised of bloggers and columnists" (Berkowitz and Schwartz 2016). Overall, this perspective assumes that mainstream journalism's fight against disinformation is not a purely altruistic endeavor but also an opportunity to further its own legitimacy and authority.
Most of the abovementioned scholarship focuses on Western democracies. The question now is whether the same conclusions hold in a different context and, more broadly speaking, whether findings from Western democracies can be globally generalized. For example, focusing on sub-Saharan Africa requires a different perspective that considers the nuances and local context (Mare, Mabweazara, and Moyo 2019). We argue that even if we focus on Western democracies, we have to consider the geopolitical situation of a country and the local political culture. For example, the threat and potential exposure to disinformation are different for the Baltic countries or Ukraine (Khaldarova and Pantti 2016), which are at the doorstep of authoritarian countries, than for the typical Western democracy usually discussed in the literature. While Taiwan is not a Western country, it is a (direct) democracy with a free media system that faces constant threats from an authoritarian country (Hartnett and Su 2021). We will briefly describe Taiwan's local context and the existing local scholarship within misinformation studies.

Taiwan's Local Context: The Media and Misinformation Research
Nowadays, Taiwan is a direct democracy with a highly commercialized free media system that, with its strong commercial broadcasting companies, resembles the US system rather than the typical public service system found in Europe (Hu 2017). Politics in Taiwan is dominated by national identity as a cleavage issue that divides the political party landscape. In general, Taiwanese politics can be divided into the rather China-friendly pan-blue camp, mainly represented by the Kuomintang (KMT), and the pan-green camp led by the Democratic Progressive Party (DPP), whose stance can best be described as anti-unification (Achen and Wang 2017). Thus, Taiwan's political system, as well as its media system, cannot be understood without considering the so-called China factor, which is, according to Wu (2016): the process by which the PRC government utilizes capital and related resources to absorb other countries and 'offshore districts' (jingwai diqu, such as Hong Kong) into its sphere of economic influence, thereby making them economically dependent on China in order to further facilitate its political influence. (430) The China factor is also the root of the so-called "second wave" of democratization movements between 2012 and 2014 (Rawnsley and Feng 2014). The most well-known movement during that time was the Sunflower Movement (Ho 2019) in 2014. However, the internationally less prominent Anti-Media Monopoly movement is more relevant in this context. At its core is the conflict between the Want Want China Times Group (owner of China Times) and different activists. The conflict originated in 2009, when the pro-China Want Want Group carried out its first media acquisitions. One hundred forty-nine communication scholars condemned the Want Want Group for harming freedom of speech and journalism (Chang 2013). This was not the last time civil society and the Want Want Group clashed. In 2012, the Anti-Media Monopoly movement emerged in response to the Want Want Group's new plans to buy newspapers (for example, Apple Daily) and broadcasting companies. The discontent of the movement mainly revolved around the harmful effects of commercialization and the Want Want Group owner's strong business connections to China. The background of the movement was quite diverse, with people from academia and civic groups, student organizations, as well as media unions. Eventually, the movement was successful, as Want Want's planned media acquisition was stopped by the National Communications Commission (NCC), which went even further by drafting the Anti-Media Monopoly Act in the aftermath of this incident (Rawnsley and Feng 2014).
Considering the conflict between civil society, regulators, and media owners, it is not surprising that media bias plays an important role in Taiwan. Hsiao (2006) surveyed the audiences of the four leading newspapers in Taiwan and, by checking the political affiliation of the participants, found that political bias exists in three of the four analyzed media outlets. Both Want Want's China Times and UDN are pan-blue news outlets, while Liberty Times (LTN) is a pan-green newspaper, and Apple Daily was identified as relatively neutral. Other studies come to a similar conclusion analyzing print media (Dzwo and Lee 2010) or broadcasting channels (Liu 2009; Lo and Huang 2010). Overall, media bias potentially plays a crucial role in Taiwan and reflects the general divide within politics. The China factor also affects media bias in Taiwan. In 2019, the Financial Times published an investigative story which uncovered that the editorial management of CTiTV and China Times, both owned by the Want Want Group, "take instructions directly from the Taiwan Affairs Office, the body in Chinese government that handles Taiwan issues" (Hille 2019).
Scholars in Taiwan have used different methods to analyze misinformation. Lin (2020) found different forms of "fake news" mixed with misinformation in Taiwan. Wang (2020) showed that fake news is indeed correlated with voters' news judgment and voting decisions. However, Lin (2020) argues that the reason fake news has such a large dissemination effect in Taiwan is completely different from the US. As both Lin (2020) and Wang (2020) studied the 2018 Taiwanese local elections, they indicated that China's media reported much false information regarding the Taiwanese election. They argue that Taiwan's mainstream media regarded China's state-owned or nationalistic media as ordinary news media, failed to verify the authenticity of the news, and amplified false information shared on social media that potentially has its origin in China.
Besides this purely academic work, various local NGOs are also actively doing misinformation research. For example, the NGO Doublethink Lab, in which academic researchers are involved, analyzed China's information operations during Taiwan's 2020 election (Lee et al. 2020). Another example is the civilian grassroots organization Information Operations Research Group, where academics with a background in computer science or political science regularly publish their analyses about misinformation (e.g., on COVID-19; Wang et al. 2021).
Some scholars who were active participants in the Anti-Media Monopoly movement also focus on misinformation research. Hu (2018), for example, discussed the pros and cons of fact-checking organizations in the US context before providing suggestions for Taiwan's fact-checking organizations in an academic article and becoming a founding member of the Taiwan FactCheck Center (TFC). Besides the TFC, there are other citizen-led fact-checking organizations in Taiwan; one of them even relies purely on crowdsourced checks (Su and Li forthcoming). Lo (2018), the chairperson of Taiwan Media Watch and a co-founder of the TFC, indicated that misinformation is an old social-structural problem, but social media has made it more complicated. While misinformation researchers have identified many relevant cases, local scholarship has also highlighted structural elements such as media ownership structures and journalistic standards (Hu 2017), the political culture (Rauchfleisch and Chi 2020), and the China factor (for an overview, see Fong, Wu, and Nathan 2021) that all amplify the problem.
Overall, academics with different disciplinary backgrounds work in misinformation research. Most of them are also active in society, whether as activists, regulators, members of NGOs, or as experts in the media. In general, there is a consensus that misinformation is a severe problem in Taiwan. The V-Dem report's assessment, which ranked Taiwan as the country most exposed to disinformation, also reflects this. However, there are still ongoing debates about which elements are mainly causing the problem.

Research Questions
Our study focuses on this potential disconnect between the international and local debates. We thus want to analyze whether the above-mentioned problematic aspects can also be observed in Taiwan by analyzing the public discourse about misinformation. As discussed in the prior section, the China factor makes Taiwan a unique case, and we expect that it also affects the media discourse about misinformation. As we have shown, scholars who work on misinformation have also played a prominent role in different protests. We are therefore interested in the following research questions.
First, we expect a broad range of issues, including domestic politics, cross-strait relations, and international issues, to be covered in the public misinformation discourse. However, we expect differences between media outlets in Taiwan because of potential bias in the coverage. We also expect that the so-called China factor (Chang 2013; Hille 2019) influences the prevalence of specific topics in the coverage.
RQ1: What general themes drive the public misinformation discourse in Taiwan?

Second, as discussed in the beginning, the US election triggered international attention. We therefore want to analyze whether the debate about misinformation has always been a purely domestic debate vis-à-vis the international coverage of misinformation. Additionally, we expect differences due to the China factor, which might impact the volume of articles mentioning China in certain media outlets, especially as one of the media outlets might be directly influenced by the Chinese state (Hille 2019).
RQ2: What role do China and other countries play in the public misinformation discourse?
Third, as academics play an active role in the fight against misinformation in Taiwan (e.g., Hu 2018; Lee et al. 2020), we are interested in their role in the public discourse about misinformation. At the same time, we might find politicians as central actors in the misinformation discourse, as they use "fake news" accusations to attack political opponents, a trend described by prior research in other countries (Egelhofer et al. 2020; Farhall et al. 2019).
RQ3: What role do different actors play in the public misinformation discourse?

Lastly, as the prior RQs only analyze the supply side of media coverage, we also focus on the misinformation discourse's demand side. Specifically, we want to analyze which covered actors and issues receive the most attention on Facebook, one of the most important social media platforms for news consumption in Taiwan (Lin 2019).
RQ4: What form of public misinformation discourse reaches the largest audience on Facebook?

Data and Methods
To answer the research questions, we rely on articles from Taiwan's three largest national newspaper groups as well as Taiwan's most popular online-only outlet (25,740 articles in total). We wanted to ensure that the most popular outlets covering the political spectrum of Taiwanese media were selected. Liberty Times (n = 13,220) is considered a rather pan-green newspaper, as previous studies have shown (Hsiao 2006; Lo, Wang, and Hou 2007). As discussed in the prior section, China Times (n = 3,686) is a pan-blue outlet and clearly has a China-friendly stance due to the newspaper's ownership (Hsiao 2006; Wu 2016). Apple Daily (n = 7,729) is Taiwan's largest tabloid newspaper. Even though Apple Daily was classified as politically neutral in a prior study (Hsiao 2006), with founder Jimmy Lai sentenced by the current Hong Kong government for participating in pro-democracy protests (BBC News 2021), it undoubtedly has a China-critical leaning that is closer to the stance of the pan-green camp. Lastly, we added ETtoday (n = 1,105), an online-only outlet that had, according to Alexa (2021), the highest traffic of all Taiwanese news platforms in June 2021. Usually, this outlet is not considered in studies focusing on political issues in the Taiwanese context. However, besides focusing on entertainment news, the outlet also publishes many political articles. We used several keywords connected to dis- and misinformation (see appendix 1 for an overview: fake news, false information, misinformation, disinformation, etc.). All articles were searched directly on the websites of each media outlet and then downloaded. We kept all articles published between the beginning of 2016 and July 2021, as this time frame includes the US election in 2016, the local elections in 2018, and the Taiwanese presidential elections in 2016 and 2020.
We decided to rely on automated content analysis to identify the broader topics covered by the public discourse. We estimated a topic model with the stm package in R (Roberts, Stewart, and Tingley 2019). This method identifies topics by analyzing which words often appear together in the same articles. The number of topics has to be defined in advance for these kinds of models. Before running the topic model analysis, we had to preprocess the data. Chinese text first has to be segmented. We used the jieba package in R with a list of manually added names of politicians and organizations (this ensures that names are not split during segmentation). We then used a manually extended stopword list to exclude words that do not add much contextual information. We tried different values of k (numbers of topics between 20 and 140) and, after manual inspection, decided to use a model with 30 topics, as it yields a good compromise in our context by identifying rather broad topics while still capturing differences that are of interest to us. We then manually validated the model with a word intrusion test (93.33% correctly identified intrusion words) using the R package oolong (Chan and Sältzer 2020).
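The preprocessing step described above can be sketched as follows. This is a minimal, self-contained Python illustration (the study itself used the jieba and stm packages in R); the greedy longest-match tokenizer, the protected-term dictionary, and the stopword list are simplified stand-ins for illustration, not the study's actual resources.

```python
# Toy sketch: segment Chinese text with a user dictionary so that
# multi-character names stay intact, then drop stopwords.

def tokenize(text, dictionary, max_len=4):
    """Greedy longest-match segmentation over a known-word dictionary;
    characters not covered by the dictionary fall back to single tokens."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                tokens.append(candidate)
                i += length
                break
    return tokens

def preprocess(text, dictionary, stopwords):
    return [t for t in tokenize(text, dictionary) if t not in stopwords]

# Hypothetical protected terms ("Tsai Ing-wen", "fake news") and stopwords
dictionary = {"蔡英文", "假新聞"}
stopwords = {"的", "了"}
print(preprocess("蔡英文的假新聞", dictionary, stopwords))  # ['蔡英文', '假新聞']
```

Without the user dictionary, a segmenter could split a politician's name into meaningless single characters, which is exactly what the manually added name list prevents.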
The labels for the topics were chosen by inspecting the words with the highest probability for each topic as well as by checking articles with a high probability for a given topic (see Figure 1). For our second research question, we relied mainly on a dictionary approach to determine whether an article makes references to China or other countries. We first collected country names in Chinese and then modified them with regular expressions for matching purposes. Since our focus is on differentiating between domestic and international articles, we combined the country names into a list called "international." If an article contains any of the keywords, we classify it into the corresponding category (for more details and a manual validation of the method, see appendix 2). If no country could be matched, an article was counted as domestic. We thus used three categories: domestic (no reference to China or other countries), China (a reference to China), or international (a reference to other countries but not China).
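The dictionary-based classification can be sketched like this. The keyword patterns below are abbreviated examples invented for illustration, not the study's full country-name lists; the China-first precedence mirrors the category definitions above (an article mentioning both China and another country counts as a China article).

```python
import re

# Abbreviated, hypothetical keyword lists: "China"/"CCP"/"Beijing" and
# "USA"/"Japan"/"Russia". The real study used full country-name lists.
CHINA_PATTERNS = [r"中國", r"中共", r"北京"]
INTL_PATTERNS = [r"美國", r"日本", r"俄羅斯"]

def classify(article: str) -> str:
    """Tag an article as 'china', 'international', or 'domestic'."""
    if any(re.search(p, article) for p in CHINA_PATTERNS):
        return "china"  # any China reference wins, per the category definition
    if any(re.search(p, article) for p in INTL_PATTERNS):
        return "international"
    return "domestic"

print(classify("美國大選假新聞"))  # international
```

A production version would compile the patterns once and handle alternate spellings of country names, which is what the regular-expression modifications in the study were for.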
For the third research question about the role different actors play in the discourse, we relied on named entity recognition (NER) to extract the actors mentioned in the texts. While not common in journalism studies, this method has been successfully used in studies identifying sources in news coverage (Mellado et al. 2021). We first segmented each article with the library ckiptagger (Li, Fu, and Ma 2020). After that, we used another module of the library for part-of-speech tagging. We then filtered the results to keep only proper nouns.
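The filtering step can be sketched as follows. The tagged tokens below are a hand-made example rather than actual ckiptagger output, and "Nb" (the CKIP tagset's proper-noun tag) is assumed here as the relevant tag.

```python
from collections import Counter

def extract_entities(tagged_tokens, proper_noun_tags=frozenset({"Nb"})):
    """Keep only tokens whose POS tag marks a proper noun and count them."""
    return Counter(tok for tok, tag in tagged_tokens if tag in proper_noun_tags)

# Hand-made (token, POS) pairs: a politician's name, a verb, a common noun.
tagged = [("韓國瑜", "Nb"), ("批評", "VE"), ("假新聞", "Na"), ("韓國瑜", "Nb")]
print(extract_entities(tagged))  # Counter({'韓國瑜': 2})
```

Counting mentions this way yields exactly the frequency threshold used in the manual coding step (named entities mentioned at least five times).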
Two authors who are native speakers and know the local political context well then manually coded the actor type for all named entities mentioned at least five times in our sample. This covers 2,267 of the 15,675 unique named entities, which account for 81% of all named-entity mentions in the articles. We started with an initial pretest to evaluate intercoder reliability, covering a random sample of 5% (n = 65) of all named entities mentioned at least ten times. This initial test already indicated an acceptable Krippendorff's alpha of 0.7. We then discussed the cases with disagreement and increased our manually coded sample to 2,267 named entities. Both coders coded an additional random sample of 100 named entities during the final coding phase. During this phase, Krippendorff's alpha reached 0.81. The cases with disagreement were all named entities with few mentions in the articles.
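For reference, nominal Krippendorff's alpha for two coders (the reliability measure reported above) can be computed as sketched below. Actual analyses would rely on a tested implementation such as the R irr package; the small coding example here is invented for illustration.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(pairs):
    """pairs: list of (coder1_label, coder2_label), one tuple per coded unit.
    Returns alpha = 1 - observed disagreement / expected disagreement."""
    coincidence = Counter()
    for unit in pairs:
        for a, b in permutations(unit, 2):  # both ordered pairs per unit
            coincidence[(a, b)] += 1
    marginals = Counter()
    for (a, _), count in coincidence.items():
        marginals[a] += count
    n = sum(marginals.values())
    observed = sum(v for (a, b), v in coincidence.items() if a != b) / n
    expected = sum(marginals[a] * marginals[b]
                   for a in marginals for b in marginals if a != b) / (n * (n - 1))
    return 1 - observed / expected

# Invented example: four units coded by two coders, one disagreement.
codes = [("politician", "politician"), ("politician", "politician"),
         ("media", "media"), ("politician", "media")]
print(round(krippendorff_alpha_nominal(codes), 3))  # 0.533
```

Unlike raw percent agreement, alpha corrects for agreement expected by chance, which is why it is the standard reliability measure in content analysis.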
Lastly, we downloaded via CrowdTangle all public Facebook posts, with the related metrics (e.g., shares), that include the URL of one of the articles in our sample. We then counted how many shares each URL received in total on public Facebook pages and groups. All statistical models were estimated as Bayesian regressions with weakly informative priors using the R package brms and consider the nested structure of articles published in different media outlets.
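The share-counting step amounts to a simple aggregation, sketched below; the post records are invented placeholders rather than CrowdTangle data.

```python
from collections import defaultdict

def shares_per_url(posts):
    """Sum Facebook shares per article URL over all public posts linking to it."""
    totals = defaultdict(int)
    for post in posts:
        totals[post["url"]] += post["shares"]
    return dict(totals)

# Invented placeholder posts, not CrowdTangle output.
posts = [
    {"url": "https://example.com/a", "shares": 10},
    {"url": "https://example.com/a", "shares": 5},
    {"url": "https://example.com/b", "shares": 3},
]
print(shares_per_url(posts))  # {'https://example.com/a': 15, 'https://example.com/b': 3}
```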

RQ1: Topics Covered by the Media
Our topic model analysis shows that most coverage about misinformation covers clearly political topics (0.77), whereas a smaller share covers celebrity and consumer topics (0.23), though often still with a connection to a political issue. This finding is also supported by the number of articles published per month. Overall, articles spiked around presidential elections or national and local elections. A first increase could be observed around the 2016 US presidential election (see Figure 2). However, a major shift happened later, during the 2018 Taiwanese local elections. Attention peaked with 1,800 articles in December 2019, before the 2020 presidential election. Public discussions about misinformation thus receive the most attention during major political events. Some of the topics have a clear connection to the China factor (e.g., China propaganda or China and Taiwan sovereignty), whereas various topics directly refer to local politics (e.g., domestic party politics). A specific topic also covers media regulation and the NCC's role (e.g., NCC). We could also identify a specific topic covering the Anti-Extradition movement in Hong Kong.
We also compared the prevalence of topics for each media outlet. We selected eight topics for which we expected differences between the four media outlets (see Figure 3). For highly partisan issues, apparent differences between media outlets can be observed. For example, the China Times has a higher prevalence than all other outlets for a topic (DPP cyber-army) focusing on the connection between a person running a cyber army and the DPP. In the typical article covering this issue in the China Times, KMT politicians are cited with accusations against the DPP for using cyber armies, a specific form of astroturfing (Kovic et al. 2018) common in Taiwanese domestic politics (Hu 2017). On the other hand, the HK Anti-Extradition Movement topic has a higher prevalence for LTN and Apple Daily.
An interesting case is the topic about media regulation and the NCC. The high prevalence of this topic for the China Times is not surprising, as the outlet belongs to the same media group that owns CTiTV, a channel that the NCC fined for fact-checking failures (Lin 2019) and whose broadcasting license it did not renew because of problems with the ownership structure. The high prevalence for ETtoday, however, is surprising, at least from a political perspective. Both outlets covered the NCC critically but with highly different framings. For example, ETtoday published a critical article that raised concerns about freedom of speech by citing academic experts while refraining from political accusations. The China Times, in contrast, framed the NCC more as an authoritarian government's tool for suppressing dissidents.
The Kansai Airport case is another issue that should be highlighted. While all four media outlets reported on this issue over an extensive period, the China Times overall had a higher prevalence for it. The Kansai Airport incident started with disinformation disseminated by a Chinese state-funded online media outlet, shared in the domestic online forum PTT, and picked up by various media outlets without fact-checking (Hartnett and Su 2021; Rauchfleisch and Chi 2020). The news triggered much criticism directed at a Taiwanese diplomat stationed in Japan, who eventually committed suicide. In its coverage, the China Times attacked the DPP by quoting KMT politicians who claimed the government was covering up the truth, as the DPP kept saying the diplomat was harmed by the fake news even though the diplomat had not mentioned the pressure caused by fake news in his suicide note. They used the spin that the DPP-led Ministry of Foreign Affairs had put too much pressure on the diplomat and was thus the actual cause of the suicide, not "fake news." Some of the topics also appear together. For example, in China Times articles, KMT politicians accused the DPP of using cyber armies to protect a prominent DPP politician, which allegedly led to the Kansai Airport incident.

RQ2: Domestic and International References in the Coverage
We complemented the analysis of issues with an analysis of country references in the coverage. We created three categories: domestic (41%), China (41%), and international without China (18%). The overall share of articles indicates China's important role in the public debates about misinformation in Taiwan. Both Apple Daily (41.1%) and LTN (44.5%) reference China in almost half of the articles in our sample, which is clearly more than expected (see Figure 4). The China Times, in contrast, has a lower than expected share of China articles (30.7%) and instead, with half of its articles focusing purely on Taiwan, a stronger focus on the domestic arena. ETtoday also has a lower than expected share of China articles (28.1%) but a stronger than expected focus on international issues (25.7%) in comparison to the other outlets (all under 20%).
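The "more than expected" and "lower than expected" comparisons above follow the standard contingency-table logic: the expected count for an outlet-category cell is the product of its row and column totals divided by the grand total, and the sign of the residual indicates over- or under-coverage. A minimal sketch of this computation (the counts below are illustrative placeholders, not our data):

```python
# Sketch: expected cell counts under independence for an outlet x geographic-focus
# table. The counts are invented for illustration, not the study's data.
counts = {
    "Apple Daily": {"domestic": 120, "china": 110, "international": 40},
    "LTN":         {"domestic": 100, "china": 115, "international": 35},
}

outlets = list(counts)
categories = ["domestic", "china", "international"]

row_totals = {o: sum(counts[o].values()) for o in outlets}
col_totals = {c: sum(counts[o][c] for o in outlets) for c in categories}
grand = sum(row_totals.values())

def expected(outlet, category):
    """Expected count if outlet and geographic focus were independent."""
    return row_totals[outlet] * col_totals[category] / grand

# A positive residual means the outlet covers the category more than expected.
residuals = {
    o: {c: counts[o][c] - expected(o, c) for c in categories} for o in outlets
}
```

In practice one would test such a table with a chi-square test and inspect standardized residuals per cell; the sketch only shows where the expected values come from.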
Additionally, we analyzed the overall geographic focus over time (see Figure 5). While the share of articles with an international focus rose at the end of 2016 and during 2017, the shares of monthly articles with a domestic focus or a focus on China increased afterwards. This trend becomes even more evident when the absolute number of articles per month is considered. Up to 2018, all three categories received an almost equal share. The absolute number of articles with an international focus then stayed the same, whereas the number of articles focusing on the domestic arena or mentioning China increased. The China and domestic focus remain almost equal over time. However, during local election times, the domestic focus takes the lead.

RQ3: Actors
For the analysis of the actors mentioned in the public discourse about misinformation, we relied on an automatic method to extract all actors and on manual content analysis to assign them a specific category. Our analysis shows that domestic politicians and parties are the most mentioned actors in the public debate about misinformation, followed by international politicians and parties (see Table 1). Journalists and media also play a crucial role. Academic experts are mentioned less often, with a share of only 1.4%.
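The two-step procedure described above — automatic extraction of named entities, then manual assignment to actor categories — can be sketched as follows. The extractor is stubbed out with simple substring matching (a real pipeline would use a trained named-entity recognizer), and the names and categories are illustrative, not our actual codebook:

```python
from collections import Counter

# Manually coded lookup: named entity -> actor category (illustrative entries,
# not the study's codebook).
CODEBOOK = {
    "Tsai Ing-wen": "domestic politician",
    "Donald Trump": "international politician",
    "NCC": "institution",
}

def extract_entities(text):
    """Stub for the automatic NER step: here, substring matching against the
    codebook; in practice a trained named-entity recognizer would be used."""
    return [name for name in CODEBOOK if name in text]

def category_shares(articles):
    """Share of actor mentions per category across all articles."""
    mentions = Counter(
        CODEBOOK[name] for text in articles for name in extract_entities(text)
    )
    total = sum(mentions.values())
    return {cat: n / total for cat, n in mentions.items()}

articles = [
    "Tsai Ing-wen responded to the accusations.",
    "The NCC fined the channel; Donald Trump was also mentioned.",
    "Tsai Ing-wen met with the NCC.",
]
```

The split matters methodologically: automatic extraction scales to tens of thousands of articles, while the manual codebook keeps the category assignment transparent and auditable.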
Regarding the mentioned actors, we also analyzed the differences between media outlets. The results reflect the differences we observed in the location analysis. The observed share of international political actors is higher than the expected value for ETtoday and Apple Daily (see Figure 6). On the other hand, the observed value of domestic political actors is higher than the expected value for the China Times. Perpetrators are mentioned more often in the China Times, which can be explained by the outlet's strong focus on the topic DPP cyber-army, which almost always mentions the cyber army as a perpetrator.

RQ4: Reach of the Public Misinformation Discourse
For our last research question, we focus on the reach of articles. We estimated a Bayesian negative-binomial regression with varying intercepts for media outlets (see Table 2). We included the eight topics we used for RQ1, the binary China variable from RQ2, and, from RQ3, the binary variables academic actors, celebrities, journalists and media, and perpetrators. Our results show that articles covering a more specific contentious issue, such as the NCC, the DPP cyber-army, or the China and Taiwan sovereignty topics, have a higher chance of receiving shares than articles on more general political issues (government). An exception is the topic about the former mayor of Kaohsiung and populist KMT presidential candidate Han Kuo-yu. Furthermore, articles referring to China also have a higher chance of receiving shares. Last but not least, while mentions of perpetrators or celebrities do not lead to a substantial increase in shares, the mention of an academic actor is a substantial predictor of the number of shares an article receives on Facebook.
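Because the negative-binomial model is log-linear, the incidence rate ratios (IRRs) reported in Table 2 are simply exponentiated coefficients, and an IRR's credible interval excludes 1 exactly when the log-scale interval excludes 0. A minimal sketch of this interpretation step (the posterior summaries below are invented for illustration, not our estimates):

```python
import math

# Illustrative posterior summaries: (mean, 2.5%, 97.5%) on the log scale.
# These numbers are made up; they are not the study's estimates.
posterior = {
    "academic_actor": (0.405, 0.10, 0.70),
    "celebrity": (0.05, -0.20, 0.30),
}

def irr(coef):
    """exp(coef) is the multiplicative effect on the expected share count,
    i.e. the incidence rate ratio."""
    return math.exp(coef)

def substantial(low, high):
    """The IRR's 95% CI excludes 1 iff the log-scale CI excludes 0."""
    return low > 0 or high < 0

summaries = {
    name: (irr(mean), irr(low), irr(high), substantial(low, high))
    for name, (mean, low, high) in posterior.items()
}
```

An IRR of, say, 1.5 for a binary predictor means an article with that feature is expected to receive 1.5 times as many Facebook shares, holding everything else constant.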

Discussion
Our analysis shows that the 2016 US presidential election triggered debates about misinformation in Taiwan before the debate became more localized during the 2018 local elections, the 2020 Taiwanese presidential election, and the COVID-19 crisis.
We also identified differences between media outlets regarding the topics covered in the debates about misinformation. They indicate a particular media bias, as outlets are more likely to cover issues aligned with their political stance. This tendency is also reflected in references to China, where we likewise identified differences between media outlets. Furthermore, the power struggle between political actors and institutions is visible in the covered topics and identified actors. As shown in the literature review, these issues and power struggles are not new (Rawnsley and Feng 2014) but received a push with the debates about disinformation in Western democracies. This also shows that a more historical perspective that broadens the scope of disinformation research would help develop a more critical perspective, as Tischauser and Benn (2019) do in the US context. In addition, future research could use such a sociohistorical perspective for young democracies such as Taiwan.
Our findings are partly in line with observations in other countries, as disinformation and fake news accusations are also used in Taiwan to discredit political opponents (Farhall et al. 2019; Egelhofer et al. 2020; Lischka 2019). Still, Taiwan differs from the typical Western country regarding influence operations, as the Kansai Airport incident illustrates. Disinformation targeted at Taiwan can potentially have severe consequences (Wang 2020; Hartnett and Su 2021). In most cases of successful disinformation campaigns, structural issues play an important part, such as the failure of the media system during the Kansai Airport incident. In that case, it was the TFC that eventually intervened with a fact-check. Fact-checking as an intervention plays a different role in countries like Taiwan that face influence operations from authoritarian countries (Khaldarova and Pantti 2016).
While not at the core of our analysis, it is crucial to consider the cultural differences between the typical Western case and Taiwan. In Taiwan, we could not observe what Gutsche (2018) identified in his qualitative discourse analysis in the UK and US. First, even though there are prominent and popular daily newspapers in Taiwan, they do not hold the long-standing, prestigious authority that The New York Times or The Guardian have built over more than a century, as the lifting of martial law and the establishment of press freedom happened only 35 years ago in Taiwan. Second, trust in journalism is, as in the US, extremely low in Taiwan (Lin 2019), and journalists, in general, are aware of the problems. A large number of journalists in Taiwan claim to face pressure within news organizations to align and adapt their reporting at times (Huang and Lin 2019). Especially in crises such as the Kansai Airport incident, newspaper editorials framed the issue along their political leaning, which conveniently distracted from the media's own role in the crisis (Rauchfleisch and Chi 2020).
Moreover, while in the typical Western case the so-called "Fifth Estate" plays the watchdog when traditional media fail, in Taiwan, NGOs and fact-checking organizations mainly fulfill this role. As in the Western case, a "disinformation-industrial complex" (Wardle 2020) has emerged that plays an important role. However, while we should critically assess in the future how this complex develops and whether its actors exert their power in a less altruistic way, the focus and funding of such organizations initially make sense in Taiwan, as threats can become existential in a crisis. What also makes Taiwan unique is that attempted foreign influence on media triggers social movements, as the example of the Anti-Media Monopoly movement shows (Chang 2013; Rawnsley and Feng 2014). The general influence of successful social movements, a unique cultural aspect of Taiwan as a young democracy, could be used as a conceptual dimension in future comparative research.
Lastly, while Taiwan resembles the US on the media system level, cultural factors make it clearly different, as the NCC debates illustrate. For example, there is a markedly stronger push for regulation and interventions from the state and society compared to the US. Thus, future research could focus more on cultural differences between countries that are usually not captured by comparative macro-level media system frameworks (Humprecht, Esser, and Van Aelst 2020). Furthermore, future comparative research should also focus on Taiwan in the context of Asia-Pacific countries, as these countries are usually missing from comparative journalism research.
With over 25,000 articles covering misinformation in total, and often hundreds of articles per month in each media outlet, there is a danger of overuse of the different terms, which could backfire and undermine the media's credibility (Egelhofer et al. 2020). While we also identified good and nuanced articles, often with academic experts as sources or even as authors, many articles merely report accusations between political opponents. The coverage can thus best be described as a never-ending burglar alarm (Zaller 2003). Moreover, while Taiwan indeed faces external pressure due to the China factor and other influence operations, the overuse of the terms numbs citizens, with the consequence that when a crucial new case is uncovered, that message might get lost in a sea of accusations between political opponents. We do not have the data to make claims about which stories were consumed individually, but we know that more contentious stories with political accusations are, on average, shared more often than stories about a real case.
However, we also saw that stories that mention academic experts are shared more often than stories without one, meaning academic voices have a certain reach when they are cited. The Taiwanese case also shows that different stakeholders are involved in misinformation research, and as the political stakes are high and combined with great uncertainty, it makes sense to see misinformation studies through the lens of post-normal science. While communication is not a typical science discipline in this regard, the field could learn from other post-normal issues such as climate change (Brüggemann, Lörcher, and Walter 2020) or CRISPR (Brossard et al. 2019). Future research could focus on this perspective.
While our study could identify clear trends and patterns, there are also several limitations. First, our study does not include any broadcasting media. Future research could focus on differences between articles and videos. As we opted for scale in our study, we had to sacrifice depth. Future research could use qualitative methods to analyze specific discourses about misinformation in the media in more detail, or follow Su and Li's (forthcoming) approach and interview the actors involved in the fight against misinformation.

Notes
1. The topics for which the word intrusion test failed were those with the lowest overall prevalence.
2. We used 4 chains with 4,000 iterations in total and 1,000 warmup iterations for all of our models. All chains converged, and all Rhat values were 1.

Disclosure Statement
No potential conflict of interest was reported by the author(s).

Figure 1 .
Figure 1. Overall prevalence for all 30 topics with the words that have the highest prevalence within a topic. Left: Chinese original; Right: English translation.

Figure 2 .
Figure 2. The number of articles per month. Major political events are labeled.

Figure 3 .
Figure 3. Point estimates for eight different topics based on Bayesian regression models. 95% credible intervals are shown.

Figure 5 .
Figure 5. Top: Monthly share of articles with a specific geographic focus. Bottom: Monthly number of articles with a specific geographic focus.

Table 1 .
Named entity (NE) actors identified in our sample.

Table 2 .
Bayesian negative-binomial regression analysis with the number of shares on Facebook as the outcome variable. IRRs are shown with 95% credible intervals. IRRs whose 95% CIs do not include 1 (values larger than 1 indicate an increase, smaller a decrease in shares) are in bold.