Academic entrepreneurship: A bibliometric engagement model

Academics are becoming accustomed to growing demands on 'performance' as universities place increasing emphasis on quantifiable research outputs. Despite the rapid ascendancy of bibliometrics, limited empirical research has considered the definitions of "research performance" employed by institutions, and subsequent academic responses. Drawing on exploratory data collected from 58 university-based colleagues in 23 countries, supplemented with the personal experiences of the authors, this manuscript explores how institutions utilise bibliometrics, and how scholars adapt. Findings demonstrate a significant number of mechanisms utilised by institutions to assess research performance, postulating the emergence of forms of 'academic entrepreneurship', characterised by more and less ethical patterns of manipulation. A conceptual model of bibliometric engagement is presented, with implications for tourism and cognate disciplines. © 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
Facilitated by rising technical capabilities, quantitatively driven management mindsets, ambition in many universities to be leading research institutions, and the emergence of competitive academic cultures, bibliometrics are permeating the modern corporate university with mounting evidence of growing relevance to academic life (Martín-Martín, Orduna-Malea, & López-Cózar, 2018). Institutions compile and feature bibliometric information to attract students or claim standing and reputation in research and scholarship across national and international rankings (Johnes, 2018). Academic journals highlight impact factors as indicators of scientific weight. Individual journal articles are ranked by diverse standards, including downloads and citations, with an evolution to increasingly diverse score systems developed by publishers to capture media, administrator and scholar interest (Sugimoto et al., 2017). Academics are continually rated and ranked on research achievements through bibliometric evaluation that may be voluntary or imposed (Ryan, 2020). Approaches to bibliometric assessment are constantly evolving and diversifying as universities increasingly emphasise quantifiable research outputs to build and sustain relevance.
In tandem with the proliferation of statistical methods to benchmark the research performance of institutions, departments and individual scholars, widespread adoption of bibliometric assessments by universities is attracting increased criticism (Cantu-Ortiz, 2017). Pervasive imperfection in the peer review processes which underlie bibliometric statistics is one pressing concern (Rowlands, 2018). Apprehension and uncertainty also derive from a limited understanding of how different types of performance are measured and assessed. Indeed, bibliometric systems are veritable 'black boxes', functioning through opaque algorithms and data collection techniques which produce significantly different results across platforms. Citations for an individual researcher or specific journal article, for example, may vary significantly between Google Scholar, ResearchGate, CiteSeer, Scopus, and Web of Science (Tahamtan & Bornmann, 2019). Given the complexity surrounding measurement and the diversity of available approaches and tools, the drivers and recipients of rankings and ratings are not always clear; whether a complex construct such as 'quality' can even be adequately captured by currently popular bibliometric indicators is questionable (Rowlands, 2018). Nevertheless, bibliometrics are increasingly prevalent and authoritative, creating imperatives for researchers to 'perform'.
Attendant pressure to maximise research output quantity and quality in today's universities has spawned various forms of 'academic entrepreneurship'. Entrepreneurship usually refers to establishing and expanding a business by taking on financial risks to generate profit. Academic entrepreneurship in the current literature refers to academics functioning in this conventional entrepreneurial sense, as through involvement in commercial spin-offs or consultancies (Hayter et al., 2018). Our manuscript innovates by extending the term to include strategies developed out of the wish to thrive or the need to survive in a competitive intellectual environment. Motivations for academic entrepreneurship may thus include personal material or non-material gain, as much as social standing or endurance in the assessment frameworks established by employers, governments, or corporations.
Scholarly inquiry on academic bibliometrics has emphasised specific actions and strategies such as 'grantsmanship', successful collaboration in research teams, and balancing the co-responsibilities of teaching and research (Ryan, 2020). However, concerted efforts to empirically assess the overall 'landscape' of academic entrepreneurship and the subsequent implications for the advancement of knowledge and the academic profession have been limited. Incentives to maximise research output, for example, create parallel disincentives to follow methodologically and ethically rigorous, and often time-consuming, research protocols. Resultant malpractice, notwithstanding potential individual gain, calls into question the validity and utility of the 'knowledge' and 'contribution' of the research, which undermines confidence in the field's integrity. Beyond the imperfections in peer review processes, studies have also revealed a 'dark side' of bibliometrics, as through h-index and impact factor manipulation (Bartneck & Kokkelmans, 2011; Chorus & Waltman, 2016; Yang et al., 2016). In response, concerns for the integrity of research and scholarship across disciplines have been expressed, with negative implications for the fundamental principles which underpin knowledge creation (Hayashi & Mitchell, 2019).
This research addresses an important gap in the literature by identifying how academics in tourism engage with bibliometrics. In achieving this objective, this manuscript delves into patterns of academic entrepreneurship and the factors that account for these patterns, based on a selective online survey of active tourism academics at various career stages in different countries. Findings are augmented with the authors' personal experiences to add analytical depth. The primary contribution of this research is a conceptual model that articulates and reticulates the identified forms of bibliometric engagement by institutions and researchers, with implications presented for tourism and cognate disciplines.

Theoretical background
Bibliometrics have gained increased traction in science policy and management, with particular advances in the domain of research evaluation (Bornmann & Marewski, 2019). Bibliometrics are essentially a quantitative analysis of publications, measuring document properties through relevant statistical methods (Godin, 2006). Subsequently, bibliometrics have evolved as a popular technique to measure research outputs (Hood & Wilson, 2001). While periodic monitoring of research through bibliometrics has been applied to measure the evolution of specific disciplines, common approaches also assess the scientific contribution of authors, journals, or specific works, the impact of scholarly publications, patterns of authorship, and processes of scientific knowledge production (Nederhof, 2006). Bibliometrics have emerged as an established research field, especially in the science-based disciplines. Bibliometric researchers have developed various methodological principles to collect data, with specific methods such as citation analysis, social network analysis, co-word analysis, and content analysis, as well as text-mining, applied in existing studies (Leung et al., 2017).

Engagement with bibliometrics in tourism
Bibliometrics are now commonly used to understand trends, particularly in substantive bodies of knowledge (Moyle et al., 2020). In tourism, an initial era of predominately descriptive topical or journal content analyses (e.g., Lu & Nepal, 2009; Xiao & Smith, 2006) has given way to a plethora of more diverse and complex analyses reflecting the field's maturation (López-Bonilla & López-Bonilla, 2020; Merigó et al., 2020). New foci include evolving theoretical foundations, key concepts, methodological approaches and/or key author identification across the field as well as within specific subdomains such as ecotourism (Khanra et al., 2021; Shasha et al., 2020), medical tourism (Virani et al., 2020), smart tourism (Johnson & Samakovlis, 2019), and halal tourism (Yagmur et al., 2019).
Given the exponential growth of bibliometric approaches in tourism, it is important and timely to reflect on how bibliometrics are used to evaluate research performance, and more specifically on how tourism scholars have responded to increased institutional pressure to perform. Published analyses may be used by featured authors as evidence of status in research performance exercises, especially if bibliometric analysis is focused on highly ranked journals (McKercher, 2005). Citation counts have been identified as a critical indicator of research performance among tourism scholars (McKercher, 2008), serving as benchmarks to inspire increased productivity through potential manipulation of bibliometrics. In this regard, Mulet-Forteza et al. (2019) note that it is critical to assess "who is achieving better results in terms of authors, universities and countries", while Zopiatis et al. (2015) explore the publication strategies of 44 prolific tourism scholars. The importance of ensuring early publication in top-tier journals to build and sustain a research career was identified as a critical strategy in the latter study.
Despite these initial applications, knowledge on how bibliometrics are applied to evaluate research performance in tourism remains scant. This constitutes a significant literature gap, given the growing demand for practical and valid methods of research evaluation, including metrics focused on publication and citation data (Praus, 2019). However, such bibliometric indicators are controversial, with a lack of clarity surrounding the processes for evaluating research performance and benchmarking institutions across disciplines (Jappe, 2020). Indeed, scholars have questioned the validity of existing ranking systems given widespread bias and over-simplification (Frenken et al., 2017). University ranking systems are thus both criticised and applauded, amid disagreement on which data, methodologies and interpretations are most robust. The emergence of complex bibliometric systems to evaluate research performance has heightened global competition, with governments often leveraging ranking systems to support policy interventions for funding public institutions (Antonakis, 2017). Governance processes within institutions, including boards, university councils and committees, rely on bibliometrics, which influence opinions, decisions and actions (Maassen, 2017). In effect, bibliometric proliferation has created a single global academic system, with homogenous university structures and objectives arguably neglecting nuances across disciplines, countries, and specific university missions as articulated in strategic plans.
Because bibliometrics have become engrained in university cultures, it is imperative to reflect on how academics engage with such indicators to bolster their research performance, since bibliometric manipulation can fundamentally undermine the validity of research outputs. Conventionally, the term 'academic entrepreneurship' refers to a university 'spin-off' or an institutional transfer of research, development or technology to facilitate innovation or commence commercial operations (Wood, 2011). Accordingly, the academic operates simultaneously as an intellectual actor and an entrepreneurial actor, with clear conflict of interest potential. Due to pressure to build and sustain relevance in our rapidly transforming society, universities and their academic staff must become increasingly strategic in approaches to academic entrepreneurship. Dominant conceptualisations of academic entrepreneurship, nevertheless, need to better account for the entrepreneurial behaviour of individuals embedded within institutions. Empirical studies assessing bibliometric engagement in tourism are scarce, and provide limited examples of behavioural responses which could constitute academic entrepreneurship. Benckendorff (2010) focused on co-authorship patterns, articulating various network metrics that measure author and institutional collaboration. Finding that multiple authorship had become more prevalent, the study embraced the positive outcomes of collaboration, noting that co-authorship increases citation potential. Strandberg et al. (2018) also note a co-authorship trend, with substantial increases in triple-authored articles indicating the 'sharing' of publications to maximise individual output. Koseoglu and King (2021) touch on this issue in their exploration of authorship structures and collaboration networks in tourism, and underlying operational, strategic and tactical approaches. Although the formation of scholar 'cliques' was discussed, the research did not explore the core bibliometric drivers which underpin this entrepreneurial behaviour. Subsequently, the implications of techniques such as 'free rider' co-authorship were discussed only tangentially.
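Co-authorship patterns of the kind analysed in these studies are typically quantified with simple network metrics. As a minimal sketch of the idea (the author labels and paper lists below are invented for illustration, not drawn from the cited studies), the number of distinct collaborators per author can be derived from a co-authorship graph:

```python
from itertools import combinations

# Hypothetical papers, each represented by its list of authors.
papers = [
    ["A", "B"],
    ["A", "B", "C"],
    ["B", "C", "D"],
]

# Build an undirected co-authorship graph as adjacency sets:
# each pair of authors on the same paper becomes an edge.
coauthors = {}
for authors in papers:
    for a, b in combinations(authors, 2):
        coauthors.setdefault(a, set()).add(b)
        coauthors.setdefault(b, set()).add(a)

# Degree = number of distinct collaborators per author.
degree = {a: len(nbrs) for a, nbrs in coauthors.items()}
print(degree)  # {'A': 2, 'B': 3, 'C': 3, 'D': 2}
```

Counts of this kind are among the simpler metrics used in co-authorship analyses; author B, appearing on all three papers, is the most embedded in the collaborative network.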
For co-authorship and other forms of academic entrepreneurship, in-depth empirical work is lacking which explicitly considers how bibliometric pressure to perform encourages ethically questionable behaviour unconducive to knowledge advancement (McKercher, 2018). The temptation to select research projects that can be completed quickly to generate publication outputs also detracts from an ethic of 'slow and mindful' tourism research. This can generate outputs which are high in statistical rigour but theoretically and conceptually weak. Questionable citation practices also exist, with Wang et al. (2016) and Moyle et al. (2021) both identifying a high incidence of knowledge-inhibiting citations in analyses of specific tourism-related journal articles. This indicates carelessness with sourcing as a consequence of the rush to maximise research outputs. Incorporating sources in manuscript revisions as suggested by reviewers (whether relevant or not) is another common practice of bibliometric manipulation familiar to most academics, but not engaged sufficiently as a research topic to understand its prevalence or underlying motivation (Fong & Wilhite, 2017).

Method
Given that comprehensive empirical assessment of tourism's bibliometric obsession is lacking, this research investigates how tourism scholars engage with bibliometrics, thereby unearthing the entrepreneurial approaches used to improve scores on the research performance evaluation criteria implemented by their respective institutions. In addition, this research captures the diverse mechanisms which underpin the implementation of bibliometrics by institutions to assess research performance. The structures and outcomes of bibliometric assessments were studied from a constructivist grounded theory perspective, in which reflexive researchers actively engage with and interpret data, drawing on prior knowledge of the subject area to provide insight into relevant issues (Charmaz, 2021). Matteucci and Gnoth (2017) note that few tourism researchers acknowledge the grounded theory branch applied, instead drawing, without sufficient justification, from multiple approaches. As articulated in the constructivist approach, the literature review was conducted before and during data collection; comparisons combine with observations to explain emergent results. Furthermore, research questions were initially kept broad, crystallising as more respondent data emerged. Flexibility allowed for the open exploration and analysis of inductive data to extrapolate emergent patterns into conceptual categories (Charmaz & Thornberg, 2020).
To solicit data from implicated tourism scholars, two layers of information were gathered and integrated to form a comprehensive picture of the tourism bibliometric landscape. First, tourism colleagues were selected to participate in the research based on the core criteria of personal network membership (to maximise the likelihood of response) as well as diversity in institutional country setting and career phase. Given the grounded theory approach, there was no reason to believe that more comprehensive sampling would have yielded substantively different results. Initially, 62 colleagues working in 18 countries were emailed immediately following Christmas in 2020. Respondents were thus expected to have more time to reflect on and engage with the subject during the holiday period, further mitigating non-response potential. The invitation also assured anonymity, making clear that names would not be revealed at publication.
The first of two open-ended questions solicited respondents' understanding of the bibliometric climate within their institutions, and the personal impacts upon them. The second question probed observed or personally practiced strategies for improving bibliometric scores. Responses varied from short notes claiming that bibliometrics had no relevance, to extensive discussions of bibliometric uses, implications and strategies. The cumulative response of approximately 10,600 words, based on 39 responses received after two weeks, was evaluated by exploratory analysis following data cleaning and preparation. Initial codes were developed by each co-author evaluating the accumulated data individually, without prior discussion. This yielded three distinct but complementary structural schemata. Subsequent exchanges strengthened findings by implementing intermediate and advanced coding stages which involved re-assessing, substantiating and discussing the initial codes, toward their optimal amalgamation (Junek & Killion, 2012). This process concluded when all authors felt confident that no relevant aspects had been overlooked.
Common themes identified in the material may indicate theoretical saturation at the macro scale, though this is less likely at the individual strategy level (Strauss & Corbin, 1998). Indeed, the authors noted that the sample may not have reached theoretical saturation, since new themes still appeared in the last-received responses. Accordingly, a second invitation round to 60 additional colleagues occurred in February 2021. Geographical coverage was increased by including colleagues in regions not adequately represented initially, and by approaching academics not personally well-known to the authors. This yielded an additional 19 responses from five more countries. The material (about 7700 words), evaluated in a process replicating the first-round initial codes, produced some new results which required minor modifications to the initial coding schemata and emergent themes. Table 1 summarises all 58 respondents by institution country, gender, and career stage.
To complement the macro structure, the emic experience of the authors, who represent early, mid and late career phases of academic engagement in tourism and collectively have 80 years of involvement in academic publishing as authors, reviewers and editorial board members, was added. In the advanced coding stage, researcher reflexivity, critical to qualitative research, demonstrates awareness of the potential influences the researcher has on the respondents and project, while recognising in turn how the research experience affects the researchers (Crossley, 2019). Reflexive accounts range from modest disclosure statements to in-depth explorations of how subjectivity inevitably impacts on the research. Within the constructivist paradigm, reflexivity is not a problem to be reduced or overcome but an essential element in knowledge co-creation. Critical reflection derived from personal observations during the planning, conducting and writing of the research created an ongoing, recursive relationship between researcher and topic, generating high levels of reflexivity among the authorship team (Braun & Clarke, 2020). Some additional processes which emerged from this critical reflection are linked to dark manipulation, however, and revealing these means using material collected in sometimes confidential exchanges with informants. To illustrate their relevance, we opted also to incorporate our experiences on editorial boards.
Results are structured and discussed with regard to their implications for scholarship. Do metrics drive scientifically questionable behaviour, in a self-reinforcing process that shapes the metrics? Which interrelationships and power dynamics exist between the actors involved in the bibliometric process? And what implications does this behaviour have for tourism scholarship? An overarching objective is to extend the contours of "academic entrepreneurship" to include a sometimes morally ambiguous set of practices required to survive and thrive in a contemporary academy driven by bibliometrics. A supplement details, in four tables, the results in non-hierarchical format, providing overviews of performance measures (Table 2), performance metrics use by different stakeholders (Table 3), strategies to improve performance metrics (Table 4) and additional researcher strategies to improve performance metrics (Table 5) (Tables 2-5, see Supplement). Findings are also transferred into two conceptual models providing interrelationship overviews (Figs. 1 and 2).
Limitations of this research include the lack of representation of tourism scholars based in Latin America, Africa and South Asia. Assumptions of theoretical saturation, accordingly, are challenged. It is also possible that the medium of email to solicit information encouraged response brevity and caution; future research should therefore entail more intimate interviews with a broader sample of informants to corroborate and expand the present results.

Results

Most respondents (coded individually below as [R1…58]), dominantly full-time tenured or tenure-track faculty, regard metrics as increasingly important both institutionally and professionally. Their institutions are seen as under growing pressure to increase their overall and/or discipline-specific national and international rankings, as highly publicised in influential international annual reports such as the Times Higher Education World University Rankings and QS World University Rankings. Universities also increasingly depend on external funding, which has become a means of 'bibliometric' assessment in its own right (see Table 2). However, there was a clear distinction between countries proactively pushing the use of bibliometrics (e.g. Australia, UK) and others less enthusiastically engaged (e.g. USA, Japan). According to one respondent, "I can really see the difference between the Australian bean counting approach and the American freedom to publish wherever and whenever" [R4]. The proactive countries in particular encourage institutional competition to enhance national scientific prestige, measuring performance through country-specific schemes such as Excellence in Research Australia (ERA), and incentivising strong performance of individual universities with higher disbursements of research funding. From the professional perspective, most respondents noted institutional, faculty and departmental pressure to increase the quality and quantity of their own individual research performance, with most pressure felt by full professors.

Institutional bibliometric engagement with faculty
Fig. 1 provides a synoptic description, based on the negotiated amalgamation of the three author schemata, of how institutions engage with their academic employees to increase research quality, and how individuals respond to this engagement (see also Supplementary Table 2). Journal publications were by far the most mentioned vehicle through which both institutional and individual research performance are assessed. Research quality was overwhelmingly associated with a high percentage and quantity of publications in 'high-tier' peer-reviewed journals as measured by ranking schemes such as the Journal Impact Factor (IF), and affiliated or independent rating schemes such as the Q Index and the Australian Business Deans Council (ABDC) journal quality list.
Of growing but secondary importance, and mostly for individual scholars, was research impact as measured by the citation counts deriving from those publications. Google Scholar was the most frequently mentioned citation reporting mechanism, although some universities are focused on more 'exclusive' platforms such as Scopus and Web of Science. Several respondents noted the increasing importance of reporting and rewarding 'real world' impact, with altmetrics such as media reports or Twitter 'likes' compiled by several implicated universities to measure such impact. Assessments can also take unconventional forms. One respondent from an East Asian university stated that "the usefulness of research outcome is not only measured on [conventional] bibliometric score, but mainly by evaluation from local communities, where researchers are involved in product development, operation and management. The locals actually order the university to help with solutions to their current operational problems, and they evaluate the contribution of the researchers" [R7].
In describing how institutions communicate to faculty their attempts to improve research quality, responses fell into three distinct categories. First, some institutions require minimum performance standards over a stipulated period to retain employment. Far more prevalent, however, are institutions which reward high performance with tenure and promotion, or incentives such as teaching buy-outs, outstanding research awards, or bonus payments. Third, some institutions encourage scholars to apply for grants to fund open article access, set up their Google Scholar profile, or initiate international and interdisciplinary research cooperation to stimulate higher research outputs. While the first two categories of communication entail formal mechanisms that presumably facilitate the necessary stages of compilation, assessment, decision-making and reward/punishment, respondents were often sceptical about the attendant processes. One respondent remarked that bibliometrics are often used "crudely in staff performance evaluations, but… with poor understanding, [and] only when it suits the person doing the evaluation" [R54]. Others noted that it [bibliometrics] "doesn't have any effect on any individual researcher" [R44] and that there are "no central dictates about whether schools should take any action" [R2]. These findings suggest that the relevance of bibliometrics varies between countries and universities, depending on the weight placed on bibliometric evaluations in national or university-specific assessment cultures.

Individual faculty engagement with bibliometrics
Despite such indications of confusion and frustration, respondents revealed numerous and often creative strategies for optimising their research performance. As stated by one respondent, "You create a system that is based on numbers and people will learn how to play the system" [R12]. These diverse strategies to increase high-tier publications and citation counts may be logically organised through a four-T structure recognising sequential engagement by topic, team, target and traffic (Fig. 1). 'Topic' considers how the theme of a paper can be manipulated to achieve publication and citation aspirations. Several respondents described how they focused on review or opinion/summary papers, regarded as 'easier' to get published and as having more citation potential. Others noted a focus on important or timely topics, and the development of expertise in one area, with the assumption that an 'expert's' papers are more likely to be published and attract high citation counts. The importance of a catchy title to attract reviewer and reader interest was also noted, and sometimes encouraged by editors. One informant emphasised the importance of publishing a series of related articles, "the idea [being] that when one searches for, say, 'tourism advertising', you would find a series of articles which we wrote, not just one or two" [R1].
'Team' refers to the individuals implicated in the production and publication of the manuscript. Numerous respondents cited their or others' attempts to join publication cartels or partner with prominent co-authors already enjoying performance success. PhD students were also regarded as good co-authors, being strongly incentivised to publish in high-tier journals. For a few respondents at universities where author order was deemed important, lobbying for or otherwise ensuring first authorship was a relevant strategy. Others indicated the perceived publication privileges enjoyed by journal editors and editorial board members. Conference networking was mentioned as a useful strategy for meeting potential co-authors and citers, while several respondents noted the existence of citation cartels. Self-citation was commonly identified as an effective way to increase personal citation counts, with clear strategic intent indicated by a UK-based academic who stated that he "heard colleagues externally speak about purposefully increasing their individual h-index by strategically self-citing publications so that borderline publication of theirs contribute to raising their score" [R52]. However, doubts about the ethics of self-citation were expressed in several comments, including: "personally, I find it at times hard to determine when (not) to include self-citation; too little and you self-plagiarise, too many and you unjustifiably self-cite" [R26].
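The arithmetic behind strategic self-citation is simple: the h-index is the largest h for which h of a researcher's papers each have at least h citations, so one extra citation directed at a 'borderline' paper can raise the entire index. A minimal sketch, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

before = [10, 8, 6, 5, 4, 3]   # five papers with >= 5 citations? No: only four.
after = [10, 8, 6, 5, 5, 3]    # one self-citation added to the borderline paper
print(h_index(before))  # 4
print(h_index(after))   # 5
```

A single well-placed citation thus moves the index from 4 to 5, whereas the same citation added to the top-ranked paper would change nothing, which is precisely why respondents describe targeting 'borderline' publications.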
'Target' describes the strategies relevant to where scholarly papers are submitted. Almost universal among respondents were intentions, or at least aspirations, to submit to high impact journals with a greater likelihood of higher citations. According to one respondent, "I think it is widely recognized that publishing in a 'lessor journal' is not the most clever if you are concerned about citations" [R1]. Publication in open-access journals was also mentioned as a way of generating more readers and hence citations. Submitting to non-tourism journals and journals based in the scholar's own university was also perceived as a less resistant pathway to getting published. An informant claimed that "it is much easier to get into an A or A* hospitality journal than a similar tourism journal, so some (many?) so-called tourism academics are publishing in the hospitality field" [R12]. 'Traffic', finally, is directly relevant only to enhancing the citation counts of achieved publications. A majority of respondents observed at least one of many elicited strategies, which included publicising research outcomes through diverse social media or through article links in email signatures. According to one scholar, "today's young scholars are very good at promoting their work through social media and other channels" [R11]. It is also becoming normative to upload pre-publication article copies to institutional repositories and widely accessible platforms such as ResearchGate. Several respondents stated that they would send copies of articles to authors prominently cited within those works. A common observation was reviewers requesting authors to cite works by the reviewer.

Some researcher strategies in Fig. 1 were derived only from the emic author experiences (see also Supplement). These partially pertain to associated strategies not directly linked to researchers, such as those by journal editors to heighten the status of their respective journals. For example, the authors have witnessed editors approaching board members to encourage pro bono lobbying, such as soliciting reviews for inclusion of the respective journal in the Financial Times (FT50) list of most relevant business journals. Universities, through their staff, have asked the authors to participate in QS university rankings. Publishers encourage open access publication, suggesting in emails to authors that downloads and citations will increase dramatically for papers downloadable for free. Open access-based outreach attempts may be strategically supported by universities.

Dark manipulation
Various 'dark manipulation' strategies emerged from the respondents, although there was general reluctance to deal with such grey areas of bibliometric engagement. As stated by one scholar, "as for strategies to improve their score, no, I do not know about this; in my 'circle' they do not talk about this" [R37]. Another stated that "I am not aware of [academics] manipulating the metrics in more sneaky ways though, but perhaps I am just being naïve" [R20]. More commonly, such strategies emerged from the knowledge of the authors. For instance, one editor revealed offers of money and sex in return for accepting a paper, while one of the authors of this article has been offered money in return for ghost-authoring a paper.
Freeriding is probably a more common strategy, where one or more authors do not contribute significantly to a paper but are included to return a similar favour, to strengthen a personal relationship, to recognise assistance such as funding, or for some other reason. To suppress competitor publications, reviewers of manuscripts or research applications may reject manuscripts of suspected rival authors, according to one editor-informant. The authors have also frequently encountered patronising, condescending or insulting language in reviews, intended presumably to dissuade positive editorial decisions. Particularly negative comments may also be directed at editors in separate "confidential comments to the editors". Regarding citation counts, frequently encountered practices include deliberate retention of invalid sources in one's Google Scholar profile, and omitting reference to competitors' publications from manuscripts.

Discussion
Responses show that metrics are perceived as having great and expanding relevance for universities and their tourism researchers. Very significant attention is paid to quantifiable performance, and strategic considerations to improve bibliometric scores were reported by most respondents. The value unit underlying all ratings and rankings is the peer-reviewed journal publication and its performance as measured by outlet status, citation numbers or, increasingly, media attention. Accordingly, manipulation to boost publications and metric scores is common among universities/departments and researchers, leading to the creation of the opportunistic "academic entrepreneur" as a rational strategy for surviving and thriving in the modern entrepreneurial university. Sometimes, this entrepreneurship involves ethically repugnant strategies to boost performance. These dark strategies, at least in the research context examined here, may be best understood as a continuum of 'light' to 'dark' manipulation in which all the strategies can potentially be implicated depending on execution. While some of these strategies have been identified previously, such as 'lazy citations' (Teixeira et al., 2013; Moyle et al., 2021), or excessive conference attendance to boost careers (Hopkins et al., 2016), the range identified in this research suggests an insufficiently explored darker side.

The web of power
As bibliometrics have permeated universities, it is important to understand why these developments have been readily accepted, and often embraced. As the introductory discussion shows, there is an evolution in the prevalence of performance metrics ultimately driven by corporate interests. This includes the erosion of academics in senior decision-making positions in favour of representatives from corporations, where profit maximisation and competition are privileged over traditional academic virtues such as collegiality and 'mindful research' (Bok, 2003). Regarding research gatekeeping, publishers have similar vested interests in producing high-rated journals that supposedly attract the most influential publications, measured in the relevance of scientific discovery or public interest. Journals with high impact factors are must-have subscriptions for libraries, which determines their value for publishers. Attempts by publishers to use their quasi-monopolistic positions to raise the access fees paid by universities have already resulted in various conflicts, with many universities refusing to maintain subscriptions.
These power relationships are illustrated in Fig. 2. Publishers exert most of the power and pressure in the system, as they control peer-reviewed publication and thereby influence universities, departments, and researchers, and, potentially, governments. This is because universities need access to papers and are ranked on performance. Departments will also seek to perform well within the metric environment established by publishers, for instance to attract additional funding from universities or research organisations. This has given rise in recent years to an entire economy around bibliometrics, with publishers offering bibliometric assessment services, and developing new metrics. These services around rankings and ratings can be readily sold to universities that may wish to, or feel forced to, compete in the global marketplace for reputation and students. Digital platforms such as ResearchGate offer similar services, and serve as a marketplace, for instance to post job advertisements. Accreditors use bibliometrics to award 'licenses to operate', which can take several years, and constitute a complex power relationship in itself. It is these private and public entities that are the ideological engines of neoliberal assessment practices imposed on researchers.
Publishers and platforms depend on researchers to produce science. Publishers contacted informants to highlight performance and standing, touting benefits of open access publication through higher downloads and citations. Others were approached to be editors, who form editorial boards and gain peer recognition as well as power over paper acceptance decisions. Open access was mentioned, but there was little awareness of this business model's push and pull factors. Here, platforms offering to post research for free allow researchers to make their work more accessible, and to become visible in the marketplace of metrics. A general insight is that while science is increasingly metrics-based, power over assessments is with publishers and platforms, gatekeeping entities working on the basis of obscure algorithms but salient commercial motivations. Yet, it is researchers who generate the very currency, publications, that sustains the system. Cynicism about this system was voiced by respondents who articulated the difficulties in obtaining tenure and the dominant role of bibliometrics in building research careers in an increasingly competitive global environment. As the web of power illustrates, researchers are at the core of the bibliometric environment yet seemingly powerless to influence its development or prevent their own exploitation.
In this system, there are different motivations for researchers to participate. On the most fundamental level, individual researchers may be coerced into evaluations. In the contemporary university, performance metrics are critical for employment, promotion and tenure, funding and resource allocation, even research grant applications. As responses indicate, researchers may still be able to ignore some of these pressures in some universities/departments, though there are likely costs in career advancement. For researchers performing well under given sets of bibliometric descriptors, there may be good reason to welcome assessments, as achievements will be recognized and potentially rewarded.

A Darwinian landscape
These findings may be interpreted through evolutionary theory. 'Success', on an evolutionary level, can result from aggression as well as cooperation, both of which can advance goals of 'survival' (Dawkins, 1976). These linkages to evolutionary concepts have been outlined by economic geographers, also in the context of tourism (Ma & Hassink, 2013). For our purposes, it may be argued that at the macro level, evolutionary processes are evidenced by the global oligopoly of publishers; research has established that the five most prolific publishers accounted for more than half of published papers in 2013, with evidence of further market concentration (Larivière et al., 2015). There is also evidence that publishers defend their 'territory' aggressively through growth strategies involving acquisition of competitors, development of publisher-dependent services related to bibliometrics, or changes in paper access prices. These strategies reflect the evolutionary principle of success based on aggression.
Patterns of aggression and cooperation are also evident in respondent strategies. Publication cartels are forms of cooperation which benefit all members, since the same paper is added to each co-author's resumé. However, as co-authors contribute unevenly to the paper, co-authorship benefits are greatest for those contributing the least. Yet, even for the greatest contributors, the strategy may still reward, as including authors increases the chance of being invited to join a future paper, of remaining at the core of networks, or of being considered 'useful', with future chances for reciprocal altruism. Free-riders in this research thus resemble the 'subtle cheats' identified by Dawkins (1976: 244): "who appear to be reciprocating, but who consistently pay back slightly less than they receive". When bibliometric assessments ignore author numbers, this strategy can be successful. Virtually all identified strategies can be meaningfully interpreted through evolutionary theory. While Dawkins (1976) explained 'survival' strategies from the gene perspective, he outlined parallels to the organism. Evolutionary strategy can also include the establishment of memes. Hence, evolutionary strategies can apply to researchers, and perhaps even institutions, and the global academic system, as represented by universities, publishers and other actors.
Extending this rationale, strategies to improve performance can be interpreted as reactions. Bibliometrics is about competition based on score comparison, a form of pressure to which individuals react. As they are not actively involved in the design and uses of bibliometrics, and as these systems increase performance pressure, fear is a logical outcome. Humans respond to fear through what Bracha (2004) designates as freeze, flight, fight, fright or faint. 'Freeze' is a response of hypervigilance, the careful monitoring of the environment. In the context of bibliometrics, there may be parallels in being alert to new pressures and changes in institutional settings. Several responses can be interpreted accordingly, as some answers indicated keen awareness of ongoing changes and their personal relevance, in some cases to gain advantage over less vigilant colleagues. 'Flight' is an escape attempt, as through academic exiting, accepting teaching-intensive positions, or transitioning to administration. There are also various examples resembling strategies to 'escape underperformance', such as by publishing in low-barrier open-access journals, engaging in salami-slicing, or relying on other output maximisation strategies.
'Fight' refers to acceptance of the bibliometric challenge by performing well. Many of the strategies mentioned by respondents would fall into this category, such as joining strategic alliances, advancing into editorial roles, or pursuing prominence on digital platforms. 'Fright' refers to immobility, a form of passive observation of developments in hopes of the threat passing by. For instance, senior faculty members may ignore their performance in evaluations, knowing that they will leave academia before universities will have translated disincentive policies into action. It often takes many years for universities to move from policy formulation to performance assessments and disciplining of (underachieving) staff. Finally, 'faint' represents a strategy to signal that one is not a threat. Bracha (2004) referred to faint as a survival strategy of women and children in violent conflict. In the context of bibliometrics, this was evident in some of the answers claiming unawareness of bibliometric advancements, perhaps most notably represented by repeated responses of "maybe I am naïve" [R24], "I'm probably completely bibliometrically naïve" [R36], and "perhaps I am just being naïve" [R20]. These notably all came from senior scientists at leading universities.
This leads to the question of more aggressive strategies to advance bibliometric scores. Dark manipulation, from evolutionary viewpoints, also makes sense if the competition's success can be limited, or if others are similarly inclined. To reject manuscripts related to one's field (as a reviewer) reduces the chances for others to collect citations, i.e., it is a mechanism for eliminating or suppressing competition. This strategy can be enhanced through pompous or poisonous comments serving the goal of further discouraging the competition, while very negative personal comments to the editor can increase the chance of a rejection without giving authors a chance of responding.
In many cases, the ethics of particular practices will be situational. A suggestion to cite a reviewer's paper may be a genuine attempt to improve a manuscript, or a strategy to increase citations, or both. In many situations, the response will also determine how the system evolves. Freeriding on publications can only happen if willing partners allow it to happen, and depending on perspective, this may represent symbiosis or parasitism. For example, junior researchers may profit from co-publishing with well-known senior colleagues, whose names may garner greater attention for the paper. The junior colleague may thus see this as a form of symbiosis, even if they wrote the paper. In contrast, an outsider may consider the process to be parasitism, as it is unethical to join papers to which one has not made substantial contributions.
There are various implications of these findings for scholarship. On the most fundamental level, these concern academic integrity. As Richard Dawkins (1989: xxiiii) acknowledged more than 30 years ago, in the preface to the second edition of The Selfish Gene: "I recently learned a disagreeable fact: there are influential scientists in the habit of putting their names to publications in whose composition they have played no part. Apparently, some senior scientists claim joint authorship of a paper when all they have contributed is bench space, grant money and an editorial read-through of the manuscript. For all I know, entire scientific reputations may have been built on the work of students and colleagues!" As our findings confirm, academic integrity is indeed questioned, and sometimes undermined, by bibliometric developments. More positively, bibliometrics drive ambitions to perform; more negatively, they foster forms of academic entrepreneurship that have questionable outcomes for academia and science. What scholars driven to perform will do in extremis to achieve academic ascendance is evident in the 2005 stem cell research scandal. Here, a 'ground-breaking' paper in Science, published by 24 South Korean authors and one US author, was identified as fraudulent. According to Resnik et al. (2006: 101), an investigation concluded that the US author did not deserve to be an author on the paper, "since his only contribution to that paper was [to recommend] that the authors hire a professional photographer to take a photo of the dog". Yet, the US author also received US$40,000 in honoraria, while failing "to ensure that each coauthor had approved the paper". Resnik et al. (ibid.: 102) emphasise that "[…] the stakes were high. If the research had been sound […] it would have brought money and glory to South Korea and could have earned [lead author] a Nobel Prize".
While such high-level academic fraud is apparently rare, this paper revealed questionable strategies for advancing bibliometric goals, and pressure to perform arising from power differentials between researchers, publishers and universities. Genuine reviewer attempts to advance knowledge by recommending citation of their own highly relevant publications can easily lead to ethically compromised manipulations when that same reviewer also insists on citing tangential reviewer publications. Findings, therefore, invite discussion on whether bibliometrics serve their originally designated goals. Performance metrics can have positive outcomes, but not all results are equally desirable. This may be best understood chronologically; pre-1970s, bibliometrics were barely relevant. This changed in the period 1970-2000, when early performance metrics were developed and the field was transforming into a science. The early 2000s saw intensified engagement, but it is only since 2010 that a diverse engagement stage was entered, with publishers greatly expanding their use of performance metrics and universities increasingly adopting these metrics. Performance assessments are now all-encompassing at some universities, favouring grant revenue over academic outputs, limiting academic freedom, and undermining core scientific principles of diligence and co-operation. This anticipates a continued evolution of researcher manipulation strategies, including dark ones, to confront growing pressure in the academic system.

Conclusion
As a claim to international standing and reputation, institutions are increasingly drawing on bibliometric data to assess research performance. However, there is limited understanding of how research performance is measured and assessed by institutions, and of the consequent responses of scholars. Pressure to maximise research output quantity and quality has generated more or less ethical forms of academic entrepreneurship to build and sustain research careers. Traditional conceptualisations of academic entrepreneurship are limited in scope, usually referring to involvement in commercial spin-off companies or consultancies. This manuscript expands the notion of academic entrepreneurship to encapsulate scholars who consider taking intellectual risks by manipulating bibliometrics to build and ultimately sustain superior research performance, potentially at the expense of producing valid knowledge.
Evolutionary theory is used as the lens to interpret the findings, with a shift toward emphasis on profit maximisation and competition over traditional academic virtues such as collegiality and mindful research offered as a potential explanation for the proliferation of bibliometrics to assess research performance. Basic manipulation for survival was evident in respondent strategies, with publication cartels offering benefits to all authors despite one or two completing most of the work; those who complete a majority of the writing benefit from continued reengagement in the authorship network as a form of reciprocity. Other basic concepts within evolutionary theory are offered as potential explanations for the behaviour, with examples of how scholars freeze, flee, fight, fright or faint in response to institutional pressure to perform. While the integration of bibliometric data inspires ambitions to perform, an unhealthy obsession can foster dark forms of academic entrepreneurship with questionable outcomes for academia and science.
A central conclusion of this research is that bibliometric assessments have implications for academia. Findings confirm that evaluations and measurements increase pressure to perform and compete, in a system that builds on thoroughness and cooperation. Developments are driven by neoliberal entities in the 'web of power', specifically the publishing houses, that have created dependency and disempowerment structures forcing researchers to maximise output while limiting their options to critically engage with the system. Results have demonstrated that academics are universally compliant with the web of power, willingly or reluctantly. Senior academics, who may be expected to be at the forefront of a critical resistance, often successfully navigate developments as a result of careers beginning in pre-bibliometric times. They may also master diverse strategies, as their understanding of bibliometrics has co-evolved. Accordingly, they may have less interest in reform, while simultaneously serving as role models. Junior scholars, on the other hand, are socialized into the web of power, and seek to progress their careers within the system. They are thus unlikely to challenge its condition, fearing repercussions. Their impotence is evidence of a wider power imbalance, because faculty members have little scope as individuals to challenge the system; to do so entails great personal risk. Those who cannot compete in the system may thus rather move to teaching or administration, avoiding the dangers of research, or quit academia entirely, thereby further weakening the impetus for reform.

Toward a virtuous resistance movement?
This bleak prognosis notwithstanding, the authors do see a path forward based on those respondents who mentioned the increasing importance attached by their institutions to socially relevant research outcomes. The goal in tourism should be to privilege such research over the dominant knowledge-for-knowledge-sake model by, for example, positioning social utility as a manuscript review criterion at least as important as sound theoretical foundations. More radically, social engagement and its outcomes for communities should be weighed in promotion and tenure decisions at least as heavily as peer-reviewed output, which typically seldom translates into tangible real world benefits (Brauer et al., 2019; Thomas & Ormerod, 2017). This requires scholar assessment protocols where community and other non-academic stakeholder feedback is as or more important than raw citation counts, and which incorporate accordant social utility altmetrics. Put in evolutionary theory terms, a shift to social utility equates to adjusting the socio-economic system so that acquiescence to existing norms around bibliometric manipulation no longer confers a higher probability of surviving and thriving.
Among several factors suggesting that this aspiration is neither naïve nor unrealistic is the latent power of researchers, who, it is worth reiterating, are entirely responsible for the raw material, i.e., knowledge creation, upon which the system depends. Collective efforts to elevate social utility in review processes and to focus on field projects rather than peer-reviewed publication, should they be enabled, could profoundly challenge the perverse parameters and reticulations in the prevailing web of power by adding communities as an empowered eighth stakeholder group. Such recalibration is already at least implicitly endorsed by university mission statements, which almost invariably focus on their central role as social betterment agents (Cortés-Sánchez, 2018); senior administrators, accordingly, are morally obligated to support this reform. Some universities, moreover, as our results indicate, have already opened this door and deserve applause for doing so. What can realistically eventuate from a larger effort, and not within a distant timeframe, is a new model of the 'enlightened academic entrepreneur' focused on making the world a better place.
Inspiration is provided by the growing 'resistance literature' in other fields.Feminist geographers, for example, challenge the patriarchal foundations of the corporate university by endorsing an ethic of caring within a slow scholarship philosophy that privileges the freedom to think (Mountz et al., 2015).Another perspective, from legal studies, emphasises reclaimed agency through mindset recalibration.Dispirited faculty are encouraged to collectively recognise their enormous latent power as knowledge-makers and re-identify also as change-makers.This approach rejects the inevitability of the neoliberal model and sees universities as amenable to reform; academics must therefore identify what needs to change and why, while acknowledging the good in the system and making constructive suggestions for reform (Heath & Burdon, 2013).Enlightened academic entrepreneurs may thus seek debate with administrators about assessment frameworks, challenge publish-or-perish mentalities, and focus on 'real impact' research.Gonzales (2015), from an education perspective, regards such 'mundane' acts as crucial foundations for higher levels of resistance, along with quasi-seditious acts such as eschewing highly competitive grant schemes or meeting only the minimum applicable standards of research output.Our results confirm geographic diversity in the corporatising proclivities of universities, but tourism scholars now have at their disposal diverse ethical resistance options to choose from, and must engage them accordingly.

Funding details
N/A.

Disclosure statement
No disclosure statement necessary.

Declaration of competing interest
None.
Career stages: early (post-graduate or PhD), mid (Assistant Professor, Associate Professor, or equivalent), late (full professor, active or retired).

Fig. 1. Bibliometric engagement in tourism (panels: individual faculty engagement with bibliometrics; dark manipulation).