Reaching and implementing the best available knowledge in wildlife biology

Abstract
Wildlife biology is an applied discipline in which research results are, to varying degrees, incorporated into accepted knowledge. However, several factors appear to widen the divide between research results, knowledge, and implementation. First, there is an exponential increase in the number of published papers, driven in part by a misguided reliance on publication records for assessing scientific competence. The sheer number of publications risks diluting knowledge through salami-slicing, or simply making it difficult to find relevant publications. This development could be problematic for the future of wildlife biology, and it has made complex statistical analysis of already available data more profitable than the ability to generate new data in robust field studies. Research results are conceptually different from knowledge and need to be evaluated in a post-publication process to become knowledge. Second, the formulation of research questions has become restricted by the way research is organised and funded. A shift in focus from individual performance to research groups could encourage the development of more complex research questions that are better suited to advancing the knowledge of wildlife biology. Funding agencies and research institutions need to think beyond the current norm of a three-year funding package. Third, defining knowledge as the result of a post-publication evaluation of research publications would facilitate knowledge transfer between researchers and practitioners. It is well established that a two-way transfer of knowledge and experience benefits both researchers and practitioners, but the academic recognition for such efforts is currently too low, which actively discourages researchers from devoting limited resources to such activities. Only academic institutions and funding agencies together can make the significant changes needed, as has begun in other disciplines. However, researchers also need to embrace existing and emerging initiatives such as Open Science, FAIR data, and CRediT authorship to accelerate change.

The need for knowledge
Good decisions are based on knowledge (Romesburg 1981). Knowledge is a belief that is justified, true, and has proper evidence (Goldman 2020), and scientific research is accepted as an objective process to refine or change beliefs and increase knowledge (Bird 2007). The research process is governed by a set of recognized, institutionalized norms and methods (Mulkay 1976; Merton 1979; Nosek et al. 2015): a line of activities that generates a hypothesis, makes a prediction, samples observations, and finally analyses the results. A thorough understanding of current knowledge and its context is required to think critically, question accepted theories, evaluate alternative explanations, and publish results in a recognized format and outlet (Dellsén 2018). However, advancing knowledge also requires a post-processing of research results in relation to previous experience and knowledge before they become widely accepted and recognised as new knowledge. Recent decades have seen a dramatic increase in research publications, and we propose that this has congested the growth of knowledge. The quality assurance of the process from research publications to knowledge has become someone else's problem, which has further increased the uncertainty about what is the best knowledge to implement in wildlife management. We suggest that there is an acute need for norms and principles for the post-publication evaluation of research beyond peer review in wildlife biology. In medical disciplines, for example, the Swedish Agency for Health Technology Assessment performs independent assessments and evaluates the scientific evidence supporting both new and established measures within health, medical, dental and social services. The agency scrutinizes the research published on a specific topic. Sometimes, based on the available evidence, it is not possible to determine whether an implementation is effective, and knowledge gaps or scientific uncertainties are identified (for example, see Ludwigson et al. 2023).

Generating widely recognized and accepted knowledge
The number of publications and where they are published, not their content, have long been the most important indices of scientific competence used by academic institutions and research funders. The drive and ability to publish are now essential qualities for a successful academic career and for competing for research funding. The growing number of Ph.D. students (Sarrico 2022) will only increase the competition for academic positions and the production of publications for competition's sake. As scientists, we are trained to design studies that take into account the relevant successes and failures of prior studies and the evidence within them (Clarke 2004). However, when searching the literature to see what has gone before, it is now all too easy to become overwhelmed by the number of articles. Even efficient searching can yield an unmanageable number of publications because of the large number of studies published. In addition, an increasing number of articles contain little novel knowledge, and often too little information to relate to other findings or reproduce the results (Baker 2016; Oza 2023). Moreover, the publication bias toward 'positive' results makes assessing "successes and failures" next to impossible. "In a highly competitive era, it seems that in the quest for high publication rates and funding, researchers lose sight of the original aim of science: to discover a truth about nature that is transferable to other systems" (Betts et al. 2021). An important question is whether we train students and early career researchers enough to think about the big picture of which they are a part, not only about how many articles carry their name on the author list. We believe that any wildlife researcher should be able to explain how his or her most important scientific publications have contributed to the knowledge of wildlife biology.
Wildlife management should be based on the best available knowledge, and published work represents our current understanding. Each study represents a piece of the evidence base and adds or subtracts weight for a particular idea or hypothesis. A search for the keyword "wildlife" in articles registered on the Web of Science between 2003 and 2021 shows an exponential increase of 10% per year, with more than 5,000 articles published in 2021. It is of course impossible for any researcher to read even the abstracts of all wildlife-related articles published annually. The rate of increase in wildlife research articles and the sheer number of publications could put the growth of wildlife knowledge at risk (Ioannidis 2005), e.g., through the dilution of knowledge by splitting results from a single research project into several smaller publishable units (salami-slicing), through enlarging the plate from which to cherry-pick results, or simply through the greater risk of missing relevant findings. Open Science combined with improved search engines and artificial intelligence (AI) or large language models (LLMs) may offer opportunities to address these issues and rapidly summarize numerous publications. However, among other issues (see below), this depends on the widespread implementation of Open Science and FAIR principles and on access to reliable and unbiased archiving and search tools (Ma et al. 2021). The peer review of manuscripts before publication reduces, but hardly stops, the risk of results from poor studies being published, especially given the huge number of manuscripts circulating among journals and the rise of predatory journals and journals with poor editorial processes. The problem of declining standards in published studies is partly driven by, and exacerbated by, a decline in reviewer recruitment and review quality (Fox, Albert, and Vines 2017; Zaharie and Seeber 2018). In addition to the need to produce papers, the salami-slicing of results, and a collapsing (and questionable) peer-review system, we are also facing manuscripts and publications produced by digital algorithms (Abd-Elaal, Gamage, and Mills 2022). It is time to develop norms and tools for post-publication evaluation to gain the best and unbiased knowledge.
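To put the growth figures above in perspective, here is a minimal back-of-the-envelope sketch. The base figures (roughly 5,000 "wildlife" articles in 2021, growing at about 10% per year) are taken from the Web of Science search described earlier; the three-minutes-per-abstract reading speed is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope sketch of what 10% annual growth in publication
# counts implies. Base figures (~5,000 "wildlife" articles in 2021,
# ~10%/year) come from the Web of Science search described in the text;
# the 3-minutes-per-abstract reading speed is an illustrative assumption.

def projected_count(base_count: int, annual_rate: float, years: int) -> float:
    """Annual article count after `years` of compound growth."""
    return base_count * (1 + annual_rate) ** years

base_2021 = 5_000
rate = 0.10

# At 10% per year, annual output doubles in a little over 7 years
# (rule of 72: 72 / 10 ~= 7.2); the first whole year at or past x2 is year 8.
doubling_years = next(y for y in range(1, 50) if (1 + rate) ** y >= 2)

# Projected annual count a decade on, and the time needed just to
# skim every 2021 abstract at 3 minutes apiece.
count_2031 = projected_count(base_2021, rate, 10)
hours_of_abstracts_2021 = base_2021 * 3 / 60  # 250 hours per year
```

Under these assumptions, merely skimming one year's abstracts already takes roughly 250 working hours, and the annual count passes 12,000 within a decade, which illustrates the congestion of knowledge growth described above.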
The capacity of AI systems to gather and summarise information available in digital format is rapidly increasing (Marshall and Wallace 2019) and could improve the post-publication processing of research results. However, this is hardly sufficient, and we argue that the human capacity to critically reflect and discuss is still required to advance knowledge. Field experience is also an important contribution to post-publication evaluation in wildlife biology. Wildlife education needs to focus on getting students out of the lecture theatre and laboratory, away from their computers, and into the environments and landscapes they are studying, collecting and analysing real data, and applying ecological inference. When you have spent months freezing, wet, and tired trapping and radio-tracking animals, collecting vegetation samples, or counting scats in the landscape, you begin to understand the limitations of different methods, study designs, and data. In conclusion, knowledge about wildlife biology is underpinned by good causal inference and insight into biology. Recent decades have seen an increased focus on new, excellent, and useful statistical methods (Gelman and Hill 2006; Hobbs and Hooten 2015; McElreath 2020; Kery and Royle 2020), but the ability to collect and validate data has become less valued and less important. The ability to perform advanced and complex statistical analyses is hardly more important than understanding ecological theory, the ability to design robust field studies, or good field skills. One explanation for this development is probably the increased drive to publish and the short-term nature of many research projects due to limited funding and insecure tenure. The increased pressure to complete degrees and theses within the stipulated time, usually 3-4 years in the UK and Europe, will in most cases make fieldwork impossible. We firmly believe that this is a dangerous development that neither promotes nor rewards creativity.
Commissioned systematic reviews could be considered as a post-publication strategy to promote knowledge growth in wildlife biology and identify knowledge gaps. The need for different types of review has long been recognised in medicine (Grant and Booth 2009; Peters et al. 2015), and the aim and type of review have to be carefully considered. Initiatives such as Conservation Evidence, which provides an information resource designed to support decisions about how to maintain and restore global biodiversity, are interesting. However, reviews of reviews cease to be informative, and reviews cannot replace original research and the generation of new data (Fidler et al. 2017). Reviews are optimally made by independent agencies at academic institutions that are commissioned by the responsible authorities and follow transparent protocols.
We suggest that many of the complex questions in wildlife biology and management require larger projects that run for longer and over a larger geographical scale than at present to obtain relevant new knowledge (May 1999). There is a risk that the formulation of research priorities and questions has become restricted by the way research projects are generally organised and funded: often one researcher, one project, typically funded for three years, or a Ph.D. project that should generally be completed within three to four years and, in Scandinavia at least, result in two to three research articles. An increased focus on research groups rather than individual researchers could encourage and enable the development of more complex and larger-scale research questions that are better suited to advancing the knowledge of wildlife biology. Large projects with extensive fieldwork and cooperation with stakeholders are time-consuming and expensive; however, wildlife research is hardly more expensive than many other research projects in the natural sciences. Laboratories with high-tech equipment and supercomputers are costly (and quickly outdated), but their cost is hidden in equipment grants and justified in high-profile opening ceremonies and press releases. Maybe there is a lesson to be learned here.
There are both good and poor experiences in setting up such large, often interdisciplinary projects. It would benefit students if they could draw on the experience of senior researchers, not least in terms of how to plan and collect data in the field. Funding agencies are generally positive, but individual researchers can be quite reluctant, because such projects lack academic recognition and can be associated with a lower or slower production of publications. They can reduce the flexibility and availability of resources for the individual researcher because of the need to adopt broader aims and adjust to other researchers and stakeholders. Researchers may also lack the experience and (administrative) support required to write and manage large-scale interdisciplinary projects. It is unclear what mix of bottom-up and top-down initiatives is most efficient in such a process (Okamura 2019; Kluger et al. 2020). Academic leadership comes in different styles, and managing scientists and the scientific process is different from managing industrial production (see Alvesson and Sveningsson 2003, and references therein). Academic leadership should be a recognized quality, distinct from working with budgets, attending meetings, and overseeing administrative routines. A day-by-day leader of creative work should ". . . display support for subordinates and their work by monitoring progress efficiently and fairly, consulting with them on important decisions, supporting them emotionally, and recognizing them for good work" (Amabile et al. 2004).

Reaching and implementing the best available knowledge
Wildlife biology is often associated with contested knowledge on the implementation of science-based management interventions (Hodgson et al. 2019, 2022) and with the conflicting interests of different interest groups (Woodroffe, Thirgood, and Rabinowitz 2005). Policymakers and managers cannot be expected to understand the details of ecological theory, complicated statistical models, or even the tables in the results sections of published papers, but they are often interested in understanding and discussing the implications when new knowledge emerges. Wildlife research alone is seldom sufficient, and it is important to identify which parts of a conflict are due to a lack of knowledge and its implementation, and which require negotiations and political decisions (Thompson and Tuden 1959; Heberlein 2012). Wildlife researchers must be careful not to claim that their research can produce win-win outcomes that will resolve conflicts and make them disappear. Otherwise, without social or political movement or will, implementing new knowledge is just as likely to be contested as implementing existing knowledge.
The frustration of disseminating knowledge outside the research community is well known; see, for example, Sutherland et al. (2004), Laurance et al. (2012), and Walsh et al. (2019). We believe that part of this problem is the confusion between research results and knowledge (Lavis et al. 2003; Hulme 2014), and that the solution is not to popularize research results for transfer to practitioners. On this view, the exponential increase in research publications will make dissemination even more frustrating. Facilitating dialogues between researchers and practitioners can result in a two-way transfer of knowledge and put recent research results in the perspective of present knowledge and its implementation (Esselin and Ljung 2006; Roux et al. 2006). However, researchers often argue against devoting limited resources to such activities; the academic recognition for such efforts is simply too low. Arlettaz et al. (2010) identified a lack of involvement by conservation researchers in the implementation of concrete conservation actions and argued for a conceptual paradigm shift in which researchers turn conservation science into conservation action. Publishers, funders, and promotion boards too often fail to credit researcher input outside of publication metrics, and this is a systemic error that cannot be reduced to the responsibility of a single researcher or research group (Zaharie and Seeber 2018).
Implementation of the best available wildlife knowledge is a much wider responsibility than that of the university sector alone, and it is necessary to involve the groups that have experience in implementing knowledge (Lavis et al. 2003; Dilling and Lemos 2011). Scaling up research findings to full-scale management policies is problematic and may result in unexpected surprises (Goddard, Dougill, and Benton 2010). A researcher can, of course, hold personal opinions and preferences and advocate for a certain policy, but it must be clearly stated that this is an opinion rather than a scientific consensus on the best available knowledge. Not doing so makes the researcher a stealth advocate cloaking personal preferences as science, rather than the Honest Broker, who tries to present the range of options and their scientific support (Pielke 2007).
In North America, there is a model of wildlife management and conservation that includes a system of outreach and extension staff who educate and engage citizens and wildlife professionals. New knowledge is implemented in wildlife management through multidisciplinary integration and participatory management involving stakeholder groups: "Wildlife management is the guidance of decision-making processes and implementation of practices to purposefully influence interactions among and between people, wildlife, and habitats to achieve impacts valued by stakeholders" (Riley et al. 2002). In Europe, there are agencies and NGOs that should have an interest in supporting the process of reaching and implementing the best available knowledge, but this should be better organized than at present.

In Conclusion
Wildlife biology is a highly applied discipline, and wildlife biologists need field experience to acquire the knowledge required for wildlife science. It is therefore worrying to see how Ph.D. programs reduce the norm for time to completion and introduce incentives for those who deliver a thesis on time. For example, in Norway, a Ph.D. student is not expected to spend more than 36 months on research and courses.
For wildlife biology to deliver against societal needs and to remain relevant, there needs to be a move away from publication metrics and towards a more holistic assessment of researchers and research worth. We encourage wildlife biologists and funding agencies to engage constructively with ongoing initiatives in this area. We argue that, as is happening in other fields, the obsession with publication metrics and short-term funding severely undermines the growth of knowledge and its applications. The first step is to recognise the critical need for post-publication review and synthesis.
Many wildlife researchers are positive about synthesizing knowledge to facilitate management decisions, but the norms and quality criteria for such efforts need to be developed further. However, the responsibility for a post-publication evaluation of research articles cannot be left to the individual wildlife researcher. It requires support and recognition from independent academic institutions, which are in an optimal position to secure the process from research results to the best available knowledge and its dissemination to wildlife biologists (Maassen and Stensaker 2011; Bowen and Graham 2013). Wildlife knowledge should not only be a commodity for the research community.
We suggest that, for wildlife biology to address many of the more complex applied problems, funding agencies need to embrace larger projects that undertake longer-term and wider (geographic) scale studies, and to better support interdisciplinary studies. This would also stimulate critical discussions about which research questions are interesting, novel, and relevant, in addition to being feasible and ethical. However, for the potential benefits to be realized, such a change must occur alongside a move away from publication metrics as the principal means of ranking researchers and institutes.