Characterizations of COVID-19 risk: review and suggestions for improvement of current practices

Abstract To handle the risks related to coronavirus and the COVID-19 disease, governments worldwide have adopted different policies and strategies. These policies and strategies build on various approaches and methods to assess and convey the risks. This paper looks more closely into these approaches and methods. We review and discuss practices in four countries (Norway, the UK, the US and Sweden), focusing on the approaches, methods and models used to assess and describe the risks related to COVID-19. The main aims are to present some current thinking, reveal differences and suggest areas for improvement. The paper concludes that current practices can be enhanced by incorporating ideas and approaches from contemporary risk science, particularly in relation to how to treat uncertainties and reflect degrees of knowledge.


Introduction
Risk has been extensively referred to in governmental press briefings and documents providing policy and guidance concerning the ongoing pandemic. However, what the concept actually means is rarely clarified. When a definition is provided, risk is mainly referred to as the combination of consequence and probability/likelihood (e.g. FHI 2021b; SAGE 2020). Probability, then, is seen as a key concept for understanding and assessing risk, but this concept is not given a proper interpretation. Furthermore, several of the approaches introduce a measure of confidence in the knowledge base supporting likelihood judgments, yet no reflections are provided concerning how this confidence level affects the overall judgment of risk. Other approaches rely on statistical modelling and analysis to express risk, for example by estimating the number of fatalities in the coming month (e.g. SAGE 2021a). Statistical analysis is a fundamental and well-established tool for assessing and describing risk; the degree to which this approach is suitable for the present case can, however, be discussed. With considerable uncertainties related to the phenomena studied, as in the early stages of the pandemic, the accuracy of the models, as well as the quality of the input data used, can be questioned. The models are based on many assumptions that are to varying degrees justified. Current practices highlight the dependence of the results on the models and their assumptions and input data, but judgments of the strength of the information and knowledge supporting the analysis and its results are not commonly reflected as an integral part of the risk characterizations.
How to conceptualize and describe risk is not straightforward, and this is also the case for risks related to the COVID-19 pandemic. However, risk science literature can provide guidance when it comes to which aspects need to be addressed to ensure a prudent risk characterization. This literature presents frameworks that clarify the interrelationships and interpretations of all the key risk-related concepts, including uncertainty, probability and knowledge. A focal feature of contemporary perspectives on risk is the importance of including reflections on the knowledge dimension. For example, how can we adequately evaluate the results of the statistical analysis, if the strength of the supporting knowledge is not properly addressed?
A closer look at the current approaches used to describe the pandemic risk suggests that there is room for improvement, specifically with regard to the following issues:
• The fundamental understanding of the risk concept and how it should be described
• The meaning of a probability in a risk assessment context
• How the aspects of uncertainty and knowledge are reflected, and their relation to risk
The aim of the present paper is to show how the conceptualization and description of the pandemic risk can be improved by drawing on contemporary risk science. To illustrate the discussion, we use information and practices from four countries: Norway, Sweden, the UK and the US. When referring to contemporary risk science and risk science knowledge in the following, a main reference is documents produced by the Society for Risk Analysis (SRA 2015, 2018) and related supporting literature. The SRA documents have been developed by a broad group of senior risk analysts and scientists, with input from members of the society. Risk science, according to this body of literature, can be defined as the practice that provides us with the most justified beliefs that can be produced at the time being on the subject matter covered by the risk field, covering concepts, principles, approaches, methods and models for understanding, assessing, characterizing, communicating and managing risk. Using this framework as a main reference, the above examples point to some of the areas in which improvements can be made.
The paper is organized as follows. Section 2 provides an outline of the method, including the rationale for the selection of countries. In Section 3, we review the current approaches for assessing and conveying risks related to COVID-19 in the selected countries, following up the above discussion. In Section 4, we discuss these practices and provide suggestions for how they can be improved in view of what we consider contemporary risk science. Finally, Section 5 provides some conclusions.

Method
In the coming sections, we review and discuss some of the current approaches and methods applied to describe and convey risks related to the COVID-19 pandemic, focusing on information and practices from governments and public health authorities in four countries: Norway, the UK, the US and Sweden. The selection of countries is based on three main criteria:
1. Relevant information and documents related to the assessment and management of risks regarding COVID-19 are available and accessible
2. The approaches and methods used are of particular interest from a risk science perspective
3. Contextual conformity
The latter criterion is intended to ensure that the selection of countries displays a high degree of conformity with respect to economic, health, socio-cultural, political and structural factors that could have an influence on how risk is assessed and described.
Many countries satisfy these criteria. However, the purpose of the review is not to give an all-inclusive presentation but to address some of the approaches and methods used to assess and convey the risks related to COVID-19. For the purpose of the present discussion, it therefore suffices to consider some selected countries. Based on an overall assessment of criteria 1), 2) and 3), Norway, Sweden, the UK and the US were chosen. The selected countries have economic, health and socio-cultural backgrounds that are broadly similar. For example, all four countries have high-income economies with well-developed healthcare systems, and they have all experienced significant challenges in managing the pandemic, including high infection rates and pressure on healthcare services. All four countries are democratic nations, with different forms of government. While three of the countries are constitutional monarchies (Norway, Sweden and the UK), the US is a constitutional federal republic. However, while Norway and Sweden have a highly centralized governance structure, the UK and the US have more decentralized systems where responsibility for handling the COVID-19 pandemic was largely devolved to regional and local governments, resulting in a more fragmented and varied approach to risk assessment and characterization within these countries.
The approaches and methods used in the selected countries are of particular interest from a risk science perspective for several reasons. Firstly, they represent a diversity of practices used to assess and describe the risks associated with the pandemic, including statistical analysis, epidemiological modelling, scenario analysis, various forms of risk matrices and traditional risk assessment techniques. Furthermore, the four countries illustrate different ways of generating and presenting risk-related information. In Norway and the UK, the assessment and characterization of COVID-19 risk mainly took place in documents that were made publicly available. While similar documents were produced to some extent in the US and Sweden, these countries also relied on expert statements as a primary source for describing and conveying the pandemic risk.
The review will focus on what the present authors consider to be the prevailing approaches and methods for assessing and describing risk related to COVID-19. In this study, we draw information from a sample that comprises both public documents and reports, and statements made by public health experts. Given the different approaches taken by each nation in organizing and structuring their pandemic response, as well as the different sources through which the risk characterizations were produced and presented, sourcing similar documents from similar organizations in each of the four nations is not feasible. While the sampled information may have been primarily intended for a specific target audience (for example policymakers or the public), we will argue that conflating them does not undermine the main purpose of our paper. Our aim is to compare and evaluate the different approaches used to assess and describe the risks related to COVID-19. Defining clear target audiences or purposes for the information is challenging, as different stakeholders may use the same information for various purposes. For instance, risk assessment reports produced by public health authorities may have served as input to policy-making but were also accessible to the general public, who could use them to inform their own decisions. Similarly, expert statements made through public channels such as press conferences and interviews could have had an impact on both decision-makers and the general public. As the COVID-19 pandemic has demonstrated, information concerning risk must be accessible and understandable to a wide range of audiences with varying degrees of expertise.
When it comes to selecting which sources to focus on, our aim is to consider the origin of the risk assessment/risk characterization. For example, if a public health expert makes a statement containing a characterization of COVID-19-related risks, we distinguish between risk characterizations that reference a risk assessment from a report or document and those that are based on the expert's own assessment. In the former case, the source considered is the report or document that is being referenced, while in the latter case, the source of the risk description is the expert statement. Although we recognize the element of subjectivity in the chosen selection, we believe that the collected sample of documents, reports and statements covers what is broadly considered to be the main sources of information concerning the pandemic risk in the countries studied.
In Norway and the UK, governments and health authorities mainly present risk through documents containing risk assessments and statistical analyses. In these countries, the main body of documents has been produced by the Norwegian Institute of Public Health (FHI) and the Scientific Advisory Group for Emergencies (SAGE), respectively. While the FHI is a government agency under the Ministry of Health and Care Services, SAGE is an independent advisory group that reports to the UK government's Chief Scientific Adviser.
In addition to SAGE, the public health authorities in the UK, including Public Health England (PHE), Health Protection Scotland (HPS), Public Health Wales (PHW) and the Public Health Agency in Northern Ireland (PHA), have been responsible for providing evidence-based advice to government and public health professionals, as well as to the general public.
In the US, the leading national public health institution is the Centers for Disease Control and Prevention (CDC). The CDC is responsible for monitoring the outbreak of the pandemic in the US, collecting and analyzing data, and reporting the information to the public and policymakers. In addition, the President of the United States established the White House COVID-19 Response Team, consisting of experts from various fields, including health care, homeland security and communication. The team was responsible for providing regular updates on the pandemic's status, offering guidance on measures to prevent the spread of the virus, and coordinating the federal government's response to the pandemic.
The Public Health Agency of Sweden (Folkhälsomyndigheten or PHAS) is a government agency under the Ministry of Health and Social Affairs. During the pandemic, they have published regular updates on relevant statistics, as well as other documents and reports to inform the public and government about the potential impact of the pandemic and guide response strategies.
In addition to the information provided by public health authorities and institutions, the pandemic has given rise to so-called celebrity scientists (Warren and Lofstedt 2022), giving individual officials a prominent role in conveying the risks to the public. Dr. Anthony Fauci, public health expert and director of the National Institute of Allergy and Infectious Diseases (NIAID), has been a central figure in providing guidance and information in relation to the pandemic outbreak in the US. Similarly, the state epidemiologist at the Public Health Agency of Sweden (Folkhälsomyndigheten), Anders Tegnell, has played a leading role in describing the current status and development of the coronavirus disease in Sweden.
Although public health officials had a prominent role during press conferences and interviews also in Norway and the UK, the statements made by these experts were mainly references to contents in underlying documents by FHI and SAGE, respectively. This stands in contrast to Sweden and the US, where the experts' own assessments formed a larger part of the basis for the risk characterizations. Given the review's aim to focus on primary sources of information for assessing and characterizing risk, statements from experts in Norway and the UK are omitted, as the risk characterizations conveyed in the statements are considered covered by the documents included in the analysis of these countries.

Review of current practices
In the following section, we take a closer look at the methods and approaches used to assess and convey the risks related to COVID-19 in the four countries: Norway, the UK, the US and Sweden. The review and discussion will be centred around the main issues put forward in the introduction section, focusing on how key concepts such as risk, uncertainty, probability and knowledge are understood and expressed.

Norway
To support policy and decision-making, the Norwegian government and health authorities have drawn on risk assessment reports provided by the Norwegian Institute of Public Health (FHI). Since January 2020 and up to the present date (April 26, 2023), 36 risk assessment reports have been produced (FHI 2022c). Upon examining the collection of documents, we found a gradual development in the content of the reports, particularly with respect to how risk is understood and described. In the first risk assessment, published on January 28, 2020, the term 'risk' is referred to throughout the document, yet without specifying what the concept means or how risk is measured. However, the use of the term suggests an underlying interpretation of risk as the probability of an undesirable event/outcome, typically a severe form of disease or death caused by COVID-19 (FHI 2020a). In the reports published from February 2020 and onwards, risk is explicitly defined as the product of probability and consequence, each of which is assigned a value of 'small'/'low', 'moderate' or 'large'/'high' for a set of specified events. For example, a specified event could be 'an increase in imported cases of infections to Norway', with the probability assessed to be 'moderate' and the related consequences 'moderate', resulting in an overall judgment of risk as 'moderate' (FHI 2020b). What probability means in this context is not clear, as the term 'probability' is simply defined as the probability that an event will occur.
As of December 27, 2020, the risk assessments have been further extended to include a measure of confidence in the knowledge base supporting the assessment. Analogous to the judgments on probability, consequence and overall risk, the confidence measure is described using the qualifiers 'small', 'moderate' or 'large'. For example, considering the specified event that 'the English variant and the South African variant are more infectious', the overall risk assignment is 'moderate/high', and the level of confidence in the supporting knowledge is assessed to be 'large' (FHI 2020c). The degree to which the measure of confidence influences the overall judgment of risk is not specified. However, it is stated that, for cases where the confidence in the knowledge foundation is considered to be small, no conclusion on risk is made.
In the risk assessment from May 28, 2021, another component, 'scope', is introduced. The term is stated to denote the likely size of the specified event and is assessed using the same qualifiers as the components above (FHI 2021b).
Recent risk assessment reports do not provide an explicit definition of the risk concept. Although risk is frequently referred to in the reports, the concept is mainly used in terms such as 'hospitalization risk' and 'death risk', indicating an interpretation of the concept as an expression of likelihood. Furthermore, a distinction is made between 'individual risk' and 'societal risk'. The FHI does not provide a clear interpretation of what the two risk categories represent but relates them to different types of consequences. The former type of risk ('individual') is described by referring to the severity of the consequences, given that an individual is infected. For example, it is stated that the risk of hospitalization, if an individual is infected with the Omicron variant, is 0.31% (FHI 2022a). Considerations of 'societal risk' are mainly related to consequences such as the effects of COVID-19 on the capacity of healthcare services and the performance of critical infrastructure, as well as the impact on business and the economy.
When the overall risk picture is presented, recent reports by the FHI no longer include judgments on the confidence in the supporting knowledge base. Furthermore, they have abandoned the use of the 'low', 'moderate' and 'high' qualifiers to assess the likelihood and overall level of risk, relying mainly on an evaluation of the potential consequences and their 'scope' to express the risk (FHI 2021c, 2022a, 2022b).

The UK
In its response to the COVID-19 pandemic, the UK government has mainly been guided by advice from the Scientific Advisory Group for Emergencies (SAGE). The SAGE committee is responsible for providing decision makers with unified scientific advice, based on the body of knowledge produced by its subgroups. These subgroups consist of experts within a range of different disciplines, whose role is to obtain and evaluate evidence concerning specific aspects of the ongoing pandemic and feed their consensus conclusions to SAGE. One such subgroup is the Scientific Pandemic Influenza Group on Modelling (SPI-M), consisting mainly of infectious disease modellers. The work produced by the SPI-M has involved providing 'weekly consensus estimates of the growth rate and the reproduction number as well as short- and medium-term projections' (Brooks-Pollock et al. 2021, 2). Furthermore, the SPI-M 'responds to policy-specific questions, for example, exploring the likely impact of support bubbles or contact tracing and producing scenarios prior to policy changes, like reopening schools or entering and exiting from lockdown' (Brooks-Pollock et al. 2021, 2). The input from the SPI-M subgroup includes a combination of statistical analysis and consensus statements on the future trajectory of the pandemic. The latter type of input involves qualitative descriptions of potential consequences, some of which contain judgments on probability. The nomenclature used to express these probabilities follows the so-called 'Probability Yardstick', introduced by the UK Professional Head of Intelligence Analysis (PHIA 2019). This tool represents a scale of probabilistic terms, each of which refers to a specified probability range (e.g. 'unlikely' refers to a probability range of 25-35%). The interpretation of probability underlying these ranges is not clarified.
In their consensus statement from December 7, 2021, the SPI-M introduces a range of narrative scenarios to describe potential Omicron epidemic trajectories. The likelihood of each scenario is discussed by asking 'What makes this unlikely?', followed by arguments for and against. A set of specified consequences is provided for each scenario, mainly focusing on the dominance of Omicron compared to other variants (SPI-M-O 2021b).
Another subgroup contributing to the collection of scientific advice is the Environmental and Modelling Group (EMG). In May 2020, SAGE published a paper prepared by the EMG, providing guidance on the use of risk estimation to inform risk assessments for individuals and organizations in relation to the ongoing pandemic. The document refers to risk as a concept with two dimensions: likelihood and severity of harm (SAGE 2020). Furthermore, risk assessment is described as a process with two distinct stages: risk estimation ('How big is the risk of what to whom?') and risk evaluation ('Are the risks tolerable?'). The document does not prescribe a specific methodology for how to measure or describe risk. Rather, it serves as a general framework, providing some overall guidelines and principles for risk assessment. For example, it is stated that 'risk estimation can be qualitative, semi-qualitative, or quantitative: the appropriate approach depends on the nature of the risk, the degree of uncertainty in evidence about that risk, and who is doing the risk assessment' (SAGE 2020, 1).
In addition to publishing documents generated by its subgroups, the SAGE committee holds regular meetings, in which they evaluate the current collection of evidence. The minutes from these meetings contain an update on the present situation, as well as advice on response measures based on the assessment of evidence. The probability statements in these documents are not consistent with the PHIA Probability Yardstick. Furthermore, some of the statements are supplemented with a measure of confidence. For example, it is stated that 'Analysis from PHE [Public Health England] and PHS [Public Health Scotland] shows some early signals that the delta variant may be associated with increased risk of hospitalisation compared to the alpha variant (low confidence)' (SAGE 2021b, para. 6). No explicit interpretation of this measure is provided.
The UK Health Security Agency (UKHSA) provides risk assessments specifically directed towards the emergence of new variants of the virus. The assessments are based on a risk assessment framework developed by the PHE (Public Health England), in which risk is presented using a colour-coded matrix (PHE 2021). The matrix lists a number of so-called 'indicators' (e.g. infection severity, transmissibility, immunity, etc.). The range of consequences associated with each indicator is specified in a qualitative scheme. Using the colours green, amber and red, the consequences are coded according to severity. For example, for the transmissibility indicator, the specified consequences range from 'No demonstrated person-to-person transmission' (green) to 'Transmissibility appears greater than the wild-type virus' (red). No reference is made to the probability/likelihood associated with the consequences. The indicators are assigned a status based on current evidence/data. To assess the quality of the underlying evidence/data, a measure of confidence is used. Based on a set of specified criteria (e.g. expert consensus, amount of previous experience, etc.), the level of confidence is graded using the qualifiers 'low', 'moderate' and 'high'.
In addition to this, the Devolved Nations (Scotland, Wales and Northern Ireland) each have their own public health authority, and took differing approaches in describing and conveying COVID-19 related risks. For example, the HPS (Health Protection Scotland) and the PHA (Public Health Agency Northern Ireland) have produced regular statistical reports with the purpose of providing data and analysis to contribute to the evidence base around the outbreak (see e.g. PHS 2020; PHA 2020). Public Health Wales has published a series of reports that draw on and present international evidence, experience, measures and approaches to inform policy action and response (PHW 2020). The approaches used in these nations have in common that they focus primarily on reporting historical data and observations in relation to the COVID-19 pandemic, while assessments on the future development of the pandemic are addressed to a limited extent.

The US
The Centers for Disease Control and Prevention (CDC) is the national public health agency of the United States, serving as a lead authority on the public health response to the COVID-19 pandemic. The main task of the agency is to provide science-based guidance to support decision-making and policy development in relation to the ongoing pandemic. Activities performed by the CDC include disease surveillance, forecasting and mathematical modelling. The output generated by these activities constitutes the evidence base for the CDC's recommendations on prevention, mitigation and intervention strategies. To assess the future development of the pandemic, the CDC relies on both short-term and long-term forecasting. The former approach involves producing point forecasts of the number of deaths, hospitalizations and cases in the coming four weeks, 'using different types of data (e.g. COVID-19 data, demographic data, mobility data), methods, and estimates of the impacts of interventions (e.g. social distancing, use of face coverings)' (CDC 2020b). The point forecasts are accompanied by prediction intervals (e.g. 95%, 50%), to 'characterize uncertainty which point forecasts are unable to characterize' (Ray et al. 2020, Methods section).
On a more long-term scale, the epidemic trajectory is described using a scenario approach. For example, the potential development of the Omicron variant is assessed by generating four scenarios, each representing varying degrees of immune evasion and transmissibility. The scenarios are described as 'plausible' (CDC 2021).
Research, guidelines and recommendations provided by the CDC are reviewed and discussed by the White House COVID-19 Response Team. As well as coordinating and overseeing federal response efforts, the team holds regular press briefings to provide the public with status updates on the ongoing pandemic, including a summary of the current recommendations and available scientific data. The status updates include numerous statements referring to 'risk', in which the term is used as a synonym for likelihood/probability. Examples of such statements are 'unvaccinated people are at 14 times greater risk of dying from COVID-19 than people who are vaccinated' (White House 2021a, para. 17) and 'the risk of hospitalization or death was 65% lower among Omicron compared to Delta, and the risk of intensive care was 83%' (White House 2022, para. 57). Furthermore, some of the statements convey the risks related to the pandemic trajectory by referring exclusively to the potential for undesirable consequences. An example is the following statement, given at a briefing on December 17, 2021: 'For the unvaccinated, you're looking at a winter of severe illness and death for yourselves, your families, and the hospitals you may soon overwhelm' (White House 2021b, para. 8).
In addition to guidelines and data provided by the CDC, the US government's policy and response in relation to COVID-19 has been informed by advice from scientific experts. Among the most prominent is Dr. Anthony Fauci, who, in addition to serving as one of the lead members of the White House COVID-19 Response Team, was appointed Chief Medical Advisor to the President in January 2021. As well as providing government officials with guidance to support decision-making, Fauci has played an important part in how the risks and uncertainties related to the pandemic are conveyed to the public. Figuring in numerous interviews and press conferences, Fauci has been a key source of information on the status and development of COVID-19, particularly focusing on outlining the scientific evidence underlying the adopted policy and recommendations.
In his messages to the public, Fauci has made several references to the term 'risk', yet without providing an explanation of what the concept represents. A closer look at the statements indicates a notion of risk as probability/likelihood. For example, during one of the first press briefings in the early stages of the pandemic (March 5, 2020), Fauci stated that 'when you look at the country as a whole, the risk of getting infected is low', and 'the risk of getting infected when you have community spread in a certain area, is a bit higher, quantitatively I'm not sure' (Reuters 2020). Thus, risk, in this context, is seen as the probability of getting infected with the coronavirus disease. However, the concept is also used to express the probability of experiencing severe consequences associated with infection: 'If you're a young, otherwise healthy individual, the risk of you requiring any kind of medical intervention is low. And we know that from data from China, and from recent data from Korea and recent data from Italy. And that is about 80% or more of individuals who get infected, will do well without needing any medical intervention' (Reuters 2020). Although the underlying interpretation of probability is not explicitly stated, the statement above suggests an interpretation in line with a relative frequency, referring to an estimate of the fraction of people in a population who will experience the specified consequence (i.e. require medical intervention).
Among the hallmarks of Fauci's communication strategy is providing clear messages to the public, based on scientific evidence and data. In an interview with NBC in June 2021, Fauci stated that 'all of the things that I have spoken about, consistently from the very beginning, have been fundamentally based on science' (Breuninger 2021). However, there have been cases where information provided to the public has evolved over time, despite the scientific evidence remaining relatively unchanged. Perhaps the most well-known example is the herd immunity case, where Anthony Fauci initially stated that between 60 and 70% of the population would need to acquire resistance against the coronavirus in order to achieve the protection of herd immunity. However, the estimate was gradually increased to 70-75% and, further, up to 80-85%, before Fauci finally stated, 'I think the real range is somewhere between 70 to 90%' (McNeil Jr. 2020, para. 12). In an interview with the New York Times, Fauci 'acknowledged that he had slowly but deliberately been moving the goal posts. He is doing so, he said, partly based on new science, and partly on his gut feeling that the country is finally ready to hear what he really thinks' (McNeil Jr. 2020, para. 6). Notably, 'he had hesitated to publicly raise his estimate because many Americans seemed hesitant about vaccines, which they would need to accept almost universally in order for the country to achieve herd immunity' (McNeil Jr. 2020, para. 9).

Sweden
In Sweden, the Public Health Agency (PHAS) has been at the forefront of the pandemic response. In accordance with its official mandate as an expert authority with responsibility for public health issues at a national level, the agency has been afforded a key role in assessing and conveying information related to the COVID-19 pandemic to government officials and the public. Among the main responsibilities held by the agency are gathering and analysing data, conducting risk assessments and discussing the potential development of the pandemic, as well as providing recommendations on measures and interventions to help mitigate and control the spread of the virus (Folkhälsomyndigheten 2021a).
In one of their approaches for assessing the overall risk of COVID-19 in Sweden, the PHAS refers to a five-point risk scale, where risk is described using the labels 'very low', 'low', 'moderate', 'high' and 'very high'. For example, on March 2, 2020, the agency published a risk assessment in which the risk of COVID-19 spreading in Sweden was judged to be 'moderate' (Folkhälsomyndigheten 2020a). The PHAS states that the assessment is based on information provided by the World Health Organization (WHO) and the European Centre for Disease Prevention and Control (ECDC), as well as reported cases in Sweden. The underlying criteria for assigning risk levels are, however, not specified. Furthermore, although the assessment is said to be guided by information from the WHO and the ECDC, the initial risk judgments presented by the PHAS were not aligned with those made by the WHO, only coinciding with the best-case scenarios from the ECDC (Pashakhanlou 2022). The risk assessment has remained unchanged since March 10, 2020, at which point it was raised to 'very high' (Folkhälsomyndigheten 2020b).
In addition to the risk assessment scale, the PHAS provides regular updates on the future development of the pandemic, using a scenario analysis approach. The scenarios describe three potential trajectories in the coming months, referred to as scenarios 0, 1 and 2. Produced at the government's request, the scenarios are intended to serve as support for policy and decision-making. The scenario development is influenced by assumptions concerning contact frequency, virulence and susceptibility to the disease. No likelihoods are attached to the scenarios. However, recent reports refer to scenario 2 as the 'least likely' of the three scenarios (Folkhälsomyndigheten 2021b, 2021c). According to its official mandate, the PHAS 'does not have the authority to pass laws and can only provide guidelines and recommendations on how various actors should behave within its area of expertise' (Pashakhanlou 2022, 3). Hence, 'the Swedish government has no legal obligation to follow PHAS' instructions and may disregard their advice' (Pashakhanlou 2022, 3). However, 'In the case of the government's response to COVID-19, the Prime Minister as well as other Cabinet members stated early on in the process that they would take advice from the experts and the agencies' (Pierre 2020, 482) and, thus, 'the operative and strategic decisions regarding the Swedish strategy to the pandemic were made at the public agency level (by experts) and not at all by the government' (Zahariadis et al. 2023, 14).
Among the most prominent of these experts is Anders Tegnell. As state epidemiologist, Tegnell has become a figurehead of the Swedish response, serving as a key spokesperson for the PHAS. Through numerous press conferences and interviews, Tegnell has kept the public informed about the status and development of the pandemic, as well as providing the supporting rationale for the agency's current guidelines and recommendations. Several of the statements made by Tegnell concerning the potential trajectory of COVID-19 refer to judgments of risk and likelihood, yet these concepts are not given a clear interpretation. Upon analysing Tegnell's statements across time, we observe a gradual shift in how risk is conveyed. Initial statements suggest a notion of risk as 'negligible' or even 'non-existent'; for example, in an interview with Aftonbladet in the early stages of the pandemic, Tegnell referred to the risk of the disease spreading to Sweden through imported cases as 'completely non-existent' (Fernstedt 2020). Similarly, in relation to the potential for a second wave of infection, he stated that the risk of experiencing a scenario similar to the first wave 'no longer exists' (Aftonbladet 2020). Tegnell later adjusted his perspective to acknowledge that risk exists, although considering it to be very low; in an interview from October 2020, he refers to the risk of COVID-19 as comparatively lower than that of walking across a pedestrian crossing in a city (Kullberg 2020). Along with the continuing course of the pandemic, Tegnell's statements indicate a more cautious approach in conveying the risk. In a column written for Dagens Industri, Tegnell refers to it as 'difficult', if not 'impossible', to predict the development of the coronavirus pandemic (Tegnell 2020).
A similar notion is found in an interview with Sveriges Radio, where Tegnell expressed a reluctance towards attempting to describe the pandemic risks, claiming that it is 'impossible to know' what the actual risk is (Sveriges Radio 2020).

Discussion and suggestions for how to improve current practices
As illustrated by the review in Section 3, different methods and approaches are used to assess and convey the risks related to COVID-19. In the present section, we provide a discussion of these practices, particularly focusing on the key issues raised in Section 1. The discussion will point to some of the limitations and weaknesses of the current approaches, followed by reflections on how they can be improved by drawing on guidance from contemporary risk science knowledge, representing state-of-the-art principles, methods and approaches for how to conceptualize and describe risk.

Fundamental understanding of the risk concept and how it is described
Governments and public health authorities rely on the output of risk assessments to guide policy and decision-making. Moreover, assessments and statements expressing the pandemic risks are presented to the public as a means of encouraging and motivating adherence to guidelines and recommendations. Clearly, the approaches and methods used to describe and convey the risks related to COVID-19 have a substantial impact on how societies respond to the current crisis. However, in order for decision makers and other relevant stakeholders to understand what the statements describing the pandemic risks actually mean, they need to have a clear idea of the risk concept; how is risk defined, and what does the concept reflect?
Risk is extensively referred to in the large corpus of statements, recommendations and analyses produced in relation to COVID-19. Although a clear definition of the concept is rarely given, a closer look at the statements from Section 3 reveals different notions of risk. The interpretations can be divided into four categories:

i. Risk = a probability/likelihood
ii. Risk = the product of probability and consequence
iii. Risk = a combination of probability and consequence
iv. Risk = a prediction of potential consequences

Although the term 'likelihood' is rarely referred to by Tegnell, the way risk is conveyed in his statements coincides well with the category i) type of interpretation. Thus, when he refers to risk as 'non-existent', it is with reference to a very low (or close to zero) probability of some specified event occurring. Similarly, the comparison of the risk related to COVID-19 with that of crossing a road is simply an assertion that the probability of experiencing severe consequences as a result of the coronavirus disease is lower than that of being hit by a car on a busy street. Furthermore, according to this interpretation, the notion of risk as 'impossible to determine' is essentially equivalent to the idea that probability judgments cannot be accurately or meaningfully assigned.
An interpretation of risk corresponding to the category i) line of thinking can also be found in statements by Anthony Fauci, in which the term 'risk' refers to a judgment of likelihood. In some of the statements, the risk term is used to express the unconditional likelihood of a specified event occurring (e.g. the likelihood of getting infected by the coronavirus disease), whereas, in other cases, the term points to a conditional likelihood, given that an event has occurred (e.g. the likelihood of requiring medical intervention, given an infection).
A similar use of the term 'risk' as a synonym for likelihood is found in press briefings by the White House. However, these briefings also contain statements in which the pandemic risks are expressed using predictions of potential consequences, indicating a notion of risk resembling the category iv) interpretation.
Referring to predictions of future consequences is a common way of expressing risk, and several of the approaches listed in Section 3 can be linked to this category (iv), including the forecasts produced by the SPI-M and CDC, as well as the approaches relying on scenario analysis. Although these approaches all have in common that they describe risk by making predictions of potential consequences related to the pandemic, what distinguishes them is the degree to which they reflect the associated uncertainties.
In some cases, the underlying definition of risk is explicitly stated. An example is the risk assessment reports by the public health agency in Norway (FHI), in which risk is defined as the product of probability and consequence (in line with category ii above). While the effort to provide an explicit interpretation of the concept deserves recognition, the adopted definition is unfortunate.
By interpreting risk as the product of consequence and probability, the concept is seen as an expression of the expected value, the centre of gravity of the relevant probability distribution. Although this metric can provide an informative risk description in some cases, there is broad consensus among risk science scholars that the use of expected values to reflect risk has strong limitations and could serve to mislead decision makers (Paté-Cornell 2002; Aven 2012; Haimes and Sage 2015). A key problem is the metric's inability to show the potential for, and likelihood of, extreme consequences. Two probability distributions could have the same centre of gravity yet be completely different in shape. A situation with a heavy tail, where there is a chance of some severe outcomes, would require a different risk management strategy than one where such outcomes can be ignored.
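The point can be illustrated with a minimal simulation (all numbers are purely hypothetical and chosen only for illustration): two outcome distributions are constructed with virtually identical expected values but very different tail risk.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Distribution A: outcomes tightly clustered around the mean.
a = rng.normal(loc=100, scale=5, size=n)

# Distribution B: mostly mild outcomes, but a heavy right tail of rare,
# severe outcomes. Calibrated so its expected value matches A's:
# 0.99 * 90.9 + 0.01 * 1000 = 99.99 (approximately 100).
b = np.where(rng.random(n) < 0.99,
             rng.normal(loc=90.9, scale=5, size=n),   # 99% mild outcomes
             rng.normal(loc=1000, scale=50, size=n))  # 1% severe outcomes

print(round(a.mean()), round(b.mean()))    # both centres of gravity ~100
print((a > 500).mean(), (b > 500).mean())  # tail risk: ~0 versus ~0.01
```

An expected-value risk metric reports the same number for both situations, yet only the second has an appreciable chance of a severe outcome, which is precisely the information a decision maker would need.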
Expressing risk as an expected value presupposes that the specified consequences and probabilities are assigned numerical values. However, in the approach used by the FHI, these dimensions are assessed qualitatively, rendering this line of interpretation unsuitable and inconsistent with how risk is actually characterized in the reports. In practice, in the initial assessments, risk is described as a combination of likelihood and consequence (corresponding to the category iii) interpretation). A similar perspective is found in the risk assessment guidance document produced by the EMG, in which risk is defined as the combination of likelihood and severity of harm (SAGE 2020).
In later reports published by the FHI, the notion of risk is further developed to incorporate two additional factors: confidence and scope. How the dimensions are weighted and combined is not specified; thus, it is not clear how each of the components contributes to the overall judgment of risk.
Notably, the most recent risk reports by the FHI display a significant shift in how risk is understood and conveyed. Firstly, the concepts of likelihood and confidence are excluded from the reports, and explicit judgments of these dimensions no longer constitute part of the risk description. Furthermore, although the FHI now abstains from clearly stating the underlying definition of risk, two separate notions of risk can be identified when scrutinizing the reports: throughout, the term 'risk' is generally used analogously with likelihood, suggesting a category i) line of interpretation. However, the overall risk judgment is presented by reflecting on potential consequences, which indicates a notion of risk corresponding with a type iv) interpretation.
Common to the interpretations above is that they all express different ways of measuring or describing risk. However, the overall notion of risk as a concept, and what it actually represents, is left unaddressed. The distinction between how risk is conceptualized and how it is measured is essential: it opens up the way for the acknowledgement that there could be aspects of risk and uncertainty that the chosen measure of risk is not able to capture. Some of the issues related to the use of expected values to express risk have already been pointed out. There are, however, challenges and limitations associated with using any risk metric or description based on probability. Some of the most pressing issues relate to how aspects of uncertainty and knowledge are addressed, but, before we can explain and discuss this point any further, we need to clarify what probability means.

The meaning of a probability in a risk assessment context
Probability is a key concept when it comes to assessing and conveying risk, and judgments of probability and likelihood constitute an integral part of several of the approaches outlined in Section 3. The risk assessments produced in relation to the pandemic serve as essential sources of information when it comes to decision-making and policy development, but the output of these assessments cannot be properly understood unless they are accompanied by a clear interpretation of what the probability statements represent.
Yet, little attention is devoted to clarifying the underlying interpretation of the concept. The consensus statements by the SPI-M, for example, assert that all probability statements are in line with the PHIA framework of language for discussing probabilities. This tool, however, offers guidance on how to articulate probability judgments; it does not provide an interpretation of what the concept of probability reflects per se.
A characteristic feature of Anthony Fauci's statements to the public is his proclivity for 'using direct language and breaking complex topics into understandable components' (Guo and Cannella 2021, 1423). For instance, very clear interpretations are provided for key concepts such as 'herd immunity', 'vaccine efficacy' and 'breakthrough infections'. However, the likelihood/probability concept is not delineated with a similar level of effort, although Fauci refers to the concept extensively in his statements.
Furthermore, those who do attempt to assign a meaning to probability fail to provide definitions with the required precision. The FHI, for example, states that probability can be understood as the probability of an event occurring, but this definition merely points to the fact that the probability judgment is linked to some event or outcome. What the probability actually expresses is left unaddressed.
The frequent use of the term 'risk' as a synonym for likelihood/probability adds to this confusion. Although the approaches lack clear and explicit definitions of the probability concept, a closer look at how terms such as 'likelihood' and 'risk' (in the sense of probability) are being used and referred to in the documents and statements suggests different lines of interpretation.
In some of the statements, the concept of likelihood/probability is used to reflect the fraction of the population experiencing some specific consequence (e.g. infection, severe illness, hospitalization, death, etc.). As an example, consider the statement (3.1) made by Anthony Fauci at the beginning of March 2020. According to the notion of probability used, often referred to as a frequentist perspective, the concept describes stochastic variation in large populations of similar units. Other examples of this interpretation of likelihood/probability can be found in the risk assessment reports published by the FHI. In these documents, the terms 'risk' and 'likelihood' are used interchangeably, both pointing to a frequentist type of probability. There is, however, no clear distinction between frequencies expressing historical data and those that represent estimates of future quantities. Consider, for example, the following two judgments of likelihood:

(a) In a report published on January 26, 2022, a judgment of the likelihood of reinfection for Omicron vs Delta is made by comparing the observed number of reinfections in two specific time intervals in which the respective variants were dominant. Based on the observed data, it is concluded that the risk (in the sense of likelihood) of reinfection is 16 times higher for the Omicron variant (FHI 2022b).

(b) In the same report, the FHI refers to results from data modelling by the infectious disease agency in Denmark (SSI), where a scenario is produced to describe the future trajectory of the pandemic. The generated scenario is based on several assumptions, including a judgment on the hospitalization risk (understood as likelihood of hospitalization), which is assessed to be 50% lower with Omicron compared to the Delta variant (FHI 2022b).
In the former statement, the likelihood judgment simply reflects the fraction of historical observations in which the considered event (an individual is reinfected) occurs. This is also the case for the likelihood judgment by Fauci, referred to above. Using this value to express the future frequency of events means that a number of assumptions are made: most importantly, that the historical data can be extrapolated to provide accurate descriptions of the future.
The latter likelihood, however, represents a judgment about the variation of some phenomena in relation to the future. More specifically, it expresses an estimate of the fraction of the population who will need to be hospitalized, given an infection. In this case, the judgment includes, but is not limited to, historical data. It could also incorporate other sources of information, including expert knowledge and models.
A frequentist probability is a model concept, generated by a mind-constructed experiment in which the considered situation is repeated (hypothetically) an infinite number of times under similar conditions. In general, the 'true' value of the probability is unknown. It needs to be estimated, and the purpose of the risk assessment is to produce an estimate of this true value. The distinction between the underlying, theoretical frequency and the judgments that provide estimates of this frequency is not made explicit in the statements referring to this type of probability.
The fundamental prerequisite for adopting a frequentist perspective on probability is that the model can be justified, i.e. it is possible to define an (infinitely) large population of similar units. Determining whether these conditions are met is, however, not straightforward (Aven and Reniers 2013). Consider, for example, the probability of hospitalization referred to in statement (b). If our reference is the probability for the population as a whole, a frequentist approach can be considered suitable; it is possible to construct a large population of people that satisfies the similarity criterion. However, if we are concerned with the probability of hospitalization for a particular person, the model lacks a proper justification: it is difficult to define a population that is sufficiently 'similar' to the specific individual without reducing the sample of persons to a level at which it no longer fulfils the criterion of population size. Furthermore, frequentist probabilities require a certain degree of stability in the phenomena studied. There are, however, several aspects of the current pandemic that violate this requirement, in particular the emergence of new variants.
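These two problems, estimation error from finite data and instability of the underlying phenomenon, can be sketched in a few lines of simulation (all numbers are hypothetical, chosen only to illustrate the distinction between a 'true' frequentist probability and its estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'true' frequentist probability of hospitalization.
p_true = 0.05

# An assessor only observes a finite sample; the estimate deviates from
# p_true, and with little data the deviation can be substantial.
for n in (50, 500, 50_000):
    estimate = rng.binomial(n, p_true) / n
    print(n, estimate)

# Stability matters too: if a new variant shifts the underlying probability,
# an estimate fitted on historical data carries a systematic error that
# accumulating more of the old data cannot remove.
p_new_variant = 0.02        # hypothetical post-variant value
historical_estimate = 0.05  # fitted on pre-variant observations
print(historical_estimate - p_new_variant)  # bias of the old estimate
```

The first loop illustrates the deviation between estimate and true value that the reviewed statements leave implicit; the final lines illustrate why a frequentist estimate can be systematically wrong when the phenomenon itself changes.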
In the approaches referred to in Section 3, we find several judgments of likelihood that concern events for which the idea of infinite repetitions does not apply. Consider, for example, the following statements:

(c) Among several so-called 'risk questions' referred to in a report published in December 2020, the FHI addresses the potential event that the South African and English variants of the virus will spread to Norway. The likelihood of such an event is judged to be 'high' (FHI 2020c).

(d) In a report published in December 2021, the PHAS describes the potential trajectory of the pandemic in Sweden for the coming three months by providing a selection of three potential scenarios (referred to as 0, 1 and 2), of which scenario 2 is referred to as the 'least likely' development (Folkhälsomyndigheten 2021c).

(e) In a consensus statement from January 26, 2022, the SPI-M-O assesses the potential occurrence of 'large future waves of infection that need active management to prevent detrimental pressure on the health and care sector' as a 'realistic possibility', corresponding to a probability of 40%-50% according to the PHIA Probability Yardstick (SPI-M-O 2022a, 2).

(f) SAGE refers to a similar scenario in their situation update from January 28, 2022, stating that 'The long-term pattern of the epidemic in the UK is highly uncertain, but future waves of infection should be expected (high confidence)'. Furthermore, they state that 'It is not clear how long it will take for a stable global pattern to emerge, but the situation is likely to fluctuate for several years (medium confidence)' (SAGE 2022, para. 5).

In the statements referred to above, we cannot meaningfully define a population of similar situations; the events addressed are unique. In these cases, the concept of probability is used to reflect the assessors' degree of belief in an event occurring, often referred to as a subjective (knowledge-based) perspective.

Reflecting uncertainty and knowledge
Making judgments about the future development of the coronavirus and its impact on global society is not an easy task. The pandemic trajectory is influenced by a number of different factors, many of which we have limited data and knowledge about; how will future mutations of the virus behave? What is the long-term efficacy of current vaccines, and will they provide protection against the new variants? How will the public comply with the recommended protective behaviours? The situation is characterized by large uncertainties, and a key task of the risk assessments is to address these uncertainties and convey them to decision makers and the public in a way that provides a sound basis for choosing policies and measures to confront the pandemic.
As illustrated in Section 4.1, the approaches and methods used to describe the risks related to COVID-19 are based on four main notions of risk (referred to as i)-iv)). Notably, uncertainty is not explicitly included as an aspect of risk in any of the interpretations. Nevertheless, which perspective is chosen affects how uncertainty is reflected, as the different ways of representing risk vary in the extent to which they allow for considerations on uncertainty to constitute an integral part of the risk characterizations.
Let us first take a closer look at how uncertainty is addressed in the approaches where risk is expressed as predictions of potential outcomes (category iv)). In some cases, the outcomes refer to a quantity, such as the number of hospitalizations or deaths in a population in the coming months (e.g. CDC 2020a; SPI-M-O 2022b). Other statements point to qualitative descriptions of potential events or scenarios (e.g. CDC 2020c, 2021; Folkhälsomyndigheten 2021c; White House 2021b). In either case, the predicted consequences are model dependent; how the future will play out is unknown, and there are uncertainties related to the deviation between the predicted outcomes and the actual consequences that will occur. The lack of relevant data and knowledge contributes to this uncertainty, as predictions about the future development of the pandemic need to be based on a number of assumptions, the validity of which may be difficult to evaluate. When representing risk by exclusively referring to predicted outcomes, however, the assessors are not required to reflect on these aspects. The absence of a systematic approach for handling uncertainty has left current practices with a large degree of arbitrariness in how uncertainties are addressed in the assessments.
In some cases, risk is described by referring only to the severity of the predicted consequences, omitting any judgment on the associated uncertainties. Consider, for example, the risk assessment framework used by the UK Health Security Agency (PHE 2021), as well as the statement made by the White House COVID-19 Response Team: 'For the unvaccinated, you're looking at a winter of severe illness and death for yourselves, your families, and the hospitals you may soon overwhelm' (White House 2021b, para. 8). In the assessments in which risk is presented as a list of specified scenarios, the scenarios are either presented without any measure of uncertainty attached (CDC 2020c; Folkhälsomyndigheten 2021c) or alternative representations of uncertainty are assigned, such as plausibility (CDC 2021). What is meant by 'plausible' is, however, not explained. Moreover, in the approaches that rely on statistical analysis to generate predictions of potential outcomes, the produced estimates are often accompanied by various types of intervals, including 'prediction intervals' (CDC 2020b), 'confidence intervals' (SPI-M-O 2021c) and 'credibility intervals' (SPI-M-O 2021a, 2022b). What these intervals actually express is, however, not straightforward. Yet again, the assessments do not provide any clear interpretations.
The severity of potential outcomes related to the pandemic is clearly of interest to the public and decision makers. Relying on this information alone to describe risk is, however, problematic. The foundation required to make risk-informed decisions extends beyond the predicted consequences; stakeholders also need to have a clear understanding of the uncertainties involved.
For instance, what assumptions have been made when deciding on which scenarios or events to focus on? Are there aspects of these consequences that are unknown, and which of the outcomes should we be most concerned about? These aspects are integral to the overall judgment on risk, and, thus, risk cannot be properly reflected without incorporating some sort of assessment of the associated uncertainties. Moreover, evading any mention of uncertainty when making statements about potential outcomes could convey an illusion of certainty. However, what will happen in the future cannot be known with certainty, and conveying such an idea is misleading at best, creating a false premise for policy development and behavioural response.
Referring to the above examples of uncertainty characterization, it is difficult to see the conceptual platform for the measures used. A critical prerequisite for enabling stakeholders to interpret the output of an uncertainty assessment is to ensure that the terms used are clearly explained. However, as noted by van der Helm (2006), 'There is no added value in the claim that some future is plausible (or probable, or possible), if there is no meaning given to plausibility (or probability, or possibility) itself' (van der Helm 2006, 26). Similar issues arise when referring to intervals to express uncertainty in relation to statistical modelling, as there is a lack of clarification with respect to the underlying meaning of these intervals. Do they represent stochastic variation, or do they reflect the assessor's uncertainty about the accuracy of the estimates? The intervals provide valuable insights concerning the limitations of the analysis, but this information is difficult to capture if decision makers and the public do not have a clear understanding of what the intervals are actually expressing. Yet, to the best of our knowledge, no attempt has been made to present a conceptual framework for these terms.
In the approaches where risk is represented according to the i)-iii) perspectives, probability serves as the main tool for expressing uncertainty. However, making probability a focal concept for the understanding and assessment of risk requires a clear delineation of the probability concept and what it represents in a risk assessment context. Unfortunately, the current approaches fail to provide such a clarification, leaving essential links between probability and uncertainty undisclosed.
As outlined in Section 4.2, two types of probabilities are referred to in the approaches and statements concerning the pandemic risks: frequentist probabilities (reflecting phenomena variation) and subjective probabilities (used to express degrees of belief ). Distinguishing between the two perspectives is essential in order to understand how uncertainty comes into play. If we are referring to frequentist probabilities (as in statements (a) and (b) above), there is uncertainty about the deviation between the estimated frequentist probability and the underlying, true probability. When there are considerable uncertainties about the phenomena, as in the current context, the accuracy of the estimate could be poor. However, when no clear distinction is made between the true probability and the frequentist probability representing an estimate of this probability, the uncertainty related to this deviation is not reflected.
A subjective probability, on the other hand, reflects the assessor's uncertainty about the occurrence of some event or scenario. When it comes to this type of probability, there is no uncertainty about the assigned probability itself, as there is no underlying, true value to compare with. However, an essential issue concerning the use of subjective probabilities is their contingency on the background knowledge of the assessor. This knowledge could be more or less strong, and even erroneous, particularly in cases where there is little data and evidence to rely on. Lack of knowledge constitutes a major contribution to the uncertainties; thus, any subjective judgment of probability should be supplemented with reflections on the supporting knowledge base.
How to incorporate this aspect is, however, not trivial. In some of the approaches, the supporting knowledge is addressed by introducing a measure of confidence. An example is the situation updates published by SAGE, in which judgments on likelihood are accompanied by a confidence grade of 'low', 'medium' or 'high' (see e.g. likelihood statement (f) above). The measure of confidence implicitly points to some sort of judgment on the foundation of knowledge supporting the assessment. However, the knowledge base could consist of a number of different components, including data, assumptions, expert judgments, theories and models; which of these factors are given weight, and how are they evaluated? The underlying criteria for determining the level of confidence are not specified, leaving the measure with limited informative value.
A similar lack of clarification applies to the approach used for assigning confidence in the risk assessment reports by the FHI. In these reports, the measure of confidence is defined as a description of the assessors' confidence in the knowledge base supporting the assessment. Yet, how the measure is derived is not specified. Furthermore, little effort is directed towards establishing a clear idea of how the level of confidence affects the overall judgment of risk. Strikingly, the only reference to the relationship between confidence and risk is the assertion that no conclusion on risk can be made when the confidence in the knowledge base is weak. The degree to which this line of reasoning can be justified depends on the adopted perspective on risk. If the risk characterization is based on frequentist probabilities, there is a rationale supporting such a claim, as the inability to conclude on risk can be understood as a situation in which the available knowledge and data are not sufficient to produce meaningful estimates of the frequentist probabilities. However, several of the events referred to in the reports by the FHI are unique types of events, and a frequentist interpretation of risk cannot be justified: the probabilities specified are subjective judgments of likelihood. In this case, the knowledge base that these probabilities are conditioned on should constitute an integral aspect of the risk characterization. Consider, for example, a case in which two events are assigned the same likelihood. One is supported by a strong foundation of knowledge (high confidence), and the other is subject to a weaker background knowledge (low confidence). Judgments about the magnitude of the risk should reflect this and not be the same for the two events, as the lack of knowledge implies a higher level of uncertainty in the case of the latter. Under such circumstances, a weak knowledge support should not trigger a reluctance to make risk judgments.
Rather, the condition of incomplete knowledge underlines the importance of taking the limitations of the knowledge base into consideration in the overall assessment of risk.
As illustrated by the discussion above, the large uncertainties concerning the future trajectory of the pandemic are fuelled by a fundamental lack of knowledge about the virus, its properties and the effects of mitigating measures. Conveying what is known and what is unknown to the public and decision makers is, however, challenging, and current practice is characterized by a lack of clarity on the basic meaning of knowledge and its role in scientific discourse. Among the main issues is the idea that scientific knowledge represents some kind of objective truth. Notably, several of the statements made in relation to the potential development of COVID-19 are justified by referring to terms like 'the science', 'facts', 'truth' and 'evidence'. For example, in an interview with the Learning Curve Podcast, Fauci stated that 'science is truth, and if you go by the evidence and by the data, you're speaking the truth', emphasizing the need to 'make sure that we make, consistently, the public health recommendation based on the truth and the evidence as we have it' (Learning Curve Podcast 2020). The depiction of science as an expression of the truth can be accommodated by adopting the traditional perspective on knowledge as 'justified true beliefs'. The suitability and rationality of this notion of knowledge can, however, be questioned for risk issues and problems; contemporary risk science refers to knowledge as 'justified beliefs' rather than 'justified true beliefs' (SRA 2015). 'A statement can be supported by strong theoretical arguments, a lot of relevant data and information, considerable experience, testing, but no one is in a position to label the beliefs as true' (Aven 2018, 878).
Assuming that scientific knowledge represents the truth could hamper any attempt to scrutinize the knowledge base, as reflections concerning the potential of having erroneous knowledge will trigger an inconsistency: if knowledge, by definition, is understood as true beliefs, how do we open the door to the acknowledgement that these beliefs could be false? Moreover, knowledge is not static, and, in the case of the ongoing pandemic, there is a continuous emergence of new data and knowledge that could potentially challenge previously held beliefs and trigger a change in current policies and mitigation strategies. However, implying that science represents the truth makes this dynamic nature of knowledge difficult to convey, as the idea of an evolving truth is counterintuitive.

Suggestions for improvement
To support decision-making and policy development in relation to the ongoing pandemic, governments and health authorities have sought advice and guidance from the scientific community. As a result, considerable resources have been mobilized, particularly directed towards providing assessments of the risks and uncertainties associated with the future development of the pandemic. However, there is room to improve the way risk is described and conveyed in these assessments, and the above review and discussion has pointed to some areas in which current practices fall short. The purpose of the present section is to show how current approaches to conceptualizing and describing risk can be improved by drawing on ideas and knowledge from the risk science literature. This body of literature provides clear guidance on how best to assess and characterize risk, particularly focusing on the importance of giving due attention to the aspects of uncertainty and knowledge. Contemporary risk science highlights some fundamental principles that underpin a prudent assessment of risk, including the following (SRA 2015, 2018):

• Key concepts are given clear interpretations
• The limitations of the tools and measures used to describe risk are recognized
• The knowledge base supporting the assessment is thoroughly evaluated

Using the issues outlined in the review and discussion as a basis, the following section provides some reflections on how current practices can be improved to better incorporate these principles.
The risk assessments related to COVID-19 refer to probability/likelihood as the primary measure for expressing uncertainty. However, current approaches suffer from a lack of rigorous definitions and, as a result, essential features of the concept are not reflected. When referring to frequentist probabilities, for example, theoretical frequentist probabilities, estimates of these probabilities and frequencies expressing historical data are used interchangeably. This practice is unfortunate, as the three values represent completely different ideas. The theoretical frequentist probability is the underlying concept and is, in general, an unknown quantity. The probability estimates aim to capture the true value of this quantity and could be more or less successful in this endeavour. Historical relative frequencies can be used as input to generate such estimates. However, in contexts such as the present, where the studied phenomenon is changing rapidly, relying on the historical frequencies alone would eliminate important considerations of the potential gap between what has happened in the past and what could happen in the future. An understanding of this gap requires clear distinctions to be drawn between the underlying concept that we are trying to measure (the frequentist probability), the measurement (the probability estimate) and the historical data; the latter can serve as input to the probability estimate, but the data could be more or less representative of the future (Aven 2013).
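The distinction between the underlying frequentist probability, its estimate and the historical relative frequency can be illustrated with a minimal simulation. All numbers and names below are hypothetical, chosen for illustration only; the point is that an estimate computed from historical data may track the past regime well while saying little about a changed present:

```python
import random

random.seed(1)

# Hypothetical, for illustration: the "true" (unknown) frequentist probability
# of an infection-related event differs between periods, because the
# phenomenon itself is evolving.
p_true_past = 0.02   # underlying concept during the historical period
p_true_now = 0.06    # underlying concept today, after the phenomenon changed

# Historical data: outcomes observed under the past regime.
n = 10_000
historical = [random.random() < p_true_past for _ in range(n)]

# The estimate: a relative frequency computed from the historical data. It can
# be a good estimate of p_true_past, yet a poor one of p_true_now.
p_hat = sum(historical) / n

print(f"estimate from history: {p_hat:.3f}")
print(f"error vs. past truth:  {abs(p_hat - p_true_past):.3f}")
print(f"error vs. present:     {abs(p_hat - p_true_now):.3f}")
```

The three quantities never collapse into one: `p_true_past` and `p_true_now` are unknown concepts, `p_hat` is a measurement, and `historical` is data that may or may not be representative of the future.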
Furthermore, there is a need to recognize probability for what it is: a tool. There are limitations to its use, and there could be important aspects of uncertainty that cannot be captured by the probabilities alone, as discussed above. Similarly, potential outcomes could be described by generating a set of pandemic scenarios, but these scenarios may represent more or less accurate depictions of the actual development of the pandemic. Addressing these issues and conveying them to decision makers as part of the risk assessments is an important prerequisite for evaluating the output of the risk assessment in light of its limitations and weaknesses, what is often referred to in the risk analysis literature as broad risk evaluation or managerial review and judgment (see e.g. Hertz and Thomas 1983; Aven 2013). However, in current interpretations of risk (referred to as i)–iv) above), the concept is affiliated with a particular measurement tool, and, thus, any reflections that go beyond the scope of the chosen metric are difficult to incorporate. To open the door to the acknowledgement that the chosen measure could be more or less capable of capturing all relevant aspects of risk, we need a framework that allows us to distinguish between the concept of risk per se and how it is measured or described.
Using frequentist probabilities as a basis, such a distinction may to some extent be accommodated; by differentiating between the theoretical frequentist probability and the probability that expresses an estimate of this value, the former can be seen to represent the concept of risk and the latter its measurement/description. Adopting this perspective in the current setting is, however, problematic; when risk is defined with reference to frequentist probabilities, the applicability of the concept is restricted to contexts where such probabilities and models can be meaningfully defined. Considering the uncertainty and instability associated with the phenomena driving the pandemic, the extent to which such models can be justified can be discussed, and some of the most critical issues related to the use of frequentist probabilities to express risk in the present context have been thoroughly addressed in previous sections.
An alternative is to define risk using subjective probabilities, as this type of probability can always be assigned. However, the limitations and weaknesses associated with probability as a measurement tool are also relevant for subjective probabilities (Flage et al. 2014). Moreover, we are confronted with the same issue as stated above: when the interpretation of risk is based on subjective probabilities, it is not possible to draw a line between the risk concept and its measurement; with no underlying, true probability as a reference, the two will coincide. There is, however, a way to resolve this issue: by replacing the probability component with uncertainty in the definition of risk, we obtain a way of conceptualizing the idea that there could be aspects of uncertainty that the assigned probability does not reflect. This leads us to contemporary perspectives on risk, in which the concept is seen as the combination of consequences (with respect to something of human value) and the associated uncertainties (SRA 2015, 2018). According to this perspective, 'probability enters the scene when we would like to describe or measure the uncertainties and thus risk' (Aven 2012, 40). Subjective probabilities, precise and imprecise, have been put forward as a suitable measure for uncertainty in the present context, and several of the approaches from Section 3 refer to this type of probability perspective. However, important aspects of the concept are not attended to in current applications.
Firstly, a clear interpretation of the concept is missing. Literature on the subject contains different ideas of what a subjective probability expresses, some of which have been subject to strong criticism (see e.g. Aven and Reniers 2013). The recommended line of interpretation is to use an urn reference (Lindley 2006): If the assessor assigns a subjective probability/likelihood of 0.1 (say) to an event or outcome, it means that he or she has the same degree of belief in the event/outcome occurring as that in drawing a particular ball out of an urn containing 10 balls in total.
Furthermore, as discussed in previous sections, a subjective probability is contingent on the background knowledge of the assessor, and this knowledge could be more or less strong. The importance of addressing the knowledge (or lack of knowledge) supporting the analysis has been given considerable attention in recent risk science literature (see e.g. Flage et al. 2014; Aven 2019). This work has given rise to the recommendation that evaluations of the strength of the background knowledge should constitute an integral part of the risk assessment. Different approaches can be used for this purpose, including the so-called NUSAP system (Funtowicz and Ravetz 1993; Kloprogge, van der Sluijs, and Petersen 2005) and the evaluation scheme introduced by Flage and Aven (2009); see Aven and Flage (2018). Important assessment criteria for evaluating the knowledge base cover reflections on the validity of the assumptions made, the degree of phenomena understanding, expert consensus, the extent to which the data and models used in the analysis are relevant and reliable, and the degree to which the knowledge base has been scrutinized, for example with respect to potential surprises.
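As an illustration only, a crude strength-of-knowledge classification along criteria of this kind might be sketched as follows. The criterion names, the three-level scale and the scoring rule are our own simplifications for the sake of the example, not the scheme as specified by Flage and Aven or the NUSAP system:

```python
# Hypothetical sketch of a strength-of-knowledge (SoK) rating, loosely inspired
# by the criteria discussed above (assumption validity, phenomena understanding,
# expert consensus, data/model relevance, degree of scrutiny). The scale and
# rule below are illustrative assumptions, not a published scheme.

CRITERIA = (
    "assumptions_justified",
    "phenomena_understood",
    "expert_consensus",
    "data_models_relevant",
    "knowledge_scrutinized",
)

def strength_of_knowledge(ratings: dict) -> str:
    """Classify SoK as 'weak', 'medium' or 'strong' from per-criterion
    ratings on a 0 (not satisfied) to 2 (fully satisfied) scale."""
    scores = [ratings[c] for c in CRITERIA]
    if all(s == 2 for s in scores):
        return "strong"
    if any(s == 0 for s in scores):
        return "weak"
    return "medium"

# Early-pandemic situation as described in the text: little data, a poorly
# understood virus, many unverified model assumptions -> weak SoK.
early_pandemic = {
    "assumptions_justified": 0,
    "phenomena_understood": 0,
    "expert_consensus": 1,
    "data_models_relevant": 0,
    "knowledge_scrutinized": 1,
}
print(strength_of_knowledge(early_pandemic))  # weak
```

The value of such a scheme lies less in the scoring rule itself than in forcing the assessors to state, criterion by criterion, what the probability assignments rest on.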
To illustrate how the recommendations above can be implemented in practice, let us consider a concrete example of how a risk assessment could have been carried out in the early phase of the pandemic. In Table 1, we outline the main steps of the recommended approach and provide some comments on how these adjustments can contribute to improving the understanding and management of risks related to COVID-19.

Conclusions
COVID-19 has triggered major efforts from governments and health authorities to develop strategies and policies to mitigate and control the impact of the pandemic. The policies and strategies build on output from different types of risk assessments, and how risk is described and conveyed in these assessments has a strong influence on how the pandemic risk is understood and managed. In the present paper, we review the approaches for assessing the risks related to COVID-19 in four countries (Norway, Sweden, the US and the UK) and show that current practices have significant room for improvement. The main issues relate to i) the lack of clear interpretations of key concepts, ii) the lack of focus on the limitations and weaknesses of the tools and measures used to describe risk and iii) the lack of thorough reflections on the uncertainty and knowledge dimensions of risk. Some recommendations on how these issues can be rectified are provided by drawing on methods and approaches from the contemporary risk science literature specifically targeting the uncertainty and knowledge dimensions of risk. A focal point of the recommended approach is the inclusion of reflections on uncertainty that go beyond the assigned likelihoods/probabilities, together with evaluations of the strength of knowledge supporting the assessment.
A proper handling and communication of risk requires a strong scientific platform supporting the analyses and assessments. In the context of the coronavirus disease, the management of risk has been a particular challenge; the large uncertainties and lack of knowledge characterizing the early stages of the pandemic created poor conditions for making decisions and developing policies.
In such cases, science cannot provide authoritative answers and recommendations. Still, we can rely on risk science to provide clear guidance on how best to assess, describe and convey the risks related to the pandemic. The present paper has pointed to some fundamental approaches and principles from the risk science literature in which such guidance can be found. Although it is certainly easier to identify weaknesses and errors in the risk assessments and handling of COVID-19 with the benefit of hindsight, it is the belief of the present authors that, had these approaches and principles been adopted, important aspects of the assessment and communication of the risk in relation to COVID-19 would have been significantly better. This applies to both lay people's and health professionals' understanding of risk, and particularly the communication between different actors such as experts and non-experts. If the health experts lack a risk science foundation, how can they, for example, convincingly explain how large a risk is or compare the magnitude of different risks?

Table 1. Main steps of the recommended risk assessment approach, with comments on how each step contributes to the understanding and management of risk.

Step 1: Specify events and consequences
The analysts specify a list of events and consequences that could occur. Examples of such events could be the occurrence or not of community transmission of the coronavirus in a particular country X, denoted by A1 and A2, respectively. To describe the outcomes for these events, different intervals could be specified, e.g. for the number of fatalities.
Comment: The description of the consequences constitutes the basic frame conditions for the risk assessment, by establishing a clear idea of 'what is the risk of what to whom'.

Step 2: Assign probabilities
The events and outcomes are assigned subjective probabilities (precise or imprecise). Where qualitative probability scales are used, a link to specified intervals is provided. For example, event A1 is judged to be unlikely, where 'unlikely' refers to a subjective probability of < 0.1 (say).
Comment: The assigned probabilities provide essential input to the understanding and characterization of risk.

Step 3: Evaluate the strength of knowledge (SoK)
The SoK supporting the consequence analysis and probability assignments is assessed. For the current example, there are limited amounts of data to rely on, a lack of knowledge on the phenomenon (the virus), and the models used are based on a large number of assumptions, resulting in an overall weak SoK judgment.
Comment: The SoK judgments provide valuable information concerning the limitations of the analysis and how much weight should be given to the specified consequences and assigned probabilities.

Step 4: Assess uncertainty factors not captured by the probabilities
The knowledge base is scrutinized, key assumptions are challenged, and specific attention is directed towards events and consequences assigned negligible probabilities. To allow for different perspectives, this assessment should be performed by a group of analysts other than those who conducted the risk assessment.
Comment: Reflecting on aspects of uncertainty that go beyond the assigned probabilities is essential in order to identify and address potential knowledge gaps that could give rise to unforeseen and surprising events and outcomes.

Step 5: Present the risk description to decision makers and/or other stakeholders
The list of events and consequences having the highest risk score, based on the assigned probabilities and strength-of-knowledge judgments, is presented. The risk description is accompanied by clear interpretations of key concepts. In addition, the output of the uncertainty assessment conducted in step 4 is included.
Comment: This step provides decision-makers and stakeholders with a clear understanding of the risk, including the risk assessment's limitations and boundaries. The risk description creates a foundation for conducting broad risk evaluations and is essential in order to create a solid basis for developing risk-informed policies and strategies.
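The coupling of qualitative likelihood labels to explicit probability intervals, and the pairing of each event with a strength-of-knowledge judgment (steps 2–4 of the recommended approach), can be sketched in code. All labels, intervals and the scrutiny heuristic below are illustrative assumptions made for this example, not part of the approaches reviewed:

```python
# Illustrative sketch: link qualitative likelihood labels to explicit
# subjective-probability intervals and pair each event with a
# strength-of-knowledge (SoK) judgment. Labels, intervals and the
# flagging rule are hypothetical choices for this example.

SCALE = {
    "unlikely":    (0.0, 0.1),   # 'unlikely' = subjective probability < 0.1
    "possible":    (0.1, 0.5),
    "likely":      (0.5, 0.9),
    "very likely": (0.9, 1.0),
}

def describe(event: str, label: str, sok: str) -> str:
    """Risk description line for one event: label, its interval, and SoK."""
    lo, hi = SCALE[label]
    return (f"{event}: judged '{label}' "
            f"(subjective probability in [{lo}, {hi}]), SoK: {sok}")

def flag_for_scrutiny(label: str, sok: str) -> bool:
    """Assumed step-4 heuristic: low-probability events with weak SoK get
    extra scrutiny, since surprises may hide behind negligible probabilities."""
    return sok == "weak" and SCALE[label][1] <= 0.1

# Events A1/A2 from the example: community transmission occurring or not.
print(describe("A1 (community transmission in country X)", "unlikely", "weak"))
print(describe("A2 (no community transmission)", "very likely", "weak"))
print(flag_for_scrutiny("unlikely", "weak"))  # A1 warrants a second look
```

Making the label-to-interval mapping explicit is what allows the likelihood judgments to be communicated unambiguously; the SoK tag then tells the reader how much weight those numbers can bear.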