
1.1 Realism, Illusions, or Even Delusions?

Conceiving of mental health and good adjustment in terms of individuals’ accurate perception of the world around them and of themselves seems virtually self-evident. In other words, psychiatry’s interest includes analyzing whether we perceive ourselves and the outside world as they are or whether we distort, or falsify, both of these images. Psychiatry identifies loss of contact with reality, delusions, and patients’ delusional beliefs that they are someone other than who they really are as axial symptoms of mental illness (e.g., Meisner et al., 2021; Zandersen & Parnas, 2019). Psychology as well, until the late 1960s, was convinced that mental health and good adjustment were closely related to realism: accuracy in assessing one’s own mental qualities and potential for success in various areas of life, or a person’s mature, realistic attribution of responsibility for both the positive and negative events that befall them. Such a belief was characteristic of both the humanistic (e.g., Horney, 1937; Maslow, 1954) and the strictly cognitive (e.g., Festinger, 1954; Trope, 1975) strands of psychology. This is different, it was thought, in the case of people whose mental health is disturbed, because then the “images” (of themselves, of the world) are falsified. And this does not apply only to obvious clinical cases of psychosis but to much more subtle illusions, which are typical of almost all people (and therefore also of us, the authors of this book). For example, the theory of cognitive dissonance (Festinger, 1957) reveals that if, along with a paltry salary, we have worked on something for a very long time, or if we have been in a relationship with another person for years with no apparent achievements coming from it, then, when asked about the meaningfulness of our endeavors, we will respond in a manner that allows us to maintain a good opinion of ourselves. Instead of saying that we worked for years for a pittance on something terribly boring, we will say that it is not boring at all. Instead of admitting that we are stuck in a bad relationship, we will say that it has its pluses. And all this is only because we retroactively justify to ourselves that “since I have invested so much, it must have meaning.”

It is a common belief that depressed people look at the world through gloomy lenses, thus overestimating the likelihood of negative events and underestimating the chances that desirable states of affairs will occur. However, is this really the case? In 1979, a very interesting article by Lauren Alloy and Lyn Abramson was published, whose subtitle labels depressed people “sadder but wiser.” The research of these authors showed that depressed people display more realism in assessing themselves and their ability to influence the course of events than nondepressed people do. In other words, it is rather the well-adjusted who seem to view the world through rose-colored glasses, not the depressed who wear glasses that obscure reality. This result, although shocking from a certain perspective, was perfectly in line with the notion, gradually emerging in scientific psychology at the time, that mental health and good adjustment are not at all served by an accurate and fully realistic view of oneself and the world around us.

In the realm of social psychology, Anthony Greenwald (1980) introduced the term “totalitarian ego” into the literature, showing in a series of studies that mentally healthy people make (usually unconscious) distortions of their own memory. Our ego resembles the regime functionary from George Orwell’s famous book “1984” (1949/2021). This functionary constantly and consistently rewrites the content of years-old newspaper articles in the library so that they conform to the (current) party line (much like a censor, whose work, as an aside, we experienced for ourselves, since until 1989 we lived in a communist country where censors changed not only the content of the news but even ordinary songs). We behave similarly: we remember things so that they are consistent with our positive self-perception (e.g., we don’t remember that we showed ignorance in a conversation with another person, lied, or hurt other people) and so that they do not shatter our conviction of our own competence or morality (“I’m a good, truthful person”). Significantly, from the perspective discussed here, Greenwald shows that such “totalitarian” inclinations of our ego are functional: they not only serve our well-being but also provide a coherent and stable view of ourselves, helping us to make appropriate life decisions and stimulating our achievements.

Ellen Langer (1975) showed, in turn, the prevalence of the human tendency to manifest the illusion of control. People seem to believe that they have influence over purely random events. When they play dice and there is a lot of money on the table, they shake the dice in the cup longer and then toss them onto the table more carefully. When they take part in a traditional lottery, they are unlikely to reach for the ticket lying on the very top. They prefer to put their hand inside the box, stir the tickets around, and pull one out from underneath. When they have to bet on whether a tossed coin will come up heads or tails, they prefer to place their bet before the coin is tossed rather than when it is already lying on the table and they need only guess which side is facing up. Apparently, if the coin has not yet been tossed, we can concentrate and somehow influence the probability of heads showing with our stream of thought or energy. If the coin is already in place, we can only guess.

At the same time, studies have begun appearing in increasing numbers in the psychological literature showing that people are not necessarily oriented toward seeking credible diagnoses of themselves. On the contrary, they are very often biased toward seeking information that provides them with a rationale for thinking positively about themselves, and they block, distort, or avoid information that could disrupt this positive thinking (e.g., Korman, 1976; Suls, 1977; Goethals, 1986). Thus, one can clearly see in how many domains we deceive ourselves so as to simplify life. Like the aforementioned totalitarian censor from “1984,” we falsify reality because it is easier and more pleasant for us to live that way.

These and other numerous manifestations of mentally healthy people’s inadequate attitudes toward themselves and the external world formed the basis of Shelley Taylor and Jonathon Brown’s (1988) groundbreaking psychology article with the highly suggestive and unambiguous title “Illusion and Well-Being: A Social Psychological Perspective on Mental Health.” Reviewing a very large number of empirical studies, these authors conclude that so-called “normal,” i.e., well-functioning and effective, people manifest three types of falsification – positive illusions about themselves. First, they are characterized by an inadequate, inflated sense of self-worth (and thus, compared with the opinions of others who know them, they rate their intelligence or social competencies higher). One phenomenon of this type, the better-than-average effect, is analyzed more extensively in this monograph. Second, such people are characterized by the aforementioned illusion of control. They see their influence on desirable, positive events even though realistically they have no such influence, or they estimate this influence to be greater than it actually is. Third, they are characterized by so-called unrealistic optimism, which is discussed at length in the context of COVID-19 in subsequent chapters. Thus, when, for example, university students estimate the likelihood of desirable events in their lives (such as having a happy family in the future or pursuing a career that is both rewarding and well-paid), they judge these events to be more likely to happen to them than to most of their peers. However, when they think about potential negative events (such as having a heart attack before turning 40, contracting AIDS, or sliding into alcoholism), they believe these are less likely to happen to them than to the average student in their class. Summing up this important work, Taylor and Brown (1988) conclude that the falsified image is beneficial: it carries clear benefits for the falsifier – the totalitarian censor.

Later studies have enriched the now-familiar picture of the relationship between positive illusions and well-being (e.g., Boyd-Wilson et al., 2004; Collard et al., 2016) with results showing that in many situations this relationship may be more complex than psychologists initially thought. Dufner et al. (2019) present the results of a meta-analytic review involving as many as 299 studies with a total of 129,916 participants. They show that the tendency to maintain unrealistically positive self-views is positively related to personal adjustment (life satisfaction, positive affect, infrequent experience of negative affect and depressive states), but the associations of positive illusions with interpersonal adjustment (informant reports of domain-general social valuation, agency, and communion) were found to be weak. Brooking and Serratelli (2006), on the other hand, present data indicating that positive illusions about the self and the world correlate positively with subjective well-being, but their relationship with measures of personal growth is negative. In other words, in addition to gains, losses from a falsified image of self and the world were also indicated. A review of the literature by Young (2014) came to similar conclusions. These not entirely consistent and not entirely conclusive findings on the relationship between positive illusions and individuals’ functioning may be due to two facts, among others.

First, Roy Baumeister (1989) pointed out that positive illusions can optimize human functioning only when their magnitude is not too great; they become self-destructive when they grow too intense. Thus, while a moderately unrealistic perception of one’s own abilities, skills, and character traits (“I think I’m a bit smarter than others”) is characteristic of mentally healthy people, a grossly exaggerated favorable perception of oneself (“I’m brilliant and can easily handle any situation”) is usually associated with mental problems. Baumeister thus speaks of the optimal margin of illusion. If this threshold is exceeded in the direction of stronger falsification, the term “illusion” should no longer be used (e.g., “rose-colored glasses” would be an inappropriate term here). Much more apt in this situation would be the term “delusion” – a completely falsified image of oneself and/or the world.

Empirical evidence for the validity of this approach was provided by Asendorpf and Ostendorf (1998). They showed unequivocally that exaggeratedly favorable self-perceptions are usually associated with an individual’s experience of psychological problems. The negative consequences of exaggerated positive illusions in the context of psychotherapy are discussed, in turn, by Kinney (2000), who suggests that they set in motion various irrational mental processes that are an obstacle to patients’ mature and healthy perception of their problems. Concurrent with this approach are the results of clinical studies suggesting that in situations involving various negative events in a person’s life, overly positive self-perception is a significant factor in the development of depression (Kim & Chiu, 2011). Nonclinical studies show, in turn, that greatly heightened positive illusions can be an obstacle to students’ achievement of good grades (Ochse, 2012).

Second, Peter Gollwitzer (1996) reasonably argues that if positive illusions occur in the so-called implemental phase, i.e., while the action itself is being carried out, they promote the effective implementation of resolutions, immunize against difficulties, and motivate people to overcome them. Consequently, they increase the individual’s effectiveness. However, if such illusions occur in the deliberative phase, that is, when deciding whether to take action or choosing the degree of difficulty of the action, they can be destructive, because the actor chooses tasks that are too difficult, overestimating their level of competence and/or nourishing an unreasonable hope that over time the situation will arrange itself so favorably that they will be able to cope with the task. These assumptions, too, have been confirmed in empirical studies (e.g., Armor & Taylor, 2003; O’Creery et al., 2010). Interestingly, the vast majority of well-functioning, mentally healthy people suspend (or at least limit) the experience of positive illusions in the deliberative phase and only reveal them in the implemental phase (e.g., Gollwitzer, 2003; Gollwitzer & Kinney, 1989; Puca, 2001). Thus, it can be said that a person behaves rationally when it is beneficial and becomes somewhat irrational only when the very activation of positive illusions can help them. (Or, looking at the same problem from a different perspective, a typical person lives by illusions when it is beneficial but suspends them when following them becomes dangerous.) This perspective will accompany us in the following chapters of this book, where we will demonstrate how many of the illusions that we ourselves actively build can help us and, at the same time, how they can harm us.

1.2 Social Comparison and Egotism

The aforementioned esteemed social psychologist Leon Festinger formulated social comparison theory, the axial premise of which is that people strive to accurately assess their skills and capabilities and also want to be sure of the accuracy of the opinions they hold (Festinger, 1954). Festinger even refers to this motivation with the term “drive” (p. 117), usually reserved for aspirations of a biological nature, such as the need to satisfy hunger and thirst or to gratify sexual needs. He is, of course, far from suggesting that the drive to know oneself is a phenomenon of a biological, impulsive nature; rather, by using the term “drive” metaphorically, he wants to emphasize both the universality and the intensity of this need. According to Festinger, people primarily try to obtain information about themselves from objective sources. To check the accuracy of one’s views about the size of the population of India or the height of K2, it is easiest to turn to relevant, reliable printed sources or, nowadays, the Internet. Knowledge of one’s skills and abilities (e.g., one’s IQ test score) can be obtained by performing certain diagnostic tasks (e.g., completing a reliable intelligence test). This also applies to other capabilities. If a 40-year-old woman wants to find out whether she can swim a kilometer, she goes to the pool and gives it a try. Knowledge about the state of one’s physical condition (e.g., whether we have a fever) can likewise be obtained through unambiguous tests (using a thermometer). On the other hand, if we want to know what our cholesterol level is, we submit a blood sample to a laboratory and learn from the doctor either that everything is normal or that we should start treatment because we are not healthy.

However, in many other cases, it is not possible to evaluate oneself so directly. If the aforementioned woman would like to find out not only whether she can swim a kilometer (this she has already verified for herself and knows she can) but also whether she can do it quickly, there is little to be gained from the fact that she managed to do it in, say, 39 min. To answer the question “how fast do I swim?,” she must make interpersonal comparisons. Only by relating this result to, for example, the unofficial women’s world record (since no competitions are officially held at this distance) or the performance of her mother, her daughter, or her coworkers can she establish a basis for assessing herself. Thus, if information about oneself is not directly available from nonpersonal sources, one is left to obtain it by comparing one’s own opinions, capabilities, skills, and abilities with those of other people. It is precisely such adequate knowledge of oneself that Festinger assumed humans earnestly strive for, which is why he called this need a “drive.”

In Festinger’s conception, the question of whom the subject compares themselves with is not without relevance. Returning to our example, the diagnostic value of comparisons with a world record holder, a 70-year-old mother, and work colleagues is markedly different. According to Festinger, given a variety of people to potentially choose from in the process of social comparison, individuals choose those who are similar to themselves and/or are in a similar situation. For a 40-year-old woman, comparing herself either with her mother, who last swam at a Girl Scout camp 65 years ago, or with a world record holder is hardly diagnostic. She can get the most information about her swimming skills (or her physical form more broadly) by comparing herself with other 40-year-old women (preferably ones who, like her, work professionally, have suffered a heart attack, and have raised three children).

However, people are not always motivated to compare themselves with others who are similar. When it comes to assessing one’s own abilities and skills, we can also acquire useful information by comparing ourselves with people who are better than us, because by doing so we gain knowledge of how far we are from achieving a certain desired state. Someone learning a foreign language can compare themselves with a colleague proficient in that language, observing over successive months and years that the gap between them, while still present, steadily narrows.

That said, research on social comparison processes, inspired by Festinger’s (1954) ideas, has shown that people’s preferences for choosing others to benchmark themselves against are much more complex than the author of social comparison theory believed and that the desire to obtain diagnostic information about oneself is not the only motivation driving people to compare themselves with others.

Bibb Latané (1966), while not refuting the basic thesis of Festinger’s concept – that, in the absence of objective criteria, people are inclined to derive evaluative knowledge about themselves from comparisons they make between themselves and others – pointed out at the same time that the purpose behind this is not necessarily to obtain reliable and diagnostic information. He suggested that people often seek information that confirms their judgments about themselves rather than information that might significantly undermine that knowledge. This is because people are reluctant to change their views not only about the physical and social world around them but also about themselves. Both classic studies (e.g., Swann, 1983, 1987; Swann & Read, 1981) and more contemporary empirical studies (e.g., Hart et al., 2009; Gregg et al., 2017) have shown that this is indeed the case in many situations.

Building on the hypotheses of Latané, who illustrated the biased nature of comparisons, other researchers assumed that the core motivation in the process of social comparison is likewise biased, but primarily defensive and therefore egotistical (e.g., Jellison & Davis, 1973; Gruder, 1971; Goethals, 1986). The idea is that people primarily seek information that can raise and enhance their self-esteem (self-enhancement). This is particularly common in situations where the person making comparisons is experiencing stress or negative emotions or their self-esteem is threatened for various reasons (e.g., Hakmiller, 1966; Wills, 1981, 1987). In such circumstances, the individual’s need to compare themselves with others who are inferior – thus falsifying the objective picture – grows, because we cannot come off badly by comparing ourselves with objectively weaker, inferior people. The aforementioned woman will compare herself with the kindergarteners at a swimming school, while someone else will compare their IQ score with that of people who have little knowledge of the language in which the diagnostic test is conducted. Such comparisons can be passive (selecting for comparison people who are actually less competent or talented than oneself) or active (attributing to other people negative qualities that they do not actually have). This shows that the aforementioned censor employs a number of ploys that go beyond “1984”: he retroactively alters not only the records of facts but also the perception of human qualities. Numerous pieces of empirical evidence indicate that such comparisons with inferior others improve people’s well-being (e.g., Crocker & Gallo, 1985) and increase their self-confidence (e.g., Lemyre & Smith, 1985). This means that such falsifications are, in some manner, beneficial to us.

Thus, we see that comparisons with other people may not serve only the purpose of accurately recognizing one’s competence and capabilities, as Festinger assumed. On the contrary, in many cases, people may be concerned primarily with avoiding diagnostic information about themselves when comparisons may be unflattering, while at the same time acquiring information that allows them to assess their own competencies, abilities, and potential very positively. Psychology has accumulated ample evidence both of the prevalence of such motivations and of a wide variety of sometimes highly sophisticated behaviors that allow people to keep information that could threaten their self-esteem from reaching them, while remaining open to any information that reinforces and boosts self-confidence (e.g., Leary, 2007; Grosser et al., 2021; Jelic, 2022).

Summarizing this section and indicating what we will later discuss, let us note that comparing oneself with others does not necessarily mean confronting one’s own qualities, competencies, or abilities with those of some other, specific person (a swimmer compares herself with people in a swimming school). Indeed, in certain situations, people may rather compare themselves with an abstracted “average person” or estimate what percentage of people in a certain population they are better or worse than in certain respects.

We will describe two such phenomena – the better-than-average effect (BTAE) and unrealistic optimism (UO) – more extensively below, since people’s manifestation of them, as well as the consequences of this state for their functioning, became the subject of our empirical inquiries at a time when the world was affected by the coronavirus pandemic. We will also present the effects of such comparisons (especially regarding the manifestation of health-seeking behavior) in the context of the COVID-19 pandemic. We will see whether these social illusions benefit or harm the falsifier.

1.3 Better-than-Average Effect

Let’s start with a reminder that one of the three basic positive illusions that Taylor and Brown (1988) wrote about in their now classic article was an inadequate, inflated confidence in one’s own worth. Well-functioning people judge their own competence and other desirable qualities more favorably than is objectively warranted. The better-than-average effect is one manifestation of just such an overly positive self-perception. Psychological research on this effect involves asking participants to rate the strength of some trait, ability, or skill they possess by comparing themselves with other people.

In one classic study of the better-than-average effect (Alicke et al., 1995), college students were asked to rate the extent to which they possessed 20 positive personality or character traits (such as intelligent, honorable, reliable, or responsible) and 20 negative traits (such as deceptive, humorless, snobbish, or liar) compared to the average college student of their sex. Each time, they used a 9-point scale: from 0, labeled “much less than the average college student,” at the left end to 8, labeled “much more than the average college student,” at the right end. The middle value was 4, “about the same as the average college student.”

Logically, it could follow that some students should consider themselves “better” than others, some as equal, and some as “worse.” It turned out that the majority of those surveyed felt that the positive designations suited them better than the average student, while the negative designations suited the average student better than them. What a strange result! After all, it’s impossible for everyone to be better than the average person! The values obtained in such a study represent the degree to which the average estimates for “self” deviate from the middle value of 4. Obviously, positive numbers mean that the respondents believe a trait describes them better than it does the average student (comparisons that portray them in a positive light), and negative numbers mean that it fits the average student to a greater extent than it does them.

In doing so, let us note that among people who describe themselves as better (in certain respects) than the average person, there may be those who actually are better, those who are average, and those who, realistically speaking, are worse. In the case of people belonging to the first group, it is difficult to treat such estimates as an illusion or to refer to them as bias. Note, however, that only half of the population surveyed can be better than the average person. This is because participants are comparing themselves to an average person, not to an average value. The distribution of a particular value may be such that more than 50% of the population is better than average. For example, 62% of a country’s population may earn above the average salary in that country. This will be the case if a sizable portion of that country’s population lives in poverty. Likewise, 58% of the population may eat more meat than the average per capita consumption of meat. This will be the case if a large group of people do not eat meat at all or eat it very rarely and in small amounts. But the logic of the better-than-average effect is different: the individual compares themselves not to the average value but to the average person.
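To make the distinction between the average value and the average person concrete, here is a minimal sketch in Python (the salary figures are invented purely for illustration). It shows a skewed distribution in which far more than half of the people earn above the mean salary, whereas, by definition, only half can earn more than the person in the middle of the distribution.

```python
import statistics

# Hypothetical monthly salaries in a small, skewed population:
# a few very low earners pull the mean down.
salaries = [800, 900, 1000, 3000, 3100, 3200, 3300, 3400, 3500, 3600]

mean_salary = statistics.mean(salaries)      # the "average value"
median_salary = statistics.median(salaries)  # the "average person"

above_mean = sum(s > mean_salary for s in salaries)
above_median = sum(s > median_salary for s in salaries)

print(f"mean = {mean_salary}, median = {median_salary}")
print(f"{above_mean / len(salaries):.0%} earn more than the mean value")
print(f"{above_median / len(salaries):.0%} earn more than the median person")
```

With these made-up numbers, 70% of the people earn more than the mean salary, yet only 50% earn more than the person in the middle, which is exactly why the better-than-average comparison is bounded at one half.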

With reference to the study mentioned above (Alicke et al., 1995), at most every second college student can be better than the average college student. Thus, if more than 50% consider themselves better, then some of them simply cannot be correct. The better-than-average effect thus refers to an illusion observed at the group level: we state that it is shared by some portion of the population, and that portion grows as the share of people who consider themselves better than the average person exceeds the magic barrier of 50%. At the same time, it should be emphasized that even if literally all respondents claimed to be better than the average person (e.g., they all thought they were more intelligent than the average person), we could only speak of bias with regard to half of them. (After all, half may indeed be more intelligent than this average, just as half may be less intelligent.) Be that as it may, the research presented above is excellent evidence of the workings of the falsifier-censor: I believe that I am better than others! Someone – and no small part of those participating – must be falsely positive in their self-assessment!

A similar regularity revealing the tendency of most people to see themselves as better than average was also noted by Jonathon Brown (1986), but using a slightly different method of measurement. The participants first estimated to what extent they themselves had various positive and negative qualities and then to what extent the average person had these qualities. This time, therefore, the better-than-average effect can be ascertained from the difference in estimates of the self and the average other. However, this method of measurement, different from that used by Alicke et al., did not affect the recorded result, which illustrates how universal and robust the described process is. It turned out that people as active censors not only falsify the past to match the present but also falsify the present to maintain a positive self-image. To achieve this, they ascribe positive traits to themselves to a greater extent than to the “average other,” and when it comes to negative traits, they see less of them in themselves than in the average person.

Still another method employed in psychological research to measure the better-than-average effect was to have participants select one of two responses – “below average” or “above average” – when answering the question of the extent to which they possessed certain (negative and positive) traits. Perhaps the most spectacular (at least in terms of results) study conducted in this paradigm revealed that 94% of teachers considered themselves to be better educators than the average teacher (Cross, 1977). This means that hardly any of them considered themselves worse than average! Let us note in passing that the disadvantage of this paradigm, compared to those previously described, is that it is impossible to describe oneself as exactly similar to the average person – one can only be better or worse than them, which of course is inconsistent with what at least some people may actually think of themselves. This inconvenience can, of course, be easily avoided by introducing a third option for participants to choose from: “about the same as the average person.”

Some researchers have followed a more precise approach to determining the strength of the better-than-average effect. Here, the participant is asked to indicate the percentage of people from their own group (e.g., students of their sex at the college they attend) whom they surpass on a given (positive or negative) trait. If we imagine a scale from 0% (no one) to 100% (everyone), its center is the value of 50% (half of the people in that group). Thus, an average score significantly above 50% for desirable traits, or significantly below 50% for undesirable traits, indicates the presence of a better-than-average effect in the population under study, because such a distortion of self-report (placing oneself above most others on positive traits and below most others on negative ones) casts the person in an extremely positive light.
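As a rough illustration of how this percentile-based paradigm can be analyzed, the sketch below uses invented ratings (the numbers and variable names are ours, not taken from any study cited here) and tests whether the group’s mean self-placement on a desirable trait exceeds the 50% midpoint, using a simple one-sample t statistic as one conventional way of summarizing such data.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical answers to: "What percentage of students of your sex at your
# college do you surpass in intelligence?" (0 = no one, 100 = everyone).
self_placements = [70, 65, 80, 55, 75, 60, 85, 50, 72, 68, 90, 62]

m = mean(self_placements)
se = stdev(self_placements) / sqrt(len(self_placements))
t = (m - 50) / se  # one-sample t statistic against the 50% midpoint

print(f"mean self-placement = {m:.1f}%, t({len(self_placements) - 1}) = {t:.2f}")
# A mean clearly above 50% (with a large t) suggests a group-level
# better-than-average effect for this desirable trait; for an undesirable
# trait, the effect would show up as a mean clearly below 50%.
```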

The classic study conducted in such a paradigm is one in which students with drivers’ licenses from both the United States (University of Oregon) and Sweden (University of Stockholm) assessed their competence as automobile drivers (Svenson, 1981). The study was conducted in a group setting, and each participant was tasked with estimating what proportion (in percent) of the people in the room they drove more safely than, and what proportion they were more skillful drivers than. As we will see in Table 1.1, the majority of people surveyed in both countries believed that they drive more safely and more skillfully than most of the other students, which of course is simply objectively impossible.

Table 1.1 Distribution of percent of estimates over degree of safe and skillful driving in relation to other drivers

The better-than-average effect with regard to being a driver can be vividly illustrated by the example of driving on the highway. From time to time, we overtake some vehicles, and from time to time, some vehicles overtake us. What do we think of other drivers? Those we overtake are most likely driving slowly because they lack skills and therefore drive overcautiously. They are simply poor drivers. Those who overtake us, on the other hand, are lunatics, “organ donors” who think that they alone can drive well, while they in fact compensate for their lack of skill with needless bravado. So, in our own eyes, we are better highway drivers than both those who drive slower than us and faster than us.

However, this means of identifying the better-than-average effect – by indicating the percentage of people whom one is better than – has one serious drawback: when we are not dealing with a normal distribution, ambiguity enters the picture. If we assume, for example, that one-third of participants believe that they are better in some respect than as many as 90% of people, while most of the remaining participants estimate that they are better than, say, 45%, serious difficulties of interpretation arise. The mean estimate is then roughly (1/3 × 90%) + (2/3 × 45%) = 60%, even though two-thirds of the participants place themselves in the worse half. Depending on the perspective taken, it is therefore possible to conclude that the better-than-average effect is present (the average estimate exceeds 50%) or that it is not. (In our opinion, the latter conclusion would be correct, since only a clear minority of people believe that they are significantly better than the vast majority, while the majority believe that they are among the worse half.) Let us add for the sake of clarity that we are not aware of any case in which the doubts we have presented actually arose in psychological research on the effect described here. In other words, the body of literature on this subject gives us no reason to cast doubt on the universality of the belief in one’s own “superiority.”

The use by researchers of the four distinct approaches to measuring the better-than-average effect detailed above does sometimes complicate the comparison of results recorded in particular empirical studies, but, on the other hand, it increases our confidence that we are dealing with a real, robust, and prevalent psychological phenomenon. This does not mean, of course, that people consider themselves better than others (or better than the average person) in terms of all possible qualities and competencies. This limitation is captured by the phenomenon sometimes termed the “Muhammad Ali effect.”

Muhammad Ali is considered by many boxing fans to be the greatest fighter of all time. One anecdote involving Ali has it that when he was questioned by a journalist about his intelligence, he replied that he had never considered himself intelligent, nor did he think he was smarter than other athletes or other boxers. At the same time, he explained that he was simply much faster than others, which ensured his success in the ring. As he put it, “I’m so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark.”

As Paul Van Lange (1991) has shown, most people do not claim to be superior to others in all positive respects. The Muhammad Ali effect refers to the fact that belief in one’s own superiority is limited to selected characteristics. In most cases, it is more important for people to believe that they are more moral than most others than to believe that they are more competent (Van Lange & Sedikides, 1998). In the case of competence, people believe they are better than the average person only in their chosen areas (e.g., they think, “I am a better driver than most people, but I certainly don’t dance better than the average person of my sex”).

However, since different people may see this superiority in relation to different traits and competencies, the overall picture emerging from empirical research indicates that the better-than-average effect is largely universal. In other words, the sphere of comparison is as extensive as the number of traits or other personal characteristics being compared. It should thus come as no surprise that people’s belief that they are better than others across a wide range of mental traits, skills, and competencies has been evidenced in a multitude of studies (see Chambers & Windschitl, 2004; Zell et al., 2020, for reviews), further indicating the robustness of this effect. In yet other studies, it has been shown that, in addition to the aforementioned competencies and positive psychological traits, people also consider themselves better than others with regard to the frequency and intensity of various behaviors. Thus, for example, Leviston and Uren (2020) showed that most of the people they surveyed consider themselves more involved in environmentally friendly activities than others. In turn, a number of other studies indicate that people believe they are more likely to be involved in various charitable endeavors than most other people (e.g., Brown, 2012; Epley & Dunning, 2000). Still other studies reveal that people believe they eat much more healthily than others (Scherer et al., 2016) and even that they eat for “better” reasons than others: people declare that they themselves eat mainly because they are simply hungry and the food they choose is tasty and healthy, whereas others, they say, eat for different and not necessarily praiseworthy reasons – to make a desirable impression on others, to conform to social norms, or to make themselves feel better through food (Sproesser et al., 2017).

What is particularly interesting is that such comparisons can concern traits that are unusually strongly tied to our very humanity – morality. For example, Dolinski and Grzyb (2020) conducted an extensive research program on obedience that drew on the famous experiments of Stanley Milgram (1963, 1965, 1974). In these studies, individuals were invited into a laboratory and told that they would be participating in an experiment on the effect of punishment on learning performance. Their task was to step into the role of a teacher and administer electric shocks to a “student” (in fact a confederate of the experimenter, who only suggestively feigned suffering). The student was to be punished for each successive mistake with an increasingly powerful jolt. A special apparatus was used to administer the punishment – an electric current generator equipped with 30 switches, the first of which was marked 15 V, with each subsequent one marked 15 V higher. Thus, the second switch was marked 30 V, the third 45 V, and the last 450 V. The experiment ended when the participants categorically refused further cooperation or when they had successively pressed all 30 switches.

Dolinski and Grzyb studied various personality and situational determinants of participants’ behavior in such a situation (for ethical reasons, they stopped the study as soon as the participant pressed the tenth switch, labeled 150 V), but they also conducted one study in which they explored people’s beliefs about how people behave when instructed by a psychologist to administer electric shocks to another person in the lab (Grzyb & Dolinski, 2017). The participants were presented with Milgram’s experiment in detail and asked to judge both how they themselves would behave in the experiment and how other participants would act. These other participants were referred to variously as an “average person” (in general) and as an “average person of the same nationality” (the research was conducted in Poland), alongside questions about “average people” of several other nationalities. Each time, the participants were asked to indicate the last switch that a participant in the Milgram experiment would press. As we will see in Table 1.2, participants were convinced that they themselves would terminate their participation in the experiment much sooner than others. In other words, they declared that they were more moral people, less likely to visit evil on an innocent stranger. Importantly, these declarations can easily be juxtaposed with participants’ actual behavior (in Milgram’s original study and many subsequent replications), which showed that roughly two-thirds of them acted unethically by applying all possible punishments to the “student.”

Table 1.2 The voltage of the last switch that the participants indicated in particular responses (myself, average person, average Pole, etc.)

The majority of participants in the study by Grzyb and Dolinski had only just learned about the Milgram experiment scenario, but some had prior knowledge of Milgram’s research, and these participants might be expected to be more rational, less biased. It turned out that prior knowledge of this shocking experiment modified beliefs about how other people would behave but did not affect beliefs about one’s own behavior (the differences between the two groups were not statistically significant), nor did it reduce the better-than-average effect (see Fig. 1.1). Thus, it can be said that knowledge of the Milgram experiment scenario and its results influenced judgments about other people but made no difference to beliefs about oneself, which demonstrates how strong the better-than-average effect is. Despite knowledge of the experiment’s results, we still persist in portraying ourselves in a more positive light.

Fig. 1.1 Mean voltage of the last switch indicated in response to the questions “how would I behave” (participants familiar with the Milgram experiment: 111 V; not familiar: 94 V) and “how other people would behave” (familiar: 203 V; not familiar: 161 V)

Source: Grzyb & Dolinski (2017), Frontiers in Psychology, fpsyg.2017.01632, Figure 1

Copyright: Frontiers

As we approach the conclusion of this section, let us take a look at the issue from a broader perspective. As with all other psychological phenomena, in addition to “the way it is” (this we already know), it is necessary to ask “why is it so?” In other words, we inquire as to the psychological mechanism underlying the better-than-average effect: why do we exhibit the tendency to falsify our perception of the world and ourselves in it? The empirical material accumulated so far gives us cause to assume that the motivation to protect and boost one’s self-esteem lies at its core. This is because, first of all, it has been shown that the aforementioned effect is stronger when the characteristics being compared are important (vs. less important) to the participant (Brown, 2012; Ziano et al., 2020), as well as when they are important (vs. less important) from the perspective of the culture in which the research is conducted (Sedikides et al., 2003; Lee, 2012). In addition, the effect analyzed here is stronger when people compare themselves to others on abstract dimensions (Dunning et al., 1989) and those for which it is difficult to develop objective criteria for verifying the accuracy of estimates (Van Lange & Sedikides, 1998). Also worth mentioning here are studies in which belief in one’s own “superiority” rose immediately after participants experienced threats to their self-esteem (Brown, 2012).

However, on a case-by-case basis, cognitive mechanisms can reinforce or weaken people’s beliefs that they are better than others. It has been shown, for example, that the better-than-average effect weakens when people are asked first to estimate to what extent a certain trait is characteristic of the average person and only then to estimate the extent to which it applies to themselves (e.g., Pahl & Eiser, 2007). This change in the order of the questions means that the typically positive opinion held of oneself ceases to operate as a reference point. It is also noteworthy that the better-than-average effect is stronger when participants compare themselves with an abstract “average other” as opposed to a specific person (e.g., a family member or friend). This can be explained by the fact that information about the self is usually highly accessible cognitively: we know more about our own actions and qualities than about the actions and qualities of others, even those closest to us. Thus, we may feel superior to the “average other” because we know nothing about this abstract figure, while we know quite a bit about ourselves. However, if we are asked to compare ourselves to a particular friend or spouse, the difference in the availability of information about ourselves and about this other person is much smaller, precisely because we know much more about them – if only because of a familiarity bred over years spent together – than about, for example, neighbors we rarely see (Kruger, 1999).

To summarize the research on the phenomenon described in this section, we would like to present the conclusions of a meta-analysis of studies on the better-than-average effect (Zell et al., 2020). Its main results can be summarized as follows:

1. The better-than-average effect is highly robust across studies, with overall effect sizes ranging from large to very large. It should be added that these effect sizes are clearly larger than those usually recorded in psychological studies, which shows how extremely strong this effect is.

2. The better-than-average effect is weaker when participants are addressing positive attributes or competencies than when they compare themselves with others on negative attributes.

3. The better-than-average effect is stronger among young people than among older ones.

4. Participants’ sex and race did not influence the magnitude of the reported better-than-average effect.

1.4 Unrealistic Optimism

The previous section explored how we make false comparisons regarding our own traits: we attribute desirable qualities to ourselves and negative qualities to others. As we are about to see, this is not the only way in which the censor rewrites reality. This time, the falsification concerns not fixed qualities but rather events in everyday life that can affect us all – events which, whether good or bad, we should all reasonably expect to happen to us and to others with the same probability.

Every individual’s life is inevitably at risk of a traffic mishap, a natural disaster, the loss of one’s home, or an (incurable) illness. Desirable things can (and sometimes do) also happen to each of us: an interesting job, a successful marriage, healthy and talented children, or a fascinating vacation in an exotic country. Armed with knowledge of the better-than-average effect, you are likely already predicting that people assign the probabilities of positive and negative events to themselves and to others irrationally and unequally. And you are correct! However, before we explore this phenomenon, we begin with an introduction to optimism.

Numerous psychological studies that are now considered classics of the literature have demonstrated that we believe desirable events are more likely to happen than they actually are, while undesirable events seem less likely to us than objective circumstances would indicate (Marks, 1951; Irwin, 1953). There is also a fairly common tendency to see the present as better than the past and the future as better than the present (Matlin & Stang, 1978). Although the sources of this optimistic view of one’s future remain in dispute, the prevalence of such an attitude toward the possible course of events is undisputed. Most people are characterized by an optimistic attitude toward reality, and this attitude is activated in most situations (Seligman, 1998).

We write in this book about positive illusions. As you might imagine, our focus here is on precisely such optimism characterized by falsification and illusion, which is not an objectively sensible perception of the world. The very term “unrealistic optimism” clearly indicates reference to a positive expectation of future events that, at least partially, is divorced from reality. Psychology understands unrealistic optimism in two ways. First, we define positive estimates as unrealistic if we confront them with an objective criterion, allowing us to state the extent to which a given person’s expectations are (un)justified. The literature sometimes speaks in such cases of unrealistic absolute optimism (Shepperd et al., 2013).

Second, we can explore unrealistic optimism in a manner similar to how the occurrence of the previously discussed better-than-average effect is operationalized, namely, at the group (vs. individual) level. People can be asked to determine the probability that they will experience various desirable and undesirable events in their lives and the probability that the same things will happen to the average person. If most people believe their chances of positive experiences are higher than the average person’s, while their chances of negative experiences are lower, we are observing unrealistic optimism at the group level. We may speak of an internal censor falsifying the probability of events while not claiming that this applies to every individual. In contrast to unrealistic absolute optimism, we can call this phenomenon unrealistic comparative optimism (Shepperd et al., 2013).

Let’s start with the first of these issues. In some cases, the unrealistic nature of positive expectations about the future can be ascertained only after the event in question has occurred. Thus, for example, it is possible to determine students’ realism or unrealism about their grade on an exam only after they have taken it (e.g., Ruthig et al., 2017) or the degree of unrealism of their optimism about the salary they will receive after graduation by confronting their prior expectations with the offer they actually receive (Shepperd et al., 1996). When it comes to financial advisors, it is also possible to verify post facto the accuracy of their positive expectations about economic developments (Calderon, 1993), and in the case of managers, it is possible to determine whether they were correct in claiming that risky and presently loss-making business projects will prove profitable in the long term (Meyer, 2014). Perhaps the most spectacular effect associated with optimism framed in this way is the planning fallacy. People embarking on a task usually estimate that it will take them less time than it actually subsequently takes (Tversky & Kahneman, 1974). The results of many empirical studies have unequivocally proven this regularity for a wide variety of tasks (e.g., Koole & Spijker, 2000; Griffin & Buehler, 1999; Byram, 1997).

In other cases, the degree of (un)realism of optimism can be estimated at the stage at which we determine the probability of certain states of affairs. The chances of winning the lottery, for instance, can be calculated precisely. Thus, if someone filling out a lottery ticket believes that they have a better chance of winning than the math would indicate, we can safely call this optimism unrealistic. Similarly, someone who pulls 1 of 52 cards from a deck, hoping to draw the ace of spades, and believes that they have, say, a ten or even a five percent chance of drawing that very card (when the true probability is 1/52, i.e., slightly less than two percent) demonstrates completely unjustified, and therefore unrealistic, optimism (Marks, 1951; Irwin, 1953).

In other cases, we cannot rely on such clear-cut calculations derived from probability calculus but can rely on approximations. With precise data on a patient’s condition, their medical history, genetic burdens, and so on, a doctor or insurer can, with the help of a computer algorithm, determine the approximate probability of the patient’s recovery or calculate how long the patient is likely to live. If the patient’s expectations deviate significantly from these estimates in the positive direction (e.g., a longer life expectancy), we can term this optimism unrealistic.

This operationalization of unrealistic optimism may seem appropriate, since we relate it to a specific person (e.g., someone with “good” vs. “bad” genes and long-lived vs. short-lived relatives) who either is or is not unrealistically optimistic, and not – as is the case with comparative optimism – to a group of people, some of whom may be realistically optimistic and only some of whom demonstrate unfounded optimism (after all, not everyone can reasonably assume that they will live longer than others). In fact, however, the issue is far more complex. In psychological research, unrealistic optimism has rarely been estimated in the manner outlined above, and so the whole battery of variables related to a person’s life history, genetic makeup, personality factors affecting their current and future situation, and so on, has rarely been taken into account. In general, the estimates made by participants have been compared with the probability of certain states of affairs in a population. Respondents thus made judgments about how likely they were to develop breast cancer or to need dental surgery, and it was then checked how far these judgments deviated from the probabilities calculated for the population to which, demographically speaking, the given person belonged (e.g., Hanoch et al., 2019, 2022).

By the same token, a specific individual who believes, for example, that their likelihood of contracting lung cancer is significantly lower than the statistics for people of their sex and age (statistics that include smokers) would indicate may simply be right. Their estimate takes into account the fact that there has never been a case of cancer in their family, and they may reasonably assume that by not smoking they are avoiding this serious disease. Similarly, someone who estimates a lower probability of causing a car accident than the statistics would indicate may be right, either because they are a very experienced and cautious driver who hasn’t received a ticket in twenty years or because they don’t drive at all, thereby excluding the possibility of causing an accident. In fact, the factors that individual participants may take into account that objectively modify the probability of a variety of desirable and undesirable events are so abundant that it is downright impossible to include them all when calculating a probability we might deem objective.

Therefore, we feel that a more appropriate way to study unrealistic optimism is through a procedure in which people compare themselves to other people. While we do not know whether, in the case of a particular individual manifesting optimism, this optimism is of an unrealistic nature, it should be explicitly emphasized that we also do not assume that we know this. On the contrary, we talk about the unrealistic nature of optimism exclusively at the group level because “everyone can’t have a greater chance than the average person” (in the case of positive events) and “everyone can’t have a lesser chance than the average person” (in the case of negative events).

Before proceeding to a discussion of the regularities associated with unrealistic optimism understood from this perspective, let us emphasize that studies which simultaneously measure absolute optimism and comparative optimism have shown that they are distinct phenomena, in the sense that it is possible for one to manifest absolute optimism while not manifesting comparative optimism, and vice versa – i.e., manifesting comparative optimism, in the absence of absolute optimism. The dynamics of change over time of the two types of unrealistic optimism may also differ (Ruthig et al., 2017).

The first studies on comparative unrealistic optimism were carried out by the creator of the concept, Neil Weinstein (1980). He asked students to estimate how likely they were, compared to the average student of their sex from their university, to experience 18 positive states of affairs in the future (such as “like postgraduation job,” “owning your own home,” “living past 80,” or “having a mentally gifted child”) and 22 negative states of affairs (such as “having a drinking problem,” “attempting suicide,” “heart attack before age 40,” or “being sterile”). They were instructed to specify the difference in percentage terms (from 0, meaning no possibility of occurrence, to 100, meaning completely certain occurrence of a given event), indicating the direction of the difference. In the vast majority of cases, the students were convinced that desirable things would happen to them rather than to others, while undesirable things would happen to others rather than to them.
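For readers who prefer a concrete illustration, here is a minimal sketch of how comparative optimism judgments of this kind can be summarized at the group level. The event labels and numbers below are invented, and the scoring convention (self-minus-other differences, with the sign flipped for negative events) is just one common way of handling such data, not Weinstein’s exact procedure.

```python
from statistics import mean

# Hypothetical comparative judgments: each value is a participant's estimate of
# their own chances minus the chances of the "average student" (in percentage
# points) for a given future event.
data = {
    "like postgraduation job": (+1, [20, 10, 15, 5, 25, 0, 10]),          # positive event
    "having a drinking problem": (-1, [-30, -20, -10, -25, 0, -15, -40]),  # negative event
}

for event, (valence, diffs) in data.items():
    # For positive events, optimism means believing one's own chances are higher;
    # for negative events, it means believing one's own chances are lower.
    optimism_scores = [valence * d for d in diffs]
    print(f"{event}: mean comparative optimism = {mean(optimism_scores):.1f} points")

# Group-level unrealistic optimism is indicated when these means are reliably
# above zero: not everyone's chances can differ from the average person's in
# the same favorable direction.
```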

Looking at the results of Weinstein’s (1980) pioneering study, you will surely have noticed that unrealistic optimism was very pronounced for some possible future states of affairs (e.g., like postgraduation job or having a drinking problem) and weaker for others (e.g., having a mentally gifted child or having gum problems). There were also categories for which the phenomenon of unrealistic optimism was not noted (e.g., marrying someone wealthy or being a victim of burglary). Weinstein’s explanation is that optimistic distortions (biases) are particularly fostered by events that the individual perceives as controllable. People may, for example, believe that they will choose the kind of job they like, but whether they will be robbed depends more on the burglar than on themselves. (Moreover, if they simultaneously believe that they will earn well, they should, to be consistent, also assume that a burglar would rather break into their house than into the house of a poorer neighbor, which in turn should diminish their optimism about avoiding an unwanted visit from a burglar.) Also, some diseases and unfortunate events are of such a nature that, at least at some point in one’s life, they can be avoided (e.g., alcoholism, or drowning in a pool if one cannot swim and so stays out of pools), while the occurrence of others can be influenced much less or, practically speaking, not at all (e.g., pancreatic cancer or being hit by a car).

At the same time, Weinstein states unequivocally that the optimism revealed in such judgments is not necessarily unrealistic for every person questioned. Undoubtedly, some students have perfectly rational grounds for believing, for example, that they will own their own home (because they have, say, rich parents who have promised them this) or that they will most likely manage to avoid a heart attack before the age of 40 (because no one in their family has ever had heart trouble and they themselves are not obese, play recreational sports, eat healthily, do not smoke, drink alcohol only occasionally and in small amounts, undergo regular checkups, and avoid a stressful lifestyle). However, unrealistic optimism can nonetheless be identified at the group level, because it is simply mathematically impossible for most people to have a better chance of experiencing positive states and avoiding negative states than the average person.
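
The arithmetic behind this group-level argument can be spelled out briefly; the notation below ($p_i$ for person $i$’s true probability of experiencing a given positive event, $\bar{p}$ for the group mean, $m$ for the group median) is introduced purely for illustration:

\[
\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i , \qquad
\#\{\, i : p_i > m \,\} \le \frac{n}{2}.
\]

It is impossible for every member of a group of $n$ people to have a probability above the mean $\bar{p}$, and no more than half of them can lie above the median $m$. So when the vast majority of respondents place their own chances above those of the “average” member of their group, at least some of those judgments must be mistaken, even though the data cannot tell us whose.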

Subsequent empirical research on unrealistic comparative optimism has confirmed that with regard to a great many future events, people believe that positive events are more likely to happen to them than to others and that negative events are more likely to happen to others than to themselves. This research not only confirmed the results obtained in Weinstein’s (1980) pioneering study but also revealed other areas in which people feel a specific kind of privilege. Thus, it is others (rather than me) who will fall victim to various types of crime (Perloff & Fetzer, 1986); it is others (rather than me) who will be involved in a serious car accident (McKenna, 1993); it is other women (rather than me) who will experience an unwanted pregnancy (Burger & Burns, 1988). Also, study participants tended to reveal a strong conviction that various illnesses and ailments (not just those studied in Weinstein’s pioneering study) would befall other people in the future, rather than themselves (Clarke et al., 2000; Hoorens & Buunk, 1993; Weinstein, 1983, 1984, 1987). Perhaps the most spectacular example of comparative unrealistic optimism came from a survey of pilots. As many as 95% of them were convinced that another, average pilot was more likely to cause an airplane accident in the future than they were (O’Hare, 1990).

The hypothesis that unrealistic optimism is stronger for controllable events than for events whose occurrence (or, in the case of negative events, avoidance) the individual can influence only minimally or not at all has also found support in other, more contemporary studies (e.g., Harris, 2008; Klein & Helweg-Larsen, 2002; Menon et al., 2009). In one of them, the belief in having influence turned out to be so strongly linked to unrealistic optimism that it led Dutch prostitutes to believe that they were less likely to contract AIDS than the average Dutch woman (van der Velde et al., 1994). Other research has shown, in turn, that people very often believe that others have less influence on achieving desirable states and avoiding undesirable ones than they themselves do (Hoorens & Smits, 2001), which also strongly supports the thesis of a positive relationship between beliefs about personal control (already referred to more than once) and the degree of unrealistic optimism.

A second factor that largely determines the very occurrence of the unrealistic optimism effect (and, possibly, its magnitude) is how important the event in question is to the individual. Having an interesting and, in addition, well-paid job is important both from an existential, long-term perspective and from a self-esteem perspective. The fact that someone can steal our car (especially if it is insured against theft) is far less important, particularly in the long run (except, perhaps, when one makes a living from driving; but that is the exception to the rule, since surely only a small number of those reading this work are professional drivers). Such an event would not be a great blow to our existence, and self-esteem is unlikely to be affected at all. Not surprisingly, with regard to work after graduation, unrealistic optimism was very pronounced, while with regard to the possibility of losing one’s car, the effect was entirely absent.

It is also easy to see that, in general, unrealistic optimism is more pronounced for negative events than for positive ones. This effect is consistent with a regularity that stems from prospect theory (Kahneman & Tversky, 1979): avoiding losses is usually more important to people than achieving gains. Accordingly, people are more motivated to believe that something bad will happen to other people rather than to them than they are to believe that something good will happen to them rather than to other people.
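
The asymmetry can also be expressed with the value function of prospect theory. The parametric form below is a commonly used illustration (the exponents and the loss-aversion coefficient are generic parameters, not values estimated in the studies cited here): the subjective value of an outcome $x$, coded relative to a reference point, is

\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad \lambda > 1,
\]

so a loss of a given size weighs more heavily than a gain of the same size. If losses loom larger, the motivational payoff of believing “this misfortune will strike others, not me” is correspondingly greater than that of believing “this good fortune will come to me, not others,” which fits the stronger unrealistic optimism observed for negative events.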

Finally, it is worth asking not only whether the effect exists (we have already answered this) but also what purpose it serves. What psychological mechanisms give rise to the phenomenon of unrealistic optimism? As with the better-than-average effect discussed earlier, motivations related to protecting and elevating self-esteem (self-protection and self-enhancement) underlie comparative unrealistic optimism. In the case of unrealistic optimism about adverse events, however, we can also point to a reduction in fear of the future that was not present in the BTAE. Unrealistic optimism allows us not to worry about what might happen (Hoorens, 1995; Klein & Weinstein, 1997), deceiving ourselves into believing that although “bad” events do happen, they do not happen to me, the person who engages in these falsifications. It is easier, for example, to get in the car and drive to a family reunion believing that traffic accidents happen to other people rather than to me, allowing us to take the wheel with less anxiety. Similarly, we may deceive ourselves when considering whether to grab a beer, believing that alcoholism threatens other people rather than us. Absent such optimistic beliefs, driving could be associated with tremendous stress, and drinking beer would not be an enjoyable experience.

In the case of the better-than-average effect, mechanisms of a cognitive nature played a significant role. It is no different with regard to unrealistic optimism. Recall that when discussing the BTAE, we noted that in the case of future events that can be influenced in some way, individuals are much more aware of their own competence and the actions they plan to take than of the competence and planned behavior of others. One’s own actions are simply almost always cognitively available (the actor is aware of them), while the actions of others are only sometimes available. Thus, an individual may mistakenly believe that they manifest certain behaviors more often than others do, or that their behaviors are more relevant or better adjusted to the external environment than those of others. This mechanism is usually referred to by psychologists as cognitive egocentrism. (We note in passing that, by the same logic, individuals should also be more aware of their own inappropriate behaviors, such as risky behaviors, than of other people’s, and should therefore judge themselves as more likely than others to manifest them. This, in turn, should hinder rather than facilitate the appearance of the unrealistic optimism effect. Cognitive egocentrism thus has some limitations as a mechanism for explaining the occurrence of the illusion discussed here, a point that is usually overlooked in the psychological literature.)

Other cognitive mechanisms that may be responsible for falsification or self-deception are related to attentional focus. It may already be obvious to you that in comparisons with other people, individuals are focused on themselves – we serve as our own target, or reference point. However, if we ask someone to first estimate the probability of a certain event occurring in the life of another (average) person, we will focus their attention on that very social object. Now it is this object (“average person”) that becomes the target, or reference point, which may weaken cognitive egocentrism. It has emerged that in such a situation, the strength of unrealistic optimism diminishes (e.g., Chambers et al., 2003; Hoorens, 1995). Our focus of attention then shifts from the “inner falsifier/censor” to other people. Still other cognitive mechanisms conducive to the emergence of comparative unrealistic optimism have been discussed extensively by, e.g., Chambers and Windschitl (2004) and Shepperd et al. (2013).

Despite many similarities, the two comparative distortions (biases) described in this chapter differ in more than just the aforementioned element: the better-than-average effect refers to relatively stable attributes, e.g., traits, while unrealistic optimism refers to events in the future. Moreover, the two biases are dissociable: in one situation both may be present, while in another only one of them is. For example, the unrealistic optimism bias is present, and the better-than-average effect is not, when a person thinks: “I am not better than others, but I have always had a lot of luck in my life” or “I believe God will save me from misfortune.”

At the same time, however, the similarities between the two phenomena are quite striking. We should emphasize that both in the case of the better-than-average effect and in that of comparative unrealistic optimism, we are dealing with the favoring of one’s own self in processes of social comparison, where the object of these comparisons is not some specific other person but a set of people belonging to the same group as the individual. In other words, acting as a censor working for ourselves, we falsify – in a positive way – our self-image, our actions, and the future that awaits us. Another issue binding these two concepts is their illusory nature: the judgments are not grounded in facts and are impossible from a logical point of view, since in reality roughly half of any group must fall above its average and the other half below it, so the vast majority cannot all be right in placing themselves on the favorable side. Not surprisingly, in the literature on the subject, the two phenomena are often discussed together (e.g., Chambers & Windschitl, 2004; Helweg-Larsen & Shepperd, 2001). In the next chapter, we will combine the knowledge discussed so far with the purpose of this book: we will ask whether these falsifications also take place during pandemics. If so, how do they manifest themselves? And if they do manifest themselves, how do they affect health-promoting behavior: do they help people cope with a pandemic or, just the opposite, do they accelerate the falsifier’s destruction by risking illness and even death? And finally, we will consider whether change is possible: whether there are “psychological vaccines” that could reduce these harmful illusions.

We will begin with a presentation of research on these comparative biases during the specific period of the spread of the disease that was initially diagnosed in China in 2019 as severe pneumonia and that, in the following year, took the form of the COVID-19 pandemic, haunting almost the entire world and evoking such notions as pandemic (rather than epidemic), mass death, and helplessness.