Leveraging cultural narratives to promote trait inferences rather than stereotype activation during person perception

Correspondence: Lasana T. Harris, 26 Bedford Way, London, WC1H 0AP, UK. Email: lasana.harris@ucl.ac.uk

Abstract

Stereotypes are cognitive constructs that make available a suite of traits and behaviors relevant to a social target. However, they also fuel social biases against social targets based on their perceived social group membership. Traditional approaches to social bias reduction focus primarily on the affective component—prejudice—on cognitive control as a regulatory tool, and on changing stereotype content. Here, I focus instead on changing the initial categorization process, moving away from stereotype activation towards trait inferences during person perception. Specifically, I argue that traits, another cognitive construct, can be promoted instead as the default social categorization. This can occur if cultural narratives highlight behavior rather than social groups, making trait inferences, rather than stereotype activation, the more salient initial categorization process.

Racism remains an ongoing social problem in North American and European society; systemic racism abounds, even six decades after the Civil Rights Movement, which itself occurred a century after the American Civil War was fought for the same reason. People of African descent 1 in North America and Europe continue to be treated as less than fully human, receiving systemic disadvantage in justice and policing, health-care, education, business, and seemingly every other facet of life. At the core of systemic racism are cognitive and affective responses to people of colour: stereotypes and prejudice, respectively (see Feagin, 2013). Here, I seek to broaden the understanding of the stereotyping processes that sustain such long-lasting injustices. To do this, I will integrate the social neuroscience literature with social psychological theory in a search for alternate approaches to social bias reduction. Specifically, I will argue that cultural narratives—representations or explanations of behavior pervasive in a culture—can promote trait inferences rather than activate stereotypes.
Social cognition research describes the psychological processes by which people infer other minds (Fiske & Taylor, 1991, 2013). One route to such useful inferences emerges from social categorization processes (Fiske & Neuberg, 1990). People, upon perceiving another agent, determine whether the agent is a person or not, then ascribe a social category to that person (Harris, 2017). These categories are associated with stereotypes—cultural knowledge about the traits and behaviors category-group members are likely to display. Here, I attempt to facilitate a deeper exploration of stereotyping and trait inferences—cognitive schemas that describe possible behaviors, goals, tendencies, and predispositions a person may possess. Together, I describe stereotypes and traits as social categorization processes. I do not attempt to document the pervasive nature of stereotypes, or to justify their existence, given the large number of publications already on said topics (for one such review, see S. C. Wheeler & Petty, 2001). Instead, I attempt to explain how social categorization processes occur and function, and to broaden the consideration of social categorization processes beyond stereotyping to trait inferences. Such a broader understanding of social categorization processes will reveal alternative strategies for eradicating category-based violence and social bias by addressing the cognitive mechanisms that give rise to these phenomena. This broadening of perspective will also challenge some widely held conceptions in the literature about stereotypes, including their utility, their accuracy, and the relationship between stereotypes and trait inferences. Therefore, I will highlight strategies that may be effective for social bias reduction, both at the level of individuals and of systems.

| AN ARGUMENT FOR A SHIFT IN SOCIAL CATEGORIZATION PROCESSES
Stereotypes and traits are cognitive concepts or schemas that group possible past and future behaviors, motives, goals, and outcomes around a concept, making these more readily accessible to predict and explain behavior (Fiske & Taylor, 2013). The difference lies in the concept: stereotypes use social groups made relevant in a cultural context, while traits use personality groups, natural kinds of human beings that depend on dispositional attribution. One might argue that this razor-thin distinction between the two concepts does not clearly give stereotypes an advantage as a heuristic or mental short-cut during person perception. This lack of an advantage is the first theoretical contribution of this discussion.
Stereotypes are inaccurate, despite discussions regarding a kernel of truth (Oakes et al., 1994); they often arise from historical conditions that create the stereotype from a dominant group's perspective. These historical circumstances may simply reflect the rampant biases that maintained systems at the time and may no longer be relevant; nonetheless, a stereotype's origin reflects the fact that it once had a use, allowing more efficient processing of a group of people. Therefore, this historical context has to be accounted for when thinking about changing the prevalence of master statuses—the dominant, most salient social categories (Ferree & Smith, 1979)—during person perception, and shifting perception from stereotype activation to trait inferences.
Here, I argue that it is possible to leverage the existing research to shift social categorization away from spontaneous stereotype activation to trait inferences. This literature argues that people who are more familiar are less likely to activate stereotypes (see, for instance, Fiske & Neuberg, 1989). It is possible to enhance the familiarity of novel social targets simply by focussing on their behavior; most behavior is familiar to us all as human beings.
Within cultural contexts, behavior can be made more salient if cultural narratives promote behavior rather than social groups. For instance, media reports of behavior often begin with social group descriptions of the social targets, activating stereotypes. If such descriptions instead mentioned the behavior without the accompanying social groups, then trait inferences would be more likely. Communicating that people were killed by a gunman at a train station does not require placing the gunman in a specific social group. Therefore, the media and other sources of cultural narratives could potentially unlock social bias reduction simply by re-shaping the focus during person perception, making behavior and subsequent traits more salient (see Figure 1). Thus, rather than stereotypes being the unavoidable initial social categorization, cultural narratives provide a lens through which either stereotypes or traits can be made salient during person perception.

| STEREOTYPES RELY ON PERCEPTUAL CATEGORIZATION
Categories simplify the world, facilitating easier and more efficient processing of information. The brain could not function without categories because the world is too complex (Quadflieg & Macrae, 2011; Wilson, 1987). Therefore, categorization itself as a psychological process is not the problem. The consequence of categorization—stereotyping, the mindless application of inaccurate inferences to people—is where problems arise. Stereotypes are inaccurate representations of people, and such inaccuracies become problematic: stereotypes apply to the imagined average or prototypical group member, but each group member is an individual, thus a variant from the average or prototype, making stereotypes inherently less accurate. Stereotypes can be both positive (see Zou & Cheryan, 2017) and negative. However, positive stereotypes can lead to negative outcomes for individuals through depersonalization and other processes when those stereotypes are applied to an individual (Siy & Cheryan, 2016).
Bluntly, the social problem arises when the stereotype, whether positive or negative, is taken as a ground truth about an individual. Beyond the fact that stereotypes are inaccurate, groups are not homogeneous, and applying a broad trait derived from an inaccurate categorization does not account for the variability within groups. This argument therefore does not posit that stereotypes are accurate representations of people. Importantly, I distinguish between stereotype accuracy (e.g., Hall & Goh, 2017) and prototypical representations that are fictitious or at best abstract, not an accurate representation of an actual group member.
Stereotypes result from categorization processes that operate by default whenever we encounter entities in the world, including human beings. People need a reliable, quick, and efficient way of determining what they have encountered, and if it is a person, whether they are friend or foe. Stereotypes provide this information by making salient a relevant social category (group of people) based on inferences drawn primarily from visual and auditory categorization processes (Devine, 1989; Ko et al., 2006; Stangor & Lange, 1994). Feature space mapping in the temporal lobe of the brain facilitates matching visual or auditory stimuli to categories (Haxby et al., 2004).
Though there are areas in the temporal lobe specialized for faces (fusiform face area; Kanwisher et al., 1997) and body parts (extrastriate body area; Downing et al., 2001), these brain regions respond to expertise (Tarr & Gauthier, 2000), not people exclusively, and provide a brain mechanism that does not fully separate human and nonhuman perception.
I argue that the reliance on stereotypes reflects an adaptation that prioritizes the importance of a target's social groups in interpersonal interactions rather than a target's personality or character (Harris, 2017). As social group size expanded beyond smaller groups, interactions with strangers increased (see Hill & Dunbar, 2003). Quick evaluations of strangers were necessary, and since human beings can mislead or deceive (Cosmides & Tooby, 1992), behavior alone was not reliable. But social group, if known and assigned to these more frequent encounters with other people, would provide a way of quickly and efficiently recovering the benefits of trait inferences.

So how do social and non-social categorization compare? In both cases, perceptual categorization results from one of three processes: rule-based, information-integration, or prototype-distortion (Ashby & Maddox, 2005). Rule-based processes require criteria (rules) that stimuli must meet to belong to a category or not. For stereotypes, these rules are conveyed by perceptual features (e.g., skin-colour, dress, physical build). Changing stereotypes based on rule-based processes requires changing the salient categorization rule, a possibility I consider below. In addition, one may question the origin of the rule, which is often tied to social customs and beliefs within societies, shaped by their historical circumstances, making the precise rule culturally specific rather than generalizable across human beings. People must learn these cultural rules before applying them.
Information-integration captures Bayesian processes through which people learn what does or does not belong to a category. This is most useful for novel category formation, so it explains how new societal groups may coalesce around an emerging stereotype based on cultural narratives or salient behavior explained by inferred intentions and competition (see Fiske et al., 2002). Information-integration also provides an important route to stereotype change if different behaviors and intentions are attributed to groups—a common occurrence within societies (e.g., see Bergsieker et al., 2012; Gilbert, 1951; Karlins et al., 1969; Katz & Braly, 1933). This categorization process emphasizes the cultural narratives around groups that shape explanations for behavior and attribute intention, reinforcing stereotypes.
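For readers who prefer a formal illustration, the information-integration account can be sketched as iterative Bayesian belief updating over candidate categories. The sketch below is not drawn from the person perception literature; the category labels and probability values are invented purely for illustration.

```python
# Minimal sketch (illustrative only) of Bayesian category learning, as in
# 'information-integration' accounts of perceptual categorization.
# Category names and probabilities below are invented for illustration.

def bayes_update(prior, likelihood):
    """Return posterior P(category | behavior) given a prior P(category)
    and a likelihood P(behavior | category) for each candidate category."""
    unnormalized = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

# A perceiver starts unsure which of two candidate categories fits a target.
prior = {"category_A": 0.5, "category_B": 0.5}

# An observed behavior is more probable under category A than category B.
likelihood = {"category_A": 0.8, "category_B": 0.2}

# One observation shifts belief towards category A...
posterior = bayes_update(prior, likelihood)

# ...and repeated consistent observations sharpen the assignment further,
# mirroring how a category can coalesce around repeatedly explained behavior.
posterior2 = bayes_update(posterior, likelihood)
```

On this sketch, stereotype change corresponds to supplying different likelihoods (different behaviors and intentions attributed to the group), which gradually moves the posterior away from the original category assignment.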
Prototype-distortion processes—learning processes where labels are associated with different category exemplars—are also relevant to stereotype change. Over time, a prototypical category member emerges as a cognitive construct or schema. The stereotype hinges on this category prototype, and includes distortions of that prototype as category members. For stereotypes, this explains sub-typing phenomena, where a stereotype is preserved by perceiving an exemplar inconsistent with the stereotype as belonging to a sub-type or different category (e.g., business women and housewives as sub-types for women; see Hewstone, 1994). Such separation of the exemplar from the original stereotyped prototype also hints at another mechanism of stereotype change: social categorization content change. Stated differently, prototype distortion presents an opportunity to change the salient category that elicits a stereotype, if not the stereotype-category association itself; the exemplar now belongs to another category with different stereotypes. Moreover, prototype distortion makes salient questions regarding master status (or spontaneously salient) social categories; specifically, the tendency to rely on these rather than another possible social categorization.

FIGURE 1  Person perception models. (a) The current dominant theoretical perspective, where perceivers first activate stereotypes before inferring traits during person perception. (b) A revised model equates stereotype activation and trait inferences. Cultural narratives determine the likelihood of each being used, changing the activation of each during person perception.
Again, a comparison of human and non-human categorization processes proves useful. People can very easily revise a non-human category to include a novel exemplar that does not conform to the original category. This occurs in part because non-human categories are graded or 'fuzzy' (Kempton, 1978). For instance, a vessel for drinking coffee can be considered a cup or a mug. We hold a prototype of the category 'cup' in our minds that is not clearly distinguished from the category 'mug', allowing a vessel for drinking coffee to comfortably fit into either.
Human (social) categories are often mutually exclusive; there is less fuzziness and there are clearer distinctions between categories. For instance, a mixed-race person may be perceived as predominantly one race or the other, as a novel 'mixed' category (Lewis, 2016; Peery & Bodenhausen, 2008), or as another racial category (Nicholas et al., 2019).
Moreover, the social context determines how people categorize mixed-race people (for review, see Young et al., 2020). For instance, European American participants with low prior interracial exposure show increased and abrupt category shifting when categorizing mixed-race faces (Freeman et al., 2016). Moreover, one's own racial category membership, be it mono- or multi-racial, can also affect whether targets are essentialized by race or not (Pauker & Ambady, 2009). But the idea that such a target belongs to both categories is uncomfortable to most perceivers, who desire a less fuzzy categorization of that person. Therefore, these cognitive perceptual categorization processes suggest that each person we encounter is quickly put into a 'box' or assigned to a category. The most spontaneously salient social categories are assigned first (Hughes, 1945), but such master statuses can be revised (Bergsieker et al., 2012; Gilbert, 1951; Karlins et al., 1969; Katz & Braly, 1933).

| SOCIAL BIAS REDUCTION
If trait inferences instead of stereotype activation could reduce social bias, how do we get perceivers to infer traits rather than activate stereotypes during person perception? Given the influence the social context exerts on behavior (see Foley, 1996, for one such account), what does the cognitive architecture of social categorical inferences look like? Trait inferences were perhaps the original categorization tools, allowing us to simplify the complexities of social life.
As group size expanded, they became less useful because they required direct learning experiences with the person, which relied on familiarity (Harris, 2017). Instead, human beings shifted to relying on categories that could be easily pulled from visual and auditory cues—stereotypes—based more on generalization of learning. Stereotypes provide a rapid way of obtaining information about a person (Stevens & Fiske, 1995). But in reality, stereotypes are less accurate and more likely than traits to limit behavior towards a target, since they are not always representative of that specific target (Oakes et al., 1994). Therefore, stereotype reduction could not only reduce social bias, but also enhance the quality and accuracy of information available to the perceiver during person perception.
The dominant approach to social bias reduction recommends regulating prejudiced responses once they occur (Devine, 1989). This perspective requires awareness of one's bias, and cognitive resources to continuously monitor behavior and make amends. Such a substantial demand on cognitive resources means that stereotypes persist when cognitive resources are taxed. Therefore, stereotypes can drive discriminatory behavior unless people have the necessary cognitive resources, the awareness of their biased behavior, and the will to regulate, making this a less effective social bias reduction strategy.
A second approach to social bias reduction recognizes that bias results from social learning, and is a threat response. Prejudice depends in part on the amygdala, a brain structure responsible for emotion learning and threat responses (Hart et al., 2001; Phelps et al., 2000; M. E. Wheeler & Fiske, 2005). This most pernicious aspect of social bias accounts for its prevalence in all aspects of human behavior. However, such threat responses can be countered by the same learning processes that nurtured their acquisition. The contact hypothesis states that having experience with outgroup members (Dovidio et al., 2003; Gaertner et al., 1996), even imagined contact (Crisp & Turner, 2012), can reduce social bias, consistent with findings highlighting the role of cooperation in reducing intergroup hostility (Sherif, 1961). Therefore, through experience, negatively stereotyped people can be perceived in a less stereotypical manner, with a number of moderators of this improved perception (Hewstone & Swart, 2011; Miles & Crisp, 2014). But this does not eliminate the negative stereotype; the stereotype is just applied less often.
Moreover, promoting such high-quality social interactions in the real world is difficult. Therefore, additional social bias reduction strategies are necessary. One perspective suggests that sub-categories—categories based on the interactions of stereotypes—sit closer along the impression formation continuum towards traits (Crocker et al., 1984; Hewstone, 1994). Such models posit that stereotypes reside at the categorical end of a continuum, providing group-based information not necessarily unique to the target, and traits at the personalized or individuated end, providing unique information about the target (Alexander et al., 1999; Fiske et al., 1999; Fiske & Neuberg, 1989). Thus, sub-categories provide a less homogeneous account of people within the limitations of the brain and the complexity of the social world. This literature has largely explored sub-typing as a response to a disconfirming stereotype: a method of preserving the stereotype while explaining the behavior of the stereotype-disconfirming target. Sub-groups, in contrast, result from observed similarities and differences between people, and thus can include stereotype-confirming and -disconfirming individuals, changing the stereotype. But what if people could access sub-category information without needing a disconfirming target; what if sub-categories were the default perception instead of the currently readily available stereotypes based on the master statuses of race, gender, and age (Hughes, 1945)? Addressing this question presents another potential pathway to reduced negative stereotype use.
But what provides a method more reliable than stereotypes to quickly and efficiently obtain social cognitive information about a target, particularly when there is no direct learning opportunity, such as when we encounter any of the hundreds of new people we may pass each day? Are sub-categories a better alternative to stereotypes? Sub-categories are closer to traits along the impression formation continuum, but still easily identifiable from perceptual information. But how do we spontaneously trigger sub-categories instead of stereotypes when navigating the social world? The answers may lie in the master statuses that have been historically relevant within cultures. If we see a white Bengal tiger attack a person, the sub-type is the 'white Bengal tiger'. However, most people would instinctively categorize the animal as a 'tiger', not a 'white tiger', a 'Bengal tiger', or a 'white Bengal tiger'. This is because 'tiger' is a master status: the most readily accessible possible category. Master statuses exist for objects as well as people (Hughes, 1945), making it difficult to move beyond gender, ethnicity, and age when perceiving other people. However, the task is not impossible, and people can be prompted to go beyond these master statuses.
People often sub-categorize others who do not fit a category well (Crocker et al., 1984; Hewstone, 1994). For instance, there are women who stay at home to take care of their kids, and there are women who go to work in high-powered jobs who also have kids. The category 'woman' may be accurate (both types of women may identify as such, and are biologically women), but the 'woman' stereotype of nurturing, caring, and the like may not neatly fit both women. As such, sub-categories become necessary. The interaction of business person and woman produces the sub-category 'business woman', which comes with its own set of stereotypes (Crocker et al., 1984; Hewstone, 1994). This presents a possible solution to the problem of spontaneous stereotyping that does not require altering the cognitive structure of stereotypes but instead replaces their content. However, there is a dearth of literature that examines sub-categories as an alternative to stereotyping, investigates whether such categories can be easily and readily brought to mind, or identifies the cultural and contextual factors that promote such processing.
Perhaps experience and cultural norms dictate preference for a specific category or goal-based inference, but it is uncertain whether either would be sufficient to guarantee social bias change. Moreover, the precise roles that agency and conscious choice play in determining the final inference remain unclear. Each individual has the capacity not to be biased, but experience and social norms may override the best impulses, requiring constant practice. Therefore, cultural narratives that reinforce new learning play a vital role. Finally, the individual and societal levels may both prove useful in addressing social bias, with individual effort supported by a re-setting of societal norms surrounding how we describe people, reducing the learning opportunities for category-based inferences.

| THE ALLURE OF TRAIT INFERENCES
The utility of stereotyping surrounds the plethora of traits the stereotype makes available; these traits are associated with the social category, providing explanatory and predictive information about possible behavior rapidly during the social encounter (Fiske & Taylor, 1991, 2013). This vital information is limiting, however, since it applies equally to anyone in the social category and is not unique to a particular individual. Though research does suggest that there is variability in the applicability of the stereotype (Wittenbrink et al., 2001), this requires the perceiver to exercise cognitive control, and occurs later in the social interaction, after the generic stereotype has already been applied to the exemplar. Trait information, it seems, is too vital not to have at hand, whether it is accurate or not. The goal of any inference, stereotype- or trait-based, is to predict and explain behavior. Accuracy is thus a crucial consideration. Both trait inferences from faces (see Todorov et al., 2015) and stereotypes are inaccurate. It is yet to be determined whether trait inferences from behavior are also inaccurate, though one could observe a limited set of behaviors and not come to a fully accurate trait inference. Nonetheless, one can determine whether traits and stereotypes differ in their degree of inaccuracy. Alternatively, one can argue that the kinds of errors made when inferring from traits relative to stereotypes are different.
Behaviors can be grouped or categorized according to a generalizing principle such as an underlying goal or intention. As such, traits are mental constructs that group people around a particular set of behaviors (Newman, 1993). Those behaviors, however, are tied to goals, desires, intentions, and other mental states. This feature separates traits from other functional adjectives (often represented as nouns) that describe groups of people based on their behavior, such as commuters, diners, party-goers, and so forth. Moreover, traits make salient particular sets of behaviors that are distinct from related traits. Take the trait 'witty': it describes a humorous person but is not quite the same as 'ironic', nor as unforgiving as 'sarcastic'. Yet there are a number of people who could be described as 'witty', distinct from another group of people described as 'ironic'. The trait therefore can act as a social category.
Most importantly, ascribing a combination of traits to a target grants that target full humanity (Harris, 2017).
Perhaps for this reason, stereotypes and traits have been posited as different ends of a continuum of impression formation (Alexander et al., 1999; Fiske et al., 1999; Fiske & Neuberg, 1989). Because traits are also categorical representations (Newman, 1993), what differentiates stereotypes and traits is the obviousness of the inference.
Stated differently, stereotypes are supposedly more easily inferred upon initial encounters than are traits.
Given the centrality of the idea of an efficiency benefit of stereotypes relative to trait inferences, or individuated impressions as they are referred to in two of the influential impression formation models (Alexander et al., 1999; Fiske & Neuberg, 1989), one would expect a large number of studies establishing this efficiency benefit, given the explosion of literature on stereotyping in the late 1980s through the 1990s. However, while there is a lot of work describing the efficiency of stereotyping, it usually concerns cases where a positively stereotyped group is compared to a negatively stereotyped one (e.g., men vs. women, White vs. Black). The studies that do directly compare traits and stereotypes conflate the definitions of the two (Andersen & Klatzky, 1987; Andersen et al., 1990; Brewer et al., 1981; Cantor & Mischel, 1979). These studies argue that stereotypes and traits differ such that stereotypes are richer in their cognitive representation, are more imaginable (thus have a higher quality representation), are more distinctive in memory, and are more concrete than traits. In these studies, the researchers avoid broad social stereotypes around master categories and social group membership (race, gender, age), and focus more on stereotypes associated with behavior (e.g., hero, snob, glutton). One might argue that the comparison of these non-social-group stereotypes to traits (e.g., extravert, masculine, feminine) is not the best comparison of traits to stereotypes, since the stereotypes relate to behaviors rather than social groups, while the traits often make salient social group membership. This highlights the definitional issues in the literature. These researchers also acknowledge that stereotypes may trigger traits (though not that traits trigger stereotypes).
Therefore, it is difficult from this evidence to continue to predict efficiency benefits for stereotypes rather than traits.
Here, I challenge the widely held assumption that stereotypes and traits differ in their efficiency, their ease of access, and the cognitive effort necessary to generate and employ each. This opinion is a departure from the traditional view of traits as cognitively complex, less accessible, and less efficient. In fact, the cognitive miser theory posits that people use stereotypes rather than traits because stereotypes are more accessible and efficient (Taylor, 1980). However, the brain imaging literature suggests that stereotypes and traits rely on overlapping brain networks (Harris et al., 2005; Hehman et al., 2014), making them potentially similar psychological processes.
Moreover, traits can be inferred after a few hundred milliseconds (Willis & Todorov, 2006), further challenging the assertion that stereotypes are more readily available cognitive constructs.
My argument suggests both categorical and individuated (trait) information are activated when perceiving a target, and is based on a flexible social cognition hypothesis that posits multiple attributions are activated during person perception (see Deroy & Harris, under review; Harris, 2017). In one electroencephalography study, researchers found that both traits and stereotypes trigger early brain signals, such that biases towards larger amplitudes for Black versus White faces were maintained despite individuation and accuracy goals (Kubota & Ito, 2017). This suggests that individuation and categorical perception both triggered a similar brain signal, albeit one that differentiated based on racial categories, perhaps because the behaviors implied by the trait were all stereotype-consistent (aggression from Black males). This position is consistent with the flexible social cognition hypothesis. Thus, instead of asking whether categorical or trait inferences occur first, I posit that both are activated, and either can become relevant based on perceiver goals and the social context.
Given the possibility that stereotypes and traits are both easily accessible, and traits provide more specific information about a person beyond the stereotype, why are stereotypes the initial cognitive construct employed when encountering another person? Perhaps culturally, people in Western societies have relied on stereotypes because social group membership (e.g., race inferred from skin colour) has historically been more important than personality traits (content of character); knowing a person's racial group or gender determined their status within society as well as their personal freedoms, social roles, and expectations. Traits on the other hand vary within these social groups, making them less important for determining how a social encounter should proceed.
Nonetheless, the argument above suggests that traits can be used instead of stereotypes as the default social categorization process. Traits are categorical since they can be applied to a variety of targets. Like a stereotype, a trait makes salient a range of possible behaviors that can explain and predict a target's future behavior. For instance, introversion is not unique to one individual; many people in the world are introverted. Therefore, traits are categories, but ones considered human universals. Theoretically, any human being can possess a particular trait because traits are inherently essentialized (Prentice & Miller, 2007); laypeople and scientists alike believe that traits are biologically determined. Stereotypes of social categories, particularly demographic categories, are essentialized as well, but scientists fiercely argue that they are not biologically determined but socially conveyed (Bastian & Haslam, 2006). Despite the essentialized nature of social categories, they, like traits, exist along a continuum; identity theorists demonstrate that people can identify more or less with their social categories (Sellers et al., 1998). Both ethnicity and gender, master statuses, are considered social identities, not biological constructs.
This highlights a route to stereotype reduction: replacing stereotypes with traits.
The success of shifting cultural narratives in reducing social bias depends on flexible social cognition (Deroy & Harris, under review; Harris, 2017). Social cognition is inherently flexible, both in a moment, where different thoughts about another person's mind are salient (she is a woman, and she is clumsy), and because the specific attribution is revisable with or without further information. However, this theory only supports the position that shifting cultural narratives is a possible solution; it does not specify the conditions that would cause a shift in preference for, or promotion of, one inference over another. Evidence in the literature suggests social goals can shift perception away from stereotypes to traits (Harris & Fiske, 2007; Macrae et al., 1997; Wheeler & Fiske, 2005).
Finally, social category cues are not predisposed to be perceived before behavior. Taking an evolutionary perspective, human beings who lived in smaller hunter-gatherer communities would not have used skin tone, for instance, as a way of determining group membership, since outgroup targets would have had the same skin tone as the perceiver, given that skin tone is an adaptation to where on the planet people live. Thus, cues like skin tone as markers of social categories are culturally specific, not human universals. 2 Stated differently, people are socialized or acculturated to detect skin tone as salient for social categorization, rather than it being an automatic cue to group differences. Moreover, if we move beyond skin tone to physical features that distinguish gender, age, and other demographic categories, the argument for the automatic perception of these differences is further weakened. So, although connectionist models of person perception posit that categories are inferred before behavior (see Freeman & Ambady, 2011; Kunda & Thagard, 1996), this does not necessarily mean that this is the human default; people are not required to make a categorical inference before perceiving behavior, and if they do, it is most likely due to the cultural importance of perceiving certain social group differences (e.g., race in the United States of America).

| FUTURE RESEARCH DIRECTIONS
The argument presented here has implications for dehumanization theory. The definition of dehumanization is a failure to consider a person's mental states (Harris, 2017; Harris & Fiske, 2009). Since personality trait inferences consider the target's mind, then by this definition, dehumanization means ignoring a person's personality traits. There are other definitions of more blatant dehumanization (see Kteily et al., 2015) that are more akin to an attitude than to the subtler cognitive process described here. Under these more blatant definitions, claims about attributing personality traits to dehumanized targets may be valid. Future research can confirm that personality traits are not salient when encountering more subtly dehumanized targets.
Research is needed to tackle the difference between perceived social categories (from the perspective of the perceiver) and identified social categories (from the perspective of the target). Today, we live in an era where previously immutable demographic categories such as gender and race can be changed to be consistent with a person's self-perception. This makes essentialized perceived categories less accurate and predictive of behavior, and future research can delineate how previously essentialized categories fare in this modern world.
I have argued that cultural norms and narratives determine whether sub-categories exist. However, this may simply result from the fact that most research on this topic occurs in Western educated industrialized rich democratic (WEIRD) societies. Perhaps other societies do not derive sub-categories from cultural narratives. Thus, there exists fertile ground for future research in non-WEIRD samples that addresses this issue. For instance, different societies have different historical circumstances that emphasize different types of categorizations beyond age, race, and gender. As an example, people in the country where I grew up, the Caribbean nation of Trinidad and Tobago, place more emphasis on social status than on ethnicity or gender when dividing people into social groups. For people in this multi-ethnic melting pot (consisting of people of African, Asian (primarily Indian and Chinese), Middle-Eastern, Indigenous Caribbean, and European descent), where you live, who you know, and who your parents are matter more than what you look like when categorizing people. Similarly, non-WEIRD samples can also shed light on whether the hypothesized differences between traits and stereotypes have any merit.
Further, the distinction between information integration and prototype distortion involves the role that integrative information processing plays in stereotype formation. When it comes to stereotype change, both information integration and prototype distortion function in the same manner, changing the representation of the prototype.
However, future research is needed to further compare and contrast the two processes.
Future research should also address the efficiency benefit of social group-based stereotypes relative to traits. Though the literature argues that stereotypes are more efficient, the evidence from social neuroscience described above, and the evidence from the spontaneous trait inference literature (see Uleman et al., 1996, for a review), certainly warrant further investigation of this assumption and a re-examination of the evidence.
A behavior or concept may be associated with particular social groups; thus, focussing on the behavior may also trigger a category-based stereotype (see Dixon & Maddox, 2005; Eberhardt et al., 2004; Payne, 2001). However, the flexible social cognition theory that inspires my arguments suggests that when observing a behavior, multiple inferences are available, including both the social group and a personality trait. Thus, behavior does not have to lead to a categorical inference, despite the fact that people overwhelmingly respond in a manner consistent with this categorical inference. This too calls for future research.
I have argued that both stereotypes and trait inferences may be inaccurate. Alternatively, one can argue that the kinds of errors made when inferring from traits differ from those made when inferring from stereotypes. Future research is needed to dissociate these types of errors; for instance, stereotypes may produce more false alarms (claims that a behavior is likely when it is not), while traits may produce more misses (failures to predict a behavior that is likely to occur).
Finally, more research is needed to better understand the impact of historical contexts on current cultural biases towards specific master statuses. I have argued extensively that cultural narratives are informed by historical contexts, but future research can more clearly identify the mechanisms by which historical contexts (e.g., the institution of slavery) influence cultural narratives (e.g., stereotypes about African Americans), propagating over time biased perspectives about the social groups present in a historical moment. Moreover, there is as yet no evidence that such a shift in cultural focus would reduce the prevalence of social bias. As such, my argument requires future research to corroborate this most basic tenet.

| CONCLUSION
In conclusion, I propose a novel contribution to the literature in social and cognitive psychology by attempting to explain how categorization processes broadly, and social categorization specifically, are triggered, providing an alternative approach to social bias (prejudice) reduction. Currently, the most prevalent approach to social bias reduction stems from approaches to regulating affective responses. Because these affective responses result from cognitive categorization processes, short-circuiting such categorization processes could short-circuit social bias responses. An empirical study provides evidence for short-circuiting social bias responses; participants were instructed to reduce their reliance on face morphology when categorizing African-American faces, resulting in changes at the early stages of face processing (Travers et al., 2020). This finding suggests that it is indeed possible to shift perceivers away from stereotype activation. To make such shifts the default, cultural narratives need to provide new learning that conditions perceivers to focus on behaviors and traits rather than stereotypes and social groups. Only then can we redress social bias and the subsequent discriminatory behavior that contributes to continued systemic biases.