Semantic Memory

How is it that we know what a dog and a tree are, or, for that matter, what knowledge is? Our semantic memory consists of knowledge about the world, including concepts, facts, and beliefs. This knowledge is essential for recognizing entities and objects, and for making inferences and predictions about the world. In essence, our semantic knowledge determines how we understand and interact with the world around us. In this chapter, we examine semantic memory from cognitive, sensorimotor, cognitive neuroscientific, and computational perspectives. We consider the cognitive and neural processes (and biases) that allow people to learn and represent concepts, and discuss how and where in the brain sensory and motor information may be integrated to allow for the perception of a coherent “concept”. We suggest that our understanding of semantic memory can be enriched by considering how semantic knowledge develops across the lifespan within individuals.

Semantic memory is conscious long-term memory for meaning, understanding, and conceptual facts about the world. It is one of the two main varieties of explicit, conscious, long-term memory, which is memory that can be retrieved into conscious awareness after a long delay (from several seconds to years). Endel Tulving in 1972 (building upon a distinction between two primary forms of memory by Reiff and Scheerer in 1959) distinguished between semantic and episodic memory. Episodic memory refers to stored representations of personally experienced episodes from one's life within a particular spatiotemporal context (e.g., dinner in Berkeley in January this year). Semantic memory refers to stored representations of meaningful facts or world knowledge, regardless of the spatiotemporal context in which the information was acquired and without information about the personal experiences surrounding learning it (e.g., the concept 'dinner'), and is necessary for language. Crucially, while episodic memory involves awareness of a feeling of having personally experienced an event or item, regardless of meaning (i.e., an item could be a nonsensical figure, like abstract art, that has no meaning but has been experienced before, as on multiple museum visits), semantic memory involves awareness of meaning unaccompanied by a feeling of familiarity from having previously experienced the event or item, or by remembering the place and time of the personal learning experience(s). For example, using semantic memory, you know what a dog is and can read the word 'dog' and be aware of the meaning of this concept, but you do not remember where and when you first learned about dogs, or even necessarily the subsequent personal experiences with dogs that went into building your concept of what a dog is. Even without a feeling of personal experience, you know what a dog is when you see, hear, or read about one. Thus, you have semantic memory for meaning, regardless of any feeling of familiarity or recollection of the personal experiences at the origins of the concept.

Ideas about semantic memory developed from attempts to explain how human language communicates concepts. While computer scientists proposed semantic nets for translating natural language as early as 1956, the term 'semantic memory' emerged in psychology in early models of human knowledge about word concepts circa 1969. Collins and Quillian viewed semantic memory as a hierarchical network of relations among concepts. A concept refers to a meaning, which is stored in semantic memory. Language enables an arbitrary symbol, such as a stream of sounds comprising a word (e.g., 'dog'), to be associated with the memory representation of the meaning of the symbol (i.e., the semantic memory of dogs). As described in concept learning research, a concept is a mental representation that places an object, event, or idea into a category. Semantic memory can thus be said to be the store of mental representations of categories. In their original formulation of the organization of semantic memory, Collins and Quillian in 1969 assumed that categories are organized hierarchically and that defining features compose each category. For example, an animal has skin, moves, eats, and breathes. In 1976, Eleanor Rosch proposed different levels of categories. For example, song and field sparrows are subordinate categories of the more general category of sparrow, which is a basic-level category along with eagle and cardinal, within the superordinate-level category of birds, and, at a still more general
superordinate level, birds and fish are animals. Collins and Quillian's theory predicts that the response time to classify whether a feature belongs to a category depends upon how many nodes or levels of the hierarchy must be traversed to do the task, a prediction that was experimentally confirmed.
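The traversal logic of the Collins and Quillian model is simple enough to sketch in code. The following minimal Python fragment, with an illustrative three-level hierarchy (the concepts and features are stand-ins, not the original 1969 stimuli), counts the levels that must be traversed to verify a feature, the quantity the model takes to predict response time.

```python
# A minimal sketch of a Collins-and-Quillian-style hierarchy (the names and
# features here are illustrative, not the original 1969 stimuli).
ISA = {"canary": "bird", "bird": "animal"}          # subordinate -> superordinate links
FEATURES = {"canary": {"is yellow"},
            "bird": {"has wings", "can fly"},
            "animal": {"has skin", "moves", "eats", "breathes"}}

def levels_to_verify(concept, feature):
    """Count how many hierarchy levels must be traversed to find a feature;
    the model predicts response time grows with this count."""
    level, node = 0, concept
    while node is not None:
        if feature in FEATURES.get(node, set()):
            return level
        node = ISA.get(node)                         # move one level up
        level += 1
    return None                                      # feature not stored anywhere

print(levels_to_verify("canary", "is yellow"))   # 0 levels: fastest verification
print(levels_to_verify("canary", "can fly"))     # 1 level
print(levels_to_verify("canary", "has skin"))    # 2 levels: slowest verification
```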

Feature Overlap
Smith and colleagues modified this basic framework to suggest that the meaning of a concept is a set of features, as opposed to a single node. Further, characteristic features are merely typical of a concept (e.g., robins are bipedal, have wings, perch in trees, and are wild), whereas defining features are more essential (e.g., robins have red breasts). Consistent with this feature overlap model, people rate robins and sparrows as more typical birds than ducks and geese, and robins and sparrows are rated as more similar to each other than to the other birds. However, there may be no defining features; as noted by the philosopher Wittgenstein in 1953, there is no feature that all games share. Also, feature overlap models compare features to decide the concept, but evidence indicates that other kinds of knowledge are relevant. For example, while a butterfly is readily categorized as an insect, subjects instructed to generate members of the insect category infrequently mention a butterfly. Such problems motivated alternative theories that continue to be debated and tested. The main competing theories can be grouped into those proposing that categories depend upon a prototype representation, which is an average of all examples, and those proposing multiple representations composed of each of the exemplars (or instances) of the category (e.g., each example of a dog experienced), referred to as prototype versus exemplar theories, respectively.
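The prototype-versus-exemplar contrast can be made concrete in code. Below is a minimal Python sketch with made-up binary feature vectors: the prototype model compares a novel item to the average of stored examples, whereas the exemplar model sums its similarity to every stored instance. The exponential-decay similarity follows the general spirit of exemplar models such as the generalized context model; the feature values and decay parameter are illustrative assumptions.

```python
import numpy as np

# Illustrative binary feature vectors for stored bird exemplars
# (features: has red breast, perches in trees, is wild, migrates).
exemplars = np.array([[1, 1, 1, 1],   # robin
                      [0, 1, 1, 0],   # sparrow
                      [0, 0, 1, 1]])  # goose

prototype = exemplars.mean(axis=0)     # prototype = average of all examples

def prototype_similarity(item):
    # Similarity to the single averaged representation (negative distance).
    return -np.abs(item - prototype).sum()

def exemplar_similarity(item, c=1.0):
    # Summed similarity to every stored instance, decaying exponentially
    # with feature distance, as in exemplar models of categorization.
    dists = np.abs(exemplars - item).sum(axis=1)
    return np.exp(-c * dists).sum()

novel_bird = np.array([1, 1, 0, 1])
print(prototype_similarity(novel_bird))  # one comparison to the average
print(exemplar_similarity(novel_bird))   # one comparison per stored instance
```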

Spreading Activation
Most current theories organize concepts and categories as nodes in a network in which nodes can connect to one another via a semantic link, thereby associating related concepts or categories. The length of a link in a semantic network model varies with the relatedness and associations between concepts. For example, car, truck, and bus may be connected directly via short links, and each of these to fire engine via a longer link. Nodes can be connected directly or indirectly via links to other nodes. For example, apple may connect directly to red and connect indirectly to fire engine through the red node. As in the earlier Collins and Quillian model, the properties of a concept/category can be connected to its node. Semantic network theories propose that activation spreads from one node to another along the links between them, allowing even indirectly linked concepts to activate one another. Semantic networks can easily explain retrieval of meaning. For example, when thinking about apples, one might activate the concept of red, which might trigger one to think about fire engines, stoplights, or bricks.
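A minimal sketch of spreading activation, assuming illustrative nodes and link weights (a stronger weight standing in for a shorter link), might look as follows:

```python
# Minimal spreading-activation sketch; the nodes, weights, decay, and
# threshold are all illustrative assumptions, not fitted values.
links = {"apple":       {"red": 0.8},
         "red":         {"fire engine": 0.6, "stoplight": 0.6, "brick": 0.6},
         "fire engine": {"truck": 0.7}}

def spread(source, decay=0.5, threshold=0.1):
    """Propagate activation outward from a source node; activation
    weakens with each link, so indirectly linked concepts receive less."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in links.get(node, {}).items():
            new_act = activation[node] * weight * decay
            if new_act > activation.get(neighbor, 0.0) and new_act > threshold:
                activation[neighbor] = new_act
                frontier.append(neighbor)
    return activation

# 'red' activates strongly and directly; 'fire engine' weakly and indirectly.
print(spread("apple"))
```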

The semantic network approach has advantages over other theories: predictive power (perhaps too much, to the point of unfalsifiability, according to some critics) and being readily modeled using neurocomputational methods (i.e., connectionist or parallel distributed processing models, as described by Rumelhart and McClelland, 1986). A node can be modeled as a neuron, and the dendrites (input) and axons (output) that interconnect neurons are modeled as links between nodes. Neural network models incorporate recurrent and feedback connections that are well-known principles of neocortical organization. A node in a semantic network has a level of activation representing the probability that the neuron will fire, thereby potentially activating a connected neuron sufficiently that it also fires. Activation in one node can thereby spread to other nodes connected to it directly or, eventually, indirectly. Semantic memory is acquired using learning rules (e.g., Hebbian plasticity) that determine network connectivity by modifying the weights among connections based on experience. Contemporary neural network models have still greater biological realism.
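As an illustration of such a learning rule, the sketch below applies a Hebbian weight update to a toy network of made-up feature inputs and concept units; the normalization step is an added assumption (similar in spirit to Oja's rule) to keep weights bounded, and all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy network: 4 input features -> 3 concept units.
weights = rng.normal(0.0, 0.1, size=(3, 4))

def hebbian_step(x, lr=0.05):
    """Hebbian rule: weight change is proportional to the product of
    pre- and postsynaptic activity ('cells that fire together wire together')."""
    global weights
    y = weights @ x                   # postsynaptic activation
    weights += lr * np.outer(y, x)    # strengthen co-active connections
    # Assumed normalization to keep weights bounded (Oja-like, not in the text).
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)

# Repeated experience with one feature pattern strengthens its representation.
dog_features = np.array([1.0, 0.0, 1.0, 1.0])  # e.g., barks, -, furry, four-legged
for _ in range(100):
    hebbian_step(dog_features)
print(weights @ dog_features)  # response to the learned pattern grows and stabilizes
```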

Compound Cue

Despite these advantages of the spreading activation account, compound cue models propose that semantic memory operates like other types of memory. For example, in the case of episodic recognition, memory is an interconnected feature set representing the item (i.e., its meaning), its learning context, and its relation with other such feature sets. Recognition cues are held in mind briefly to probe the feature sets, producing a familiarity signal that is sent to a decision process, enabling a decision that the stimulus is old or new. Likewise, in the types of implicit memory tasks used to assess semantic memory, additional cues are relevant beyond those used for recognition. For example, in the lexical decision task often used in semantic memory studies, people decide faster whether a letter string is a real word when the target word (e.g., doctor) is preceded by a related word (e.g., nurse) than by an unrelated word (e.g., butter). Prime and target are both cues that together constitute a third type of association, besides the associations between target and context and between target and other feature sets, which are available for recognition. Compound cue theory attributes semantic priming for related prime-target pairs to the greater number of shared associates between them than for unrelated pairs (a minimal sketch of this computation appears below).

The common label, semantic memory, may not be the most appropriate; rather, the term generic memory (suggested by D. L. Hintzman) or knowledge (suggested here) can include nonsemantic information. Consider that, in general, knowledge is what you know (e.g., that dogs bark, your house number, the capital of France, the color of spinach, the shape of a cat, as well as their meanings). Although linguistic stimuli (i.e., words) activate meaning, objects, scenes, and people are also meaningful. To activate meaning, the perceptual features of the stimulus must be matched to stored memory of these sensory-based features. For example, to categorize a dog, its perceived shape or other identifying perceptual attribute(s) (e.g., a bark) must be matched successfully to memory for the perceptual form associated with the dog category. Likewise, to activate the meaning of a word, the word form currently being perceived must match memory for the perceptual form of that word. Thus, semantic memory depends upon nonsemantic memory to mediate between the perceived cue and its meaning. In addition, nonsemantic memory can also activate associated nonsemantic information about the stimulus, as when observing a dog and becoming aware of its meaning and associated perceptual (e.g., its color, sound, smell), motor (e.g., its movements), emotional (e.g., fear), or mental state information.
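The compound cue idea referenced above can be sketched with invented associative strengths: prime and target jointly probe memory, and their strengths to each stored trace are combined multiplicatively, so related pairs, which share strong associates, yield higher familiarity. This is one way such a combination might be computed, not the exact formulation of published compound cue models.

```python
# Illustrative associative strengths between cues (rows) and stored memory
# traces (columns); all names and values are invented for demonstration.
strengths = {
    "nurse":  {"hospital": 0.9, "doctor_trace": 0.8, "butter_trace": 0.1},
    "butter": {"hospital": 0.1, "doctor_trace": 0.1, "butter_trace": 0.9},
    "doctor": {"hospital": 0.8, "doctor_trace": 0.9, "butter_trace": 0.1},
}

def familiarity(prime, target):
    """Compound-cue familiarity: combine the two cues' strengths to each
    trace multiplicatively, then sum over traces. Related prime-target
    pairs share strong associates, so the product terms are larger."""
    return sum(strengths[prime][t] * strengths[target][t]
               for t in strengths[prime])

print(familiarity("nurse", "doctor"))   # related pair: high familiarity, fast 'word' response
print(familiarity("butter", "doctor"))  # unrelated pair: low familiarity, slower response
```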

Like semantic memory, nonsemantic knowledge is distinct from episodic memory. For example, patients with visual object agnosia are slow and make errors categorizing common objects when visually presented (e.g., seeing a dog but being unable to name or describe it meaningfully as a dog) but can tell that they saw the object before, demonstrating episodic memory. Moreover, all forms of visual object agnosia involve some impaired perceptual processing, even associative (i.e., semantic) subtypes; a knowledge system for the perceptual form of an object is necessary to know its meaning. In most theories, this perceptual matching stage must, to some extent, succeed before semantic memory can become active. Nonetheless, substantial parallel and interactive processing between perceptual form and meaning can occur. Thus, activating meaning always requires matching memory to the perceptual form of the referent, be it a word, object, face, or place.

Knowledge, Priming, and Awareness

Semantic memory and nonsemantic knowledge are nonepisodic, and aspects of these memories may be conscious, while others lie outside of awareness. Conscious semantic memory is primarily the variety of explicit memory that has been distinguished from episodic memory. After all, clearly, one can become aware of a concept in a semantic network, as when you are aware that you know what a word means. However, one is not necessarily conscious of activating the nodes or links in the network itself that lead to awareness of meaning, or aware of the processes that match a perceptual form to its nonsemantic memory. Nonetheless, these nonconscious processes can lead ultimately to awareness of the shape, color, category, and meaning of the object.
By contrast, nonconscious implicit memory is thought to include nonsemantic memory as well as situations in which semantic memory is activated nonconsciously. Implicit memory is typically probed by repeating information. In such priming paradigms, an item (doctor), a version of it (a picture of a doctor), or a related item (nurse) is presented, and then, following a delay, the item (e.g., doctor) is presented again. Relative to unrepeated (i.e., new) items, repeated items elicit faster and more accurate performance, as well as different brain response characteristics. Repetition priming (i.e., doctor-doctor), conceptual priming (i.e., a picture of and the word for doctor), and semantic priming (i.e., nurse-doctor) are varieties of implicit (nonconscious) memory. It is important to note, however, that evidence is accumulating that consciousness is not the critical factor distinguishing varieties of learning and memory; rather, the computational and decision demands of the task, and how these recruit different brain structures, are primary.

Standard Theory of the Semantic Memory System

Research on knowledge has focused on how meaningful (semantic) representations are organized, leaving nonsemantic knowledge organization relatively less understood. Multiple memory systems theory distinguishes between a semantic memory system and a nonsemantic perceptual representation system that can be matched to a currently perceived stimulus, for example, to determine what an object is, such as a dog, based on its perceived shape. This distinction essentially reflects the theory's adoption of the standard theory of meaning, which proposes that conceptual knowledge resides in a single amodal system with a uniform architecture, separate from modal sensorimotor systems.

Anterior Temporal Lobe Stores Amodal Meaning

Multiple memory systems theory (originating with Elizabeth Warrington in 1979) adopted the distinction between semantic and episodic memory and added the proposal that different brain systems support each type. In particular, while episodic memory depends upon the medial temporal lobe (MTL), semantic memory depends upon association areas of neocortex that lie outside primary sensorimotor areas and outside the MTL. Studies of patients with semantic memory problems indicate that an amodal system may reside in the anterior temporal lobe (ATL). The ATL is considered the best candidate for an amodal hub for meaning based on convergent evidence from patients with semantic memory problems and on its anatomical connectivity. The ATL lies next to limbic system structures, including the amygdala and the orbitofrontal cortex, which have been implicated in emotion, reward, and motivation processing, thereby enabling associations among these abilities and the sensorimotor and linguistic aspects of concepts. Further, the ATL lies next to the anterior MTL system for episodic memory, which is thought to contribute to learning conceptual knowledge gradually over multiple experiences, as when many personal experiences with a variety of dogs gradually result in a concept of the dog category. Hub theories, however, do not equate amodal with cross-modal (i.e., picture and word modalities), emphasizing that a cross-modal (or multimodal) region that integrates information from multiple sensory and/or motor regions may not perform the true amodal function required of a semantic hub. For example, the angular gyrus performs multimodal sensory integration but may not function as a semantic system for linguistic purposes.

However, it is unclear exactly what the difference is between amodal and cross-modal/multimodal, and this distinction will be critical for determining the anatomical locus of an amodal hub for meaning that abstracts across stimulus form. Consider that any region that integrates information (a) across sensory modalities, (b) across multiple sensory plus motor or linguistic inputs, or (c) across any of these plus emotion or mental state information would meet the definitions of multimodal, cross-modal, and amodal (i.e., a similar pattern of neural activity is activated by more than one type of physical stimulus or, in the case of motor output, type of response). Moreover, alternative views about the organization of semantic memory, including those that posit no amodal hub, can accommodate the anatomical definition offered for the amodal semantic hub (i.e., integrating sensorimotor and emotion/reward information).

Further, anatomical evidence suggests that the ATL may not be amodal (or even fully multimodal). Some evidence suggests that, rather than being a domain-general semantic hub, the ATL stores knowledge about unique items (e.g., an individual person, a famous landmark), which may be particularly necessary for socially relevant knowledge, as social information necessarily involves two or more unique people. Consider also that nonspatial (or object) processing inputs connect to the hippocampus via the perirhinal cortex in the MTL and the adjacent associative cortex, as well as the ventral visual pathway in the occipitotemporal cortex, in the ATL, whereas spatial processing inputs connect to the MTL via the parahippocampal gyrus and the adjacent associative cortex, as well as the dorsal visual pathway in the occipitoparietal cortex, in the posterior temporal lobe. In short, the ATL provides input to the perirhinal (nonspatial) and parahippocampal (spatial) cortices, which provide inputs to the hippocampus in the MTL. If the ATL is amodal, how can it send segregated, modal, nonspatial and spatial inputs to the MTL? Modal segregation is difficult to reconcile with a definition of semantic memory organization that requires an amodal semantic hub where both nonspatial (object) and spatial information must be combined. Other types of sensorimotor, emotion, and reward information also send segregated inputs into the MTL via the ATL.

Medial Temporal Lobe, Episodic Memory, and Meaning

Indeed, perhaps the brain structure that shows the most amodal (or multimodal) properties is the MTL. The MTL shows highly sensory-invariant response properties. For example, MTL neurons respond to single individuals (e.g., Jennifer Aniston) regardless of the form of the stimulus (i.e., varieties of pictures, names), showing seemingly complete invariance, and have been suggested to represent meaning in long-term semantic memory. Further, MTL structures have been proposed to construct representations of integrated multimodal percepts that are sensitive to semantic variables.
Impaired new learning of knowledge in amnesia suggests that the MTL is necessary not only for episodic memory but also for semantic memory. However, this idea is hard to reconcile with the substantial evidence dissociating episodic and semantic memory. For example, patients with developmental amnesia, in which the MTL is dysfunctional from childhood, have impaired episodic memory but remarkably spared semantic memory. Some evidence suggests that MTL amnesics can acquire some new explicit knowledge, but this is limited in amount and generalization and attributable to the remaining spared MTL structures, clearly so in some cases and possibly in others. Whether new explicit knowledge learning is spared in amnesia remains controversial, in part due to the inherent difficulties of the lesion approach in human patients; controlled, targeted lesions cannot be made in humans, and so residual sparing of critical structures is hard to rule out. Overall, the evidence suggests that knowledge can be acquired using primarily cortical mechanisms, but only through substantial repeated exposure, and that the episodic encoding processes of the MTL accelerate knowledge learning by integrating across multiple episodes in a way that also facilitates generalization and abstraction of knowledge, consistent with evidence that episodic and semantic memory are interlinked. Episodic and semantic memory systems have substantial mutual interdependence during encoding and retrieval.

Neuroscience largely invalidates the strong form of the standard theory. All current views about the organization of knowledge incorporate an embodied (or grounded) cognition perspective, which holds that knowledge depends upon multiple modality-specific systems, including those for sensorimotor properties in perceptual systems based on the senses (e.g., vision) and action systems for motor planning, as well as emotion and mental states. For example, different modal knowledge systems in the extrastriate occipitotemporal cortex support face, word, and object knowledge. Within each system, modal knowledge varies in how specifically physical properties are represented. Some knowledge is more specific for a shape, spatial configuration, or other physical property (e.g., visually specific object knowledge), and other knowledge is less so (i.e., more abstract), showing, for example, more invariance across changes in physical properties between experiences (e.g., an object from different viewpoints) or cross-/multimodal activation patterns, as when stimuli with the same associated meaning (i.e., a picture, sound, and word for dog) produce similar patterns of performance or brain activity. By an embodied account, a brain area can be both nonsemantic (e.g., sensorimotor), supporting, for example, both perceptual processing and perceptual memory, and semantic, supporting human symbolic abilities. Hybrid theories suggest that one or more separate amodal system(s) act as hub(s) or convergence zone(s) that interact reciprocally with embodied knowledge systems.
A key argument against embodied cognition is that so-called abstract words, such as abstract and freedom, are unrelated to sensorimotor processes. The main counterargument is that internal states, such as metacognition and emotion, are also stored as knowledge, and introspective states provide information that is central to representing abstract concepts. Unfortunately, relatively little is known about abstract concepts, even though they play central roles in human cognition, as most research has focused on concrete concepts.

How words activate meaning has been a central question in language and semantic memory studies. The mental lexicon stores word information, including meanings (i.e., semantic memory for words), syntax, and perceptual word forms. Most studies have focused on speech comprehension, with early accounts (e.g., Levelt) positing a processing sequence from word sounds to syntax and finally to concepts in semantic memory. Because of the importance of sequential processing for language theory, and because language comprehension is rapid, with all word identification achieved even before sentences end, the timing of semantic activation by words has been of greater interest than its anatomy. Consequently, most neuroscience studies of semantics from words have used electromagnetic potentials, which have the high temporal resolution lacking in anatomical methods like functional magnetic resonance imaging (fMRI); fMRI has instead been used to determine the brain regions involved. Most studies of language and semantic memory focus on the linguistic N400, a scalp-recorded, negative electrical potential peaking around 400 ms that varies with semantic processing between 300 and 500 ms in response to written words, with a slightly earlier onset for spoken words. The N400 indexes a multimodal, abstract knowledge system for word meaning that is sensitive to ongoing context, constructive, and processes semantic information over an extended time period and across multiple brain regions. Thus, the meaning of a word is extracted within about 300 ms of processing. However, some lexical processing, including semantics, has recently been argued to occur before the N400, since ERPs to words between 200 and 300 ms seem sensitive to lexical processes.

Anatomy of word concepts
The N400 in response to words indexes activity in the ATL and the superior temporal gyrus, which are considered storage sites, and in the ventrolateral prefrontal cortex (VLPFC), which supports efficient retrieval and encoding of this semantic knowledge. While electromagnetic potential and fMRI findings were combined to infer these neural generators, fMRI findings alone suggest that a more extensive, left-lateralized (i.e., more activity in the left cerebral hemisphere) network activates semantic memory in response to written and spoken words. The temporal lobe regions recruited extend (a) posteriorly into the modal visual association cortex implicated in category-specific semantic deficits and semantic dementia, which may store object knowledge specifying perceptual and conceptual attributes and support multimodal integration, and (b) medially into the parahippocampal region of the MTL, implicating it as an interface between the more lateral temporal cortex and the episodic memory system in the hippocampus of the MTL. Notably, the left superior temporal gyrus region implicated in the language comprehension problems of Wernicke's aphasia is mainly modal auditory cortex for speech perception and has not been implicated in word meaning, though its most ventral part may contribute to processing abstract concepts. Nearby, in the lateral inferior parietal lobe, an angular gyrus region is greatly expanded in humans, receives multimodal inputs, and may support the conceptual retrieval, integration, and fluent combination processes critical for understanding discourse. While these regions are on the lateral surface, the remaining regions are medial. Specifically, the posterior cingulate region includes the retrosplenial cortex, which connects directly and bidirectionally with the MTL system for episodic memory and may promote episodic and semantic memory interactions. This cingulate region has been implicated in the visuospatial, mental imagery, and simulation functions of both memory systems. In the frontal lobe, dorsomedial parts (BA 8) may support internally guided semantic memory retrieval, while ventromedial parts support the emotional significance of concepts.
Semantic (default) network for words

Intriguingly, the lateral temporal lobe regions, the angular gyrus, and the posterior cingulate and medial prefrontal regions of this proposed semantic memory network for words (i.e., all word meaning regions except VLPFC) are all key components of the default network. The default network activates in an anticorrelated manner with an active task network, which essentially includes the rest of the neocortex, including VLPFC. The active task network activates more than the default network during tasks demanding greater selective attention, working memory, and executive functions. By contrast, the default network activates more than the active task network in many language studies, in episodic memory tasks (for which the MTL also activates), and at rest (i.e., when minimally engaged with a task). The default network is affected earlier and more than other brain systems in Alzheimer's disease, whose patients develop progressively severe problems encoding new long-term episodic memories and retrieving knowledge. The default network may thus have a greater role in semantic memory, consistent with proposals that this network supports mental imagery or simulation processes that creatively synthesize, integrate, and associate multimodal information, especially episodic memory from the MTL, across past experiences. These functions would be crucial for constructing sequential, higher-order concepts from multiple life episodes, such as generalizing across numerous restaurant visits to construct a framework to comprehend the next such visit (a knowledge representation known as a schema) and to predict and anticipate how the next such visit will unfold over time (a sequential knowledge representation known as a script). However, it is unclear whether these regions are sufficient to support all aspects of meaning, as these studies focused on words. After all, other regions in the active task network, including the VLPFC, are implicated in semantic memory and contribute important processes to knowledge encoding and retrieval and to mental imagery. For example, script knowledge evoked by linguistic and nonlinguistic (e.g., picture) sequences involves active task network regions that interact with basal ganglia structures implicated in sequential processing and implicit learning more than it involves the default network (Figure 1).

The focus on semantic memory for words has left meaning in response to nonlinguistic stimuli relatively less well understood. Most work with nonlinguistic stimuli has used pictures, revealing visual object knowledge. This topic is important because such research enables direct links between human and nonhuman animals that are not afforded by word studies, since nonhuman animals have at best only very limited linguistic capacity, and most neuroscience questions can be addressed only in nonhuman animals for ethical reasons. Such links are necessary to understand the neural underpinnings of semantic memory from neural circuits to systems.

Moreover, visual object knowledge is especially important for human cognition, as vision is the dominant sensory modality and the best-characterized sensory system, and objects are the focus of visual processing and attention. In response to a visually presented object, an N400-like scalp electrical potential, the N3 complex (aka N300, N350, N390), indexes neurophysiological processes between 200 and 500 ms involved in acquiring categorical knowledge, retrieving knowledge and implicit memory about objects, and making cognitive decisions based on object knowledge. The N3 complex peaks around 350 ms, differs in scalp distribution from the N400 (i.e., the N3 has a frontal maximum and can become positive over occipitotemporal locations, whereas the N400 is centroparietal), and cognitive manipulations affect it earlier, around 200 ms, than the N400. The earlier time course of the N3 relative to the N400 suggests that the arbitrary relationship between a word and its meaning takes longer to activate than the (nonarbitrary) association between a perceived object and its meaning, for which the shape and other physical properties are part of the meaning. The N3 complex indexes a modal knowledge system ranging from more visually specific to more abstract or invariant representations stored in extrastriate occipital and ventral temporal cortex. Crucial evidence that the processes underlying the N3 complex are part of a semantic memory system is that the N3 complex is sensitive to contextual, memory, and conceptual manipulations similar to those that affect the linguistic N400. For example, semantic priming, that is, preceding an item by a semantically related item (e.g., doctor preceded by nurse), reduces both brain potentials and response time. The different scalp distributions of the N3 and N400 indicate multiple, modality-specific knowledge systems, reflecting recruitment of the occipitotemporal cortex involved in storing object knowledge versus the anterior and superior temporal regions involved in storing word knowledge. The VLPFC, however, has a general role in semantic memory, controlling posterior cortical processes for both object and word knowledge to accomplish task-relevant goals and for decision-making (e.g., categorization). Notably, faces also evoke a functionally similar frontal N400-like potential. In sum, functionally similar but somewhat anatomically distinct semantic memory systems support knowledge about words, faces, and other objects.

Multiple knowledge systems are consistent with the functionally localized, hierarchical organization of the neocortex. From posterior to anterior areas along the ventral stream, stimulus selectivity becomes increasingly complex, from more elementary, local features and greater visual specificity to higher-order global shapes and combinations of features, with increasing visual object constancy (i.e., similar responses despite changes in orientation, size, or other visual properties). Human occipitotemporal cortex is necessary for normal behavior on wide-ranging object cognition tasks. Patients with occipitotemporal damage have visual object agnosia: impaired perceiving, categorizing, and recognizing of visual objects, with the pattern of deficits varying with the locus of damage. Occipitotemporal areas are retinotopic (i.e., adjacent neurons respond to different but nonoverlapping parts of the visual field) and object-sensitive (i.e., responding more strongly to intact images of objects than to scrambled versions with no coherent object structure). However, recent evidence suggests that
object-sensitivity, object perception, and invariant object knowledge continue into the MTL, including the hippocampus. Extended object processing and memory from the occipital cortex into the MTL accords with embodied cognition but not with the standard theory of an amodal system.

Domains of Knowledge
Multiple knowledge systems are consistent with embodied cognition but also with an alternative, though not incompatible, idea: that object domain primarily constrains the organization of conceptual knowledge. Distributed domain-specific theories propose that evolutionary history affects development, which thereby determines object domain. Convergent findings suggest that the domains are living animate things (e.g., mammals), living inanimate things (e.g., trees), conspecifics (e.g., humans), and tools. For example, an individual brain-damaged patient can display category-specific semantic problems across multiple input modalities, implicating abstract representations of conceptual knowledge. Both picture naming and verbal questions about objects can be impaired for living animate objects (e.g., animals) but spared for nonanimals. Even so, such patients can also have problems with nonsemantic, visual structural processing and knowledge. These and other findings motivated other multiple semantic system accounts that instead distinguish between nonliving things, animals, and fruits/vegetables, proposing that visual motion and functional information are more important for knowing about nonliving things, whereas other kinds of sensory information are more important for knowing about living things, of which fruits/vegetables depend more on color and taste information than animals do. Notably, a domain account need not imply that semantic memory is modular; rather, current ideas emphasize that domain-specific neural networks are distributed across multiple cortical regions. Each domain of knowledge can be further subdivided according to the sensorimotor, affect, and mental state processes posited in embodied cognition theories, enabling a rapprochement between accounts. A central idea in hybrid accounts is that sensory processing within a specific domain (e.g., how a conspecific human looks, based on visual processing) will be connected (e.g., via links in the semantic network) to other processes (e.g., how a conspecific human also sounds, emotes, or acts, based on motor processing). Overall, findings converge on the idea that knowledge is organized across multiple cortical systems, contrary to the standard theory of meaning incorporated in multiple memory systems theory, but debates continue over the organizational principles governing the divisions (embodiment, domains, sensory-functional).

Controlled Knowledge Retrieval and Decisions in Prefrontal Cortex (PFC)

The VLPFC controls the encoding of mappings between knowledge stored in posterior areas and decision processes in frontal areas, and their subsequent retrieval. The human lateral PFC is organized functionally along a gradient from abstract decision and action planning processes in more rostral parts (e.g., VLPFC) to increasingly more concrete response-related processes in more caudal parts (e.g., premotor cortex [PM]). This system maintains patterns of activity for multiple types of information (e.g., linguistic, visuospatial, object, rules) in functionally distinct neural populations, each of which influences (controls) other areas to accomplish a mental or overt action. For example, to decide the category of a visual object, dorsolateral PFC (DLPFC) and PM accumulate and compare visual evidence obtained from the occipitotemporal cortex to compute a decision according to a rule that determines the choice, which involves more rostral frontopolar (BA 10) areas. In the parietal lobe, the intraparietal sulcus (IPS) also accumulates evidence, consistent with its strong bidirectional connections with some decision-making regions.
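Such evidence accumulation is often formalized as a random walk or drift diffusion process. The sketch below, with illustrative parameters, shows how noisy evidence summed to a threshold yields both a choice and a response time that slows as the evidence weakens; it is a minimal illustration of the accumulate-to-threshold idea, not a model of any particular PFC circuit.

```python
import random

def accumulate_to_decision(drift=0.1, noise=1.0, threshold=10.0):
    """Minimal evidence-accumulation (random walk / drift diffusion) sketch:
    noisy evidence is summed until it crosses a decision threshold.
    Stronger evidence (larger drift) yields faster, more reliable choices."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0.0, noise)  # one noisy sample of evidence
        steps += 1
    choice = "category A" if evidence > 0 else "category B"
    return choice, steps  # steps is a stand-in for response time

random.seed(1)
print(accumulate_to_decision(drift=0.3))   # clear object: fast decision
print(accumulate_to_decision(drift=0.05))  # ambiguous object: slow decision
```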
The VLPFC has an important role in disambiguating knowledge, as when multiple interpretations of the input result from initial processing (e.g., ambiguous figures, impoverished percepts, or multiple alternative meanings or knowledge types competing), and it interacts reciprocally with DLPFC and PM to recruit working memory resources to resolve uncertainty.

Embodied cognition theories propose mental imagery, particularly its automatic simulation varieties, as a core mechanism for deep conceptual processing, rather than language, with which semantic memory has been commonly allied. For example, hearing the word dog automatically simulates the sensorimotor, affect, and/or mental states associated with experiences of dogs (e.g., what they look like, how they move, how they feel, etc.). The idea is that embodied processes encoded into the knowledge system during the initial experience are later recapitulated via cortical network simulation mechanisms in response to the original stimulus (e.g., seeing a dog) or associated stimuli (e.g., the word dog). The human capacity for symbolic cognition arises from interactions between simulation in the cortical knowledge network and linguistic processing. By this view, nonhumans lack symbolic cognition insofar as they lack linguistic processes, even though nonhuman animals have simulation abilities like those in humans by virtue of common cortical architectures for sensorimotor, emotion, and mental state processes.

However, mental imagery research has primarily investigated not automatic imagery but rather strategic mental imagery. Such studies, moreover, mainly use recently trained stimuli for which episodic memory (not semantic memory) likely dominates processing. For example, people are trained to memorize a few pictures until they can visualize them mentally in clear, vivid detail. Later, while trying to (i.e., strategically) visualize these pictures, they answer questions about them requiring accurate mental images, such as whether a specific object part falls within a location of a grid on a computer screen. Consequently, little is known about mental imagery evoked automatically when semantic memory is activated. What is known comes mostly from studies of embodied cognition and two neuroimaging studies comparing episodic and semantic memory.
