Abstract
Sign languages differ dramatically from spoken languages in their linguistic articulators (the hands/face vs. the vocal tract) and in how they are perceived (visually vs. auditorily), which can impact how they are processed in the brain. This review focuses on the neural network involved in sign language comprehension, from processing the initial visual input to parsing meaningful sentences. We describe how the signer’s brain decodes the visual signed signal into distinct and linguistically relevant representations (e.g., handshapes and movements) primarily in occipital and posterior temporal regions. These representations are converted into stable sign-based phonological representations in posterior temporal and parietal regions, which activate lexical-semantic representations. The higher-level processes which create combinatorial semantic-syntactic constructions from these lexical representations are subserved by a frontotemporal network of regions which overlaps with the network for spoken languages. The broad outline of this network is partially specific to the visual modality and partially supramodal in nature. Important avenues for future research include identifying and characterising patterns of activation and connectivity within macroanatomical regions which appear to serve multiple functional roles in sign language comprehension.
Keywords: bilingualism, cognitive science of language, deafness, neurolinguistics, perception, sign language comprehension, supramodal language, variation
1 |. Introduction
You are watching a person sign. They are moving their hands and facial features in a rapid and sequential manner, with few obvious pauses in movement. Their hands move into and out of different configurations, targeting specific locations on or near the body. While their hands are the most salient carriers of linguistic information, the signer’s facial movements also convey grammatical and lexical information. To understand this visual stream of signing, your brain must carry out a number of computational tasks: encode an array of complex and varying visual signals (e.g., hand locations, handshapes, facial expressions), categorise this input into stable linguistic representations, and combine these representations to create lexical signs and meaningful sentences. This review introduces the functional organisation of the neural network which subserves these linguistic processes, with a focus on describing the sensory and linguistic representations of sign languages.
A half-century of linguistic research shows that signs in sign languages exhibit a sublexical level of structure which is understood in terms of sign phonology. Just as spoken words can be decomposed into meaningless phonological units (e.g., consonants and vowels), signs can be decomposed into an organised set of form-based units, such as handshape, movement, and location (see Brentari 2019, for review). Sign languages also exhibit morpho-syntactic and grammatical processes that are parallel to spoken languages (see Sandler and Lillo-Martin 2006, for review). Figure 1 presents illustrations of some of the modality-specific linguistic structures found in sign languages. Figure 1A provides examples of minimal pairs from American Sign Language (ASL) that illustrate contrasts between the sublexical units of handshape, location (place of articulation), and movement (note that palm orientation and non-manual expressions, e.g., open vs. closed mouth, are also phonologically contrastive but are not shown). Figure 1B provides an example of a spatial classifier construction (see below). Figure 1C illustrates several morphosyntactic features: a) the grammatical use of facial expression (raised eyebrows) to mark a conditional subordinate clause, b) the production of inflected (directional) verbs (GIVE, SELL-TO), and c) the referential use of signing space (two discourse referents are associated with separate locations on the left and right). Comprehensive reviews of sign language linguistics can be found in Brentari (2010), Hill et al. (2018), Janzen and Shaffer (2023), and Pfau et al. (2012), among others.
FIGURE 1 |.

(A) Examples of minimal pair contrasts in ASL. The two signs in each column differ only by a single phonological unit: location on the body (first column), handshape (second column), or hand movement (third column). Videos of the signs can be found in the ASL-LEX database (Caselli et al. 2017; Sehyr et al. 2021). (B) Example of a classifier construction depicting a standing person (a ‘1’ handshape) approaching a table (a flat ‘B’ handshape representing an object with a flat surface). (C) In this ASL sentence, the non-manual marker (raised eyebrows) signals the scope of the conditional clause, the direction of each verb indicates grammatical roles, and the initial and final locations for GIVE and SELL, respectively, are associated with unspecified referents.
Our goal is to provide a concise overview of the sign language comprehension network using evidence from EEG (electroencephalography), MEG (magnetoencephalography), fMRI (functional Magnetic Resonance Imaging), and lesion studies to characterise the nature of this network. Figure 2 provides a road map of the brain regions that we discuss (note that each region in the left hemisphere has a corresponding, i.e. homologous, region in the right hemisphere, which is not shown). Sign language comprehension typically engages homologous bilateral regions (i.e., similar regions in both the right and left hemispheres), but neural activation tends to be stronger on the left. Damage to the left (but not right) hemisphere results in sign language aphasia (Atkinson et al. 2005; Poizner et al. 1987), indicating the left hemisphere’s critical role in sign language processing. Although we describe aspects of sign comprehension in a stepwise manner (e.g., perceptual, phonological, lexical, and sentential processes), we caution the reader that these processes are dynamic and often occur simultaneously. Further, many neural regions are implicated in multiple functions; neural processes are not compartmentalised within modular regions but rather can be distributed across the sign language network.
FIGURE 2 |.

An illustration of brain regions discussed in this review. Regions are homologous, that is, the same across the left and right hemispheres (the left hemisphere is shown here). This lateral view of the brain surface does not display regions on the medial or ventral faces of the cortex. The superior temporal sulcus (STS) lies between the MTG and the STG. The intraparietal sulcus (IPS) lies between the IPL and the SPL. The cortical representation was adapted from FreeSurfer’s fsaverage surface and parcellated using the Destrieux et al. (2010) atlas.
2 |. Early Visual Processing of Signed Input
2.1 |. From the Eyes to the Brain
The brain first receives and encodes visual information from the retinas as a retinotopic map (i.e., a ‘map’ of neurons whose arrangement matches the arrangement of receptors in the retina) in the most posterior part of the occipital lobe, known as primary visual cortex (V1). Neurons in these regions encode information about very simple visual features and feed this information to higher-level retinotopic regions—secondary visual cortex—which encode more complex visual patterns or characteristics in the input. These early visual processes occur very quickly: bilateral posterior occipital cortex responds to the perceptual properties of signs about 80–120 ms after the onset of the sign (Leonard et al. 2012). The timing of early visual processing of the signed signal appears to be analogous to early auditory processing of spoken words in the bilateral superior temporal gyrus (STG).
The activation of occipital cortex for sign perception is primarily driven by domain-general visual processes which are not specific to signers. For instance, both signers and nonsigners synchronise their brain waves (measured by EEG or MEG) to the oscillating movements of the signer’s hands, a phenomenon which is called entrainment (Brookshire et al. 2017). Neural oscillations synchronised with sign movements are argued to be analogous to neural oscillations that entrain to acoustic oscillations in speech (i.e., fluctuations in volume). However, neural entrainment in frontal cortex (Brookshire et al. 2017) and in right parietal cortex (Rivolta 2023) is only observed in signers, indicating that linguistic knowledge can strengthen temporal predictions in these regions. Similarly, occipital activation in response to sign input can be influenced by higher-level linguistic processes, such as internal predictive models of upcoming sign input, which may be generated by frontal regions within the language network. When these predictions are violated, early visual processing effort appears to increase to resolve the mismatch between linguistic prediction and visual input (Stroh et al. 2019; Almeida et al. 2016).
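The entrainment analyses described above rest on a simple signal-processing idea: coherence between a time series describing the signer’s movement and the neural recording. The sketch below illustrates that computation with simulated signals; the sampling rate, the 3–5 Hz band, and the sinusoidal motion envelope are illustrative assumptions, not details of the cited studies.

```python
# A minimal sketch of quantifying neural entrainment to sign movement:
# spectral coherence between a hand-motion time series and one EEG channel.
# All signals here are simulated placeholders.
import numpy as np
from scipy.signal import coherence

fs = 250.0                          # assumed common sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated 'recording'
rng = np.random.default_rng(7)

# Simulated hand-motion envelope with a dominant ~4 Hz oscillation
motion = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)

# Simulated EEG channel that partially tracks the motion signal
eeg = 0.6 * motion + rng.standard_normal(t.size)

f, cxy = coherence(motion, eeg, fs=fs, nperseg=1024)
band = (f >= 3) & (f <= 5)
print(f"Mean motion-EEG coherence, 3-5 Hz: {cxy[band].mean():.2f}")
```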
Sign-specific occipital activity can also be observed without relying on linguistic violations, which are presumed to elicit top-down linguistic reanalysis. Evans et al. (2019) used a representational similarity analysis to assess which neural areas were activated similarly when signers perceived the same signs produced by two different signers (a man and a woman). The results identified bilateral regions in primary and secondary visual cortex. Because activation patterns within these regions were consistent across different sign models, these regions seem able to encode information about the abstract structure of signs in a consistent manner even when the retinal ‘picture’ is wholly different.
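The logic of representational similarity analysis can be made concrete with a toy example: if a region encodes abstract sign structure, the pattern-dissimilarity geometry across signs should correlate across two sign models even though the retinal images differ. The voxel patterns below are random placeholders, not data from Evans et al. (2019).

```python
# A schematic of the representational similarity logic: compare the
# representational geometry evoked by the same signs from two sign models.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_signs, n_voxels = 20, 100

# Voxel patterns for the same 20 signs produced by two different signers;
# signer B's patterns share structure with A's but add independent noise
patterns_a = rng.standard_normal((n_signs, n_voxels))
patterns_b = patterns_a + 0.8 * rng.standard_normal((n_signs, n_voxels))

# Representational dissimilarity matrices (condensed upper triangles)
rdm_a = pdist(patterns_a, metric="correlation")
rdm_b = pdist(patterns_b, metric="correlation")

# Second-order correlation: do both signers evoke the same geometry?
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"RDM correlation across sign models: rho = {rho:.2f}")
```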
2.2 |. Recognising Linguistic Articulators: The Hands and Face
Spoken languages use one set of articulators: the vocal tract, consisting of the lips, tongue, and larynx, which work together to create speech sounds. In contrast, sign languages use several articulators that can move independently of each other: the two hands and the face. The movements of the speech articulators are perceived auditorily and indirectly (e.g., some movements of the tongue and larynx are not visible). In contrast, the sign articulators are all perceived directly through vision. The brain analyses the visual signal by decomposing it into distinct processing streams, which eventually diverge into separate but interdependent pathways through the brain. One stream, known as the ventral stream (see Figure 2), is responsible for shape and object recognition (e.g., Milner and Goodale 2008). The ventral stream travels primarily through occipital visual regions to inferior temporal (IT) cortex, which contains neural regions that are functionally specialised for identifying bodies, body parts, and faces (see Peelen and Downing 2007, for a review).
In sign language comprehension, posterior IT cortex is often activated for simple tasks which are meant to identify regions involved in sublexical or phonetic processes (Matchin et al. 2022; Trettenbrein et al. 2021; Mayberry et al. 2011). However, posterior IT has not been reliably implicated in language-specific processes, and IT-specific activation may simply be due to the activation of hand and face representations during sign perception, without specific contributions to linguistic processing. Corina et al. (2007) found IT activation in deaf signers when they viewed meaningless gestures, but not ASL signs. They suggested that the lack of IT activation for signs was due to top-down mechanisms, possibly in higher-level occipital regions, which rapidly sort linguistic from nonlinguistic hand movements and route linguistic input to regions specialised for language processing rather than to posterior IT cortex.
While known for processing static facial structure, face-selective regions in posterior IT are also implicated in processing dynamic facial expressions through their interaction with more dorsal regions in superior temporal cortex (see Lander and Butcher 2015, for a review). Facial expressions carry linguistic content in sign languages (e.g., marking adverbials, topics, conditional clauses—see Figure 1C), and linguistic facial expressions are distinct from emotional facial expressions (Reilly et al. 1990). Linguistic and emotional facial expressions in the signed signal activate IT regions for both signers and nonsigners, but activation is left-lateralised in signers and bilateral in nonsigners (McCullough et al. 2005). The perception of mouth patterns (phonologically-specified mouth movements that are unrelated to speech) in British Sign Language (BSL) also activates left IT more than lipreading speech does for deaf signers (Capek et al. 2008), indicating sensitivity to linguistic facial expressions in left IT.
Therefore, for faces (and possibly for other body parts), left posterior IT may serve a role in dynamic feature processing during sign language perception through its resonance with regions in the superior temporal sulcus (STS) implicated in the perception of linguistic facial expressions (McCullough et al. 2005). Just as in nonsigners, the right STS of signers is involved in perceiving emotional facial expressions, but the left STS is uniquely recruited for processing linguistic information in facial expressions and emotional facial expressions in a linguistic context (McCullough et al. 2005; Emmorey and McCullough 2009).
2.3 |. Processing Linguistic Movement
Lifelong signers have extensive experience with the online extraction of linguistic information from rapid and complex manual and facial movements, and studies have shown that movement alone is a powerful carrier of linguistic information about signs (Poizner 1983; Poizner et al. 1981; Malaia et al. 2018). With point-light displays, in which only moving points of light representing the signer’s joints are visible, signers can identify individual signs, recognise which movements are purposeful or transitional, and identify familiar signers (Poizner et al. 1981; Klima et al. 1999; Bigand et al. 2020).
Recent research has suggested that the network involved in processing visual movement can be decomposed into two distinct pathways: the dorsal stream and the lateral stream (see Figure 2; Pitcher and Ungerleider 2021; McMahon et al. 2023). The dorsal stream constitutes a network that processes information about the spatial location and movement of objects and their related actions (e.g., reaching and grasping). Regions in the dorsal stream, such as the superior parietal lobule (SPL), seem to be involved in sign language comprehension only in limited contexts (see Section 4.3). In contrast, regions in the lateral stream are critically implicated in sign language comprehension.
The lateral stream is argued to be primarily engaged in dynamic social perception, and regions within this stream seem to be robustly activated for sign language comprehension. The lateral stream travels from primary visual cortex through higher-level visual regions such as the middle temporal complex (area MT+) (see Figure 2). MT+ is implicated in the perception of simple and complex motion (McMahon et al. 2023; Bavelier et al. 2001). As would be expected for a linguistic signal with rich visual motion, MT+ is robustly activated during sign language perception for both signers and nonsigners (e.g., Levänen et al. 2001; MacSweeney, Woll, Campbell, McGuire, et al. 2002). The lateral stream terminates in the posterior STS, a functionally heterogeneous region which responds to a broad set of visual stimuli, including biological motion and social actions (McMahon et al. 2023).
MT+ is also more activated for signed sentences which describe visual motion (‘the deer walked along the hillside’) than for near-identical sentences which instead describe static scenes (‘the deer slept along the hillside’). Because the two sentence types were matched for the amount of visual motion in the signal, this result suggests that MT+ is also engaged by linguistic motion semantics (McCullough et al. 2012). Just as for early visual areas, area MT+ seems to contribute to abstract linguistic processes over and above the perception of motion itself. However, studies have not yet investigated whether regional connectivity and/or activation within MT+ differs for linguistic versus domain-general movement patterns.
2.4 |. The Action Observation Network
Research on nonlinguistic action observation has identified a network of frontoparietal regions which are engaged when one observes other people’s actions. This network, called the Action Observation Network (AON; Condy et al. 2021), is proposed to be critical to recognising others’ actions by mentally imitating or simulating the observed actions using one’s own motor system. This network would reasonably be expected to be engaged in sign language comprehension. Surprisingly, this prediction has not been borne out. Studies comparing nonsigners and signers have found that nonsigners activate the AON as they view signs or communicative gestures, whereas signers do not (Corina et al. 2007; Emmorey et al. 2010). EEG studies which measure the mu rhythm tell a similar story. The mu rhythm is a synchronised pattern of electrical activity over sensorimotor cortex that has been correlated with action simulation in the AON. By measuring the mu rhythm while participants watched signs, Kubicek and Quandt (2019) found that nonsigners recruited the AON during sign perception, whereas signers did not.
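As a rough illustration of how mu-rhythm engagement is quantified, the sketch below computes 8–13 Hz power over a simulated sensorimotor channel and expresses observation-related desynchronisation as percent change from rest. The sampling rate, band limits, and signals are assumptions for illustration, not parameters from the cited studies.

```python
# A minimal sketch of mu-rhythm quantification: band power via Welch's
# method, compared between rest and action observation. Lower mu power
# during observation (desynchronisation) is the usual index of AON
# engagement. Signals below are simulated.
import numpy as np
from scipy.signal import welch

fs = 250.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s segments
rng = np.random.default_rng(1)

def mu_power(eeg, fs):
    """Mean 8-13 Hz (mu band) power of a single-channel segment."""
    f, psd = welch(eeg, fs=fs, nperseg=512)
    return psd[(f >= 8) & (f <= 13)].mean()

# Simulated sensorimotor channel: strong 10 Hz rhythm at rest,
# attenuated during observation (event-related desynchronisation)
rest = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
observe = 0.3 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

erd = 100 * (mu_power(observe, fs) - mu_power(rest, fs)) / mu_power(rest, fs)
print(f"Mu-band power change during observation: {erd:.0f}%")
```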
The AON might not be critical to sign language comprehension because, after a lifetime of experience decoding the signed signal, signers might rapidly dissociate linguistic from nonlinguistic actions and send them down distinct processing streams (Corina et al. 2007; Kubicek and Quandt 2019). Instead of being processed by the AON, the signed signal is sent to more canonical language areas. However, the AON might not be fully disengaged during sign language comprehension. Kubicek and Quandt (2019) reported differential AON engagement for one-handed versus two-handed signs in signers, and this effect was larger in a later study which asked signers to repeat the signs after viewing them (Quandt and Willis 2021). Therefore, the AON might be recruited for linguistic processing only, or especially, when task demands surpass passive perception. It may also be that the AON is activated only when internal motor simulation of an observed action would help to perceive or produce that action. In everyday sign comprehension (without additional tasks), a signer would not need to continually simulate the signed signal and can rely on a more efficient visual-linguistic network through occipital and temporal regions to comprehend the linguistic input (for behavioural evidence that signers do not automatically engage in motor simulation during sign perception, see Brozdowski and Emmorey 2020).
3 |. Extracting Phonological Parameters From Visual Input
3.1 |. From Perception to Phonology
Phonologically, signs are instantiated as a structured combination of phonological parameters, and the perceiver must segment the signing stream into distinct signs using word-level phonotactics (e.g., using the Possible Word Constraint; Orfanidou et al. 2010). A shift from perceptual representations to phonological structure implies several important processing steps, most importantly the normalisation of variable perceptual representations into fixed phonological representations and the fusion of these representations into a unified lexical representation.
Of the major phonological parameters (handshape, location, and movement; see Figure 1A), the movement parameter instantiates the prosodic structure of signs by acting as the nucleus of sign syllables (somewhat analogous to vowels in spoken languages; see Brentari 2019, for review). Movement may serve an important role in segmenting the signed stream into phonological representations. As reviewed in Section 2.3, sign language perception engages regions involved in motion perception; this movement-based information may then be analysed to parse signing into syllables to facilitate linguistic comprehension. Malaia and Wilbur (2020) propose that during both signed and spoken language comprehension, the brain entrains to a modality-independent syllabic rate (~4 Hz; but see Rivolta 2023). For sign languages, this syllabic rate is perceptually marked by rises and falls in visual entropy, allowing neural entrainment (as measured by EEG) to this rate, which may enable the brain to parse syllables for linguistic analysis and comprehension.
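The entropy-based account can be illustrated with a minimal sketch: compute the grey-level entropy of each video frame and inspect the spectrum of the resulting time series for a dominant modulation rate. The frame rate, bin count, and synthetic ‘video’ below are illustrative assumptions, not the procedure of the cited work.

```python
# A rough sketch of tracking visual entropy over time in a signing video
# and locating its dominant modulation rate (~4 Hz would correspond to
# the proposed syllabic rate). Frames here are synthetic arrays.
import numpy as np

def frame_entropy(frame, bins=64):
    """Shannon entropy (bits) of a frame's grey-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

fps = 30                                    # assumed video frame rate
rng = np.random.default_rng(2)
n_frames = 300                              # 10 s of synthetic 'video'

# Amplitude-modulate random frames at 4 Hz so entropy rises and falls
mod = 0.5 + 0.5 * np.sin(2 * np.pi * 4 * np.arange(n_frames) / fps)
frames = rng.random((n_frames, 64, 64)) * mod[:, None, None]

entropy = np.array([frame_entropy(f) for f in frames])
spectrum = np.abs(np.fft.rfft(entropy - entropy.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
print(f"Dominant entropy-modulation rate: {freqs[spectrum.argmax()]:.1f} Hz")
```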
Unlike spoken language phonology, sign phonology is not primarily based on sequential structure; rather, signs are composed of both simultaneous and sequentially organised phonological units, with simultaneous units (e.g., handshape) remaining salient throughout the sign syllable. The simultaneous perception of many aspects of phonological form is a unique feature of sign languages that is enabled by the visuospatial modality. Handshape and location (place of articulation on the body), as simultaneous phonological units, remain the same or change only in constrained ways within each sign. To extract stable phonological representations of handshape and location from a dynamic and variable perceptual stream, the brain may engage in perceptual normalisation. Perceptual normalisation occurs when phonetic variation within phonemic categories is ignored, allowing for consistent categorisation of perceptual input into the appropriate phonological representation.
Such categorical perception (CP) effects have been found for signed stimuli: While both signers and nonsigners similarly identify and categorise handshapes that fall along a physical continuum, only signers exhibit better discrimination for handshapes that straddle a category boundary, as opposed to handshapes that fall within a category (Baker et al. 2005; Emmorey et al. 2003; see also, Eccarius 2008). However, CP effects have not been observed for location (Emmorey et al. 2003) and for some handshapes (Best et al. 2010). Sign perception is also impacted by viewing angle (Watkins et al. 2024; Corina et al. 2011), indicating that the brain does not extract angle-invariant representations of signs—a result that parallels the visual perception of objects (Edelman and Bülthoff 1992). Nonetheless, event-related potential (ERP) evidence from phonological priming paradigms indicates that (for signers only) the brain categorises phonological representations of handshapes and locations across signs (Meade et al. 2022).
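The CP logic can be sketched with toy identification data: responses along a handshape continuum follow a sigmoid, and discrimination is predicted to peak where that curve is steepest, that is, at the category boundary. The continuum and data points below are fabricated for illustration and do not come from the cited studies.

```python
# A toy illustration of categorical perception: fit a sigmoid to
# identification responses along a 9-step handshape continuum, then
# predict which adjacent pair is best discriminated.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    """Identification curve: probability of a 'category A' response."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 10)                    # 9-step morph continuum
p_a = np.array([.98, .97, .93, .85, .55, .20, .08, .04, .02])

(x0, k), _ = curve_fit(sigmoid, steps, p_a, p0=[5.0, -1.0])
print(f"Estimated category boundary near step {x0:.1f}")

# Predicted discrimination of adjacent pairs: largest across the boundary
d = np.abs(np.diff(sigmoid(steps, x0, k)))
best = d.argmax()
print(f"Best-discriminated pair: steps {steps[best]} and {steps[best + 1]}")
```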
There is also evidence that perceptual representations of signs and abstract phonological representations of signs activate distinct functional networks. In Cardin et al. (2016), deaf BSL signers, deaf nonsigners, and hearing nonsigners were asked to detect a target phonological feature (handshapes or locations) while watching a series of BSL signs, foreign signs, and phonologically illegal nonsigns. They found that for all groups, monitoring for a specific handshape versus for a specific location activated distinct perceptual networks encompassing inferior temporal and parietal regions implicated in complex visual perception. Critically, both monitoring tasks recruited bilateral superior temporal cortex (STC) only in the deaf signers, suggesting that linguistic phonological structure was processed by this region regardless of the specific perceptual processing demands. However, the meta-linguistic nature of the monitoring task provides only indirect evidence that phonological representations are encoded in bilateral STC during naturalistic comprehension.
3.2 |. Neural Regions for Sign Phonology
The task of converting perceptual representations of signing into stable phonological representations, and of storing these representations, seems to be subserved by a cortical network encompassing the posterior STC and the supramarginal gyrus (SMG; see Figure 2). The STC is a large neural region which encompasses the superior temporal gyrus (STG) and sulcus (STS); these regions are often subdivided into anterior (aSTG/aSTS) and posterior (pSTG/pSTS) portions.
Bilateral pSTS is reliably and robustly activated for sign language perception in deaf signers (Cardin et al. 2016; Emmorey, Xu et al. 2011; Emmorey, McCullough et al. 2011; Capek et al. 2008; MacSweeney, Woll, Campbell, McGuire, et al. 2002; Levänen et al. 2001; Neville et al. 1998). Emmorey, Xu et al. (2011) and Emmorey, McCullough et al. (2011) scanned deaf signers and hearing nonsigners while they viewed ASL pseudosigns (possible but non-existent signs) and non-iconic signs, aiming to assess which neural regions respond to signed phonological structure without lexical meaning. They found that left STS seems to become tuned to linguistically structured body movements in signers, as signers activated left STS more than nonsigners when passively perceiving ASL pseudosigns, which contain phonological structure but no lexical meaning. This replicated results from a positron emission tomography (PET) study by Petitto et al. (2000), which found bilateral STS activation for signers but not nonsigners in response to pseudosigns and real signs. The authors speculated that right STS, which was activated to a lesser extent, might subserve more gradient phonological processes, whereas left STS might subserve ‘categorical and combinatorial processing of sublexical sign structure’.
The pSTG may also be involved in processing sign phonology. Emmorey, Xu et al. (2011) and Emmorey, McCullough et al. (2011) found that the left pSTG was activated more during pseudosign than sign perception in deaf signers only. Because left pSTG is known to process the sound structure of speech, the authors speculated that left pSTG may be involved in sublexical sign processing by identifying linguistic phonetic units within a dynamic signal, either auditory or visual. Malaia and Wilbur (2020) further suggest that the right STG is involved in segmenting sign syllables based on movement. However, the precise nature of the structural and/or segmental processing carried out by pSTG is not currently understood. For spoken languages, Binder (2017) argued that the left pSTG is not critical for speech comprehension but rather supports the storage of phonological codes which are retrieved for language production or for tasks which require holding these forms in short-term memory. If the linguistic functions of the pSTG are supramodal, this might be true for sign languages as well.
Cardin et al. (2016) also found that nonsigns activated bilateral SMG more than signs in deaf signers, and they interpreted this finding as an increase in phonological processing due to the nonsigns’ violation of phonotactic rules. The SMG is often implicated in phonological processing of signs (Corina et al. 1999; Emmorey et al. 2015; MacSweeney et al. 2004).
Corina et al. (1999) found that disrupting the function of the left SMG in a deaf signer (via cortical stimulation) caused phonological substitution errors in handshape and movement which were sometimes also semantically related, suggesting disruption at the lexical level. These findings implicate the left SMG in the assembly of phonological representations for lexical production (see also Emmorey et al. 2016). An electrocorticography (ECoG) study analysing the spontaneous production of signs by a deaf patient found that activity at electrodes over the left SMG was correlated with the types of handshapes produced (Leonard et al. 2020). Meanwhile, activity at electrodes over hand and face areas in sensorimotor cortex was correlated with the locations of produced signs. Although Corina et al. and Leonard et al. investigated sign production, these studies provide some evidence that sign sublexical structure is systematically organised within the SMG.
The precise functional organisation and computational roles of the components within the phonological network in pSTS, pSTG, and SMG are still unclear and present important topics for future research. The STS is a core region for sign language comprehension, implicated in processing biological motion, linguistic facial expressions, sign phonology, syntactic structure, and semantics (McCullough et al. 2005; Emmorey, Xu et al., 2011; Emmorey, McCullough et al. 2011; MacSweeney et al. 2006; Matchin et al. 2022; Leonard et al. 2012). Future studies could employ fMRI techniques such as multivariate pattern and voxelwise analyses to identify how distinct types of linguistic information are organised within the STS at a finer scale, as has been done for social perception within this region (McMahon et al. 2023; Deen et al. 2015).
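As one concrete, hypothetical instance of such an approach, the sketch below asks whether a linear classifier can decode a stimulus distinction (e.g., linguistic versus nonlinguistic facial expression) from simulated voxel patterns using cross-validation. A real study would use trial-wise response estimates from preprocessed fMRI rather than the random data generated here.

```python
# A minimal sketch of multivariate pattern analysis (MVPA): cross-
# validated decoding of a two-way stimulus distinction from simulated
# voxel patterns. All data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)   # two hypothetical stimulus types

# Simulated trial patterns with a weak condition effect in 20 voxels
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5

scores = cross_val_score(LogisticRegression(max_iter=1000),
                         patterns, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```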
4 |. Lexical and Sentence-Level Processing
4.1 |. Lexical Processing of Signs
When the phonological units of a sign are bound together, the brain is presumed to activate a coherent abstract lexical representation, which includes the semantic and syntactic properties of the sign. To isolate the neural processes underlying the comprehension of lexical signs, neuroimaging studies have typically used lists of unconnected signs, with the expectation that such stimuli will not elicit combinatorial processing due to the lack of grammatical structure. Lists of unconnected signs are contrasted with a low-level baseline (e.g., viewing the sign model at rest). These studies have identified a bilateral network for lexical processing which encompasses posterior temporal cortex and the inferior and middle frontal gyri (IFG and MFG; see Figure 2) (e.g., MacSweeney et al. 2006; Li et al. 2014; Emmorey et al. 2015; Banaszkiewicz et al. 2021; Matchin et al. 2022).
Superior temporal regions within this network are also activated for sublexical processes, which can make it difficult to disentangle lexical from sublexical representations and processes. However, an MEG study by Leonard et al. (2012) found that activation in bilateral pSTS and pSTG is modulated by the semantic congruence between a picture and a sign for ASL signers. This neural modulation occurred in the same time window as the picture/word semantic congruence effect in the same regions for English speakers (200–400 ms after sign/word onset). The sensitivity of pSTS and pSTG to semantic content suggests that bilateral superior temporal cortex is involved in encoding semantic lexical representations for both sign and speech.
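Congruence effects of this kind are typically quantified by averaging evoked amplitude within the time window of interest and comparing conditions. The sketch below does this for simulated single-channel epochs; the epoch length, trial counts, and effect size are illustrative assumptions, not values from Leonard et al. (2012).

```python
# A minimal sketch of a windowed congruence analysis: average amplitude
# in the 200-400 ms window per trial, then compare congruent vs.
# incongruent picture-sign trials. Epochs (trials x time) are simulated.
import numpy as np
from scipy.stats import ttest_ind

fs = 250.0
times = np.arange(-0.2, 0.8, 1 / fs)        # epoch from -200 to 800 ms
window = (times >= 0.2) & (times <= 0.4)    # the 200-400 ms window

rng = np.random.default_rng(5)
congruent = rng.standard_normal((40, times.size))
incongruent = rng.standard_normal((40, times.size))
incongruent[:, window] -= 0.3               # simulated congruence effect

t, p = ttest_ind(incongruent[:, window].mean(axis=1),
                 congruent[:, window].mean(axis=1))
print(f"Congruence effect, 200-400 ms: t = {t:.2f}, p = {p:.3g}")
```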
Two neural regions that are implicated in lexical processing for sign language are the posterior middle temporal gyri (pMTG) and inferior frontal gyri (IFG), which exhibit bilateral engagement but with greater activation in the left hemisphere (e.g., Emmorey et al. 2015; MacSweeney et al. 2006). The IFG and pMTG are often coactivated and are thought to be part of a lexicosemantic network that builds phonological representations into semantic and syntactic representations (Matchin and Hickok 2020). This lexicosemantic network is activated for single signs, but it also shares many commonalities with patterns of activation in response to sign sentences (see below). Therefore, this network may be functionally integrated with neural circuits underlying semantic-syntactic processing at the phrase level, and there may be no clear computational or regional demarcation between the comprehension of single signs and sentences.
In addition, neural regions outside the traditional network for spoken languages have sometimes been implicated in lexical processing for sign languages. One such area is the left intraparietal sulcus (IPS; see Figure 2), which has been found to be engaged during semantic processing of single signs when making picture-sign matching judgements (Leonard et al. 2012) or concreteness judgements (Emmorey et al. 2015). This region is not typically activated during the comprehension of single words (Leonard et al. 2012), and it might play a modality-specific role related to sign production: left IPS is often engaged during single-sign production (e.g., Emmorey et al. 2016), and signers might recruit this motor-related region when making overt decisions about signs (rather than during passive viewing).
A lexical feature that is arguably more common in signed than spoken languages is iconicity, the resemblance between a form and its meaning. For example, the form of the ASL sign DRINK depicts the act of drinking from a glass. However, it turns out that this lexical feature has little impact on neural processing. The comprehension and production of iconic and non-iconic signs are equally impaired with sign language aphasia (Atkinson et al. 2005), and evidence from fMRI (Evans et al. 2019), PET (Emmorey, Xu et al., 2011; Emmorey, McCullough et al. 2011), and EEG (Emmorey et al., 2020) indicates that the iconicity of signs does not influence neural activation patterns. Although recent ERP studies have found effects of iconicity on neural responses during picture-naming (Baus and Costa 2015; McGarry et al. 2024), these effects appear to be task-specific and may be related to the mapping between visual features of the picture and iconic features of the signs (Gimeno-Martinez and Baus, 2022; McGarry et al., 2023).
4.2 |. Sentence Processing in Sign Languages
To understand the thoughts being expressed by a signer, the perceiver must use their knowledge of the language’s grammar to interpret the order of lexical items in the signing stream and combine them into phrases with a cohesive meaning. Studies which assess sentence-level comprehension in sign languages have identified the same frontotemporal (also called perisylvian) language network that is activated for lexical-level comprehension, across several sign languages (e.g., MacSweeney et al. 2006; Mayberry et al. 2011; Inubushi and Sakai 2013; Emmorey et al. 2014). Language activation for sentence comprehension is often bilateral, but signers with lesions in the left posterior temporal lobe perform worse on single-sign and sentence comprehension tasks compared to signers with lesions in the right temporal lobe (Hickok et al. 2002). This finding highlights the importance of the left posterior temporal cortex in sign language comprehension.
A question of interest is whether certain regions within this frontotemporal network specifically represent sentence-level (supralexical) processes, that is syntactic and semantic combinatorial processes. Studies on sentence-level processing must find a way to subtract neural activation associated with the lexical items themselves. One method is to contrast two or more conditions which only differ along the syntactic/semantic dimension to isolate neural activation associated with this dimension. MacSweeney et al. (2006) compared sentences with unconnected lists of signs and found that sentences recruited the left IFG and posterior temporal regions more than sign lists. More recently, Matchin et al. (2022) compared activation for three conditions that gradually increased in grammatical complexity: sign lists, simple two-sign sentences, and complex sentences. They found that increasing syntactic and semantic complexity elicited activation in left-lateralised anterior and posterior STS. Activation in this network peaked in the anterior temporal lobe (ATL), a region known to be involved in combinatorial semantics (see Blanco-Elorrieta et al. 2018, for a sign production study implicating the ATL in phrase-building). The regions activated by syntactic/semantic complexity only minimally overlapped with inferior temporal and occipital regions activated by the sign lists alone. Thus, Matchin et al. (2022) argue for a network centred around left STS which is sensitive to sentential structure and is distinct from a more posterior and bilateral network for lexical comprehension.
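The subtraction logic behind these designs can be sketched as a simple general linear model: fit a regressor for each condition and test a contrast between them. The toy example below uses boxcar regressors and a single simulated voxel; real analyses convolve regressors with a haemodynamic response function and fit whole-brain data.

```python
# A toy version of the condition-contrast logic: fit a GLM with
# regressors for sign lists, simple sentences, and complex sentences,
# then test the contrast (complex - list) in one simulated voxel.
import numpy as np

rng = np.random.default_rng(6)
n_scans = 120
X = np.zeros((n_scans, 3))              # columns: list, simple, complex
for c in range(3):                      # assumed block design: 40 scans each
    X[c * 40:(c + 1) * 40, c] = 1.0

# Simulated voxel whose response grows with grammatical complexity
y = X @ np.array([1.0, 1.5, 2.0]) + rng.standard_normal(n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([-1.0, 0.0, 1.0])   # complex sentences > sign lists
print(f"Contrast estimate (complex - list): {contrast @ beta:.2f}")
```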
Other studies have used the same sign stimuli but changed the cognitive task to isolate sentence-level processes. Mayberry et al. (2011), Inubushi and Sakai (2013), and Stroh et al. (2019) all found increased or more widespread left IFG activation when signers were asked to make decisions requiring the grammatical analysis of sentences rather than phonological or lexical decisions. While the left IFG is activated for lexical comprehension along with the pMTG, these two regions seem especially involved in sentential processing. A meta-analysis by Trettenbrein et al. (2021) concluded that within a bilateral frontal-temporal-occipital sign language network, the left IFG is a hub for integrating abstract linguistic information, in a similar manner to spoken and written languages. A network analysis by Liu et al. (2017) with hearing signers and nonsigners reached a similar conclusion: a region in the left IFG was identified as a network hub only in signers.
The large-scale frontotemporal overlap between lexical and syntactic processes is consistent with recent accounts of the language system which postulate an integrated and distributed language system which does not abstract syntax away from lexical and compositional semantics (Shain et al. 2024). In this account, temporal and inferior frontal regions that have been hypothesised to be specialised for syntax process both lexical and phrase-level information dynamically and in parallel as the network builds meaning from linguistic input.
4.3 |. The Supramodal Language Network
Whereas the visuospatial nature of sign language comprehension leads to very different neural computations involved in decoding phonological structure, the neural substrates of signed and spoken comprehension above the lexical level are largely identical: a bilateral, but more left-lateralised, network of frontal and temporal regions. This frontotemporal network is similarly activated by sign, speech, and text comprehension within individual signers (Emmorey et al. 2014; Emmorey et al. 2015; Waters et al. 2007) and has been described as the supramodal language network (Matchin et al. 2022; Liu et al. 2020). Liu et al. (2020) found that hearing signers’ individual patterns of neural activation while comprehending spoken, written, and signed sentences within the frontotemporal network were similar across modalities. This finding suggests that language representations within these regions are supramodal, rather than unimodal representations which occupy common regions.
While regions across the frontotemporal language network may be similarly activated for all language modalities, there may be differences in the linguistic computations and representations subserved by these regions. For example, Evans et al. (2019) found that the semantic representations of individual spoken and signed lexical items (e.g., ‘train’, ‘bus’, ‘apple’, ‘grapes’) were not shared across languages (BSL and English) within an individual, but the neural representation of semantic categories was shared across languages. That is, while the concepts of modes of transportation and fruit might be supramodal in the brain of a bimodal bilingual, the neural representation of the spoken word train may be distinct from that for the sign TRAIN.
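The cross-language decoding logic behind this finding can be illustrated schematically: train a classifier on semantic category from neural patterns evoked in one language, then test it on patterns evoked in the other; above-chance transfer implies a shared, supramodal category code. All patterns below are simulated, and the category labels are illustrative.

```python
# A schematic of cross-language decoding: train on sign-evoked patterns,
# test on word-evoked patterns for the same semantic categories.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_items, n_voxels = 40, 150
category = np.repeat([0, 1], n_items // 2)      # 0 = transport, 1 = fruit

# A supramodal category code shared by both languages, plus
# language-specific noise for the sign and word presentations
shared = rng.standard_normal((n_items, n_voxels))
shared[category == 1, :30] += 1.0
sign_patterns = shared + rng.standard_normal((n_items, n_voxels))
word_patterns = shared + rng.standard_normal((n_items, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(sign_patterns, category)
acc = clf.score(word_patterns, category)
print(f"Sign-to-word category transfer accuracy: {acc:.2f}")
```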
There are also some differences in the higher-level language network that are associated with linguistic constructions unique to sign languages. For example, classifier constructions are complex expressions in which the location and movement of the hands can depict the location and movement of referents, and handshape can represent the type of referent, such as a person or flat object (see papers in Emmorey 2003, and Figure 1B). Both lesion evidence (Hickok et al. 1996; Atkinson et al. 2005) and neuroimaging evidence (Emmorey et al. 2013, 2021) indicate that the right hemisphere is critical for comprehending and producing this type of spatial language (see Corina et al. 2013, for a review). These constructions may particularly necessitate the involvement of the right superior parietal lobule (SPL), which is not typically implicated in language processing. The SPL is a core region in the dorsal stream (see Section 2.3 and Figure 2), a network specialised for action perception which requires precise information about spatial locations (e.g., for reaching and grasping). In signers, it may contribute to the linguistic processing of classifier constructions.
Similarly, co-reference in sign languages involves the association and maintenance of referents with distinct locations in signing space (see Figure 1C), and again both lesion and neuroimaging data suggest a role for the right hemisphere, with neuroimaging data implicating the right SMG (Hickok et al. 1999; Stroh et al. 2019). The SPL, along with the inferior parietal lobule (IPL), is also implicated, although MacSweeney et al. (2002) found involvement of the left, not right, SPL and IPL in spatial reference processing. Interestingly, EEG evidence indicates that these types of constructions nonetheless elicit similar neural responses (an increased N400) when they appear in non-canonical sentence structures, indicating a linguistic (rather than gestural) processing strategy (Krebs et al. 2021).
Beyond linguistic constructions that specifically require spatial processing, increased bilateral activation for signed as compared to spoken languages has been observed since the earliest ERP and fMRI studies on sign language comprehension (see Trettenbrein et al. 2021 for a meta-analysis). However, as noted above, lesion studies indicate that only left hemisphere damage results in frank sign language aphasias (e.g., Atkinson et al. 2005; Poizner et al. 1987). Laterality differences in the neuroimaging data have been argued to be stimulus-specific: the addition of a visual display of the interlocutor (required for sign, but not for speech) recruits right-hemispheric regions (MacSweeney, Woll, Campbell, McGuire, et al. 2002; Capek et al. 2004). Occipital and parietal regions are bilaterally activated by necessity, as each hemisphere processes visual and spatial information from the contralateral side of the body. Frontotemporal language areas, by contrast, still display a bias to the left hemisphere, supporting the importance of this hemisphere for supramodal language processes (see Trettenbrein et al. 2023, for a review). Nonetheless, some right-hemispheric regions (particularly in posterior temporal and inferior parietal cortex) seem to engage in linguistic processes, but the functional asymmetries between left and right regions remain underspecified.
5 |. Conclusion and Future Directions
In summary, the neural network for sign language comprehension reveals both striking similarities and important differences compared to spoken language processing. The early stages of sign comprehension are necessarily modality-specific: visual cortex, motion processing areas, and regions specialised for perceiving hands and faces decode the complex visuospatial signal into stable phonological representations. These are processes that have no direct parallel in auditory speech processing. However, once these visual-phonological representations are established, sign and spoken languages converge on a remarkably similar frontotemporal network for lexical and sentence-level processing. The existence of this supramodal language network suggests that the brain’s capacity for combinatorial semantics and syntax operates largely independently of sensory modality. The key insight is that while the ‘front end’ of language processing is shaped by whether we see or hear (or feel, for tactile languages) linguistic input, the ‘back end’, where meaning is constructed from structured linguistic elements, appears to operate as a fundamental, modality-independent property of language. We conclude that the human brain is able to extract linguistic structure and meaning from radically different sensory channels in equal measure.
An important direction for future research is to investigate the neurobiology and neurodevelopment of sign language processing in deaf children and adults who were not born into deaf signing families and who constitute the majority (90%–95%) of the deaf signing population. Almost all previous neuroimaging work has restricted the participants to ‘native’ signers in order to more easily make comparisons with spoken language studies. Although there have been some studies of hearing adults learning a sign language later as a second language (e.g. Banaszkiewicz et al. 2024), much less is known about how variation in the early developmental experiences of deaf people impacts the sign language comprehension network. Studies by Twomey et al. (2020) and Mayberry et al. (2011) suggest that the age of sign language acquisition can impact the neural response in left posterior STC and occipital cortex, but more studies with this population are critical given the possible effects of early language deprivation on neural structure and function (Hall 2017; Cheng et al. 2023).
In this review, we have described processes in a largely feed-forward fashion, following intuitions about the temporal order of linguistic computations, e.g., visual processing occurs before phonological assembly. However, we emphasise that there is also evidence for considerable top-down and parallel processes, which have yet to be fully described. Therefore, while the ‘front end’ of the sign language network may appear to be primarily composed of domain-general visual processes, it is likely that there are considerable specific adaptations within these regions which enable more efficient linguistic processing of visual input. We also note that neural regions are often activated for multiple linguistic processes, reflecting a distributed network within which linguistic functions are not modularised into typical functional contrasts such as phonology versus syntax. Nonetheless, some regions may serve distinct computational roles within the network, and novel methodological advances (e.g., new fMRI and EEG classification techniques, network localisers) are helping researchers understand the temporal neural dynamics of processes and representations within shared networks. These advances may reveal, for instance, how phonological structure might be represented within different perceptual and motor regions; how domain-general social perception and visual processing of linguistic structure might be organised within posterior STC; and how occipital regions involved in early visual processes may differentiate between linguistic and nonlinguistic stimuli.
Acknowledgements
We would like to thank Allison Bassett for her work supporting this manuscript. We also would like to thank Lorna Quandt, Stephen McCullough, and Laurie Glezer for discussions of this manuscript.
Funding:
This work was supported by NIH Grant R01 DC010997 (to KE) and the NSF Graduate Research Fellowship under Grant No. 2234692 (to BTC).
Footnotes
Ethics Statement
The authors have nothing to report.
Conflicts of Interest
The authors declare no conflicts of interest.
Data Availability Statement
No data were generated during the creation of this manuscript.
References
- Almeida D, Poeppel D, and Corina D. 2016. “The Processing of Biologically Plausible and Implausible Forms in American Sign Language: Evidence for Perceptual Tuning.” Language, Cognition and Neuroscience 31, no. 3: 361–374. 10.1080/23273798.2015.1100315. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Atkinson J, Marshall J, Woll B, and Thacker A. 2005. “Testing Comprehension Abilities in Users of British Sign Language Following CVA.” Brain and Language 94, no. 2: 233–248. 10.1016/j.bandl.2004.12.008. [DOI] [PubMed] [Google Scholar]
- Baker SE, Idsardi WJ, Golinkoff RM, and Petitto L-A. 2005. “The Perception of Handshapes in American Sign Language.” Memory & Cognition 33, no. 5: 887–904: Retrieved November 11, 2022, from. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2730958/. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Banaszkiewicz A, Costello B, and Marchewka A. 2024. “Early Language Experience and Modality Affect Parietal Cortex Activation in Different Hemispheres: Insights From Hearing Bimodal Bilinguals.” Neuropsychologia 204: 108973. 10.1016/j.neuropsychologia.2024.108973. [DOI] [PubMed] [Google Scholar]
- Banaszkiewicz A, Bola Ł, Matuszewski J, et al. 2021. “The Role of the Superior Parietal Lobule in Lexical Processing of Sign Language: Insights From fMRI and TMS.” Cortex 135: 240–254. 10.1016/j.cortex.2020.10.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baus C, and Costa A. 2015. “On the Temporal Dynamics of Sign Production: An ERP Study in Catalan Sign Language (LSC).” Brain Research 1609: 40–53. 10.1016/j.brainres.2015.03.013. [DOI] [PubMed] [Google Scholar]
- Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H, and Liu G. 2001. “Impact of Early Deafness and Early Exposure to Sign Language on the Cerebral Organization for Motion Processing.” Journal of Neuroscience 21, no. 22: 8931–8942. 10.1523/JNEUROSCI.21-22-08931.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Best CT, Mathur G, Miranda KA, and Lillo-Martin D. 2010. “Effects of Sign Language Experience on Categorical Perception of Dynamic ASL Pseudosigns.” Attention, Perception, & Psychophysics 72, no. 3: 747–762. 10.3758/APP.72.3.747. [DOI] [Google Scholar]
- Bigand F, Prigent E, and Braffort A. 2020. “Person Identification Based on Sign Language Motion: Insights From Human Perception and Computational Modeling.” In Proceedings of the 7th International Conference on Movement and Computing, 1–7. 10.1145/3401956.3404187. [DOI] [Google Scholar]
- Binder JR 2017. “Current Controversies on Wernicke’s Area and Its Role in Language.” Current Neurology and Neuroscience Reports 17, no. 8: 58. 10.1007/s11910-017-0764-8. [DOI] [PubMed] [Google Scholar]
- Blanco-Elorrieta E, Kastner I, Emmorey K, and Pylkkänen L. 2018. “Shared Neural Correlates for Building Phrases in Signed and Spoken Language.” Scientific Reports 8, no. 1: 5492: Article 1. 10.1038/s41598-018-23915-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brentari D 2019. Sign Language Phonology. Cambridge University Press. 10.1017/9781316286401. [DOI] [Google Scholar]
- Brentari D, ed. 2010. Sign Languages. Cambridge University Press. [Google Scholar]
- Brookshire G, Lu J, Nusbaum HC, Goldin-Meadow S, and Casasanto D. 2017. “Visual Cortex Entrains to Sign Language.” Proceedings of the National Academy of Sciences 114, no. 24: 6352–6357. 10.1073/pnas.1620350114. [DOI] [Google Scholar]
- Brozdowski C, and Emmorey K. 2020. “Shadowing in the Manual Modality.” Acta Psychologica 208: 103092. 10.1016/j.actpsy.2020.103092. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Capek CM, Bavelier D, Corina D, Newman AJ, Jezzard P, and Neville HJ. 2004. “The Cortical Organization of Audio-Visual Sentence Comprehension: An fMRI Study at 4 Tesla.” Cognitive Brain Research 20, no. 2: 111–119. 10.1016/J.COGBRAINRES.2003.10.014. [DOI] [PubMed] [Google Scholar]
- Capek CM, Waters D, Woll B, et al. 2008. “Hand and Mouth: Cortical Correlates of Lexical Processing in British Sign Language and Speech-reading English.” Journal of Cognitive Neuroscience 20, no. 7: 1220–1234. 10.1162/jocn.2008.20084. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cardin V, Orfanidou E, Kästner L, et al. 2016. “Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.” Journal of Cognitive Neuroscience 28, no. 1: 20–40. 10.1162/jocna00872. [DOI] [PubMed] [Google Scholar]
- Caselli NK, Sehyr ZS, Cohen-Goldberg AM, and Emmorey K. 2017. “ASL-LEX: A Lexical Database of American Sign Language.” Behavior Research Methods 49, no. 2: 784–801. 10.3758/s13428-016-0742-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cheng Q, Roth A, Halgren E, Klein D, Chen J-K, and Mayberry RI. 2023. “Restricted Language Access During Childhood Affects Adult Brain Structure in Selective Language Regions.” Proceedings of the National Academy of Sciences 120, no. 7: e2215423120. 10.1073/pnas.2215423120. [DOI] [Google Scholar]
- Condy EE, Miguel HO, Millerhagen J, et al. 2021. “Characterizing the Action-Observation Network Through Functional Near-Infrared Spectroscopy: A Review.” Frontiers in Human Neuroscience 15: 627983. 10.3389/fnhum.2021.627983. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Corina D, Lawyer L, and Cates D. 2013. “Cross-Linguistic Differences in the Neural Representation of Human Language: Evidence From Users of Signed Languages.” Frontiers in Psychology 3: 587. 10.3389/fpsyg.2012.00587. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Corina D, Grosvald M, and Lachaud C. 2011. “Perceptual Invariance or Orientation Specificity in American Sign Language? Evidence From Repetition Priming for Signs and Gestures.” Language & Cognitive Processes 26, no. 8: 1102–1135. 10.1080/01690965.2010.517955. [DOI] [Google Scholar]
- Corina D, McBurney SL, Dodrill C, Hinshaw K, Brinkley J, and Ojemann G. 1999. “Functional Roles of Broca’s Area and SMG: Evidence From Cortical Stimulation Mapping in a Deaf Signer.” NeuroImage 10, no. 5: 570–581. 10.1006/NIMG.1999.0499. [DOI] [PubMed] [Google Scholar]
- Corina D, Chiu Y-S, Knapp H, Greenwald R, San Jose-Robertson L, and Braun A. 2007. “Neural Correlates of Human Action Observation in Hearing and Deaf Subjects.” Brain Research 1152: 111–129. 10.1016/j.brainres.2007.03.054. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deen B, Koldewyn K, Kanwisher N, and Saxe R. 2015. “Functional Organization of Social Perception and Cognition in the Superior Temporal Sulcus.” Cerebral Cortex 25, no. 11: 4596–4609. 10.1093/cercor/bhv111. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Destrieux C, Fischl B, Dale A, and Halgren E. 2010. “Automatic Parcellation of Human Cortical Gyri and Sulci Using Standard Anatomical Nomenclature.” NeuroImage 53, no. 1: 1–15. 10.1016/j.neuroimage.2010.06.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eccarius PN 2008. “A Constraint -based Account of Handshape Contrast in Sign Languages.” Theses and Dissertations Available from ProQuest: 1–182. [Google Scholar]
- Edelman S, and Bülthoff HH. 1992. “Orientation Dependence in the Recognition of Familiar and Novel Views of Three-Dimensional Objects.” Vision Research 32, no. 12: 2385–2400. 10.1016/0042-6989(92)90102-o.
- Emmorey K, and McCullough S. 2009. “The Bimodal Bilingual Brain: Effects of Sign Language Experience.” Brain and Language 109, no. 2: 124–132. 10.1016/j.bandl.2008.03.005.
- Emmorey K, Brozdowski C, and McCullough S. 2021. “The Neural Correlates for Spatial Language: Perspective-Dependent and -Independent Relationships in American Sign Language and Spoken English.” Brain and Language 223: 105044. 10.1016/j.bandl.2021.105044.
- Emmorey K, ed. 2003. Perspectives on Classifier Constructions in Sign Languages. Psychology Press.
- Emmorey K, Xu J, and Braun A. 2011. “Neural Responses to Meaningless Pseudosigns: Evidence for Sign-Based Phonetic Processing in Superior Temporal Cortex.” Brain and Language 117, no. 1: 34–38. 10.1016/j.bandl.2010.10.003.
- Emmorey K, Xu J, Gannon P, Goldin-Meadow S, and Braun A. 2010. “CNS Activation and Regional Connectivity During Pantomime Observation: No Engagement of the Mirror Neuron System for Deaf Signers.” NeuroImage 49, no. 1: 994–1005. 10.1016/j.neuroimage.2009.08.001.
- Emmorey K, Winsler K, Midgley KJ, Grainger J, and Holcomb PJ. 2020. “Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language.” Neurobiology of Language 1, no. 2: 249–267. 10.1162/nol_a_00012.
- Emmorey K, McCullough S, and Brentari D. 2003. “Categorical Perception in American Sign Language.” Language & Cognitive Processes 18, no. 1: 21–45. 10.1080/01690960143000416.
- Emmorey K, McCullough S, and Weisberg J. 2015. “Neural Correlates of Fingerspelling, Text, and Sign Processing in Deaf American Sign Language–English Bilinguals.” Language, Cognition and Neuroscience 30, no. 6: 749–767. 10.1080/23273798.2015.1014924.
- Emmorey K, McCullough S, Mehta S, and Grabowski TJ. 2014. “How Sensory-Motor Systems Impact the Neural Organization for Language: Direct Contrasts Between Spoken and Signed Language.” Frontiers in Psychology 5: 484. 10.3389/fpsyg.2014.00484.
- Emmorey K, McCullough S, Mehta S, Ponto LLB, and Grabowski TJ. 2011. “Sign Language and Pantomime Production Differentially Engage Frontal and Parietal Cortices.” Language & Cognitive Processes 26, no. 7: 878–901. 10.1080/01690965.2010.492643.
- Emmorey K, McCullough S, Mehta S, Ponto LLB, and Grabowski TJ. 2013. “The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language.” Journal of Cognitive Neuroscience 25, no. 4: 517–533. 10.1162/jocn_a_00339.
- Emmorey K, Mehta S, McCullough S, and Grabowski TJ. 2016. “The Neural Circuits Recruited for the Production of Signs and Fingerspelled Words.” Brain and Language 160: 30–41. 10.1016/j.bandl.2016.07.003.
- Evans S, Price CJ, Diedrichsen J, Gutierrez-Sigut E, and MacSweeney M. 2019. “Sign and Speech Share Partially Overlapping Conceptual Representations.” Current Biology 29, no. 21: 3739–3747.e5. 10.1016/j.cub.2019.08.075.
- Gimeno-Martínez M, and Baus C. 2022. “Iconicity in Sign Language Production: Task Matters.” Neuropsychologia 167: 108166. 10.1016/j.neuropsychologia.2022.108166.
- Hall WC 2017. “What You Don’t Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.” Maternal and Child Health Journal 21, no. 5: 961–965. 10.1007/s10995-017-2287-y.
- Hickok G, Say K, Bellugi U, and Klima ES. 1996. “The Basis of Hemispheric Asymmetries for Language and Spatial Cognition: Clues From Focal Brain Damage in Two Deaf Native Signers.” Aphasiology 10, no. 6: 577–591. 10.1080/02687039608248438.
- Hickok G, Wilson M, Clark K, Klima ES, Kritchevsky M, and Bellugi U. 1999. “Discourse Deficits Following Right Hemisphere Damage in Deaf Signers.” Brain and Language 66, no. 2: 233–248. 10.1006/brln.1998.1995.
- Hickok G, Love-Geffen T, and Klima ES. 2002. “Role of the Left Hemisphere in Sign Language Comprehension.” Brain and Language 82, no. 2: 167–178. 10.1016/s0093-934x(02)00013-5.
- Hill JC, Lillo-Martin DC, and Wood SK. 2018. Sign Languages: Structures and Contexts. Routledge.
- Inubushi T, and Sakai K. 2013. “Functional and Anatomical Correlates of Word-, Sentence-, and Discourse-Level Integration in Sign Language.” Frontiers in Human Neuroscience 7: 681. 10.3389/fnhum.2013.00681.
- Janzen T, and Shaffer B, eds. 2023. Signed Language and Gesture Research in Cognitive Linguistics. De Gruyter Mouton.
- Klima ES, Tzeng OJL, Fok YYA, Bellugi U, Corina D, and Bettger JG. 1999. “From Sign to Script: Effects of Linguistic Experience on Perceptual Categorization.” Journal of Chinese Linguistics Monograph Series 13: 96–129.
- Krebs J, Malaia E, Wilbur RB, and Roehm D. 2021. “Psycholinguistic Mechanisms of Classifier Processing in Sign Language.” Journal of Experimental Psychology: Learning, Memory, and Cognition 47, no. 6: 998–1011. 10.1037/xlm0000958.
- Kubicek E, and Quandt LC. 2019. “Sensorimotor System Engagement During ASL Sign Perception: An EEG Study in Deaf Signers and Hearing Non-Signers.” Cortex 119: 457–469. 10.1016/j.cortex.2019.07.016.
- Lander K, and Butcher N. 2015. “Independence of Face Identity and Expression Processing: Exploring the Role of Motion.” Frontiers in Psychology 6: 255. 10.3389/fpsyg.2015.00255.
- Leonard MK, Lucas B, Blau S, Corina DP, and Chang EF. 2020. “Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language.” Current Biology 30, no. 22: 4342–4351.e3. 10.1016/j.cub.2020.08.048.
- Leonard MK, Ramirez NF, Torres C, et al. 2012. “Signed Words in the Congenitally Deaf Evoke Typical Late Lexicosemantic Responses With No Early Visual Responses in Left Superior Temporal Cortex.” Journal of Neuroscience 32, no. 28: 9700–9705. 10.1523/JNEUROSCI.1002-12.2012.
- Levänen S, Uutela K, Salenius S, and Hari R. 2001. “Cortical Representation of Sign Language: Comparison of Deaf Signers and Hearing Non-Signers.” Cerebral Cortex 11, no. 6: 506–512. 10.1093/cercor/11.6.506.
- Li Q, Xia S, Zhao F, and Qi J. 2014. “Functional Changes in People With Different Hearing Status and Experiences of Using Chinese Sign Language: An fMRI Study.” Journal of Communication Disorders 50: 51–60. 10.1016/j.jcomdis.2014.05.001.
- Liu L, Yan X, Li H, Gao D, and Ding G. 2020. “Identifying a Supramodal Language Network in Human Brain With Individual Fingerprint.” NeuroImage 220: 117131. 10.1016/j.neuroimage.2020.117131.
- Liu L, Yan X, Liu J, et al. 2017. “Graph Theoretical Analysis of Functional Network for Comprehension of Sign Language.” Brain Research 1671: 55–66. 10.1016/j.brainres.2017.06.031.
- MacSweeney M, Woll B, Campbell R, et al. 2002. “Neural Correlates of British Sign Language Comprehension: Spatial Processing Demands of Topographic Language.” Journal of Cognitive Neuroscience 14, no. 7: 1064–1075. 10.1162/089892902320474517.
- MacSweeney M, Woll B, Campbell R, et al. 2002a. “Neural Systems Underlying British Sign Language and Audio-Visual English Processing in Native Users.” Brain 125, no. 7: 1583–1593. 10.1093/brain/awf153.
- MacSweeney M, Campbell R, Woll B, et al. 2004. “Dissociating Linguistic and Nonlinguistic Gestural Communication in the Brain.” NeuroImage 22, no. 4: 1605–1618. 10.1016/j.neuroimage.2004.03.015.
- MacSweeney M, Campbell R, Woll B, et al. 2006. “Lexical and Sentential Processing in British Sign Language.” Human Brain Mapping 27, no. 1: 63–76. 10.1002/hbm.20167.
- Malaia EA, and Wilbur RB. 2020. “Syllable as a Unit of Information Transfer in Linguistic Communication: The Entropy Syllable Parsing Model.” WIREs Cognitive Science 11, no. 1: e1518. 10.1002/wcs.1518.
- Malaia E, Borneman JD, and Wilbur RB. 2018. “Information Transfer Capacity of Articulators in American Sign Language.” Language and Speech 61, no. 1: 97–112. 10.1177/0023830917708461.
- Matchin W, and Hickok G. 2020. “The Cortical Organization of Syntax.” Cerebral Cortex 30, no. 3: 1481–1498. 10.1093/cercor/bhz180.
- Matchin W, Ilkbasaran D, Hatrak M, et al. 2022. “The Cortical Organization of Syntactic Processing Is Supramodal: Evidence From American Sign Language.” Journal of Cognitive Neuroscience 34, no. 2: 224–235. 10.1162/jocn_a_01790.
- Mayberry RI, Chen J-K, Witcher P, and Klein D. 2011. “Age of Acquisition Effects on the Functional Organization of Language in the Adult Brain.” Brain and Language 119, no. 1: 16–29. 10.1016/j.bandl.2011.05.007.
- McCullough S, Saygin AP, Korpics F, and Emmorey K. 2012. “Motion-Sensitive Cortex and Motion Semantics in American Sign Language.” NeuroImage 63, no. 1: 111–118. 10.1016/j.neuroimage.2012.06.029.
- McCullough S, Emmorey K, and Sereno M. 2005. “Neural Organization for Recognition of Grammatical and Emotional Facial Expressions in Deaf ASL Signers and Hearing Nonsigners.” Cognitive Brain Research 22, no. 2: 193–203. 10.1016/j.cogbrainres.2004.08.012.
- McGarry ME, Midgley KJ, Holcomb PJ, and Emmorey K. 2023. “How (And Why) Does Iconicity Affect Lexical Access: An Electrophysiological Study of American Sign Language.” Neuropsychologia 183: 108516. 10.1016/j.neuropsychologia.2023.108516.
- McGarry M, Midgley KJ, Holcomb PJ, and Emmorey K. 2024. “The Role of Perceptual vs. Motoric Iconicity in Sign Production: An ERP Investigation of ASL.” Neuropsychologia 203: 108966. 10.1016/j.neuropsychologia.2024.108966.
- McMahon E, Bonner MF, and Isik L. 2023. “Hierarchical Organization of Social Action Features Along the Lateral Visual Pathway.” Current Biology 33, no. 23: 5035–5047.e8. 10.1016/j.cub.2023.10.015.
- Meade G, Lee B, Massa N, Holcomb PJ, Midgley KJ, and Emmorey K. 2022. “Are Form Priming Effects Phonological or Perceptual? Electrophysiological Evidence From American Sign Language.” Cognition 220: 104979. 10.1016/j.cognition.2021.104979.
- Milner AD, and Goodale MA. 2008. “Two Visual Systems Re-Viewed.” Neuropsychologia 46, no. 3: 774–785. 10.1016/j.neuropsychologia.2007.10.005.
- Neville HJ, Bavelier D, Corina D, et al. 1998. “Cerebral Organization for Language in Deaf and Hearing Subjects: Biological Constraints and Effects of Experience.” Proceedings of the National Academy of Sciences of the United States of America 95, no. 3: 922–929. 10.1073/pnas.95.3.922.
- Orfanidou E, Adam R, Morgan G, and McQueen JM. 2010. “Recognition of Signed and Spoken Language: Different Sensory Inputs, the Same Segmentation Procedure.” Journal of Memory and Language 62, no. 3: 272–283. 10.1016/j.jml.2009.12.001.
- Peelen MV, and Downing PE. 2007. “The Neural Basis of Visual Body Perception.” Nature Reviews Neuroscience 8, no. 8: 636–648. 10.1038/nrn2195.
- Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, and Evans AC. 2000. “Speech-Like Cerebral Activity in Profoundly Deaf People Processing Signed Languages: Implications for the Neural Basis of Human Language.” Proceedings of the National Academy of Sciences of the United States of America 97, no. 25: 13961–13966. 10.1073/pnas.97.25.13961.
- Pfau R, Steinbach M, and Woll B, eds. 2012. Sign Language: An International Handbook. De Gruyter Mouton.
- Pitcher D, and Ungerleider LG. 2021. “Evidence for a Third Visual Pathway Specialized for Social Perception.” Trends in Cognitive Sciences 25, no. 2: 100–110. 10.1016/j.tics.2020.11.006.
- Poizner H 1983. “Perception of Movement in American Sign Language: Effects of Linguistic Structure and Linguistic Experience.” Perception & Psychophysics 33, no. 3: 215–231. 10.3758/BF03202858.
- Poizner H, Klima E, and Bellugi U. 1987. What the Hands Reveal About the Brain. MIT Press.
- Poizner H, Bellugi U, and Lutes-Driscoll V. 1981. “Perception of American Sign Language in Dynamic Point-Light Displays.” Journal of Experimental Psychology: Human Perception and Performance 7, no. 2: 430–440. 10.1037/0096-1523.7.2.430.
- Quandt LC, and Willis AS. 2021. “Earlier and More Robust Sensorimotor Discrimination of ASL Signs in Deaf Signers During Imitation.” Language, Cognition and Neuroscience 36, no. 10: 1281–1297. 10.1080/23273798.2021.1925712.
- Reilly JS, McIntire ML, and Bellugi U. 1990. “Faces: The Relationship Between Language and Affect.” In From Gesture to Language in Hearing and Deaf Children, edited by Volterra V and Erting JC, 128–141. Springer-Verlag.
- Rivolta CL 2023. Temporal Structure in Language Production and Processing: A Cross-Linguistic Comparison of Spoken and Sign Language. Doctoral dissertation, University of the Basque Country. ADDI institutional repository. https://addi.ehu.es/handle/10810/62180.
- Sandler W, and Lillo-Martin D. 2006. Sign Language and Linguistic Universals. Cambridge University Press. 10.1017/CBO9781139163910.
- Sehyr ZS, Caselli N, Cohen-Goldberg AM, and Emmorey K. 2021. “The ASL-LEX 2.0 Project: A Database of Lexical and Phonological Properties for 2,723 Signs in American Sign Language.” The Journal of Deaf Studies and Deaf Education 26, no. 2: 263–277. 10.1093/deafed/enaa038.
- Shain C, Kean H, Casto C, et al. 2024. “Distributed Sensitivity to Syntax and Semantics Throughout the Language Network.” Journal of Cognitive Neuroscience 36, no. 7: 1–43. 10.1162/jocn_a_02164.
- Stroh A-L, Rösler F, Dormal G, et al. 2019. “Neural Correlates of Semantic and Syntactic Processing in German Sign Language.” NeuroImage 200: 231–241. 10.1016/j.neuroimage.2019.06.025.
- Trettenbrein PC, Papitto G, Friederici AD, and Zaccarella E. 2021. “Functional Neuroanatomy of Language Without Speech: An ALE Meta-Analysis of Sign Language.” Human Brain Mapping 42, no. 3: 699–712. 10.1002/hbm.25254.
- Trettenbrein P, Zaccarella E, and Friederici AD. 2023. “Functional and Structural Brain Asymmetries in Sign Language Processing.” Handbook of Clinical Neurology 208: 327–350. https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_3565233.
- Twomey T, Price CJ, Waters D, and MacSweeney M. 2020. “The Impact of Early Language Exposure on the Neural System Supporting Language in Deaf and Hearing Adults.” NeuroImage 209: 116411. 10.1016/j.neuroimage.2019.116411.
- Waters D, Campbell R, Capek CM, et al. 2007. “Fingerspelling, Signed Language, Text and Picture Processing in Deaf Native Signers: The Role of the Mid-Fusiform Gyrus.” NeuroImage 35, no. 3: 1287–1302. 10.1016/j.neuroimage.2007.01.025.
- Watkins F, Abdlkarim D, Winter B, and Thompson RL. 2024. “Viewing Angle Matters in British Sign Language Processing.” Scientific Reports 14, no. 1: 1043. 10.1038/s41598-024-51330-1.
Data Availability Statement
No data were generated during the creation of this manuscript.
