Language processing across modalities: Insights from bimodal bilingualism

In: Cognitive Sciences 2010, Volume 5, Issue 1. Editor: Miao-Kun Sun, pp. 57-98. © 2010 Nova Science Publishers, Inc.

Anthony Shook and Viorica Marian
Dept. of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208 USA

Abstract

Recent research suggests differences between bimodal bilinguals, who are fluent in a spoken and a signed language, and unimodal bilinguals, who are fluent in two spoken languages, in regard to the architecture and processing patterns within the bilingual language system. Here we discuss ways in which sign languages are represented and processed and examine recent research on bimodal bilingualism. It is suggested that sign languages display processing characteristics similar to spoken languages, such as the existence of a sign counterpart to phonological priming and the existence of a visual-spatial loop analogous to a phonological loop in working memory. Given the similarities between spoken and signed languages, we consider how they may interact in bimodal bilinguals, whose two languages differ in modality. Specifically, we consider the way in which bimodal bilingual studies may inform current knowledge about the bilingual language processing system, with a particular focus on top-down influences and the fast integration of information from separate modalities. Research from studies looking at both production and perception suggests that bimodal bilinguals, like unimodal bilinguals, process their languages in parallel, with simultaneous access to both lexical and morphosyntactic elements.

Correspondence and requests for reprints: Anthony Shook, or Viorica Marian, Ph.D., Dept. of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208 USA. Office: (847) 491-2420; Lab: (847) 467-2709.
However, given the lack of overlap at the phonological level (the presumed initial locus of parallel activation in unimodal studies) in bimodal bilinguals' two languages, we conclude that there are key differences in processing patterns and architecture between unimodal and bimodal language systems. The differences and similarities between unimodal and bimodal bilinguals are placed in the context of current models of bilingual language processing, which are evaluated on the basis of their ability to explain the patterns observed in bimodal bilingual studies. We propose ways in which current models of bilingual language processing may be altered in order to accommodate results from bimodal bilingualism. We conclude that bimodal bilingualism can inform the development of models of bilingual language processing, and provide unique insights into the interactive nature of the bilingual language system in general.

"The analytic mechanisms of the language faculty seem to be triggered in much the same ways, whether the input is auditory, visual, even tactual…"
- Noam Chomsky (2000, p. 100)

1. Bimodal Bilingualism and the Language System

One of the most striking features of bimodal bilingualism (which refers to fluency in both a signed and a spoken language) is the total lack of phonological overlap between the two languages. Bimodal bilinguals are able to create distinct, meaningful utterances with two separate sets of articulators, and have two output channels, vocal and manual. In contrast, unimodal bilinguals, whose two languages are spoken, utilize only one modality for both input and output. Moreover, bimodal bilinguals are able to perceive distinct linguistic information in two domains, via listening to speech and via visually perceiving signs.
Although unimodal bilinguals utilize visual information as well, it acts primarily as a cue to facilitate understanding of auditory input, rather than providing a source of visual linguistic input independent from the auditory signal.

Research on bimodal bilingualism carries implications for understanding general language processing. For example, one issue at the heart of conceptual modeling of language is the level of influence of bottom-up versus top-down processing. Specifically, when we process language, how much information do we gain from the signal itself (e.g., bottom-up input from phonological or orthographic features) and how much do we gain from higher-order knowledge (e.g., top-down input from background information, context, etc.)? While the existence of both top-down and bottom-up influences is universally acknowledged, the degree to which each holds sway over the language processing system is not entirely clear.

Another important question is how bilinguals integrate auditory and visual information when processing language, and whether that process differs between unimodal and bimodal bilinguals. To what extent does the ability to retrieve information from two separate modalities facilitate language comprehension? Is information from separate modalities accessed simultaneously or serially?

Given the alternate input/output structure found in bimodal bilinguals, and the lack of linguistic overlap between signed and spoken languages, it is important to consider what studies of bimodal bilingualism can tell us about bilingual language processing in general. Since the vast majority of bilingual research is performed with unimodal bilinguals, it is somewhat unclear what similarities and differences exist between the two groups. Furthermore, comparing both the structural-linguistic and cognitive-processing aspects of unimodal and bimodal bilingual groups can illuminate the effects of modality on language processing.
Presently, we will review recent research on bimodal bilingualism and contrast it with results from unimodal studies in order to expand understanding of bilingual language processing. To study the influence that research on bimodal bilingualism has on both the mechanisms of bilingual processing and the architecture of the underlying language system, we will outline several models of bilingual language processing and examine how well they account for the results seen in recent bimodal bilingual research.

The present article consists of two main parts. The first part focuses on linguistic and cognitive aspects of sign languages in native signers and in bimodal bilinguals. Specifically, we will (a) compare and contrast how sign languages are represented linguistically, by examining previous work on the structural characteristics of sign languages, (b) discuss the cognitive patterns of sign language, in contrast to spoken language, by examining the similarities between phenomena found in spoken language research and those found in sign language research, and (c) examine results from studies looking first at language production and then at language perception in bimodal bilinguals, which will be directly contrasted with previous results from unimodal bilingual studies. In the second part of the article, we will introduce several models of bilingual language processing, focusing on models of both (a) language production and (b) language perception, and discuss them in light of the results from bimodal bilingual studies. We will conclude by suggesting that spoken and signed languages are represented similarly at the cognitive level and interact in bimodal bilinguals much the same way two spoken languages interact in unimodal bilinguals, and that bimodal bilingual research can highlight both the strengths and weaknesses of current models of bilingual language processing.

2. Representation and Processing of Sign Languages

2.1. Structure of Sign Languages

Current models that explain the structure of sign languages are primarily based on the study of American Sign Language and its contrast with spoken English. It is important to note that just as spoken languages differ in phonology, morphology, and syntax, so do sign languages. For example, not only do American Sign Language (ASL) and British Sign Language (BSL) utilize separate lexicons, they display morphological distinctions as well, such as BSL's use of a two-handed finger-spelling system compared to ASL's one-handed finger-spelling system (Sutton-Spence & Woll, 1999). There are also phonological distinctions, in that phonological segments in one language do not necessarily exist in the other, much like in spoken languages (sign languages use handshape, location of the sign in space, and motion of the sign as phonological parameters). We will focus mainly on the phonological aspects of sign language structure, while briefly discussing certain morphosyntactic traits. Given that much of the current body of knowledge about sign language structure is based on American Sign Language, we will focus specifically on the relationship between the phonologies of ASL and spoken English, in order to highlight some of the fundamental differences between spoken and signed languages in general.

The most salient difference between signed and spoken languages is that signed languages exist in a spatial environment and are expressed grammatically through the manipulation of the body, notably the hands and face, within a linguistic sign-space, which is a physical area centered around the front of the speaker's body. Like actors on a stage, the mechanics of grammar occur within this sign-space. This variance in articulatory location and modality results in interesting syntactic differences.
While English uses prepositional information to determine the location and relation of objects, ASL creates schematic layouts of objects within the sign-space to determine their relationship in space, as well as to show motion. For example, rather than describing the movement of some object from point A to point B, the lexical item is presented and then physically moved. Syntactically, movement of verbs within the sign-space can determine the objects and subjects of sentences. Consider, for instance, the sign "give." Whereas in English the difference between "I give you X" and "You give me X" is determined by word order and thematic role (subject "I" versus object "me"), ASL differentiates the two through variation in the direction of movement of the verb. The two nominal entries are placed in the sign-space, and the sentence structure is determined by whether the speaker moves the "give" sign from himself to his interlocutor, or vice versa (see Figure 1). This system also allows for ASL to have

Figure 1: Signs for the ASL phrases "I give you" and "you give me," and the word "bite". All images © 2006. Used by permission.