also BSM

a test developed to elicit sentences containing specific grammatical structures, using pictures. This instrument was used in a number of MORPHEME STUDIES.

Ellis 2008; Burt et al. 1973

biolinguistics

also biological linguistics

a developing branch of linguistics which studies the biological preconditions for language development and use in human beings, from the viewpoints of both the history of language in the race, and the development of language in the individual. Topics of common interest to the two subject-areas involved include the genetic transmission of language, neurophysiological models of language production, the anatomical parallels between human and other species, and the development of pathological forms of language behavior. In recent years, Chomsky has called his entire GENERATIVE GRAMMAR an exercise in biolinguistics, claiming that it is possible to ask a question beyond

EXPLANATORY ADEQUACY: How did the language faculty evolve in the human species?

Crystal 2008

biological linguistics

another term for BIOLINGUISTICS

bioprogram hypothesis

the hypothesis that children are born with innate abilities to make basic semantic distinctions that lead to particular types of grammar. Human languages differ rather substantially in their grammatical structures (e.g., in their basic word order). However, CREOLEs all over the world appear to be strikingly similar in their grammar: all creoles look pretty much alike, regardless of where they came into existence or of which languages provided most of the input into them. The British-born American linguist Derek Bickerton has proposed an explanation for this observation. Since creoles are newly created languages, built out of the grammarless PIDGINs which preceded them, and since the children who create a creole are obliged to build its grammar for themselves, Bickerton argues that there must be some kind of innate machinery which determines the nature of that grammar. He calls this machinery the bioprogram, and he sees the bioprogram as an innate default structure for language which is always implemented by children unless they find themselves learning an adult language with a different structure, in which case they learn that instead. The bioprogram hypothesis therefore represents a rather specific and distinctive version of the INNATENESS HYPOTHESIS. It has attracted a great deal of attention, but it remains deeply controversial.

Bickerton 1984; Richards & Schmidt 2010; Trask 2005

bootstrapping

in the study of CHILD LANGUAGE ACQUISITION, a suggested discovery procedure whereby children make deductions about the semantics or syntax of a language from their observations of language use. A prelinguistic infant has no lexicon against which to match the sound sequences encountered in the speech signal. Furthermore, connected speech provides few cues to where word boundaries lie. It is therefore difficult to explain how the language-acquiring infant comes to identify word forms and to map them on to meanings relating to the real world. It has been suggested that the infant can only achieve this task by relying on some kind of technique which gives it a head start—just as straps can help one to pull on a pair of boots (the metaphor comes via computer science). This technique might be specific to the process of language acquisition or it might be the product of general cognition, reflecting, for example, a predisposition to impose patterns upon diverse information.

Three main types of bootstrapping have been proposed as follows:

• In prosodic bootstrapping, the infant exploits rhythmic regularities in the language it is acquiring. At the phoneme level, it can distinguish a difference between steady-state sequences representing full vowels and transitional sequences representing consonants. It is thus sensitive to syllable structure. From this and from its innate sense of rhythm, the infant acquiring English is able to recognize the difference between longer stressed syllables featuring full vowels and shorter unstressed syllables featuring weak quality vowels. It may be that the infant develops a metrical template which reflects the tendency of English towards a strong-weak (SW) rhythmic unit. The template encourages the child to seek words which follow an SW pattern, and provides it with the working hypothesis that a stressed syllable in the signal is likely to mark a word onset. This accounts for the following versions of adult words:

giRAFFE → raffe
MONkey → monkey
baNAna → nana

It also accounts for evidence of children joining words to form an SW pattern as in: I like-it the elephant.

The concept of prosodic bootstrapping has been applied to larger constituents than the word. It is suggested that infants learn to recognize intonation patterns (especially the placing of the tonic accent) and the regular occurrence of pauses. These features, which are often heightened in CHILD DIRECTED SPEECH, provide infants with cues to phrase boundaries and to the structure of typical phrases.

• Syntactic bootstrapping assumes that an infant uses surface form to establish syntactic categories. The early mapping process draws upon an assumption (innate or learned) that there is a word-class which relates to objects in the real world, one which relates to actions and one which relates to attributes. Once this is established, the infant can add less prototypical items to each class (abstract nouns, state verbs) by noticing that they share grammatical properties with words that have already been acquired: in particular, their morphology and their distribution.

It learns to associate count nouns with the frame It’s a . . . and mass nouns with the frame It’s . . . . Experiments with non-words (It’s a sib, It’s sib) have demonstrated that infants are capable of making this association as early as 17 months. Infants are also capable of using formal evidence to recognize that non-words like nissing refer to a potential action and non-words like a niss refer to a potential object.

Later on, infants may use syntactic structure to establish distinctions of meaning. Thus, they can distinguish the senses of the words eat and feed by their distribution: eat occurring in the structure Verb + Noun (edible) while feed occurs in the structure Verb + Noun (animate). Among evidence cited in support of syntactic bootstrapping is the fact that blind infants manage to acquire the words see and look without difficulty. The suggestion is that they are able to do so by relating the words to the contexts in which they occur, even though they lack a concept to which to attach them.

• Semantic bootstrapping hypothesizes the reverse process: that infants use their world knowledge in order to recognize syntactic relationships within sentences. Assume an infant has acquired, in isolation, the nouns rabbit and duck. Presented with a sentence such as The rabbit is chasing the duck and evidence from a cartoon film, the infant comes to recognize that the position of the word rabbit in the sentence is reserved for the ‘agent’ or syntactic subject and the position of the word duck is reserved for the ‘patient’ or syntactic direct object. The assumptions would be confirmed if the cartoon film later showed the reverse situation and the associated sentence was The duck is chasing the rabbit.

As formulated by Pinker, semantic bootstrapping also incorporates the assumption that certain linguistic concepts are innate in the infant: these include the notions of noun and verb as word classes and the notions of agent and patient as roles.

Other bootstrapping theories are:


• Perceptual bootstrapping, where the infant focuses its attention on the most salient parts of the input; this might explain why early utterances do not usually contain weakly stressed function words.

• Logical bootstrapping, a process whereby an infant systematically directs its attention first to physical objects (nouns), then to events and relationships between the objects (verbs and adjectives) and then to word order and syntax. This step-by-step building of meaning reflects the general pattern of vocabulary acquisition.

Bates & Goodman 1999; Cutler & Mehler 1993; Crystal 2008; Field 2004; Gerken 1994; Gleitman 1990; Nusbaum & Goodman 1994; Peters 1983; Pinker 1994a

bottom-up processing

an approach to the processing of spoken or written language which depends upon actual evidence in the speech signal or on the page. Smaller units of analysis are built into progressively larger ones. There is a contrast with top-down processing, the use of conceptual knowledge to inform or to reshape what is observed perceptually. The terms ‘bottom-up’ and ‘top-down’ are derived from computer science, where they refer respectively to processes that are data-driven and processes that are knowledge-driven.

Bottom-up processes include: decoding from phoneme to grapheme (in listening) and from grapheme to phoneme (in reading); whole word recognition; noticing morphemes and/or segmenting words into morphemes; identifying or assembling short phrases in order to analyze them, syntactically and as units of meaning (PARSING); translating individual words, collocations and phrases into L1 (perhaps using glossaries or dictionaries); linking pronouns with their referents; linking subordinate clauses with the main clause; noticing textual clues (prosody, punctuation etc.). Thus, bottom-up processes include all text-based resources which the listener/reader appropriates in order to carry out further processing.

Top-down processing involves schema (see SCHEMA THEORY) and script (conventionally recognized sequences of events). A reader or listener will almost inevitably trigger schema and script when the first content word of a text is understood (e.g., ‘snow’). As the reading/listening progresses the different schematic subsets should be activated, informed by the text-based resources being appropriated (sledge, children, fun [rather than] blizzard, frostbite, despair).

In L2 text access, the two processes act in compensatory fashion (resorting to schema in order to compensate for not recognizing words; resorting to careful text analysis in order to compensate for unfamiliarity with the text topic), and confirmatory fashion (using later text information to confirm earlier established predictions of what the text is about). Therefore, an interactive-compensatory model of text access is now generally accepted.


see also ACTIVATION, INTERACTIVE ACTIVATION, MODULARITY

Field 2004; Macaro et al. 2010; Stanovich 1980

BSM

an abbreviation for BILINGUAL SYNTAX MEASURE

CA

an abbreviation for CONVERSATION ANALYSIS

CAH

an abbreviation for CONTRASTIVE ANALYSIS HYPOTHESIS

CALL

an abbreviation for COMPUTER-ASSISTED LANGUAGE LEARNING

CALP

an abbreviation for COGNITIVE ACADEMIC LANGUAGE PROFICIENCY

capability continuum paradigm

a variability theory of L2 acquisition developed by Tarone to refer to the idea that L2 learners acquire a continuum of grammars for the L2 (which she calls ‘styles’) ranging from the most informal or vernacular style, to the most careful style, used when an L2 speaker is focusing on form, and trying to be as correct as possible. Tarone refers to this as the capability continuum, as illustrated in Figure C.1.

Figure C.1. Interlanguage capability continuum

The vernacular style is usually the least target-like, but the most internally consistent, while at the other pole the careful style is more target-like, perhaps incorporating grammatical knowledge which has been consciously learned by the L2 speaker. It will also be less internally consistent, involving

[Figure C.1 depicts the capability continuum: from the vernacular style (more pidgin-like), based on unattended speech data, through Style 2, Style 3, Style 4 … Style n, elicited through attended speech data and various elicitation tasks (elicited imitation, sentence-combining, etc.), to the careful style (more TL/NL-like), based on grammatical intuition data.]