User:Amandafoort/sandbox


Proposed Changes to Speech Production Page

As an assignment for a university psychology of language class, I, along with my colleagues, will be working to improve the speech production page. Below are some of the changes I plan to make.

Neuroscience

IMAGE ADDED + TEXT: For right-handed people, the majority of speech production activity occurs in the left cerebral hemisphere.

History of Speech Production Research

IMAGE ADDED + TEXT: Examples of speech errors. The target is what the speaker intended to say; the error is what the speaker actually said. These mistakes have been studied to learn about the structure of speech production.

Until the late 1960s, research on speech focused on comprehension. As greater volumes of speech error data amassed, researchers began to investigate the psychological processes responsible for the production of speech sounds and to contemplate possible procedures through which people are able to speak fluently.[15] Speech error research made evident many findings about speech, which were soon incorporated into speech production models. From this evidence, linguists were able to ascertain several ideas about the speech production process:

1. Speech is planned in advance.[16]

2. The lexicon is organized both semantically and phonologically.[16] That is, by meaning and by the sound of the words.

3. Morphologically complex words are assembled.[16] Words that contain multiple morphemes are put together during the speech production process. Morphemes are the smallest units of language that carry meaning; for example, the "-ed" ending on a past-tense verb (see the sketch after this list).

4. Affixes and functors behave differently from content words in slips of the tongue.[16] The rules about the ways in which a word can be used are likely stored with it, so when speech errors are made, the mistaken words generally maintain their functions and make grammatical sense.

5. Speech errors reflect rule knowledge.[16] Even in our mistakes, speech is not nonsensical. The words and sentences produced in speech errors overwhelmingly do not violate the rules of the language being spoken.
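
A minimal sketch of how findings 2 and 3 might be modeled: a toy lexicon indexed both by meaning and by sound, with past-tense forms assembled from a stem plus the "-ed" morpheme at production time. The entries and helper names below are hypothetical illustrations, not part of any cited model.

    # Hypothetical toy lexicon: each entry is indexed both semantically
    # (by meaning category) and phonologically (by onset sound).
    LEXICON = {
        "walk": {"category": "motion", "onset": "w"},
        "run":  {"category": "motion", "onset": "r"},
        "cat":  {"category": "animal", "onset": "k"},
    }

    def words_by_category(category):
        """Semantic lookup: words sharing a meaning category."""
        return [w for w, e in LEXICON.items() if e["category"] == category]

    def words_by_onset(onset):
        """Phonological lookup: words sharing an initial sound."""
        return [w for w, e in LEXICON.items() if e["onset"] == onset]

    def past_tense(stem):
        """Morphological assembly: attach the "-ed" morpheme at
        production time instead of storing the inflected form."""
        return stem + "ed"

    print(words_by_category("motion"))  # ['walk', 'run']
    print(words_by_onset("k"))          # ['cat']
    print(past_tense("walk"))           # 'walked'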

Aspects of Speech Production Models

Models of speech production must contain specific elements to be viable and widely considered accurate. These elements, listed below, are the components from which speech is composed, and any model attempting to explain the process of speech production must therefore account for them. The accepted models of speech production discussed in more detail below all incorporate these stages either explicitly or implicitly, and the ones that are now outdated or disputed have been criticized for overlooking one or more of the following stages.[17]

The attributes of accepted speech models are:

a) a conceptual stage, where the speaker abstractly identifies what they wish to express.[17]

b) a syntactic stage, where a frame is chosen into which words will be placed; this frame is usually a sentence structure.[17]

c) a lexical stage, where a search for a word occurs based on meaning. Once the word is retrieved, information about its phonology and morphology becomes available to the speaker.[17]

d) a phonological stage, where the abstract information is converted into a speech-like form.[17]

e) a phonetic stage, where features and muscle instructions are prepared to be sent to the muscles of articulation.[17]

Models must also allow for forward-planning mechanisms, a buffer, and a monitoring mechanism.
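
To make these requirements concrete, the following is a minimal Python sketch of such a staged pipeline, with a buffer and a monitoring step. The function names and toy stage outputs are hypothetical illustrations, not taken from any published model.

    from collections import deque

    def conceptual_stage(idea):
        # (a) abstractly identify what the speaker wishes to express
        return {"concept": idea}

    def syntactic_stage(message):
        # (b) choose a frame (e.g., sentence structure) for words to fill
        message["frame"] = ["SUBJECT", "VERB", "OBJECT"]
        return message

    def lexical_stage(message):
        # (c) retrieve words by meaning; phonological and morphological
        # information about each word becomes available here
        message["words"] = ["the cat", "chased", "the mouse"]
        return message

    def phonological_stage(message):
        # (d) convert the abstract word forms into a speech-like code
        message["syllables"] = ["dhuh kat", "cheyst", "dhuh mows"]
        return message

    def phonetic_stage(message):
        # (e) prepare articulatory instructions for the muscles
        return [("articulate", s) for s in message["syllables"]]

    def monitor(plan):
        # monitoring mechanism: check the plan before execution
        return all(cmd == "articulate" for cmd, _ in plan)

    buffer = deque()  # forward-planning buffer holding prepared output
    plan = phonetic_stage(phonological_stage(lexical_stage(
        syntactic_stage(conceptual_stage("a cat chasing a mouse")))))
    if monitor(plan):
        buffer.extend(plan)
    print(list(buffer))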

Following are a few of the influential models of speech production that attempt to account for or incorporate all of the previously mentioned stages and include information discovered as a result of speech error studies and other disfluency data,[18] such as tip-of-the-tongue research.

Models of Speech Production

The Utterance Generator Model of Speech Production (1971)

The Utterance Generator Model was proposed by Fromkin (1971).[19] It is composed of six stages and was an attempt to account for the previous findings of speech error research. The stages of the model were based on possible changes in representations of a particular utterance. In the first stage, a person generates the meaning they wish to convey. The second stage involves the message being translated into a syntactic structure; here, the message is given an outline.[20] In the third stage, the message gains different stresses and intonations based on the meaning. The fourth stage is concerned with the selection of words from the lexicon. After the words have been selected in stage four, the message undergoes phonological specification.[21] The fifth stage applies rules of pronunciation and produces the syllables that are to be output. The sixth and final stage is the coordination of the motor commands necessary for speech: phonetic features of the message are sent to the relevant muscles of the vocal tract so that the intended message can be produced. Despite the ingenuity of Fromkin's model, researchers have criticized this interpretation of speech production. Although the Utterance Generator Model accounts for many nuances and data found by speech error studies, researchers decided it still had room to be improved.[22][23]

The Garrett Model (1975)

A more recent attempt than Fromkin's to explain speech production was published by Garrett in 1975.[24] Garrett also created this model by compiling speech error data, and there are many overlaps between his model and the Fromkin model on which it was based, but Garrett added a few things that filled some of the gaps pointed out by other researchers. The Garrett model and the Fromkin model both distinguish between three levels: a conceptual level, a sentence level, and a motor level. These three levels are common to the contemporary understanding of speech production.[25]

Dell's Model

IMAGE ADDED + TEXT: This is an interpretation of Dell's model. The words at the top represent the semantic category. The second level represents the words that describe the semantic category, and the third level represents the phonemes (syllabic information including onset, vowels, and codas).

Dell proposed a model of the lexical network that became fundamental to the understanding of the way speech is produced.[1] This model attempts to symbolically represent the lexicon and, in turn, explain how people choose the words they wish to produce and how those words are to be organized into speech. Dell's model is composed of three levels: semantics, words, and phonemes. The words at the highest level of the model represent the semantic category. The second level represents the words that describe the semantic category, and the third level represents the phonemes (syllabic information including onset, vowels, and codas).[26]
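
As an illustration of this kind of lexical network, here is a minimal spreading-activation sketch in Python. The nodes, connections, spreading rate, and update rule are hypothetical simplifications, not Dell's published parameters.

    # Hypothetical three-level network: semantic nodes at the top,
    # words in the middle, phonemes (onset, vowel, coda) at the bottom.
    CONNECTIONS = {
        "PET": ["cat", "dog"],                     # semantics -> words
        "FELINE": ["cat"],
        "cat": ["k-onset", "a-vowel", "t-coda"],   # words -> phonemes
        "dog": ["d-onset", "o-vowel", "g-coda"],
    }

    def spread(sources, steps=2, rate=0.5):
        """Spread activation from the source nodes for a fixed number
        of steps; each node passes a fraction of its activation on."""
        activation = {node: 1.0 for node in sources}
        for _ in range(steps):
            updated = dict(activation)
            for node, a in activation.items():
                for target in CONNECTIONS.get(node, []):
                    updated[target] = updated.get(target, 0.0) + rate * a
            activation = updated
        return activation

    # Activating the semantic features of "cat" drives the word node
    # "cat" (and its phonemes) more strongly than the competitor "dog".
    result = spread(["PET", "FELINE"])
    print(result["cat"] > result["dog"])  # True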

Levelt Model (1999)

Levelt further refined the lexical network proposed by Dell. Through the use of speech error data, Levelt recreated the three levels in Dell's model. The conceptual stratum, the top and most abstract level, contains information a person has about ideas of particular concepts.[27] It also contains ideas about how concepts relate to each other. This is where word selection occurs: a person chooses which words they wish to express. The next, or middle, level, the lemma stratum, contains information about the syntactic functions of individual words, including tense and function.[1] This level functions to maintain syntax and place words correctly into a sentence structure that makes sense to the speaker.[27] The lowest and final level is the form stratum, which, similarly to the Dell model, contains syllabic information. From here, the information stored at the form stratum is sent to the motor cortex, where the vocal apparatus is coordinated to physically produce speech sounds.
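
A minimal sketch of the three strata as lookup tables, assuming toy entries; the data structures and names below are illustrative assumptions rather than Levelt's actual formalism.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Lemma:
        word: str
        part_of_speech: str
        tense: Optional[str]  # syntactic function info lives at this level

    # Conceptual stratum: concepts and their relations select a word.
    CONCEPTUAL = {"domestic feline": "cat", "move fast on foot": "run"}

    # Lemma stratum: syntactic information about individual words.
    LEMMAS = {
        "cat": Lemma("cat", "noun", None),
        "run": Lemma("run", "verb", "present"),
    }

    # Form stratum: syllabic information (onset, vowel, coda).
    FORMS = {
        "cat": [("k", "a", "t")],
        "run": [("r", "u", "n")],
    }

    def produce(concept):
        word = CONCEPTUAL[concept]     # conceptual stratum: word selection
        lemma = LEMMAS[word]           # lemma stratum: syntax available
        syllables = FORMS[lemma.word]  # form stratum: syllabic code
        return lemma, syllables        # passed on toward motor planning

    print(produce("domestic feline"))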

Places of Articulation

IMAGE MOVED + TEXT: Human vocal apparatus used to produce speech.

The physical structure of the human nose, throat, and vocal cords allows for the production of many unique sounds; these areas can be further broken down into places of articulation. Different sounds are produced in different areas, with different muscles and breathing techniques.[28] Our ability to use these skills to create the various sounds needed to communicate effectively is essential to speech production. Difficulties in manner of articulation can contribute to speech difficulties and impediments.[29] It is suggested that infants are capable of making the entire spectrum of possible vowel and consonant sounds. The International Phonetic Alphabet (IPA) provides a system for understanding and categorizing all possible speech sounds, including information about the way in which a sound is produced and where it is produced.[29] This is extremely useful in the understanding of speech production because speech can be transcribed based on sounds rather than spelling, which may be misleading depending on the language being spoken. However, as people grow accustomed to a particular language, they lose not only the ability to produce certain speech sounds but also the ability to distinguish between them.
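
As an illustration, a small lookup table in the spirit of the IPA chart can pair each consonant with its voicing, place, and manner. The classifications below are standard IPA descriptions of a few English consonants; the table and helper function themselves are a hypothetical sketch, not an existing library.

    # A few English consonants with their standard IPA classification:
    # (voicing, place of articulation, manner of articulation).
    IPA_CONSONANTS = {
        "p": ("voiceless", "bilabial", "plosive"),
        "b": ("voiced", "bilabial", "plosive"),
        "t": ("voiceless", "alveolar", "plosive"),
        "s": ("voiceless", "alveolar", "fricative"),
        "m": ("voiced", "bilabial", "nasal"),
        "k": ("voiceless", "velar", "plosive"),
    }

    def describe(symbol):
        """Describe a consonant the way the IPA chart does:
        voicing + place + manner."""
        voicing, place, manner = IPA_CONSONANTS[symbol]
        return f"/{symbol}/ is a {voicing} {place} {manner}"

    print(describe("p"))  # /p/ is a voiceless bilabial plosive
    print(describe("s"))  # /s/ is a voiceless alveolar fricative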

Articulation

Articulation, often associated with speech production, is the term used to describe how people physically produce speech sounds. For people who speak fluently, articulation is automatic and allows around 15 speech sounds to be produced per second.[30]

Proposed Changes to Babbling Page

As a portion of a university class project, my colleagues Care.hail and JessicaRJ and I will be working to improve the Babbling page. We will take into consideration the suggestions made in the GA Review posted in 2013, as well as adding current, relevant information on the topic. I personally will add information about babbling in other languages and cultures to the existing article and work to improve its overall grammar, organization, and structure. I will be preparing changes and a list of relevant sources here.

Inter-language Babbling Research

Andruski, J. E., Casielles, E., & Nathan, G. (2014). Is bilingual babbling language-specific? Some evidence from a case study of Spanish–English dual acquisition. Bilingualism: Language & Cognition, 17(3), 660-672. doi:10.1017/S1366728913000655

  • Spanish, English
  • Differences in language development across cultures (Spanish/English) seem to reflect differences in input and social interaction practices rather than the type of language spoken by caregivers.

de Boysson-Bardies, B., & Vihman, M. M. (1991). Adaptation to language: Evidence from babbling and first words in four languages. Language, (2), 297.

  • French, English, Japanese, and Swedish

Lee, S. S., Davis, B., & Macneilage, P. (2010). Universal production patterns and ambient language influences in babbling: A cross-linguistic study of Korean- and English-learning infants. Journal of Child Language, 37(2), 293-318. doi:10.1017/S0305000909009532

  • Korean, English
  • When comparing Korean- and English-learning infants, consonant patterns are similar across groups. Differences were found in vowel usage by infants in Korean and English settings, based on input and the frequency of vowels in the parent language.

Levitt, A. G., & Qi, W. (1991). Evidence for language-specific rhythmic influences in the reduplicative babbling of French- and English-learning infants. Language & Speech, 34(3), 235-249.

  • French, English

Majorano, M., & D'Odorico, L. (2011). The transition into ambient language: A longitudinal study of babbling and first word production of Italian children. First Language, 31(1), 47-66. doi:10.1177/0142723709359239

  • Italian

Lead

-add sentence about physical structures to cover physiology.

-add sentence about timeline (12 months typical end).

-add sentence about abnormal development.

-add sentence about evidence across species.

Timeline

-Adapt from Owens (2005): timeline of typical vocal developments. Owens, R. E. (2005). Language Development: An Introduction. Boston: Pearson.

Timeline of vocal developments, 0-12 months (Owens, 2005)

0-1 month: pleasure sounds, cries for assistance, responds to the human voice.

2 months: distinguishes between different speech sounds, makes "goo"-ing sounds.

3 months: coos (single-syllable CV), responds vocally to the speech of others, makes predominantly vowel sounds.

4 months: babbles short strings of consonants, varies pitch, imitates tones in adult speech.

5 months: experiments with sound, imitates some sounds.

6 months: varies volume, pitch, and rate; reduplicated babbling.

7 months: produces several sounds in one breath, recognizes different tones and inflections.

8 months: repeats emphasized syllables, imitates gestures and the tonal quality of adult speech; variegated babbling.

9 months: imitates non-speech sounds.

10 months: imitates adult speech if the sounds are in their repertoire.

11 months: imitates the inflections, rhythms, and expressions of speakers.

12 months: speaks one or more words; words refer to the entity which they name and are used to gain attention or for a specific purpose.

Intro to Evidence

- babbling is not unique to humans; language is. Harley, Trevor A. (2005). The Psychology of Language, 2nd Edition. Psychology Press, New York.

- babbling serves the same purpose. (use sources from non-human animals)

- restricted by the same types of things (physiology)

Organization - (lead, typical development (add timeline in here), abnormal development (encompasses deaf infants), and evidence across species (to encompass non-human examples))