Echoes of L1 Syllable Structure in L2 Phoneme Recognition

Learning to move from auditory signals to phonemic categories is a crucial component of first, second, and multilingual language acquisition. In L1 and simultaneous multilingual acquisition, learners build up phonological knowledge to structure their perception within a language. For sequential multilinguals, this knowledge may support or interfere with acquiring language-specific representations for a new phonemic categorization system. Syllable structure is part of this phonological knowledge, and language-specific syllabification preferences influence language acquisition, including early word segmentation. As a result, we expect language-specific syllable structure to influence speech perception as well. Initial evidence of such an effect appears in Ali et al. (2011), who argued that cross-linguistic differences in McGurk fusion within a syllable reflected listeners’ language-specific syllabification preferences. Building on a framework from Cho and McQueen (2006), we argue that this could reflect the Phonological-Superiority Hypothesis (differences in L1 syllabification preferences make some syllabic positions harder to classify than others) or the Phonetic-Superiority Hypothesis (the acoustic qualities of speech sounds in some positions make it difficult to perceive unfamiliar sounds). However, their design does not distinguish between these two hypotheses. The current study extends the work of Ali et al. (2011) by testing Japanese listeners and adding audio-only and congruent audio-visual stimuli to test the effects of syllabification preferences beyond McGurk fusion alone. Eighteen native English speakers and eighteen native Japanese speakers were asked to transcribe nonsense words in an artificial language. English allows stop consonants in syllable codas while Japanese heavily restricts them, yet both groups showed similar patterns of McGurk fusion in stop codas, which is inconsistent with the Phonological-Superiority Hypothesis. However, when visual information was added, the phonetic influences on transcription accuracy largely disappeared, which is inconsistent with the Phonetic-Superiority Hypothesis. We argue from these results that neither acoustic informativity nor interference from a listener’s phonological knowledge is superior, and we sketch a cognitively inspired rational cue integration framework as a third hypothesis to explain how L1 phonological knowledge affects L2 perception.


Bibliographic Details
Main Authors: Yasufuku, Kanako; Doyle, Gabriel
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8329372/
https://www.ncbi.nlm.nih.gov/pubmed/34354620
http://dx.doi.org/10.3389/fpsyg.2021.515237
Journal: Front Psychol
Collection: PubMed (National Center for Biotechnology Information)
Record: pubmed-8329372 (MEDLINE/PubMed)
Published online: 2021-07-20

Copyright © 2021 Yasufuku and Doyle. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited.