Abstract

Code-switching (CS) refers to the phenomenon of using more than one language within a single utterance. It presents a great challenge to automatic speech recognition (ASR) due to the intra-utterance language switching, the pronunciation variation of embedded-language words, and the severe training-data sparsity problem. This paper focuses on the Mandarin-English CS ASR task. We aim to handle the pronunciation variation and alleviate the data sparsity of code-switched speech by using pronunciation augmentation methods. An English-to-Mandarin mix-language phone mapping approach is first proposed to obtain a language-universal CS lexicon. Based on this lexicon, an acoustic data-driven lexicon learning framework is further proposed to learn new pronunciations that cover the accents, mispronunciations, and pronunciation variations of embedded English words. Experiments are performed on real CS ASR tasks. The effectiveness of the proposed methods is examined on both conventional hybrid and recent end-to-end speech recognition systems. Experimental results show that both the learned phone mapping and the augmented pronunciations can significantly improve the performance of code-switching speech recognition.
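The phone mapping idea from the abstract can be illustrated with a toy sketch: English pronunciations are rewritten onto Mandarin phone units so that one shared phone inventory covers both languages in the CS lexicon. The phone inventories and the mapping table below are illustrative assumptions for the sketch, not the mapping learned in the paper.

```python
# Hypothetical mapping from CMU-style English phones to Mandarin
# phone units (e.g. Pinyin-style initials/finals). Purely illustrative.
EN_TO_CN_PHONE_MAP = {
    "HH": "h", "AH": "a", "L": "l", "OW": "ou",
    "G": "g", "UH": "u", "D": "d",
}

def map_pronunciation(en_phones, phone_map):
    """Map an English pronunciation onto Mandarin phone units,
    leaving any unmapped phone unchanged so the entry stays usable."""
    return [phone_map.get(p, p) for p in en_phones]

# "HELLO" in a CMU-style lexicon is HH AH L OW; after mapping, the
# entry uses only Mandarin units and can join a language-universal lexicon.
mapped = map_pronunciation(["HH", "AH", "L", "OW"], EN_TO_CN_PHONE_MAP)
print("HELLO", " ".join(mapped))
```

A data-driven lexicon learning step, as described in the abstract, would then add further pronunciation variants for such mapped entries based on acoustic evidence.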

Details

Title
Pronunciation augmentation for Mandarin-English code-switching speech recognition
Author
Long, Yanhua 1   VIAFID ORCID Logo  ; Wei, Shuang 1 ; Lian, Jie 1 ; Li, Yijie 2 

1 Shanghai Normal University, SHNU-Unisound Joint Laboratory of Natural Human-Computer Interaction, Shanghai Engineering Research Center of Intelligent Education and Bigdata, Shanghai, China (GRID:grid.412531.0) (ISNI:0000 0001 0701 1077) 
2 Unisound AI Technology Co., Ltd., Beijing, China (GRID:grid.412531.0) 
Publication year
2021
Publication date
Dec 2021
Publisher
Springer Nature B.V.
ISSN
1687-4714
e-ISSN
1687-4722
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2566149066
Copyright
© The Author(s) 2021. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.