Affiliations:
1 Departments of Speech, Language, and Hearing Sciences and Biomedical Engineering, Boston University, Boston, MA, USA. Electronic address: [email protected].
2 Department of Cognitive Sciences, Center for Language Science and Center for Cognitive Neuroscience, University of California, Irvine, CA, USA.
This chapter reviews evidence regarding the role of auditory perception in shaping speech output; this evidence indicates that speech movements are planned to follow auditory trajectories. The review is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.