PI Masapollo awarded HHF ERG

CAPS Lab Director Dr. Masapollo was recently awarded an Emerging Research Grant (ERG) from the Hearing Health Foundation (HHF). The planned experiments, in collaboration with Drs. Susan Nittrouer (PHHP, SLHS) and Yonghee Oh (PHHP, SLHS), will examine the roles of auditory and somatosensory feedback in speech motor control in congenitally deaf children and adults who have received cochlear implants. The project abstract is given below.

Abstract: Cochlear implants (CIs) have dramatically improved the likelihood that children born with severe-to-profound hearing loss (HL) will develop spoken language skills within the range of children with normal hearing (NH), but substantial problems persist. These problems undoubtedly arise because the degraded auditory signals available through CIs do not allow child users to develop fine-tuned internal motor models of speech (learned “mappings” of the relationship between self-generated vocal-tract configurations and their acoustic consequences). Our interdisciplinary team has conceptualized a new approach to treating this problem. The central hypothesis is that the degradation of the acoustic speech signal that occurs with CI processing leads to enhanced reliance on somatosensory inputs from the vocal tract for the development of internal motor models by children who use CIs. To test this hypothesis, we will directly examine articulator movements in children with congenital HL when auditory and/or somatosensory feedback is unavailable. This HHF project is designed as an initial exploration of inter-articulator speech coordination in congenitally deaf CI wearers, using state-of-the-art electromagnetic articulography (EMA). Two Specific Aims will be addressed. Aim 1: Determine the role of auditory feedback in inter-articulator coordination in the speech of congenitally deaf children who received CIs. In accordance with this aim, children and adults with congenital HL who use CIs, along with NH peers, will be recorded using EMA as they produce utterances of the form [pVCɑt] or [tVCɑt], where four manipulations should evoke different patterns of coordination between jaw gestures for V and tongue-tip raising gestures for C: (1) V is short [ɛ] or long [ɑ]; (2) C is voiceless [t] or voiced [d]; (3) the first syllable is unstressed or stressed; and (4) speaking rate is fast or normal. CI users will perform the speaking task with their processors turned on or off. Individuals with NH (controls) will perform the task with and without speech-shaped masking noise to minimize auditory feedback. Patterns of inter-articulator coordination will be examined and analyzed for potential differences between children and adults with CIs and those with NH. Aim 2: Determine the role of somatosensory feedback in inter-articulator coordination in the speech of congenitally deaf children who received CIs. Using EMA, we will assess patterns of inter-articulator coordination while children and adults with congenital HL who use CIs, along with NH peers, perform the same speaking task as in Aim 1, but with an oral anesthetic to minimize somatosensory feedback, and with and without auditory feedback. The findings will allow us to refine our hypotheses for a planned R01 proposal and provide a framework for designing and testing mechanistically driven interventions for deaf children, focused on delivering speech-like patterns of somatosensory input as they learn to listen to speech sounds through their CI processors.
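The four binary manipulations in Aim 1 define a 2 × 2 × 2 × 2 factorial design, i.e., 16 stimulus conditions per utterance frame. As a purely illustrative sketch (not part of the project's actual protocol, and with labels chosen here only for readability), the condition list could be enumerated as:

```python
from itertools import product

# The four binary manipulations described in Aim 1 (labels are illustrative).
vowels = ["short [ɛ]", "long [ɑ]"]
consonants = ["voiceless [t]", "voiced [d]"]
stress = ["unstressed first syllable", "stressed first syllable"]
rate = ["fast", "normal"]

# Full factorial crossing: 2 x 2 x 2 x 2 = 16 conditions.
conditions = list(product(vowels, consonants, stress, rate))
print(len(conditions))  # 16
```

Crossing these conditions with the feedback manipulations (processors on/off for CI users, with/without masking noise for NH controls, and with/without oral anesthetic in Aim 2) yields the full set of recording conditions.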