Difficulties in articulating speech sounds are recurrent in people with speech and language impairments such as aphasia. Some of the tasks performed during speech-language therapy are devoted to the treatment of these difficulties. This paper analyzes how the speech-language therapist instructs the aphasic patient in the pronunciation of linguistic items and models pronunciation through auditory, visual and tactile resources, thus configuring it as a multimodal and multi-sensory experience. The research is grounded in Multimodal Conversation Analysis (Goodwin, 2000; Mondada, 2016) and is based on the transcription and analysis of excerpts drawn from a large corpus of video recordings (60 hours) made in France and in the French-speaking part of Switzerland. Data were collected in different therapeutic settings (hospital, rehabilitation clinic, private speech-therapy practice) over the course of the recovery of people who developed aphasia as a consequence of a stroke. The analysis of the data shows that the therapists initiate the correction of the patients’ productions verbally, through repetition of the target item with prosodic features such as emphasis and volume. When these cues prove ineffective, the therapists make relevant other types of visual and haptic cues: using their body as an “instructional tool”, they represent features of the target sound with gestures and facial expressions. They can also touch and manipulate the patient’s face in order to help them correctly realize specific articulatory movements. The paper focuses not only on the multimodal resources used by the therapists to model pronunciation, but also on the practices used to enhance the patients’ visual attention towards these resources (e.g. use of pointing gestures, directives, verbal and haptic summons) (cf. Ronkainen, 2011).
It offers an investigation of how multimodal therapies (whose efficacy is often claimed in the aphasia literature; see Pierce et al., 2019) are concretely implemented in face-to-face interaction. Finally, a vision of the therapeutic process as an embodied, multimodal and multi-sensory experience is proposed and discussed.

References

Goodwin, C. (2000). Action and embodiment within situated human interaction. Journal of Pragmatics, 32(10), 1489-1522.
Mondada, L. (2016). Challenges of multimodality: Language and the body in social interaction. Journal of Sociolinguistics, 20(3), 336-366.
Pierce, J. E., O'Halloran, R., Togher, L., & Rose, M. L. (2019). What is meant by “multimodal therapy” for aphasia? American Journal of Speech-Language Pathology, 28(2), 706-716.
Ronkainen, R. J. (2011). Enhancing listening and imitation skills in children with cochlear implants: The use of multimodal resources in speech therapy. Journal of Interactional Research in Communication Disorders, 2(2), 245-269.
Merlino, S. (2021). Correcting and modelling pronunciation through multimodal resources in speech-language therapy. In 17th International Pragmatics Conference.
|Title:||Correcting and modelling pronunciation through multimodal resources in speech-language therapy.|
MERLINO, SARA (Corresponding)
|Publication date:||2021|
|Citation:||Merlino, S. (2021). Correcting and modelling pronunciation through multimodal resources in speech-language therapy. In 17th International Pragmatics Conference.|
|Appears in categories:||4.2 Abstract in conference proceedings|