{"id":3581,"date":"2021-07-08T14:18:00","date_gmt":"2021-07-08T14:18:00","guid":{"rendered":"https:\/\/speechneurolab.ca\/?p=3581"},"modified":"2024-01-09T17:17:02","modified_gmt":"2024-01-09T17:17:02","slug":"publication-scientifique-sur-lintegration-audiovisuelle","status":"publish","type":"post","link":"https:\/\/speechneurolab.ca\/en\/publication-scientifique-sur-lintegration-audiovisuelle\/","title":{"rendered":"New scientific article on audiovisual integration"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"3581\" class=\"elementor elementor-3581 elementor-1569\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-81a1b09 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"81a1b09\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-9c69737\" data-id=\"9c69737\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9648775 elementor-widget elementor-widget-text-editor\" data-id=\"9648775\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"text-align: justify;\"><strong>The influence of visual information on speech perception is well known. For example, when we are in a noisy environment, lipreading makes it easier to understand speech.<\/strong><\/p><p style=\"text-align: justify;\">In this context, our brain combines the auditory and visual information related to speech, which corresponds to the phenomenon of audiovisual integration. 
A concrete example of this phenomenon is the <a href=\"https:\/\/speechneurolab.ca\/en\/leffet-mcgurk\/\">McGurk effect<\/a>.<\/p><p style=\"text-align: justify;\">Our laboratory\u2019s director, Pascale Tremblay, along with her French collaborators Marc Sato, Anahita Basirat, and Serge Pinto, carried out a research project in which they studied this phenomenon. More specifically, they studied the impact of different types of visual cues on the ability to perceive speech, and examined whether the process of audiovisual integration changes with age.<\/p><p style=\"text-align: justify;\">Good news for the team: the scientific article presenting the results of this study has just been accepted for publication in the journal\u00a0<em>Neuropsychologia<\/em>! We are taking this opportunity to provide you with a summary of the study and its results.\u00a0<\/p><p>In the study, 34 people (17 young adults between 20 and 42 years old, as well as 17 older adults between 60 and 73 years old) undertook a speech perception test during which syllables (pa, ta or ka) were presented to them, one after the other. 
Syllables were presented in three sensory modalities: auditory only, visual only (video of a person pronouncing the syllable, but without sound), or audiovisual (combined audio and video), as shown in Figure\u00a01.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-243141b elementor-widget elementor-widget-image\" data-id=\"243141b\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"800\" height=\"262\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation.jpg\" class=\"attachment-full size-full wp-image-3413\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation.jpg 800w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation-300x98.jpg 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation-768x252.jpg 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation-540x177.jpg 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/figure_modes_de_presentation-600x197.jpg 600w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5a32b08 elementor-widget elementor-widget-text-editor\" data-id=\"5a32b08\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"text-align: center;\"><strong>Figure\u00a01.\u00a0<\/strong>\u00a0Illustration of the three syllable presentation modalities during the speech perception test.\u00a0<\/p><p style=\"text-align: justify;\">The participants had to indicate which of the three syllables had been presented by pressing a 
button. Hints were provided to the participants on a portion of the trials. For example, a written syllable could appear onscreen for a short period of time, indicating which syllable was played. The participants\u2019 performance on the test was measured. The participants\u2019 neurophysiological responses related to the perception of speech (more specifically, the <a href=\"https:\/\/speechneurolab.ca\/en\/leffet-mcgurk\/\">P1-N1-P2 complex<\/a>) were also measured using <a href=\"https:\/\/speechneurolab.ca\/en\/electroencephalography-eeg\/\">electroencephalography<\/a>, by means of electrodes placed on the scalp.<\/p><p style=\"text-align: justify;\">Here is a summary of the main results:<\/p><ul><li style=\"text-align: justify;\">Lipreading ability was significantly reduced in older adults compared with young adults.<\/li><li style=\"text-align: justify;\">Visual information (video of the syllable being pronounced) helped both the younger and the older group to perceive speech, but the benefit afforded by the visual information was greater in the young adult group.<\/li><li style=\"text-align: justify;\">Visual cues (for example, the appearance of written syllables) facilitated speech perception in a similar manner in both young and older adults. Thus, the ability to integrate auditory and visual information seems to be preserved in older adults, despite their reduced ability to read lips.<\/li><li style=\"text-align: justify;\">The neurophysiological responses differed between younger and older adults (e.g. the peaks of the P2 and N2 evoked potentials, illustrated in Figure\u00a02, were reduced in amplitude and occurred later in the older adult group than in the younger group), which indicates that age influences the cerebral activity related to the perception of speech. 
The processes most affected were the late integration processes, not the early auditory processes.\u00a0<\/li><li style=\"text-align: justify;\">In older adults, the neurophysiological responses (specifically, the amplitude of the P2 evoked potential) were associated with performance on the speech test.\u00a0<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-bb19397 elementor-widget elementor-widget-image\" data-id=\"bb19397\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"3554\" height=\"1906\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_.png\" class=\"attachment-full size-full wp-image-3951\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_.png 3554w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-300x161.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-1024x549.png 1024w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-768x412.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-1536x824.png 1536w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-2048x1098.png 2048w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-540x290.png 540w, 
https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-860x461.png 860w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-1170x627.png 1170w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2021\/07\/Facebook_-_Article_intA\u00a9gration_audiovisuelle_Fig_2_ENG_-600x322.png 600w\" sizes=\"(max-width: 3554px) 100vw, 3554px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-edf58f2 elementor-widget elementor-widget-text-editor\" data-id=\"edf58f2\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"text-align: center;\"><strong>Figure\u00a02.<\/strong>\u00a0Illustration of the neurophysiological responses measured by EEG in young adults (A) and in older adults (B) during the speech perception test.<\/p><p style=\"text-align: justify;\">In short, our study shows that age influences speech perception and audiovisual integration. The results suggest avenues for the development of interventions aimed at improving older adults\u2019 ability to perceive speech, which could, for example, target the ability to process visual information (e.g. lipreading) and to integrate this information with auditory information to compensate for hearing loss.\u00a0<\/p><p>Link to the article:\u00a0<a href=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2022\/05\/tremblay_etal_neuropsychologia_2021.pdf\" target=\"_blank\" rel=\"noopener\">*Tremblay, P., Pinto, S., Basirat, A., Sato, M. (2021) Visual prediction cues can facilitate behavioural and neural speech processing in young and older adults. 
Neuropsychologia, 159: 1-17.<\/a><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f470eb9 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f470eb9\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-8e39ffd\" data-id=\"8e39ffd\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-38cd2bd elementor-widget elementor-widget-text-editor\" data-id=\"38cd2bd\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Suggested readings:<\/p><ul><li><a href=\"https:\/\/speechneurolab.ca\/en\/leffet-mcgurk\/\">The McGurk effect<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/speech-perception-a-complex-ability\/\">Speech perception: a complex ability<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/electroencephalography-eeg\/\">Electroencephalography (EEG)<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/difference-between-speech-language-and-communication\/\">Difference between speech, language and communication<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/comment-fonctionne-le-cerveau-humain\/\">How does the human brain work?<\/a><\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>The influence of visual information on speech perception is well known. 
For example, when we are in a noisy environment, lipreading makes it easier to understand speech.<\/p>\n","protected":false},"author":3,"featured_media":8021,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[41],"tags":[309,311,397,318,321,329,331,338],"ppma_author":[55,54],"class_list":["post-3581","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-publications","tag-aging","tag-audiovisual-integration","tag-brain-en","tag-eeg-2","tag-hearing","tag-perception-2","tag-speech","tag-vision-2"],"authors":[{"term_id":55,"user_id":3,"is_guest":0,"slug":"admin-marilyne","display_name":"Marilyne Joyal","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","author_category":"","user_url":"","last_name":"Joyal","first_name":"Marilyne","job_title":"","description":""},{"term_id":54,"user_id":2,"is_guest":0,"slug":"admin-pascale","display_name":"Pascale Tremblay","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/ea9e5826afc1fd507cc7b89eaca37953ea310ad30088c3920137ab8e86846244?s=96&d=mm&r=g","author_category":"","user_url":"","last_name":"Tremblay","first_name":"Pascale","job_title":"","description":""}],"_links":{"self":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/3581","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/comments?post=3581"}],"version-history":[{"count":21,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/3581\/revisions"}],"predecessor-version":[{"id":9882,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/3581\/revisions\/9882"}],"wp:featur
edmedia":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/media\/8021"}],"wp:attachment":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/media?parent=3581"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/categories?post=3581"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/tags?post=3581"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/ppma_author?post=3581"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}