{"id":8555,"date":"2023-10-08T22:07:37","date_gmt":"2023-10-08T22:07:37","guid":{"rendered":"https:\/\/speechneurolab.ca\/?p=8555"},"modified":"2024-01-09T17:05:02","modified_gmt":"2024-01-09T17:05:02","slug":"the-cocktail-party-explained","status":"publish","type":"post","link":"https:\/\/speechneurolab.ca\/en\/the-cocktail-party-explained\/","title":{"rendered":"The cocktail party explained"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"8555\" class=\"elementor elementor-8555\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-d1f466f elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"d1f466f\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-0e5f6d6\" data-id=\"0e5f6d6\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-64fddd0 elementor-widget elementor-widget-text-editor\" data-id=\"64fddd0\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\"><strong>Imagine you are in a busy social event. Conversations are flowing around you as you are talking with a friend, then suddenly you hear your name being said across the room. How is it you can listen to and understand your friend whilst also hearing a conversation across the room? This is a prime example of a phenomenon called \u201cthe cocktail party effect\u201d. Understanding speech in the presence of background conversation is a challenge for all of us, but especially children and older adults. 
In this blog post, we discuss what the cocktail party effect is, and some of the cognitive mechanisms involved.<\/strong><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9b9021f elementor-widget elementor-widget-image\" data-id=\"9b9021f\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"300\" height=\"201\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG-300x201.png\" class=\"attachment-medium size-medium wp-image-8635\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG-300x201.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG-768x515.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG-540x362.png 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG-585x390.png 585w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/CrowdCommunication-ENG.png 832w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-11a3b58 elementor-widget elementor-widget-text-editor\" data-id=\"11a3b58\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\">At a busy social event, many different conversations and noises are produced simultaneously. With all this noise, one could, and perhaps <em>should<\/em>, feel overwhelmed by this much information. This intuition is partially correct: humans have a limited capacity to process information at once. 
In the seminal paper \u201cSome Experiments on the Recognition of Speech, with One and with Two Ears,\u201d in which Colin Cherry (1953) first described the cocktail party effect, this idea of limited capacity was discussed. Cherry found that when we hear two messages simultaneously, we are unable to gather meaning from either message unless their content is distinct.<\/p><p style=\"font-weight: 400;\">Expanding on the work of Cherry, researchers have found that when voices are spaced out (e.g., three speakers located at 10 degrees, 0 degrees, and -10 degrees in front of a listener; Figure 1B), it is easier to understand a simultaneous message than if all voices come from one direction (Figure 1A) (Spieth et al., 1954, cited in Arons, 1992). A larger difference in spatial separation between the voices (e.g., 90 degrees, 0 degrees, and -90 degrees) results in people being able to repeat back the attended message more accurately (Figure 1C). One possible explanation is that spatial separation allows us to more easily select a stream to attend to.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a1b1d79 elementor-widget elementor-widget-image\" data-id=\"a1b1d79\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"768\" height=\"287\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-768x287.png\" class=\"attachment-medium_large size-medium_large wp-image-8622\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-768x287.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-300x112.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-1024x382.png 1024w, 
https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-1536x573.png 1536w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-540x201.png 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-860x321.png 860w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation-1170x437.png 1170w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/SoundLocalisation.png 1954w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 1. The sound waves represent the simultaneous signals produced by the speakers. As you can see, the more the speakers are separated, the more accuracy scores improve.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5c9d6ac elementor-widget elementor-widget-text-editor\" data-id=\"5c9d6ac\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div><p><span lang=\"EN-CA\">Maurizio Corbetta and Gordon L. Shulman (2002) at Washington University developed a dual-process model of how we<\/span><span lang=\"EN-CA\"> select the focus of our attention. The first process of this model consists of goal-directed attention (\u201ctop-down\u201d attention). For example, if your goal is to hear your friend at a crowded party, more attention will be dedicated to the conversation. This process is supported by brain areas in the dorsal frontoparietal network<\/span><span lang=\"EN-CA\">. When we focus our attention on a goal (e.g., a conversation), this network may filter out information that is irrelevant to the task, causing it to be unattended. However, when information is unattended, this does not mean we do not hear it. Lee M. 
Miller, from the Department of Neurobiology at the University of California (2016), suggests that we give different levels of attention to multiple signals. For example, if your goal is to have a conversation, the person you are speaking to will be given a high level of attention, while the low, constant noise of a washing machine will be filtered out. By dedicating low levels of attention to background noise, we can better focus on the task (e.g., the conversation). And yet, unattended signals in this background noise can break our focus and capture our attention.<\/span><\/p><\/div><div><p><span lang=\"EN-CA\">The second process in Corbetta and Shulman\u2019s model of selective attention is associated with involuntary attention (\u201cbottom-up\u201d attention). For example, imagine again that you are speaking to your friend at a party and someone shouts your name; your attention will be directed towards that signal. This process is supported by brain areas in the right ventral frontoparietal network. Corbetta &amp; Shulman suggest that this network acts as a circuit breaker, redirecting our attention to something behaviourally relevant outside of our current focus. When this network detects an unattended stimulus that is infrequent or out of the ordinary and relevant to us, its activation increases and our attention is reoriented. For example, since your name relates to you and is unexpected, areas in this network could increase in activation and trigger a shift in attention. <\/span><\/p><\/div><div><span lang=\"EN-CA\">When we speak to a friend at a busy social event, our attentional system is not the only system engaged: not only must we focus our attention, but we must also perceive and understand what our friend is saying. We will now discuss some of the processes that make this possible in a busy environment. 
<\/span><\/div>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6ae60ce elementor-widget elementor-widget-text-editor\" data-id=\"6ae60ce\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\"><u>Visual Information<\/u><\/p><p style=\"font-weight: 400;\">When in a loud environment (e.g., a cocktail party) with many different signals, your visual system plays an important role in processing a conversation. Visual speech processing refers to the ability to process the movements of the mouth, jaw, lips and (to some extent) tongue while looking at a speaker. This information helps us understand speech, especially in challenging situations such as a cocktail party.<\/p><p style=\"font-weight: 400;\">The best example of visual information aiding speech perception is the McGurk-MacDonald effect, <span lang=\"EN-CA\">often referred to simply as the McGurk effect. <\/span>\u00a0The effect was first described by Harry McGurk and John MacDonald in 1976 at the University of Surrey. This effect shows us that visual information can alter the way we perceive sounds. For example, when you are presented with the sound \u201cba\u201d but the visual stimulus \u201cga\u201d, you are likely to hear the sound \u201cda\u201d, that is, a sound that was neither heard nor seen, but which represents a combination of the auditory and visual stimuli. This effect is caused by <a href=\"https:\/\/speechneurolab.ca\/en\/publication-scientifique-sur-lintegration-audiovisuelle\/\">integration between the auditory and visual processes<\/a>\u00a0(Figure 2). 
For a more detailed look at the McGurk effect, see our <a href=\"https:\/\/speechneurolab.ca\/en\/leffet-mcgurk\/\">blog post<\/a>.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f60dc9c elementor-widget elementor-widget-image\" data-id=\"f60dc9c\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"768\" height=\"454\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG-768x454.png\" class=\"attachment-medium_large size-medium_large wp-image-8566\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG-768x454.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG-300x177.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG-540x319.png 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG-860x508.png 860w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/PerceptionaudiovisuelleENG.png 978w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 2. 
Speech is multimodal (auditory and visual).<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-99be8b0 elementor-widget elementor-widget-text-editor\" data-id=\"99be8b0\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\"><u>Individual Differences in Speech Production<\/u><\/p><p style=\"font-weight: 400;\">When you are struggling to understand what your friend is saying due to loud background noise, the way your friend articulates can give insight into what they are saying. Conversely, a conversation with a stranger will be more challenging.<\/p><p style=\"font-weight: 400;\">Indeed, we all speak in a unique way; these differences (for example, a person\u2019s accent, intonation, speech rate, way of producing certain sounds, language tics) make understanding speech a difficult task. Our brains must find a way to process these interindividual differences. Shannon Heald, Serena Klos, and Howard Nusbaum (2016) from the University of Chicago suggest that we take previous experiences into account to aid our understanding. For example, if your friend has a regional accent in which they rarely pronounce the \u201ct\u201d sound, you can use this knowledge to help process their speech in a loud environment. 
So, at a loud cocktail party, it is easier to converse with someone you know than with someone you are less familiar with!<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-76f4928 elementor-widget elementor-widget-text-editor\" data-id=\"76f4928\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\"><u>Maintaining Context<\/u><\/p><p style=\"font-weight: 400;\">Another factor to consider is how we take into account the context of a word while processing it. For example, take the commonly used words \u201cthere\u201d, \u201cthey\u2019re\u201d, and \u201ctheir\u201d. These words all sound and are articulated the same; however, depending on the context, you can differentiate between them almost automatically (Figure 3).<\/p><p style=\"font-weight: 400;\">Phonological working memory is used to maintain speech sounds or words in memory while listening to a sentence (Perrachione, Ghosh, Ostrovskaya, Gabrieli &amp; Kovelman, 2017). This allows us to retain the context of the sentence, which helps with understanding. Alan D. Baddeley and Graham J. Hitch from the University of York have suggested that there are two processes that allow us to differentiate between homophones; these processes form what Baddeley and Hitch call the <em>phonological loop<\/em>. <span lang=\"EN-CA\">The first process is essential for maintaining information. It allows us to rehearse one or several words in our heads and to break them down to understand each syllable and each phoneme. The second process is used to make judgments on homophones (e.g., they\u2019re or there). 
These two processes work together <\/span>to determine which homophone is correct in a specific context (Figure 3).<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f0d931e elementor-widget elementor-widget-image\" data-id=\"f0d931e\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"399\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-1024x399.png\" class=\"attachment-large size-large wp-image-8558\" alt=\"\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-1024x399.png 1024w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-300x117.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-768x299.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-1536x599.png 1536w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-540x211.png 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-860x335.png 860w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones-1170x456.png 1170w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/Homophones.png 1916w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 3. 
Example of how we can decide between homophones.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-934e00a elementor-widget elementor-widget-text-editor\" data-id=\"934e00a\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\"><u>Transition Probability<\/u><\/p><p style=\"font-weight: 400;\">When talking to a friend at a busy social event, the background noise will degrade the speech stream. This degradation can cause us to struggle to understand individual words. So how are we able to break up this stream to understand what our friend is saying? Our brain uses a type of information referred to as \u201ctransition probabilities\u201d. That is, we calculate how likely one syllable is to follow or precede another. Each language in the world possesses its own set of statistics. A low probability usually indicates the end of a word; this information can be used to decode the speech stream.<\/p><p style=\"font-weight: 400;\">For example, the probability that the letter \u201cq\u201d is followed by a \u201cu\u201d is extremely high in English, so it can be assumed they are part of the same word. In contrast, the probability that the letter \u201cd\u201d is followed by the letter \u201cf\u201d within the same word is very low in English, so it can be assumed that this is the start of a new word (Dal Ben, Souza &amp; Hay, 2021). Findings from our lab suggest that the left anterior and middle planum temporale areas of the brain may be recruited for the predictive aspects of the process (Tremblay, Baroni &amp; Hasson, 2013). 
Using transition probabilities can help us segment the speech stream and restore parts of the signal masked by the loud environment.<\/p><p style=\"font-weight: 400;\"><u>Conclusion<\/u><\/p><p style=\"font-weight: 400;\">The cocktail party effect occurs thanks to numerous different processes acting simultaneously. More and more work is being done to explore the intricate details of these processes, as well as the factors that can affect them, such as age, hearing and vision, language background, and many others. In our lab, we investigate how age and musical experience affect the perception of speech in noise, as well as methods, such as brain stimulation, that may reduce these difficulties.<\/p><p style=\"font-weight: 400;\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-8560\" src=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-300x260.png\" alt=\"\" width=\"300\" height=\"260\" srcset=\"https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-300x260.png 300w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-1024x888.png 1024w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-768x666.png 768w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-540x468.png 540w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-860x746.png 860w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering-1170x1015.png 1170w, https:\/\/speechneurolab.ca\/wp-content\/uploads\/2023\/10\/FormalGathering.png 1298w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p><p style=\"font-weight: 
400;\">\u00a0<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-81c2108 elementor-widget elementor-widget-text-editor\" data-id=\"81c2108\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>R\u00e9f\u00e9rences :<\/p><div>\u00a0<\/div><div><p style=\"font-weight: 400;\">Arons, B. (1992). A review of the cocktail party effect. <em>Journal of the American Voice I\/O Society<\/em>, <em>12<\/em>(7), 35-50.<\/p><p style=\"font-weight: 400;\">Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. <em>Journal of the Acoustical Society of America<\/em>, <em>25<\/em>, 975-979.<\/p><p style=\"font-weight: 400;\">Corbetta, M., &amp; Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain.\u00a0<em>Nature reviews neuroscience<\/em>,\u00a0<em>3<\/em>(3), 201-215.<\/p><p style=\"font-weight: 400;\">Dal Ben, R., Souza, D. D. H., &amp; Hay, J. F. (2021). When statistics collide: The use of transitional and phonotactic probability cues to word boundaries.\u00a0<em>Memory &amp; Cognition<\/em>, 1-11.<\/p><p style=\"font-weight: 400;\">Heald, S., Klos, S., &amp; Nusbaum, H. (2016). Understanding speech in the context of variability. In\u00a0<em>Neurobiology of language<\/em>\u00a0(pp. 195-208). Academic Press.<\/p><p style=\"font-weight: 400;\">McGurk, H., &amp; MacDonald, J. (1976). Hearing lips and seeing voices.\u00a0<em>Nature<\/em>,\u00a0<em>264<\/em>(5588), 746-748.<\/p><p style=\"font-weight: 400;\">Miller, L. M. (2016). Neural mechanisms of attention to speech. In\u00a0<em>Neurobiology of language<\/em>\u00a0(pp. 503-514). Academic Press.<\/p><p style=\"font-weight: 400;\">Perrachione, T. K., Ghosh, S. S., Ostrovskaya, I., Gabrieli, J. D., &amp; Kovelman, I. (2017). 
Phonological working memory for words and nonwords in cerebral cortex.\u00a0<em>Journal of Speech, Language, and Hearing Research<\/em>,\u00a0<em>60<\/em>(7), 1959-1979.<\/p><p style=\"font-weight: 400;\">Tremblay, P., Baroni, M., &amp; Hasson, U. (2013). Processing of speech and non-speech sounds in the supratemporal plane: Auditory input preference does not predict sensitivity to statistical structure.\u00a0<em>Neuroimage<\/em>,\u00a0<em>66<\/em>, 318-332.<\/p><\/div>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-df748cf elementor-widget elementor-widget-text-editor\" data-id=\"df748cf\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"font-weight: 400;\">Further Reading:<\/p><ul><li><div><a href=\"https:\/\/speechneurolab.ca\/en\/leffet-mcgurk\/\"><span lang=\"EN-US\">McGurk effect<\/span><\/a><\/div><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/decouper-le-langage-pour-mieux-letudier\/\">The SyllabO project<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/audiology-and-the-work-of-audiologists\/\">Audiology and the work of audiologists<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/le-cortex-auditif\/\">The auditory cortex<\/a><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/the-peripheral-auditory-system\/\">The peripheral auditory system<\/a><\/li><li><div><a href=\"https:\/\/speechneurolab.ca\/en\/le-systeme-auditif-central-sous-cortical\/\"><span lang=\"EN-US\"><span lang=\"EN-US\">The central subcortical auditory system<\/span><\/span><\/a><\/div><\/li><li><a href=\"https:\/\/speechneurolab.ca\/en\/speech-perception-a-complex-ability\/\">Speech perception: a complex ability<\/a><\/li><li><div><div><a href=\"https:\/\/speechneurolab.ca\/en\/difference-between-speech-language-and-communication\/\"><span lang=\"EN-US\">Difference between speech, language and 
communication<\/span><\/a><\/div><div>\u00a0<\/div><\/div><\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>When in a busy social event, there are many different conversations and noises being produced simultaneously. With all this noise, one could get overwhelmed with this much information. Yet we are still able to follow a conversation. This phenomenon is called the cocktail party effect. <\/p>\n","protected":false},"author":2,"featured_media":8635,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[44],"tags":[397,321,323,329,331],"ppma_author":[564,54],"class_list":["post-8555","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-vulgarisation-scientifique","tag-brain-en","tag-hearing","tag-language-2","tag-perception-2","tag-speech"],"authors":[{"term_id":564,"user_id":0,"is_guest":1,"slug":"keir-lawley","display_name":"Keir Lawley","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","author_category":"","user_url":"","last_name":"","first_name":"","job_title":"","description":""},{"term_id":54,"user_id":2,"is_guest":0,"slug":"admin-pascale","display_name":"Pascale 
Tremblay","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/ea9e5826afc1fd507cc7b89eaca37953ea310ad30088c3920137ab8e86846244?s=96&d=mm&r=g","author_category":"","user_url":"","last_name":"Tremblay","first_name":"Pascale","job_title":"","description":""}],"_links":{"self":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/8555","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/comments?post=8555"}],"version-history":[{"count":24,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/8555\/revisions"}],"predecessor-version":[{"id":9876,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/posts\/8555\/revisions\/9876"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/media\/8635"}],"wp:attachment":[{"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/media?parent=8555"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/categories?post=8555"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/tags?post=8555"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/speechneurolab.ca\/en\/wp-json\/wp\/v2\/ppma_author?post=8555"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}