Selective auditory attention

From Wikipedia, the free encyclopedia

Selective auditory attention, or selective hearing, is a process of the auditory system in which an individual selects or focuses on certain stimuli for auditory information processing while other stimuli are disregarded.[1] This selection is important because human processing and memory capacity is limited.[2] When people use selective hearing, noise from the surrounding environment is heard by the auditory system, but only certain parts of the auditory information are chosen to be processed by the brain.

Most often, auditory attention is directed at the things people are most interested in hearing.[3] Selective hearing is not a physiological disorder; rather, it is the capability of most humans to block out sounds and noise, ignoring certain elements of the surrounding environment.

Bottleneck effect


In an article by Karns, Isbell, Giuliano, and Neville (2015), selective auditory attention is described through the bottleneck effect, a process by which the brain inhibits the processing of multiple simultaneous stimuli. For example, a student focused on a teacher giving a lesson ignores the sounds of classmates in a rowdy classroom (p. 53). As a result, the information from the teacher is encoded and stored in the student's long-term memory, while the stimuli from the rowdy classroom are ignored as if they were not present in the first place. A brain simply cannot, for a sustained period, collect all the sensory information present in a chaotic real-world environment, so only the most relevant and important information is thoroughly processed by the brain.[4]

History


Early research on selective auditory attention can be traced back to 1953, when Colin Cherry introduced the "cocktail party problem".[5] At the time, air traffic controllers in the control tower received messages from pilots through loudspeakers, and hearing mixed voices through a single loudspeaker made the task very difficult.[6] In Cherry's experiment, which mimicked the problem faced by air traffic controllers, participants had to listen to two messages played simultaneously from one loudspeaker and repeat what they heard.[5] This was later termed the dichotic listening task.[7]

Though the task was introduced by Colin Cherry, Donald Broadbent is often regarded as the first to apply dichotic listening tests systematically in his research.[8] Broadbent used dichotic listening to test how participants selectively attend to stimuli when overloaded with auditory input, and used his findings to develop the filter model of attention in 1958.[9] Broadbent theorized that the human information processing system has a "bottleneck" due to limited capacity and that the brain performs an "early selection" before processing auditory information.[10] He proposed that auditory information enters an unlimited sensory buffer, where one stream of information is selected and passes through the bottleneck for further processing, while all unselected streams quickly decay and are not processed.[11] Broadbent's model conflicts with the cocktail party phenomenon: because unattended information is discarded before being processed, the model predicts that people would never respond to their names from unattended sources.
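The early-selection idea can be illustrated with a toy sketch (an illustrative simplification, not an implementation from the literature; the channel labels and words are invented). The filter selects one stream purely on a physical characteristic, here its channel label, before any semantic analysis takes place:

```python
def broadbent_filter(streams, attended_channel):
    """Toy sketch of Broadbent's early-selection filter.

    streams: dict mapping channel label -> list of words in that stream
    attended_channel: the single channel the filter lets through
    Returns the words that reach semantic processing.
    """
    processed = []
    for channel, words in streams.items():
        if channel == attended_channel:
            processed.extend(words)  # passes the bottleneck intact
        # Unattended streams decay in the sensory buffer: nothing is
        # processed, so even the listener's own name would be discarded.
    return processed


streams = {"left_ear": ["watch", "out", "Anna"],
           "right_ear": ["the", "meeting", "starts"]}
print(broadbent_filter(streams, "right_ear"))  # ['the', 'meeting', 'starts']
```

Note that the word "Anna" on the unattended channel is discarded entirely, which is exactly the prediction that the cocktail party phenomenon contradicts.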

Deutsch & Deutsch's late selection model, proposed in 1963, is a competing model to Broadbent's early selection model.[12] It theorizes that all sensory input is attended to and processed for meaning.[12] Late in the processing routine, just before information enters short-term memory, a filter analyzes the semantic characteristics of the information, letting stimuli containing relevant information pass through to short-term memory and discarding irrelevant information. The model suggests that the weak response to unattended stimuli results from an internal decision about informational relevance, in which more important stimuli are prioritized for entry into working memory.

In 1964, Anne Treisman, a graduate student of Broadbent, refined Broadbent's theory and proposed her own attenuation model.[13] In Treisman's model, unattended information is attenuated, that is, turned down relative to attended information, but still processed. For example, imagine that you are exposed to three extraneous sources of sound in a coffee shop while ordering a drink (chatter, the coffee brewer, music). Treisman's model indicates that you would still pick up on these three sounds while attending to the cashier, but the extraneous sources of noise would be muffled, as if their "volumes" had been turned down. Treisman also suggested that a threshold mechanism exists in selective auditory attention whereby words from the unattended stream of information can grab one's attention. Words of low threshold, that is, of high meaning and importance, such as one's name or "watch out", redirect one's attention to where it is urgently required.[13]
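The contrast with Broadbent's all-or-nothing filter can be sketched in the same toy style (illustrative only; the attenuation gain, threshold words, and listener's name are invented for the example). Unattended streams are turned down rather than blocked, and low-threshold words break through at full salience:

```python
# Hypothetical low-threshold words for an example listener named "Anna".
LOW_THRESHOLD_WORDS = {"anna", "watch out"}


def treisman_attenuate(streams, attended_channel, attenuation=0.2):
    """Toy sketch of Treisman's attenuation model.

    Returns (word, effective_salience) pairs reaching later analysis.
    Attended words keep full salience (1.0); unattended words are
    attenuated, except low-threshold words, which break through.
    """
    output = []
    for channel, words in streams.items():
        gain = 1.0 if channel == attended_channel else attenuation
        for word in words:
            salience = gain
            if word.lower() in LOW_THRESHOLD_WORDS:
                salience = 1.0  # low threshold: captures attention anyway
            output.append((word, salience))
    return output


streams = {"chatter": ["coffee", "Anna"], "cashier": ["your", "total"]}
for word, salience in treisman_attenuate(streams, "cashier"):
    print(word, salience)
# "Anna" keeps salience 1.0 despite arriving on the unattended stream.
```

Unlike the early-selection sketch, every word is still represented in the output here; attenuation changes its strength, not its existence.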

Development in youth


Selective auditory attention is a component of auditory attention, which also includes arousal, the orienting response, and attention span. Examining selective auditory attention is easier in children and adults than in infants, owing to infants' limited ability to use and understand verbal commands. As a result, most of the understanding of auditory selection in infants is derived from other research, such as studies of speech and language perception and discrimination.[14] However, limited forms of selection have been recorded in infants, including a preference for the mother's voice over another female voice,[15] for the native language over a foreign one,[16] and for speech directed at infants over speech between adults.[17]

With age, children gain an increased ability to detect and select auditory stimuli compared to their younger counterparts. This suggests that selective auditory attention is an age-dependent ability that improves with the automatic processing of information.[18]

Just as younger children demonstrate a lesser ability to detect and select auditory stimuli than their older counterparts, their ability to discriminate relevant from irrelevant information has been shown to be lower as well. The ability to allocate attention to one message among interfering messages increases with age, particularly between the ages of 5 and 12, and levels off thereafter.[19]

Factors that have been shown to contribute to these heightened abilities include increased language ability and word familiarity as age increases.[19]

Another factor could be that older children are better equipped to understand a task, and the reward and/or punishment for completing it, and thus eliminate unnecessary stimuli more frequently.[20] Using the incidental learning paradigm, it has been found that children ages 11 and up become less likely to process incidental stimuli because they develop strategies for actively processing relevant information over irrelevant information.[21]

Overall, the inability to filter out irrelevant information and/or to allocate attention to relevant information traces back to developmentally immature attention allocation.[22]

Functional brain imaging studies of auditory attention


In recent years, neuroimaging tools such as PET (positron emission tomography) and fMRI (functional magnetic resonance imaging) have been very successful at localizing neural operations with high spatial resolution. In particular, multiple fMRI studies have found evidence for attention effects in the auditory cortex. Studies based on "classical" dichotic selective listening paradigms have likewise been successful: their findings showed that the effects were larger in the cortex contralateral to the direction of attention[23][24][25][26] and were interpreted as "selective tuning of the left or right auditory cortices according to the direction of attention".[26]

Prevalence


The prevalence of selective hearing has not been clearly researched. However, some have argued that selective hearing is more pronounced in males than in females. Ida Zündorf, Hans-Otto Karnath, and Jörg Lewald carried out a study in 2010 investigating the advantages and abilities males have in the localization of auditory information.[27] Their study used a sound localization task centered on the cocktail party effect: male and female participants had to pick out sounds from a specific source on top of competing sounds from other sources. The results showed that males had better performance overall, while female participants found it more difficult to locate target sounds in a multiple-source environment. Zündorf et al. suggested that there may be sex differences in the attention processes that help locate a target sound in a multiple-source auditory field. While men and women do show some differences in selective auditory attention, both struggle when presented with the challenge of multitasking, especially when the tasks to be attempted concurrently are very similar in nature (Dittrich & Stahl, 2012, p. 626).[28]

Disorder status


Selective hearing is not known to be a physiological or psychological disorder. According to the World Health Organization (WHO), a hearing disorder involves a loss of the ability to hear. Technically speaking, selective hearing is not "deafness" to a certain sound message; rather, it is an individual's selectivity in attending to a sound message. The whole sound message is physically heard by the ear, but the brain systematically filters out unwanted information to focus on the relevant, important portions of the message. Selective hearing should therefore not be confused with a physiological hearing disorder.[29] Selective auditory attention is a normal sensory process of the brain, although abnormalities of this process can occur in people with sensory processing disorders such as autism, attention deficit hyperactivity disorder,[30] post-traumatic stress disorder,[31] schizophrenia,[30] selective mutism,[32] and stand-alone auditory processing disorders.[33]

Target speech hearing


Target speech hearing has been proposed for hearable devices such as headsets and hearing aids to give wearers the ability to hear a target person in a crowd.[34][35] This technology uses real-time neural networks to learn the voice characteristics of the target speaker, which are later used to focus on their speech while removing other speakers and noise.[36][37] The deep learning-based device lets the wearer look at the target speaker for three to five seconds to enroll them.[35] The hearable device can then cancel all other sounds in the environment and play only the enrolled speaker's voice in real time, even as the listener moves around and no longer faces the speaker.[36] This could benefit individuals with hearing loss as well as those with sensory processing disorders.
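The enroll-then-extract idea behind these systems can be sketched at a very high level (a toy illustration only: the embedding vectors, the cosine-similarity matching, and the threshold stand in for the real-time neural networks the papers describe, and all names here are invented):

```python
import numpy as np


def enroll(target_clip_embedding):
    """Toy enrollment: normalize a voice embedding extracted from a few
    seconds of the target speaker's speech (captured while the wearer
    looks at them in the real systems)."""
    return target_clip_embedding / np.linalg.norm(target_clip_embedding)


def extract_target(mixture_embeddings, target_embedding, threshold=0.8):
    """Keep only sources whose voice embedding matches the enrolled target.

    mixture_embeddings: dict mapping source name -> embedding vector
    Returns the names of sources passed through to the wearer.
    """
    kept = []
    for name, emb in mixture_embeddings.items():
        emb = emb / np.linalg.norm(emb)
        similarity = float(emb @ target_embedding)  # cosine similarity
        if similarity >= threshold:
            kept.append(name)  # play this source; suppress the rest
    return kept


target = enroll(np.array([1.0, 0.2, 0.0]))
mixture = {"enrolled speaker": np.array([0.9, 0.25, 0.05]),
           "other speaker": np.array([0.0, 0.1, 1.0]),
           "street noise": np.array([0.1, 1.0, 0.3])}
print(extract_target(mixture, target))
```

In the actual devices, the matching operates on audio waveforms and must run within milliseconds on embedded hardware, but the separation principle, compare every source against an enrolled voice signature and suppress non-matches, is the same.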

Sound bubbles


Neural networks combined with noise-canceling technology have been used to develop headsets with customizable auditory zones, referred to as sound bubbles, that enable wearers to focus on speakers within a designated area while suppressing external sounds.[38] The core of the technology is a neural network optimized to process and analyze audio signals in real time (within one-hundredth of a second) on resource-limited headsets. This lightweight network is trained to identify the number of sound sources both inside and outside the sound bubble, isolate those sounds, and estimate the distance of each source, a task believed to be highly demanding even for the human brain.[38][39] The neural networks are embedded in noise-canceling headsets equipped with multiple microphones, yielding a system capable of generating a sound bubble with a programmable radius of 1 to 2 meters.[39] These sound bubble headsets can help wearers selectively focus on sounds that are spatially closer while suppressing those at greater distances.
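Once per-source distances have been estimated, the gating step itself is simple; the sketch below illustrates it (a toy simplification in which distances are given directly rather than inferred from multi-microphone audio, and the source labels are invented):

```python
def sound_bubble(sources, radius_m=1.5):
    """Toy sketch of sound-bubble gating.

    sources: list of (label, estimated_distance_m) pairs, where the
    distances stand in for the neural network's per-source estimates.
    Returns the labels of sources inside the programmable radius.
    """
    return [label for label, distance in sources if distance <= radius_m]


sources = [("friend", 0.8), ("nearby table", 1.2), ("espresso machine", 3.0)]
print(sound_bubble(sources))  # ['friend', 'nearby table']
```

The hard part in the real system is everything this sketch assumes away: counting, separating, and ranging the sources from raw microphone signals within one-hundredth of a second on headset-class hardware.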


References

  1. ^ Gomes, Hilary; Molholm, Sophie; Christodoulou, Christopher; Ritter, Walter; Cowan, Nelson (2000-01-01). "The development of auditory attention in children". Frontiers in Bioscience (Landmark Edition). 5 (3): 108–120. doi:10.2741/gomes. ISSN 2768-6701. PMID 10702373.
  2. ^ Schneider, Walter; Shiffrin, Richard M. (January 1977). "Controlled and automatic human information processing: I. Detection, search, and attention". Psychological Review. 84 (1): 1–66. doi:10.1037/0033-295X.84.1.1. ISSN 1939-1471.
  3. ^ Bess FH, Humes L (2008). Audiology: The Fundamentals. Philadelphia: Lippincott Williams & Wilkins.
  4. ^ Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J. (June 2015). "Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories". Developmental Cognitive Neuroscience. 13: 53–67. doi:10.1016/j.dcn.2015.03.001. PMC 4470421. PMID 26002721.
  5. ^ a b Cherry C (5 May 1953). "Some experiments on the recognition of speech, with one and two ears" (PDF).
  6. ^ Kantowitz BH, Sorkin RD (1983). Human factors : understanding people-system relationships. New York: Wiley. ISBN 0-471-09594-X. OCLC 8866672.
  7. ^ Revlin R (2013). Cognition : theory and practice. New York, NY: Worth Publishers. ISBN 978-0-7167-5667-5. OCLC 793099349.
  8. ^ Hugdahl K (2015). "Dichotic Listening and Language: Overview". International Encyclopedia of the Social & Behavioral Sciences. Elsevier. pp. 357–367. doi:10.1016/b978-0-08-097086-8.54030-6. ISBN 978-0-08-097087-5.
  9. ^ Moray N (1995). "Donald E. Broadbent: 1926-1993". The American Journal of Psychology. 108 (1): 117–21. PMID 7733412.
  10. ^ Goldstein S, Naglieri JA (19 November 2013). Handbook of Executive Functioning. New York, NY. ISBN 978-1-4614-8106-5. OCLC 866899923.{{cite book}}: CS1 maint: location missing publisher (link)
  11. ^ Broadbent DE (22 October 2013). Perception and communication. Oxford, England. ISBN 978-1-4832-2582-1. OCLC 899000591.{{cite book}}: CS1 maint: location missing publisher (link)
  12. ^ a b Deutsch JA, Deutsch D (January 1963). "Some theoretical considerations". Psychological Review. 70: 80–90. doi:10.1037/h0039515. PMID 14027390.
  13. ^ a b Treisman AM (May 1969). "Strategies and models of selective attention". Psychological Review. 76 (3): 282–99. doi:10.1037/h0027242. PMID 4893203.
  14. ^ Gomes, Hilary; Molholm, Sophie; Christodoulou, Christopher; Ritter, Walter; Cowan, Nelson (2000-01-01). "The development of auditory attention in children". Frontiers in Bioscience (Landmark Edition). 5 (3): 108–120. doi:10.2741/gomes. ISSN 2768-6701. PMID 10702373.
  15. ^ DeCasper, Anthony J.; Fifer, William P. (1980-06-06). "Of Human Bonding: Newborns Prefer Their Mothers' Voices". Science. 208 (4448): 1174–1176. Bibcode:1980Sci...208.1174D. doi:10.1126/science.7375928. ISSN 0036-8075. PMID 7375928.
  16. ^ Mehler, Jacques; Jusczyk, Peter; Lambertz, Ghislaine; Halsted, Nilofar; Bertoncini, Josiane; Amiel-Tison, Claudine (1988-07-01). "A precursor of language acquisition in young infants". Cognition. 29 (2): 143–178. doi:10.1016/0010-0277(88)90035-2. ISSN 0010-0277. PMID 3168420.
  17. ^ Fernald, Anne (1985-04-01). "Four-month-old infants prefer to listen to motherese". Infant Behavior and Development. 8 (2): 181–195. doi:10.1016/S0163-6383(85)80005-9. ISSN 0163-6383.
  18. ^ Goodman, Judith C.; Nusbaum, Howard C., eds. (1994-03-08). The Development of Speech Perception: The Transition from Speech Sounds to Spoken Words. The MIT Press. doi:10.7551/mitpress/2387.001.0001. ISBN 978-0-262-27408-1.
  19. ^ a b Maccoby, Eleanor E. (1967), "Selective Auditory Attention in Children", Advances in Child Development and Behavior Volume 3, Advances in Child Development and Behavior, vol. 3, Elsevier, pp. 99–124, doi:10.1016/s0065-2407(08)60452-8, ISBN 978-0-12-009703-6, retrieved 2024-05-04
  20. ^ Gibson, Eleanor; Rader, Nancy (1979), Hale, Gordon A.; Lewis, Michael (eds.), "Attention", Attention and Cognitive Development, Boston, MA: Springer US, pp. 1–21, doi:10.1007/978-1-4613-2985-5_1, ISBN 978-1-4613-2985-5, retrieved 2023-10-23
  21. ^ Lane, David M.; Pearson, Deborah A. (1982). "The Development of Selective Attention". Merrill-Palmer Quarterly. 28 (3): 317–337. ISSN 0272-930X. JSTOR 23086119.
  22. ^ Pearson, Deborah A.; Lane, David M.; Swanson, James M. (1991-08-01). "Auditory attention switching in hyperactive children". Journal of Abnormal Child Psychology. 19 (4): 479–492. doi:10.1007/BF00919090. ISSN 1573-2835. PMID 1757713.
  23. ^ Pugh, Kenneth R.; Shaywitz, Bennett A.; Shaywitz, Sally E.; Fulbright, Robert K.; Byrd, Dani; Skudlarski, Pawel; Shankweiler, Donald P.; Katz, Leonard; Constable, R.Todd; Fletcher, Jack; Lacadie, Cheryl; Marchione, Karen; Gore, John C. (December 1996). "Auditory Selective Attention: An fMRI Investigation". NeuroImage. 4 (3): 159–173. doi:10.1006/nimg.1996.0067. ISSN 1053-8119. PMID 9345506.
  24. ^ O'Leary, Daniel S.; Andreasen, Nancy C.; Hurtig, Richard R.; Hichwa, Richard D.; Watkins, G.Leonard; Boles Ponto, Laura L.; Rogers, Margaret; Kirchner, Peter T. (April 1996). "A Positron Emission Tomography Study of Binaurally and Dichotically Presented Stimuli: Effects of Level of Language and Directed Attention". Brain and Language. 53 (1): 20–39. doi:10.1006/brln.1996.0034. ISSN 0093-934X. PMID 8722897.
  25. ^ Tzourio, N.; El Massioui, F.; Crivello, F.; Joliot, M.; Renault, B.; Mazoyer, B. (January 1997). "Functional Anatomy of Human Auditory Attention Studied with PET". NeuroImage. 5 (1): 63–77. doi:10.1006/nimg.1996.0252. ISSN 1053-8119. PMID 9038285.
  26. ^ a b Alho, Kimmo; Medvedev, Sviatoslav V.; Pakhomov, Sergei V.; Roudas, Marina S.; Tervaniemi, Mari; Reinikainen, Kalevi; Zeffiro, Thomas; Näätänen, Risto (January 1999). "Selective tuning of the left and right auditory cortices during spatially directed attention". Cognitive Brain Research. 7 (3): 335–341. doi:10.1016/s0926-6410(98)00036-6. ISSN 0926-6410. PMID 9838184.
  27. ^ Zündorf IC, Karnath HO, Lewald J (June 2011). "Male advantage in sound localization at cocktail parties". Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 47 (6): 741–9. doi:10.1016/j.cortex.2010.08.002. PMID 20828679. S2CID 206983792.
  28. ^ Dittrich K, Stahl C (June 2012). "Selective impairment of auditory selective attention under concurrent cognitive load". Journal of Experimental Psychology. Human Perception and Performance. 38 (3): 618–27. doi:10.1037/a0024978. PMID 21928926.
  29. ^ "Deafness and hearing impairment". World Health Organization. WHO. 2012.
  30. ^ a b Vlcek P, Bob P, Raboch J (2014-07-14). "Sensory disturbances, inhibitory deficits, and the P50 wave in schizophrenia". Neuropsychiatric Disease and Treatment. 10: 1309–15. doi:10.2147/ndt.s64219. PMC 4106969. PMID 25075189.
  31. ^ Javanbakht A, Liberzon I, Amirsadri A, Gjini K, Boutros NN (October 2011). "Event-related potential studies of post-traumatic stress disorder: a critical review and synthesis". Biology of Mood & Anxiety Disorders. 1 (1): 5. doi:10.1186/2045-5380-1-5. PMC 3377169. PMID 22738160.
  32. ^ Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair (February 1, 2007). "Reduced Auditory Processing Capacity During Vocalization in Children With Selective Mutism". Biological Psychiatry. 61 (3): 419–421. doi:10.1016/j.biopsych.2006.02.020. PMID 16616723. S2CID 21750355. Retrieved July 12, 2020.
  33. ^ American Academy of Audiology. "Clinical Practice Guidelines: Diagnosis, Treatment and Management of Children and Adults with Central Auditory" (PDF). Retrieved 16 January 2017.
  34. ^ "Noise-canceling headphones use AI to let a single voice through". MIT Technology Review. Retrieved 2024-05-26.
  35. ^ a b Veluri, Bandhav; Itani, Malek; Chen, Tuochao; Yoshioka, Takuya; Gollakota, Shyamnath (2024-05-11). "Look Once to Hear: Target Speech Hearing with Noisy Examples". Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–16. arXiv:2405.06289. doi:10.1145/3613904.3642057. ISBN 979-8-4007-0330-0.
  36. ^ a b "AI headphones let wearer listen to a single person in a crowd, by looking at them just once". UW News. Retrieved 2024-05-26.
  37. ^ Zmolikova, Katerina; Delcroix, Marc; Ochiai, Tsubasa; Kinoshita, Keisuke; Černocký, Jan; Yu, Dong (May 2023). "Neural Target Speech Extraction: An overview". IEEE Signal Processing Magazine. 40 (3): 8–29. arXiv:2301.13341. Bibcode:2023ISPM...40c...8Z. doi:10.1109/MSP.2023.3240008. ISSN 1053-5888.
  38. ^ a b Ma, Dong (2024-11-14). "Creating sound bubbles with intelligent headsets". Nature Electronics: 1–2. doi:10.1038/s41928-024-01281-2. ISSN 2520-1131.
  39. ^ a b Chen, Tuochao; Itani, Malek; Eskimez, Sefik Emre; Yoshioka, Takuya; Gollakota, Shyamnath (2024-11-14). "Hearable devices with sound bubbles". Nature Electronics: 1–12. doi:10.1038/s41928-024-01276-z. ISSN 2520-1131.