Frank H. Guenther

From Wikipedia, the free encyclopedia

Frank H. Guenther (born April 18, 1964, Kansas City, MO) is an American computational and cognitive neuroscientist whose research focuses on the neural computations underlying speech, including characterization of the neural bases of communication disorders and development of brain–computer interfaces for communication restoration. He is currently a professor of speech, language, and hearing sciences and biomedical engineering at Boston University.

Education

Frank Guenther received a B.S. in electrical engineering from the University of Missouri in Columbia (1986), graduating summa cum laude and ranking first overall in the College of Engineering. He received an M.S. in electrical engineering from Princeton University (1987) and a Ph.D. in cognitive and neural systems from Boston University (1993).

Professional career

In 1992, Guenther joined the faculty of the Department of Cognitive and Neural Systems at Boston University, receiving tenure in 1998. In 2010 he became associate director of the Graduate Program for Neuroscience and director of the computational neuroscience PhD specialization at Boston University, and that same year he joined the Department of Speech, Language, and Hearing Sciences at BU. In addition to his Boston University appointments, Guenther was a research affiliate in the Research Laboratory of Electronics at the Massachusetts Institute of Technology (MIT) from 1998 to 2011, and in 2011 he became a research affiliate in the Picower Institute for Learning and Memory at MIT. Since 1998 he has been a member of the Speech and Hearing Bioscience and Technology PhD program in the Harvard–MIT Division of Health Sciences and Technology, and since 2003 he has been a visiting scientist in the Department of Radiology at Massachusetts General Hospital. Guenther has given numerous keynote and distinguished lectures worldwide and has authored more than 55 refereed journal articles on the neural bases of speech and motor control and on brain–computer interface technology.

Research

Frank Guenther's research aims to uncover the neural computations underlying the processing of speech by the human brain. He is the originator of the Directions Into Velocities of Articulators (DIVA) model, currently the leading model of the neural computations underlying speech production.[1][2][3][4][5] The model mathematically characterizes the computations performed by each brain region involved in speech production, as well as the function of the interconnections between these regions. It has been supported by a wide range of experimental tests of its predictions, including electromagnetic articulometry studies of speech movements,[6][7][8][9][10] auditory perturbation studies in which speakers' auditory feedback of their own speech is modified in real time,[11][12][13][14] and functional magnetic resonance imaging studies of brain activity during speech,[12][15][16][17] though some parts of the model remain to be experimentally verified. The DIVA model has been used to investigate the neural underpinnings of a number of communication disorders, including stuttering,[18][19] apraxia of speech,[20][21] and hearing-impaired speech.[8][9][10]
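The control scheme at the heart of this account, a learned feedforward command corrected by an auditory feedback error signal, can be sketched in a toy one-dimensional simulation. This is an illustration of the general feedforward-plus-feedback idea only, not the DIVA model's actual equations; all names, gains, and dynamics below are invented for the example.

```python
# Toy sketch of feedforward + auditory-feedback control (illustrative only,
# not the DIVA model itself). A scalar "articulatory state" is driven toward
# an auditory target; a perturbation shifts what the speaker hears, as in
# auditory perturbation experiments.

def speak(target, feedforward, perturbation=0.0, gain_fb=0.5, steps=20):
    """Return the final articulatory state after a short utterance."""
    state = 0.0
    for _ in range(steps):
        heard = state + perturbation           # auditory feedback (possibly shifted)
        error = target - heard                 # auditory error signal
        command = feedforward + gain_fb * error
        state += 0.3 * (command - state)       # sluggish articulator dynamics
    return state

# Unperturbed, the output settles at the target. With an upward shift in
# the feedback, the controller lowers its production, opposing the shift --
# the partial compensation seen in perturbation studies.
normal = speak(target=1.0, feedforward=1.0)
shifted = speak(target=1.0, feedforward=1.0, perturbation=0.3)
```

In this toy setup the perturbed production ends up below the unperturbed one, the same direction of compensation that the perturbation experiments cited above report in human speakers.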

In addition to computational modeling and experimental research on the neural bases of speech, Guenther directs the Boston University Neural Prosthesis Laboratory, which develops technologies that decode the brain signals of profoundly paralyzed individuals, particularly those with locked-in syndrome, to control external devices such as speech synthesizers, mobile robots, and computers. Guenther's team received widespread press coverage in 2009 when, in collaboration with Dr. Philip Kennedy (inventor of the neurotrophic electrode used in the study) and Dr. Jonathan Brumberg, it developed a brain–computer interface for real-time speech synthesis that allowed locked-in patient Erik Ramsey to produce vowel sounds.[22] He has also made headlines for his research into non-invasive brain–computer interfaces for communication.[23][24] In 2011, Guenther founded the Unlock Project, a non-profit effort aimed at providing free brain–computer interface technology to patients with locked-in syndrome.
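The decoding step in such an interface, mapping recorded neural activity to the acoustic parameters of a synthesizer, can be sketched with a simple linear decoder fit by least squares. This is a generic illustration of linear neural decoding, not the decoder published in the 2009 study; the data, dimensions, and noise levels are all made up for the example.

```python
# Illustrative linear decoding sketch (not the published 2009 decoder):
# fit a mapping from neural firing rates to the first two formant
# frequencies, the kind of output used to drive a speech synthesizer.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 10 units over 200 frames,
# paired with two formant values generated from a known linear mapping.
rates = rng.normal(size=(200, 10))
true_weights = rng.normal(size=(10, 2))
formants = rates @ true_weights + rng.normal(scale=0.01, size=(200, 2))

# Fit the decoder by ordinary least squares: formants ~= rates @ W
W, *_ = np.linalg.lstsq(rates, formants, rcond=None)

# Decode a new frame of neural activity into formant estimates that
# could be sent to a real-time synthesizer.
new_frame = rng.normal(size=(1, 10))
decoded = new_frame @ W
```

With enough low-noise training frames the fitted weights recover the generating mapping closely; real neural data are far noisier, which is why practical decoders add filtering and regularization.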

Media

Frank Guenther's research has been covered extensively in the science and mainstream media, including television spots on CNN News,[25] PBS NewsHour,[23] and Fox News;[26] articles in popular science magazines Nature News,[27] New Scientist,[28][29] Discover,[30][31] and Scientific American;[32][33][34] and mainstream media coverage in Esquire,[35] Wired,[36] The Boston Globe,[37] MSNBC,[38] and BBC News.[39]

References

  1. ^ Guenther, Frank H. (1 November 1994). "A neural network model of speech acquisition and motor equivalent speech production". Biological Cybernetics. 72 (1): 43–53. doi:10.1007/BF00206237. PMID 7880914. S2CID 1763440.
  2. ^ Guenther, FH (July 1995). "Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production". Psychological Review. 102 (3): 594–621. doi:10.1037/0033-295x.102.3.594. PMID 7624456. S2CID 10405448.
  3. ^ Guenther, Frank H.; Hampson, Michelle; Johnson, Dave (1998). "A theoretical investigation of reference frames for the planning of speech movements". Psychological Review. 105 (4): 611–633. doi:10.1037/0033-295x.105.4.611. PMID 9830375. S2CID 11179837.
  4. ^ Guenther, Frank H.; Ghosh, Satrajit S.; Tourville, Jason A. (March 2006). "Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production". Brain and Language. 96 (3): 280–301. doi:10.1016/j.bandl.2005.06.001. PMC 1473986. PMID 16040108.
  5. ^ Golfinopoulos, E.; Tourville, J.A.; Guenther, F.H. (September 2010). "The integration of large-scale neural network modeling and functional brain imaging in speech motor control". NeuroImage. 52 (3): 862–874. doi:10.1016/j.neuroimage.2009.10.023. PMC 2891349. PMID 19837177.
  6. ^ Perkell, J.S., Guenther, F.H., Lane, H., Matthies, M.L., Stockmann, E., Tiede, M., and Zandipour, M. (2004). The distinctness of speakers’ productions of vowel contrasts is related to their discrimination of the contrasts. Journal of the Acoustical Society of America, 116(4) Pt. 1, pp. 2338-2344.
  7. ^ Perkell, J.S., Matthies, M.L., Tiede, M., Lane, H., Zandipour, M., Marrone, N., Stockmann, E., and Guenther, F.H. (2004). The distinctness of speakers’ /s-sh/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, pp. 1259-1269.
  8. ^ a b Lane, H., Denny, M., Guenther, F.H., Matthies, M.L., Menard, L., Perkell, J.S., Stockmann, E., Tiede, M., Vick, J., and Zandipour, M. (2005). Effects of bite blocks and hearing status on vowel production. Journal of the Acoustical Society of America, 118, pp. 1636-1646.
  9. ^ a b Lane, H, Denny, M., Guenther, F.H., Hanson, H., Marrone, N., Matthies, M.L., Perkell, J.S., Burton, E., Tiede, M., Vick, J., and Zandipour, M. (2007). On the structure of phoneme categories in listeners with cochlear implants. Journal of Speech, Language, and Hearing Research, 50, pp. 2-14.
  10. ^ a b Lane, H., Matthies, M.L., Denny, M., Guenther, F.H., Perkell, J.S., Stockmann, E., Tiede, M., Vick, J., and Zandipour, M. (2007). Effects of short- and long-term changes in auditory feedback on vowel and sibilant contrasts. Journal of Speech, Language, and Hearing Research, 50, pp. 913-927.
  11. ^ Villacorta, V.M., Perkell, J.S., and Guenther, F.H. (2007). Sensorimotor adaptation to feedback perturbations of vowel acoustics and its relation to perception. Journal of the Acoustical Society of America, 122, pp. 2306-2319.
  12. ^ a b Tourville, J.A., Reilly, K.J., and Guenther, F.H. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39, pp. 1429-1443.
  13. ^ Patel, R., Niziolek, C., Reilly, K.J., and Guenther, F.H. (2011). Prosodic adaptations to pitch perturbation in running speech. Journal of Speech, Language, and Hearing Research, 54, pp. 1051-1059.
  14. ^ Cai, S., Ghosh, S.S., Guenther, F.H., and Perkell, J.S. (2011). Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing. Journal of Neuroscience, 31, pp. 16483-90.
  15. ^ Ghosh, S.S., Tourville, J.A., and Guenther, F.H. (2008). A neuroimaging study of premotor lateralization and cerebellar involvement in the production of phonemes and syllables. Journal of Speech, Language, and Hearing Research, 51, pp. 1183-1202.
  16. ^ Bohland, J.W. and Guenther, F.H. (2006). An fMRI investigation of syllable sequence production. NeuroImage, 32, pp. 821-841.
  17. ^ Peeva, M.G., Guenther, F.H., Tourville, J.A., Nieto-Castanon, A., Anton, J.-L., Nazarian, B., and Alario, F.-X. (2010). Distinct representations of phonemes, syllables, and supra-syllabic sequences in the speech production network. NeuroImage, 50, pp. 626-638.
  18. ^ Max, L., Guenther, F.H., Gracco, V.L., Ghosh, S.S., and Wallace, M.E. (2004). Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science and Disorders, 31, pp. 105-122.
  19. ^ Civier, O., Tasko, S.M., and Guenther, F.H. (2010). Overreliance on auditory feedback may lead to sound/syllable repetitions: Simulations of stuttering and fluency-inducing conditions with a neural model of speech production. Journal of Fluency Disorders, 35, pp. 246-279.
  20. ^ Terband, H., Maassen, B, Guenther, F.H., and Brumberg, J. (2009). Computational neural modeling of speech motor control in childhood apraxia of speech. Journal of Speech, Language, and Hearing Research, 52, pp. 1595-1609.
  21. ^ Maas, E., Mailend, M.-L., Story, B.H., and Guenther, F.H. (2011). The role of auditory feedback in apraxia of speech: Effects of feedback masking on vowel contrast. 6th International Conference on Speech Motor Control, Groningen, The Netherlands.
  22. ^ Guenther, F.H., Brumberg, J.S., Wright, E.J., Nieto-Castanon, A., Tourville, J.A., Panko, M., Law, R., Siebert, S.A., Bartels, J.L., Andreasen, D.S., Ehirim, P., Mao, H., and Kennedy, P.R. (2009). A wireless brain-machine interface for real-time speech synthesis. PLoS ONE, 4(12), e8218.
  23. ^ a b "Brain-Powered Technology May Help Locked-In Patients." PBS NewsHour, October 14, 2011, https://www.pbs.org/newshour/rundown/2011/10/brain-powered-technology-may-help-locked-in-patients.html Archived 2014-01-22 at the Wayback Machine
  24. ^ "Science for the Public". www.scienceforthepublic.org. Retrieved 2023-08-11.
  25. ^ Lee, Y.S. “Scientists seek to help 'locked-in' man speak.” CNN 14 December 2007. [1][dead link]
  26. ^ Underwood, C. “Brain Implants May Let ‘Locked-In’ Patients Speak” Fox News 23 May 2008. [2]
  27. ^ Smith, Kerri (2008-11-21). "Brain implant allows mute man to speak". Nature. doi:10.1038/news.2008.1247. ISSN 1476-4687.
  28. ^ "Locked-in man controls speech synthesiser with thought". New Scientist. Retrieved 2023-08-11.
  29. ^ "Telepathy machine reconstructs speech from brainwaves". New Scientist. Retrieved 2023-08-11.
  30. ^ Weed, W. S. “The Biology of…Stuttering.” Discover Magazine 1 November 2002. http://discovermagazine.com/2002/nov/featbiology
  31. ^ Baker, S. "The Rise of the Cyborgs: Melding humans and machines to help the paralyzed walk, the mute speak and the near-dead return to life." Discover Magazine 26 September 2008. http://discovermagazine.com/2008/oct/26-rise-of-the-cyborgs
  32. ^ "From Mouth to Mind". Scientific American. Retrieved 2023-08-11.
  33. ^ Svoboda, Elizabeth. "How to Avoid Choking under Pressure". Scientific American. Retrieved 2023-08-11.
  34. ^ Brown, Alan S. "Putting Thoughts into Action: Implants Tap the Thinking Brain". Scientific American. Retrieved 2023-08-11.
  35. ^ "The Unspeakable Odyssey of the Motionless Boy". Esquire. 2008-10-02. Retrieved 2023-08-11.
  36. ^ Keim, Brandon. "Wireless Brain-to-Computer Connection Synthesizes Speech". Wired. ISSN 1059-1028. Retrieved 2023-08-11.
  37. ^ Rosenbaum, S. I. “Out of Silence, the sounds of hope.” Boston Globe 27 July 2008. http://www.boston.com/news/health/articles/2008/07/27/out_of_silence_the_sounds_of_hope/
  38. ^ "Device turns thoughts into speech". NBC News. 2009-12-31. Retrieved 2023-08-11.
  39. ^ "Paralysed man's mind is 'read'". 2007-11-15. Retrieved 2023-08-11.