The Duke Institute for Brain Sciences (DIBS) recently announced the 2019 Germinator and Incubator Awards, competitive internal awards designed to catalyze cross-disciplinary collaborations. Faculty member John Pearson, PhD, will serve on research teams for three separate awards.
The first team holds a research Incubator Award for “Decoding of Speech for Neural Prostheses Using High-density Electrocorticography and Machine Learning,” with Greg Cogan, PhD (Neurosurgery), Saurabh Sinha, MD, PhD (Neurology), Derek Southwell, PhD (Neurosurgery), and Jonathan Viventi, PhD (Biomedical Engineering). Patients with debilitating neuromuscular disorders have difficulty communicating through language, and current assistive technologies are slow and cumbersome. This group will explore a promising new technology that reconstructs speech directly from the brain: the team will record brain signals at much higher resolution than previously achieved and develop pattern-analysis techniques to extract speech and language information from those signals. The goal is to synthesize better-quality speech sounds for these patients, helping them communicate more clearly and effectively.
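To make the general idea concrete, here is a minimal, purely illustrative sketch of a decoding pipeline of this kind: synthetic multichannel signals are band-passed to the high-gamma range, reduced to per-channel power features, and fed to a linear classifier. Nothing here reflects the team's actual data or methods; the sampling rate, channel count, class labels, and choice of classifier are all hypothetical.

```python
# Illustrative sketch only: a toy neural-decoding pipeline, NOT the
# team's actual method. All signal parameters and labels are invented.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 1000                                        # sampling rate (Hz), hypothetical
n_trials, n_channels, n_samples = 200, 64, 500   # toy "high-density" grid
labels = rng.integers(0, 4, n_trials)            # 4 hypothetical speech classes

# Synthetic data: each class adds a weak class-specific spatial pattern,
# modulated at 90 Hz (within the high-gamma band), to background noise.
patterns = rng.standard_normal((4, n_channels))
data = rng.standard_normal((n_trials, n_channels, n_samples))
t = np.arange(n_samples) / fs
carrier = np.sin(2 * np.pi * 90 * t)
data += 0.5 * patterns[labels][:, :, None] * carrier

# Feature extraction: band-pass 70-150 Hz ("high gamma"), then log power.
b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
hg = filtfilt(b, a, data, axis=-1)
features = np.log(np.mean(hg ** 2, axis=-1))     # one power value per channel

# A simple linear decoder; real speech prostheses use far richer models.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```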
The second team holds a research Incubator Award for “Computational Links between Visual and Linguistic Perception,” with PI Marc Sommer, PhD (Biomedical Engineering, Pratt School of Engineering), and Elika Bergelson, PhD (Psychology & Neuroscience, Trinity College of Arts & Sciences). Theories of perception assume either that the brain constructs a model of the world that merges past experiences with current evidence, or that it relies on simple, flexible systems to classify patterns. This research group recently showed that in visual perception, humans switch between the two strategies. That switch might be specific to vision or common to all perception, so the group will perform similar experiments in the domain of language. Finding computational commonalities between vision and language would help reveal general principles of brain function and provide insight into perceptual disorders.
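The distinction between the two strategies can be illustrated with a toy decision task, sketched below. This is purely a textbook-style example under invented numbers, not the group's experimental paradigm: a model-based observer weighs current evidence against prior experience, while a pattern classifier simply applies a learned boundary.

```python
# Toy contrast between the two perceptual strategies described above.
# Purely illustrative; all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Task: decide whether a noisy observation x came from category A or B.
mu_a, mu_b, sigma = -1.0, 1.0, 1.5
prior_a = 0.8                      # past experience: A is far more common

def model_based(x):
    """Bayesian strategy: merge prior experience with current evidence."""
    like_a = np.exp(-(x - mu_a) ** 2 / (2 * sigma ** 2))
    like_b = np.exp(-(x - mu_b) ** 2 / (2 * sigma ** 2))
    post_a = like_a * prior_a / (like_a * prior_a + like_b * (1 - prior_a))
    return "A" if post_a > 0.5 else "B"

def pattern_classifier(x):
    """Flexible pattern strategy: a simple learned boundary, no explicit prior."""
    return "A" if x < 0.0 else "B"

x = rng.normal(mu_b, sigma)        # one ambiguous observation near the boundary
print(f"x={x:+.2f}  model-based: {model_based(x)}  classifier: {pattern_classifier(x)}")
```

For observations near the boundary, the model-based observer tends to answer "A" because of its strong prior, while the classifier answers purely from the current sample; detecting which answer pattern a person produces is the kind of signature that distinguishes the two strategies.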
The final team holds a Germinator Award for “The Spatiotemporal Dynamics of Self-Regulation Learning in Real-time fMRI Neurofeedback,” with Rachael Wright (Psychology & Neuroscience, Trinity College of Arts & Sciences); Alison Adcock, MD, PhD (Psychiatry & Behavioral Sciences); and Kevin LaBar, PhD (Psychology & Neuroscience). Neurofeedback is a promising method for examining the relationship between brain function and behavior: individuals are shown a graphical representation of a specific brain signal and, through practice, learn to control it. Scientists can then measure whether regulating the targeted signal changes thoughts, feelings, and behaviors. Clinicians have also applied neurofeedback to treat symptoms of psychiatric and neurological disorders, yet the neural mechanisms by which it works remain poorly understood. Answering that question requires knowing how regions throughout the brain interact during training as individuals learn to control a specific signal. In this project, the team will develop a new approach to tracking how brain states change during neurofeedback learning, using advanced brain-imaging technology and computational analysis tools. Ultimately, the project will improve our understanding of how neurofeedback works and inform advances in its design and application.
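The closed loop at the heart of neurofeedback (measure a target signal, display it, let the participant adapt) can be sketched in a few lines. The example below is a hypothetical stand-in, not the team's real-time fMRI pipeline: the signal source, the display mapping, and the update rate are all assumptions for illustration.

```python
# Minimal sketch of a neurofeedback loop; the signal source and feedback
# mapping are hypothetical placeholders, not a real-time fMRI system.
import time
import numpy as np

rng = np.random.default_rng(2)

def acquire_brain_signal(t):
    """Placeholder for one imaging volume's worth of target-region activity."""
    return np.sin(0.2 * t) + 0.3 * rng.standard_normal()

def to_feedback(value, lo=-1.5, hi=1.5):
    """Map the signal onto a 0-100 'thermometer' display for the participant."""
    return int(100 * np.clip((value - lo) / (hi - lo), 0.0, 1.0))

for t in range(10):                 # ten feedback updates (e.g., one per scan)
    signal = acquire_brain_signal(t)
    bar = to_feedback(signal)
    print(f"t={t:2d}  signal={signal:+.2f}  display={'#' * (bar // 10)}")
    time.sleep(0.1)                 # real systems update once per scan volume
```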