2001 — 2003 |
Simon, Jonathan Z |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary short-term projects and are non-renewable. |
Coincidence Detection Models in Auditory Brainstem @ University of Maryland College Pk Campus
DESCRIPTION (from applicant's abstract): The long-term objective of this research is to characterize coincidence-detector neurons used for binaural sound localization in the avian and mammalian brainstem, using computational, biophysically accurate models of individual neurons. The specific research goals are to: 1) build and characterize models of avian brainstem Nucleus Laminaris (NL) neurons, using physiological data from chicks (Gallus domesticus), capable of reproducing the behavior and capabilities of real chick NL neurons; 2) build and characterize models of NL neurons in barn owls (Tyto alba), capable of reproducing the behavior and superior capabilities of real barn owl NL neurons, using physiological data from barn owls (supplemented by data from chicks); 3) modify and generalize the models of (1) and (2) to examine coincidence detection in mammalian brainstem Medial Superior Olive (MSO) cells, using mammalian data. The models will be designed to determine and analyze the anatomical and physiological features crucial to coincidence detection. The fundamental features are similar in chicks and barn owls, but crucial differences give the barn owl its significantly greater abilities. The differences between NL and MSO are even larger, but their functions as coincidence detectors are sufficiently similar to permit MSO models to be built analogously to the NL models, modified according to anatomical and physiological differences. The research design and methods are based on constructing the models in the neural modeling environment NEURON (freely available from Yale University), with voltage-dependent conductance mechanisms and synaptic mechanisms written in the programming language C, for speed. The anatomical and physiological parameters describing the model cells are taken from the literature. Each model neuron has multiple dendrites, a soma, an axon hillock, a myelinated segment, and a node of Ranvier.
The number, length, and branching of dendrites are free parameters and may vary tonotopically. The dendritic and somatic sections have low-voltage-activated (LVA) and high-voltage-activated (HVA) K+ conductances. The axon hillock and node of Ranvier have spike-generating and spike-propagating voltage-dependent ion channels. Non-phase-locked, diffuse, depolarizing inhibition is incorporated. Input vector strength is linked to the best frequency (BF) of the neuron, and NL versions of the model also link BF with dendritic length. The model automatically generates physiologically useful statistics (e.g., spike rate and vector strength) in numerical and graphical forms. The health relevance of this research comes from an improved understanding of the neural mechanisms of sound localization. Understanding those mechanisms will substantially aid the quest for hearing aids and cochlear implants that would allow their users to localize sounds far better than current technology allows.
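Of the summary statistics mentioned above, vector strength is easy to make concrete. The following is a minimal illustrative sketch, not code from the actual model: the function name and example spike times are hypothetical. Vector strength is the mean resultant length of spike phases relative to a periodic stimulus, ranging from 1.0 (perfect phase locking) down to about 0.0 (no locking).

```python
import cmath
import math

def vector_strength(spike_times_s, stim_freq_hz):
    """Mean resultant length of spike phases relative to a periodic
    stimulus: 1.0 = perfect phase locking, ~0.0 = no locking."""
    if not spike_times_s:
        raise ValueError("need at least one spike")
    # Map each spike time to a phase angle on the unit circle.
    phases = [2.0 * math.pi * stim_freq_hz * t for t in spike_times_s]
    resultant = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(resultant)

# Spikes locked to every cycle of a 500 Hz tone: vector strength 1.
locked = [k / 500.0 for k in range(100)]
print(round(vector_strength(locked, 500.0), 6))  # prints 1.0
```

Spikes spread uniformly in phase would instead give a resultant near zero, which is why the statistic usefully separates phase-locked from unlocked responses.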
|
0.988 |
2008 — 2012 |
Simon, Jonathan Z |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Neural Basis of Perceptually-Relevant Auditory Modulations in Humans @ Univ of Maryland, College Park
DESCRIPTION (provided by applicant): One of the goals of auditory neuroscience is to understand how speech and other natural sounds are analyzed and encoded in the human auditory cortex. One major finding is that perception and speech processing are crucially affected by temporal modulations in the acoustic signal. However, identifying in humans the physiological mechanisms that underlie the analysis of perceptually relevant temporal modulations presents a considerable technical challenge. Extracellular recording methods are ideal for the investigation of time-based neural coding mechanisms, but they are typically limited to a single auditory area and cannot generally be used in human subjects. Magnetoencephalography (MEG) is a non-invasive tool, suitable for use in humans, that records high-speed neural signals from the entire brain, though at the cost of significantly coarser spatial resolution. Fortunately, recent work has shown that investigations of the neural coding of acoustic modulations can indeed be conducted using MEG with human subjects. Thus MEG and extracellular recording can both be employed, in complementary ways, to investigate how temporal modulations are encoded by auditory cortex. The goal of this proposed research program is to understand how these acoustic modulations, the building blocks of speech and other natural sounds, are encoded in auditory cortex. The acoustic modulations whose encoding is investigated are either embedded in a noisy background, as in a natural auditory scene, or modulated in both frequency and amplitude, independently and simultaneously, as in speech. The research program employs parallel sets of experiments: one set using MEG to record from human auditory cortex, and the other using extracellular recording methods in an animal model.
With recordings from individual neurons, from the extracellular local field potential, and from the whole cortex, it may be possible to unify the different schemes used to neurally encode acoustic modulations, up and down the neural hierarchy. PUBLIC HEALTH RELEVANCE: Recent research suggests that a variety of hearing and cognitive impairments result from problems in temporal processing of sounds (e.g. developmental language disorders, and hearing impairment due to aging, auditory neuropathy, or cochlear implants). The present work will provide novel experimental approaches, as well as a rich empirical database on cortical temporal processing in normal human subjects, that can then be used in clinical settings.
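The stimulus class described in this abstract, carriers modulated independently and simultaneously in amplitude and frequency, can be sketched in a few lines. This is an illustrative construction only; the function name, parameter names, and default modulation rates below are assumptions, not values taken from the experiments.

```python
import math

def am_fm_tone(dur_s=1.0, fs_hz=16000, fc_hz=1000.0,
               am_hz=4.0, am_depth=0.5, fm_hz=7.0, fm_dev_hz=50.0):
    """Sinusoidal carrier with independent amplitude modulation (AM)
    and frequency modulation (FM), sampled at fs_hz."""
    n = int(dur_s * fs_hz)
    samples = []
    for i in range(n):
        t = i / fs_hz
        # Amplitude envelope, modulated at am_hz with depth am_depth.
        env = 1.0 + am_depth * math.sin(2.0 * math.pi * am_hz * t)
        # Instantaneous phase: carrier plus sinusoidal FM with peak
        # frequency deviation fm_dev_hz.
        phase = (2.0 * math.pi * fc_hz * t
                 + (fm_dev_hz / fm_hz) * math.sin(2.0 * math.pi * fm_hz * t))
        samples.append(env * math.sin(phase))
    return samples

sig = am_fm_tone(dur_s=0.1)
```

Because the AM and FM terms enter through separate parameters, each modulation dimension can be varied independently, which is the manipulation the parallel MEG and extracellular experiments rely on.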
|
0.988 |
2015 — 2019 |
Simon, Jonathan Z |
R01 |
Auditory Scene Analysis and Temporal Cortical Computations @ Univ of Maryland, College Park
DESCRIPTION (provided by applicant): When many people in a room are talking at the same time, the sounds of their voices mix with each other before ever arriving at our ears. Despite the fact that sorting out this sound mixture into individual voices is a profoundly difficult mathematical problem, our brain routinely accomplishes this task, often with little apparent effort. The neural underpinnings of this difficult task are not at all well understood. Furthermore, when this ability declines, e.g., due to hearing loss or aging, it is not known which specific neural processing mechanisms are the most critical in preserving the remaining aspects of this ability. In order to address these issues, this proposed research uses magnetoencephalography (MEG) to record from the auditory cortex of behaving human subjects, specifically the temporally dynamic neural responses to individual sound elements and their mixtures. Linking the neural responses with their auditory stimuli and attentional state allows us to infer neural representations of these sounds. These neural representations are temporal: the neural processing unfolds in time in response to the ongoing acoustic dynamics. This research program will use these temporal representations to investigate how complex auditory scenes are neurally encoded, from the broad mixture of the entire acoustic scene to separated individual sources, in different areas of auditory cortex, with a special emphasis on speech. Its overarching hypothesis is that auditory cortex employs a universal neural encoding scheme, genuinely temporal in nature, which underlies not only general auditory processing but also auditory scene segregation. The first specific aim will determine how auditory cortex neurally represents speech in difficult listening situations. One example is speech in noise in a reverberant environment, a very relevant combination that can strongly undermine speech intelligibility.
Another example is listening to a speaker in the presence of several competing speakers. In this case, understanding how the background (the mixture of the competing speakers) is neurally represented is of particular interest, and of direct relevance in determining how the brain segregates the foreground speech from the background. The second specific aim will determine analogs of these neural speech representations for dynamic non-speech sounds, especially when the sounds are separate components of a larger acoustic scene. This will generalize what is known about speech segregation to a wider class of sounds (while speech is very important for human listeners, most sounds are not speech). The third specific aim investigates the detailed neural mechanisms by which auditory cortex identifies and isolates individual speakers in a complex acoustic scene. Pitch and timbre, two acoustic cues known to be important for this task, are separately and independently modified, so that their individual contributions to the neural process of auditory scene segregation of speech may be determined.
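A standard way to link ongoing neural responses to their auditory stimuli, as described above, is to estimate a temporal response function (TRF). The following is a hedged sketch of the underlying reverse-correlation idea only, assuming an approximately white stimulus so that plain cross-correlation suffices; real MEG analyses use regularized regression, and all names here are illustrative.

```python
import random

def crosscorr_trf(stim, resp, n_lags):
    """Estimate a temporal response function by cross-correlating
    stimulus and response; a valid TRF estimate only when the stimulus
    is (approximately) white with unit variance."""
    n = len(stim)
    assert len(resp) == n
    trf = []
    for lag in range(n_lags):
        # Average of stim[t - lag] * resp[t] over all valid t.
        acc = sum(stim[t - lag] * resp[t] for t in range(lag, n))
        trf.append(acc / (n - lag))
    return trf

# Simulated check: response = stimulus convolved with a known kernel,
# so the estimated TRF should recover that kernel.
random.seed(0)
kernel = [0.0, 1.0, 0.5, 0.25]
stim = [random.gauss(0.0, 1.0) for _ in range(20000)]
resp = [sum(kernel[k] * stim[t - k] for k in range(len(kernel)) if t - k >= 0)
        for t in range(len(stim))]
est = crosscorr_trf(stim, resp, len(kernel))
```

The recovered `est` approximates `kernel` up to sampling noise; with correlated (non-white) stimuli such as speech envelopes, the cross-correlation must be replaced by regularized regression against the lagged stimulus.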
|
0.987 |
2017 — 2021 |
Babadi, Behtash (co-PI); Simon, Jonathan |
N/A Activity Code Description: No activity code was retrieved. |
Ncs-Fo: Extracting Functional Cortical Network Dynamics At High Spatiotemporal Resolution @ University of Maryland College Park
This project is funded by Integrative Strategies for Understanding Neural and Cognitive Systems (NSF-NCS), a multidisciplinary program jointly supported by the Directorates for Computer and Information Science and Engineering (CISE), Education and Human Resources (EHR), Engineering (ENG), and Social, Behavioral, and Economic Sciences (SBE). Neuroscientists have been remarkably successful in understanding the function of numerous brain regions by studying them in isolation and characterizing their individual roles in behavior. Growing evidence in recent years, however, suggests that sophisticated brain function emerges from the co-activation of multiple brain regions that exhibit networked activity. These networks organize rapidly in order to allow the brain to adapt to changes in the environment, resulting in robust behavior. Deciphering the neural mechanisms underlying these network dynamics is therefore crucial in understanding how the brain carries out cognitive processes such as attention, decision-making, and learning. Recent technological advances in noninvasive neuroimaging have largely addressed the experimental challenges in studying these dynamic networks in humans and have provided abundant neural data under countless clinical and experimental conditions. However, the sheer dimensionality of these data, together with the complexity of these networks, has created various bottlenecks in data analysis, modeling, and statistical inference. In order to exploit the unique window of opportunity provided by the abundance of noninvasive neural data, this project is (1) developing a unified methodology for inferring the dynamics and statistical characteristics of these cortical networks, in a computationally efficient fashion, and (2) applying this methodology to magnetoencephalography (MEG) data from behaving human subjects to address several fundamental questions about auditory processing.
This work brings new insight into the dynamic organization of brain networks at unprecedented spatiotemporal resolutions, and can thereby affect technology in the areas of brain-computer interfacing and neuromorphic engineering. It also allows for the creation of engineering solutions for early detection and monitoring of cognitive disorders involving auditory perception and attention. The outcome of this project will be disseminated to the broader scientific community in the form of publicly accessible data analysis toolboxes accompanied by tutorials and webinars. The research plan is complemented by educational activities at the K-12, undergraduate, and graduate levels, including workshops, undergraduate projects, and course development, with an emphasis on the involvement of women and underrepresented minorities.
The existing paradigm for extracting cortical functional network dynamics faces challenges, including loss of temporal resolution due to the common sliding window processing, loss of spatial resolution due to the constraints of noninvasive recording, and statistical bias due to the heavy usage of linear estimation techniques given that network properties are intrinsically non-linear. This project provides a unified research plan for addressing these challenges, by combining high temporal resolution non-invasive recordings with high spatial resolution in a statistically robust way, using modern signal processing techniques. This methodology will specifically be applied to MEG data acquired from behaving human subjects, and will be used to decipher the neural mechanisms of adaptive auditory processing.
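The sliding-window processing whose temporal-resolution loss is noted above can be made concrete with a minimal Pearson-correlation sketch. The function name and window length are illustrative only, not the project's actual pipeline, and real MEG network analyses involve source localization and non-linear network measures beyond pairwise correlation.

```python
import math

def sliding_window_corr(x, y, win):
    """Pearson correlation between two signals inside a sliding window.
    Longer windows smooth away fast network dynamics; shorter windows
    give noisy estimates -- the temporal-resolution trade-off that
    sliding-window methods cannot escape."""
    out = []
    for start in range(len(x) - win + 1):
        xs = x[start:start + win]
        ys = y[start:start + win]
        mx = sum(xs) / win
        my = sum(ys) / win
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        out.append(cov / math.sqrt(vx * vy))
    return out

# Two identical ramp signals: every window's correlation is exactly 1.0.
ramp = [float(i) for i in range(50)]
track = sliding_window_corr(ramp, ramp, win=10)
```

State-space alternatives of the kind this project develops replace the hard window with a dynamic model, estimating time-varying connectivity sample by sample instead of averaging over a fixed window.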
|
0.915 |
2017 — 2021 |
Simon, Jonathan Z |
P01 Activity Code Description: For the support of a broadly based, multidisciplinary, often long-term research program which has a specific major objective or a basic theme. A program project generally involves the organized efforts of relatively large groups, members of which are conducting research projects designed to elucidate the various aspects or components of this objective. Each research project is usually under the leadership of an established investigator. The grant can provide support for certain basic resources used by these groups in the program, including clinical components, the sharing of which facilitates the total research effort. A program project is directed toward a range of problems having a central research focus, in contrast to the usually narrower thrust of the traditional research project. Each project supported through this mechanism should contribute or be directly related to the common theme of the total research effort. These scientifically meritorious projects should demonstrate an essential element of unity and interdependence, i.e., a system of research activities and projects directed toward a well-defined research program goal. |
Signal Processing and Data Analysis Core @ Univ of Maryland, College Park
CORE B: Signal Processing and Data Analysis - Abstract The Signal Processing and Data Analysis (SP&DA) core supports enhancements and new developments within PPG laboratories and facilitates collaboration among the projects that comprise this PPG. The Core provides services to each of the three projects (and associated labs) beyond what is available to individual PIs through the individual projects, and increases effectiveness by sharing expertise and centralizing labor-intensive tasks. The availability of such resources increases the efficiency, sophistication, and innovation of the individual projects and enables us to pursue collaborative research efforts that would not otherwise be possible. The first specific aim is to provide customized hardware assistance to the individual laboratories participating in the three projects; upgrade and maintain stimulus generation and real-time data acquisition systems; and design and fabricate specialized equipment for use in PPG laboratories. The second specific aim is to facilitate and enable opportunities for collaboration across laboratories and projects by providing cross-laboratory programming support for data acquisition, stimulus generation, signal and image processing, and data analysis, and by supporting communication, sharing, and training across all laboratories, participants, and projects in the PPG. The third specific aim is to provide support for sophisticated statistical experimental design and analysis for modeling data obtained across all three projects, and to substantially advance our knowledge of improvements in neuroplasticity with aging as a result of auditory/cognitive training.
|
0.987 |
2017 — 2021 |
Simon, Jonathan Z |
P01 |
Speech Perception With High Cognitive Demand @ Univ of Maryland, College Park
ABSTRACT Understanding speech in background noise is a common yet effortful task that involves interactive auditory and cognitive processes. Age-related declines in the neural systems that underlie these functions may explain why speech-in-noise recognition is especially difficult for older adults, even for those with little to no hearing loss. This project will design, implement, and test interventions that train top-down executive functions within an auditory task in order to improve speech recognition in noise for older normal-hearing adults. The current proposal systematically tests the overarching hypothesis that speech recognition in noise can be improved via training that specifically targets the interactive neural systems that underlie changes in auditory-cognitive processing with age. Three sets of experiments will identify patterns of cortical activity that 1) are associated with age-related changes in speech recognition in noise, 2) are engaged with candidate training exercises, and 3) change with auditory-cognitive training. Behavioral, neural (using magnetoencephalography, MEG), and pupil dilation measures will be obtained in healthy younger and older adults with normal hearing. Specifically, Aim 1 tests the hypothesis that age-related declines in speech recognition in noise are predicted by changes in top-down modulation of auditory cortex activity. The proposed study will examine the extent to which age-related changes in the neural representation of speech in auditory cortex are associated with poorer speech recognition and greater cognitive effort. Aim 1 also tests whether these age-related declines are associated with a weaker relationship between frontal and temporal cortex activity. Aim 2 evaluates the extent to which specific auditory-cognitive tasks elicit top-down modulation of auditory cortex activity.
Tasks that place greater demands on domain-general executive functions (e.g., inhibitory control, working memory) in the context of an auditory processing task are predicted to more robustly modulate frontal-temporal connectivity. Aim 3 tests the extent to which changes in top-down modulation of temporal cortex activity following auditory-cognitive training predict improvements in speech recognition in noise. In contrast to previous studies that have employed passive control groups, Aim 3 will test the prediction that an integrated auditory-cognitive training program yields increased frontal-temporal connectivity and thus better speech recognition in noise and reduced listening effort compared to an active training control group. By increasing our understanding of the aging brain, we aim to develop effective interventions to improve speech recognition and the overall quality of life of older adults.
|
0.987 |