2004 — 2007
Limp, W. Frederick; Vranich, Alexei; Shi, Jianbo (co-PI); Daniilidis, Kostas; Biros, George
Computing and Retrieving 3D Archaeological Structures From Subsurface Surveying @ University of Pennsylvania
Today's archaeological excavations are slow, and the cost of conservation can easily exceed the cost of excavation. This project is investigating and developing methods for the recovery of 3D underground structures from subsurface non-invasive measurements obtained with ground-penetrating radar, magnetometry, and conductivity sensors. The results will not only provide hints for further excavation but also 3D models that can be studied as if they were already excavated. The three fundamental challenges investigated are the inverse problem of recovering the volumetric material distribution, the segmentation of the underground volumes, and the reconstruction of the surfaces that comprise interesting structures. In the recovery of the underground volume, high-fidelity geophysics models are introduced in their original partial differential equation form. Partial differential equations from multiple modalities are solved simultaneously to yield a material-distribution volume. In segmentation, a graph spectral method for estimating graph cuts finds clusters of underground voxels with tight connections within partitions and loose connections between partitions. A method based on multi-scale graph cuts significantly accelerates the process, while the grouping properties of normalized cuts help in clustering together multiple fragments of the same material. In surface reconstruction, boundaries obtained from segmentation or from targeted material search are converted from unorganized voxel clouds to connected surfaces. A bottom-up approach is introduced that groups neighborhoods into facets whose amount of overlap guides the triangulation process. The archaeology PIs are providing prior knowledge about which structures are expected to be found, which can guide the segmentation and reconstruction steps.
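To make the spectral segmentation step concrete, here is a minimal sketch of a normalized-cut bipartition in the Shi-Malik style. The synthetic 1D "voxel intensities" stand in for the recovered material-distribution volume; this illustrates the technique, not the project's code.

```python
# Minimal normalized-cut sketch; illustrative stand-in for the voxel
# segmentation described above, not the project's implementation.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Two synthetic "material" clusters of 1D voxel intensities.
x = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(1.0, 0.1, 50)])

# Affinity matrix: Gaussian similarity between voxel features.
W = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.1**2))
D = np.diag(W.sum(axis=1))

# The normalized cut relaxes to the generalized eigenproblem
# (D - W) y = lam * D y; the eigenvector of the second-smallest
# eigenvalue (the Fiedler vector) encodes the bipartition.
lam, Y = eigh(D - W, D)
labels = Y[:, 1] > 0           # threshold the Fiedler vector at zero
print("cluster sizes:", int(labels.sum()), int((~labels).sum()))
```

The same partitioning applied recursively, with affinities built from multimodal material estimates, yields the multi-way voxel clustering the abstract describes.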
The geoscience and archaeology PIs lead the effort of data acquisition at the Tiwanaku site in Bolivia. All original data as well as recovered 3D models will be made available to the public.
2004 — 2009
Davatzikos, Christos (co-PI); Biros, George
ITR: Collaborative Research - ASE - (sim+dmc): Image-Based Biophysical Modeling: Scalable Registration and Inversion Algorithms and Distributed Computing @ University of Pennsylvania
A multidisciplinary team of researchers from Argonne National Laboratory, Carnegie Mellon University, Columbia University, University of Chicago, Emory University, and University of Pennsylvania, with collaborators from the Universities of Graz and Lübeck, will initiate a long-term research project on image-driven, inversion-based biophysical modeling. The team includes the expertise in numerical algorithms and scientific computing, fluid and solid biomechanics, PDE optimization, inverse problems, medical image analysis and processing, and distributed and grid computing necessary to tackle this class of problems.
This project aims to create a framework for assimilating multimodal dynamic medical image data to produce highly resolved, physically realistic, patient-specific biomechanics models. While the computational and algorithmic aspects of the project are widely applicable, the target application will be the construction of patient-specific cardiac biomechanics models from 4D image datasets of heart motion. Such models are useful for medical diagnosis and surgical planning. This places a premium on quick turnaround of the computations, which means they must be fast, scalable, and capable of exploiting grid-based computing.
Research will focus on three key areas that undergird the project's overall goals: registration, inversion, and distributed computing. The registration research component will create multilevel algorithms to extract cardiac deformation histories from time-varying medical image datasets via the solution of sequences of 3D image registration problems. The inversion research component will develop multilevel algorithms that use these deformation field histories as virtual observations to solve inverse problems for cardiac biomechanical parameters. The distributed computing research component will create tools for performance prediction and resource scheduling that support simulations across distributed computational resources.
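As a sketch of the optimization structure behind intensity-based registration, the toy below recovers a 1D translation by Gauss-Newton. The project's registration is 3D, deformable, and multilevel; this only illustrates the residual/Jacobian/update pattern, and all images and parameters are illustrative.

```python
# Toy 1D registration: recover a translation t that minimizes
# ||I_moving(x + t) - I_fixed(x)||^2 via Gauss-Newton iterations.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
t_true = 0.07
I_fixed = np.exp(-((x - 0.50) / 0.05)**2)
I_moving = np.exp(-((x - 0.50 - t_true) / 0.05)**2)  # fixed image, shifted

t = 0.0
for _ in range(20):
    shifted = np.interp(x + t, x, I_moving)   # warp moving image by t
    r = shifted - I_fixed                     # intensity residual
    J = np.gradient(shifted, x)               # d(shifted)/dt = d(shifted)/dx
    t -= (J @ r) / (J @ J)                    # Gauss-Newton update
print(f"recovered t = {t:.4f} (true {t_true})")
```

In the project's setting the scalar t becomes a full deformation field and the normal equations become large PDE-constrained systems, which is what motivates the multilevel solvers.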
Dovetailing with the research components, the project will undertake an educational program designed to communicate the fruits of its work, and the wider benefits of integrating the biomedical, computing, and computational sciences, to a general audience of students, disciplinary researchers, and the lay public. The professional activities of the team members in the inversion, image registration, grid computing, and computational science communities will be parlayed to organize workshops and international meetings, edit volumes, teach summer schools, develop university and short courses, and engage in outreach activities, as they have done in the past, but with greater emphasis on the field of computational biomedicine. The proposed image-based cardiac biomechanics modeling application will provide an excellent opportunity to demonstrate the benefits to health and welfare that advances in optimization-based registration and inversion algorithms and Grid computing can provide.
2005 — 2010
Biros, George
Collaborative Research: DDDAS-TMRP: MIPS: A Real-Time Measurement Inversion Prediction Steering Framework For Hazardous Events @ Georgia Tech Research Corporation
The project will develop a multiscale, data-driven, high-performance computational framework for real-time reconstruction of hazardous events from sparse measurements, and consequent probabilistic prediction of the evolution of the hazard. The framework is distinguished by four phases that are performed continually with dynamically obtained data over the lifetime of the hazardous event. (1) Measurement: Distributed sensors provide dynamic measurements over a specified time horizon that will be used to reconstruct the initial conditions of the event. (2) Inversion: Driven by the sparse measurements, an inverse problem is solved to estimate the initial conditions for the equations governing the evolution of the hazard. (3) Prediction: Statistical analysis of the inversion results permits estimation of the uncertainty in the initial conditions, which is propagated into a prediction of the evolution of the hazard and its uncertainty. (4) Steering: Sensors are steered to new locations based on an effectivity index that incorporates sensitivities of the inversion with respect to sensor location, estimated uncertainty in the prediction, and population density factors. Continual application of the measure-invert-predict-steer (MIPS) framework described above results in updated predictions of the evolving hazard with built-in uncertainty estimates, as well as revised sensor deployment strategies that refine the predictions to reduce their uncertainty.

The methods developed consider two time scales of decision making at which the MIPS framework must execute. The seconds-to-minutes decision-making scale is required by first responders to begin immediate response efforts. For such time scales, high-fidelity models in the form of partial differential equations (PDEs) are too formidable. Instead, the proposed methods will construct reduced-order models of the PDEs to facilitate real-time execution of the MIPS framework. The minutes-to-hours decision-making scale permits a more careful and measured response by emergency officials using high-fidelity, high-resolution PDE models. To enable rapid execution of the MIPS cycle for such models, the project will develop fast, scalable, parallel algorithms for inversion and prediction. To demonstrate, assess, and harden the resulting framework, it will be validated on a specific application testbed: prediction of the urban/regional dispersion of intentionally or accidentally released atmospheric contaminants from sparse measurements.
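A minimal linear-Gaussian toy of the MIPS cycle is sketched below. The real framework inverts PDE (or reduced-order) transport models; here a random linear forward operator stands in, but the measure-invert-predict-steer logic, with the next sensor placed where posterior variance is largest, is the same.

```python
# Hedged sketch of MIPS cycles for a linear-Gaussian toy problem:
# unknown initial condition m, sensors observe y = A m + noise.
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 20, 0.05                          # unknowns, sensor noise std
m_true = np.exp(-np.linspace(-2, 2, n)**2)   # "release" initial condition
prior_cov = np.eye(n)

sensor_rows = [rng.standard_normal(n)]       # initial sensor footprint
for cycle in range(3):
    A = np.vstack(sensor_rows)
    y = A @ m_true + sigma * rng.standard_normal(A.shape[0])   # MEASURE
    # INVERT: Gaussian posterior of m given y (zero prior mean).
    H = A.T @ A / sigma**2 + np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(H)
    m_map = post_cov @ (A.T @ y) / sigma**2
    # PREDICT: posterior variance as forecast uncertainty
    # (the forecast map is the identity in this toy).
    pred_var = np.diag(post_cov)
    # STEER: place the next sensor where predicted variance is largest.
    j = int(np.argmax(pred_var))
    e = np.zeros(n); e[j] = 1.0
    sensor_rows.append(e)
    err = np.linalg.norm(m_map - m_true)
    print(f"cycle {cycle}: error {err:.3f}, next sensor at node {j}")
```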
2006 — 2008
Biros, George
Collaborative Research: Fast High-Order Methods For Vesicle-Fluid and Membrane-Fluid Interaction and Adhesion @ University of Pennsylvania
The investigators develop numerical algorithms for the simulation of incompressible deformable membranes interacting with Stokesian fluids, other membranes, and rigid walls. The goal of this work is to enable high-accuracy, fast simulations of systems consisting of large numbers of interacting vesicles. The numerical methods investigated are based on a boundary integral formulation, which uses only fields defined on the surface and therefore eliminates the need to discretize 3D space, considerably simplifying the simulation of deformable boundaries. The dense linear systems of equations resulting from discretization and linearization of the boundary integral formulation are solved using extensions of kernel-independent fast multipole algorithms previously developed by the investigators. The work includes two main directions: discretization schemes for deformable surfaces, based on constructive manifold structures on meshes and localized spectral bases; and linear and nonlinear solvers for the boundary integral and membrane-fluid interaction equations.
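The core dense operation that the kernel-independent fast multipole method accelerates is a particle sum such as the single-layer Stokes (Stokeslet) potential. A direct O(N^2) evaluation is sketched below with illustrative points and densities; an FMM reduces this cost to roughly O(N).

```python
# Direct evaluation of a Stokeslet sum,
#   u(x) = (1/(8 pi mu)) sum_j [ f_j/|r| + r (r . f_j)/|r|^3 ],  r = x - y_j,
# i.e. the dense kernel sum a fast multipole method accelerates.
import numpy as np

def stokeslet_sum(targets, sources, forces, mu=1.0):
    u = np.zeros_like(targets)
    for k, xt in enumerate(targets):
        r = xt - sources                         # (N, 3) displacements
        d = np.linalg.norm(r, axis=1)
        far = d > 1e-12                          # skip coincident points
        rf = np.einsum('ij,ij->i', r[far], forces[far])  # r . f per source
        u[k] = ((forces[far] / d[far, None]).sum(axis=0)
                + (r[far] * (rf / d[far]**3)[:, None]).sum(axis=0))
    return u / (8.0 * np.pi * mu)

rng = np.random.default_rng(2)
pts = rng.standard_normal((200, 3))              # surface quadrature points
f = rng.standard_normal((200, 3))                # traction density * weights
print(stokeslet_sum(pts[:2], pts, f))            # velocities at two targets
```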
The investigators aim to create efficient computational tools for solving problems arising in a variety of biological and biomedical contexts. Potential applications include simulation of blood cell behavior in blood flow, formation of membranes in a fluid, and design of targeted drug delivery mechanisms based on vesicles. The development of efficient tools for such problems makes it possible to test scenarios involving complex boundaries and large numbers of deforming cells or vesicles.
2007 — 2013
Biros, George
Collaborative Research: Petascale Algorithms For Particulate Flows @ University of Pennsylvania
Project proposes a petaflop-scalable computational infrastructure for the direct simulation of particulate flows, in particular the simulation of spatio-temporal dynamics of platelet aggregation. Better understanding of microcirculation of blood and platelet rheology will impact clinical needs in thrombosis risk assessment, anti-coagulation therapy, and stroke research. The proposed method comprises two algorithmic components: (1) integral equation solvers for Stokesian flows with dynamic interfaces; and (2) scalable fast multipole algorithms. Why do we need petaflop-scale computing power to tackle this problem? One microliter of blood contains millions of red blood cells(RBCs) and a few hundred thousand platelets. Discretizations with O(100 points/cell and O(1000) time steps result in more than a trillion space-time unknowns. Solving problems of such size will require 50K-core machines. Computational tools that achieve such scalability, will enable direct numerical simulation of several microliters of blood, once million-core computing platforms are available.
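The trillion-unknown figure can be checked with back-of-envelope arithmetic (the per-microliter cell count below is the usual approximate value and is illustrative):

```python
# Back-of-envelope check of the problem-size claim above.
cells_per_uL = 5e6          # red blood cells in one microliter (approx.)
points_per_cell = 100       # O(100) surface discretization points
unknowns_per_point = 3      # 3D velocity/position components
time_steps = 1000           # O(1000) time steps

space_unknowns = cells_per_uL * points_per_cell * unknowns_per_point
space_time = space_unknowns * time_steps
print(f"{space_time:.1e} space-time unknowns")   # ~1.5e12, over a trillion
```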
2007 — 2011
Ghattas, Omar; Biros, George
Collaborative Research: Understanding the Dynamics of the Earth: High-Resolution Mantle Convection Simulation On Petascale Computers @ University of Texas At Austin
NSF PetaApps 0749334/0748898/0749045:
Understanding the Dynamics of the Earth: High-Resolution Mantle Convection Simulation on Petascale Computers
George Biros (Penn), Omar Ghattas (UT-Austin), Michael Gurnis (Caltech), Shijie Zhong (CU-Boulder)
Mantle convection is the principal control on the thermal and geological evolution of the Earth. It is central to our understanding of the origin and evolution of tectonic deformation, the evolution of the thermal and compositional states of the mantle, and ultimately the evolution of the Earth as a whole. Despite its central importance to our understanding of the dynamics of the solid Earth, simulation of global mantle convection at realistic Rayleigh numbers down to the scale of faulted plate boundaries is currently intractable, due to the wide range of time and length scales involved.
This project will capitalize on upcoming petascale computing systems to carry out the first high resolution mantle convection simulations that can resolve thermal boundary layers and faulted plate boundaries, which will enable the first inverse solutions that can incorporate historical plate motions. These simulations will lead to breakthroughs in understanding the dynamics of the solid Earth. However, to make effective use of the upcoming petascale systems, new scalable algorithms and implementations are needed.
To enable these simulations, this project will: (1) tune, improve the performance of, and scale up to the petascale the parallel open-source mantle convection code CitcomS; (2) develop, implement, robustify, and incorporate new parallel algorithms for adaptive mesh refinement and inverse solution that can scale to hundreds of thousands of processor cores; and (3) release the resulting mantle convection codes to the geosciences community via the Computational Infrastructure for Geodynamics (CIG), an NSF center that develops and maintains software for several earth science communities.
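The flag-and-refine logic at the heart of adaptive mesh refinement can be illustrated in one dimension: refine cells where an error indicator (here, the jump of a sharp "boundary-layer" profile) is large. Production AMR of the kind this project targets is 3D, octree-based, and parallel; this sketch, with illustrative tolerances, only shows the pattern.

```python
# Hedged 1D sketch of AMR flag-and-refine; not the project's algorithms.
import numpy as np

f = lambda x: np.tanh(50 * (x - 0.5))        # sharp "thermal boundary layer"
cells = [(0.0, 1.0)]
for _ in range(8):                           # refinement sweeps
    new = []
    for a, b in cells:
        mid = 0.5 * (a + b)
        # Indicator: variation of f across the cell; refine where large.
        if abs(f(b) - f(a)) > 0.05 and (b - a) > 1e-4:
            new += [(a, mid), (mid, b)]
        else:
            new.append((a, b))
    cells = new

sizes = np.array([b - a for a, b in cells])
print(f"{len(cells)} cells; min h = {sizes.min():.1e}, max h = {sizes.max():.1e}")
```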
2007 — 2008
Biros, George
P41
Parallel Algorithms For Medical Image Registration @ Carnegie-Mellon University
2009 — 2013
Shoemaker, Deirdre; Laguna, Pablo; Biros, George
CDI-Type I: Collaborative Research: Observational Data as Central Engine of Binary Black Hole Simulations @ Georgia Tech Research Corporation
Cyber-Enabled Discovery and Innovation (CDI)
Proposal Numbers: 0941417 / 0940924
PIs: Pablo Laguna / Lee Finn
Institutions: Georgia Tech Research Corporation - Georgia Institute of Technology / Pennsylvania State University, University Park
Title: CDI-Type I: Collaborative Research: Observational Data as Central Engine of Binary Black Hole Simulations
Gravitational-wave astrophysics driven by observations is just around the corner. Interferometric gravitational-wave detectors such as LIGO and its partners have reached design sensitivity. The general consensus is that first detections will take place in the very near future. Beyond detection, one of the grand challenges in this new astronomy is the characterization of sources from the information encoded in the signals buried in the noisy data. The degree of success in this enterprise will determine the extent to which gravitational-wave observations can be used as a tool of discovery. The proposed project is motivated by the premise that recognizing and interpreting the data collected through the window of gravitational-wave observations requires a set of skills spanning several disciplines, thus presenting an opportunity to develop transformative and multidisciplinary research and to open the door to innovations and advances in this new astronomy, in which gravitational waves act as messengers.
The central theme of this project is using numerical-relativity tools and data-analysis methodologies to solve the inverse problem in gravitational physics for one of the most important sources of gravitational radiation, the inspiral and merger of a binary black hole system. The solution to this problem is essential for harnessing the predictive power of general relativity and for enhancing the conversation between the data collected by gravitational wave interferometric detectors and the questions posed by astrophysics.
The project's approach is the development of efficient numerical algorithms for assimilating observational data, applying deterministic and stochastic parameter-estimation techniques to address the source-characterization problem. Activities supported by this proposal are intended to produce tools and methodologies that enable the marriage of numerical simulation, gravitational-wave observation, and algorithms used in large inverse problems outside gravitational physics. These tools will enable acquiring new knowledge through the analysis of large and rich multi-spectral gravitational observations and multi-scale numerical relativity simulations, catalyzed by the still-growing pool of computational resources and related advances in the understanding of computational Bayesian inference.
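A hedged toy of the stochastic parameter-estimation component: Metropolis sampling of a single waveform parameter (a chirp-like frequency) from noisy data. Real analyses use numerical-relativity waveform templates, detector noise models, and many parameters; the waveform, noise level, and chain settings below are illustrative, and the chain is started inside the mode's basin for brevity.

```python
# Toy Bayesian source characterization via Metropolis sampling.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)
f_true, sigma = 7.0, 0.5
signal = lambda f: np.sin(2 * np.pi * f * t**2)      # toy "chirp" template
data = signal(f_true) + sigma * rng.standard_normal(t.size)

def log_like(f):
    r = data - signal(f)                             # Gaussian noise model
    return -0.5 * (r @ r) / sigma**2

f, ll, chain = 6.9, log_like(6.9), []                # start inside the basin
for _ in range(4000):
    f_prop = f + 0.02 * rng.standard_normal()        # random-walk proposal
    ll_prop = log_like(f_prop)
    if np.log(rng.random()) < ll_prop - ll:          # Metropolis accept test
        f, ll = f_prop, ll_prop
    chain.append(f)
print(f"posterior mean f = {np.mean(chain[1000:]):.3f} (true {f_true})")
```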
Among its broader impacts, this multidisciplinary team intends to develop a new paradigm for using numerical relativity tools and an innovative approach to the inverse problem in gravitational physics. The tools produced by this collaboration will be made freely available and will be directly applicable to other disciplines. The proposed work exploits algorithms and methodology to tackle inverse problems in areas such as global climate change, weather forecasting, medical imaging and reservoir simulations. Students will have opportunities to build expertise in data analysis, numerical algorithms, high-performance computing, optimization, and inverse problems.
2010 — 2014
Biros, George
CDI Type II/Collaborative Research: Ultra-High Resolution Dynamic Earth Models Through Joint Inversion of Seismic and Geodynamic Data @ University of Texas At Austin
Seismic and geodynamic observational data will be employed to infer a unified dynamic earth model through solution of the joint nonlinear inverse problem governed by high-resolution mantle convection and seismic wave propagation models. This will lead to a merging of the diverse data used to constrain plate tectonics and mantle convection, providing a new 4-D picture of earth's surface and interior over the last 50 million years. The inverse problem entails severe mathematical, computational, and geophysical challenges, which conventional methods are incapable of addressing. Several components of this project are planned to overcome these challenges. We will devise inverse methods that can extract full information from the large volumes of seismic and geodynamic data by creating algorithms to solve the coupled full-waveform seismic and geodynamic inverse problem, and use those methods to invert for global and regional earth models from broadband seismic and geodynamic data. We will develop inversion algorithms that can scale to the large numbers of CPU cores and complex memory hierarchies characterizing emerging multi-petaflop systems. We will also extend adaptive mesh refinement (AMR) ideas from large-scale forward simulation to the setting of large-scale inverse problems. Beyond the scientific impact, the project has a program of outreach and education that is highlighted by dissemination of 4-D animations of dynamic earth models. This project fits within the "From Data to Knowledge" and "Understanding Complexity" themes of the CDI Program.
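How joint data sets enter a single inversion can be shown with a linearized toy: two "sensitivity" operators (seismic and geodynamic stand-ins) constrain one model vector through a common regularized least-squares objective. The project's operators are nonlinear PDE solvers; this only illustrates the coupling.

```python
# Hedged sketch of a joint inversion with Tikhonov regularization.
import numpy as np

rng = np.random.default_rng(4)
n = 30
m_true = np.sin(np.linspace(0, np.pi, n))      # toy "earth model"
A1 = rng.standard_normal((40, n))              # stand-in seismic sensitivity
A2 = rng.standard_normal((25, n))              # stand-in geodynamic sensitivity
y1 = A1 @ m_true + 0.01 * rng.standard_normal(40)
y2 = A2 @ m_true + 0.01 * rng.standard_normal(25)

alpha = 1e-2                                   # regularization weight
# Stack both data misfits and the regularizer into one least-squares system.
A = np.vstack([A1, A2, np.sqrt(alpha) * np.eye(n)])
y = np.concatenate([y1, y2, np.zeros(n)])
m_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
rel = np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true)
print(f"relative model error: {rel:.3f}")
```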
The earth is a four-dimensional dynamic system in which mantle convection drives plate tectonics and continental drift and, in turn, controls much activity ranging from the occurrence of earthquakes and volcanoes to mountain building and long-term sea-level change. Despite the central role mantle convection plays in our understanding of earth, we have enormous first-order gaps in our knowledge, with questions as basic as what the principal driving and resisting forces on plate tectonics are and what the energy balance of the planet as a whole is. However, rapidly expanding volumes of geophysical data, the arrival of the petaflop computing era, and the emergence of high-resolution forward simulation capabilities now provide an opportunity to merge the geophysical data into dynamic earth models and greatly enhance our understanding of earth structure. This project could catalyze a shift in the field of geodynamics, since it will lead to rigorous inference of earth models from data employing high-resolution forward models. Moreover, the project could be transformative for many other fields with similar needs, through the development of parallel mesh algorithms for large-scale inverse problems, scalable methods for large-scale nonlinear inverse problems, and inverse methods for joint inversion of data with large, complex multi-physics forward models. All of these computational/mathematical advances will benefit a much wider community of scientists working on a much broader set of problems than the ones encountered in this project.
2010 — 2013
Biros, George
Collaborative Research: SI2-SSE: Software For Integral Equation Solvers On Manycore and Heterogeneous Architectures @ University of Texas At Austin
We propose to develop and deploy mathematical software for boundary-value problems in three-dimensional complex geometries. The algorithms in the library will be based on integral equation formulations. The library will be designed to scale on novel computing platforms that comprise special accelerators and manycore architectures.
Integral equations can be used to conduct simulations of many problems in science and engineering with significant societal impact. Three example applications on which the proposed simulation technologies will have an impact in this project are microfluidic chips, biomolecular electrostatics, and plasma physics. First, microfluidic chips are submillimeter-sized devices used for medical diagnosis and drug design. Optimizing the function of such devices at low cost requires efficient computer simulation tools, such as the ones we propose to develop. Second, understanding the structure and function of biomolecules such as DNA and proteins is crucial in biotechnology. The proposed technologies can be used to resolve biomolecular electrostatic interactions. Third, plasma physics, which is relevant to nuclear fusion reactors, involves electrostatic interactions in complex geometries, and the proposed work will enable large-scale three-dimensional simulations.
The key features of the proposed software are: (1) parallel fast multipole methods, (2) efficient geometric modeling techniques for complex geometries, (3) simple library interfaces that allow use of the proposed software by non-experts, and (4) scalability on heterogeneous architectures.
Along with our research activities, an educational and dissemination program will be designed to communicate the results of this work to students and researchers. Several postdoctoral, graduate, and undergraduate students will be involved with the project. Additional educational activities will include research experiences for undergraduates, leveraging ongoing programs such as NSF REUs. We will encourage participation by women, minorities, and underrepresented groups.
2013 — 2017
John, Lizy (co-PI); Van De Geijn, Robert; Gerstlauer, Andreas (co-PI); Biros, George
XPS: DSD: A2MA - Algorithms and Architectures For Multiresolution Applications @ University of Texas At Austin
The goal of this research project is to devise novel methodologies and devices for problems in computational science and engineering that require a high intensity of arithmetic operations, whose rate is measured in floating-point operations per second ("FLOPS"). Among the many hurdles faced in research and development of such compute-intensive technologies is achieving energy-efficient utilization of the available computing resources. Similar to the miles-per-gallon metric used in automotive design, the metric of interest in the design of new computing technologies is FLOPS/watt. This research is on designing novel algorithms and architectures that optimize this metric. These algorithms and architectures will be customized to a particular class of scientific computing problems: tree-based finite element methods and N-body problems.
It is possible to devise algorithms that parallelize well and are energy efficient (i.e., produce high "percentage-of-peak" measurements). Often, however, such algorithms sacrifice work optimality. It is much more difficult to design algorithms that parallelize well while achieving both work optimality and energy efficiency. This on-node utilization wall, a chronic problem since the early nineties, not only remains unresolved but has become more acute with the emergence of deeper memory hierarchies and manycore and heterogeneous architectures. At the same time, there is large untapped potential in not only adapting algorithms to architectural changes, but driving architecture design from algorithm requirements. This research will identify the design space for tree-based algorithms (under the constraints of work optimality and maximum concurrency), evaluate the performance of state-of-the-art codes, and explore custom algorithm/hardware platforms. A number of broader impacts are anticipated from this project. The target methodologies find applications in earth sciences, engineering, cosmology, biology, and data analysis. Along with the research activities, an educational and dissemination program will be designed to communicate the results of this work to both students and researchers, as well as a more general audience of computational and application scientists.
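The tree-based N-body pattern named above can be sketched with a 1D Barnes-Hut-style treecode: cluster far-away sources and replace them with a monopole. The kernel, opening criterion, and sizes below are illustrative; production treecodes are 3D, vectorized, and tuned per architecture, which is exactly the co-design space this project explores.

```python
# Hedged 1D Barnes-Hut-style treecode with monopole approximation.
import numpy as np

class Node:
    def __init__(self, xs, ms, lo, hi):
        self.size = hi - lo
        self.mass = ms.sum()
        self.com = (xs * ms).sum() / self.mass   # center of mass
        self.children = []
        if len(xs) > 1:                          # split until one point per leaf
            mid = 0.5 * (lo + hi)
            left = xs < mid
            for sel, a, b in ((left, lo, mid), (~left, mid, hi)):
                if sel.any():
                    self.children.append(Node(xs[sel], ms[sel], a, b))

def potential(node, x, theta=0.5):
    r = abs(x - node.com)
    if not node.children or node.size < theta * r:   # far enough: monopole
        return node.mass / r if r > 1e-12 else 0.0
    return sum(potential(c, x, theta) for c in node.children)

rng = np.random.default_rng(5)
xs, ms = rng.random(1000), np.ones(1000)
root = Node(xs, ms, 0.0, 1.0)
exact = sum(m / abs(0.3 - y) for y, m in zip(xs, ms) if abs(0.3 - y) > 1e-12)
print(f"tree: {potential(root, 0.3):.1f}  direct: {exact:.1f}")
```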
2017 — 2020
John, Lizy (co-PI); Gerstlauer, Andreas (co-PI); Biros, George
SPX: CISIT: Computing in Situ and in Memory For Hierarchical Numerical Algorithms @ University of Texas At Austin
High-performance computing holds an enormous promise for revolutionizing science, technology, and everyday life through modeling and simulation, statistical inference, and artificial intelligence. Despite the numerous successes in software and hardware technologies, energy-efficiency barriers have become a major hurdle towards more powerful computers, from mobile devices all the way to supercomputers. The originally natural separation between the memory subsystem and the central processing unit (CPU) of a computer has emerged as one of the main reasons for energy inefficiency. Data movement between the memory and the CPU requires orders of magnitude more energy than the computations themselves. To address these challenges, this project will consider novel architectural design paradigms and algorithms aimed at blurring these traditional boundaries between separated memory and computation subsystems and, by distributing computations to be performed directly in the memory or as part of memory data transfers, achieving order-of-magnitude gains in energy efficiency and performance. This project will investigate such novel approaches in the context of a class of methods in computational mathematics that appear at the core of many problems in computational science, large-scale data analytics, and machine learning.
Specifically, this project will focus on data-driven rather than compute-driven co-design of algorithms and architectures for the construction, approximation, and factorization of hierarchical matrices. The end goal of the project is the design of a novel architecture, CISIT (for "Computing In Situ and In Transit"), that specifically aims to accelerate both computation and data movement in the context of hierarchical matrices. CISIT will uniquely combine traditional general-purpose CPU and GPU cores with: (1) acceleration of core algorithmic primitives using custom hardware; (2) in-situ computing capabilities that will comprise both processing in or near main memory as well as computing within on-chip caches and memory close to the cores; and (3) novel in-transit compute capabilities that will cut down on, and in many cases completely eliminate, unnecessary round-trip data transfers by transparently processing data as it is transferred between main memory and local compute cores across the cache hierarchies. Upon success, CISIT will influence future architectural implementations. Along with the research activities, an educational and dissemination program will be designed to communicate the results of this work to both students and researchers, as well as a more general audience of computational and application scientists.
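The data-centric structure being exploited is that off-diagonal blocks of hierarchical matrices are numerically low-rank, so they can be stored and moved as thin factors instead of dense blocks. A one-level sketch with an illustrative kernel matrix follows; real H-matrix codes apply this recursively.

```python
# One-level sketch of hierarchical (low-rank off-diagonal) compression.
import numpy as np

n = 512
x = np.sort(np.random.default_rng(6).random(n))
K = 1.0 / (1e-3 + np.abs(x[:, None] - x[None, :]))   # dense kernel matrix

h = n // 2
B = K[:h, h:]                                        # off-diagonal block
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = int((s > 1e-8 * s[0]).sum())                     # numerical rank at 1e-8
B_lr = (U[:, :k] * s[:k]) @ Vt[:k]                   # thin low-rank factors

rel = np.linalg.norm(B - B_lr) / np.linalg.norm(B)
print(f"block rank {k} of {h}; relative error {rel:.1e}")
```

Storing and streaming the factors U, s, Vt instead of the dense block is what makes these algorithms data-movement bound, and hence a natural target for in-situ and in-transit computation.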
2018 — 2021
Biros, George
SHF: Small: Algorithms and Software For Scalable Kernel Methods @ University of Texas At Austin
Scientists and engineers are increasingly interested in using machine learning methods on huge datasets that cannot be processed on a single workstation. At the same time, public and private institutions are making significant investments in high-performance computing (HPC) clusters equipped with thousands of leading-edge processors and network connectivity. However, despite the availability of such HPC systems, data analysis tasks are mostly restricted to a single or a few workstations. The reason is that, with few exceptions, existing machine learning software does not scale efficiently on HPC systems. The need to process large scientific and engineering datasets in situ is not met by current software, and significant downsampling is required in order to use existing tools. A serious bottleneck in current artificial intelligence (AI) workflows is the significant cost of training for large-scale problems. The slow convergence of existing methods and the large number of calibration hyper-parameters (learning rate, batch size, and other knobs that control the performance of the AI system) make training extremely expensive. The design and analysis of scalable optimization algorithms for faster training, that is, for fitting the machine learning (ML) model parameters to the data, are needed for analytics in real time and at scale; that is the goal of this project.
The proposed research will introduce novel numerical methods and parallel algorithms for second-order/Newton methods that are tailored to machine learning (ML) models and are many orders of magnitude faster than the existing state-of-the-art (first-order methods like steepest descent). The researchers plan to design, analyze, and implement robust approximations for covariance matrices, a class of matrices central to AI and computational statistics that is used in statistical analysis (e.g., sampling, risk assessment, and uncertainty quantification). The investigators also plan to design, analyze, and implement scalable fast algorithms, in the context of high-performance computing, for the so-called nearest-neighbor problem, a core method in ML, data analysis, and information retrieval. The resulting software library will provide end-to-end tools for discovery and innovation and provide new capabilities in the NSF XSEDE infrastructure project. Along with the research activities, an educational and dissemination program is designed to communicate the results of this work to both students and researchers, as well as a more general audience of computational and application scientists.
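A minimal sketch of the second-order optimization theme: full Newton steps for L2-regularized logistic regression. The project's methods use approximate or sub-sampled curvature at HPC scale; the small dense Hessian solve below only shows why curvature information accelerates convergence. All data and parameters are illustrative.

```python
# Newton's method for L2-regularized logistic regression (toy scale).
import numpy as np

rng = np.random.default_rng(7)
n, d, lam = 500, 10, 1e-2
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

w = np.zeros(d)
for it in range(8):
    p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted probabilities
    g = X.T @ (p - y) / n + lam * w             # gradient of the loss
    S = p * (1.0 - p)                           # per-sample Hessian weights
    H = (X * S[:, None]).T @ X / n + lam * np.eye(d)
    w -= np.linalg.solve(H, g)                  # full Newton step
    print(f"iter {it}: |grad| = {np.linalg.norm(g):.2e}")
```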
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021
Biros, George
R21: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Neuroimage-Driven Biophysical Inverse Problems For Atrophy and Tau Propagation @ University of Texas, Austin
PROJECT SUMMARY/ABSTRACT
Early detection and stratification of Alzheimer's disease (AD) offers numerous medical, emotional, and financial benefits. A critical research direction is to develop methods for earlier diagnosis and patient classification, with the hope of developing treatments before cognitive damage sets in. Neuroimaging has made it possible to derive key biomarkers in vivo: measures of atrophy using Magnetic Resonance Imaging (MRI) and accumulation of misfolded Aβ and tau deposits using Positron Emission Tomography (PET). Community efforts have created high-quality datasets with thousands of cases that comprise multimodal imaging scans, cognitive evaluations, lab work, and genetic information. However, the heterogeneity of the data is a challenge for traditional statistical methods. Complementary to existing quantitative analysis techniques, we propose to use biophysical mathematical models of disease progression. Prior work has shown that mathematical models of protein misfolding in degenerative disorders can capture spatiotemporal disease dynamics and can enhance image interpretation by providing clinically useful biomarkers in terms of model parameters and disease progression. In this project, we will integrate a novel partial differential equation model of tau propagation with state-of-the-art parameter calibration methods developed in our group. Our hypothesis is that this model will provide novel diagnostic and prognostic value. First, we will work on the development of a new tau propagation simulator. This model will account for the progression of tau and its coupling to atrophy. Second, we will develop model calibration algorithms that use multiparametric Magnetic Resonance Imaging, Diffusion Tensor Imaging, and tau PET. We will conduct a retrospective validation study using images from the Alzheimer's Disease Neuroimaging Initiative and the Harvard Aging Brain Study datasets.
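The class of models referred to can be illustrated with a 1D Fisher-Kolmogorov (reaction-diffusion) equation, c_t = kappa c_xx + rho c (1 - c), a common template for misfolded-protein propagation. The coefficients, seeding, and boundary handling below are illustrative; the project's simulator is 3D, anisotropic (DTI-informed), and coupled to atrophy.

```python
# Hedged 1D Fisher-Kolmogorov sketch: diffusion plus logistic growth of a
# misfolded-protein concentration c(x, t), explicit finite differences.
import numpy as np

n, L, kappa, rho = 200, 1.0, 1e-3, 4.0
dx = L / (n - 1)
dt = 0.4 * dx**2 / kappa            # within the explicit stability limit
c = np.zeros(n)
c[:10] = 0.5                        # focal "seeding" of misfolded protein

for _ in range(400):
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]             # crude no-flux boundaries
    c += dt * (kappa * lap + rho * c * (1 - c))   # diffusion + growth

print(f"front position ~ x = {np.argmax(c < 0.5) * dx:.2f}")
```

Calibration then amounts to inferring parameters such as kappa and rho (and their spatial variation) from imaging-derived snapshots of c, which is the inverse problem the project targets.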
2022 — 2025
Biros, George
CDS&E: AI-Rheo: Learning Coarse-Graining of Complex Fluids @ University of Texas At Austin
Stokesian complex fluid flows describe transport phenomena at small scales. Examples include flows inside biological cells, blood flow in capillaries and microfluidic devices, DNA hydrodynamics, lab-on-a-chip industrial and medical devices, bacterial flows, and polymer flows. Understanding and predicting the behavior of Stokesian complex fluids using numerical simulations is fundamental to understanding mechano-biological mechanisms, to the design of microfluidic devices and medical robotics, and to many other applications. Complex fluid flows are challenging to simulate because they involve solid-fluid interaction, moving interfaces and complex geometries, non-local and multiscale couplings in both space and time, and highly nonlinear and often chaotic dynamics. This project focuses on the development of methods that will significantly advance the state of the art of complex fluid simulation technologies.
The specific goals of this project include the following. (A) The design of high-performance computing (HPC) algorithms that integrate dimension reduction and deep learning methods with integral equation methods and result in orders-of-magnitude speedups of predictive simulations of Stokesian complex fluid flows. (B) The design and deployment of HPC software infrastructure that automates configuration sampling, operator splitting, deep network training, and inference for a large class of complex fluids. (C) Evaluation of the proposed methodology on three problems: calculation of effective properties, parameter estimation, and shape optimization of microfluidic devices. The complex fluid solvers developed in this project will impact a broad spectrum of disciplines in science and engineering that involve problems with moving interfaces and microstructure evolution. Furthermore, the project aims to engage a number of graduate and undergraduate students at the interface of HPC, machine and deep learning, integral equations, and complex fluids.
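In the spirit of goal (A), here is a hedged sketch of data-driven coarse-graining: fit a linear reduced-order time-stepper (a DMD-like surrogate) to snapshots of a fine-grained simulation, then advance the cheap surrogate. The project uses deep networks and integral-equation solvers; a POD basis plus a least-squares linear operator stands in here, with a synthetic damped traveling wave as the "fine-grained" data.

```python
# POD + linear one-step surrogate (DMD-style) for synthetic snapshot data.
import numpy as np

# Fine-grained "truth": a damped traveling wave on 64 grid points.
n, T = 64, 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
snaps = np.array([np.exp(-0.01 * t) * np.sin(x - 0.1 * t)
                  for t in range(T)]).T          # columns are time snapshots

# POD basis from the SVD, then fit the reduced map z_{t+1} ~ A_r z_t.
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
Ur = U[:, :2]                                    # 2 modes capture this wave
Z = Ur.T @ snaps
A_r = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])       # least-squares operator

# Roll the cheap surrogate forward from the first snapshot.
z = Z[:, 0]
for _ in range(T - 1):
    z = A_r @ z
err = np.linalg.norm(Ur @ z - snaps[:, -1]) / np.linalg.norm(snaps[:, -1])
print(f"surrogate relative error at final time: {err:.2e}")
```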
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.