2006 — 2009
Martinsson, Per-Gunnar
Fast Direct Solvers For Boundary Integral Equations @ University of Colorado At Boulder
The proposed research seeks to develop fast, accurate, and robust computational techniques for solving a class of mathematical equations known as "linear boundary-value problems". Such equations are ubiquitous in engineering and science, and the task of finding approximate solutions to them is frequently the most expensive component of numerical simulations.
There currently exists a multitude of computational techniques for solving linear boundary-value problems, including some that are both highly accurate and very fast. The emergence of such methods over the last two decades has vastly increased our ability to simulate complex phenomena in science, engineering, medicine, and many other fields. However, existing high-performance computational techniques tend to be limited in their applicability, and somewhat fickle, in the sense that software needs problem-specific tuning to perform well. The principal goal of the proposed research is to eliminate these drawbacks for a particular class of high-performance techniques, thus making such algorithms accessible to a wide range of important computational problems.
Technically speaking, the proposed research is concerned with a class of computational techniques based on formulating the problem as an equation on the boundary of the computational domain. It is known that the resulting equations can in some environments be solved extraordinarily rapidly. Existing techniques for this task are based on so-called "iterative solvers", which construct a sequence of approximate solutions that gradually approach the exact solution. The proposed research seeks to develop "direct solvers" for solving the boundary equations. Loosely speaking, a "direct solver" manipulates the mathematical equation to produce an algorithm that determines the unknown variables from the given data in one shot. Direct solvers are generally preferred to iterative ones, but they have in many environments appeared to be prohibitively expensive. However, recent developments indicate that it is possible to construct direct methods that are as fast as, and sometimes even faster than, existing iterative ones.
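As a minimal sketch of the distinction drawn above (the kernel, quadrature, and problem sizes are illustrative and not taken from the proposal), the following solves a Nystrom-discretized model second-kind integral equation on the unit circle both iteratively, with GMRES, and directly, with a dense factorization:

```python
# Illustration only: iterative vs. direct solve of a model second-kind equation.
import numpy as np
from scipy.sparse.linalg import gmres

n = 400                                    # number of quadrature nodes
t = 2 * np.pi * np.arange(n) / n           # equispaced parameter values
pts = np.column_stack((np.cos(t), np.sin(t)))
w = 2 * np.pi / n                          # trapezoidal quadrature weight

# Nystrom system A = I + K for a smooth model kernel (a compact perturbation
# of the identity, as is typical of second-kind boundary integral equations).
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=2)
A = np.eye(n) + w * np.exp(-d2)

f = np.cos(3 * t)                          # model boundary data

sigma_iter, info = gmres(A, f)             # iterative: a sequence of approximations
sigma_direct = np.linalg.solve(A, f)       # direct: factor once, solve in one shot

print("GMRES converged:", info == 0)
print("max difference between solutions:", np.max(np.abs(sigma_iter - sigma_direct)))
```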
Many benefits would accrue from the development of direct methods for solving the boundary equations associated with linear boundary-value problems; these include: (1) The ability to solve certain problems that are beyond the reach of existing fast algorithms. An example is the accurate solution of electromagnetic and acoustic scattering problems involving large objects at wave frequencies close to resonant frequencies of the scatterer. (2) An increase in computational speed in environments where the same equation needs to be solved multiple times for different data. Preliminary experiments involving the modeling of biochemical processes and large scattering problems indicate that a speed-up of one or two orders of magnitude is to be expected. (3) The availability of high-performance computational techniques that are sufficiently robust to be incorporated into general purpose software packages.
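Benefit (2) can be made concrete with a small sketch (the model matrix and sizes below are illustrative only): once a direct solver has factored the system matrix, each additional data vector is handled by inexpensive triangular solves rather than by restarting an iteration.

```python
# Illustration only: reuse one factorization across many data vectors.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, n_rhs = 1000, 50

# A well-conditioned model system matrix (identity plus a small perturbation).
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)

lu, piv = lu_factor(A)                  # expensive step, performed once
B = rng.standard_normal((n, n_rhs))     # many different data vectors
X = lu_solve((lu, piv), B)              # each solve now costs only O(n^2)

print("max residual over all right-hand sides:", np.max(np.abs(A @ X - B)))
```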
2008 — 2014
Martinsson, Per-Gunnar
CAREER: Fast Direct Solvers For Differential and Integral Equations @ University of Colorado At Boulder
Over the last several decades, the development of powerful computers and fast algorithms has dramatically increased our capability to computationally model a broad range of phenomena in science and engineering. Our newfound ability to design complex systems (cars, new materials, city infrastructures, etc.) via computer simulations rather than physical experiments has in many fields led to both cost savings and profound improvements in performance. Intense efforts are currently being made to extend these advances to biochemistry, physiology, and several other areas in the biological and medical sciences.
The goal of the proposed research is to develop faster and more accurate algorithms for computing approximate solutions to a class of mathematical equations called "partial differential equations" (PDEs) that lie at the core of models of physical phenomena such as heat transport, deformation of elastic bodies, scattering of electromagnetic waves, and many others. The task of solving such equations is frequently the most time-consuming part of computational simulations, and is the part that determines which problems can be modeled computationally, and which cannot.
Technically speaking, most existing numerical algorithms for solving large PDE problems use "iterative methods," which construct a sequence of approximate solutions that gradually approach the exact solution. The proposed research seeks to develop "direct methods" for solving many PDEs. Loosely speaking, a direct method computes the unknown variables from the given data in one shot. Direct methods are generally preferred to iterative ones since they are more robust, are more suitable for incorporation in general-purpose software, and work for important problems that cannot be solved with known iterative methods. The reason that they are typically not used today is that they are often prohibitively expensive. However, recent results by the PI and other researchers indicate that it is possible to construct direct methods that are competitive in terms of speed with the very fastest known iterative solvers. In fact, in several important applications, the direct methods appear to be one or two orders of magnitude faster than existing iterative methods.
In addition to constructing faster algorithms, a core goal of the proposed research is to demonstrate the capabilities of the new methods by applying them to a number of technologically important problems that are not amenable to existing techniques. These problems include scattering problems at frequencies close to a resonance frequency of the scatterer, modeling of crack propagation in composite materials, and the modeling of large bio-molecules in ionic solutions.
An integral part of the proposed work is the development of new educational material on computational techniques. Specific goals include: (1) The development of a textbook on so-called "Fast Multipole Methods." (2) The development of a new graduate-level class on fast algorithms for solving PDEs. (3) The updating of the standard curriculum of undergraduate classes in numerical analysis to reflect new modes of thinking about numerical algorithms (specifically, the development of methods that are not "convergent" in the classical sense, but that can solve specified tasks to any preset accuracy). This work is to be undertaken in close collaboration with Leslie Greengard at NYU and Vladimir Rokhlin at Yale University.
2009 — 2014
Meyer, Francois; Martinsson, Per-Gunnar
CDI-Type I: Geometrical Image Processing With Fast Randomized Algorithms @ University of Colorado At Boulder
The objective of this research is to develop general tools for reducing the effective size of massive datasets and to demonstrate the capabilities of these tools by applying them to the efficient representation of images and streaming video. The approach is based on a new class of randomized algorithms that have proven capable of accelerating key scientific computations, while simultaneously making the algorithms easier to implement on multi-core computers.
With respect to intellectual merit, this project addresses the fundamental scientific question of how to extract information from large and noisy datasets. These datasets are created by many branches of society in quantities and at a rate that make the analysis of the data extremely difficult with state-of-the-art methods. This research is based on the belief that the scale of the problem can be reduced with new algorithms that discover the underlying structure of a dataset by randomly examining chunks of data. Formally, mathematical results on random projections in high-dimensional spaces will be turned into fast algorithms that build low-dimensional representations amenable to knowledge discovery.
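As a minimal sketch of the random-projection idea (the dimensions and synthetic data below are illustrative, not drawn from the project), a Gaussian random map sends high-dimensional points to a much lower dimension while approximately preserving their pairwise distances:

```python
# Illustration only: random projection of synthetic high-dimensional data.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_points, dim_high, dim_low = 200, 10_000, 400

X = rng.standard_normal((n_points, dim_high))            # high-dimensional points
G = rng.standard_normal((dim_high, dim_low)) / np.sqrt(dim_low)
Y = X @ G                                                # low-dimensional representation

d_high = pdist(X)                                        # all pairwise distances
d_low = pdist(Y)
print("max relative distortion of pairwise distances:",
      np.max(np.abs(d_low - d_high) / d_high))
```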
With respect to broader impacts, this project has the potential to create a new generation of algorithms for organizing and searching huge databases of images and videos. It could pave the way for novel algorithms to manage the massive datasets created by web searches, biomedical research, cyber security, and other domains. The investigators are mentoring students from the existing "SMART" program, which attracts outstanding students from underrepresented groups into graduate school. Undergraduate students are involved via the existing "Mentoring Through Critical Transition Points" program.
2012 — 2014
Martinsson, Per-Gunnar; Minsky, Yair; Nahmod, Andrea; Singer, Amit (co-PI)
Challenges in Geometry, Analysis and Computation: High Dimensional Synthesis
This award will provide support to defray expenses of participants, especially junior investigators, women, and mathematicians from underrepresented groups in the sciences, to attend the conference "Challenges in Geometry, Analysis and Computation: High Dimensional Synthesis", to be held at Yale University June 4-6, 2012. The conference is motivated by the development in recent years of powerful new tools for overcoming the challenges associated with problems set in high-dimensional spaces. These tools include compressive sensing, random projections, diffusion maps for parameterizing high-dimensional data sets, fast multipole methods, and many more. The premise of the meeting is that these different techniques share an intellectual heritage in a body of work originating from harmonic and functional analysis. A major problem confronting scientists today is dealing with massive quantities of data. Over the last decades, the development of powerful computers has made it possible to treat many computational problems, but severe limitations arise when the amount of data becomes too large. The interaction of harmonic analysis, computational mathematics, combinatorial geometry, probability, and operator theory has produced a host of powerful methods for processing and organizing massive data sets, as well as for efficient numerical analysis. More importantly, these developments reveal new mathematical structures and open fundamental questions that need to be addressed, questions related to the analysis of functions in high dimensions, their sampling, their effective description, and their approximation for efficient computability. Further progress will have an enormous impact on our digital, data-driven world.
The program for the conference features: (1) Lectures by leaders in the field. The conference will feature about 15 lectures by distinguished researchers. (2) A panel discussion on future directions. The conference will feature a panel discussion with some of the principal investigators from the major federal research agencies and national laboratories (ONR, DARPA, AFOSR, ARO, Livermore National Laboratory, Pacific Northwest National Laboratory, Oak Ridge National Laboratory), joined by Ronald R. Coifman, Peter W. Jones, and Vladimir Rokhlin, whose research influenced and laid much of the mathematical foundations of the field. The purpose is to identify fields of research that are likely to be both in demand by applications and intellectually ripe for substantial progress. (3) Poster sessions. Younger researchers will be invited to present posters during the meeting, with prime slots in the schedule reserved for poster sessions and for a "poster blitz" presentation. A major goal of this conference is to educate the younger generation about critical needs in the analysis of data, important open problems, and the underlying areas of mathematical research. The scientific techniques under discussion are likely to have a profound impact on important applications; in particular, they may help overcome key challenges in computing today and in the future (the "deluge of data" problem and the communication-speed bottleneck). The involvement of young researchers will contribute to workforce development in key STEM areas.
2013 — 2016
Martinsson, Per-Gunnar
Collaborative Research: Scalable and Accurate Direct Solvers For Integral Equations On Surfaces @ University of Colorado At Boulder
The goal of the proposed research is to develop faster and more accurate algorithms for computing approximate solutions to a broad class of equations that model physical phenomena such as heat transport, deformation of elastic bodies, scattering of electromagnetic waves, and many others. The task of solving such equations is frequently the most time consuming part of computational simulations, and is the part that determines which problems can be modeled computationally, and which cannot. Dealing with complicated shapes (e.g. scattering from complex geometry or flow through channels of complicated shape) adds difficulty to the computational task.
Technically speaking, most existing large-scale numerical algorithms for solving partial differential and integral equations on complex geometries are based on so-called "iterative methods," which construct a sequence of approximate solutions that gradually approach the exact solution. The proposed research seeks to develop "direct methods" for solving such equations. A "direct method" computes the unknown variables from the given data in one shot. When available, direct methods are often preferred to iterative ones since they are more robust and can be used in a "black-box" way. As a result, they are more suitable for incorporation in general-purpose software, and in many cases they work for important problems that cannot be solved with existing iterative methods. The reason they are typically not used today is that existing direct methods are often prohibitively expensive. However, recent results by the PIs and other researchers have shown that it is possible to construct direct methods that are competitive in terms of speed with the very fastest existing iterative solvers. The new algorithms will be applied to the simulation of fluid flows and to biomolecular simulations, and their performance will be demonstrated by the execution of simulations on complex geometries.
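Loosely speaking, the speed of such direct solvers rests on the fact that off-diagonal blocks of the discretized operators can be compressed to low numerical rank. A minimal illustration of this structure (the kernel and geometry form a generic model, not taken from the proposal):

```python
# Illustration only: low-rank structure of a well-separated interaction block.
import numpy as np

rng = np.random.default_rng(0)
n = 500
sources = rng.uniform(0.0, 1.0, size=(n, 2))                  # one cluster of points
targets = rng.uniform(0.0, 1.0, size=(n, 2)) + [5.0, 0.0]     # a well-separated cluster

# Interaction matrix for the 2D Laplace kernel log|x - y| between the two clusters.
diff = targets[:, None, :] - sources[None, :, :]
A = np.log(np.sqrt(np.sum(diff ** 2, axis=2)))

s = np.linalg.svd(A, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-10 * s[0]))
print("block size:", A.shape, "-> numerical rank at relative tolerance 1e-10:", numerical_rank)
```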
2014 — 2017
Sain, Stephan; Dougherty, Anne; Anderson, Kenneth (co-PI); Meyer, Francois (co-PI); Martinsson, Per-Gunnar
EXTREEMS-QED: Directions in Data Discovery (Data Cubed) in Undergraduate Education @ University of Colorado At Boulder
The Data Cubed project will prepare students for the challenges posed by the analysis of large datasets. As governmental, scientific, and business enterprises collect, store, and process more data, many technological challenges are encountered. The analysis of big datasets requires a collaborative effort between mathematicians, statisticians, computer scientists, and domain experts. Computation (including algorithmic development), modeling (including dimensionality/complexity reduction), and visualization are all needed. The Data Cubed project will identify talented students early in their academic careers and give them appropriate mentoring and increasingly advanced statistical and computational coursework. The students will proceed to data discovery research under the guidance of faculty members and partner scientists. The ultimate goal of the Data Cubed project is to increase the number of highly qualified undergraduate students who are able to apply their skills as they enter the scientific workforce and data analytics careers, and to share the results of this project with the broader community.
Students will learn mathematical and statistical techniques and software systems to collect, generate, store, analyze and visualize large amounts of data. In the Data Cubed project, several new courses will be created to train students in the core computational and statistical areas that underpin the analysis of large datasets; the students will be provided with significant research opportunities in the areas of geophysical modeling, analysis of unstructured social media data, and dimensional reduction techniques and modeling. One of the research projects will use large geophysical datasets from the National Center for Atmospheric Research (NCAR) and will involve modeling heat stress in urban environments and its relationship to public health. Another will examine the role of oceans as a primary reservoir of heat for our planet, which plays a significant role in the dynamics of climate change. Yet another project will combine heuristics of social media data (e.g., tweets) during times of mass emergency with a user's social graph to develop a more comprehensive picture of the situation. These and other projects all require fundamental knowledge and understanding of how to analyze large datasets.
2016 — 2019
Martinsson, Per-Gunnar
Randomized Algorithms For Matrix Computations @ University of Colorado At Boulder
This project will develop mathematical techniques for accelerating computational tasks such as simulating electromagnetic scattering, medical imaging, extracting useful information from large datasets, machine learning, and many others. In all these computations, the step that tends to be the most time-consuming, and which therefore limits the size of the problems that can be solved, concerns the manipulation of large square or rectangular arrays of numbers, called "matrices". Many of the matrices that arise in practical applications have redundancies and can be compressed so that they can be stored using less space. Using the compressed format, computations involving the matrix can also be greatly accelerated. The problems that will be addressed are deterministic in nature, but the algorithms that will be developed are probabilistic. It turns out that by exploiting certain mathematical properties of large ensembles of independent random numbers, one can build algorithms for compressing matrices that are much faster than traditional deterministic techniques. The new randomized algorithms can in theory fail, but the likelihood of failure can be shown to be lower than 1 time out of 10,000,000,000 runs in typical applications. Randomized algorithms of this type have recently attracted much interest due to the fact that they perform particularly well on emerging computing platforms such as mobile computing (where conserving energy is the key priority), computing using graphics processing units (where the vast numbers of computational cores create challenges), and distributed-memory parallel computers. The methods also perform very well when applied to massively large datasets that must be stored on hard drives or on large server farms. The project will train one doctoral student, and will lead to the release of a publicly available software package that implements the methods that will be developed.
From a technical point of view, the objective of the project is to develop efficient algorithms for factorizing matrices and for solving large linear systems of algebraic equations. The algorithms will be based on randomized sampling, and will exploit remarkable mathematical properties of random matrices and random orthogonal projections. Such randomized algorithms require less communication than traditional methods, which makes them particularly attractive for modern applications involving multicore processors, distributed computing, out-of-core computing, etc. Specifically, the project will address the following problems: (1) Computing full matrix factorizations (e.g. the so-called "column pivoted QR factorization") which are core building blocks in scientific computing. Preliminary numerical experiments demonstrate speed-ups of close to an order of magnitude compared to state-of-the-art software packages. (2) Solving linear systems involving many unknowns and many equations. We expect to achieve substantial practical acceleration, and are cautiously optimistic about the possibility to develop solvers with substantially better asymptotic complexity than the cubic complexity achieved by standard techniques. (3) Developing randomized methods for accelerating computational simulations of phenomena such as electrostatics, composite materials, biochemical processes, slow fluid flows, Gaussian processes in 2 and 3 dimensions, etc. Technically, this will be achieved by developing randomized methods for compressing so-called "data-sparse" or "rank-structured" matrices.
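A minimal sketch of a randomized low-rank factorization of the type referred to above (the synthetic test matrix and sizes are illustrative only): sample the range of the matrix with a Gaussian random matrix, orthonormalize the sample, and then factor the small projected matrix.

```python
# Illustration only: randomized low-rank factorization of a synthetic matrix.
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 2000, 1500, 50, 10          # matrix size, target rank, oversampling

# A synthetic test matrix with rapidly decaying singular values.
U0, _ = np.linalg.qr(rng.standard_normal((m, k)))
V0, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = (U0 * np.logspace(0, -12, k)) @ V0.T

G = rng.standard_normal((n, k + p))      # random test matrix
Q, _ = np.linalg.qr(A @ G)               # orthonormal basis for the sampled range of A
B = Q.T @ A                              # small (k + p) x n projected matrix
U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_hat                            # A is approximately U @ diag(s) @ Vt

print("spectral-norm error of the approximation:",
      np.linalg.norm(A - (U * s) @ Vt, 2))
```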
2020 — 2023
Martinsson, Per-Gunnar
Collaborative Research: Nonoscillatory Phase Methods For the Variable Coefficient Helmholtz Equation in the High-Frequency Regime @ University of Texas At Austin
The importance of the numerical simulation of physical phenomena by computers cannot be overstated. Such computations have become essential tools both in scientific research and in industrial research and development. This project concerns the numerical simulation of the scattering of waves. Such simulations have applications in sonar and radar, as well as in medical imaging, geophysics, and many other areas. Wave phenomena become more complicated to model as the frequency of the wave increases, and our current ability to accurately model high-frequency waves is quite limited. This project seeks to develop new methods for modeling high-frequency waves efficiently and to high accuracy. The project provides research training opportunities for graduate students.
The numerical simulation of the scattering of waves from inhomogeneous media has important applications in radar and sonar, medical imaging, geophysics, and a host of other scientific applications. In many cases of interest, such simulations can be performed by solving the variable coefficient Helmholtz equation. The solutions of this equation are oscillatory, and the difficulty of calculating them using conventional approaches grows quickly with the frequency of the oscillations. Recently, one of the investigators developed a new class of solvers for the variable coefficient Helmholtz equation that achieve extremely high accuracy and have run times that scale much more slowly with increasing frequency than conventional solvers. They operate by solving the nonlinear Riccati equation that is satisfied by the logarithms of solutions of the Helmholtz equation. Currently, these solvers apply only in special cases. This project aims to extend them to the general case, yielding a method for the variable coefficient Helmholtz equation that is significantly faster than current techniques.
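As a one-dimensional model calculation (illustrative only, not drawn from the proposal), the Riccati equation referred to above arises as follows:

```latex
% One-dimensional model (illustrative only). Let u solve the Helmholtz-type ODE
%   u''(t) + k^2 q(t) u(t) = 0,
% and write u in terms of its logarithm, r(t) = \log u(t), so that u = e^{r}. Then
\[
  u' = r'\, e^{r}, \qquad u'' = \bigl( r'' + (r')^{2} \bigr)\, e^{r}.
\]
% Substituting into the equation and dividing by e^{r} gives
\[
  r'' + (r')^{2} + k^{2}\, q(t) = 0,
\]
% so the derivative w = r' satisfies the nonlinear Riccati equation
\[
  w' + w^{2} + k^{2}\, q(t) = 0.
\]
% While u itself oscillates on a length scale of order 1/k, the solvers described
% above work with nonoscillatory phase-type functions derived from r, which is
% what keeps the cost from growing rapidly with frequency.
```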
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.