2001 — 2004
Fedkiw, Ronald
Collaborative Research: High Order Numerical Schemes For Multi-Dimensional Systems of Conservation Laws and For Simulations of Multi-Phase Fluids
The main theme of the proposed project is the construction of high order accurate numerical schemes for solving multi-dimensional hyperbolic systems of conservation laws, and in particular the construction of numerical schemes for simulating multi-phase fluid flows. This includes numerical methods for compressible flow, incompressible flow, and heat transfer. Recently, the PIs introduced a boundary condition capturing method for the variable coefficient Poisson equation in the presence of interfaces. The method is implemented using a standard finite difference discretization on a Cartesian grid, making it simple to apply in several spatial dimensions. Furthermore, the resulting linear system is symmetric positive definite, allowing straightforward application of standard "black box" solvers, for example multigrid methods. Most importantly, this new method does not suffer from numerical smearing. Using this method, the PIs extended the Ghost Fluid Method to treat two-phase incompressible flows, in particular those consisting of water and air. Numerical experiments show that the new method performs quite well in both two and three spatial dimensions. Currently, they are working on extending this method to treat a wide range of problems, including, for example, combustion. Of particular interest is the extension of this method to interface motion governed by the Cahn-Hilliard equation, which models a non-zero thickness interface with a molecular force balance model.
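As a small illustration of the kind of discretization involved (a sketch of the standard scheme, not the PIs' interface-capturing method), the code below assembles the second-order finite difference approximation of the 1D variable coefficient Poisson equation -(beta u')' = f on a uniform grid and verifies that the resulting matrix is symmetric positive definite; the function and parameter names are illustrative.

```python
import numpy as np

def poisson_1d_varcoef(beta, f, n, ua=0.0, ub=0.0):
    """Solve -(beta(x) u')' = f(x) on (0, 1) with Dirichlet data ua, ub,
    using the standard second-order finite difference stencil with the
    coefficient beta sampled at half-grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)        # interior nodes
    bp = beta(x + 0.5 * h)                # beta at right half-nodes
    bm = beta(x - 0.5 * h)                # beta at left half-nodes
    A = np.zeros((n, n))
    rhs = f(x).astype(float)
    for i in range(n):
        A[i, i] = (bp[i] + bm[i]) / h**2
        if i > 0:
            A[i, i - 1] = -bm[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -bp[i] / h**2
    rhs[0] += bm[0] * ua / h**2           # fold boundary data into rhs
    rhs[-1] += bp[-1] * ub / h**2
    return x, A, np.linalg.solve(A, rhs)

# the discrete operator is symmetric positive definite, so "black box"
# solvers such as multigrid or conjugate gradients apply directly
x, A, u = poisson_1d_varcoef(lambda s: 1.0 + s, lambda s: np.ones_like(s), 63)
assert np.allclose(A, A.T) and np.all(np.linalg.eigvalsh(A) > 0)
```

With constant beta and a manufactured right-hand side, the solver reproduces the exact solution to second-order accuracy.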
This proposed research on computational fluid dynamics is focused on the design, implementation, and testing of new methods for simulating fluids such as water and gas on the computer. In particular, this work addresses problems where more than one phase of fluid exists, e.g. mixtures of water and air. Our interest lies in improving the current state-of-the-art algorithms so that they are better able to treat the interface that separates two fluids such as oil and water. The results of this research should be of interest both to the military (e.g. many naval applications involve the study of water and air mixtures) and to private industry. A particularly interesting example involves the interaction of water and oil in an underground oil recovery process. The research covered in this proposal has implications for math and science education as well. Not only will the PIs be working with and training graduate students in applied mathematics and engineering, but their research in extending these techniques to other fields, such as computer graphics, can play a role in attracting the next generation of young scientists. For example, figure 7 in "Foster and Fedkiw, Practical Animation of Liquids, SIGGRAPH 2001" shows the lovable character "Shrek", from the feature film of the same name, taking a bath in mud.
2003 — 2009
Fedkiw, Ronald
Collaborative Research - ITR - High Order Partial Differential Equations: Theory, Computational Tools, and Applications in Image Processing, Computer Graphics, Biology, and Fluids
This project seeks to develop a comprehensive research and education program in the area of computational methods and simulations of physical systems described by high order partial differential equations (PDEs). The program will unify algorithmic, visualization, theoretical, and experimental efforts, and will address applications in areas of science and technology including computer graphics, image processing, biology, and fluids.

Intellectual merit of the proposed activity: This project advances knowledge in the area of high order PDEs, with particular emphasis on curved surface data, and produces enabling technology to address fundamental problems in biology, image processing, computer graphics, and fluids in general. The novel science is in the computational techniques, experimental research, and diverse applications addressed by a multi-disciplinary team. This project brings together the five fields of computer science, applied mathematics, mechanical engineering, physics, and electrical and computer engineering.

Broader impacts of the proposed activity: With the increasing interest in high order PDEs, the computational tools and experience resulting from this project will have impact beyond the particular applications in this proposal. Students will receive unusually broad interdisciplinary training, and the planned workshop will further bring experts from different fields together. New public domain software incorporating the developed algorithms will enable researchers from different fields using higher order PDEs to perform state-of-the-art numerical simulations and graphics rendering of their applications of interest.
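As a minimal illustration of why high order PDEs demand special computational tools (a sketch, not project software), the following steps the 1D Cahn-Hilliard equation, a canonical fourth-order PDE, with explicit Euler on a periodic grid; the grid size, interface parameter eps, and time step are assumed for illustration, and the tiny dt reflects the severe dt ~ dx^4 stability restriction that motivates more sophisticated solvers.

```python
import numpy as np

def cahn_hilliard_1d(u0, eps=0.05, dt=1e-6, steps=2000, dx=1.0 / 64):
    """Explicit Euler time stepping of the 1D Cahn-Hilliard equation
    u_t = Laplace(u^3 - u - eps^2 Laplace u) with periodic boundaries.
    The dt ~ dx^4 stability limit is what makes 4th-order PDEs hard."""
    lap = lambda v: (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2
    u = u0.copy()
    for _ in range(steps):
        mu = u**3 - u - eps**2 * lap(u)   # chemical potential
        u = u + dt * lap(mu)              # conservative update
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u0 = 0.1 * np.cos(2.0 * np.pi * x)
u = cahn_hilliard_1d(u0)
assert np.isclose(u.sum(), u0.sum())   # total mass is conserved
assert np.all(np.isfinite(u))          # and the explicit scheme stayed stable
```

Because the update is the discrete Laplacian of a flux potential, total mass is conserved to machine precision, a structural property any higher-order solver for this equation should preserve.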
Educational initiatives of this research program include: (1) new interdisciplinary training of graduate students and postdocs through co-mentoring by PIs in different fields; (2) new interdisciplinary courses in computer graphics, numerical analysis, and modeling/simulation of physical phenomena described by higher order PDEs; (3) a workshop bringing together for the first time diverse scientific researchers using high order PDEs.
2006 — 2010
Fedkiw, Ronald
Algorithm Design For Motion Simulation of the Human Musculoskeletal System
Algorithm Design for Motion Simulation of the Human Musculoskeletal System
Ronald Fedkiw, Stanford Computer Science Department
Possibly the most rewarding scientific studies focus on human beings and their strengths and weaknesses, for example in sports and illness. Computers have recently evolved to the level where they can begin to be used to model human beings in a simulated environment. One can study virtual human models as opposed to humans themselves, opening the floodgates for new research and discovery that is likely to have a profound effect on everyday lives. Unfortunately, researchers still severely lack methods for programming computers to carry out these types of simulations. This proposal is focused on constructing such numerical methods, with a particular focus on those related to human motion. Targeting biomechanics, application areas that will benefit from this research include automobile crash dummies, surgical planning, animated humans for computer graphics and the entertainment industry, models for ergonomics studies, education, etc.
Simulation of human motion is a paradigm of cross-disciplinary study, requiring combined knowledge of computer graphics and biomechanics in order to simulate accurately with a physics-based musculoskeletal system. This work addresses the need for new algorithms to continue generating advances in human motion. Biomechanics and medicine need virtual musculoskeletal systems in order to test medical devices and surgical procedures. With such a model, prosthetics and other exoskeleton-type equipment to help paraplegics and quadriplegics walk could be tested and refined without endangering humans. However, to enable these technologies, significant advances are needed in the algorithms used to simulate the human musculoskeletal system and control its locomotion. In addition to algorithms for accurately representing the components of the musculoskeletal system, such as bones, muscles, skin, and tendons, algorithms are needed to model the dynamics of how those components interact with each other and with the environment to generate human movement.
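To make the dynamics-plus-actuation loop concrete, here is a deliberately crude sketch (not the proposed algorithms) of a single hinge joint driven by an idealized muscle torque against gravity, integrated with semi-implicit Euler; every physical constant below is assumed for illustration, not physiological.

```python
import math

def simulate_joint(target=math.radians(60), steps=20000, dt=1e-3):
    """Semi-implicit Euler integration of one hinge joint (a crude
    'elbow') driven by an idealized muscle torque working against
    gravity. All constants are assumed, not physiological."""
    I = 0.06                      # limb moment of inertia, kg m^2
    m, L, g = 1.5, 0.3, 9.81      # limb mass, length, gravity
    k_muscle, damping = 8.0, 1.0  # muscle gain and joint damping
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        tau_gravity = -m * g * (L / 2.0) * math.sin(theta)
        tau_muscle = k_muscle * (target - theta) - damping * omega
        omega += dt * (tau_gravity + tau_muscle) / I   # velocity first
        theta += dt * omega                            # then position
    return theta

# the joint settles where muscle torque balances gravity, somewhat
# short of the commanded 60 degrees
theta = simulate_joint()
assert math.radians(40) < theta < math.radians(60)
```

Even this toy shows the coupling the abstract describes: the steady-state angle is set jointly by the actuation model and the passive dynamics, and neither can be tuned in isolation.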
2006 — 2011
Levitt, Michael; Fedkiw, Ronald; Pande, Vijay (co-PI); Sidow, Arend; Batzoglou, Serafim (co-PI)
MRI: Acquisition of a Hybrid Shared-Memory / Massively-Parallel Commodity Cluster For Cost-Effective Super-Computing At Stanford
This project, acquiring a 96-node/1536-core Opteron cluster with an Infiniband interconnect and 10 TB of storage, facilitates a rich diversity of research at the interface of computer science and biology. The research to be enabled has many applications over a remarkable range of scales, from the sub-molecular to the organismal. The work is motivated by a common desire to push novel computational approaches to their limits so that the most significant problems can be tackled with the available computational resources, both through algorithmic advances and by solving the largest problem instances. The project represents a broad range of methods, from physics-based simulation, to genomics and proteomics, to biostatistics, to joint experimental/computational methodology. The enabled research can be grouped into four areas:
- Simulation and modeling of macromolecular structures,
- Analysis of sequence and genomic/proteomic datasets,
- Modeling of very large datasets, and
- Fundamental computer science.
Besides enabling research, the instrument is an advance in commodity computing, combining the low cost of Linux clusters with the power of shared memory machines. The result is a supercomputer with a very low total cost of ownership. Highlights include:
- Molecular dynamics simulation based on quantum mechanically derived force fields: to understand the hydrophobic effect that drives protein folding and to get closer to the goal of accurately modeling protein folding
- Modeling of structured water in the ribosome: to understand protein structure and function in the cellular milieu
- Integration of genetic networks using genome sequence and experimental data: to appropriately combine disparate information into a single unifying framework based on common gene function and evolutionary descent
- Whole genome alignments and inference of evolutionary constraints: to predict the impact of population genetic variation on the function of the organism, at the interface of population genetics and evolutionary theory
- Simulation of human motion based on accurate, yet tractable, models of the neuromusculoskeletal system, and simulation of blood flow in the aorta: areas known for applied value
- Development of algorithms for fluid dynamics, solid mechanics, graphics, segmentation, and computer vision: areas of computer science with a strong mathematical component, as well as applied aspects such as movie animation
2010 — 2012
Fedkiw, Ronald
EAGER: IIS (G&V): Scalable High-Fidelity Solids Simulation
Abstract
Fundamental changes in the underlying algorithms of physical simulation are needed to bring about the orders-of-magnitude improvement in performance required to bring physical simulation to resource-constrained applications such as real-time games or surgical training. This research is investigating three issues: frame-rate simulation, asynchronous evolution, and multilevel solvers. The approach to frame-rate simulation ensures stability through careful energy management principles that also control damping. Paying special attention to the amount of artificial damping in the system enables one to avoid excessively damped simulations, a problem that limits the time step size possible with existing techniques. Such a frame-rate simulator enables the next step: developing an asynchronous evolution scheme that allows small time steps to be taken in regions of high-frequency motion and large time steps to be taken elsewhere. Other attempts at developing asynchronous schemes have difficulty with excessive damping when taking large time steps. This research is investigating an asynchronous method built around the proposed frame-rate simulator in order to alleviate this problem, as the amount of damping can be explicitly controlled. Finally, to reduce the cost of a single time step, we are using multilevel methods to reduce the time spent solving the large linear systems typical of physical simulation.
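A toy version of the frame-synchronized asynchronous idea (an illustration under assumed parameters, not the proposed method) is sketched below: two oscillators, one stiff and one soft, each take as many symplectic Euler substeps per frame as their own stability limits require, and synchronize only at frame boundaries.

```python
def frame_step(state, k, m, dt_frame, substeps):
    """Advance one oscillator (x, v) through a frame of length dt_frame
    using `substeps` symplectic Euler substeps. Stiff regions take many
    substeps; soft regions take few. Regions synchronize only at frame
    boundaries, a toy model of asynchronous time stepping."""
    x, v = state
    dt = dt_frame / substeps
    for _ in range(substeps):
        v += dt * (-k / m) * x   # velocity update from spring force
        x += dt * v              # position update with new velocity
    return x, v

dt_frame = 1.0 / 30.0                 # synchronize every frame
soft, stiff = (1.0, 0.0), (1.0, 0.0)  # (position, velocity)
for _ in range(300):                  # ten seconds of simulation
    soft = frame_step(soft, k=10.0, m=1.0, dt_frame=dt_frame, substeps=2)
    stiff = frame_step(stiff, k=1e4, m=1.0, dt_frame=dt_frame, substeps=100)
# symplectic Euler keeps both oscillators bounded at their own rates
assert abs(soft[0]) <= 1.1 and abs(stiff[0]) <= 1.1
```

The symplectic update adds no artificial damping, which is exactly the property the abstract identifies as critical when regions advance with different step sizes.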
Multilevel methods have a long history, but they are new to the physical simulation community. Finding suitable coarsening and refinement operators is key to multilevel methods and, although these operators are difficult to formulate, they have the potential to create asymptotic improvements in the convergence rates of solving these systems. The methods being investigated represent a radically different approach from current techniques, and this exploratory research seeks to demonstrate the feasibility of this potentially transformative change.
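The basic two-grid mechanics behind such multilevel solvers can be sketched for the 1D Poisson equation (illustrative code, not the solvers under investigation): pre-smooth with weighted Jacobi, restrict the residual with full weighting, solve the coarse problem exactly, interpolate the correction back, and post-smooth.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free -u'' with homogeneous Dirichlet BCs, interior nodes."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):                      # weighted Jacobi
        u = u + w * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def two_grid(u, f, h):
    """One two-grid cycle. The fine grid has n = 2*nc + 1 interior
    nodes; coarse nodes sit at the odd fine indices."""
    u = smooth(u, f, h)
    r = f - apply_A(u, h)
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
    nc = rc.size
    H = 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2
    ec = np.linalg.solve(Ac, rc)                 # exact coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                 # inject at coarse nodes
    pad = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])         # linear interpolation
    return smooth(u + e, f, h)

n = 63
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
res = [np.linalg.norm(f - apply_A(u, h))]
for _ in range(5):
    u = two_grid(u, f, h)
    res.append(np.linalg.norm(f - apply_A(u, h)))
assert res[-1] < 1e-3 * res[0]   # rapid, h-independent convergence
```

The coarsening (full weighting) and refinement (linear interpolation) operators here are the textbook pair for this problem; the research challenge described above is precisely that such pairs are far harder to formulate for the irregular systems arising in solids simulation.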
2014 — 2018
Fedkiw, Ronald; Levis, Philip
CSR: Medium: A Computing Cloud For Graphical Simulation
Today, many graphical simulations run on a single powerful server or a small cluster of high-performance, high-cost nodes. This research aims to answer the question -- is it possible to run graphical simulations in the computational cloud? -- by designing and implementing Nimbus, a software system for graphical simulation in the computing cloud. The goal is to be able to run large, complex simulations on on-demand cloud computing systems. Nimbus supports PhysBAM, an open-source graphical simulation package developed and maintained by Principal Investigator Fedkiw. The project will collaborate with existing PhysBAM users to support the Nimbus software for broader use and adoption.
Nimbus focuses on three important principles to support graphical simulations running on hundreds to thousands of cloud servers.

First is decoupling data access and layout. Nimbus represents data in three layers: program, logical, and physical. These layers separate the units which a program operates on (program) from the units which the Nimbus software manages and transfers (logical) from how they are laid out in actual computer memory (physical).

Second is non-uniform, geometry-aware data placement. Nimbus uses the fact that simulations have a basic underlying geometry to intelligently place data and computation. This geometry is explicit in the Nimbus software, which knows that nearby regions of the simulation should be placed on nearby computers.

Third is dynamic assignment and load balancing. Graphical simulations today divide the simulation volume equally across computers, despite the fact that some regions require much more computation than others. Nimbus divides a simulation into a larger number of smaller partitions, which it dynamically assigns and moves as load changes to reduce running time while considering inter-partition communication.

These three principles allow Nimbus to provide tremendous flexibility. The system breaks a simulation into small pieces that a controller computer sends to worker computers to compute. These worker computers decide when to schedule these simulation pieces and how to assign processors to different pieces. The runtime automatically moves data in the most efficient manner possible as needed, compressing data and replicating it when having multiple copies for different pieces increases performance. Discovering how these applications can be run on modern data center computing systems will help bring arithmetically intensive scientific computing to the cloud.
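A minimal sketch of geometry-aware placement (illustrative only; this is not the Nimbus algorithm) assigns a grid of simulation blocks to workers by cutting a Morton (Z-order) space-filling curve into contiguous chunks, so spatially nearby blocks usually land on the same worker.

```python
def morton2d(ix, iy, bits=8):
    """Interleave the bits of (ix, iy) into a Morton (Z-order) code;
    blocks that are close along the curve are usually close in space."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def assign_blocks(nx, ny, n_workers):
    """Assign an nx-by-ny grid of simulation blocks to workers by
    cutting the Morton curve into contiguous chunks, so each worker
    owns a spatially compact region of the simulation volume."""
    order = sorted(((morton2d(i, j), (i, j))
                    for i in range(nx) for j in range(ny)))
    per = len(order) / n_workers
    return {blk: min(int(k / per), n_workers - 1)
            for k, (_, blk) in enumerate(order)}

placement = assign_blocks(8, 8, 4)
# the four neighbors of block (1, 1) all land on the same worker as it
assert all(placement[b] == placement[(1, 1)]
           for b in [(0, 1), (2, 1), (1, 0), (1, 2)])
```

Because the curve cut points can be moved cheaply, the same structure also supports the dynamic reassignment described above: shifting a chunk boundary migrates a compact group of neighboring blocks rather than a scattered set.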
As Exascale and other supercomputing efforts gain momentum, they will need to deal at scale with the same issues cloud systems have been tackling for the past decade: stragglers, failures, and heterogeneity. By focusing on one particularly compelling application, this work will establish an intellectual framework for future, broader efforts.