1990 — 1993 |
Norman, Michael |
Numerical Astrophysics With a Multidimensional Radiation Magnetohydrodynamics Code @ University of Illinois At Urbana-Champaign
A training and research plan is described for the continued development and application of an existing two-dimensional radiation magnetohydrodynamics (MHD) code to problems in astrophysics. Applications will be performed for three important problems: accretion disk dynamics, molecular cloud collapse during star formation, and the dynamics of the solar corona. Planned improvements to the treatment of the microphysics in the code are expected to produce more realistic results. Utilizing the already implemented algorithm for solving the radiation transfer equation in two dimensions, self-consistent radiation intensity maps will be computed during the course of these simulations for direct comparison with observations, forming the basis of an interactive "numerical observatory". The code will be implemented on a variety of machines at the NCSA to benchmark the performance of different architectures on the algorithms for MHD and radiation transfer. In particular, the code will be implemented on the massively parallel Connection Machine.
|
1993 — 1995 |
Norman, Michael; Mihalas, Dimitri (co-PI) |
Laboratory For Computational Astrophysics @ University of Illinois At Urbana-Champaign
This award will establish a Laboratory for Computational Astrophysics (LCA) at the University of Illinois, Urbana-Champaign as a joint venture between the National Center for Supercomputing Applications and the Department of Astronomy. At the heart of this laboratory is a new suite of application codes for astrophysical hydrodynamic, magnetohydrodynamic and radiation hydrodynamic simulations in one, two and three spatial dimensions implemented on NCSA's supercomputers. Several of these codes are already in use by national researchers on a variety of astrophysical problems. The central goal of the LCA is the development, dissemination and support of community codes for astrophysical simulation of general use to astronomical researchers.
|
1993 — 1996 |
Norman, Michael |
Implicit Adaptive Mesh Radiation Hydrodynamics On MPP Architectures @ University of Illinois At Urbana-Champaign
9308955 Norman I propose to design and implement a scalable adaptive mesh radiation hydrodynamics simulation code for use on MPP platforms. This code will be quite general in the sense that it can be used in many astrophysical situations to solve "Grand Challenge" level problems. Additionally, the algorithm and code rely on the solution of large parallel sets of equations, a situation for which MPP computing technology is well suited. My investigations will be oriented towards developing schemes for the efficient solution of these large parallel sets of equations.
|
1994 — 1995 |
Norman, Michael; Mac Low, Mordecai-Mark; Zahnle, Kevin (co-PI) |
Impact of Comet Shoemaker-Levy 9 On Jupiter: Modelling the First 200 Seconds @ University of Illinois At Urbana-Champaign
9322509 Mac Low The fragments of Comet Shoemaker-Levy 9 will strike Jupiter in July 1994. Semi-analytic methods and full hydrodynamics codes will be utilized to model Jupiter's atmosphere during the first 200 seconds after the impact of a fragment. These models provide initial conditions for the airburst. The key point is that the energy released by these impacts is large enough that the explosion breaks out of the atmosphere. The focus will be on developing detailed numerical simulations of the airburst and the fireball. The primary goal will be to produce light curves, as might be observable from the Galileo and Voyager 2 spacecraft, or as reflected off well-placed Jovian satellites. Other goals are to predict the amount and velocity of material thrown out of the atmosphere (and where it might be expected to re-enter the atmosphere), the amount of mass raised to high altitudes where unusual clouds might form, and the energy deposited at various depths in the atmosphere.
|
1997 |
Gannon, Dennis (co-PI); Norman, Michael; Baden, Scott (co-PI); Chrisochoides, Nikos (co-PI) |
Workshop: Workshop On Structured Adaptive Mesh Refinement Grid Methods At the University of Minnesota in Minneapolis, MN, March 1997 @ University of Illinois At Urbana-Champaign
Workshop on Structured Adaptive Mesh Refinement Grid Methods, being held March 12-13, 1997 at the Institute for Mathematics and Its Applications at the University of Minnesota, Minneapolis, MN. Structured Adaptive Mesh Refinement (SAMR) methods are playing an increasing role in tackling difficult scientific applications, including compressible flows, cosmology, and electronic structures arising in local spin density calculations. Providing adequate software support for SAMR codes is challenging even in sequential implementations. In parallel implementations, program complexity increases qualitatively due to computation and communication requirements that are dynamic, data-dependent, and irregular. The rapid evolution in target parallel systems only further complicates design decisions. A workshop is proposed that will bring together experts in applications, numerical methods, and software development from academia, national labs, and industry. The goal of the workshop is to identify common ground in the application and implementation of SAMR, as well as issues requiring specialization. The specific objectives of the workshop are: (i) to improve the general understanding of the application of SAMR to practical problems, (ii) to identify issues critical to efficient and effective implementation on high performance computers, and (iii) to stimulate the development of a community code repository for software, including benchmarks, to assist in the evaluation of software and compiler technologies. The proceedings of the workshop will be published by Springer-Verlag as part of the IMA (Institute for Mathematics and Its Applications) Book Series.
|
1998 — 2002 |
Ostriker, Jeremiah; Bahcall, Neta; Bertschinger, Edmund; Norman, Michael; Hernquist, Lars (co-PI); Anninos, Peter; Cen, Renyue (co-PI) |
Astrophysical Cosmology: a Multi-Epoch Approach @ University of Illinois At Urbana-Champaign
AST-9803137 Norman This multi-institution project is a coordinated effort by the former Grand Challenge Cosmology Consortium to explore systematically the observational consequences of theories of structure formation across multiple epochs, including recombination, first structure formation, reionization, and the formation of galaxies, clusters, and large scale structure. This multi-epoch approach to cosmology is meaningful because cold dark matter cosmogonies are mathematically precise, making definite predictions at all stages of cosmic evolution (z ≤ 10^9). The approach is now feasible by virtue of a new suite of multiphysics, high resolution hydrodynamic cosmology codes the team has developed for the coming generation of teraflop-scale parallel supercomputers. The physics of the gas and radiation, while computationally challenging, is nonetheless understood, and we are able to draw upon decades of classical astrophysics research in building our models. The anticipated increase in computer power will improve the reliability of the models considerably. This project is supported by NSF's Division of Astronomical Science, the Division of Physical Sciences, and the Knowledge and Distributed Intelligence Initiative.
|
2007 — 2011 |
Mischo, William; Schlembach, Mary; Bracke, Paul; Habing, Thomas; Norman, Michael |
Metasearch Gateway Services For the Distributed NSDL Community @ University of Illinois At Urbana-Champaign
This project is developing and implementing a set of metasearch gateway services for the distributed NSDL community that use broadcast search technologies to provide access to selected scientific and engineering publisher full-text repositories, abstracting and indexing services, university institutional repositories, open access full-text journal and report sites, and NSDL Pathways projects. As a component of the NSDL Core Integration services, the gateways provide custom federated search access to critical distributed information resources that support the instructional and research needs of middle school, high school, undergraduate and graduate students, and faculty. The intellectual merit of this project lies in its use of standards-based frameworks such as the NISO MXG (Metasearch XML Gateway) framework, the OpenSearch 1.1 standard, and the Open Archives Initiative protocols for metadata harvesting (OAI-PMH) and object reuse and exchange (OAI-ORE). In addition, the project team features a collaboration of information science researchers with personnel from the Digital Library Federation (DLF) Aquifer project and an international component involving two Joint Information Systems Committee (JISC) funded initiatives in the United Kingdom: the PerX project at Heriot-Watt University and the CREE project headquartered at the University of Hull. The latter connections speak to the broader impacts of this project on the global educational digital libraries environment. The use of standards-based approaches also facilitates this broader impact.
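As a hedged illustration of the broadcast-search mechanics such gateways rely on, the sketch below fills an OpenSearch 1.1 URL template with a query. The endpoint and parameter names here are hypothetical: real repositories publish their own templates in an OpenSearch description document, and this is only the template-substitution step, not a full federated search.

```python
from urllib.parse import quote

def fill_opensearch_template(template: str, search_terms: str,
                             start_index: int = 1, count: int = 10) -> str:
    """Substitute OpenSearch 1.1 template parameters into a URL template.

    Optional parameters carry a trailing '?' in the template; here they
    are simply replaced with defaults when not supplied.
    """
    return (template
            .replace("{searchTerms}", quote(search_terms))
            .replace("{startIndex?}", str(start_index))
            .replace("{count?}", str(count)))

# Hypothetical gateway endpoint, for illustration only.
template = ("https://gateway.example.edu/search?"
            "q={searchTerms}&start={startIndex?}&n={count?}")
url = fill_opensearch_template(template, "adaptive mesh refinement")
```

A metasearch gateway would issue one such request per target repository and merge the result sets; the template abstraction is what lets heterogeneous back ends be queried uniformly.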
|
2007 — 2010 |
Norman, Michael |
Collaborative Research: Toward Petascale Simulations of Early Cosmic Evolution @ University of California-San Diego
AST-0708960/0707505/0708309 Norman/Ostriker/Ricotti
This is a three-award collaborative project, led by Dr Norman, to harness the power of future petascale supercomputers for self-consistent simulations of early structure formation in cosmologically representative volumes. Astronomers' expectation of what will be observed at high redshifts (above 7) by future major facilities is largely based on theoretical and numerical models of the early growth of cosmic structure in a standard Lambda-CDM cosmological framework. The new simulations will span local and global scales using adaptive mesh refinement (AMR) technology developed specifically for petascale computers. This enables simulations with ten billion particles, a spatial dynamic range of a hundred thousand, and complex baryonic physics including radiative and chemical feedback. As theories are at present poorly constrained by observations, an equally important effort will be a rigorous attempt at uncertainty quantification and sensitivity analysis. The study will thus address forefront questions in cosmology using the most complete physical models running on the most powerful computers, analyzed using best practices. The result will be comprehensive models of early cosmic evolution along with a rigorous assessment of their predictive value.
This work will substantially advance the state-of-the-art in numerical multi-physics cosmological simulations, which can be expected to continue previous successful impacts of code availability, both within and outside of the astrophysics community. The new petascale methodologies will also be publicized well outside the usual astronomy circles, and the enhanced version of the Enzo community code will be released to the public. In addition, both the numerical results and the synthetic observations will be published to the international research community using mechanisms provided by the National Virtual Observatory.
|
2008 — 2012 |
Norman, Michael; Reynolds, Daniel |
New Capabilities For the Enzo Community Code in the Petascale Era @ University of California-San Diego
AST-0808184 Norman
Enzo is a three-dimensional parallel adaptive mesh refinement hybrid (fluid plus particle) open-source community code for astrophysical and cosmological simulations developed at the Laboratory for Computational Astrophysics (LCA) at the University of California-San Diego. Enzo underpins the research of many astronomers in areas as diverse as primordial star formation, galaxy formation, the intergalactic medium, galaxy clusters, interstellar turbulence, and Galactic star formation. The arrival of petascale computing platforms will take these and other research areas into a new realm of physical realism, but not without appropriate parallel application software, whose development requires effort by an interdisciplinary team of domain scientists, applied mathematicians, and computer scientists, such as has been assembled at the LCA. The present project will add radiation magnetohydrodynamics capabilities to Enzo within a new parallel framework designed for petascale platforms, and will include three annual software releases and two user workshops. These capabilities will enable larger-scale and higher dynamic range simulations in cosmological structure formation, interstellar turbulence, and star formation, as well as open up new possibilities wherever complex physics and high spatial dynamic range are needed.
In addition to direct research impacts, this work will serve as a springboard for inter-departmental collaborations, bringing together graduate students from mathematics, physics and computer science into a coordinated program. The community-code nature of Enzo, distributed under UCSD policies for wide access under copyright, and with the user-friendly packaging and support which are part of this project, will enable a wide range of multiscale applications throughout astrophysics and other disciplines to harness petascale computing.
|
2009 — 2017 |
Pfeiffer, Wayne; Norman, Michael; Snavely, Allan (co-PI); Swanson, Steven (co-PI); Strande, Shawn |
Flash Gordon: a Data Intensive Computer @ University of California-San Diego
ABSTRACT UCSD 0910847 Norman, Michael L.
This project supports the acquisition, deployment and operation of a new supercomputing system suitable for data-intensive applications. The system, to be known as Flash Gordon, will be deployed by the University of California at San Diego at the San Diego Supercomputer Center and integrated into the TeraGrid. The system, which has been designed by Appro International Incorporated, with partners Intel and ScaleMP, seeks to bridge the widening latency gap between main memory and rotating disk storage in modern computing systems. It uses flash memory to provide a level of dense, affordable, low-latency storage that can be configured as either extended swap space or a very fast file system. The system will consist of very large shared virtual-memory, cache-coherent "super-nodes" to support a versatile set of programming paradigms. Peak performance will exceed 200 teraflop/s in double precision.
Flash Gordon's large addressable virtual memory, low-latency flash memory, and user-friendly programming environment will provide a step-up in capability for data-intensive applications that scale poorly on current large-scale architectures, providing a resource that will enable transformative research in many research domains. Even sequential codes will be able to address up to terabytes of fast virtual memory.
Examples of scientific challenges that this resource will allow researchers to tackle, as described in the proposal, include the following.
De Novo Genome Assembly: Gene sequencers produce information about many small fragments of a genome. Some recent assembly algorithms use a graph-based approach, much more readily executed on a shared-memory system. Using Flash Gordon, researchers will be able to rapidly assemble complex genomes such as mammalian genomes.
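The graph-based approach mentioned above can be sketched with a toy de Bruijn graph assembler. This is an illustrative simplification under strong assumptions (error-free reads, a single non-branching path through the graph), not the algorithm of any particular assembler; it shows why the method is a pointer-chasing, shared-memory-friendly workload.

```python
from collections import defaultdict

def debruijn_assemble(reads, k=4):
    """Toy de Bruijn graph assembly: nodes are (k-1)-mers, directed
    edges are k-mers observed in the reads. The contig is recovered by
    walking from a node with no incoming edge."""
    edges = defaultdict(list)
    indeg = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            left, right = kmer[:-1], kmer[1:]
            if right not in edges[left]:   # ignore repeated k-mers
                edges[left].append(right)
                indeg[right] += 1
    # Start at a source node and follow the (assumed unique) path.
    start = next(n for n in list(edges) if indeg[n] == 0)
    contig, node = start, start
    while edges[node]:
        node = edges[node][0]
        contig += node[-1]   # each step adds one new base
    return contig
```

For example, the overlapping reads ATGGCG, GGCGTG and CGTGCA (with k = 4) reassemble to ATGGCGTGCA. Real assemblies of mammalian-scale genomes hold billions of such k-mer nodes, which is why large shared memory helps.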
Astronomy: Modern astronomy databases can be large; for example, the Sloan Digital Sky Survey is approximately six terabytes in size. Typically, the analysis algorithms that researchers use to perform complex searches for astronomical phenomena can be implemented more easily on shared-memory systems. Flash Gordon will enable researchers to load a copy of the Sloan Digital Sky Survey into the flash memory associated with a super-node, greatly extending the types of analyses astronomers can make.
Astrophysics: Cosmological simulations produce many terabytes of output describing the simulated universe. Detailed analysis of the results of these simulations, to find features such as collapsed halos, galaxy mergers, dwarf galaxies, and galaxy clusters, often requires density-based cluster analysis that does not parallelize well. With Flash Gordon, these analyses can be accelerated by exploiting the large SMP partitions and fast flash memory.
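One common form of the density-based grouping described above is the classic friends-of-friends criterion: particles connected by chains of neighbors closer than a linking length belong to the same halo. The O(N^2) sketch below is for illustration only; production halo finders use trees or spatial grids, and the chained neighbor search is exactly the irregular access pattern that benefits from large SMP partitions.

```python
def friends_of_friends(points, linking_length):
    """Group points (tuples of coordinates) into halos: any chain of
    pairs closer than linking_length joins one group. Uses a small
    union-find structure with path compression."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    ll2 = linking_length ** 2
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= ll2:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

halos = friends_of_friends([(0, 0), (0.4, 0), (0.8, 0), (5, 5), (5.3, 5)], 0.5)
```

Here the first three particles form one halo through chaining even though the endpoints are farther apart than the linking length, which is what distinguishes this from a simple pairwise cut.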
Interaction Networks: Interaction networks, graphs representing the relationships between objects, are used in research in areas such as epidemiology, phylogenetics, systems biology, and population biology. These interaction networks can represent relationships between types of data stored in different databases; for example, the combination of social network databases with medical records and genomic profiles to explore questions such as genetic resistance to disease. Flash Gordon will speed analysis of large interaction networks because the databases can be stored on the solid-state disks, greatly reducing access time and permitting more complex types of analysis.
The project team will leverage a number of ongoing educational activities at UCSD to expand and diversify the community of users that can utilize this computational resource, including successful outreach programs for women and minorities from underrepresented groups in science and engineering. The project will also create a summer training program for undergraduates.
|
2011 — 2015 |
Norman, Michael |
Development of Cello/Enzo-P: An Extremely Scalable Adaptive Mesh Refinement Framework and Code For Astrophysics and Cosmology @ University of California-San Diego
This award supports the development of Cello, a new parallel adaptive mesh refinement (AMR) software framework. The purpose of Cello is to enable researchers to write multi-physics AMR applications that can harness the enormous computing power of current and future world-class high- performance computing (HPC) platforms. The distinguishing characteristic relative to existing AMR frameworks is the aggressive pursuit at the onset of extreme scalability, both in terms of software data structures and hardware parallelism. Integral to developing Cello will be developing Enzo-P, a petascale astrophysics and cosmology application built on top of the Cello framework. Enzo-P will not only help drive development of the underlying Cello framework, but it will serve as a highly scalable variant of the Enzo terascale astrophysics and cosmology community code. Both the Enzo-P science application, and the underlying independent Cello parallel AMR software framework, will be released and supported as community software.
Software sustainability will be realized under the dual support of the Laboratory for Computational Astrophysics (LCA) at the University of California San Diego and the San Diego Supercomputer Center, organizations devoted to the long-term maintenance of, and user support for, community scientific codes and HPC cyberinfrastructure. Software self-management will be an integral component of the Cello software design, with software resilience a high priority. The Cello framework will be designed to detect hardware and software faults, identify performance and numerical issues, and dynamically reconfigure to always perform with the highest possible efficiency on currently available hardware components. Energy efficiency can be considered implicit in the underlying adaptive mesh refinement approach, which dynamically targets computational resources where they are required and avoids expending resources where they are not. The adaptivity of AMR translates directly to energy savings. This project synthesizes the best practices of existing parallel AMR frameworks and adds several innovations that improve the quality of the mesh for various types of simulations. A hierarchical approach to parallelism and load balancing is taken, maximizing locality while reducing global synchronization to a minimum. This will enable the construction of AMR applications of unprecedented spatial and temporal dynamic range on tomorrow's hierarchical HPC architectures. Adaptive mesh refinement has proven to be a powerful numerical technique in a wide variety of disciplines in the pure and applied sciences and engineering. Existing frameworks will probably not be scalable to tens and hundreds of millions of cores, meaning that existing applications built on them will need to move to more scalable and fault-tolerant frameworks. Cello is being built with these applications in mind.
The broader impacts of this software development will come through the science and engineering applications that Cello supports, as well as novel design principles it embodies.
|
2011 — 2015 |
Norman, Michael |
Collaborative Research: the First Galaxies and Their Stellar Populations @ University of California-San Diego
AST-1109243/1109008 Norman/Reynolds
Adaptive mesh refinement cosmological radiation hydrodynamic simulations pioneered by this group have recently expanded to include sufficient physics and enough resolution to simulate directly the prior effects of Population III star-forming mini-halos on the first galaxies. This technical advance has occurred just when observations from both ground and space are providing astrophysical data on galaxies in the previously unimaginable range of redshifts from 6 to 8.5. These observations indicate substantial evolution in the bright end of the luminosity function, and a steep faint end slope consistent with the notion that high-redshift dwarfs dominate the reionization photon budget. They also reveal substantial evolution in stellar populations over this redshift interval. The advance in simulation capability is also well timed for the start of operations of the first NSF-supported sustained petaflops supercomputer, Blue Waters.
Combining these software, hardware, and data advances, this project will simulate the formation and evolution of the first galaxies across the entire range of masses and luminosities observed, and compare the results to the data through simulated images and spectra. This will lead to inferences about the Population III initial mass function, feedback effects, the growth of supermassive black holes, and even the dark matter power spectrum, and place important constraints on the accretion rate of gas onto galaxies. The simulations will improve estimates of the escape of ionizing radiation into the intergalactic medium, and directly address whether high-redshift galaxies are capable of reionizing the Universe. This is the very first time such near-first-principles simulations are feasible, due to the convergence of petascale computing hardware, simulation software, and algorithmic advances, and also the first time that relevant observational constraints are being obtained.
Simulations at this scale have a strong impact on the astronomical community, the high performance computing community, and the larger science and technology-curious public. They drive the development of the hardware, software, and networking infrastructure. This project is one of six selected for first access to NSF's new petascale supercomputer, and the required code enhancements will be made generally available. There are also plans to include these scientific results in a new IMAX film by the Reuben H. Fleet Science Museum in Balboa Park, San Diego, anticipated in 2013.
|
2012 — 2016 |
Norman, Michael; Chourasia, Amit |
SEED: Swiftly Encode Explore Disseminate @ University of California-San Diego
In many scientific disciplines, computational simulations form the third pillar of research complementing theory and experiment. With simulations increasing in complexity, resolution and duration, visualization of intermediate results has become important in rapid assessment of simulation data. Moreover with the ubiquity of sensing devices, joint analysis of simulation and observational data is emerging within research communities.
This project capitalizes on recent video standardization strides emerging within web communities. It explores the use, adaptation, and optimization of these video capabilities within the cyberinfrastructure required for advanced computational science. The resulting visualization aims to promote collaboration and facile sharing of scientific results among research and non-research communities.
|
2013 — 2019 |
Norman, Michael; Baru, Chaitanya; Papadopoulos, Philip; Moore, Richard (co-PI); Wilkins-Diehr, Nancy; Sinkovits, Robert; Strande, Shawn |
Gateways to Discovery: Cyberinfrastructure For the Long Tail of Science @ University of California-San Diego
The University of California at San Diego will provide a ground-breaking new computing facility, Wildfire, that will be made available to the research community, both to well-established users of high-end computing (HEC) and especially to new user communities that are less familiar with how HEC can advance their scientific and engineering goals. The distinguishing features of Wildfire are: (i) deliver 1.8-2.0 Petaflop/s of long-sought capacity for the 98% of XSEDE jobs (50% of XSEDE core hours) that use fewer than 1,000 cores, while also supporting larger jobs (the exact figure will depend on the speed of the processor being delivered by Intel but cannot be less than 1.8 Petaflop/s); (ii) provide 7 PB of Lustre-based Performance Storage at 200 GB/s bandwidth for both scratch and allocated storage, as well as 6 PB of Durable Storage; (iii) ensure high throughput and responsiveness using allocation and scheduling policies proven on earlier deployed systems such as Trestles and Gordon; (iv) establish a rapid-access queue to provide new accounts within one day of the request; and (v) enable community-supported custom software stacks via virtualization for communities that are unfamiliar with HPC environments. These virtual clusters will be able to perform at or near native InfiniBand bandwidth/latency.
Wildfire will provide novel approaches for resource allocation, scheduling, and user support, queues with quicker response for high-throughput computing, medium-term storage allocations, virtualized environments with customized software stack, dedicated allocations of physical/virtual machines, support for Science Gateways and bandwidth reservations on high-speed networks. Wildfire has been designed to efficiently serve the 98% of XSEDE jobs that need fewer than 1,000 cores, while also supporting larger jobs. The award leverages but also enhances the services available through the XSEDE project.
The Wildfire acquisition will work to increase the diversity of researchers able to effectively make use of advanced computational resources and establish a pipeline of potential users through virtualization, science gateways and educational activities focused on the undergraduate, graduate and post-graduate levels.
|
2014 — 2017 |
Norman, Michael; Chourasia, Amit |
CIF21 DIBBs: Ubiquitous Access to Transient Data and Preliminary Results Via the SeedMe Platform @ University of California-San Diego
Computational simulations have become an indispensable tool in a wide variety of science and engineering investigations. Quick and effective assessments of the resulting data are necessary for efficient use of researcher time and computation resources, but this process is complicated when a large collaborating team is geographically dispersed and/or some team members do not have direct access to the computation resource and output data. Current methods for sharing and assessing transient data and preliminary results are cumbersome and labor intensive; each research team must create their own scripts and ad hoc procedures to push data from system to system and user to user. Better tools and cyberinfrastructure are needed to support preliminary results sharing for collaborating computational science teams.
This project develops web-based building blocks and cyberinfrastructure to enable easy sharing and streaming of transient data and preliminary results from computing resources to a variety of platforms, from mobile devices to workstations, making it possible to quickly and conveniently view and assess results and provide an essential missing component in High Performance Computing and cloud computing infrastructure.
|
2014 — 2016 |
Norman, Michael; Polichar, Valerie (co-PI); Hutton, Thomas |
CC-NIE Networking Infrastructure: Configurable, High-Speed, Extensible Research Bandwidth (CHERuB) @ University of California-San Diego
The Configurable, High-speed, Extensible Research Bandwidth (CHERuB) project connects the University of California, San Diego (UCSD) and the San Diego Supercomputer Center (SDSC) to national research networks at 100 Gbps for collaborative research in data-driven science and engineering. CHERuB supports multi-institutional data transit over research networks such as Internet2 and ESnet for analysis and distributed computing, such as that provided by SDSC's Gordon project, and collaboration, such as that facilitated by UCSD's LHC Tier-2 site. It includes an upgrade of the campus gateway to support CENIC's newly installed 100G pipe between San Diego and Pacific Wave's 100G regional research network collective, and intra-campus infrastructure to facilitate high-speed access by targeted research activities.
CHERuB enables advancement of knowledge in large cosmology, atmospheric science, high-energy physics and other fields by providing high-bandwidth connectivity between the researcher and national networks. The project allows scientific instruments to efficiently transport big data, removing the network as a bottleneck in the course of scientific discovery. Simulation, visualization, measurement, and interactive experimentation are streamlined. Exploration of OpenFlow/OSCARS coordination with real-world data flows will help other research users understand how best to set up projects for large pipes/big data.
Beyond the initial science drivers, CHERuB establishes infrastructure to serve additional research projects on campus and across the country. UCSD shares what we learn via CHERuB about transmission and processing of big data across the national network infrastructure with other campuses. High bandwidth makes possible new instructional activities that cross institutions and locations, such as telemedicine and distributed distance learning.
|
2014 — 2017 |
Norman, Michael |
Si2-Sse: Petascale Enzo: Software Infrastructure Development and Community Engagement @ University of California-San Diego
The purpose of this project is to develop an astrophysics and cosmology software application "Enzo-P", built on the highly scalable parallel adaptive mesh refinement (AMR) software framework "Cello" that is being developed concurrently. The Enzo-P application will be capable of running extreme-scale numerical simulations to investigate frontier questions in star formation, molecular cloud turbulence, interstellar medium dynamics, galaxy formation, the intergalactic medium, the formation of the first stars and galaxies, galaxy clusters, and cosmic reionization. This new software will empower the current large and diverse Enzo user/developer community to take full advantage of current and future high performance computing (HPC) systems. The Cello AMR framework can be used independently of Enzo-P, thus enabling researchers in other diverse scientific fields to develop AMR applications capable of running on "Petascale-and-beyond" HPC platforms.
The novel approach used for Cello is to implement a "forest-of-octree" AMR scheme using the Charm++ parallel programming system. Octree-based AMR has been shown to be among the highest scaling AMR approaches, with demonstrated scaling to over 200K CPU cores. The Charm++ object-oriented parallel programming language supports data-driven asynchronous execution, is inherently latency-tolerant and automatically overlaps computation with communication, and provides support for developing Exascale applications, including in-memory distributed checkpointing and sophisticated dynamic load balancing schemes. Enzo-P development will be directed by the vibrant Enzo open development community, who will migrate Enzo's self-gravity, cosmology, chemistry and cooling, MHD, and radiation hydrodynamics capabilities to use the Cello scalable AMR framework.
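The forest-of-octrees scheme described above can be sketched in a few lines: a set of root blocks tiles the domain, and each root refines recursively wherever a criterion flags it. This is an illustrative toy (one-dimensional, plain Python, no Charm++), not Cello code; the refinement criterion and all numbers are invented.

```python
# Toy "forest of trees" AMR: each root block splits recursively where flagged.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Block:
    lo: float                 # left edge of this block's interval
    hi: float                 # right edge
    level: int                # refinement level (0 = root of its tree)
    children: List["Block"] = field(default_factory=list)

    def refine(self, needs_refinement: Callable[[float, float], bool],
               max_level: int) -> None:
        """Recursively split blocks flagged by the criterion."""
        if self.level < max_level and needs_refinement(self.lo, self.hi):
            mid = 0.5 * (self.lo + self.hi)
            self.children = [Block(self.lo, mid, self.level + 1),
                             Block(mid, self.hi, self.level + 1)]
            for child in self.children:
                child.refine(needs_refinement, max_level)

    def leaves(self) -> List["Block"]:
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# A "forest": several root blocks tile the domain, each the root of its own tree.
forest = [Block(0.0, 0.5, 0), Block(0.5, 1.0, 0)]
dense_region = lambda lo, hi: lo <= 0.3 <= hi   # invented criterion: refine near x=0.3
for root in forest:
    root.refine(dense_region, max_level=3)
leaf_levels = sorted({b.level for root in forest for b in root.leaves()})
```

Only blocks overlapping the flagged region are subdivided, so resolution concentrates where it is needed; in the real scheme each refinement step splits a 3D block into eight octants rather than two halves, and Charm++ schedules the resulting blocks asynchronously across processors.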
|
2015 — 2017 |
Norman, Michael Bordner, James |
Realistic Simulations of the Intergalactic Medium: the Search For Missing Physics @ University of California-San Diego
In the past decade new, more precise observations of the intergalactic medium (IGM)--the hydrogen and helium gas between the galaxies produced in the Big Bang--have revealed a discrepancy with the well-established predictions of our computational models. In particular, precision observations of the IGM using the Keck telescopes in Hawaii show that the temperature and ionization state of the IGM are not what our standard cosmological simulations predict: the IGM is either somewhat hotter than ultraviolet radiation from stars in galaxies can make it, or the IGM is distributed differently in space than the simulations predict, or both. There could be missing sources of heat in our models, such as energy injection by decaying dark matter particles. The discrepancy is perplexing since the standard model predicts the galaxy distribution exceedingly well. The discrepancy suggests that the standard model lacks some essential ingredient which we refer to simply as "missing physics". The significance of this project to the nation is that it promotes the progress of science in the fundamental field of cosmology where the US is a world leader. The project is addressing the issue of whether we are overlooking a key component of the mass-energy content of the universe. Precise answers require powerful tools, and the Blue Waters supercomputer is the tool for the job.
In this work, the investigators will systematically explore how the discrepancy between models and observations can be removed. They will carry out simulations of unprecedented size and physical realism on the Blue Waters supercomputer to hunt down the missing physics. Since the IGM comprises most of the volume and matter in the universe, a small discrepancy may translate into a big discovery. Very large 3D simulations are required because the range of scales in each dimension exceeds 4000:1. They will examine whether additional heating sources such as X-rays and ultraviolet light from early galaxies and quasars can make up the heat deficit. Alternatively, they will also examine whether the limited spatial resolution in current simulations explains the discrepancy by carrying out the first large-scale simulations of the IGM that also simulate the gas near galaxies in detail.
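The 4000:1 figure above already explains why these runs need a machine like Blue Waters; a quick back-of-envelope calculation (our own arithmetic, with an assumed per-cell memory cost, not a figure from the project) makes it concrete:

```python
# A uniform grid resolving a 4000:1 range of scales in each dimension needs at
# least ~4096 cells per side (the next power of two above 4000).
cells_per_side = 4096
total_cells = cells_per_side ** 3                # ~6.9e10 cells
bytes_per_cell = 8 * 10                          # assumed: ~10 double-precision fields
memory_tb = total_cells * bytes_per_cell / 1e12  # ~5.5 TB for the field data alone
```

Tens of billions of cells and terabytes of state per snapshot put even a modest uniform-grid version of this problem well beyond a workstation, before accounting for time-stepping, radiative transfer, or output storage.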
|
2015 — 2018 |
Norman, Michael Robinson, Erin Kirkpatrick, Christine |
Bd Hubs: Collaborative Proposal: West: a Big Data Innovation Hub For the Western United States @ University of California-San Diego
The Big Data Innovation Hub for the Western United States will join stakeholders from academia, industry, non-profit institutions and the community who share common challenges and innovative approaches related to the acquisition, storage, analysis and integration of large or "messy" data, commonly referred to as Big Data. The West's Innovation Hub (Hub) will serve 13 states with Montana, Colorado and New Mexico marking the eastern boundary. This project will develop the organizational and governance structures for the Hub, and initiate efforts toward defining spoke activities for subsequent phases of the data innovation hubs program.
The initial themes include Big Data technology, data-enabled scientific discovery and learning, managing natural resources and hazards, metro data science, and precision medicine. Partnerships fostered through the Hub will enable the use of Big Data to assess risks related to regional and long-term decisions. The Hub's structure will enable impact in later phases of the Hubs program that may range from data-driven models for managing natural resources to tools for integrating self-collected patient data for more precise care options. Through coordination activities that inspire the action of its members, the Hub has the potential to facilitate the improved flow of commercial technologies in ways that maximize competitiveness for member organizations, such as universities, and vice versa: the Hub has the potential to expand the impact of its members' technologies through greater adoption or via start-ups. The Hub will be impactful by facilitating cross-discipline approaches to Big Data innovation and problem solving, influencing the next generation of thought leaders and data scientists. The partnerships enabled by the Hub will lead to professional certificate programs and student internships, creating a pipeline of graduates from partner institutions to impact corporations, public and governmental agencies, national labs, resource-planning agencies, and regulatory commissions.
Project URL: BDHub.SDSC.edu
|
2016 — 2019 |
Xu, Hao Norman, Michael |
Collaborative Research: Cds&E: Renaissance Simulations Laboratory to Model and Explore the First Galaxies in the Universe @ University of California-San Diego
A great challenge in astrophysics is to understand in detail how the initial smooth distribution of matter in the early Universe formed the first galaxies. Complementing observations of real galaxies, researchers use computational simulations to model the early Universe and study the results. This process allows one to learn how these first galaxies might have formed. However, the sheer size and complexity of such galaxy simulations present their own challenge as a single research group lacks the capacity to explore them fully. As a result, maximizing the scientific value of simulations demands new tools and services designed to foster the growth of a collaborative, multi-group research community. This project aims to develop and use a new virtual laboratory to enable transformative scientific inquiry on new and existing galaxy simulations, some of which were produced by prior NSF support. Enabling public access and unrestricted analysis and fostering a collaborative environment for sharing technology and results will ensure that galaxy simulations continue to be valuable within and beyond the research group that originally conducted them. This project addresses the national imperative to develop US cyberinfrastructure and US leadership in scientific research in astrophysics.
More technically, with prior NSF support the investigators used the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) to perform the Renaissance Simulations: among the largest, most detailed simulations of the formation and evolution of the first galaxies to date. The Renaissance Simulations yielded a sample of over 3,000 high redshift (20 > z > 7) galaxies and were specifically designed to simulate the dominant sources responsible for preheating the intergalactic medium and reionization at an unprecedented level of detail. This project plans two related activities: (1) further analysis of the Renaissance Simulations; and (2) the creation of an open-data access portal, the Renaissance Simulation Laboratory (RSL), which will provide open access to this unique data corpus along with associated data analysis tools for the astronomical research community. Driving its design and utility, the investigators will use the RSL to carry out their own research investigations. The investigators' research processes will become part of the RSL in the form of executable "Jupyter" notebooks. The Jupyter Notebook, an evolution of the IPython Notebook, is a fundamental part of the RSL and is ushering in the era of open science. The investigators' use of Docker containers is equally compelling. By containerizing Jupyter Notebooks that execute pre-programmed data analysis workflows, the investigators are able to bring computation to the data and share how they obtain their scientific results, a key step toward reproducible science. Users will have the option of downloading notebooks and associated data to their own platforms or executing them on San Diego Supercomputer Center and NCSA high performance platforms.
|
2018 — 2021 |
Norman, Michael Bordner, James |
Collaborative Research: Framework: Software: NSCI: Enzo For the Exascale Era (Enzo-E) @ University of California-San Diego
The earliest stages of the formation of galaxies and quasars in the universe are soon to be explored with a powerful new generation of ground and space-based observatories. Broad and deep astronomical surveys of the early universe beginning in the next decade using the Large Synoptic Survey Telescope and the James Webb Space Telescope will revolutionize our understanding of the origin of galaxies and quasars, and help constrain the nature of the dark matter, which is the dominant matter constituent in the universe. Detailed physical simulations that model the formation of these objects are an indispensable aid to understanding the coming glut of observational data, and to maximize the scientific return of these instruments. In this project, investigators at the Univ. California San Diego, Columbia Univ., Georgia Tech, and Michigan State Univ. are collaborating with the goal of developing a next generation community simulation software framework for the coming generation of supercomputers for cosmological simulations of the young universe. Undergraduate and graduate students will be directly involved in the software development as well as its application to several frontier cosmological research topics. The software framework that will be produced will be disseminated as open source software to enable a much broader range of scientific explorations of astrophysical topics.
The project brings together the key developers of the open source Enzo adaptive mesh refinement (AMR) hydrodynamic cosmology code, who will port its software components to a newly developed AMR software framework called Cello. Cello implements the highly scalable array-of-octrees AMR algorithm on top of the powerful Charm++ parallel object system. Designed to be extensible and scalable to millions of processors, the new framework, called Enzo-E, will target exascale high performance computing (HPC) systems of the future. Through this project, the entire Enzo community will have a viable path to exascale simulations of unprecedented size and scope. The investigators have chosen three frontier problems in cosmology to drive the development of the Enzo-E framework: (1) the assembly of the first generation of stars and black holes into the first galaxies; (2) the role of cosmic rays in driving galactic outflows; and (3) the evolution of the intergalactic medium from cosmic dawn to the present day. Annual developer workshops and software releases will keep the broader research community informed and involved in the developments.
This project is supported by the Office of Advanced Cyberinfrastructure in the Directorate for Computer & Information Science & Engineering and the Division of Astronomical Sciences in the Directorate of Mathematical and Physical Sciences.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2019 — 2024 |
Lazowska, Edward (co-PI) [⬀] Norman, Michael Culler, David Smallen, Shava Kellen, Vincent |
Cloudbank: Managed Services to Simplify Cloud Access For Computer Science Research and Education @ University of California-San Diego
The University of California, San Diego's San Diego Supercomputer Center and Information Technology Services Division, the University of Washington's eScience Institute, and the University of California, Berkeley's Division of Data Science will develop and operate CloudBank, a cloud access entity that will help the computer science community access and use public clouds for research and education by delivering a set of managed services designed to simplify access to public clouds. Driven by the profound potential of the public cloud and the associated complexity in using it, CloudBank will serve as an integrated service provider to the research community through a comprehensive set of user-facing and business operations functions. These services will span the spectrum from novice to advanced cloud users, including front line user support, cloud solution consulting, training, and assistance in preparing proposals that include cloud resources. CloudBank will provide innovative financial engineering options that will give researchers more flexible cloud terms tailored for their needs and contribute to the sustainability of CloudBank operations. CloudBank will help NSF by bundling multiple small requests that come directly to NSF into a bulk request to cloud providers, disincentivizing more costly direct connections. Through this aggregation and innovative financial contract types, CloudBank will pass along savings to researchers that would otherwise be unavailable to them.
CloudBank will provide on-ramp support that reduces researcher cloud adoption pain points such as: managing cost, translating and upgrading research computing environments to an appropriate cloud platform, and learning cloud-based technologies that accelerate and expand research. It will be complemented by a cloud usage monitoring system that gives NSF-funded researchers the ability to easily grant permissions to research group members and students, set spending limits, and recover unused cloud credits. These systems will support multiple cloud vendors, and be accessed via an intuitive, easy-to-use user portal that gives users a single point of entry to these functions.
The CloudBank project and associated portal software, outreach and training materials, and experience in negotiating and delivering public cloud services will significantly advance the state of the practice and understanding of how to use these resources in computer science research and education. The close collaboration between the CloudBank project, cloud providers, researchers, and students will simultaneously enable new research while providing a unique opportunity to develop and study the operational, technological and business dimensions of a fundamentally new model of public/private partnership in the service of the research enterprise.
It is a primary objective of CloudBank to broaden the access and impact of cloud computing across the many fields of computer science research and education. The project will reach hundreds of researchers and students through allocated research projects and classes. A far larger group of stakeholders will benefit from CloudBank outreach efforts, such as workshops, publications, the CloudBank Center of Excellence on Cloud-Enabled Research and Education, and the CloudBank Advisory Board. CloudBank offers a long-term vision for service and sustainability that will broaden the impact of public cloud computing across all sciences and help ensure that students entering the workforce and research enterprise will be able to contribute and compete in the global economy.
This project is accessible at http://tiny.cc/cloudbank
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2019 — 2020 |
Norman, Michael Majumdar, Amitava Altintas, Ilkay Strande, Shawn Tatineni, Mahidhar (co-PI) [⬀] |
Category I. Computing Without Boundaries: Cyberinfrastructure For the Long Tail of Science @ University of California-San Diego
Science and engineering rely upon an increasingly complex and integrated ecosystem of advanced computing and data systems, scientific software, and expertise to conduct research that leads to new knowledge and discovery, and improves the Nation's competitiveness and the health and welfare of its citizens. From the building blocks of life on earth to the deepest mysteries of the universe, researchers use this cyberinfrastructure (CI) to carry out computation and analysis at ever larger scales and complexity. Computing without Boundaries: Cyberinfrastructure for the Long Tail of Science is a transformational project from the San Diego Supercomputer Center at the University of California, San Diego, designed to address these challenges. The centerpiece of the project is the acquisition, deployment, and operation of Expanse, a powerful supercomputer that will complement and extend NSF's Innovative High-Performance Computing (HPC) program. Expanse will: 1) increase the capacity and performance for thousands of users of batch-oriented and science gateway computing; and 2) provide new capabilities that will enable research increasingly dependent upon heterogeneous and distributed resources composed into integrated and highly usable CI. Expanse will feature innovations in system software, operations, and support that extend its capabilities far beyond the limits of the physical system. Through its integration with the public cloud and the Open Science Grid, collaboration with the Science Gateway Community Institute, and support for composable systems, Expanse will become part of a more inclusive national CI. The long tail of computing reflects diversity in science, researchers, their institutions, and those who support and operate CI. 
The project will reach out to underserved and under-resourced communities, through new initiatives like HPC@MSI, which will allocate a portion of Expanse to Minority Serving Institutions via a rapid access allocation to help them quickly use this powerful new resource. Projected to have a peak speed of 5 Petaflop/s, Expanse will feature next-generation Intel Central Processing Units (CPUs), NVIDIA Graphics Processing Units (GPUs), and a Mellanox InfiniBand network. It will be composed of 13 SDSC Scalable Compute Units (SSCUs), each of which contains 56 CPU nodes and 4 GPU nodes along with over 60 TB of distributed non-volatile memory storage for user scratch. The SSCUs will be integrated with a 12-PB Lustre parallel file system and 7 PB of object storage. SDSC, along with its partners Dell and Aeon Computing, will deploy Expanse in SDSC's energy-efficient data center on the UCSD campus. Expanse will be connected to multiple high-performance research and education networks at 100 Gbps and reach thousands of users who require high performance, but modest-scale resources. Allocation and usage policies will be tailored to achieve fast turnaround and responsiveness. Experts in computational science, data-intensive computing, scientific workflows, and large-scale systems operations will support Expanse at the highest levels of utilization, reliability, and usability. Through integration with national CI, Expanse will enable new models of computing and research that require the full complement of HPC systems and simulation, experimental data analysis, and computational expertise to support and facilitate breakthrough science. Knowledge gained through the project will lead to improvements in algorithms, software, and systems management tools, as well as a better understanding of how integrated CI can be configured to support emerging research.
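The per-SSCU figures quoted above imply the following system totals (simple multiplication on our part; the scratch figure is a lower bound because the abstract says "over 60 TB" per SSCU):

```python
# System totals implied by 13 SSCUs at the stated per-unit composition.
sscus = 13
cpu_nodes = sscus * 56     # 728 CPU nodes across the system
gpu_nodes = sscus * 4      # 52 GPU nodes
scratch_tb = sscus * 60    # at least ~780 TB of distributed NVMe scratch
```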
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2019 — 2023 |
Norman, Michael Kirkpatrick, Christine |
Bd Hubs: Collaborative Proposal: West: Accelerating the Big Data Innovation Ecosystem @ University of California-San Diego
The BD Hubs foster regional networks of stakeholders and cooperate nationally on US priorities of importance to a region and to the nation. The activities of the BD Hubs contribute to a vibrant national data innovation ecosystem. The West Big Data Innovation Hub builds and strengthens strategic partnerships -- harnessing the data revolution to address scientific and societal challenges. Whether working towards the future of data-informed healthcare or tackling projects in disaster recovery, the Hub envisions a diverse community empowered to contribute to areas of national priority. The Hubs focus on data science activities and initiatives that inspire cross-sector collaboration and exemplify the need for multi-disciplinary approaches.
With this award, the West Big Data Innovation Hub will: (1) Develop and enable translational data science (TDS) pilot projects in our thematic areas to highlight the value of cross-sector collaboration, enhance fluency with real-world use cases, and emphasize a pragmatic and holistic view of the data and analytics lifecycles. We envision our signature TDS initiatives for 2019-2023 to include: Fire and Water: Data Collaborative for the Future of Natural Resource Management; Stress-Testing Access for Road Video; and Housing Instability: Trusted Data Collaborative for Responsible Data Management. (2) Facilitate team formation across different stakeholder groups through our activities, capturing inspirational stories and encouraging teams to reflect and share their insights about cross-sector collaboration. (3) Raise awareness of regional opportunities and inspire work in priority areas including Natural Resources & Hazards, Metro Data Science, Health & Medicine, Data-Enabled Discovery & Learning, Data Sharing, Cloud Computing, and Responsible Data Science. (4) Support data science education and workforce development. Recognizing that a diverse, multi-faceted workforce is key to addressing current scientific and societal challenges, we will continue to expand our portfolio of education and workforce development efforts, including a focus on Train-the-Trainer sessions, Pedagogy and Practice, Data Science for Social Good and the Data Science Corps, Findable Accessible Interoperable and Reusable (FAIR) data, and institutional change -- providing a platform for broadening participation in data science. Core to our progress in Programmatic Activities, Socio-Technical Shared Resources and Services, and Education and Workforce Development Activities will be a coordinated evaluation, opportunities for scaling regional successes to the national network of Big Data Hubs, and strategic efforts for Hub sustainability including the development of external funding streams. 
These efforts will be designed to enable community input and to strengthen channels for ongoing dialogue.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2021 — 2024 |
Norman, Michael |
Collaborative Research: Cds&E: Ai-Enhanced Exascale Simulations of the Earliest Galaxies @ University of California-San Diego
Impressive recent advances in accelerating physics-based scientific simulations using deep neural network-trained surrogate models have stimulated this project, which will use such models to accelerate cosmological simulations of the formation of the first galaxies, including chemical pre-enrichment by Population III stars. The surrogate model will be trained on high-resolution adaptive mesh refinement (AMR) simulations and validated by incorporating it into AMR simulations of lower spatial resolution. Once validated, this approach will become part of Enzo-E, the highly scalable version of the widely used Enzo code. A principal goal is to establish the validity and usefulness of this pathfinding approach, addressing a frontier problem in astronomy by combining numerical simulations with machine learning. This technical methodology can be customized for other astrophysical situations where a vast range of scales inhibits a direct numerical assault. It will also help to stimulate adoption of surrogate models in multiple disciplines, and lead to improved science-informed deep learning neural network designs.
The surrogate model, to be called Pop3Net, is composed of several convolutional deep neural networks of custom design which predict when and where Pop III stars form, and other networks that emulate radiative and chemical feedback effects. Pop3Net will be trained using state-of-the-art full-physics simulations of the formation of the first galaxies. The hybrid Enzo-E+Pop3Net code will be used to simulate the formation of large ensembles of high redshift galaxies, computing synthetic observations using the Powderday package, to compare with future data from the James Webb Space Telescope and the Nancy Grace Roman Space Telescope. This part of the work will examine how sensitive galaxy properties are to the chemical enrichment history of the first galaxies, including supersonic baryon-dark matter streaming velocity effects.
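To make the surrogate idea concrete, here is a deliberately tiny stand-in for what a network like Pop3Net does: scan a density field with a local (convolution-like) stencil and flag cells likely to form Pop III stars. Everything here (the smoothing kernel, the threshold, the field values) is invented for illustration; the real Pop3Net is a trained deep network, not a fixed rule.

```python
# Toy "surrogate": a fixed 3x3x3 smoothing stencil plus a threshold stands in
# for a trained convolutional network that predicts where stars form.
N = 8
density = [[[0.5 for _ in range(N)] for _ in range(N)] for _ in range(N)]
density[4][4][4] = 50.0                      # one overdense "halo" cell (invented)

def smoothed(i, j, k):
    """Local mean over the 3x3x3 neighbourhood (zero-padded at the edges)."""
    total = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                x, y, z = i + di, j + dj, k + dk
                if 0 <= x < N and 0 <= y < N and 0 <= z < N:
                    total += density[x][y][z]
    return total / 27.0

# Assumed decision rule: flag cells whose smoothed density exceeds a threshold.
star_sites = [(i, j, k) for i in range(N) for j in range(N) for k in range(N)
              if smoothed(i, j, k) > 1.0]
```

A real surrogate replaces the hand-picked stencil and threshold with weights learned from the full-physics training simulations, but the interface is the same: field in, per-cell prediction out, at a tiny fraction of the cost of solving the underlying equations.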
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|