2002 — 2005
Kennedy, Ken; Johnson, Don (co-PI); Zwaenepoel, Willy (co-PI); Vardi, Moshe; Mellor-Crummey, John
MRI: Acquisition of CITI Terascale Cluster (CTC) @ William Marsh Rice University
Award 0216467. PI: Moshe Y. Vardi. Co-PIs: Don H. Johnson; Ken W. Kennedy; John M. Mellor-Crummey; Willy Zwaenepoel. Title: MRI: Acquisition of CITI Terascale Cluster (CTC)
This proposal aims to support scalability to thousands of processors, a goal that requires access to experimental computational facilities of sufficient size to establish that solutions will scale to large systems. A high-performance computational cluster with a peak performance of approximately one teraflop, supporting both compute- and data-intensive science and engineering, will enable researchers to make fundamental advances in diverse areas such as biochemistry, biology, chemistry, computational mathematics, computer science and engineering, earth science, economics, physics, political science, and psychology. Planned experiments include:
a. Scalability of compiler techniques for systems with hundreds of processors and deep memory and communication hierarchies;
b. Development, simulation, and testing of scalable Web services on hundreds of processors;
c. Simulations of ad hoc multihop wireless networks scaling to thousands of nodes;
d. Scalable algorithms for Monte Carlo studies of the physics of heavy-ion collisions;
e. Design and evaluation of scalable optimization algorithms based on component frameworks;
f. Extraction and analysis of data on hundreds of millions of international events, to better predict and understand international conflicts (extending the Kansas Data System); and
g. Scalability tests and practical application of new algorithms for modeling and simulation of biomolecular interactions using several thousand flexibility parameters.
By integrating the equipment into the existing curriculum, the educational impact is expected to be large, extending beyond a single course in parallel programming. Several programs addressing diversity are already in place.
2002 — 2004
Cooper, Keith; Kennedy, Ken; Torczon, Linda; Mellor-Crummey, John
NGS: GrADS: Efficient Script-Based Application Development For Networked High Performance Computing Environments @ William Marsh Rice University
Title: NGS: Efficient Script-Based Application Development for Networked High Performance Computing Environments.
This action will continue research on the GrADS technologies for building Grid applications and expand their applicability to new classes of applications. In particular, the work will concentrate on workflow-style applications from mesoscale weather forecasting and computational biology. In addition, it will explore enhanced technologies for automatic construction of Grid applications from high-level specifications, along with new mechanisms for performance monitoring and rescheduling. The results will be demonstrated on new versions of the two GrADS testbeds.
2004 — 2008
Fowler, Robert; Koelbel, Charles; Cooper, Keith; Kennedy, Ken; Mellor-Crummey, John
Introducing Parallelism Into Scripting Languages Through Generalized Data Distributions and Library Preprocessing @ William Marsh Rice University
The goal of this project is to explore compilation technologies for high-level scripting languages that make it possible to run the generated code on leading-edge parallel computers without sacrificing application performance, programming clarity, or programming system responsiveness. The intellectual merit of the proposed research lies in research on the trade-offs between different compilation times (language generation time versus application compilation or interpretation time) to achieve higher optimization levels without making compile times unacceptably long.
2008 — 2011
Nakhleh, Luay (co-PI); Clementi, Cecilia (co-PI); Nordlander, Peter (co-PI); Mellor-Crummey, John; Tezduyar, Tayfun (co-PI)
MRI: Acquisition of Cyberinfrastructure For Computational Research (CCR) @ William Marsh Rice University
Proposal #: CNS 08-21727. PI: Mellor-Crummey, John. Co-PIs: Clementi, Cecilia; Nakhleh, Luay; Nordlander, Peter A.; Tezduyar, Tayfun E. Institution: Rice University, Houston, TX. Title: MRI/Acq.: Acq. of Cyberinfrastructure for Computational Research (CCR)
Project Proposed: This project, acquiring instrumentation and operating a computational cluster for research (CCR), enables code development, medium-scale computation, preparation of scalable codes for execution on national-scale resources, post-processing and visualization of results from remote supercomputers, and research training. Projects range from development of enabling technologies, such as programming models and performance analysis tools for parallel systems, to computational science and engineering research that includes techniques for automated verification of complex hardware and software designs, data mining of multiple whole genomes, studying large-scale ecological dynamics, determining the properties of nanophotonic structures through simulation, simulating protein dynamics to study flexibility and function, modeling transport properties in biophysical systems, and understanding fluid-structure interactions in physiological systems and engineering designs. The increase in the number of CPU hours available for shared research computing helps support the explosive growth of computational research. Hence, more tightly coupled computing can be supported at a larger scale; a centralized computing facility can be scheduled for higher utilization; a single system can be operated and administered (rather than collections of clusters for individual research groups); and the facility can serve as an effective on-ramp to national resources. Operating this modest-size computational cluster locally complements the TeraGrid and accelerates research.
In particular, higher bandwidth and lower latency to the desktop from a local system better supports code development using interactive tools with graphical user interfaces (e.g., for debugging, performance, and scalability analysis, and data visualization) while preparing codes for execution on national-scale resources. Indeed, computation is an indispensable tool for scientific inquiry and complements traditional approaches of theory and experimentation.
Broader Impacts: CCR enhances the graduate research and training of hundreds of students and post-doctoral fellows in science and engineering. It provides educational experience for undergraduate and graduate students through its use in courses that integrate high performance computing and computational problem solving. Furthermore, a boot camp will familiarize students with high performance computing and teach computational problem-solving skills. The instrument also contributes to recruiting and serves as an educational tool in the Alliance for Graduate Education and the Professoriate (AGEP), a program successful in attracting members of underrepresented groups into higher education, thus broadening their educational experience.
2012 — 2014
Sarkar, Vivek (co-PI); Mellor-Crummey, John
Collaborative Research - SI2-S2I2: High-Performance Computational Science With Structured Meshes and Particles (HPCS-SMP) @ William Marsh Rice University
The starting point for this proposal is a view of scientific simulation articulated in the conclusions of the 2008 National Academy of Sciences Study, The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering: "Advanced computational science and engineering is a complex enterprise that requires models, algorithms, software, hardware, facilities, education and training, and a community of researchers attuned to its special needs." (p. 122)
Over the last few years, the design of computer and software systems, particularly as they relate to simulation in the physical sciences, has been organized around a collection of algorithmic patterns / motifs. These patterns have been very productive because they are a natural "common language" in which application scientists can express their computations, and for which computer scientists can provide optimized libraries, domain specific languages, compilers, and other software tools.
This project will design an institute focused on a subset of these patterns --- structured grid discretizations of partial differential equations and particle methods, along with the linear and nonlinear solvers that enable their effective use --- with the specific goals of providing simulation capabilities for a set of scientific domains that make heavy use of these patterns. Two major components are envisioned for this proposed institute, called the Institute for High-Performance Computational Science with Structured Meshes and Particles (HPCS-SMP). The first component is a software infrastructure development activity that will be performed by a team whose expertise spans the design and development of mathematical algorithms and software frameworks, as well as the design and development of compilers, runtime systems, and tools that enable one to obtain high performance from emerging multicore and heterogeneous architectures. The second component is an outreach activity, in which algorithms, libraries, and software frameworks developed by the institute will be customized and integrated into simulation codes for stakeholder application domains. At the heart of this activity will be collaborations and partnerships, in which the institute will provide one or more software developers to collaborate with application scientists over a period of months to years to develop a new simulation capability or enhance an existing one.
The design of this institute will be carried out through a series of workshops, each focused on one of five stakeholder science domains that have been identified as using these motifs and that play a central role in various NSF Grand Challenge problems, with participation of both representatives of the science domain and the relevant mathematics and computer science communities. In addition, there will be a final workshop that will bring together the relevant mathematics and computer science experts to identify cross-cutting themes. The information obtained from these workshops will be used by the project to develop the final conceptual design of the institute, in the form of a document that includes the input from all of the workshops and an analysis of how this input leads to the design of a software institute.
2015 — 2020
Mellor-Crummey, John
SI2-SSI: Collaborative Research: A Sustainable Infrastructure For Performance, Security, and Correctness Tools @ William Marsh Rice University
Software has become indispensable to society, used by computational scientists for science and engineering, by analysts mining big data for value, and to connect society over the Internet. However, the properties of software systems for any of these purposes cannot be understood without accounting for code transformations applied by optimizing compilers used to compose algorithm and data structure templates, and libraries available only in binary form. To address this need, this project will overhaul, integrate, and enhance static binary analysis and runtime technologies to produce components that provide a foundation for performance, correctness, and security tools. The project will build upon three successful and widely adopted open source software packages: the DynInst library for analysis and transformation of application binaries, the MRNet infrastructure for control of large-scale parallel executions and data analysis of their results, and the HPCToolkit performance analysis tools. The project team will engage the community to participate in the design and evaluation of the emerging components, as well as to adopt its components.
This project will have a wide range of impacts. First, software components built by the project will enable the development of sophisticated, high-quality, end-user performance, correctness, and security tools built by the project team, as well as others in academia, government, and industry. Software developed by the project team will help researchers and developers tackle testing, debugging, monitoring, analysis, and tuning of applications for systems at all scales. Second, end-user tools produced by the project have a natural place in the classroom to help students write efficient, correct, and secure programs. Third, components produced by the project will lower the barrier for new researchers to enter the field and build tools that have impact on production applications without years of investment. Fourth, the project will provide training for graduate students and interns in the area of software for performance, correctness, and security. Finally, through workshops and tutorials, the project will disseminate project results, provide training to enable others to leverage project software, and grow a community of tool researchers who depend on project components and thus have a strong motivation to help sustain project software into the future.
Modernizing open-source software components and tools for binary analysis will enable static analysis of application characteristics at the level of executable machine code, transformation of binaries to inject monitoring code, measurement to capture a detailed record of an application's interactions with all facets of a target platform, analysis of recorded data in parallel, and attribution of analysis results back to application source code in meaningful ways. Providing innovative software components that support development of robust performance, correctness, and security tools will accelerate innovation by tools researchers and help them grapple with the increasing complexity of modern software. Of particular note, helping tools researchers and computational scientists grapple with the challenges of software for modern parallel systems, and producing training materials that help people use this software, addresses several of the needs identified in the NSF Vision for Cyberinfrastructure for the 21st Century.