2000 — 2004
Saltz, Joel (co-PI); Sussman, Alan
Compiler and Runtime Support For Data Intensive Computing On Multi-Dimensional Data @ University of Maryland College Park
One of the largest and fastest-growing problems in scientific computing is the analysis and processing of very large data sets. These scientific data sets can come from long-running simulations (e.g. simulations of water pollution that create "snapshots" of the expected water conditions at later times), archives of remote sensing data (e.g. high-resolution satellite imagery), and archives of medical images (e.g. MRI scans for a patient or group of patients). These data sets are usually multi-dimensional, including spatial coordinates, time stamps, and several physical properties at each point. Several systems now support storage, retrieval, and visualization of such data sets, but few can efficiently process the data. This project will develop methods to produce efficient programs to carry out multi-dimensional data processing and analysis using a high-level parallel language.
The project will attack this problem by developing runtime routines for optimizing resource usage, appropriate language extensions, and aggressive compiler optimizations for large data processing. The runtime methods will implement policies that optimize computational efficiency on a broad range of large data set analyses, taking into account the spatial structure and partitioning of the data and the computation to be performed. Incorporating these routines into the investigator's Active Data Repository will substantially generalize and improve that system. The language extensions and compiler optimizations will then make use of the runtime system to enable applications that analyze multi-dimensional data sets to be expressed at an abstract level, yet achieve high utilization of computational, storage, and communication resources.
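The kind of query such a runtime must make efficient can be illustrated with a small sketch. All names below are hypothetical, invented for illustration rather than taken from the Active Data Repository: a chunked multi-dimensional dataset is reduced by reading only the chunks whose bounding boxes intersect the query region.

```python
# Illustrative sketch (hypothetical names, not the ADR API): aggregate values
# from a spatially chunked dataset, skipping chunks outside the query region.

def intersects(chunk_box, query_box):
    """True if two axis-aligned boxes ((min, max) per dimension) overlap."""
    return all(cmin <= qmax and qmin <= cmax
               for (cmin, cmax), (qmin, qmax) in zip(chunk_box, query_box))

def aggregate(chunks, query_box, reduce_fn, init):
    """Reduce the values of all points inside query_box, chunk by chunk.

    chunks: list of (chunk_box, points), where points is a list of
    (coords, value) pairs.
    """
    acc = init
    for chunk_box, points in chunks:
        if not intersects(chunk_box, query_box):
            continue  # never read chunks that cannot contribute
        for coords, value in points:
            if all(qmin <= c <= qmax
                   for c, (qmin, qmax) in zip(coords, query_box)):
                acc = reduce_fn(acc, value)
    return acc
```

Skipping non-intersecting chunks is the filtering step that runtime policies of this kind optimize across disks and processors, on top of decisions about how the chunks themselves are partitioned.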
2001 — 2005
Sussman, Alan
Collaborative Research: ITR/AP&IM A Data Intense Challenge: The Instrumented Oil Field of the Future @ University of Maryland College Park
Mary Wheeler, University of Texas at Austin (0121523); Alan Sussman, University of Maryland, College Park (0121161); Joel Saltz, Ohio State University Research Foundation (0121177); Manish Parashar, Rutgers University (0120934)
Increasing production from existing oil and natural gas reservoirs is crucial for the US economy. In order to better monitor and optimize oil and gas production, advanced technologies from field instrumentation to information technology and computational science are essential. Field technologies include time-lapse surface and borehole seismic, permanent downhole sensors, intelligent well completions, fiber optics, and remote control operations. IT technologies include data management, data visualization, parallel computing, and decision-making tools such as new wave propagation and multiphase, multi-component flow and transport computational portals. These diverse technologies can be integrated to achieve real-time monitoring and optimization of reservoir production: The Instrumented Oilfield.
A major outcome of the proposed research is a computing portal which will enable reservoir simulation and geophysical calculations to interact dynamically with the data and with each other and which will provide a variety of visual and quantitative tools. Test data will be provided by oil and service companies currently participating in UT Austin industrial affiliate programs. Since the proposed research is directed towards the general problem of modeling and characterization of the earth's subsurface, it has immediate application to other areas, including environmental remediation and storage of hazardous wastes.
2005 — 2006
Keleher, Peter; Sussman, Alan; Richardson, Derek (co-PI); Bhattacharjee, Samrat (co-PI)
CSR-AES: Employing Peer-to-Peer Services for Robust Grid Computing @ University of Maryland College Park
The project will design and build a massively scalable infrastructure for executing grid applications on a widely distributed set of resources. The infrastructure will be decentralized, robust, highly available, and secure, while efficiently mapping applications to available resources throughout the system. This proposal addresses the crucial problem of job placement in a completely distributed and decentralized manner. The proposed methods include a distributed algorithm for submitting jobs and efficiently matching them to available resources, and P2P techniques for both load balancing and resilience; preliminary analysis indicates that the proposed scheme scales seamlessly with system size and is robust against component failures and peer departures/joins. The project will implement and validate the system using a set of problems in computational astronomy. The work proposed here will make robust, reliable, large-scale computing resources available to users from a broad range of scientific disciplines, allowing users to form ad-hoc resource-sharing communities. Such a system will enable scientists to seamlessly contribute resources to the community and to utilize those resources as needed. All software developed as part of this proposal will be made publicly available, helping other researchers and practitioners to evaluate it and build on it.
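As a rough illustration of decentralized matchmaking (a hypothetical sketch, not the project's actual protocol), jobs and peers can be hashed onto the same identifier ring, and a job routed clockwise from its key until a peer with sufficient resources is found; no central server is involved, and the hash spreads load across peers:

```python
# Hypothetical sketch of DHT-style job placement; all names are illustrative.
import hashlib

def ring_key(name, bits=32):
    """Map a peer or job name onto a circular identifier space."""
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % (1 << bits)

def place_job(job_name, job_mem, peers):
    """peers: dict of peer name -> available memory (arbitrary units).

    Returns the first peer clockwise from the job's key that can satisfy
    the job's memory requirement, or None if no peer can.
    """
    key = ring_key(job_name)
    ring = sorted((ring_key(p), p) for p in peers)
    n = len(ring)
    # index of the first peer at or after the job's key (wrap to 0)
    start = next((i for i, (k, _) in enumerate(ring) if k >= key), 0)
    for step in range(n):  # walk clockwise until a capable peer is found
        _, peer = ring[(start + step) % n]
        if peers[peer] >= job_mem:
            return peer
    return None
```

In a real deployment each peer would hold only a slice of the ring and route the request hop by hop, but the placement decision each hop makes is the same local test shown here.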
2006 — 2011
Keleher, Peter; Sussman, Alan; Richardson, Derek (co-PI); Bhattacharjee, Samrat (co-PI)
CSR-AES: Creating a Robust Desktop Grid Using Peer-to-Peer Services @ University of Maryland College Park
The confluence of peer-to-peer and Grid computing has been described as inevitable in several recent papers. Peer-to-peer (P2P) services show the promise of providing robustness, reliability, and decentralization in many areas, while Grid computing has shown great benefits for enabling applications to take advantage of idle cycles available on machines all across the Internet. In this project we are employing P2P services to both allow users (peers) to submit jobs to be run in the system and to run jobs submitted by other users on any resources available in the system. All peers contribute resources to an ad-hoc resource pool, and all peers can submit jobs that are executed using available resources, across institutional boundaries. Using P2P services can provide a robust, reliable, scalable job submission and execution system that is able to take advantage of any computational resources that a set of users are willing to share.
We focus on the crucial problem of completely distributed and decentralized job placement in a Grid environment. We are working on distributed algorithms for submitting jobs and efficiently matching them to available resources, and we use P2P techniques for both load balancing and resilience. Preliminary analysis and experiments show that our schemes both scale with system size and are robust against component failures and peer departures/joins. The work is a collaborative effort between computer scientists and astronomers: we are designing, implementing, and validating the algorithms using a set of applications and associated workloads from computational astronomy.
2006 — 2010
Sussman, Alan; Subrahmanian, Venkatramanan; Abed, Eyad (co-PI)
DDDAS-TMRP: Data-Driven Power System Operations @ University of Maryland College Park
The main objective of this research is to develop new algorithms and tools for the distributed collection, sharing, mining, and harnessing of data for cooperative health monitoring and vulnerability assessment of power systems in real time. Fundamental concepts and technology for achieving this objective exist today in the areas of power systems, system theory, and computer science, but harnessing them to achieve this vision will require the significant multidisciplinary concerted effort proposed here. The project aims to make fundamental contributions to the necessarily multidisciplinary systems required for data utilization in monitoring and control of interconnected engineering systems. The application this proposal focuses on is the creation of new data-driven tools for electric power system operation and control. The application systems include traditional SCADA systems as well as emerging Phasor Measurement Unit (PMU) data concentrators. The effort will focus on five specific task areas: distributed agent-based system monitoring; pattern recognition and change detection; dynamic analysis of signatures for early warning; data-driven, low-order modeling for security assessment; and automated visualization for power system health monitoring. This work will provide broader impact in several areas. First, the results in data manipulation, visualization, and processing will have application in other domains: any complex system that relies heavily on measurements for monitoring and control shares the same need to extract key information from massive amounts of data received over wide areas and time scales. Second, several of the investigators work very closely with the electric power industry through the Power Systems Engineering Research Center (PSERC), an NSF Industry/University Cooperative Research Center designed to stimulate collaboration among 13 universities and 35 companies. Results from this project will naturally influence that collaboration in the form of seminars, tutorials, and interaction with industry.
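As a toy illustration of the pattern recognition and change detection task area (all names here are hypothetical; real PMU analytics are far more involved), one can flag samples in a measurement stream that deviate sharply from a sliding window of recent history:

```python
# Toy sketch: flag samples that deviate from a sliding-window mean by more
# than k standard deviations. Assumes a single scalar stream, e.g. one
# frequency channel; names and thresholds are illustrative only.
from collections import deque
from statistics import mean, stdev

def detect_changes(stream, window=8, k=3.0):
    """Return the indices of samples flagged as abrupt changes."""
    recent = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(stream):
        if len(recent) == window:
            m, s = mean(recent), stdev(recent)
            if s > 0 and abs(x - m) > k * s:
                alarms.append(i)  # sample deviates sharply from recent history
        recent.append(x)
    return alarms
```

A production system would run such detectors per channel across distributed agents and feed flagged events into the early-warning and visualization layers described above.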
2009 — 2013
Keleher, Peter; Sussman, Alan; Richardson, Derek (co-PI)
CSR: Small: Data Staging and Parallel Applications in Robust Desktop Grids @ University of Maryland College Park
This project is an investigation into algorithms and mechanisms that will allow sophisticated staging of input and output data in desktop grids. Data staging allows the system to store data semi-permanently in the underlying peer-to-peer structure and to run multi-node jobs (applications), whether they are tightly coupled parallel applications, arbitrary workflows, or anything in between. We are building support for these application types by extending our current desktop grid infrastructure in three distinct areas. First, we are developing cluster identification techniques that can define arbitrarily sized virtual clusters through both passive and active network measurement. Second, we are incorporating virtual cluster descriptions into the underlying peer-to-peer infrastructure to allow the scheduling algorithms to map multi-node jobs to the clusters. Finally, we are incorporating data placement into the underlying infrastructure; data is placed according to use and process binding. This work will impact several research areas, including distributed and decentralized scheduling, application description, network characterization, and storage networks. In all of these areas our work will explore the tension between local autonomy and global, aggregate objectives. The algorithms and techniques will have broad applicability across a wide range of emerging distributed and collaborative applications. The work described here will also explicitly and immediately impact the quality of research conducted by our collaborators in astronomy and elsewhere. The ability to run parallel applications, and those with more complicated inter-relationships, will enable whole new classes of scientific applications to run on top of ad-hoc grid-like systems.
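A minimal sketch of the cluster identification idea (illustrative only; the names and the greedy policy are assumptions, and real implementations weigh passive and active measurements): peers are grouped into virtual clusters so that every member is within a latency bound of the peer that seeded its cluster.

```python
# Illustrative sketch (hypothetical names): build virtual clusters from
# pairwise latency measurements by greedily seeding a cluster from each
# unassigned peer and pulling in every peer within the latency bound.

def virtual_clusters(latency, threshold_ms):
    """latency: dict of frozenset({a, b}) -> round-trip time in ms.

    Returns a list of clusters, each a list of peer names.
    """
    peers = sorted({p for pair in latency for p in pair})
    clusters, assigned = [], set()
    for seed in peers:
        if seed in assigned:
            continue
        cluster = [seed]
        assigned.add(seed)
        for p in peers:
            if p not in assigned and latency[frozenset({seed, p})] <= threshold_ms:
                cluster.append(p)  # close enough to the seed to co-schedule
                assigned.add(p)
        clusters.append(cluster)
    return clusters
```

Cluster descriptions of this form are what a scheduler could then match against a multi-node job's size requirement when mapping it onto the peer-to-peer overlay.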
2012 — 2015
Sussman, Alan
Collaborative Research: CI-ADDO-NEW: Parallel and Distributed Computing Curriculum Development and Educational Resources @ University of Maryland College Park
Computing increasingly permeates daily lives, yet few appreciate the growing presence of Parallel and Distributed Computing (PDC) in common computing activities; e.g., modern laptops' processors contain multiple cores and special-purpose devices such as graphics processors (GPUs). With increasing availability of powerful PDC technology, familiarity with single-processor computers and sequential computing no longer constitutes computer literacy. Technological developments point to the need for a broad-based skill set in PDC at all levels of higher education in disciplines such as Computer Science, Computer Engineering, and the related computational disciplines. The rapid changes in technology challenge educators to decide what to teach and how to teach it. Students and employers face similar challenges in characterizing "basic" expertise in computing. The PIs are addressing these challenges via a project devoted to creating and sustaining curricular and educational infrastructure to facilitate the teaching of PDC topics in undergraduate computer-related curricula. The goal is for every graduating student to become skilled in PDC technology, hence be prepared to enter tomorrow's workforce.
The project embodies multiple synergistic activities that develop: flexible PDC curricula for a spectrum of academic programs and institutions; mechanisms that help individuals maintain currency; instructional materials for PDC-related topics; experience-based guidelines for injecting PDC into curricula. A signature activity is competitions for early adopters of PDC curricula (winners receive seed funds, equipment donations from industry) and workshops and training sessions to foster awareness and adoption of PDC curricula. Feedback from early adopters and coordination with the ACM/IEEE 2013 CS Curriculum Taskforce steers future development of both the PDC curricular guidelines and of strategies for deploying PDC material within computing curricula at a larger scale.
This project is supported by CISE, OCI, and EHR/DUE.
2013 — 2017
Sussman, Alan; Hollingsworth, Jeff
SC Conference Experiencing HPC for Undergraduates Program @ Association for Computing Machinery
The purpose of this proposal is to fund travel for undergraduate students to participate in the Experiencing HPC for Undergraduates program at the SC13, SC14, and SC15 conferences. The primary goal of the program is to lead students into research as undergraduates and then encourage them to attend graduate school in HPC topics in Computer and Computational Science. The program makes use of several existing parts of the SC technical program, with additional activities specific to the program. The unique content for participants will include an HPC boot camp session, talks by well-known researchers in the field, a panel featuring current graduate students, and a panel on academic and industry career opportunities in the HPC field.
The key idea of this program is that the best way to get people excited about HPC is to visit and participate in a major technical conference in the field. SC is an ideal venue for such a program since it combines elements of a high-quality technical meeting (papers, posters, tutorials) with a major industry trade show (commercial and research exhibits, vendor briefings, birds-of-a-feather sessions). By providing sophomore and junior undergraduates an opportunity to see what the field is about, it is believed that we can excite them about the field in time for them to apply to graduate school or decide which industry to enter when they complete their bachelor's degrees.
The goal of this program is to expand the High Performance Computing workforce by encouraging talented undergraduates to consider graduate studies and careers in the field of HPC. The program will make a special effort to recruit participants from underrepresented groups and minority-serving institutions. HPC is critical to many national goals, from scientific innovation to product development. Currently there is a shortage of new students entering the field. This program is designed to help meet those national needs.
2015 — 2017
Sussman, Alan
EAGER: Collaborative Research: Developing a Parallel and Distributed Computing Concepts Curriculum Enhancement for the Computer Science Principles Course @ University of Maryland College Park
This project is initiating design of a Parallel and Distributed Computing (PDC) curriculum enhancement for the Computer Science Principles (CS0) course that is also applicable to the corresponding high school advanced placement (AP) course. Modern computing systems routinely employ high levels of parallelism (e.g., multiple cores, graphics processors) and distributed computing (e.g., cloud services). PDC significantly affects daily lives, and its importance in solving meaningful societal problems can inspire students to take a deeper interest in applying computing in STEM fields and beyond. However, most CS0 curricula are founded predominantly on older, sequential models of computing. CS0 has multiple purposes that address many audiences, and is now seen as a vehicle for increasing interest among students, especially from underrepresented groups, in studying computing in depth or majoring in computer science. It has thus been the subject of an effort to create a new AP exam, with associated high school courses. Since the AP exam is still in a pilot phase, there is a brief window for amending the curriculum to reflect 21st century computing. Otherwise, the potentially vast impact of the AP course could quickly settle into educating students under the old sequential paradigm, acquiring an inertia that makes it difficult to change. This collaborative effort is developing PDC curriculum guidelines for CS0 and AP courses by gathering expert input and involving the necessary stakeholders to ensure that an appropriate curriculum is developed and adopted. The goal is for students to gain an understanding of how modern computing technology actually functions, rather than being taught an obsolete operational model. The effort follows a process similar to the proposers' highly successful prior effort to design a PDC guideline for undergraduate curricula, which was incorporated into the ACM standard curriculum.
The approach begins with a thorough survey of the state of practice in covering PDC topics in CS0. The survey is followed by a workshop attended by experts and stakeholders to draw upon their knowledge and experience. Then a select steering committee is formed that advises the investigators on the design of a curriculum and helps them identify means to bring it to fruition. The effort seeks the broadest possible dissemination of the results to have the greatest possible impact. Broad impact is the core of this effort, which seeks to expand awareness and appreciation of the important concepts at the foundation of PDC, to students in many different disciplines and in high school. Making these interesting and exciting topics more accessible to a wider range of students should generate deeper interest in the study of computer science and the application of concurrent computation in science, engineering, and commerce. Advancing PDC education further enables advances in science and engineering, which depend ever more on high performance computing, by providing the next generation of practitioners and researchers with the necessary skills and knowledge to routinely recognize how and where PDC concepts may be applied.
2017 — 2019
Sussman, Alan
Collaborative Research: CyberTraining: CDL: Preparing Instructors to Offer Experimental Courses in an Updated PDC Curriculum, and Broadening Participation @ University of Maryland College Park
This effort will develop a cadre of college faculty to pioneer a shift in the computer science curriculum, ensuring students are educated in the use of 21st century platforms that pervasively incorporate parallel and distributed computing (PDC). Twentieth century computers were mostly built on the principle of a single processor executing a sequence of operations. That model is tightly bound into curricula, even though the last decade has seen widespread deployment of multi-core processors, graphics processors, online servers, and the Internet of Things, all of which depend on the much different PDC mindset for problem solving and programming. Financial, technical, scientific, engineering, and medical companies, government laboratories, the Department of Defense, the intelligence community, and many other sectors are desperately seeking employees who can exploit PDC systems, because the existing workforce was heavily steeped in the older sequential model. Even new graduates continue to learn the old approach because of the considerable inertia in the educational system. Thus, by turning the tide toward incorporation of PDC into the early stages of computer science education, through teaching the teachers, this project will strategically serve the national interest, as stated by NSF's mission: to promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense. It will be a significant step toward modernizing the emerging workforce to have the computing skills needed for the United States to maintain leadership in all of these areas.
The Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER), together with the IEEE Computer Society Technical Committee on Parallel Processing, developed curriculum guidelines for parallel and distributed computing (PDC) that guided the PDC aspects of the ACM Computer Science Curricula 2013. The guidelines sought to shift courses in the first two years from the sequential model toward the PDC paradigm; in the modern world, students must see PDC as an aspect of computational problem solving from the very beginning. This project will update the curriculum guidelines, with special foci on big data, energy, distributed computing, and exemplars. It will offer grants (encouraging participation by institutions serving underrepresented groups) for faculty to work on course development and attend a workshop where they will be trained in the use of PDC and in experimental course design and evaluation. The workshops, offered each summer of the grant period, will also support attendance by industrial and government stakeholders to help build a network of relationships. Beyond the course development grants, additional instructors will be offered travel support to attend the workshops.