1981 — 1984
Gannon, Dennis
Algorithms and Architectures For Highly Parallel Solutions to Partial Differential Equations
1982 — 1983
Siegel, Howard; Gannon, Dennis; Jamieson, Leah (co-PI); Snyder, Lawrence
Workshop On Algorithmically-Specialized Computer Organizations - Purdue University, September 29 - October 1, 1982
1983 — 1984
Gannon, Dennis; Rosen, Saul; Rice, John
Studies in Parallel and Vector Computing (Computer Research)
1988 — 1991
Gannon, Dennis; Dybvig, R. Kent
Software Tools For the Design and Tuning of Parallel Supercomputer Applications
Current trends in high performance computer design indicate that parallel processing is the only way to deliver the speed needed to tackle the computational problems confronted by modern science and engineering. It is not reasonable to expect that programmers can successfully program the variety of parallel architectures being proposed without the assistance of automated and interactive parallelization tools. This project will design software tools that help the programmer develop new programs, perhaps by restructuring existing programs, that run efficiently on one or more parallel architectures. The work concerns the design and prototyping of three related tools or sets of tools: (1) a tool for parallel program performance estimation, (2) a set of tools for automatically or interactively parallelizing Lisp programs, and (3) a portable run-time environment for parallelized programs. The direct impact of this research will be that the time required to develop new programs for parallel architectures will decrease and the efficiency of programs running on parallel architectures will increase. Furthermore, because less time will be required to develop new programs it will be possible to test out new architectural ideas and new parallel processors more quickly and with less programmer effort. This may in turn lead to more rapid improvements in parallel architectures.
1988 — 1991
Temam, Roger (co-PI); Gannon, Dennis; Foias, Ciprian
Mathematical Sciences: Theoretical and Computational Aspects of Turbulence
This research will consider theoretical and computational aspects of fluid turbulence. The research group will combine experts in the areas of theory, applied and numerical analysis, scientific computation, and computer science. Recent advances in computers have now created sufficient computational power to analyze significant problems in turbulent flows based on Navier-Stokes models. However, computing power in and of itself is not sufficient to solve the true problems of interest. The parallel streams of theory and computation in this research will provide mutual inspiration and validation. Development of algorithms which make more effective use of the new generation of computers will be considered. Numerical experiments to test the limits of current supercomputers and algorithms will also be part of this work. Turbulence is a basic problem in fluid mechanics. A more complete understanding of turbulence will lead to improvement in the design of advanced aircraft and fluid machinery.
1989 — 1991
Gannon, Dennis
The Incubator Laboratory: a Multi-Purpose Undergraduate Lab for Experimental Human-Computer Interface Design
Indiana University will establish a multi-purpose lab for undergraduate training in human-computer interface design. The lab will be used as a facility for computer science students to work in teams on the design of special graphics and realtime tools for use by selected groups outside the department. The objective is to provide a controlled environment where students can learn how to apply advanced computer science ideas to building tools to solve problems that are important to other sciences and the arts.
1991 — 1996
Gannon, Dennis; Mcmullen, Donald
A Laboratory For Parallel Programming
This proposal is the first phase in establishing a Parallel Applications and Software Laboratory that will be a component of the Indiana University Center for Innovative Computer Applications (CICA). This laboratory will build upon the resources of an established center to provide a facility where a staff of trained people will work directly with scientists from a variety of disciplines to design new parallel applications and to study the effectiveness of specialized tools for parallel programming, program restructuring and performance visualization. The research mission of the laboratory will be to use the domain of real world applications to study the problem of designing scalable parallel software. Our specific research projects for this laboratory are: (1) To apply and evaluate existing parallel programming tools to real applications in the physical and social sciences. (2) To investigate the utility of higher level, object oriented abstraction mechanisms in the design of parallel applications. In particular, we will use a new programming methodology based on object oriented design that will allow us to compile programs for a wide variety of massively parallel architectures. (3) To extend the state of the art in performance analysis tools so that they are able to predict performance as machine and problem sizes are scaled. This involves the design of a new generation of algorithm visualization tools that incorporates parallel complexity models and static and dynamic analysis of parallel programs with analytical models of machine behavior.
1993 — 1995
Gannon, Dennis; Bramley, Randall
Parallel Programming Tools and Linear Systems Solver For CFD Applications
This award is for a postdoctoral associate in Experimental Science. The associate, Xiaoge Wang, will be working on tools for parallelizing CFD codes.
1993 — 1995
Shannon, Greg; Hanson, Andrew; Gannon, Dennis; Mcmullen, Donald; Shirley, Peter (co-PI)
CISE Research Instrumentation: Advanced Computer Graphics Equipment For Rendering and Visualization
A graphics computing facility including an FDDI interface, high-resolution color graphics monitor and framebuffer, stereoscopic viewing glasses, and 3D interface equipment shall be acquired to support research efforts in computer and information science and engineering. This equipment, together with AVS scientific visualization software, will be used for rendering, visualization, and animation research projects in mathematics, physics, virtual reality, scientific computation, discrete and combinatorial structures, and parallel program performance.
1993 — 1998
Norman, Michael; Reed, Daniel; Gannon, Dennis; Bertschinger, Edmund; Hernquist, Lars (co-PI); Spergel, David (co-PI); Ostriker, Jeremiah; Roskies, Ralph
The Formation of Galaxies and Large-Scale Structure
Ostriker 9318185 The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1993. The ideal proposal provided not only the opportunity to achieve significant progress on (1) a fundamental problem in science or engineering whose solution could be advanced by applying high performance computing techniques and resources, or (2) enabling technologies which facilitate those advances, but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have impact in high performance computational activities beyond the specific scientific or engineering problem area(s) or discipline being studied. This project brings together a balanced, critical-mass team of astrophysicists, computational scientists and computer scientists, as well as the technical resources of two NSF supercomputing centers, to mount a focused attack on what is presently one of the most exciting and fundamental problems in the physical sciences: "What is the origin of large-scale structure in the universe and how do galaxies form?" There is abundant observational data and sufficient knowledge of the physical laws and mathematical techniques required to understand the origin and evolution of cosmic structure. Yet, it is difficult to confront theory with observation in detail owing to the inherent complexity of these systems and the difficulty of simulating multiple length-scale interactions. However, recent developments in multiscale numerical algorithms by members of our Grand Challenge Application Group (GCAG) and their efficient implementation on scalable parallel supercomputers should enable the investigators to overcome these limitations.
The GCAG will explore the use of: (i) different numerical algorithms (grid-based, particle-based, and hybrid (grid+particle)) to solve the physical equations governing gas, radiation, dark matter and gravity in an expanding universe; (ii) different adaptation strategies (adaptive mesh refinement, hierarchical tree) to achieve efficient, multiscale solvers capable of resolving length scales of interest over 4-6 orders of magnitude; (iii) different programming models (data parallel, SPMD, object parallel) to express these complex, adaptive algorithms in an efficient and portable way on different parallel architectures (TMC's CM5, Intel PARAGON, Cray T3D, Convex MPP) available to the investigators; and (iv) new software technology (e.g. pC++) to develop compilers, optimization designs and performance analysis tools which allow the most efficient implementation of the above strategies, in order to expose the various computational and computer science issues required to construct a portable, scalable application for the teraflop systems expected later in this decade. This unique numerical laboratory will allow the investigators to compare theories of the origin of large-scale structure with the observed universe, to discard incorrect models and hopefully to determine the elements of a viable theory. This HPCC Grand Challenge will be extremely data and I/O intensive, pushing current hardware and software systems and solutions to and beyond their limits. The investigators will work with the staffs of the Pittsburgh and Illinois supercomputer centers as well as the computer vendors to develop practical and efficient strategies for the storage, visualization and analysis of massive numerical data sets of use to this and other HPCC applications. This award is being supported by the Advanced Projects Research Agency as well as NSF programs in astronomy and computer sciences.
1995 — 1998
Gannon, Dennis
CISE Postdoctoral Program: Parallel Programming Archetypes For Scientific Computation: the MetaText Project
9503966 Gannon The focus of this work is to apply the concept of program archetypes to the design of an Internet-based electronic textbook on parallel programming techniques. An archetype is an abstraction that captures the common structural, computation, and communication features of a set of algorithms. This concept, when embedded in an interactive hypertext environment, provides a new approach to teaching about parallel programming as well as a software engineering foundation for deriving new parallel programs from existing applications. The archetypes work will be expanded to include new areas of application study, including sparse matrix computation in adaptive finite element problems as well as parallel algorithms in computational cosmology. A second aspect of this project is to help integrate the work of these application scientists and of the Indiana MetaText research team that is building electronic textbook software. The goal is to build a MetaText archetype encyclopedia for parallel programming.
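The archetype idea above can be illustrated with a small sketch: the archetype fixes the common structure (iterate, exchange, update locally), while the user plugs in the problem-specific computation. Everything below (the `mesh_archetype` name, the Jacobi instantiation) is hypothetical illustration, not code from the MetaText project.

```python
def mesh_archetype(local_update, exchange, steps, grid):
    """Skeleton of a hypothetical 'mesh' archetype: the structure
    (exchange boundary data, then update locally, repeated) is fixed;
    the computation itself is supplied by the caller."""
    for _ in range(steps):
        grid = exchange(grid)       # in a parallel instantiation: ghost-cell swap
        grid = local_update(grid)   # problem-specific local computation
    return grid

# Instantiating the archetype with a 1-D Jacobi relaxation step
# (fixed boundary values, interior points averaged with neighbors).
def jacobi(grid):
    interior = [(grid[i - 1] + grid[i + 1]) / 2 for i in range(1, len(grid) - 1)]
    return [grid[0]] + interior + [grid[-1]]

# Serial toy run: exchange is the identity since there is only one process.
result = mesh_archetype(jacobi, lambda g: g, steps=50, grid=[0.0] * 5 + [1.0])
print([round(v, 2) for v in result])
```

With enough iterations the relaxation converges toward the linear ramp between the two fixed boundary values, which is the expected steady state.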
1997 — 2000
Mcrobbie, Michael; Elmore, Garland; Peebles, Christopher (co-PI); Gannon, Dennis; Mcmullen, Donald
A High Performance Network Connection For Research and Education At Indiana University and Indiana University-Purdue University Indianapolis
This award is made under the high performance connections portion of NCRI's "Connections to the Internet" announcement, NSF 96-64. It provides partial support for the installation and operation of a DS-3 ATM connection to the MCI POP in North Royalton, OH for access to the Very High Performance Backbone Network Service (vBNS). Applications include:
- Scientific Applications on Arrays of Multi-Processors
- HPC++ parallel high performance C++ compiler group
- Problem Solving Environments Group
- virtual laboratory project
Collaborating with:
- California Institute of Technology
- Drexel University
- Indiana University
- Los Alamos National Laboratory
- New Mexico State University
- University of California at Irvine
- University of Oregon
The award provides partial support of the project for two years.
1997
Gannon, Dennis; Norman, Michael; Baden, Scott (co-PI); Chrisochoides, Nikos (co-PI)
Workshop On Structured Adaptive Mesh Refinement Grid Methods, University of Minnesota, Minneapolis, MN, March 1997 (administered at the University of Illinois At Urbana-Champaign)
Workshop on Structured Adaptive Mesh Refinement Grid Methods, being held March 12-13, 1997 at the Institute for Mathematics and Its Applications at the University of Minnesota, Minneapolis, MN. Structured Adaptive Mesh Refinement (SAMR) methods are playing an increasing role in tackling difficult scientific applications, including compressible flows, cosmology, and electronic structures arising in local spin density calculations. Providing adequate software support for SAMR codes is challenging even in sequential implementations. In parallel implementations, program complexity increases qualitatively due to computation and communication requirements that are dynamic, data-dependent, and irregular. The rapid evolution of target parallel systems only further complicates design decisions. A workshop is proposed that will bring together experts in applications, numerical methods, and software development from academia, national labs, and industry. The goal of the workshop is to identify common ground in the application and implementation of SAMR, as well as issues requiring specialization. The specific objectives of the workshop are: (i) to improve the general understanding of the application of SAMR to practical problems, (ii) to identify issues critical to efficient and effective implementation on high performance computers, and (iii) to stimulate the development of a community code repository for software, including benchmarks to assist in the evaluation of software and compiler technologies. The proceedings of the workshop will be published by Springer-Verlag as part of the IMA (Institute for Mathematics and Its Applications) Book Series.
2003 — 2005
Gannon, Dennis; Palakal, Mathew (co-PI); Fox, Geoffrey (co-PI); Mcrobbie, Michael; Voss, Brian
Extensible Terascale Facility (ETF): Indiana-Purdue Grid (IP-Grid)
Indiana University, in collaboration with Purdue University, is funded by a two-year Cooperative Agreement from the Division of Advanced Computational Infrastructure in the amount of $1,517,430 to integrate the resources available at Indiana University and Purdue University with the Extensible Terascale Facility (ETF). Funding is provided to establish a 20 gigabit/second connection from Indiana University and Purdue University through Indianapolis to the Chicago Hub of the ETF.
As a result of the funding provided through this Terascale Extension Award, Indiana University and Purdue University will be contributing a significant fraction of their resources to the ETF user community including: 1) a heterogeneous group of systems delivering 1.67 teraflops on an ongoing basis with the capability to offer up to 6.26 teraflops for very demanding well-justified applications; 2) a baseline storage capacity offered as part of ETF would include 129 terabytes with peak demand contributions exceeding 400 terabytes; 3) access to a variety of visualization resources including a CAVE, an Immersadesk, Display Walls, etc. are also offered; 4) specialized instrumentation facilities including the Purdue Terrestrial Observatory, and the Service Crystallography at Advanced Photon Sources (SCrAPS); and 5) access to a number of large scientific data sets in the life sciences, earth system simulations, vegetation and soils research, multispectral remote sensing imaging, geological GIS data, economic and social scientific data, and anthropological data.
Investigators at both Indiana University and Purdue University will also contribute significantly to the broader goals of the Extensible Terascale Facility. They have a significant concentration of expertise in areas of grid computing, middleware, and portal development. Both universities, through their IP-Grid participation, also have extensive outreach activities fostering the training and education of a diverse technological workforce extending from K-12 programs through undergraduate and graduate programs.
2003 — 2009
Marru, Suresh; Plale, Beth; Gannon, Dennis
Information Technology Research (ITR): Linked Environments For Atmospheric Discovery (LEAD)
Each year across the United States, floods, tornadoes, hail, strong winds, lightning, and winter storms cause hundreds of deaths and result in annual economic losses of more than $13B. Their mitigation is stifled by rigid information technology frameworks that cannot accommodate the unique real time, on-demand, and dynamically-adaptive needs of weather research.
Linked Environments for Atmospheric Discovery (LEAD), the foundation of which is a series of interconnected virtual "Grid environments," allows scientists and students to access, prepare, predict, manage, analyze, and visualize a broad array of meteorological information independent of format and physical location. A transforming element of LEAD is the ability for analysis tools, forecast models, and data repositories to function as dynamically adaptive, on-demand systems that can change configuration rapidly and automatically in response to the evolving weather; respond immediately to user decisions based upon the weather problem at hand; and steer remote observing systems to optimize data collection and forecast/warning quality.
LEAD will allow researchers, educators, and students to run atmospheric models and other tools in much more realistic, real time settings than is now possible, hasten the transition of research results to operations, and bring the pedagogical benefits of sophisticated atmospheric science tools into high school classrooms for the first time. Its capabilities will be integrated into dozens of universities and operational research centers that collectively reach 21,000 university students, 1800 faculty, and hundreds of operational practitioners.
2003 — 2007
Pierce, Marlon; Gannon, Dennis; Fox, Geoffrey (co-PI); Plale, Beth (co-PI)
NMI: Collaborative Proposal: Middleware For Grid Portal Development
This proposal is designed to facilitate grid portal development using the portlet/container approach to building portals. This approach separates portal control and basic services from content. A central control server provides basic portal services such as authentication, access control, and user customizability. Into this framework, portal content and custom services are plugged in using software components called portlets. The container manages the organization and interaction of the portlets, and the portlets deliver specific web content (either local or remote), including Grid service interfaces.
The portlet-based design concept supports distributed, loosely coupled development and deployment: user interfaces and science interface components can be developed independently, using the standard portlet API, and then reused between portals. Services and interfaces may be installed and added to various portals in a well-defined way. The portlet model is also an ideal fit to the emerging Open Grid Services Architecture (OGSA) and its implementation specification, the Open Grid Service Infrastructure (OGSI). Because OGSI is based on the new web-service standards, each Grid service can be directly accessed by a custom portlet.
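The container/portlet separation described above can be sketched in a few lines of Python. This is a toy illustration of the design idea only (the class names are hypothetical, and the real software uses the standard Java portlet API, not Python): the container owns authentication and page assembly, while each portlet only produces a content fragment.

```python
class Portlet:
    """Minimal portlet interface: the container calls render(); the portlet
    returns a content fragment and never controls the page as a whole."""
    def render(self, user):
        raise NotImplementedError

class JobStatusPortlet(Portlet):
    """Hypothetical example portlet; a real one would query Grid middleware."""
    def __init__(self, jobs):
        self.jobs = jobs
    def render(self, user):
        mine = [j for j in self.jobs if j["owner"] == user]
        items = "".join(f"<li>{j['id']}: {j['state']}</li>" for j in mine)
        return f"<ul>{items}</ul>"

class PortalContainer:
    """The container provides authentication and layout; portlets plug in."""
    def __init__(self, users):
        self.users = users          # user -> password (toy auth store)
        self.portlets = []
    def add(self, portlet):
        self.portlets.append(portlet)
    def render_page(self, user, password):
        if self.users.get(user) != password:    # container-level access control
            return "<p>access denied</p>"
        return "\n".join(p.render(user) for p in self.portlets)

container = PortalContainer({"alice": "secret"})
container.add(JobStatusPortlet([{"id": "42", "owner": "alice", "state": "RUNNING"}]))
page = container.render_page("alice", "secret")
print(page)
```

The point of the split is visible even in the toy: a new service is added by writing one more `Portlet` subclass, with no change to the container's authentication or layout code.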
The impact of making the Grid readily approachable by the international community of researchers is potentially extremely large, as the immense resources that have been collected and organized in recent years by the underlying Grid technologies become visible as usable components of the global research community's desktop. This project will greatly simplify the use of Grid technologies and allow new services to be made readily available to individual researchers and groups by enabling the proliferation of Grid portal technology through reusability and simplification of installation. Scientists will be able to easily form flexible groups with collaborators across the world and use the Grid to share data and resources. This project provides tools for collaboration between established and ad hoc groups of users, enabling those scientists to communicate effectively with each other about the science they are doing, and providing customized views of the Grid that are tailored to meet the needs of collaborating groups.
2005 — 2011
Gannon, Dennis; Pennington, Robert (co-PI); Catlett, Charles; Stevens, Rick; Foster, Ian (co-PI); Skow, Dane
SCI: ETF Grid Infrastructure Group: Providing System Management and Integration For the TeraGrid
The Extensible Terascale Facility (ETF) is the next stage in the evolution of NSF large-scale cyberinfrastructure for enabling high-end computational research. The ETF enables researchers to address the most challenging computational problems by utilizing the integrated resources, data collections, instruments and visualization capabilities of nine resource partners. On October 1, 2004, the ETF concluded a three-year construction effort to create this distributed environment called the TeraGrid (TG) and we are now entering the production operations phase.
The TeraGrid resource partners include: the University of Chicago/Argonne National Laboratory, the San Diego Supercomputer Center at UCSD, the Texas Advanced Computing Center at UT-Austin, the National Center for Supercomputing Applications at UIUC, Indiana University, Purdue University, Oak Ridge National Laboratory, and the Pittsburgh Supercomputing Center.
A separate proposal was submitted to NSF on October 19, 2004 for the TeraGrid Grid Infrastructure Group (GIG). Under the direction of Charlie Catlett at UC/ANL, the GIG will be responsible for coordinating development activities for the TeraGrid, with subcontracts to the partner sites. The resource partners (RP) will each have independent cooperative agreements with NSF, but will work closely with the GIG to implement the vision of the TeraGrid.
This proposal outlines the Grid Infrastructure Group (GIG) plans to participate as a System Management and Integration Group within the TeraGrid team to provide the Resource Partners and the scientific community with ongoing access to this computational science facility. This proposal covers the period November 1, 2004 through October 31, 2009.
TeraGrid, a world-class networking, computing, and storage infrastructure, has been built and deployed. This initiative now faces the challenge of further engaging the science and engineering community to guide the tailoring of this generic infrastructure to better support their needs, catalyzing new discoveries and broadening the base of computational science. TeraGrid integrates some of the nation's most powerful resources to provide high-capability production services to the scientific community. In addition, NSF supports common software through the NSF Middleware Initiative (NMI) and community-specific infrastructure through its Information Technology Research (ITR) projects.
The TeraGrid Grid Infrastructure Group (GIG) will build on these foundations to broaden the community benefiting from cyberinfrastructure and to harden and deepen TeraGrid's unique capabilities. Collaborating with 16 science partners, the GIG has developed infrastructure priorities to simplify research modalities that remain difficult (or infeasible), even with today's cyberinfrastructure. For example, the TeraGrid aims to make routine the following frequently requested, but currently difficult tasks:
1. Drive complex workflows with multiple computational and data access steps across TeraGrid and smaller scale resources in other Grids in an integrated and automatic manner.
2. Harness TeraGrid resources in an on-demand mode, to provide computational decision-support for time-critical events ranging from weather to medical treatment.
3. Optimize turnaround, costs, and utilization by creating resource brokers that present a single point of access for scheduling computational and data management tasks across all TeraGrid resources, based on resource availability information.
A five-year roadmap has been presented, but recognizing that user needs continue to evolve in response to scientific opportunities, the roadmap will be reevaluated annually based on a widening set of science-partner discussions.
TeraGrid will encourage the scientific community to leverage this resource to tackle the most important computational problems in virtually every scientific discipline. The infrastructure and community-driven grid service bridges and portals, which are called science gateways, will bring increased productivity to large numbers of scientists who have not heretofore used NSF's high-performance computing resources. The problems targeted by current and planned TeraGrid users are among the most computationally intensive areas of modern science and represent a class of problems that cannot be addressed effectively by either smaller-scale grid environments or stand-alone supercomputer centers. Leveraging software and infrastructure partners, the TeraGrid will develop policy for software, security, and resource sharing necessary to underpin international cyberinfrastructure.
TeraGrid, NMI, ITR projects, and discipline-specific infrastructure projects will be integrated, thus forming a coherent cyberinfrastructure. This cyberinfrastructure will provide common software components and use the TeraGrid network as a national grid resource backplane, reaching thousands of scientists through science gateways and collaboration with other grids. Working with the software partners, the Grid Infrastructure Group intends to develop a set of policies and software that will be widely used by other grid projects, with an eye toward sustaining infrastructure beyond the end of this decade. The GIG will coordinate education, outreach, and training (EOT) initiatives across the nine TeraGrid resource provider sites to support a broad EOT program for cyberinfrastructure. We have set quantitative objectives for growing the TeraGrid user community by an order of magnitude: to 5,000 users by FY09.
To empower all TeraGrid users, the GIG has addressed heterogeneity and the policy requirements for unique national resources and high-availability production services, developing a coordinated software environment across these heterogeneous resources and a powerful verification and validation system. In a complementary approach to TeraGrid, community-oriented ITR projects such as the Grid Physics Network (GriPhyN) and Linked Environments for Atmospheric Discovery (LEAD) are addressing the scaling and software packaging capabilities necessary for aggregation of many departmental-scale resources. Similarly, computational science projects at DOE, via the SciDAC initiative, and at NIH, via the NIH Roadmap, are also important components of the cyberinfrastructure landscape. Moreover, our collaborators in Europe, Asia-Pacific, and elsewhere are building scientific grid infrastructure in projects such as the UK eScience Programme, Enabling Grids for E-Science in Europe (EGEE), and Japan's National Research Grid Initiative (NAREGI). The TeraGrid will partner with these and other grid projects, NSF's core centers program, and software providers such as the NMI GRIDS Center to catalyze an integrated NSF cyberinfrastructure program with cross-agency and international impact.
2006 — 2007
Gannon, Dennis; Plale, Beth
Collaborative Research: Science of Search: Data Search, Analytics, and Architectures Center (DSAAC)
A planning meeting will be held to determine the organization and viability of forming a new multi-university Industry/University Cooperative Research Center (I/UCRC) for Data Search, Analytics, and Architectures, with Indiana University as the lead research site and Florida International University as a research site. The Center will focus on an area of technical and economic importance. It will study the representation, management, storage, and analysis of large multi-modal data. Managing large complex data sets and analyzing them is a problem common to many industries. The proposed center should benefit significantly from the resources available at the two institutions, including unique and extensive facilities funded by NSF on Emerging Techniques for Advanced Information Processing at Florida International University.
2007 — 2010
Plale, Beth; Gannon, Dennis
CSR-CSI: An Adaptive Programming Framework For Data and Event Driven Computation
Data-driven applications in computational science react in real time to their environment in a complex detect-analyze-respond cycle. These computations can often be viewed as complex data flow graphs having components that are both data- and computationally-intensive, and requiring access to live data feeds and to large-scale computational resources. A user may cycle through multiple graphs accessing data from sensors, instruments, databases, and large collections of files in the process of discovering new knowledge. This research investigates a programming model and framework for knowledge discovery in data-driven applications. Users program the system by declarative specification of detect-analyze-respond behavior. Underlying the programming model is a continuous rule-based events processor and workflow orchestration engine organized as Web services. The research formalizes an abstract model of interaction and will map the higher-level conceptualization to the events processing and workflow runtime components. It demonstrates that the model supports a unique adaptive framework where knowledge gained from the computational and data analysis can be fed back to the data event streams. The approach is validated experimentally through quantifiable metrics and by its application to two model problems: severe storm prediction, where a weather forecast is triggered based on data mining results from mining radar or model data, and adaptive resource management, where hardware and software resources and environment data streams are monitored for on-the-fly resource requirements prediction.
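The detect-analyze-respond cycle described above can be sketched as a small rule-based event processor. This is a minimal illustration under stated assumptions only: the names (`Rule`, `EventProcessor`) and the radar-reflectivity threshold are hypothetical, and the actual framework is a Web-service-based events processor and workflow engine, not an in-process loop.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """A declarative detect -> analyze -> respond rule: when detect() holds
    for an event, analyze() runs and its result drives respond()."""
    detect: Callable[[Dict], bool]
    analyze: Callable[[Dict], float]
    respond: Callable[[Dict, float], None]

class EventProcessor:
    """Continuously applies registered rules to each incoming event."""
    def __init__(self):
        self.rules: List[Rule] = []
        self.log: List[str] = []
    def register(self, rule: Rule) -> None:
        self.rules.append(rule)
    def feed(self, event: Dict) -> None:
        for rule in self.rules:
            if rule.detect(event):
                score = rule.analyze(event)
                rule.respond(event, score)

proc = EventProcessor()
# Toy analogue of the storm-prediction example: high radar reflectivity
# triggers a (stand-in for a) forecast workflow.
proc.register(Rule(
    detect=lambda e: e["type"] == "radar" and e["reflectivity"] > 50,
    analyze=lambda e: e["reflectivity"] / 75.0,   # toy severity score
    respond=lambda e, s: proc.log.append(f"trigger forecast (severity {s:.2f})"),
))
proc.feed({"type": "radar", "reflectivity": 40})  # below threshold: no action
proc.feed({"type": "radar", "reflectivity": 60})  # fires the rule
print(proc.log)
```

In the real system, `respond` would launch a workflow (e.g. a forecast run) rather than append to a log, and the adaptive feedback loop would let analysis results reconfigure the event streams themselves.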
|
1 |
2007 — 2012 |
Wilkins-Diehr, Nancy Pierce, Marlon Marru, Suresh Gannon, Dennis |
Sdci Nmi Improvement: Open Grid Computing Environments Software For Science Gateways
National Science Foundation NSF Software Development for Cyberinfrastructure (SDCI) Program Office of Cyberinfrastructure
Proposal Number: 0721656; Principal Investigator: Marlon Pierce; Institution: Indiana University; Proposal Title: SDCI NMI Improvement: Open Grid Computing Environments Software for Science Gateways
Abstract
The Open Grid Computing Environments (OGCE) project will develop, integrate, package, and distribute software to TeraGrid science gateways and related science portal efforts. Science gateways support scientific research communities and their outreach efforts. They consist of Web-based user and application programming interfaces that provide high-level access to Cyberinfrastructure middleware and translate middleware capabilities into scientific use cases. The OGCE works to provide these gateway development groups with the reusable software they need. OGCE portal software and component libraries currently provide client support for numerous Grid middleware tools, including the Globus Toolkit, Condor, MyProxy, and GPIR information services. The portal software is packaged and easily installable, allowing portal developers to build a functioning Grid portal in a single step. The current funding award will extend this work to provide several additional features as well as maintenance of the current software suite. Previous versions of the OGCE releases concentrated on client components for remote job and file management through a common, high-level Grid programming interface. In the project extension, the OGCE will build on this to address the information and metadata management needs of both scientists and gateway providers with the following tools: personal and resource metadata catalogs; enhanced system information tools, including access to batch queue prediction services; and gateway auditing, accounting, and logging libraries. The OGCE will also provide improved support for adding science applications to gateways. Efforts here will include computational experiment builder interfaces to workflow engines for end users, application factory services for application providers, and portlet XML tag libraries for portal developers. Finally, the OGCE identifies "Web 2.0" approaches as potentially revolutionary to the gateway building process and will lead these developments within the gateway community.
|
1 |
2008 — 2010 |
Stewart, Craig (co-PI) [⬀] Brown, Geoffrey (co-PI) [⬀] Plale, Beth (co-PI) [⬀] Gannon, Dennis Wheeler, Bradley [⬀] |
Cyberinfrastructure Software Sustainability and Reusability Workshop
National Science Foundation Office of Cyberinfrastructure
Proposal Number: 0829462; Principal Investigator: Bradley Wheeler; Institution: Indiana University; Title: "Cyberinfrastructure Software Sustainability and Reusability Workshop"
Project Summary
This workshop proposal targets an examination of the sustainability and reusability of software developed, supported, and used by the NSF community. Specifically, workshop goals include: examination of current software evaluation and adoption models used by labs and virtual organizations; examination of long-term sustainability models; and mechanisms for supporting sustainability via funding organizations, open source, and commercialization. White papers on these and other topics will be solicited from the community in advance of the workshop. Results from the workshop, along with recommendations to NSF, will be documented. Intellectual merit is identified as the exploration of this topic and a resulting deeper understanding of how we as a community of scientists and educators sustain community-sourced software over the long term. Broader impact is multi-dimensional: in addition to the potentially transformative nature of resulting actions and strategies by both the community and funding agencies, the proposal makes funds explicitly available for HBCU and MSI participation.
|
1 |