1983 — 1986 |
Schwan, Karsten |
N/A (no activity code was retrieved) |
Research Initiation: Issos - Operating Software For Industrial Systems |
0.907 |
1995 — 1998 |
Schwan, Karsten Hudson, Scott Fujimoto, Richard [⬀] Ahamad, Mustaque (co-PI) [⬀] Limb, John |
Cise Research Infrastructure: Distributed Laboratories @ Georgia Tech Research Corporation
9501637 Fujimoto This award provides support for infrastructure at Georgia Institute of Technology in the form of three distributed laboratories where individuals at geographically distinct locations can interact with each other on-line using powerful, distributed computational tools, to solve shared problems. The infrastructure will include a multi-granular compute server and ATM switch, a video server with high-performance multi-media workstations, and network equipment. The research topics to be explored include interactive steering, distributed simulations, collaborative systems; large-scale community, ATM and wireless networks.
|
0.93 |
1995 — 1996 |
Schwan, Karsten Fujimoto, Richard (co-PI) [⬀] Ahamad, Mustaque (co-PI) [⬀] Ammar, Mostafa (co-PI) [⬀] Hudson, Scott |
Cise Research Instrumentation: Interactive Distributed Computing @ Georgia Tech Research Corporation
9422033 Schwan This award provides support for the acquisition of a computational cluster of 10 Sun SPARCstations, and 2 user-interface capable SGI Indy workstations. The research projects to be supported include interactive scientific programs, distributed collaboration, interactive simulations, and efficient state sharing in interactive systems. A goal of the research is to develop systems and user interface technologies that will allow the realistic solution of problems using computing, networking and user interface capabilities. Primary research projects include on-line interaction with scientific simulations running on multiple, networked parallel machines; collaboration support systems that will allow users to collaborate over long distances; interactive simulations that will allow end users to run and view system simulations in conjunction with actual system operation; and development of operating system technologies that address the efficient sharing of state on single parallel machines and across multiple networked workstations or supercomputers. ***
|
0.93 |
1995 — 1996 |
Schwan, Karsten Fujimoto, Richard (co-PI) [⬀] Ahamad, Mustaque (co-PI) [⬀] Yalamanchili, Sudhakar [⬀] Menon, Suresh (co-PI) [⬀] |
Reg: High Performance Computing For Engineering Applications @ Georgia Tech Research Corporation
9411846 Yalamanchili The School of Electrical and Computer Engineering, the School of Aerospace Engineering, the College of Computing and the School of Mechanical Engineering at the Georgia Institute of Technology will purchase a cluster of multiprocessor and uniprocessor graphics workstations interconnected by state of the art high speed communications networks to existing parallel supercomputers. This testbed will provide a feasible hardware basis to pursue the development and application of new computational paradigms for large scale engineering applications. In particular, we foresee the ability to harness such high speed networks of specialized graphics and parallel machines such that components of applications can utilize distinct parallel machines while results from these distinct platforms can be visualized, integrated and manipulated in real-time on workstations. This integration opens new avenues for solving many large problems. The research projects that will be the primary users of the equipment are drawn from the development of large scale engineering applications, and the technology to realize an integrated, interactive environment for engineering applications.
|
0.93 |
1998 — 2001 |
Schwan, Karsten Das, Subhendu (co-PI) [⬀] Eisenhauer, Greg |
Program Steering: From Interactive Programs to Distributed Laboratories @ Georgia Tech Research Corporation
It is important for scientific investigators, engineers, and developers to be able to interact with their high performance applications running on parallel or distributed machines. This project will explore technologies to enable the flexible and interactive use of computational instruments, including on-line monitoring, analysis, steering, and experimentation. In addition it will permit users to exploit the new functionalities enabled by future integrated computing, networking, and user interface platforms, which include 'looking over a colleague's shoulder' from a remote location, sharing the outputs of computational instruments among multiple users or instruments, working concurrently on the same problem using high quality visualization techniques or immersive environments, or jointly operating a single shared computational tool, or even temporarily working on alternative or complementary tasks which are then reintegrated with ongoing experiments or processes.
|
0.93 |
1999 — 2005 |
Schwan, Karsten Ahamad, Mustaque (co-PI) [⬀] Atkeson, Christopher Ramachandran, Umakishore [⬀] Chervenak, Ann (co-PI) [⬀] |
Cise Research Infrastructure: Advanced Media-Oriented Systems Research: Ubiquitous Capture, Interpretation, and Access @ Georgia Tech Research Corporation
EIA-9972872 Ramachandran, Umakishore Ahamad, Mustaque Atkeson, Christopher G. Chervenak, Ann Schwan, Karsten Georgia Institute of Technology
CISE Research Infrastructure: Advanced Media-Oriented Systems Research: Ubiquitous Capture, Interpretation and Access
Georgia Tech researchers will perform research on systems and application level issues arising from two applications, a virtual classroom and perceptual information spaces. The PIs will perform research addressing capture, interpretation, access and delivery of multiple media streams. At the application level, research will address deducing users' expressions (e.g., focus of interest) and emotions; scaling issues to the campus wide level will be explored for the virtual classroom, and the equipment will support research on perceptual processing. Systems level research will examine QoS methods, shared state, metadata, and storage architecture for multimedia lectures and other objects. Middleware and runtime systems will be developed for inter-cluster and client server computing as well as to handle media streams in a heterogeneous environment.
|
0.93 |
1999 — 2003 |
Schwan, Karsten Yalamanchili, Sudhakar [⬀] Schimmel, David Rosu, Marcel-Catalin Hughes, Esther |
Computing in Networks: Active System Area Networks @ Georgia Tech Research Corporation
The confluence of computing and communication has produced an explosion of network-based applications. Numerous examples can be drawn from web-based applications, interactive simulations, gaming, visualization, and collaborative environments. Current approaches to the development of network-based applications are still based on a CPU-centric model using node architectures optimized for uniprocessor or small-scale multiprocessor applications. This can lead to significant inefficiencies for distributed applications.
This project explores an architecture for the support of emerging network-based applications whereby selected application-level computations are executed within network interfaces (NIs). The focus is on networks of workstations (NOW) that have emerged as a viable high performance computational vehicle. Within this context the project will support applications with substantial computation and data manipulation requirements. Such high performance programs are drawn from three different application domains: scientific simulation, computer vision and image processing, and computer graphics. The unique aspect of the approach to supporting these classes of programs is the use of extensible network interfaces, where extension concerns programs executed both on the low cost microprocessors resident on NIs and on additional, programmable and NI-associated hardware in the form of field programmable gate arrays (FPGAs). Specifically, the project will demonstrate (1) the use of FPGAs to implement data-intensive computations and (2) the use of the NIs' microprocessors to handle associated meta-information and perform computationally non-intensive operations best performed "close" to the network. Extensibility for NIs will be attained by development of a flexible and extensible virtual communication machine. The project refers to such enhanced networks as Active System Area Networks (ASANs).
The key research issues are (1) to understand and demonstrate an ASAN's ability to perform selected computations and services more efficiently than host CPUs and to explore the effects and trade-offs of off-loading certain functionality from host CPUs to the ASAN, (2) to develop the programming model that enables the efficient and application-specific use of the ASAN's computational resources, and (3) to create an ASAN software architecture that is low overhead and easily extensible.
The approach to the proposed research is experimental. It includes construction of a testbed, implementation of selected NI extensions for use by the three classes of parallel/distributed applications investigated in this work, and evaluation of the resulting system and application performance. Successful demonstration of the approach will have a major impact on the way future multimedia network applications are designed, optimized, and supported.
|
0.93 |
2002 — 2006 |
Schwan, Karsten Pu, Calton (co-PI) [⬀] Eisenhauer, Greg Dovrolis, Constantinos Wolf, Matthew |
Sti: Netreact Services: Middleware Technologies to Enable Real-Time Collaboration Across the Internet @ Georgia Tech Research Corporation
Modern science is a distributed wide-area enterprise, requiring real-time coordination of scientific instruments and remote sensors, computational resources, large data repositories and teams of researchers in different locations, even on different continents. The high-bandwidth networking demands of any such real-time and data-intensive collaborations tax the largest network pipes, and when conducted across the Internet, their Gigabit/sec data streams must utilize heterogeneous platforms with network link speeds that vary from 10Gbps to 10Mbps and with end user machines that range from desktop PCs to large supercomputers. However, effective scientific collaboration demands that team members be able to interact with each other and with critical remote resources in real-time, despite platform heterogeneity and despite dynamic variations in the availability of platform resources.
The key idea of the NetReact middleware services is to utilize the substantial server and processing resources associated with distributed collaborations to improve end user performance and compensate for potential deficiencies in network capabilities. NetReact provides rich functionality for dynamically reconfiguring both middleware and applications in response to network and platform monitoring, and to coordinate (1) middleware and application-level reactions to changes in network state with (2) the possibly simultaneous actions taken at the transport level. NetReact's monitoring (NRM) services dynamically determine available network bandwidth and communication latencies. NetReact uses such information to adjust middleware and application actions, to tune the underlying network transport, and even to dynamically select suitable network paths for ongoing middleware-enabled scientific collaborations. By embedding NetReact services into the grid computing middleware commonly used for scientific collaboration, the functionality of such NetReact-enriched middleware is improved substantially, enabling end users to collaborate in real-time even when they do not have access to high end machines or high capacity network links, thereby supporting scientific applications that currently remain out of reach for existing networking and grid computing technologies.
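The adaptation loop described above can be illustrated with a small sketch. This is not the project's actual API; the class and function names are assumptions, standing in for NetReact's NRM monitoring service and the middleware's quality-adjustment step.

```python
# Hypothetical sketch of NetReact-style adaptation: a monitoring service
# records bandwidth samples, and the middleware picks a stream encoding
# to match the currently available capacity.

def pick_encoding(bandwidth_mbps):
    """Map measured bandwidth to a data-stream quality level."""
    if bandwidth_mbps >= 1000:       # ~Gbps links: send full-resolution data
        return "raw"
    if bandwidth_mbps >= 100:
        return "compressed"
    return "downsampled"             # slow links: reduce fidelity, keep real-time

class NetworkMonitor:
    """Stand-in for the NRM service: collects bandwidth measurements."""
    def __init__(self):
        self.samples = []

    def report(self, mbps):
        self.samples.append(mbps)

    def available_bandwidth(self):
        # Use the most recent sample; a real monitor would smooth or average.
        return self.samples[-1]

monitor = NetworkMonitor()
monitor.report(850)
print(pick_encoding(monitor.available_bandwidth()))  # compressed
```

A real deployment would re-run this decision continuously as conditions change, which is the "dynamic reconfiguration" the abstract describes.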
|
0.93 |
2002 |
Pu, Calton (co-PI) [⬀] Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] Blough, Douglas |
Center For Experimental Research in Computer Systems @ Georgia Tech Research Corporation
This planning grant award is the first step toward establishing the Center for Experimental Research in Computer Systems (CERCS), which seeks to address complex communication/computation systems by bringing together researchers with knowledge of the key technologies underlying these systems and thereby create research teams that can address future systems and applications in a fashion that is integrated across multiple technologies and heterogeneous system components. The mission of CERCS is to develop new hardware and software technologies, to create technological advances, and to take advantage of these advances to remove technological barriers faced by complex, integrated systems.
The CERCS approach is experimental and fosters research in which new technologies are evaluated experimentally, with large-scale applications and on systems of substantial size or complexity. The aim is to understand the challenging application requirements that cause novel system-level research, where insights at the system level motivate changes in how certain applications are implemented, and where new system technologies enable new classes of applications. The Center will work with external partners to comprehend their needs and requirements, and to experiment with alternative solutions and approaches.
|
0.93 |
2002 — 2004 |
Schwan, Karsten Pu, Calton (co-PI) [⬀] Pande, Santosh Eisenhauer, Greg |
Infofabric: Adaptive Services in Distributed Embedded Systems @ Georgia Tech Research Corporation
Schwan, Karsten CCR-0208953 "InfoFabric: Adaptive Services in Distributed Embedded Systems" This research is developing "Infofabric" services to manage multiple shared data streams and enable high performance sensing and communication in dynamically reconfigurable sensor nets. For example, in emergency response applications, the computing infrastructures employed are rapidly assembled conglomerates of portable and handheld end user devices. Multiple communication modes are used to interact across collaborating peers and also with local and remote command centers and/or information repositories. A key problem is that such devices typically cannot access, display, and manipulate information with the quality needed by end users. An example is an observer `in the field' trying to match visible cloud formations with the outputs produced by remotely running weather simulations, the latter using real-time radar data. Unless the handheld device can visualize data with high quality and in real-time, field observations cannot be used to refine or steer the remote weather prediction programs. Similarly, search and rescue operations can be aided by rich (multi-media), real-time communications between team members and by high fidelity graphical displays of terrain data available from remote servers. The basic technical problems to be solved for the resulting complex, distributed and embedded applications include (1) the provision of high levels of flexibility in how, where, and when necessary processing and communication actions are performed on the underlying distributed platforms, and (2) the ability to continuously meet end user needs despite runtime variations in service locations, platform capabilities (e.g., remaining power on end devices), and user requirements. The `InfoFabric' approach supports data-intensive, embedded applications with lightweight publish/subscribe middleware. 
An end user dynamically subscribes to information channels when needed, and the InfoFabric applies the processing specified by the user. Processing and communication actions are dynamically mapped to the underlying distributed devices and machines. To attain high performance and meet embedded systems requirements such as power, new compiler and runtime binary code generation methods dynamically generate and install code on the InfoFabric's platform. Code is specialized to match current user needs to available platform resources. To meet dynamic needs and deal with runtime changes in resource availability, resource management mechanisms associated with middleware carry the performance, usage, and needs information required for runtime adaptation of processing and communication actions. Because the InfoFabric middleware has detailed knowledge of the ways in which information should be transported and manipulated before delivering it to end users, it can employ techniques like automatic redundancy and replication, and service (re)location and (re)partitioning to match changing user needs and platform availabilities.
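The subscribe-with-processing model above can be sketched in a few lines. This is an illustrative toy, not the InfoFabric implementation; the `Channel` class and the idea of attaching a per-subscriber transform are assumptions made for exposition.

```python
# Minimal publish/subscribe sketch in the spirit of the InfoFabric:
# subscribers attach user-specified processing that the fabric applies
# to channel data before delivery.

class Channel:
    def __init__(self, name):
        self.name = name
        self.subscribers = []    # list of (callback, transform) pairs

    def subscribe(self, callback, transform=None):
        # `transform` models the processing a resource-limited device
        # asks the fabric to perform on its behalf, e.g. downsampling.
        self.subscribers.append((callback, transform))

    def publish(self, data):
        for callback, transform in self.subscribers:
            callback(transform(data) if transform else data)

radar = Channel("radar-feed")
received = []
# A handheld subscriber asks the fabric to thin the stream before delivery.
radar.subscribe(received.append, transform=lambda pts: pts[::2])
radar.publish([1, 2, 3, 4, 5, 6])
print(received)  # [[1, 3, 5]]
```

In the real system the transform would execute on whichever fabric node the runtime maps it to, rather than in the publisher's process.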
|
0.93 |
2003 — 2007 |
Schwan, Karsten Pu, Calton (co-PI) [⬀] Eisenhauer, Greg Wolf, Matthew |
Software: Adaptive-Xml: Tools For Collaborative Network Computing @ Georgia Tech Research Corporation
Modern science is an increasingly distributed enterprise, particularly when addressing challenging scientific problems with multidisciplinary research teams, where team members are routinely assembled from multiple universities, the national labs, and industry participants. A problem pervasive to such distributed endeavors is the need to efficiently share scientific data across multiple teams, sites, applications, and machines.
This project's focus is on the ability to represent such data so that it is easily shared across research teams that each use their own, well-defined and domain-specific data representations. S(cientific)-XML is a suite of tools that translate user-friendly XML-based meta-information about shared data to/from the application-specific, efficient, binary-based data structure descriptions used by high performance scientific codes. With S-XML, end users can conveniently express and view their structured data, but all data manipulation and exchanges are performed using efficient binary data representations. Complementing these tools is the XML-ECho adaptive XML-conscious peer-to-peer communication infrastructure, which implements the wide-area exchange of the large-scale binary data used in scientific collaborations.
This middleware uses runtime adaptation to dynamically adjust its data transport and manipulation actions to meet application-level quality of service needs. Specifically, via XML-based descriptions of data structure, end users can dynamically express and alter interest expressions that state what data is most important to them and should, therefore, be transported preferentially.
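The core S-XML translation, from a user-friendly XML description of structured data to an efficient binary wire encoding, can be sketched as follows. The XML schema and type names here are illustrative assumptions, not the project's actual format.

```python
# Hedged sketch of the S-XML idea: user-facing XML describes the layout
# of shared data, while the actual exchange uses packed binary records.
import struct
import xml.etree.ElementTree as ET

FORMATS = {"int32": "i", "float64": "d"}   # assumed type vocabulary

def layout_from_xml(xml_text):
    """Derive field names and a struct format string from an XML description."""
    root = ET.fromstring(xml_text)
    names = [f.get("name") for f in root]
    fmt = "<" + "".join(FORMATS[f.get("type")] for f in root)
    return names, fmt

desc = """<record>
  <field name="step" type="int32"/>
  <field name="energy" type="float64"/>
</record>"""

names, fmt = layout_from_xml(desc)
packed = struct.pack(fmt, 42, 1.5)          # efficient binary form on the wire
print(dict(zip(names, struct.unpack(fmt, packed))))  # {'step': 42, 'energy': 1.5}
```

The user only ever sees and edits the XML; all bulk data moves in the packed form, which is the efficiency argument the abstract makes.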
The resulting Adaptive-XML tools and data exchange middleware will enable effective collaboration in scientific endeavors that remain infeasible with today's technologies, a concrete example being the Terascale Supernova Initiative now being undertaken by a large research team distributed across U.S. universities and national labs. Project outcomes will also benefit U.S. corporations, as evident from our discussions with companies like Schlumberger and also from the deployment of some of our technologies in industry testbeds (e.g., at Delta Air Lines).
|
0.93 |
2003 — 2012 |
Pu, Calton (co-PI) [⬀] Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] Blough, Douglas |
Industry/University Cooperative Research Center For Experimental Research in Computer Systems (I/Ucrc Ercs) @ Georgia Tech Research Corporation
An Industry/University Cooperative Research Center (I/UCRC) will be established at the Georgia Institute of Technology, called the I/UCRC for Experimental Research in Computer Systems (ERCS).
The I/UCRC is committed to fostering interdisciplinary research and establishing a culture of experimental research, reaching out to local and national industry to encourage participation and contribute to the regional and national economies through the availability of intellectual talent and emerging technologies. Operationally, ERCS will create, develop, and evaluate hardware/software systems, in the context of realistic end user applications, for platforms ranging from embedded/real-time devices, to parallel/cluster systems, to the Internet, and facilitate the construction and management of such systems by creating new principles, algorithms and techniques, software tools and mechanisms.
|
0.93 |
2003 — 2008 |
Schwan, Karsten Pu, Calton (co-PI) [⬀] Pande, Santosh Eisenhauer, Greg Balch, Tucker (co-PI) [⬀] |
Itr: Collaborative Research: Morphable Software Services: Self-Modifying Programs For Distributed Embedded Systems @ Georgia Tech Research Corporation
Future embedded system applications and infrastructures will be increasingly dynamic. Moreover, the devices used in such infrastructures will vary widely, from sensors and embedded devices, to handhelds, to high end server systems, all of which interact continuously in order to collect, collate, and deliver information from where it is produced to where it is needed. This project addresses the dynamic nature of distributed embedded systems, by developing new information technologies that integrate across multiple areas of Computer Science, including computer architecture, operating and real-time systems, compilers, and middleware. The key intent is to create morphable embedded services, that is, services that continuously self-modify and adapt in order to meet dynamic application needs and environmental/resource constraints, including power budgets, end-to-end quality of service (QoS) guarantees (e.g., timing constraints), and security constraints.
There are many useful examples of morphable services. In limited forms, they are already present in today's cellphone platforms, for instance, where end users dynamically download new rings or acquire new games (possibly displacing existing ones), etc. Service morphing, however, goes much beyond such configuration capabilities. Imagine a cellphone, for example, which dynamically morphs into a portable wand, using its sensing (e.g., its built-in camera) and communication abilities (e.g., by interacting with other nearby phones) to guide its owner out of a disaster site. Then, in contrast to such functionality-centric morphing, consider this cellphone drawing on the power of nearby server systems (or other phones) to provide suitable levels of service to its user, despite the fact that its power is running low. This can be done, for instance, by dynamically offloading services onto other platforms, by (re)partitioning services across the device and cooperating server systems, and/or by deploying more power-efficient and perhaps, less graphics-capable service code to the phone itself. Another interesting aspect of our work is its ability to go beyond performance and power as the only critical elements of future systems. With our approach, for instance, compiler methods and middleware can be used to enhance information security rather than system performance. This can be done by scattering critical application state to reduce its exposure to external intrusions. As a result, information security can become an integral element of the QoS needs of applications.
A concrete example of security-focused service morphing is to `scatter' critical and vulnerable values across multiple cooperating distributed platforms and to `assemble' them only to the extent needed by the application under compiler control. Moreover, when the last use of the `assembled' value is complete, the `assembled' value is destroyed. Each use of an `assembled' value is verified by compiler-generated code that authenticates it. For example, consider the use of last four digits of the social security number used for authenticating a transaction. The entire social security number will never be stored in a memory location as a value (that could be hacked into). The value will be scattered in a form known only to the compiler which it will then use to `assemble' the value just in time only for the extent of the use.
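The scatter/assemble scheme in the paragraph above can be illustrated with XOR secret sharing, one standard way to split a value so that no single platform holds it. The abstract does not specify the project's actual compiler-generated scheme, so this sketch and its function names are assumptions.

```python
# Illustrative `scatter'/`assemble' using XOR secret sharing: the value
# exists in memory only while assembled, and every share is required
# to reconstruct it.
import secrets

def scatter(value, n_shares):
    """Split a non-negative integer into n shares across platforms."""
    shares = [secrets.randbits(32) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s                # XOR folds each random share into the last one
    return shares + [last]

def assemble(shares):
    """Recombine shares just in time; the caller should discard the
    result immediately after use, as the abstract describes."""
    value = 0
    for s in shares:
        value ^= s
    return value

ssn_last4 = 1234
shares = scatter(ssn_last4, 3)   # each share stored on a separate platform
print(assemble(shares))          # 1234
```

Any subset of fewer than all shares is statistically independent of the secret, which is what makes scattering reduce exposure to intrusion on any one platform.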
The different service morphing techniques to be developed in this research include dynamic component (re)deployment, (re)specialization, and (re)partitioning. Such actions are supported by system-level mechanisms that efficiently carry the performance, usage, and requirements information needed for runtime component morphing, principally addressing components' processing and communication actions. The intent is for self-modifying components to be able to acquire runtime information about current resource availabilities and Quality of Service demands. While developing these software technologies, we will concurrently explore new application-specific techniques and methods that take advantage of morphable software services, targeting remote sensing and autonomous robotics applications. Finally, while most of our work will utilize current embedded systems platforms, using XScale boards, we will also consider how to further improve hardware platforms to better enable morphable services. Such work essentially broadens the optimization space in which morphable services are able to operate.
Our technical approach integrates across multiple CS disciplines, by exploiting, for instance, detailed knowledge about computer architecture (e.g., power usage related to memory footprint) to develop compiler techniques that dynamically generate code with functionality and the performance/power profiles more suitable to current application needs. Compiler-level and architectural knowledge is maintained as meta-information at the middleware level, and lightweight middleware dynamically deploys newly morphed code to target platforms. Kernel-level mechanisms collect and distribute the resource information needed for such actions. They also help integrate the application-level with the system-level actions being taken, the latter being particularly important when satisfying certain end-to-end constraints (e.g., timing or power constraints) desired by distributed embedded applications.
A key goal of this research is to demonstrate the importance and utility of morphable services for critical applications. This implies the need to jointly develop application techniques and ideas with morphable service technologies. By grounding our research in a challenging application domain, autonomous robots used in emergency management situations, our technological solutions must ``close the loop'', integrating system-level information about resource constraints, with middleware-level options to morph services, with application-level opportunities for making tradeoffs and choices about how to best meet current requirements. The result is systems in which changed application needs result in new code modules deployed and specialized to meet these needs, jointly with changes in underlying system configurations and properties. In other words, applications and systems are continuously `morphed' to best match end user requirements.
A concrete example of extending application-level research to exploit service morphing is to extend mission-centric notions of `value' in autonomous robots. In robotics, `value' captures an individual robot's contribution to a mission undertaken by a robot team, and `value' helps a robot determine its next actions. Our new research will extend these solutions: instead of considering only movement alternatives, the robots will also consider the `values' of other activities like communication, computation and observation. This approach depends significantly on other components of this proposal, namely QoS management and cooperative service morphing, so that the communication links available to a robot team and the CPU power needed for interpreting distributed sensor inputs can be deployed appropriately.
|
0.93 |
2005 — 2007 |
Schwan, Karsten |
Sger: Service Paths -- Optimizing End-to-End Behaviors in Distributed Service Architectures @ Georgia Tech Research Corporation
Distributed applications and the hardware/software platforms on which they run are becoming increasingly complex. As a result, it is also becoming increasingly difficult to understand and then manage application behavior, particularly for applications that are critical to the ability of organizations to provide services for which they have contractual obligations.
Georgia Tech's `Service Paths' project is developing technologies for (1) dynamically discovering the critical `paths' applications follow through distributed sets of services, and (2) managing such paths in order to improve an application's ability to meet dynamic service level agreements. Service path technologies will substantially improve systems' abilities to provide end-to-end guarantees and enhance the underlying systems' capabilities to provide such guarantees. Concrete outcomes include new system-level methods and abstractions to better understand current application behavior, new middleware abstractions suitable both for constructing and managing large-scale software overlays across many machines, and experimental results attained with virtualized platforms like PlanetLab and with novel hardware support for runtime platform virtualization.
Broader significance: the costs of developing and changing modern IT infrastructures are now dominated by personnel rather than hardware. Technology providers have reacted to this fact by making it increasingly easy to develop complex distributed applications. Without also understanding and then being able to manage and control such applications, cost savings attained at development time cannot be translated to the continuing cost savings required to make U.S. industry competitive with low-cost, international providers.
|
0.93 |
2006 — 2009 |
Schwan, Karsten Fujimoto, Richard [⬀] Leonard, John Guensler, Randall (co-PI) [⬀] Hunter, Michael |
Dddas-Tmrp: Dynamic, Simulation-Based Management of Surface Transportation Systems @ Georgia Tech Research Corporation
This project will address research challenges concerning the effective realization of dynamic data driven application simulations (DDDAS) used for the management of surface transportation systems. Middleware research will address challenges associated with dynamically adapting the computing and communication infrastructure to disseminate and propagate data and computations to meet quality of service requirements. Research in data cleaning and imputation will use statistical techniques to address issues such as gaps and errors that will arise in real-time data feeds. Modeling research will focus on agent-based, dynamic data driven simulation models of transportation networks that adapt to real-time data feeds to predict future states of the system, and modeling of plume cloud dispersions that when integrated with transportation simulations, allow evaluation of the DDDAS approach in important emergency scenarios. Finally, a paradigm termed ad hoc distributed simulations will be explored that features dynamic collections of autonomous simulations interacting with each other and real-time data in a continuously running real-time distributed simulation environment. Experimental research will be conducted by deploying a prototype system on instrumented vehicles, wireless enabled roadside computers, and high performance compute servers.
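The data cleaning and imputation step mentioned above can be made concrete with a small sketch. Linear interpolation is just one simple statistical technique standing in for the project's (unspecified) methods; the function name and the example data are illustrative assumptions.

```python
# Sketch of gap imputation for a real-time sensor feed: fill missing
# readings (None) by linear interpolation between the nearest valid
# neighbors, leaving edge gaps untouched.

def impute_gaps(series):
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None or right is None:
                continue                     # cannot interpolate at the edges
            frac = (i - left) / (right - left)
            out[i] = out[left] + frac * (out[right] - out[left])
    return out

# Speed readings (mph) from a roadside sensor with one dropped sample:
print(impute_gaps([60, None, 50]))   # [60, 55.0, 50]
```

A production DDDAS pipeline would also flag suspect values and quantify the uncertainty introduced by imputed points before feeding them to the simulation.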
|
0.93 |
2006 — 2010 |
Schwan, Karsten Eisenhauer, Greg Wolf, Matthew Gavrilovska, Ada (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Peta-Scale Storage For High End Computing @ Georgia Tech Research Corporation
The difficulties faced by scientists and engineers in attaining high performance I/O for data-intensive applications are exacerbated by the low level of abstraction presented by current I/O systems. Proposed research will create higher level I/O abstractions for developers. The proposed SSDS framework models I/O as I/O Graphs that `connect' application components with input or output mechanisms like file systems, based on metadata constructed offline by autonomous metabots. I/O Graphs can be programmed to realize application-specific I/O functionality, such as data filtering and conversion, data remeshing, and similar tasks. Their management is automated, including the mapping of their logical graph nodes to underlying physical MPP and distributed machine resources. I/O performance in SSDS will be improved by integrating the computational I/O actions of I/O Graphs with the backend file systems that store high volume data and with the I/O actions already taken by applications, and by moving metadata management offline into metabots. The implementation of the SSDS system and its I/O Graph model will impact a substantial HPC user community, due to its planned integration with the Lightweight File System (LWFS) currently under development at Sandia National Laboratories (SNL). This file system and its SSDS extensions will be deployed on large-scale machines at Sandia to demonstrate scalability and application utility. SSDS will be integrated with the file formats and file systems used by other groups at Sandia and at Oak Ridge National Laboratory (ORNL).
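As a rough illustration of the I/O Graph idea, the sketch below (Python, with entirely hypothetical names; it is not the SSDS or LWFS API) chains a filter node and a conversion node between an application's output records and a stand-in storage sink:

```python
# Illustrative sketch, not the SSDS API: an I/O Graph as a chain of
# processing nodes between an application's output and a storage sink.
from typing import Callable, List

class IONode:
    """A node applies one transformation to each data chunk passing through."""
    def __init__(self, name: str, fn: Callable):
        self.name = name
        self.fn = fn

class IOGraph:
    """A linear I/O graph: application -> node1 -> node2 -> sink."""
    def __init__(self, nodes: List[IONode], sink: list):
        self.nodes = nodes
        self.sink = sink          # stands in for a file system back end

    def push(self, chunk):
        for node in self.nodes:
            chunk = node.fn(chunk)
            if chunk is None:     # a filter node may drop the chunk entirely
                return
        self.sink.append(chunk)

# Hypothetical application output: keep only records above a threshold,
# then convert survivors to a compact tuple layout before 'storage'.
storage = []
graph = IOGraph(
    nodes=[
        IONode("filter", lambda r: r if r["value"] > 0.5 else None),
        IONode("convert", lambda r: (r["id"], round(r["value"], 2))),
    ],
    sink=storage,
)
for record in [{"id": 1, "value": 0.9}, {"id": 2, "value": 0.1}]:
    graph.push(record)
print(storage)  # [(1, 0.9)]
```

In the actual framework, such graph nodes would be mapped automatically onto MPP and distributed machine resources; here the chain simply runs in-process to show the data flow.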
|
0.93 |
2006 — 2010 |
Schwan, Karsten Ahamad, Mustaque [⬀] Eisenhauer, Greg Lee, Wenke (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ct-Isg: Trusted Passages: Managing Distributed Trust to Meet the Needs of Emerging Applications @ Georgia Tech Research Corporation
Abstract
PI: Ahamad; CoPIs: Lee and Schwan
Critical applications that range from operational information systems used in government and commercial settings to those that run daily e-commerce web services rely on distributed computing systems to produce, process, and disseminate information in a trustworthy fashion. Attacks on such applications and the underlying infrastructure can severely limit their ability to meet end-user needs. The inherent complexity of applications, technologies, and platforms in today's large scale distributed systems makes it extremely challenging to create services that can continue to behave in a trustworthy manner in the presence of attacks. In this research project, a new approach is proposed to meet the trust needs of applications by integrating modern system virtualization techniques with new methods for runtime trust monitoring and assessment. This approach dynamically creates and maintains an abstraction called a trusted passage that encapsulates the processing, storage, and communication resources required by an application. Such resources exist across distributed and potentially untrusted execution platforms. Trust controllers, which monitor application execution, determine at runtime when some resources associated with a trusted passage may have become compromised and need to be replaced by others. Trusted passages leverage new capabilities soon to be part of most, if not all, computational and network platforms. Thus, even with an insecure Internet, our goal is to continually assess the distributed computational platforms being used by applications, and based on that information, provide trusted services to critical applications. This research fosters close collaboration among security and systems researchers and strengthens ongoing interactions with multiple industry research partners.
|
0.93 |
2007 — 2009 |
Eisenhauer, Greg Gavrilovska, Ada (co-PI) [⬀] Schwan, Karsten |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
I/O Virtualization - From Self-Virtualizing Devices to Metadata-Rich Information Appliances @ Georgia Tech Research Corporation
Demands for online access to large data sets distributed across multiple machines are prevalent both in large-scale enterprises and in scientific settings. An example is remote data visualization for large-scale applications running on peta-scale machines, where researchers are seeking ways to accelerate the scientific processes they are carrying out via real-time inspection and validation of the outputs from their running codes.
This research addresses a basic problem for the large-scale data movements implied by such tasks, which is the substantial semantic gap between application-level data exchanges and the I/O services provided by underlying hardware and system platforms. For instance, at the device level, data being moved may be described as a linear sequence of blocks, whereas the application level views it as highly structured data on which it is performing a complex checkpoint operation. To help close this gap for future virtualized execution platforms, this project realizes the notion of Abstract IO (A-IO) devices, where each such device can be designed to provide useful new functionality to end user applications. Toward this end, it exploits new system- and device-level virtualization technologies to present to operating systems what appear to be physical devices, but are actually virtual representations of such devices. For such `logical devices', it then addresses the question of where to place their higher level I/O device functions on future many-core platforms: on the general computational cores of host machines or on specialized cores associated with devices. The utility of the approach is evaluated in the context of two realizations of A-IO based devices (NIC and block device), and with actual scientific and enterprise applications.
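The semantic gap described above can be shown with a toy sketch (assumed names throughout, not the project's A-IO interface): the application submits a structured record, while the `device' beneath it only ever sees a linear sequence of fixed-size blocks:

```python
# Illustrative sketch with hypothetical classes: a logical device that
# accepts structured, application-level writes and translates them into
# the linear block sequence a physical device would see.
import json
import struct

BLOCK = 16  # hypothetical tiny block size, chosen only for illustration

class LogicalDevice:
    def __init__(self):
        self.blocks = []          # the device-level view: a flat block list

    def write_record(self, record: dict):
        # Application-level view: a structured record with named fields.
        payload = json.dumps(record, sort_keys=True).encode()
        framed = struct.pack(">I", len(payload)) + payload  # length-prefix it
        framed += b"\x00" * (-len(framed) % BLOCK)          # pad to block size
        for i in range(0, len(framed), BLOCK):              # device-level view
            self.blocks.append(framed[i:i + BLOCK])

dev = LogicalDevice()
dev.write_record({"step": 1, "temp": 300.0})
print(all(len(b) == BLOCK for b in dev.blocks))  # True: uniform linear blocks
```

The structure (field names, types, framing) is invisible at the block layer, which is exactly the gap an A-IO device with application-level knowledge could exploit, for example to filter or convert data before it reaches storage.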
|
0.93 |
2008 — 2011 |
Schwan, Karsten Eisenhauer, Greg Wolf, Matthew Gavrilovska, Ada (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Actively Managing Data Movement With Models - Taming High Performance Data Communications in Exascale Machines @ Georgia Tech Research Corporation
Large scientific applications have complex communication needs, including intra-machine point-to-point and global communications, cross-machine checkpointing, and data outputs for online validation and remote data display. Coupled scientific models for multidisciplinary investigations further add to this complexity. This multi-purpose, rich, and dynamic nature of data communications in future exascale codes presents the first challenge addressed by this research. Further, given the many-core nature of future computer chips and the likely presence of specialized hardware accelerator cores, application-level communications and I/O face an increasingly complex set of on-chip, cross-node, and cross-machine interconnects. The complex nature of the physical communication infrastructures present in future exascale machines is the second challenge addressed by this research. In summary, the problem facing developers of future exascale applications for scientific discovery is how to effectively manage the complexity of their communication needs while protecting their most critical `core' communications from perturbation.
This project will develop higher level, explicit models for the data communications performed in future exascale codes. These models, called C-Models, will describe and implement the communications performed for I/O for purposes of online analysis, storage, and visualization, and for data movements across coupled application codes, and will also capture the interaction of the data movements implied by all of the above with the internal data communications inherent to each single application. This abstraction and encapsulation of communication complexity is key to taming the complexity of future exascale applications. The C-Model infrastructure will also help protect the critical core communication component of these applications from perturbation, helping to maximize the performance of these applications on leadership-class machines.
|
0.93 |
2009 — 2015 |
Vetter, Jeffrey Schwan, Karsten Fujimoto, Richard (co-PI) [⬀] Schulthess, Thomas Dongarra, Jack |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Keeneland: National Institute For Experimental Computing @ Georgia Tech Research Corporation
Many-core processor architectures are rapidly emerging in many computing environments. One of their attractions is the ability to significantly speed up computation for certain classes of algorithms. In addition, they typically offer lower energy consumption per unit of computation. Several recent studies have identified the development of effective methods for efficiently programming many-core architectures as a major challenge. This project will make available, as experimental platforms, two systems in which one form of many-core processor with very high memory bandwidth, a graphics processing unit, is deployed at scale for use as an accelerator for high-performance parallel computing.
The Georgia Institute of Technology (Georgia Tech) and its partners, the University of Tennessee at Knoxville and the Oak Ridge National Laboratory, will initially acquire and deploy a small, experimental, high-performance computing system consisting of an HP system with NVIDIA Tesla accelerators attached. This will be integrated into the TeraGrid. The project team will use this system to develop scientific libraries and programming tools to facilitate the development of science and engineering research applications. The project team will also provide consulting support to researchers who wish to develop applications for the system using OpenCL or to port applications to the system.
In 2012, the project will upgrade the heterogeneous system to a larger and more powerful system based on a next-generation platform and NVIDIA accelerators. It is anticipated that the final system will have a peak performance of roughly 2 petaflops/s. The project will operate the upgraded system as a TeraGrid resource for a further two years.
The final system has the potential to support many different science areas. Possible areas of impact include some of the scientific domains in which GPU-based acceleration has already been demonstrated to have an impact at smaller scale; for example, chemistry and biochemistry, materials science, atmospheric science and combustion science.
In addition to providing infrastructure for science and engineering research and education, the project partners will educate and train the next-generation of computational scientists on cutting-edge computing architectures and emerging programming environments, using the experimental computing resource as one example.
|
0.93 |
2009 — 2013 |
Vetter, Jeffrey Gavrilovska, Ada (co-PI) [⬀] Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Shf: Medium: Heterogeneous Virtual Machine: Future Execution Environments For Heterogeneous Many-Core Architectures @ Georgia Tech Research Corporation
A current industry trend aimed at addressing platform performance/power requirements is to create heterogeneous manycore systems, comprised of general purpose and specialized cores designed to accelerate certain application or system functions. A second trend, designed to make it easier to map a wide variety of functions and components to manycore platforms, is platform-level support for system virtualization. This research innovates, implements, and evaluates new virtualization technologies for heterogeneous manycore architectures composed of commodity general-purpose and accelerator cores. The goal is to realize an efficient execution environment for composing and executing a range of computationally and data-intensive applications.
The system abstractions innovated include (i) the HVM (heterogeneous virtual machine) platform abstraction for dynamic composition of resources (e.g., cores, accelerators, memory, I/O), (ii) new methods for managing heterogeneous manycore resources, including power, and (iii) specialized execution environments for optimizing accelerator interactions. These components are implicitly integrated through an execution model wherein the same abstractions and mechanisms are used to dynamically manage diverse accelerator platforms, thereby realizing our vision of freely shared and customized platform resources provided to applications.
|
0.93 |
2009 — 2015 |
Pu, Calton (co-PI) [⬀] Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] Blough, Douglas |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Industry/University Cooperative Research Center For Experimental Research in Computer Systems (Iucrcercs) @ Georgia Tech Research Corporation
IIP 0934313 Georgia Institute of Technology Schwan
This is a proposal to renew support for the Industry/University Cooperative Research Center for Experimental Research in Computer Systems (CERCS). The multi-university center is headquartered at the Georgia Institute of Technology, with an affiliate group at the Ohio State University. CERCS was established in 2001. The focus of the CERCS faculty at the Georgia Institute of Technology is on the creation of the core systems technologies underlying large-scale computing systems.
CERCS will continue to pursue a large number and variety of projects, driven by faculty interests, industry connections, and center capabilities. Three key domains of interest to CERCS are Enterprise Systems, Scientific Computing, and Embedded Systems; underlying and uniting these domains are four significant research thrusts. CERCS is committed to fostering interdisciplinary research, establishing a culture of experimental research, reaching out to local and national industry, and contributing to the regional and national economies through the availability of talent and emerging technologies.
The broader impact of CERCS lies in the application of its research results by IT producer and consumer companies. CERCS plans to stimulate and ensure research with broad practical impact by collaborating with industry partners. Work in energy management, the multicore software stack, and applications in scientific, enterprise, and mobile computing will benefit society by reducing the costs of computing and by allowing larger computing problems to be addressed efficiently than is currently possible. CERCS also plans to create a broad student community for research and education activities and to educate qualified students to join the software/computing industry.
|
0.93 |
2010 — 2012 |
Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] Clark, Nate |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
From Business Analytics to Accelerator Clouds @ Georgia Tech Research Corporation
This proposal seeks funding for the Center for Experimental Research in Computer Systems studies conducted by the Georgia Institute of Technology site (lead). Funding Requests for Fundamental Research are authorized by an NSF approved solicitation, NSF 10-507. The solicitation invites I/UCRCs to submit proposals for support of industry-defined fundamental research.
Accelerator Clouds are evolving in response to enormous growth in business data and the impact of power consumption in data centers and cloud systems. This program addresses the development of accelerator clouds equipped with high performance accelerators such as the new generation of graphics processing units (GPUs) used for general purpose computing. The development, deployment, and widespread adoption of accelerator clouds in the enterprise pose both intellectual and engineering challenges. This proposal includes a new domain-specific intermediate representation, the kernel IR, tailored to business data analytics applications. This IR is efficient for compiling enterprise applications and powerful enough to drive optimized compilation to a range of accelerator targets. It will ensure portability across a range of platforms. The proposal also includes on-line program analysis techniques and transformations to optimize execution on a range of hardware targets.
The ability to leverage GPU resources in the cloud can significantly improve many data-intensive and compute-intensive applications, such as risk assessment for insurance companies. Given the experimental nature of the work, it is likely that the tools developed will be leveraged by the companies involved (Logic Blox, IBM). The PIs intend to continue their active participation with CRA-W and with students from Georgia Tech's Women's Resource Center. The proposal gives convincing details of educational components and how they are linked to the proposed research. The proposers have a strong track record in integrating the proposed research in education. The industrial partners are convincingly involved in the proposed research (interest in the concepts, integrated education, internships).
|
0.93 |
2010 — 2013 |
Schwan, Karsten Joshi, Yogendra Lee, Hsien-Hsin Mukhopadhyay, Saibal Kim, Hyesoon (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ii-New: Greenit: Testbeds For Real-Time Data Center and Platform Energy and Thermal Management @ Georgia Tech Research Corporation
GreenIT: Testbeds for Real-time Data Center and Platform Energy and Thermal Management - CRI Proposal 0958514 The power demand for data centers and servers in the US is projected to grow to about 100B kWh by 2011, and even the efficiency improvements projected by the EPA are insufficient to cap steady growth in IT energy demand. In order to rein in this growth, the 'GreenIT' cross-cutting and cross-disciplinary project seeks solutions that span multiple disciplines (mechanical, civil, electrical, and computer science) and multiple technologies (chips/packaging, operating systems/software, cooling technologies and thermal/fluid modeling), at multiple length and time scales. The project approach is measurement-based, establishing a 'GreenIT' testbed for energy-efficient IT. The testbed will operate at `data center scale', because it is coupled with ongoing efforts at Georgia Tech that provide both (1) a large-scale, commodity IT infrastructure, i.e., racks of machines, and (2) new facilities used for a multi-site collaboration in cloud computing. Commodity equipment and instrumentation are housed in the CEETHERM lab in Mechanical Engineering at Georgia Tech, which offers dynamically controllable air cooling capabilities. This permits joint control of cooling and IT power consumption. Additional researcher-accessible measurement stations target new facilities and next-generation multicore platforms. A common thread in this research is its use and promotion of coordinated management 'across the energy stack', where management actions taken at different levels of the energy stack are performed in a coordinated manner. The outcome is holistic data center management for improved power efficiency.
|
0.93 |
2011 — 2013 |
Schwan, Karsten Pu, Calton (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Frp: An Experimental Comparative Study of N-Tier Application Performance in Computational Clouds @ Georgia Tech Research Corporation
Center for Experimental Research in Computer Systems Proposal #1127904
This proposal seeks funding for the Center for Experimental Research in Computer Systems at Georgia Institute of Technology. Funding Requests for Fundamental Research are authorized by an NSF approved solicitation, NSF 10-601. The solicitation invites I/UCRCs to submit proposals for support of industry-defined fundamental research.
While cloud computing is rising in importance for numerous applications, there remains a fundamental lack of understanding of performance achievable for different configurations, especially for N-tier applications common in such areas as e-commerce and social networking. The proposed research will seek to systematically design large scale experiments from which performance data will be derived and performance metrics established for N-tier applications. The resulting large data sets can enable researchers to explore means to achieve optimal allocation of hardware and software resources for specific applications. The proposed comparative experimental study will enable development of comparative models through which N-tier application performance can be predicted and as such holds the opportunity for significant breakthroughs in understanding of cloud performance for this class of problems.
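A systematic sweep over N-tier configurations, as proposed, might be organized along these lines (an illustrative Python sketch with made-up tier sizes and a placeholder benchmark function, not the center's actual experiment harness):

```python
# Illustrative sketch, not the project's tooling: enumerate N-tier
# configurations (VM counts per tier) for a systematic performance study.
from itertools import product

# Hypothetical tier sizes to sweep; a real study would also vary hardware,
# workload mix, and software configuration parameters.
tiers = {"web": [1, 2], "app": [1, 2, 4], "db": [1, 2]}

def run_benchmark(config):
    # Placeholder for an actual benchmark run (e.g., an e-commerce
    # workload driven against the deployed tiers). Here we fabricate a
    # deterministic "throughput" purely to show the shape of the sweep.
    return 100 * min(config.values())

results = []
for counts in product(*tiers.values()):
    config = dict(zip(tiers.keys(), counts))
    results.append({**config, "throughput": run_benchmark(config)})

print(len(results))  # 12 configurations: 2 * 3 * 2
```

The point of such an enumeration is that every configuration is measured under the same controlled conditions, so the resulting dataset supports the comparative models the abstract describes.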
The proposed research has the potential to spur the development of tools with which industry providers of cloud resources can better manage those resources and offer services in a cost-effective way. Additionally, this optimization can be applied to the achievement of Green IT goals. The work is well supported by individual industry members of the center and has the potential to extend the portfolio of the center by virtue of the many studies and modeling efforts achievable using the dataset generated by this study. Beyond the center, the dataset, if properly designed, has the potential for broad impact in the research community as a resource for studies in this area. The proposal furthermore provides a solid plan for student and UREP involvement.
|
0.93 |
2012 — 2016 |
Eisenhauer, Greg Wolf, Matthew Schwan, Karsten Yalamanchili, Sudhakar (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Si2-Ssi: Collaborative Research: a Glass Box Approach to Enabling Open, Deep Interactions in the Hpc Toolchain @ Georgia Tech Research Corporation
Parallel computing has entered the mainstream with increasingly large multicore processors and powerful accelerator devices. These compute engines, coupled with tighter integration of faster interconnection fabrics, are drivers for the next-generation high end computing (HEC) machines. However, the computing potential of HEC machines is delivered only through productive parallel program development and efficient parallel execution. This project enables application developers to improve performance on future HEC machines for their scientific and engineering processes. This project challenges the current model for parallel application development via "black box" tools and services. Instead, the project offers an open, transparent software infrastructure -- a Glass Box system -- for creating and tuning large-scale, parallel applications. `Opening up' the tools and services used to create and evaluate peta- and exa-scale codes involves developing interfaces and methods that make tool-internal information available to new performance management services that improve developer productivity and code efficiency.
The project will explore the information that can be shared 'across the software stack'. Methods will be developed for analyzing program information, performance data and tool knowledge. The resulting Glass Box system will allow developers to better assess the performance of their parallel codes. Tool creators can use the performance data to create new analysis and optimization techniques. System developers can also better manage multicore and machine resources at runtime, using JIT compilation and binary code editing to exploit the evolving hardware. Working with the `Keeneland' NSF Track II machine and our industry partners, the project will create new performance monitoring tools, compiler methods and system-level resource management techniques. The effort is driven by the large-scale codes running on today's petascale machines. Its broader impact is derived from the interactions with technology developers and application scientists as well as from its base in three universities with diverse student populations.
|
0.93 |
2012 — 2015 |
Schwan, Karsten Liu, Ling Blough, Douglas |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
I/Ucrc: Secure Health Services in the Cloud @ Georgia Tech Research Corporation
Migration of health organization software services to a cloud environment poses special security and confidentiality risks. The proposed work seeks to design and evaluate security mechanisms for cloud computing environments as well as a new security architecture for cloud-based health services. New mechanisms will be developed in the areas of utility-based encryption, data fragmentation, and redactable signatures and tested in several deployed cloud environments. The effort would establish the first integrated cloud security architecture combining protected memory, secure distributed storage, and utility-based encryption.
The outcomes of the proposed work have the potential for significant impact on the use of cloud computing in the health care sector. Adoption hinges critically on maintaining the safety, security, and integrity of the data. The proposed work has the potential to achieve an architecture that can preserve these characteristics in a cloud environment. The work is supported by the Industry Advisory Board as well as individual industry members of the center and has the potential to extend the center's portfolio through expansion into the cloud-based health informatics area. The center will involve graduate students in the work and disseminate results both through publication and through the involvement of the PI in the Georgia Health Information Exchange (HIE) project.
|
0.93 |
2014 — 2017 |
Schwan, Karsten Gavrilovska, Ada [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Csr: Small: Collaborative Research: Enhancing Cloud Performance With On-Demand Isolation @ Georgia Tech Research Corporation
The modern trend in computing systems is towards system architectures containing large numbers of heterogeneous computational and I/O resources. Unfortunately, while the increase in scale allows increased workload consolidation, wherein a single system runs multiple independent applications in parallel, it comes at the cost of increased interference across the different application workloads. Workload interference results when the behavior of one application impacts the performance of another, even if both applications are running on different hardware resources. This can be due to contention on shared hardware resources (such as last level caches, memory controllers, or I/O devices) or even on software resources managed by the operating system. Cross-workload interference is especially problematic for large-scale shared infrastructures such as cloud hosting services, which rely on co-hosting large numbers of widely disparate workloads inside a single datacenter environment. Preventing interference effects is critical for cloud computing to fully deliver on its promise as a universal computing substrate.
This project addresses the problem of cross-workload interference by providing a holistic system that both detects the impact of interference on applications and mitigates its effects by providing dynamic isolation capabilities in the underlying system software. This approach relies on the ability to dynamically partition the underlying hardware resources such that isolation is achieved at the hardware layer, while also allowing the partitioning of system software to avoid contention on more abstract resources present in the system software itself. To achieve these goals, this work implements a "Virtual Platform" abstraction representing an individual and isolatable system domain assigned to a particular task or workload and consisting of one or more virtual machine instances. The virtual platform itself is assigned an allocation of hardware resources consisting of independent "isolatable units." These units are created through the decomposition of local hardware resources into the finest grained subdivision of resources that can be both individually allocated and effectively isolated from the rest of the system. While providing partitioned hardware resources to a virtual platform provides hardware level isolation, it does not address interference generated by the system software. Avoiding software level interference is achieved by partitioning the system software itself through Multi-Stack Virtualization. Multi-stack virtualization allows multiple independent system software layers to co-exist on the same local system by restricting their managed resources to the set allocated to a virtual platform. Taken together, this system provides full isolation capabilities at both the hardware and software layers.
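A minimal sketch of the virtual platform idea (hypothetical classes, not the project's implementation): host cores serve as the finest-grained isolatable units, and each virtual platform receives a disjoint set of them plus a stand-in for its own software stack:

```python
# Illustrative sketch with assumed names: partitioning host CPUs into
# "isolatable units" and assigning them to per-workload virtual platforms.
class Host:
    def __init__(self, num_cpus: int):
        self.free = set(range(num_cpus))   # finest-grained units: single cores

    def allocate(self, n: int) -> set:
        if n > len(self.free):
            raise RuntimeError("insufficient isolatable units")
        units = set(sorted(self.free)[:n])
        self.free -= units                 # removed from the shared pool
        return units

class VirtualPlatform:
    """One workload's isolated slice: dedicated cores, its own stack."""
    def __init__(self, name: str, host: Host, cores: int):
        self.name = name
        self.cpus = host.allocate(cores)   # hardware-level isolation
        self.stack = {"scheduler": name}   # stand-in for a per-platform
                                           # software stack (multi-stack idea)

host = Host(num_cpus=8)
web = VirtualPlatform("web", host, cores=4)
batch = VirtualPlatform("batch", host, cores=2)
assert web.cpus.isdisjoint(batch.cpus)     # no shared cores, no contention
print(sorted(host.free))                   # [6, 7] cores remain unallocated
```

Real isolatable units would also cover cache slices, memory bandwidth, and I/O devices, and the per-platform "stack" would be an actual system software instance rather than a dictionary; the sketch only shows the disjoint-allocation invariant.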
|
0.93 |