1995 — 1999
Reif, John (co-PI); Ellis, Carla (co-PI); Kedem, Gershon (co-PI); Vitter, Jeffrey; Chase, Jeffrey
Acquisition of a Workstation Cluster Testbed for Next-Generation Collaborative Computing
9512356 Vitter Duke University proposes the establishment of an ATM-based network cluster of high-performance 64-bit workstations for supporting collaborative research and applications. Three fundamental systems performance concerns are considered for supporting a wide mix of workloads and applications of these collaborative activities: locality management, resource conflicts, and resource reduction. Seven specific projects have been identified: (1) toolkits and interfaces for explicit control of disk resources, (2) a parallel file system, (3) virtual memory support for memory-intensive computations, (4) distributed shared memory, (5) cluster virtual storage, (6) data compression, and (7) prediction for locality management. The workstation cluster instrumentation is necessary to perform state-of-the-art experimental research on methods to make the cluster viable for next-generation collaborative computing. The proposed cluster testbed is indispensable for conducting the research: the instrumentation is both the vehicle for and the subject of the research.
1998 — 2003
Vitter, Jeffrey (co-PI); Urban, Dean; Agarwal, Pankaj; Chase, Jeffrey; Arge, Lars (co-PI)
Experimental Software Systems: Geographic Information Systems On High-Speed Clusters: a Vertically Integrated Approach
9870724 Chase, Jeffrey S.; Agarwal, Pankaj K. Duke University. Experimental Software Systems: Geographic Information Systems on High-Speed Clusters: A Vertically Integrated Approach. Geographic Information Systems (GIS) are important management and analysis tools in science and government. Practical interest in GIS has spurred fundamental research in several areas of computer science. However, this research has focused mainly on abstract problems, and technology transfer to GIS practitioners has been slow. This project seeks to develop an improved foundation for dealing with massive spatial data through a collaboration of researchers in computer algorithms, software systems, and environmental sciences. The research deals with GIS problems at all levels of abstraction, from applications to algorithms, systems, and tools. The algorithmic research builds on recent advances in geometric algorithms and I/O-efficient algorithms for problems that arise in GIS. The systems research seeks to develop and evaluate system support for I/O-intensive applications on networked clusters, incorporating experimental research in storage systems, memory management, and networking. The common focal point for all project members is a toolkit that supports practical implementations of GIS algorithms and applications. The expected outputs of the research are: (1) new algorithms for dealing with massive spatial data sets, (2) whole-system implementations of GIS algorithms on high-speed clusters, and (3) transfer of these computational advances into the practice of GIS in the environmental sciences.
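To make the notion of I/O-efficiency concrete, here is a minimal, hypothetical Python sketch of the block-oriented access pattern such algorithms are built around: spatial data is streamed in large blocks so that the number of disk transfers, rather than the number of individual record accesses, dominates the cost. The file name, block size, and query are illustrative assumptions, not artifacts of this project or its toolkit.

```python
# Hypothetical illustration of an I/O-efficient (external-memory) scan:
# points are read in large blocks so the cost is measured in disk
# transfers of size B rather than in per-record accesses.
import struct

RECORD = struct.Struct("dd")          # one (x, y) point, two float64s
BLOCK_RECORDS = 64 * 1024             # records fetched per transfer (illustrative B)

def count_points_in_box(path, xmin, xmax, ymin, ymax):
    """Stream a point file block by block and count points in a query box.

    Assumes the file contains whole (x, y) records, so each read returns
    a multiple of the record size.
    """
    count = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(RECORD.size * BLOCK_RECORDS)   # one large transfer
            if not block:
                break
            for (x, y) in RECORD.iter_unpack(block):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    count += 1
    return count
```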
1999 — 2005
Agarwal, Pankaj; Chase, Jeffrey; Lebeck, Alvin (co-PI); Littman, Michael; Arge, Lars (co-PI)
Research Infrastructure: Data-Intensive Computing For Spatial Models
EIA-9972879. Chase, Jeffrey S.; Agarwal, Pankaj; Arge, Lars; Lebeck, Alvin; Littman, Michael. Duke University.
Duke University researchers will conduct research on locality management for data-intensive systems. Locality issues include algorithms and toolkits to promote locality, fast and predictive data placement, memory system architectures (main memory, cache, and disk), network I/O, and mobile/wireless computing. Applications for the research include the computational sciences, GIS, 3D modeling and visualization, and molecular modeling. For wireless and mobile computing, transcoding for heterogeneous devices and power management will be addressed. As an outreach component, Duke will work with Martha Absher, a Presidential Awardee for Excellence, who coordinates College of Engineering activities to recruit and retain minority students; these activities will center on expanding REU opportunities. The researchers will also collaborate with North Carolina Central University, also in Durham, in computational science.
2000 — 2003
Vitter, Jeffrey; Chase, Jeffrey
ITR: Algorithms for Active Storage
This project explores techniques for efficient computation on large, disk-based data sets. The research focuses on theoretical and practical aspects of "active" storage systems, in which each storage unit (a disk or group of disks) has some limited capability for local computation. A key goal of the project is to develop a theoretical model for active storage systems. The model is a basis for designing and evaluating algorithms for active storage systems, identifying useful computation kernels for active storage units, and deriving lower bounds for fundamental problems. The applied aspects of the project include the design and implementation of a programming environment (PEARL) for active storage algorithms, and experimental evaluation of new algorithms for active storage. This project is a collaboration with a separately funded project team at Carleton University in Ottawa, Canada.
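As a rough illustration of the active-storage idea described above, the sketch below pushes a simple filtering kernel to each (simulated) storage unit so that only small per-unit results, rather than raw data, cross the interconnect. The class names and the aggregation step are hypothetical illustrations of the pattern, not the PEARL programming environment or the project's model.

```python
# Hypothetical sketch of the active-storage pattern: each storage unit
# runs a small computation kernel locally and returns only its result,
# so the host aggregates partial results instead of shipping raw data.
class StorageUnit:
    def __init__(self, records):
        self.records = records            # data resident on this disk or disk group

    def run_kernel(self, predicate):
        """Limited local computation: filter-and-count on the unit itself."""
        return sum(1 for r in self.records if predicate(r))

def active_count(units, predicate):
    """Host-side aggregation over per-unit partial results."""
    return sum(u.run_kernel(predicate) for u in units)

# Usage: count large records across four units without moving the records.
units = [StorageUnit(range(i, 1000, 4)) for i in range(4)]
print(active_count(units, lambda r: r > 900))
```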
2000 — 2004
Vahdat, Amin; Chase, Jeffrey
ITR: System Support for Automatic and Consistent Replication of Internet Services
This project explores two complementary techniques for addressing fundamental limitations in replicating network services. The first thrust seeks to automatically replicate service programs and state information, allowing transparent caching or replication of dynamic services, a key step toward automatically converting unscalable service implementations into scalable ones. The second thrust allows network services to dynamically trade replica consistency for increased system availability and performance. The TACT (Tunable Availability and Consistency Tradeoffs) toolkit allows Internet services to flexibly and dynamically choose their own availability/consistency tradeoffs. Three consistency metrics (Numerical Error, Order Error, and Staleness) capture application-specific consistency requirements of Internet services. Applications use these metrics, in addition to application-specific parameters, to assign a numeric value to system consistency, e.g., the percentage of user requests that must eventually be rolled back because of underlying replica inconsistency. Finally, TACT allows consistency to be specified on a per-user, per-client, and per-replica basis, enabling differentiated quality of service.
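As a rough sketch of how an application might express the tunable consistency bounds described above, the fragment below represents a per-replica bound on the three TACT metrics and a simple check against observed divergence. The class names, fields, and threshold semantics are illustrative assumptions, not the actual TACT API.

```python
# Hypothetical illustration of TACT-style consistency bounds: an application
# names a maximum numerical error, order error, and staleness, and the
# replication layer checks observed divergence against those bounds.
from dataclasses import dataclass

@dataclass
class ConsistencyBound:
    numerical_error: float     # max weight of remote writes not yet seen locally
    order_error: int           # max number of tentative (reorderable) local writes
    staleness_seconds: float   # max age of the oldest unseen remote write

@dataclass
class ReplicaState:
    unseen_weight: float
    tentative_writes: int
    oldest_unseen_age: float

def within_bounds(state: ReplicaState, bound: ConsistencyBound) -> bool:
    """True if this replica may keep serving requests without synchronizing."""
    return (state.unseen_weight <= bound.numerical_error
            and state.tentative_writes <= bound.order_error
            and state.oldest_unseen_age <= bound.staleness_seconds)

# Example: a bulletin-board replica tolerating modest inconsistency.
bound = ConsistencyBound(numerical_error=10.0, order_error=5, staleness_seconds=30.0)
print(within_bounds(ReplicaState(3.0, 2, 12.0), bound))   # True
```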
2002 — 2005
Vahdat, Amin (co-PI); Chase, Jeffrey
Request Routing For Scalable Services
Clustering technologies enable incremental scaling of Internet server sites at modest cost. It is increasingly common in cluster-based service architectures to distribute incoming request traffic among servers using redirecting intermediaries integrated into the network switching fabric or interposed between the client and servers. However, Internet services and their delivery architectures continue to evolve rapidly. This creates new challenges and opportunities for redirecting intermediaries, and motivates basic research in both the mechanisms for request redirection and the request routing policies for specific service environments.
This work will undertake a coordinated research program to expand the potential of redirecting intermediaries as an enabling technology for scalable Internet services. The work focuses primarily on integrating service-aware redirection and request routing as network-level functions in a high-speed switching architecture. The methodology combines simulation, construction of software prototypes, and evaluation of prototypes using synthetic and real workloads.
The expected outcomes of the work are: an improved understanding of the role of request routing as an enabler for large-scale Internet services; simulation results evaluating request routing policies in large systems; software prototypes that demonstrate the value of these solutions in practice for Web-based services and network storage services; and opportunities to train students as participants in this research at both the graduate and undergraduate levels.
In summary, the research work has the following basic objectives:
1. Define protocol features essential for redirection at the level of the transport protocol. The switch routes incoming requests on each transport connection to any active server at the discretion of a service-specific routing policy, a capability referred to as Anypoint communication.
2. Implement an Anypoint-capable transport protocol that supports features commonly required by service protocols: reliable communication, ordering and duplicate suppression, and congestion control.
3. Define interfaces and capabilities for service-specific policy modules in Anypoint intermediaries. This defines an architecture for decomposing service protocol implementations into a client, a server, and a service module that extends the intermediary (a sketch of such a policy module follows this list).
4. Evaluate the intermediary architecture defined by Anypoint by constructing software prototypes of virtualized service implementations. The initial targets are HTTP 1.1 application services and NFS.
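A minimal, hypothetical sketch of the kind of service-specific policy module referenced in item 3: the intermediary hands each parsed request to the module, which chooses a back-end server for it. The class and method names, and the file-handle-based routing rule, are assumptions for illustration, not the project's actual interfaces.

```python
# Hypothetical sketch of a service-specific request routing policy module
# plugged into a redirecting intermediary (an "Anypoint"-style switch).
import hashlib

class NfsRoutingPolicy:
    """Route NFS requests by file handle so a given file sticks to one server."""
    def __init__(self, servers):
        self.servers = servers

    def route(self, request):
        # The request is assumed to expose the file handle the operation targets.
        digest = hashlib.sha1(request["file_handle"]).digest()
        index = int.from_bytes(digest[:4], "big") % len(self.servers)
        return self.servers[index]

class Intermediary:
    """Forwards each request on a connection to whichever server the policy picks."""
    def __init__(self, policy):
        self.policy = policy

    def handle(self, request):
        server = self.policy.route(request)
        return f"forward {request['op']} to {server}"

switch = Intermediary(NfsRoutingPolicy(["nfs1", "nfs2", "nfs3"]))
print(switch.handle({"op": "READ", "file_handle": b"\x00\x17abc"}))
```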
2003 — 2004
Chase, Jeffrey; Vahdat, Amin
Evaluating Global-Scale Distributed Systems Using Scalable Network Emulation
The power of the national information infrastructure has expanded enormously over the last decade. Commercial Internet service and government-sponsored advanced networks for research and education support a range of national priorities. Unfortunately, rapid development and deployment of robust, adaptive network software is hampered by a fundamental obstacle: prototype systems are difficult to evaluate due to the scale and complexity of their host environment, the Internet.
This proposal seeks to remove this obstacle by constructing a software environment for evaluating prototype network software systems under realistic, controlled, repeatable conditions through scalable Internet emulation. The proposed system, ModelNet, emulates a wide-area network on a high-speed cluster, enabling researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions.
This research will first investigate techniques for scaling network emulation to thousands of unmodified applications with aggregate communication bandwidth of over 10 Gb/s. Second, it will enable the community to leverage large-scale network emulation as a primary technique for rapidly developing and evaluating next-generation Internet services and applications. Finally, the research will seek fundamental improvements in network service robustness and performance by providing a controlled environment for subjecting network services to a range of realistic deployment scenarios.
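To make the emulation idea concrete, here is a small, hypothetical sketch of the kind of per-link shaping such an emulator applies: each virtual link carries bandwidth, latency, and loss parameters, and traffic crossing that link in the emulated topology is delayed or dropped accordingly. The data structures and numbers are illustrative, not ModelNet's actual configuration format or pipeline.

```python
# Hypothetical sketch of wide-area emulation on a cluster: virtual links
# carry bandwidth/latency/loss parameters, and a core node computes the
# delay a packet experiences when traversing an emulated path.
import random
from dataclasses import dataclass

@dataclass
class VirtualLink:
    bandwidth_mbps: float
    latency_ms: float
    loss_rate: float

def traverse(path, packet_bytes):
    """Return total delay in ms for a packet over a path, or None if dropped."""
    delay = 0.0
    for link in path:
        if random.random() < link.loss_rate:
            return None                                   # emulated packet loss
        transmit_ms = (packet_bytes * 8) / (link.bandwidth_mbps * 1000.0)
        delay += transmit_ms + link.latency_ms            # queue-free transmit + propagation
    return delay

# Example: a client-to-server path crossing an access link and a backbone link.
path = [VirtualLink(1.5, 20.0, 0.01), VirtualLink(622.0, 35.0, 0.0)]
print(traverse(path, packet_bytes=1500))
```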
2003 — 2007
Chase, Jeffrey
NMI: Collaborative Research: A Grid Service for Dynamic Virtual Clusters
This project will develop the architecture and components to automate configuration and management of a large, mixed-use server cluster. The cornerstone of the architecture is a cluster site manager called Cluster on Demand (COD) that adds a fundamental new capability to manage a cluster as a multi-purpose modular resource hosting different environments in isolated partitions. COD will support secure isolation of multiple user communities, pushbutton control over software environments subject to appropriate authorization controls, and dynamic policy-based resource provisioning. The COD design leverages widely used open-source components to support diverse hardware platforms and software configurations, and to evolve rapidly with new technology. The project will develop a COD software release integrated with other components of the NMI software suite, and evaluate it through test deployments at Duke University and MCNC, with grid connections to the NC BioGrid.
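As a rough illustration of the "cluster as a multi-purpose modular resource" idea, the fragment below shows a hypothetical request for an isolated partition (owning community, node count, software environment) and a trivial admission check against free capacity. The names and fields are illustrative assumptions, not the COD interface or its provisioning policies.

```python
# Hypothetical sketch of a Cluster-on-Demand-style partition request: a user
# community asks for an isolated partition with a given node count and
# software environment, and the site manager admits it against free capacity.
from dataclasses import dataclass

@dataclass
class PartitionRequest:
    community: str        # which user community will own the partition
    nodes: int            # number of nodes requested
    image: str            # software environment to install on those nodes

class ClusterManager:
    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.partitions = []

    def admit(self, req: PartitionRequest) -> bool:
        """Policy-based provisioning reduced to a capacity check, for illustration."""
        if req.nodes > self.free_nodes:
            return False
        self.free_nodes -= req.nodes
        self.partitions.append(req)
        return True

cod = ClusterManager(total_nodes=64)
print(cod.admit(PartitionRequest("biogrid", 16, "linux-bio-stack")))   # True
print(cod.admit(PartitionRequest("cs-systems", 60, "freebsd-dev")))    # False: over capacity
```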
Intellectual Merit and Broader Impact - The project will facilitate the deployment of clusters to advance science and education, reduce costs and improve return on investments in cluster infrastructure, and remove key barriers to sharing cluster resources across organizational boundaries. The project will also provide research experiences for graduate students and promote participation by underrepresented groups. Among the students participating in the project are two female PhD students. The project will also create opportunities for Duke's Research Experiences for Undergraduates (REU) program to mentor underrepresented undergraduates (primarily African-Americans) during summer internships.
2004
Chase, Jeffrey
Student Stipends to Attend USENIX NSDI and FAST Conferences
This award will provide travel support for approximately ten graduate students to attend the USENIX First Symposium on Networked Systems Design and Implementation (NSDI '04) and the Third USENIX Conference on File and Storage Technologies (FAST '04), to be held March 29-30, 2004 in San Francisco, CA. NSDI and FAST are important technical forums for high-quality research in networked and storage systems. The collaborative support of these two events will help initiate, develop, and foster relationships between the two research communities. For students, it is an opportunity to meet and discuss their research with some of the most notable researchers in these fields.
2005 — 2009
Chase, Jeffrey
Collaborative Research: CSR: AES: Virtual Playgrounds: Making Virtual Distributed Computing Real
An important focus of distributed computing research is to decouple the roles of resource consumer and resource provider, and to recruit computing and storage resources from providers dynamically to meet the demands of consumers. This decoupling promises on-demand resource access for consumers, and promotes diversity of supply and economies of scale among providers, thus increasing flexibility and reducing cost. This old but powerful idea underlies research on grids, virtualization, utilities, and overlay networks. However, its practical realization has been hindered by high participation costs for providers and consumers. Consumers struggle to acquire, configure, and manage remote resources and to adapt applications, while providers must adapt local system configurations to enable remote access. Two distinct threads of computer systems research seem poised to overcome this stalemate. Service-oriented architectures provide flexible and scalable management interfaces and infrastructure, while virtual machine technology can turn any computer into a secure, user-configurable resource. The combination of these threads enables a fundamentally new model for distributed computing based on remote management and dynamic reconfiguration of virtualized resources. The challenge (and opportunity) addressed in this project is to bring together service-oriented grid and virtual machine technologies to enable robust, scalable, and secure on-demand access to distributed resources, and to construct the abstractions, interfaces, policies, and algorithms that applications need to function in a dynamic, virtualized world.
To this end, the Virtual Playgrounds project will address three primary challenges:
1. Virtualization: building on emerging technologies for virtual machines (VMs), on-demand clustering, and service-oriented architectures.
2. Isolation: building on these core virtualization mechanisms and on state-of-the-art security infrastructure, the project will investigate mechanisms to express, monitor, and enforce contractual arrangements among consumers and providers, encompassing service quality and security assurances.
3. Dynamic adaptation: the project will investigate approaches to dynamic resource management in the new context of virtual playgrounds, including mechanisms to disseminate information about resource availability and needs, match producers and consumers, and monitor and audit resource status and flow.
The project will have a significant positive impact on communities building distributed computing systems in support of research and education, and on the users of those systems. The project will reduce costs and improve return on investments in cyberinfrastructure, and remove key barriers to sharing resources across organizational boundaries. The project will also deepen ties between academic researchers and national labs, provide research-training opportunities for graduate and undergraduate students, and increase student exposure to advanced computing environments.
2007 — 2011
Chase, Jeffrey; Babu, Shivnath (co-PI)
CSR-VCM: Foundations for a Programmable Self-Managing Hosting Center
This project seeks to leverage advances in virtual machine computing to enhance the foundations of automated resource management for virtual server infrastructure. It focuses on architecture, mechanisms, and policies for autonomic hosting centers that sense-and-respond to adapt automatically to changes in traffic demands or resource conditions, while holding human administrative burdens constant. It addresses key elements of a computing service utility: configuring and instantiating operating system images and services, binding them to server resources, and controlling their interactions at the system and application level.
The research includes development and evaluation of a Web-based laboratory and testbed software for research in autonomic data centers and adaptive services. The testbed responds to a recognized need in the autonomic computing research community for new tools, and it has the potential to accelerate progress in autonomic computing research. The core capability of the testbed is a set of mechanisms to enable self-monitoring and adaptation by the hosted services and the autonomic data center itself. The testbed provides facilities for users to develop and install controllers for all aspects of resource management policy and adaptation, and to experiment with selected workloads and faultloads interactively. It will provide researchers and students with prepackaged deployable applications and integrated load generation, with functions to modulate the request stream along various dimensions and experiment with controller policies and their interactions.
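As a rough illustration of the sense-and-respond controllers such a testbed is meant to host, the loop below adjusts the number of server instances assigned to a hosted service based on an observed latency signal. The thresholds, signal, and actuator are illustrative assumptions, not policies or interfaces defined by this project.

```python
# Hypothetical sketch of an autonomic sense-and-respond controller:
# observe a service-level signal, compare it to a target, and actuate
# by growing or shrinking the servers assigned to the hosted service.
def control_step(observed_latency_ms, current_servers,
                 target_ms=200.0, slack=0.2, max_servers=32):
    """Return the server allocation for the next control interval."""
    if observed_latency_ms > target_ms * (1 + slack):
        return min(current_servers + 1, max_servers)      # under-provisioned: add a server
    if observed_latency_ms < target_ms * (1 - slack):
        return max(current_servers - 1, 1)                # over-provisioned: release a server
    return current_servers                                # within the target band

# Example trace over three control intervals.
servers = 4
for latency in (310.0, 260.0, 140.0):
    servers = control_step(latency, servers)
    print(latency, "->", servers)
```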
The project also involves research in self-managing services, controller policies, and analysis of instrumentation data. These elements of the project use the testbed as a tool for experimental evaluation.
2009 — 2013
Chase, Jeffrey
TC: Large: Collaborative Research: Trustworthy Virtual Cloud Computing
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Proposal #: 0910767. Collaborative proposal #s: 0909980, 0910483, 0910653. PIs: Peng Ning, Xuxian Jiang, and Mladen Vouk.
Virtual cloud computing is emerging as a promising approach to IT management, easing the provisioning and administration of complex hardware and software systems while reducing operational costs. With the industry's continuous investment (e.g., Amazon Elastic Compute Cloud, IBM Blue Cloud), virtual cloud computing is likely to be a major component of future IT solutions, with significant impact on almost all sectors of society. The trustworthiness of virtual cloud computing is thus critical to the well-being of all organizations and individuals that will rely on it for their IT solutions.
This project envisions trustworthy virtual cloud computing and investigates fundamental research issues leading to this vision. Central to this vision is a new security architecture, which harnesses new opportunities and capabilities such as built-in out-of-band system access, processor and hardware support for trusted computing, and out-of-box examination by hypervisors. This project focuses on key research issues following this security architecture, including new security services that enhance the trustworthiness of virtual cloud computing, protection of management infrastructure against malicious workloads, and protection of hosted workloads from potentially malicious management infrastructure. The research will enable the adoption of virtual cloud computing for critical IT management in industry and government organizations. This project will involve both graduate and undergraduate students, and will produce open source software and tools, which will be made available to the public.
2010 — 2014
Chase, Jeffrey
Collaborative Research: SDCI Net New: The Missing Link: Connecting Eucalyptus Clouds with Multi-Layer Networks
The backbone of IT infrastructure is evolving towards a service-oriented model, in which distributed resources (software services, virtualized hardware infrastructure, data repositories, sensors, and network overlays) can be composed as a customized IT service on demand. In particular, cloud computing infrastructure services manage a shared "cloud" of servers as a unified hosting substrate for diverse scientific applications, using various technologies to provision servers and orchestrate their operation. At the same time, high-speed networks increasingly offer dynamic provisioning services at multiple layers. Network-connected clouds offer a general, flexible, and powerful model to scale up computing power for data-intensive science applications running at multiple cloud sites. The software produced in this project offers interfaces and control policies for application-driven orchestration of federated clouds interconnected by advanced networks.
The project develops software to link cloud computing clusters to other cyberinfrastructure resources through dynamically provisioned networks. A principal focus is to extend popular cloud infrastructure software with hooks to connect provisioned machine instances running in the cloud to external resources through dynamic circuit networks. The project enables cloud applications to dynamically request compute resources at multiple points in the network, together with bandwidth-provisioned network pipes to interconnect them and link them with other services and data repositories. The orchestration framework is based on the Open Resource Control Architecture (ORCA), an extensible platform for dynamic leasing of resources in a shared networked infrastructure. The resource allocation policies are enabled through semantic resource descriptions and extended intelligent SPARQL queries. Driving applications for this project are MotifNetwork, IMG/JGI, Supernova Factory, and sensor networks linked to cloud resources (CASA). Development and demonstrations leverage the Breakable Experimental Network (BEN, a multi-layer optical network testbed located in North Carolina), NLR, and ESnet.
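To make the "compute at multiple points plus a provisioned pipe" request concrete, here is a small, hypothetical sketch of what such an orchestration request might look like as plain data. The field names, sites, and helper function are illustrative assumptions, not ORCA's actual interfaces or its semantic resource description language.

```python
# Hypothetical sketch of an application-driven request for a networked-cloud
# "slice": virtual machines at two cloud sites plus bandwidth-provisioned
# network pipes connecting them and a data repository.
slice_request = {
    "name": "genome-assembly-run-42",
    "compute": [
        {"site": "cloud-site-A", "vm_count": 8,  "image": "assembler-node"},
        {"site": "cloud-site-B", "vm_count": 16, "image": "assembler-node"},
    ],
    "links": [
        {"endpoints": ("cloud-site-A", "cloud-site-B"),   "bandwidth_mbps": 1000},
        {"endpoints": ("cloud-site-B", "data-repository"), "bandwidth_mbps": 500},
    ],
}

def total_vms(request):
    """A trivial consumer of the request: how many VMs must be provisioned in all."""
    return sum(site["vm_count"] for site in request["compute"])

print(total_vms(slice_request))   # 24
```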
2012 — 2014
Chase, Jeffrey; Futhey, Tracy
EAGER: Duke On-Ramps: OpenFlow-Enabled Network Resource Access That Is Manageable, Programmatic, and Safe
This project will deploy a pilot Software-Defined Science Network (SDSN) to interconnect research computing resources on the Duke campus and link them to national WAN circuit fabrics. This pilot deployment will support both Computer Science applications such as GENI and domain sciences such as Duke's Institute for Genome Sciences and Policy (IGSP) and Duke's high-energy physics group (HEP). Additionally, the SDSN will connect to the Duke Shared Cluster Resource (DSCR), a 4300-core shared batch job service used by domain scientists all over the Duke campus. The key goal is to enable domain scientists to request virtual networks that are 'simple and scalable enclaves' for science networking, and that link selected resources on campus with selected resources outside, while excluding unrelated traffic.
The project will experiment with OpenFlow controllers on a trial basis within isolated flowspace slices of the SDSN, including OpenFlow-enabled traffic engineering policies that offload science traffic onto the SDSN. The project plan is that initial trial demos will exercise cloudbursting capability to expand computing service into a cloud site, and (potentially) support virtual machine migration among the OpenStack cloud testbeds on campus. The PIs will report on their experience in technical papers, and travel budget is included for presenting lessons learned to other university CIOs and CTOs. Although software-defined networking (SDN) technologies are currently being widely discussed and are key elements in the GENI architecture, there is little operational or campus-level architectural experience with using them. The project will advance the state of the art in integrating SDN technologies into campus networks, and in enabling safe, controlled interconnection of science resources and GENI resources within and across campuses. The project seeks to devise and implement practical solutions that are easily reproducible beyond the initial prototype, scalable to wider use, and grounded in technologies that are (or soon will be) solid, manageable, and commercially available for deployment throughout production campus networks. The project outcomes include reporting of results and lessons to other campus network operators and to SDN researchers and industry.
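As a rough illustration of the kind of traffic-engineering policy the pilot will trial, the sketch below classifies new flows by source subnet and offloads matching science traffic onto a dedicated SDSN uplink, while all other traffic takes the default campus path. The subnets, port names, and rule representation are hypothetical, not the controllers or rules actually deployed.

```python
# Hypothetical sketch of an OpenFlow-style offload policy: science flows
# (identified here by source subnet) are forwarded out a dedicated SDSN
# uplink, while unrelated traffic follows the normal campus path.
import ipaddress

SCIENCE_SUBNETS = [ipaddress.ip_network("10.20.0.0/16")]   # e.g., cluster and instrument hosts
SDSN_PORT = "sdsn-uplink"
CAMPUS_PORT = "campus-default"

def choose_output_port(src_ip: str) -> str:
    """Return the switch output port for a new flow based on its source address."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in SCIENCE_SUBNETS):
        return SDSN_PORT          # offload science traffic onto the SDSN slice
    return CAMPUS_PORT            # everything else stays on the normal path

def flow_rule(src_ip: str) -> dict:
    """Build a flow-table entry (as plain data) for the first packet of a flow."""
    return {"match": {"ipv4_src": src_ip}, "action": {"output": choose_output_port(src_ip)}}

print(flow_rule("10.20.5.7"))    # routed via the SDSN uplink
print(flow_rule("172.16.9.1"))   # routed via the campus default path
```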
Broader Impact: The pilot will provide an opportunity to gain experience with the architectural, deployment, administrative, and operational issues of OpenFlow in campus settings, serving research and education needs beyond the Computer Science domain. The Duke campus OpenFlow model (GENI-derived technology) offers domain sciences on-demand access to ultra-high-speed networks without performance-limiting firewalls, and as such will provide direct benefit to the domain sciences. The PIs will report on issues and operational experience associated with the deployment. These reports, and the PIs' willingness to share their experience with other universities, will reduce barriers to use of GENI from campuses, establish GENI technologies and OpenFlow as building blocks for science networking, enhance support for computational science on the Duke campus, and facilitate sharing of resources and data among science researchers and their collaborators on and off campus.
2013 — 2018
Chase, Jeffrey
TWC: Frontier: Collaborative: Rethinking Security in the Era of Cloud Computing
There are at least two key features of the move to cloud computing that introduce the opportunity for significant leaps forward in computer security for tenant services. First, a compute cloud provides a common software, hardware and management basis for rolling out cross-cutting services en masse that have resisted incremental deployment in a one-service-at-a-time fashion. Second, compute clouds offer providers a broad view of activity across an unprecedented diversity of tenant services. This research project leverages these features to develop new approaches to a wide array of fundamental problems in computer security. By convening Cloud Security Horizons summits with industry stakeholders, this project further seeks to both contribute to industry directions in cloud computing and to be informed by them.
Particular longstanding security challenges addressed in this project include secure transport, authorization, user and software authentication, security monitoring, and incident analysis. Moreover, since modern clouds are not sufficiently extensible to support the envisioned capabilities, this project is constructing cloud software platforms that enable the flexibility, extensibility and security needed for this research to come to fruition in practice.
2013 — 2015
Chase, Jeffrey
Collaborative Research: CC-NIE Integration: Transforming Computational Science with ADAMANT (Adaptive Data-Aware Multi-Domain Application Network Topologies)
Workflows, especially data-driven workflows and workflow ensembles, are becoming a centerpiece of modern computational science. However, scientists lack tools that integrate the operation of workflow-driven science applications on top of dynamic infrastructures that link campus, institutional, and national resources into connected arrangements targeted at solving a specific problem. These tools must (a) orchestrate the infrastructure in response to application demands, (b) manage application lifetime on top of the infrastructure by monitoring various workflow steps and modifying slices in response to application demands, and (c) integrate data movement with the workflows to optimize performance.
Project ADAMANT (Adaptive Data-Aware Multi-domain Application Network Topologies) brings together researchers from RENCI/UNC Chapel Hill, Duke University, and USC/ISI, and two successful software tools, to solve these problems: the Pegasus workflow management system and the ORCA resource control framework developed for NSF GENI. The integration of Pegasus and ORCA enables powerful application- and data-driven virtual topology embedding into multiple institutional and national substrates (providers of cyber-resources such as computation, storage, and networks). ADAMANT leverages ExoGENI, an NSF-funded GENI testbed, as well as national providers of on-demand bandwidth services (NLR, I2, ESnet) and existing OSG computational resources, to create elastic, isolated environments for executing complex distributed tasks. This approach improves the performance of these applications and, by explicitly including data movement planning in the application workflow, enables new capabilities for distributed data-driven "Big Science" applications.
2016 — 2019
Chase, Jeffrey
Collaborative Research: CICI: Secure and Resilient Architecture: Creating Dynamic Superfacilities the SAFE Way
Modern computational science is becoming increasingly collaborative as scientists utilize remote shared facilities, including instruments, compute resources, and data repositories. Department of Energy (DOE) researchers have coined the term "superfacility" to describe an integration of two or more existing facilities using high-performance networks and data management software in order to increase scientific output. Currently, superfacilities are purpose-built manually for a specific scientific application or community, limiting their use to large projects that are long-lived. Recent advances in campus science networks (Science DMZs) and federated Infrastructure-as-a-Service, as in NSF GENI, provide the basic building blocks to enable construction of dynamic superfacilities on demand. Automating the creation of superfacilities reduces their cost but introduces new security challenges. By design, their dynamic network links bypass campus security appliances in order to maintain a friction-free network path; security for these paths is typically addressed by managing interconnections manually. This project creates a framework to automate, authorize, and monitor end-to-end connectivity across dynamic superfacilities, bringing this technology to a wider range of scientists.
The SAFE Superfacilities project brings together researchers and IT support organizations from RENCI/UNC Chapel Hill, Duke University and DOE/ESnet. The goal of this project is to generalize support for stitching dynamic network circuits by providing the authorization and security monitoring necessary to enable general, dynamic, and safe interconnections as a foundational building block for Science DMZ, Software Defined Exchanges (SDX), and superfacilities. One element of the project focuses on using the SAFE logical trust system to authorize dynamic stitching of network links in two systems developed, deployed, and operated by the researchers and their collaborators: the ExoGENI testbed and Duke's Software-Defined Science Network (SDSN) campus network exchange. A second element addresses dynamic out-of-band security monitoring of traffic over these links. The project serves as a model for improving security while maintaining high-performance friction-free network paths between campus scientists and remote facilities.
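As a toy illustration of the authorization question described above (may this requester stitch a dynamic circuit into this resource?), the sketch below walks a chain of delegations from the resource owner to the requester. It is a simplification for intuition only; the names are invented and it is not SAFE's logic language or its actual trust structures.

```python
# Hypothetical, simplified illustration of delegation-based authorization for
# dynamic stitching: the DMZ operator delegates "stitch" to a testbed operator,
# who delegates it to a project; a request is authorized only if a delegation
# chain for that privilege links the resource owner to the requester.
delegations = {
    # issuer -> set of (subject, privilege) pairs the issuer has granted
    "duke-dmz-operator": {("exogeni-ops", "stitch")},
    "exogeni-ops": {("superfacility-project-7", "stitch")},
}

def authorized(owner, requester, privilege, seen=None):
    """True if a chain of delegations for `privilege` leads from owner to requester."""
    seen = seen if seen is not None else set()
    if owner in seen:
        return False                      # guard against delegation cycles
    seen.add(owner)
    for subject, priv in delegations.get(owner, ()):
        if priv != privilege:
            continue
        if subject == requester or authorized(subject, requester, privilege, seen):
            return True
    return False

print(authorized("duke-dmz-operator", "superfacility-project-7", "stitch"))  # True
print(authorized("duke-dmz-operator", "random-tenant", "stitch"))            # False
```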