2008 — 2013
Funkhouser, Thomas (co-PI); Freedman, Michael
Collaborative Research: NeTS-ANET: A Network Architecture For Federated Virtual/Physical Worlds
This research asks how one might design a network architecture to support three-dimensional virtual worlds as a dominant application platform. The architecture rests on three key design principles. First, rather than being centralized or peer-to-peer, the architecture is based on federation: cooperative but not necessarily collaborative interaction among multiple parties. This lets providers enforce local administrative and security policies, yet requires new support for discovery, messaging, and migration between and within domains. Second, application communication is grounded in three-dimensional coordinate spaces: objects can communicate only after being introduced through proximity. This geometric addressing decouples applications from their physical locations on hosts and introduces interesting security protections against unwanted communication. Third, by using this communication model, the architecture can directly interface with and connect to the physical world, leading to new possibilities for virtual interactions.
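To make the geometric-addressing model concrete, below is a minimal Python sketch of proximity-introduced messaging: objects register three-dimensional positions, a discovery query within a radius "introduces" pairs of objects, and any send without a prior introduction is refused. All class, method, and object names here are hypothetical illustrations, not the project's actual API.

```python
# Hypothetical sketch of proximity-introduced messaging; not the Meru API.
import math
from dataclasses import dataclass

@dataclass
class Obj:
    oid: str
    pos: tuple          # (x, y, z) world coordinates

class World:
    def __init__(self):
        self.objects = {}         # oid -> Obj
        self.introduced = set()   # (src, dst) pairs allowed to communicate

    def register(self, obj):
        self.objects[obj.oid] = obj

    def discover(self, src_oid, radius):
        """Introduce src to every object within `radius` of its position."""
        src = self.objects[src_oid]
        for other in self.objects.values():
            if other.oid != src_oid and math.dist(src.pos, other.pos) <= radius:
                self.introduced.add((src_oid, other.oid))
        return [d for (s, d) in self.introduced if s == src_oid]

    def send(self, src_oid, dst_oid, msg):
        # Refusing sends without a prior proximity introduction is the
        # security protection against unwanted communication noted above.
        if (src_oid, dst_oid) not in self.introduced:
            raise PermissionError("no proximity introduction")
        print(f"{src_oid} -> {dst_oid}: {msg}")

world = World()
world.register(Obj("avatar-1", (0.0, 0.0, 0.0)))
world.register(Obj("door-7", (1.0, 2.0, 0.0)))
world.discover("avatar-1", radius=5.0)    # introduces avatar-1 to door-7
world.send("avatar-1", "door-7", "open")  # permitted only after discovery
```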
Much as the Internet was designed around a layered communication model, this research designs a new layered approach for virtual worlds: from a high-level object layer providing a rich programming environment for immersive virtual worlds, through the narrow waist of geometry-based communication, down to an underlying service layer that implements computation, storage, and communication mechanisms. With backgrounds spanning networking, systems, and graphics, the investigators have been developing a highly extensible and customizable virtual world system, Meru. This new project will develop the network architecture necessary to enable seamless interaction and interoperation among many different Meru-based virtual worlds.
Integrating virtual worlds is already a pressing issue and concern among providers. Research toward a unifying networked system architecture would improve these efforts and could lay the groundwork for a next-generation programming platform for the Internet. It would bridge the current divide between today's logical, host-centric networks and the emerging sensor networks of tomorrow. By incorporating existing efforts toward building an open, scalable virtual world system, the research will have impact in all of the areas where virtual worlds are already bringing change. Fundamentally, virtual worlds, even more so than the Internet, are a platform for inter-personal communication, affecting education, public services and planning, commerce, and social networks.
2009 — 2013
Rexford, Jennifer (co-PI); Freedman, Michael
NeTS: Medium: A SCAFFOLD For Service-Centric Networking
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This research proposes a new network architecture, SCAFFOLD, that directly supports the needs of wide-area services. SCAFFOLD treats service-level objects (rather than hosts) as first-class citizens and explores a tighter coupling between object-based naming and routing. A clean-slate, scalable version of the federated SCAFFOLD architecture is being designed and prototyped. System components include programmable routers/switches, resolution services for object-based lookup and forwarding, and integrated end-hosts.
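As a hedged illustration of object-based naming and routing, the following Python sketch shows a toy resolution service that binds service-level object identifiers to a changing set of host instances and resolves them only at send time, so objects can migrate or fail over without clients re-learning host addresses. The identifier format and method names are assumptions for illustration, not SCAFFOLD's actual interfaces.

```python
# Toy object-based resolution service; names and ID scheme are illustrative.
import random

class ResolutionService:
    def __init__(self):
        self.bindings = {}  # object_id -> set of (host, port) instances

    def register(self, object_id, instance):
        self.bindings.setdefault(object_id, set()).add(instance)

    def unregister(self, object_id, instance):
        self.bindings.get(object_id, set()).discard(instance)

    def resolve(self, object_id):
        # Late binding: an instance is chosen at send time, so instances
        # can come and go while clients keep addressing the same object.
        instances = self.bindings.get(object_id)
        if not instances:
            raise LookupError(f"no instance for {object_id}")
        return random.choice(tuple(instances))

rs = ResolutionService()
rs.register("svc:photo-store/obj:42", ("10.0.0.5", 8000))
rs.register("svc:photo-store/obj:42", ("10.0.1.9", 8000))
print(rs.resolve("svc:photo-store/obj:42"))  # either replica may be chosen
```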
At the center of people's "digital lives" today are online services, not the networks or computers on which they run. The research ultimately explores the abstractions and mechanisms that will make the future network a powerful, flexible hosting platform for wide-area services (the so-called "cloud"). In doing so, SCAFFOLD would lower the barrier to deploying networked services that are scalable, reliable, secure, energy-efficient, and easy to manage.
The project includes a summer-camp outreach activity with schools serving under-represented groups to build services on top of SCAFFOLD, the development of new courses, and technology transfer with industry.
2009 — 2013
Freedman, Michael
NeTS: Medium: Collaborative Research: Designing a Content-Aware Internet Ecosystem
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
A significant majority of current Internet traffic is due to distributing content, yet the Internet was designed to be largely agnostic to the characteristics of the content flowing over it. This research investigates the design and operation of a content-aware Internet ecosystem, which thrives on the interaction among users (who seek fast and correct downloads), content providers (who generate content and seek its cost- and resource-efficient dissemination), and network providers (who seek to minimize network congestion and transit traffic).
This research takes a two-pronged approach. On one hand, it explores novel analysis of the fundamental performance limits of a content-aware Internet ecosystem, rigorously characterizing the benefits of an intelligently designed cross-layer architecture. On the other hand, it develops the mechanisms and a practical implementation of a content distribution system by which the involved parties can interact constructively to achieve these gains while respecting each other's interests (a toy example of one such mechanism is sketched below). This approach combines a range of techniques, including modeling and theoretical analysis, measurement and data analysis, system design, simulation, and system implementation.
Affordable and ready access to digital content helps inform, educate, and entertain society as a whole. Additionally, developing cost- and resource-effective delivery techniques can reduce the friction that continues to build among the involved parties and better inform the technical side of the network-neutrality debate. To enhance this impact, the project includes an educational component involving local universities serving under-represented groups, curriculum development, and interactions with industry.
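One mechanism by which these parties can interact constructively is locality-aware peer selection, sketched below in Python: a tracker ranks candidate peers so that downloads prefer same-ISP or peering paths over costly transit, serving users' speed and providers' cost goals at once. The cost model, AS numbers, and field names are assumptions made purely for illustration.

```python
# Illustrative locality-aware peer ranking; the cost model is assumed.
TRANSIT_COST = {"same_as": 0, "peering": 1, "transit": 2}

def classify(client_as, peer_as, peering_map):
    if peer_as == client_as:
        return "same_as"
    if peer_as in peering_map.get(client_as, set()):
        return "peering"
    return "transit"

def rank_peers(client_as, candidates, peering_map, k=5):
    """Return up to k peers, cheapest network path first,
    breaking ties by advertised upload bandwidth."""
    return sorted(
        candidates,
        key=lambda p: (TRANSIT_COST[classify(client_as, p["as"], peering_map)],
                       -p["upload_kbps"]),
    )[:k]

peering = {64501: {64502}}   # client's AS peers with AS 64502
peers = [
    {"addr": "198.51.100.7", "as": 64501, "upload_kbps": 800},
    {"addr": "203.0.113.2",  "as": 64502, "upload_kbps": 2000},
    {"addr": "192.0.2.10",   "as": 64999, "upload_kbps": 5000},
]
print(rank_peers(64501, peers, peering))  # same-AS peer first, transit last
```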
2010 — 2017
Freedman, Michael
CAREER: Towards Scalable Datacenter Services With Strong Robustness Guarantees
Modern datacenters have reached hundreds of thousands of servers, and modern distributed services can be spread across several or even dozens of datacenters worldwide. For scalability, availability, and performance, these services have increasingly embraced a weakened model of data consistency. This trade-off has proved highly successful for applications such as web crawling, search, and content distribution. On the other hand, the recent trend of moving ever-more dynamic applications to the "cloud" portends a shift in service requirements, in which losing data or applying operations out of order may not be acceptable, as does far-flung demand for concurrent access to data and services.
To meet these changing needs, this research project reconsiders the challenge of building storage and replicated systems that offer strong robustness guarantees at scale. Recognizing that large-scale, complex systems are typically built up from smaller groups (which either provide different functionality or partition some larger state space into smaller, more manageable parts), it leverages a group-compositional approach to tackle the problem. First, the research develops novel protocols for smaller groups of nodes that offer strong properties at minimal overhead. Second, it proposes a coordination service and a suite of management algorithms that adaptively organize these groups and compose them together. These algorithms explore problems of dynamic load balancing, topological control, and security. The research includes substantial systems building, including the incorporation of its core services into several other distributed systems (e.g., a scalable virtual world and a distributed name resolution service).
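To illustrate the group-compositional approach, here is a hedged Python sketch of a coordination layer that partitions a large key space across many small replicated groups using consistent hashing, so that groups can join or leave with only local re-partitioning; each group is assumed to run its own strong-consistency protocol internally. The names and hashing scheme are illustrative, not the project's actual services.

```python
# Toy coordination layer: consistent-hash mapping of keys to replica groups.
import bisect
import hashlib

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Coordinator:
    """Maps keys to replica groups on a consistent-hash ring, so adding
    or removing a group moves only the keys adjacent to it on the ring."""
    def __init__(self):
        self.ring = []    # sorted list of (point, group_id)

    def add_group(self, group_id):
        bisect.insort(self.ring, (h(group_id), group_id))

    def group_for(self, key):
        if not self.ring:
            raise RuntimeError("no groups registered")
        # First ring point at or past the key's hash, wrapping around.
        i = bisect.bisect(self.ring, (h(key), "")) % len(self.ring)
        return self.ring[i][1]

coord = Coordinator()
for g in ("group-a", "group-b", "group-c"):
    coord.add_group(g)
print(coord.group_for("user:1234"))   # the group whose internal consensus
                                      # protocol owns this key's partition
```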
The remarkably rich and varied Internet services that run out of datacenters form the core of people's online experiences today. The algorithms and software systems developed by this research may lower the barrier to developing datacenter services that are scalable, reliable, secure, efficient, and easy to manage. And by reducing the technical difficulty of building robust, large-scale applications, by providing them with a firm foundation, this research may enable further innovation in Internet services.
2010 — 2014
Freedman, Michael
FIA: Collaborative Research: NEBULA: A Future Internet That Supports Trustworthy Cloud Computing
Cloud computing provides economic advantages from shared resources, but security is a major risk for remote operations and a major barrier to the approach, with challenges for both hosts and the network. NEBULA is a potential future Internet architecture providing trustworthy networking for the emerging cloud computing model of always-available network services.
NEBULA addresses many network security issues, including: data availability, with a new core architecture (NCore) based on redundant connections to and between NEBULA core routers; accountability and trust, with a new policy-driven data plane (NDP); and extensibility, with a new control plane (NVENT) that supports network virtualization, enabling results from other future Internet architectures to be incorporated in NEBULA. NEBULA's data plane uses cryptographic tokens as demonstrable proofs that a path was both authorized and followed. The NEBULA control plane provides one or more authorized paths to NEBULA edge nodes; multiple paths provide reliability and load balancing. The NEBULA core uses redundant high-speed paths between data centers and core routers, as well as fault-tolerant router software, for always-on core networking.
The NEBULA architecture removes network (in)security as a prohibitive factor that would otherwise prevent the realization of many cloud computing applications, such as electronic health records and data from medical sensors. NEBULA will produce a working system that is deployable on core routers and is viable from both an economic and a regulatory perspective.
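The path-proof idea can be conveyed with a deliberately simplified Python sketch: assume each on-path router shares a symmetric key with a verifier and stamps the packet with an HMAC chained over the authorized path and the previous stamps, so any off-path hop or reordering breaks the chain. The real NEBULA data plane is considerably more involved; this only illustrates the "proof that a path was both authorized and followed" concept.

```python
# Simplified proof-of-path stamping; keys and packet layout are assumed.
import hmac
import hashlib

def stamp(key, path, payload, prev_stamps):
    msg = b"|".join(path) + payload + b"".join(prev_stamps)
    return hmac.new(key, msg, hashlib.sha256).digest()

def forward(packet, router_keys, path):
    """Each router on `path` appends its proof-of-transit stamp."""
    for r in path:
        packet["stamps"].append(
            stamp(router_keys[r], path, packet["payload"], packet["stamps"]))
    return packet

def verify(packet, router_keys, path):
    """Recompute every stamp; an off-path hop, a skipped hop, or a
    reordering changes the chained MACs and fails the check."""
    stamps = []
    for r, got in zip(path, packet["stamps"]):
        want = stamp(router_keys[r], path, packet["payload"], stamps)
        if not hmac.compare_digest(want, got):
            return False
        stamps.append(got)
    return len(stamps) == len(path)

keys = {b"r1": b"k1", b"r2": b"k2", b"r3": b"k3"}
pkt = {"payload": b"hello", "stamps": []}
authorized = [b"r1", b"r2", b"r3"]
pkt = forward(pkt, keys, authorized)
print(verify(pkt, keys, authorized))  # True for the authorized path
```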
2010 — 2014
Peterson, Larry; Pai, Vivek; Freedman, Michael; Rexford, Jennifer (co-PI)
MRI: Development of a Virtual Cloud Computing Infrastructure
Proposal #: 10-40123
PI(s): Peterson, Larry L.; Freedman, Michael J.; Pai, Vivek; Rexford, Jennifer
Institution: Princeton University
Title: MRI/Dev: Development of a Virtual Cloud Computing Infrastructure
Project Proposed: This project, building VICCI, a programmable cloud-computing research testbed, enables a broad research agenda in the design of network systems that requires both multiple points-of-presence and significant processing/storage capabilities at each site. VICCI, a distributed instrument with a point-of-presence at Princeton, Georgia Tech, Stanford, and U Washington, along with international clusters in Europe and Japan, encompasses both a distributed set of virtualized compute clusters and networking hardware, and the software that enables multiple researchers to innovate both at and above the infrastructure layer. It is designed to support research into both the design and deployment of large-scale distributed services that use such an environment. VICCI enables research in:
- Building-block services (addressing issues of replication, consistency, fault-tolerance, scalable performance, object location, and migration) designed to be used by other cloud applications,
- Developing new cloud programming models designed for targeted application domains, and
- Studying cross-cutting issues at the foundation of the cloud's design, including how to build a trusted cloud platform that ensures confidentiality and integrity of computations outsourced to the cloud.
Plans include bootstrapping VICCI with working software from PlanetLab, with an ultimate goal of folding the results into VICCI itself, thus creating an even more effective platform for research into scalable network systems.
Broader Impacts: This project, strongly influenced by the experience with PlanetLab, which demonstrated the importance of deploying experimental network services on realistic platforms (i.e., platforms realistic enough to attract a real user community), provides a realistic environment in which to evaluate and deploy scalable new network services. VICCI supports deployment studies of prototype systems, accelerating research and teaching by supporting the seamless migration of scalable services and applications from early prototypes. Moreover, it offers a path to re-energize the innovative process that has led to new network services, widespread consumer adoption, and the generation of new economic and social value. It also provides graduate students with extensive experience in building large-scale distributed systems and enables the design of more courses taking advantage of the instrument.
2012 — 2017
Freedman, Michael
TC: Large: Collaborative Research: Facilitating Free and Open Access to Information On the Internet
This project develops methods to provide citizens with information about technologies that obstruct, restrict, or tamper with their access to information. Internet users need an objective, independent, third-party service that helps them determine whether their Internet service provider or government is restricting access to content or specific protocols, or otherwise degrading service. Toward this goal, we are (1) monitoring attempts to block or manipulate Internet content and communications, and (2) evaluating various censorship circumvention mechanisms in real-world deployments. The project develops a large-scale measurement and monitoring service that measures network reachability and performance from a variety of access networks to various Internet services; infers whether ISPs or governments are restricting or otherwise throttling access to various applications and services; and detects attempts to tamper with information presented to users. The project also studies the policy ramifications of making information about censorship and information tampering available to Internet users. It will provide up-to-date information about both the extent of censorship and information tampering in countries around the world and the technologies countries are using to implement censorship and thwart censorship circumvention tools. Discoveries are disseminated through real-time portals and through regular written reports and academic publications.
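A minimal sketch of the basic measurement primitive such a service relies on appears below: fetch a URL from a vantage point, then compare the content digest against what a trusted reference vantage observed, treating divergence as possible blocking or tampering. The function names and error taxonomy are assumptions for illustration, not the project's actual measurement pipeline.

```python
# Illustrative reachability/tampering probe using only the standard library.
import hashlib
import urllib.error
import urllib.request

def probe(url, timeout=10):
    """Return ("ok", status, sha256-of-body) or an error label."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return ("ok", resp.status, hashlib.sha256(body).hexdigest())
    except urllib.error.HTTPError as e:
        return ("http_error", e.code, None)
    except OSError as e:
        # Timeouts and connection resets are what injected RSTs or
        # IP-level blocking typically look like from the client's side.
        return ("unreachable", None, str(e))

def compare(local, reference):
    if local[0] != "ok":
        return "blocked_or_unreachable"
    if reference[0] == "ok" and local[2] != reference[2]:
        return "content_differs"   # possible tampering (or benign CDN variation)
    return "consistent"

# Example use (requires network access):
# verdict = compare(probe("http://example.com/"), reference_observation)
```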
2013 — 2017
Pai, Vivek; Freedman, Michael
BIGDATA: Small: DCM: JetStream: A Flexible Distributed System For Online and In-Place Data Analysis
Many research and commercial endeavors are experiencing dramatic transformations through the use of Big Data, wherein large data repositories are collected and analyzed to reveal trends, correlations, and information that may not be apparent in smaller samples. Current approaches assume a centralized repository, which may be a poor fit in environments where the data generation rate exceeds the network's capabilities. In this project, the PIs investigate system architectures for both real-time and historical analysis of geographically distributed data, combined with research in adaptively reducing data volumes to fit available bandwidth. This combination makes better use of the computation and storage associated with smarter end devices, including, but not limited to, distributed sensors, smart meters, and even full servers, without requiring network upgrades.
Given the historical trends in the growth of computation and storage versus the capacity limits of wide-area networks, this research enables more data collection and analysis to be performed at a lower overall system cost. Further, the ability to dynamically adapt data precision and fidelity to the available network bandwidth allows systems to gracefully and automatically improve performance in the presence of higher-capacity networks. The research enables the collection and analysis of data that is currently left unanalyzed because of network constraints. Such data could include finer-granularity usage data, which could indicate actionable steps toward reducing household energy consumption, or a greater volume of debugging and monitoring data, which could better predict system failures or provide greater insight than current methods.
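To make the adaptive data-reduction idea concrete, the following Python sketch shows an edge node widening its aggregation window until the resulting update stream fits a measured bandwidth budget, trading precision for bandwidth instead of dropping data. The degradation policy and constants are assumptions for illustration, not the system's actual algorithms.

```python
# Illustrative bandwidth-adaptive degradation; policy and numbers assumed.
from collections import defaultdict

def aggregate(readings, bucket_seconds):
    """Collapse (timestamp, value) readings into per-bucket averages."""
    buckets = defaultdict(list)
    for ts, val in readings:
        buckets[ts - ts % bucket_seconds].append(val)
    return sorted((b, sum(v) / len(v)) for b, v in buckets.items())

def degrade_to_fit(readings, budget_bytes, bytes_per_point=16):
    """Widen the aggregation window until the payload fits the budget,
    coarsening fidelity rather than discarding readings outright."""
    for bucket in (1, 10, 60, 600, 3600):     # candidate windows, in seconds
        points = aggregate(readings, bucket)
        if len(points) * bytes_per_point <= budget_bytes:
            return bucket, points
    return 3600, aggregate(readings, 3600)    # coarsest fidelity as fallback

readings = [(t, t % 7) for t in range(0, 600)]   # one reading per second
bucket, points = degrade_to_fit(readings, budget_bytes=1024)
print(f"sent {len(points)} points at {bucket}s granularity")
# -> "sent 60 points at 10s granularity" under this 1 KB budget
```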
2021 — 2024
Freedman, Michael
Collaborative Research: CNS Core: Medium: The Case For Blended Storage
In recent years, data storage technology has been designed for one of two objectives: either improved performance or a lower cost per bit. Consequently, significant effort has gone into building new storage systems that are organized either to be very fast (but expensive) or very cheap (but slow). This proposal takes a different approach: it makes the case for blended storage systems, i.e., software systems that combine multiple different storage technologies, to achieve both high performance and low cost per bit.
Widely used storage data structures and algorithms are designed with a single storage medium in mind, which leads to poor performance when they are run across multiple storage technologies with different characteristics. Therefore, the proposal rethinks the fundamental data structures and algorithms that underpin modern storage systems for blended storage, including indexing, data placement, logging, caching, prefetching, and approximate membership structures. The proposal will design and implement these data structures and algorithms in a prototype key-value store that is designed specifically for blended storage.
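As one toy instance of the data-placement problem described above, the sketch below implements a two-tier key-value store that keeps recently accessed keys on a small, fast (expensive) device and demotes cold keys to a large, cheap one, promoting them again on access. The LRU-based policy and tier stand-ins are assumptions for illustration, not the proposal's design.

```python
# Toy two-tier key-value placement; policy and tier sizes are assumed.
from collections import OrderedDict

class BlendedKV:
    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # stand-in for a fast device (e.g., NVMe)
        self.cheap = {}             # stand-in for a cheap device (e.g., disk)
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)  # newest writes are hottest
        self._demote_if_full()

    def get(self, key):
        if key in self.fast:            # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.cheap[key]         # cheap-tier hit: promote on access
        self.put(key, value)
        self.cheap.pop(key, None)
        return value

    def _demote_if_full(self):
        while len(self.fast) > self.fast_capacity:
            key, value = self.fast.popitem(last=False)  # evict coldest key
            self.cheap[key] = value     # demote rather than discard

kv = BlendedKV(fast_capacity=2)
for k in ("a", "b", "c"):
    kv.put(k, k.upper())
print(kv.get("a"))   # "a" was demoted to the cheap tier, then promoted back
```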
The proposed research will redesign widely used data structures and algorithms for key-value stores to take advantage of blended storage technologies for fast data access at low cost. Since key-value stores underpin many cloud data systems, the immediate benefits will include high-performing data systems at a reduced cost. These will in turn translate to cost-effective cloud services for end users. The PIs will release the software artifacts under open-source licenses to enable other researchers, and will work with hardware and cloud providers to transition the research into practice. The education plan will elevate storage-centric systems design as an integral part of systems education.
The project repository will be hosted on GitHub at https://github.com/princeton-sns and maintained indefinitely.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.