2004 — 2005 |
Snoeren, Alex |
Student Travel Support For Acm Hotnets-Iii Workshop; October 15-16, 2004; San Diego, Ca @ University of California-San Diego
The goal of the Third Workshop on Hot Topics in Networks (HotNets-III), being held in San Diego, California on 15-16 October 2004, is to bring together researchers in the networking and distributed systems community to debate emerging research directions. This workshop promotes community-wide discussion of ideas that are not yet mature, with the expectation that (1) this will influence and foster ongoing research in the community, and (2) many of the HotNets position papers will grow into papers accepted at SIGCOMM or other quality conferences. This grant provides funding to assist 10 US-based graduate students in attending this meeting. Participation in workshops like this is an extremely important part of the graduate school experience, providing the opportunity to interact with more senior researchers and to be exposed to leading-edge work in the field. The support provided enables the participation of students who would otherwise be unable to attend HotNets-III.
2004 — 2010 |
Snoeren, Alex |
Career: Decoupling Policy From Mechanism in Internet Routing @ University of California-San Diego
This research is on separating forwarding policy from route discovery in the Internet, allowing users to select among possibly many inter-AS paths available to them and enabling ISPs to more effectively manage the end-to-end behavior of their traffic. The researcher is exploring the concept of the so-called network capability that binds together a path request, an accountable resource principal, and an authorizing agent. Network capabilities form the basis of Platypus, a loose source routing protocol that composes authorizations from multiple ISPs to construct alternative inter-AS routes that can be independently validated and accounted for on the fly. The proposed research has the potential to significantly improve the performance, reliability, and robustness of the Internet. Network capabilities remove many of the obstacles to deploying sophisticated routing policies in the Internet, enabling network operators to explicitly authorize third parties to affect the network's routing decisions in an accountable, secure, and efficient fashion. The major result of this project will be an implementation of Platypus and a demonstration of its effectiveness in expressing and arbitrating amongst competing ISP routing policies. In particular, this research will show that capabilities can express both existing ISP routing policies and additional, more complex policies not supported by BGP. The Platypus implementation will be made freely available and deployed on the PlanetLab and Internet-2 test-beds.
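The binding at the heart of a network capability can be pictured as a keyed MAC issued by an authorizing ISP and checked independently by routers. The sketch below is illustrative only: the field names, layout, and key handling are assumptions for exposition, not the actual Platypus wire format.

```python
# Illustrative sketch only: a capability as a MAC binding a waypoint,
# a resource principal, and an expiry under an ISP's secret key.
# Field names and layout are assumptions, not the Platypus wire format.
import hmac, hashlib

def mint_capability(isp_key: bytes, waypoint: str, principal: str, expiry: int) -> bytes:
    """The authorizing ISP issues a capability for a path request."""
    payload = f"{waypoint}|{principal}|{expiry}".encode()
    return hmac.new(isp_key, payload, hashlib.sha256).digest()

def verify_capability(isp_key: bytes, waypoint: str, principal: str,
                      expiry: int, tag: bytes, now: int) -> bool:
    """A router independently validates the source route on the fly."""
    if now > expiry:
        return False
    expected = mint_capability(isp_key, waypoint, principal, expiry)
    return hmac.compare_digest(expected, tag)

key = b"isp-secret-key"
tag = mint_capability(key, "waypoint.example.net", "alice", expiry=2_000_000_000)
assert verify_capability(key, "waypoint.example.net", "alice", 2_000_000_000, tag, now=1_700_000_000)
# A different principal cannot reuse the capability:
assert not verify_capability(key, "waypoint.example.net", "mallory", 2_000_000_000, tag, now=1_700_000_000)
```

Because verification needs only the issuing ISP's key and the bound fields, any on-path element can validate and account for the authorization without consulting the sender.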
2004 — 2008 |
Vahdat, Amin [⬀] Snoeren, Alex |
Framework For Designing, Evaluating, and Deploying Global-Scale Adaptive Networked Systems @ University of California-San Diego
National Science Foundation Distributed Systems Research CISE/CNS
ABSTRACT PROPOSAL NO.: 0411307 PRINCIPAL INVESTIGATOR: Vahdat, Amin M INSTITUTION: University of California, San Diego PROPOSAL TITLE: Framework for Designing, Evaluating, and Deploying Global-scale Adaptive Networked Systems
Today, designing and building networked and distributed systems remains fraught with difficulty. Yet, addressing these challenges is becoming increasingly important as an ever larger fraction of the world's infrastructure comes to rely upon networked systems. Challenges faced by networked systems include failures, lack of fate-sharing among system components, highly variable communication patterns, race conditions, difficulty in reproducing bugs, asynchrony, security concerns, etc. While the advent of higher-level programming languages such as Java has raised the level of abstraction and eased this burden, most programmers still face the task of reinventing the appropriate techniques for dealing with asynchronous, failure-prone network environments, techniques known to only a handful of elite programmers.
This proposal is exploring the design and implementation of a high-level language and runtime environment for building and operating robust, high-performance, and highly-available global-scale distributed systems. Target applications include Internet-scale routing protocols, planetary-scale network testbeds, and computational Grids. This work aims to holistically support the entire distributed system life-cycle from design to deployment and operation. The overall goals of this work are to effect a qualitative shift in: i) the time required to build, debug, and deploy robust distributed systems, ii) the ease and accuracy of translating high-level policy to low-level system specifications and configurations, and iii) the ability to compare the utility of competing systems architectures in a fair and consistent manner.
Dr. Brett D. Fleisch, Program Director, CISE/CNS, June 29, 2004.
2005 — 2010 |
Pasquale, Joseph (co-PI) [⬀] Vahdat, Amin [⬀] Snoeren, Alex |
Nets-Nbd: Algorithms and Infrastructure For Shared Mesh-Based Broadcast @ University of California-San Diego
This proposal is developing a set of technologies to create federated content distribution utilities that support the simultaneous delivery of a wide variety of content to overlapping sets of clients with statistical quality of service assurances. The federated content distribution infrastructure will: i) simultaneously meet the performance demands of high-bandwidth and low-latency content delivery and the resource allocation constraints of constituent service providers; ii) weather a variety of attacks both from the outside and from self-interested or malicious nodes directly participating in the protocol; iii) incorporate basic algorithms for distributing content under a wide variety of dynamic network conditions. Because the infrastructure is shared by a variety of applications and hosted by a number of mutually distrustful administrative domains, the system must provide mechanisms that both adjudicate among competing applications and allow each administrative domain to maintain its own resource allocation policies.
If successful this research will effect a qualitative shift in: i) the way in which data is distributed across the Internet; ii) basic algorithms for determining optimal data distribution strategies across arbitrary data meshes; iii) the levels of reliability and performance that can be achieved in critical nation-wide or global-scale event notification systems (e.g., the air-traffic control system); and iv) the ability of resource-poor providers to harness federated distribution utilities to publish urgent content such as Internet worm or virus signatures on a global scale.
2006 — 2010 |
Vahdat, Amin (co-PI) [⬀] Voelker, Geoffrey (co-PI) [⬀] Snoeren, Alex Yocum, Kenneth |
Csr---Pdos: Harnessing Virtualized Cluster Resources @ University of California-San Diego
Large-scale computational resources are increasingly being delivered through distributed clusters of commodity workstations, which, when taken as a whole, provide the raw horsepower of traditional supercomputers at significantly reduced cost. Unfortunately, economic reality still dictates that large clusters must be shared across multiple, distinct applications, each with their own resource needs. This project focuses on designing and implementing an efficient management framework that enables the creation, allocation, and management of virtual clusters.
A logical abstraction layered on top of a set of physical machines, virtual clusters harness virtual machine technology to more efficiently share computational resources between competing application demands while ensuring fault isolation. Critically, this proposal leverages the power of virtual machine monitors to provide novel functionality for an emerging class of applications. In particular, by exposing the dynamic levels of parallelism, dilating logical time, and supporting apparently infinitely large clusters, this work supports the distinctive needs of grid computing, network modeling, and Internet epidemiology.
An output of this work will be a fully operational environment for managing cluster-based computational resources integrated with publicly available virtual machine technology. In addition to dynamically adjusting resources in response to changes in demand and application load, virtual clusters can instantly create clones of existing virtual machines, a functionality critical to the deployment of large-scale high-fidelity honeypots. Finally, the ability to slow down logical time within a virtual cluster---thereby speeding up the relative speed of network communication---enables the emulation of network links orders of magnitude faster than those typically available on commodity clusters.
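The time-dilation effect described above reduces to simple arithmetic; this sketch (with made-up numbers) shows why slowing a guest's logical clock makes the physical network appear faster. It is an illustration of the idea only; the actual mechanism lives inside the virtual machine monitor.

```python
# If the VMM slows a guest's logical clock by a dilation factor, the same
# physical link delivers proportionally more bits per *perceived* second.
# Illustrative sketch; the real mechanism is implemented in the VMM,
# not in guest-visible arithmetic like this.
def perceived_bandwidth(physical_bps: float, dilation: float) -> float:
    """Bandwidth observed by a guest whose clock runs `dilation`x slower."""
    return physical_bps * dilation

# A commodity 1 Gbps link appears as a 10 Gbps link under 10x dilation:
assert perceived_bandwidth(1e9, 10) == 1e10
```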
2006 — 2007 |
Vahdat, Amin (co-PI) [⬀] Savage, Stefan (co-PI) [⬀] Snoeren, Alex |
Nets-Find: Enabling Defense and Deterrence Through Private Attribution @ University of California-San Diego
Research in network security to date focuses largely on defenses---mechanisms that impede the activities of an adversary. Practical security, however, requires a balance between defense and deterrence. While defenses may block current attacks, without a meaningful risk of being caught adversaries are free to continue their attacks with impunity. Deterrence is usually predicated on effective means of attribution---tying an individual to an action. In the physical world attribution is achieved through forensic evidence, but constructing such evidence is uniquely challenging on the Internet.
This project is developing a novel architectural primitive---private attribution, based on group signatures---that allows any network element to verify that a packet was sent by a member of a given group. Importantly, however, actually attributing the packet to a particular group member requires the participation of a set of trusted authorities, thereby ensuring the privacy of individual senders. In addition, this work explores content-based inverse firewalls that can inspect the content of traffic leaving a secured network, ensuring that sensitive information is kept within an enterprise. Approved data can then be labeled by the inspecting firewall, providing an audit trail should concerns arise.
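The trust separation behind private attribution can be illustrated with a toy model. The sketch below substitutes an HMAC plus a pseudonym table for a real group-signature scheme, so it does not have the actual cryptographic properties; the names and structure are illustrative assumptions only.

```python
# Toy model of the private-attribution split. A real design would use
# group signatures; this HMAC-plus-pseudonym stand-in only illustrates
# the trust separation, not the actual cryptography.
import hmac, hashlib, secrets

GROUP_KEY = secrets.token_bytes(32)  # known to group members and verifiers

class Authority:
    """Trusted party that alone can map a pseudonym back to a sender."""
    def __init__(self):
        self._table: dict[bytes, str] = {}

    def enroll(self, identity: str) -> bytes:
        pseudonym = secrets.token_bytes(8)
        self._table[pseudonym] = identity
        return pseudonym

    def open(self, pseudonym: bytes) -> str:
        """The attribution step, available only to the authority."""
        return self._table[pseudonym]

def tag_packet(pseudonym: bytes, payload: bytes) -> bytes:
    return hmac.new(GROUP_KEY, pseudonym + payload, hashlib.sha256).digest()

def verify_member(pseudonym: bytes, payload: bytes, tag: bytes) -> bool:
    """Any network element can confirm group membership---not identity."""
    return hmac.compare_digest(tag_packet(pseudonym, payload), tag)

auth = Authority()
pseudonym = auth.enroll("alice@example.edu")
tag = tag_packet(pseudonym, b"hello")
assert verify_member(pseudonym, b"hello", tag)      # membership verifies
assert auth.open(pseudonym) == "alice@example.edu"  # only the authority attributes
```

The key property being modeled is the split: the network sees only that some group member sent the packet, while tying the packet to an individual requires the authority's private table.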
Broader Impacts: This research is developing a key architectural component to improve the level of security and assurance available to network services. In addition, the PIs are initiating a dialogue among both researchers and network operators about critical policy aspects of network security. In particular, information about the sources of both normal and attack traffic must be safeguarded according to some policy.
2006 — 2010 |
Snoeren, Alex Yocum, Kenneth |
Nets-Nbd: Distributed Rate Limiting @ University of California-San Diego
Distributed resource management remains a fundamental challenge to the operation, development, and success of the Internet. The control of resources, such as network capacity, at topologically distinct locations is necessary in a variety of scenarios: managing complex peering agreements between ISPs, meeting quality of service requirements, delivering content from multiple locations, or defending against malicious network activity. The PIs are developing a new form of traffic policing called distributed rate limiting (DRL) that enforces global rate limits across traffic aggregates at multiple sites, enabling the control of distributed resources as if they were centralized. Critically, DRL ensures that transport-layer flows behave as if they traversed a single rate limiter in the Internet; DRL is effectively transparent, emulating the behavior of a central limiter.
The rate and amount of communication between limiters directly impact the effectiveness of the system; DRL's accuracy and responsiveness depend on the underlying communication fabric. This project studies the ability of various communication fabrics (e.g., meshes, trees, and gossip) to deliver the requisite accuracy and scalability for distributed rate limiting. While DRL has immediate applications in distributed traffic policing, network admission control, and managing complex ISP traffic matrices, the research additionally explores the possibility of both enforcing different kinds of centralized behavior and controlling other rate-based resources.
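One way to picture the mechanism is a proportional-share split of the global limit, with demand estimates exchanged over whatever communication fabric the limiters use. The sketch below is a simplification under that assumption; the project's actual estimators and limiting algorithms are more involved.

```python
# Minimal sketch of distributed rate limiting: each limiter learns its
# peers' recent demands (e.g., via gossip) and enforces a proportional
# slice of the global limit locally. A simplification only; the actual
# DRL estimators and limiting algorithms are more involved.
def local_limits(global_limit: float, demands: dict[str, float]) -> dict[str, float]:
    total = sum(demands.values())
    if total == 0:
        n = len(demands)
        return {site: global_limit / n for site in demands}  # idle: split evenly
    return {site: global_limit * d / total for site, d in demands.items()}

# Three sites jointly policing one 100 Mbps aggregate:
assert local_limits(100.0, {"sd": 60.0, "ny": 30.0, "lon": 10.0}) == \
       {"sd": 60.0, "ny": 30.0, "lon": 10.0}
# When demand shifts, the allocation follows without a central limiter:
assert local_limits(100.0, {"sd": 10.0, "ny": 10.0, "lon": 180.0})["lon"] == 90.0
```

Note how the aggregate never exceeds the global limit even though each site enforces only its own slice; this is the sense in which the distributed limiters emulate a single central one.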
Broader Impacts: The outcomes of this proposal include a public release of the distributed rate limiting mechanism for research, academic and non-commercial purposes. The PIs expect the resulting software to be especially useful in supporting undergraduate course projects.
2007 — 2011 |
Vahdat, Amin (co-PI) [⬀] Voelker, Geoffrey (co-PI) [⬀] Savage, Stefan (co-PI) [⬀] Snoeren, Alex |
Collaborative Research: Nets-Find: Privacy Preserving Attribution & Provenance @ University of California-San Diego
Real-world security policies invariably involve questions of "who" and "what"---who are the principals, what data are they seeking to access, and so forth. By contrast, the present-day Internet architecture concerns itself primarily with issues of "how" and "where"---what are the protocols by which a data item is delivered and to which topological endpoints. This inherent dissonance of purpose makes Internet security a bolt-on affair---with abstract access control policies pushed off to be implemented by particular applications or mapped onto the poor approximations provided by network-level abstractions (e.g., network firewalls). Moreover, these imperfect mechanisms are themselves attacked with impunity since today's Internet architecture provides a functional anonymity that insulates attackers from any meaningful liability.
This project is developing two key architectural capabilities---host attribution (which physical machine sent a packet) and data provenance (what is the "origin" of the data contained within a packet)---to enable the direct expression of a wide range of security policies. Moreover, these properties are being implemented in a fashion that mandates their use (in a strong sense) by the network, but manages to preserve end-user privacy. The PIs are focusing on two key applications in this work: forensic trace-back and attribution for the purpose of attack deterrence, and defense against data exfiltration, placing precise controls over what kinds of data may move across a network.
Broader Impacts: This research is developing key architectural components to improve the level of security and assurance available to network services. In addition, the PIs are initiating a dialogue among both researchers and network operators about critical policy aspects of network security. In particular, information about the sources of both normal and attack traffic must be safeguarded according to some policy.
2009 — 2012 |
Snoeren, Alex Yocum, Kenneth |
Nets: Small: Topology Switching For Data Centers and the Clouds Above @ University of California-San Diego
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This project addresses the complex networking challenge presented by the emerging cloud computing model. Cloud providers must run a diverse set of client applications, each with potentially different networking demands, on shared data-center facilities. Traditionally, a data center network is configured to use the same routing process to choose the "best" route for every flow, regardless of the application. For example, Ethernet frequently performs shortest-path routing along a single spanning tree. Yet data center networks typically exhibit significant redundancy; routing along a single tree leaves many paths unused, sacrificing potential gains in reliability, isolation, and performance.
Topology switching moves beyond this one-size-fits-all approach, providing an architecture for fine-grained multi-topology networking. It allows applications to create custom routing systems within a data center; they can configure multiple logical topologies that, together, are tailored to their reliability and performance requirements. From a cloud provider's perspective, a topology-switched network increases efficiency by multiplexing potentially hundreds of topologies across the same shared physical network. The PIs are designing a scalable topology-switched routing platform that facilitates the exploration of application interfaces, management challenges, novel routing strategies, and performance benefits of this approach.
Ultimately, the project aims to develop a flexible topology management primitive that improves administrators' ability to effectively manage extremely large datacenter deployments. The research is also analyzing the benefits and costs of multi-topology networking. Additional outcomes of this proposal include a public release of the topology switching platform, enabling academic and industrial feedback and adoption.
2009 — 2013 |
Sobel, Joel (co-PI) [⬀] Paturi, Ramamohan [⬀] Savage, Stefan (co-PI) [⬀] Snoeren, Alex Mccubbins, Mathew (co-PI) [⬀] |
Netse: Medium: Network Structure, Incentives, and Outcomes @ University of California-San Diego
This project is developing and validating rigorous models of several central problems in the design of the Internet and its applications, which can be viewed as evolving distributed systems composed of software agents employing some algorithmic process. In these systems, one can predict the outcome of a particular task as a function of a set of input variables: the agent communication pattern, individual incentives, and algorithmic strategy---aspects that have previously been studied in isolation, but rarely in concert. The PIs are developing an analytic framework that explicitly accounts for both network structure and agent incentives. By considering simple tasks like information diffusion, search, and leader election, they will construct a theory that can be applied to practical problems such as Internet routing, congestion control, and reputation systems.
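As a concrete instance of the "simple tasks" mentioned above, the sketch below simulates information diffusion over a graph and shows how communication structure bounds the outcome. The graph, spread model, and parameters are illustrative choices, not the project's actual models.

```python
# Minimal independent-cascade diffusion: each newly informed node passes
# the information to each neighbor with probability p. The graph and
# parameters here are illustrative, not the project's actual models.
import random

def diffuse(adj: dict[int, list[int]], seeds: set[int], p: float,
            rng: random.Random) -> set[int]:
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr not in informed and rng.random() < p:
                informed.add(nbr)
                frontier.append(nbr)
    return informed

# Network structure bounds the outcome: on a ring, certain transmission
# (p = 1) informs everyone, while p = 0 reaches no one beyond the seed.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
assert diffuse(ring, {0}, 1.0, random.Random(1)) == set(range(8))
assert diffuse(ring, {0}, 0.0, random.Random(1)) == {0}
```

Layering agent incentives on top of such a model (e.g., making the transmission probability depend on an agent's payoff) is exactly the kind of coupling of structure and incentives the framework targets.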
Intellectual Merit: This project brings together a team of researchers with backgrounds spanning economics, algorithms, and networking to develop a unified approach to complex social networking that captures both network structure and agent incentives through a rigorous algorithmic framework. This project employs several research methodologies including human-subject experimentation, algorithm design, economic analysis, and large-scale simulation.
Broader Impact: The multidisciplinary approach of this project promises to lead to a comprehensive understanding of the effect of network structure and agent incentives in networks. The project is training graduate and undergraduate students in the skills necessary to deal with social networks, which are emerging as a fundamental underpinning of many applications. The PIs are designing two new courses based upon the research conducted in the project.
2011 — 2014 |
Savage, Stefan (co-PI) [⬀] Snoeren, Alex Levchenko, Kirill (co-PI) [⬀] |
Nets: Small: Understanding Network Failure @ University of California-San Diego
This project seeks to quantify the frequency, duration, causes, and impact of faults across a variety of network classes. Unlike previous efforts that have relied on substantial special-purpose instrumentation and monitoring infrastructure, the PIs are conducting their analysis using only commonly available data sources, such as device logs and configuration records maintained by network operations staff. In this way, they hope not only to provide concrete data regarding the particular networks being evaluated, but also to define a repeatable methodology that can be employed by other researchers and even commercial operators to assess the reliability of other networks. Through partnerships with an academic backbone network (CENIC), an enterprise network services company (Hewlett-Packard), and a large-scale Web services provider (Microsoft), the PIs have obtained access to device logs, operational maintenance records, and configuration information for a significant number of real networks.
Concretely, this project is working to deliver a fault analysis methodology based on readily available data like device logs, configuration information, and operator records such as email lists and trouble tickets; comparative studies regarding the differing failure characteristics of wide-area, enterprise, and data center networks; and a generative model of network faults that can be used to evaluate the suitability and efficacy of different applications and protocols to various network designs.
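To make the log-driven methodology concrete, the sketch below derives failure count, total downtime, and availability from a toy down/up event log. The log format is a made-up stand-in for real device logs and trouble tickets.

```python
# Sketch of the log-driven methodology: from per-link down/up timestamps
# (as a syslog might record them), derive failure count, total downtime,
# and availability. The log format is a made-up stand-in for real logs.
def link_stats(events: list[tuple[float, str]], window: float):
    events = sorted(events)
    downtime, failures, down_at = 0.0, 0, None
    for t, kind in events:
        if kind == "down" and down_at is None:
            down_at, failures = t, failures + 1
        elif kind == "up" and down_at is not None:
            downtime += t - down_at   # close out the outage interval
            down_at = None
    return failures, downtime, 1.0 - downtime / window

# One day of logs (timestamps in hours): two faults, 1.5 h total downtime.
log = [(2.0, "down"), (3.0, "up"), (10.0, "down"), (10.5, "up")]
failures, downtime, availability = link_stats(log, window=24.0)
assert (failures, downtime) == (2, 1.5)
assert abs(availability - 0.9375) < 1e-9
```

Aggregating such per-link statistics across devices and sites yields exactly the comparative failure characteristics and the inputs to a generative fault model that the project targets.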
Broader Impact: In addition to these concrete technical contributions, the broader impacts of this project include the potential to cause network operators to reconsider how current networks are designed to make them less likely to fail, or to fail in more straightforward and easily manageable ways. Moreover, the research effort is imparting onto the next generation of computer scientists the skills necessary to assess, analyze, and model the performance characteristics of operational networks. This project supports two graduate students who assist the PIs in conducting the described work, and receive significant exposure to commercial network environments through industrial internships.
2012 — 2014 |
Snoeren, Alex |
Eager: Personalization in the Information Age @ University of California-San Diego
This work seeks to understand the extent and impact of personalization algorithms in popular on-line forums. In particular, the PI is studying web search engines, social networking sites, and recommendation services like Pinterest, Yelp, and Reddit. While it is well known that each of these classes of information sources performs extensive personalization, there has not been a careful analysis of the impact of their efforts on what users see. This effort is working to answer a number of fundamental questions: How different are the experiences of distinct users? What features do the services use as input to their personalization algorithms? How are they gathered, and where are they stored? To the extent that the services display different information to different users, are users aware, and can they - or third parties - manage and exploit these changes?
This research project aims to understand the impact of various forms of information manipulation - as opposed to outright censorship - on the Internet. Personalization algorithms in particular are known to have the potential to place users inside "filter bubbles," where they see only information that already aligns with their viewpoints. In the worst case, adversaries can actively manipulate how content is produced, discovered, and accessed. This work will develop technologies to give users control over the information that they see to mitigate the effects of information manipulation. Concrete outcomes of this effort include prototype mechanisms to help users "normalize" personalization technologies and potentially move between alternate on-line "personalities."
2013 — 2016 |
Papen, George Ford, Joseph (co-PI) [⬀] Snoeren, Alex Porter, George [⬀] |
Nets: Large: Collaborative Research: Hcpn: Hybrid Circuit/Packet Networking @ University of California-San Diego
Ever-larger data centers are powering the cloud computing revolution, but the scale of these installations is currently limited by the ability to provide sufficient internal network connectivity. Delivering scalable packet-switched interconnects that can support the continually increasing data rates required between literally hundreds of thousands of servers is an extremely challenging problem that is only getting harder. This project leverages microsecond optical circuit-switch technology to develop a hybrid switching paradigm that spans the gap between traditional circuit switching and full-fledged packet switching, achieving a level of performance and scale not previously attainable. This will result in a hybrid switch whose optical switching capacity is orders of magnitude larger than that of the electrical packet switch, yet whose performance from an end-to-end perspective is largely indistinguishable from that of a giant (electrical) packet switch.
The research provides a quantitative baseline for hybrid network design across a wide range of present and future technologies. The project will consist of five parts: i) traffic characterization to identify the class of network traffic that a circuit switch can support as well as the partitioning of the traffic between the circuit and packet portions of the network; ii) circuit scheduling to enable the circuit switch to rapidly multiplex a set of circuits across a large set of data center traffic flows; iii) traffic conditioning to reduce the variability of traffic at the end hosts, easing the demands placed on switch scheduling; iv) a prototype hybrid network that can use an optical circuit switch that operates three orders of magnitude faster than existing solutions; and v) a trend analysis to understand the tradeoffs resulting from potential future technology advances.
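The traffic-partitioning step in part (i) can be caricatured as follows: hand the heaviest source-destination demands to the circuit switch (at most one circuit per source and per destination port, i.e., a matching) and leave the remainder on the packet switch. This greedy sketch is illustrative only; the project's actual circuit schedulers are far more sophisticated.

```python
# Illustrative partitioning for a hybrid switch: greedily assign the
# largest source->destination demands to circuits (one circuit per
# source and per destination port); the rest go to the packet switch.
# A caricature only---real circuit schedulers are far more sophisticated.
def schedule_circuits(demand: dict[tuple[str, str], float]):
    circuits, used_src, used_dst = {}, set(), set()
    for (src, dst), vol in sorted(demand.items(), key=lambda kv: -kv[1]):
        if src not in used_src and dst not in used_dst:  # greedy matching
            circuits[(src, dst)] = vol
            used_src.add(src)
            used_dst.add(dst)
    packet = {k: v for k, v in demand.items() if k not in circuits}
    return circuits, packet

demand = {("a", "b"): 90.0, ("a", "c"): 5.0, ("c", "b"): 4.0, ("c", "a"): 80.0}
circuits, packet = schedule_circuits(demand)
assert circuits == {("a", "b"): 90.0, ("c", "a"): 80.0}  # heavy flows get circuits
assert packet == {("a", "c"): 5.0, ("c", "b"): 4.0}      # mice stay on packets
```

Rapid multiplexing then amounts to recomputing such a matching every scheduling interval as demands change, which is why microsecond reconfiguration times matter.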
The work stands to dramatically improve data center networks, significantly reducing operating costs and increasing energy efficiency. The research material will be incorporated into courses, helping to train the next generation of computer networking scientists and engineers. The PIs will also continue ongoing outreach to high school students, both through the UCSD COSMOS summer program and through talks delivered at local high schools.
2014 — 2017 |
Snoeren, Alex |
Nets: Small: Collaborative Research: Studying and Improving the Performance of Access Networks @ University of California-San Diego
The United States will see increasing penetration of high-speed connectivity into homes as a result of the national broadband initiative and related infrastructure investments. Similarly, more users around the world are using broadband access networks as an important (if not primary) means of Internet access---for example, these networks are becoming increasingly prevalent in developing regions of sub-Saharan Africa.
Researchers, operators, and policymakers need to understand the performance of broadband access networks, how the performance of the network affects the services that people commonly use, and how ultimately to improve the performance of these networks and services. Many factors, ranging from a user's choice of equipment in their home network to congestion on network links to business arrangements between Internet service providers, can affect the performance that users experience for different applications. Yet, despite our dependence on Internet connectivity from broadband access networks, we have little understanding of how various factors affect the ultimate performance of popular services. A better understanding of the performance that users experience from the edge of the network and the causes of poor performance -- driven by data from real deployments and sound empirical methods -- can better inform our discussions about both policy and technology investment and ultimately improve the experience for billions of Internet users around the world. Understanding and improving performance in both home and access networks will require an underlying network infrastructure that is flexible, extensible, and easy to manage. The algorithms and tools that we develop to help users and network operators troubleshoot and improve network performance for users will ultimately lead to better performance and lower operational costs.
Towards this goal, this project is developing platforms and first-of-their-kind datasets that will help users, operators, and researchers better understand the nature of performance in access networks. The project has two parts: (1) Characterizing and improving the performance of access networks; and (2) Characterizing and improving application and service performance from access networks. In contrast to previous studies of residential access networks, which have focused on the performance of the wired access link between the home network and the Internet service provider, this effort takes a holistic approach by also exploring the performance of the home wireless network and a range of factors inside the home that may ultimately contribute to the performance that users experience. Additionally, this project is developing better ways to help service providers determine why (and where) these performance degradations occur, such as whether problems are caused by the user's home network or device, the failure or degradation of some aspect of the service (e.g., the user's domain name system (DNS) server may be slow), or some other cause. Building on existing collaborations with several large service providers, the investigators will explore the underlying causes of service performance degradation from the edge of the network, as well as ways to mitigate such degradation. The researchers will also work with policy organizations in developing regions to better understand the nature of their network and application performance, as well as possible mechanisms and recommendations for improving the performance in these regions.
The project's outcomes will be disseminated in many ways, including integration into on-campus courses and, where appropriate, massive open online courses. The researchers have close relationships with Internet service providers, content providers, policy organizations, and regulators in several countries and plan to communicate the results of the research to these groups, through publications, talks, tutorials, and workshops. They also have a history of involving undergraduates and minorities in research and will extend these efforts as part of this project. For example, given the importance of measuring broadband in developing regions, the researchers will make specific concerted efforts to engage students in developing regions, through lectures, seminars, and, where applicable, direct advising.
2016 — 2019 |
Jhala, Ranjit (co-PI) [⬀] Savage, Stefan (co-PI) [⬀] Snoeren, Alex Levchenko, Kirill [⬀] |
Cps: Synergy: Collaborative Research: Foundations of Secure Cyber-Physical Systems of Systems @ University of California-San Diego
Factories, chemical plants, automobiles, and aircraft have come to be described today as cyber-physical systems of systems: distinct systems connected to form a larger and more complex system. For many such systems, correct operation is critical to safety, making their security of paramount importance. Unfortunately, because of their heterogeneous nature and special purpose, it is very difficult to determine whether a malicious attacker can make them behave in a manner that causes harm. This type of security analysis is an essential step in building and certifying secure systems.
Unfortunately, today's state-of-the-art security analysis tools are tailored to the analysis of server, desktop, and mobile software. We currently lack the tools for analyzing the security of cyber-physical systems of systems. The proposed work will develop new techniques for testing and analyzing security properties of such systems. These techniques will be used to build a new generation of tools that can handle the complexity of modern cyber-physical systems and thus make these critical systems more secure. The technical approach taken by the investigators is to apply proven dynamic analysis techniques, including dynamic information flow tracking and symbolic execution, to this problem. Existing tools, while powerful, are monolithic, designed to apply a single technique to a single system. Scaling them to multiple heterogeneous systems is the main contribution of the proposed work. To do so, the investigators will develop a common platform for cross-system dynamic analysis supporting arbitrary combinations of component execution modes (physical, simulated, and emulated), requiring new coordination mechanisms. Second, building on the platform above, they will implement cross-system dynamic information flow tracking, allowing dynamic information flow tracking across simulated, emulated, and potentially physical components. Third, they will extend existing symbolic/concrete execution techniques to execution across multiple heterogeneous systems. Fourth, they will introduce new ways of handling special-purpose hardware, a problem faced by dynamic analysis tools in general.
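The dynamic information flow tracking the proposal builds on can be illustrated with a generic textbook sketch (not the investigators' tool): a value carries labels naming its sources, and labels propagate through computation, so an analyst can later see that a control decision depended on, say, a physical sensor reading. The class name and labels below are hypothetical.

```python
class Tainted:
    """Minimal dynamic taint-tracking wrapper: a value carries a set of
    source labels, and labels propagate through arithmetic."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

    __radd__ = __add__

# A sensor reading from a physical component flows into a control
# computation running in an emulated component; the taint label
# survives the crossing, recording the cross-system dependency.
sensor = Tainted(42.0, {"sensor:temperature"})
setpoint = 40.0
error = sensor + (-setpoint)  # control-loop input now carries the label
```

The hard part the project targets is exactly what this sketch elides: keeping such labels coherent when a value leaves one execution mode (e.g., a hardware-in-the-loop component) and enters another (a simulator or emulator).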
|
2016 — 2019 |
Voelker, Geoffrey (co-PI); Savage, Stefan (co-PI); Snoeren, Alex; Porter, George (co-PI); Levchenko, Kirill (co-PI) |
II-New: A Dual-Purpose Data Analytics Laboratory @ University of California-San Diego
The research enabled by the supported infrastructure has the potential to dramatically impact society in two ways. First, by undermining entire cyber-crime ecosystems: disrupting underground activities, infrastructure, and social networks through strategic intervention. Inhibiting the flow of money reduces the profitability of these activities, thereby subverting the key incentive underlying modern cybercrime. Second, improved efficiency of data center networks will significantly reduce operating costs and increase energy efficiency. The infrastructure will also create educational opportunities for students at a variety of levels, expanding the research skills of postdoctoral researchers, graduate students, and undergraduates to address both data center network design and security research challenges.
This project pursues two separate multi-year research agendas: collecting and analyzing extremely large datasets pertaining to various aspects of Internet malware and cybercrime, while concurrently exploring new high-performance hybrid optical/electrical network architectures that dramatically decrease the cost and complexity of the infrastructure required to support such analytics. This award supports compute and storage resources both to conduct the analytics required for the e-crime research and to serve simultaneously as a testbed for the prototype hybrid network switches.
The research enabled by this infrastructure has two key components: (1) through in-depth empirical analyses of a range of online criminal activities, the PIs are developing an understanding of the shape of key economic and social forces, as seen at scale, and their relevance for both attackers and defenders; and (2) characterizing network traffic generated by large-scale data analytics, focusing specifically on identifying the class of network traffic that a circuit switch can support as well as the partitioning of the traffic between the circuit and packet portions of the network.
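The circuit/packet partitioning question in (2) can be sketched as a simple size-based classifier: bulky, long-lived transfers amortize the reconfiguration delay of a circuit switch, while short flows stay on the packet network. The threshold, function name, and flow records below are all hypothetical, not measurements from the project.

```python
def partition_flows(flows, circuit_threshold_bytes=10 * 1024 * 1024):
    """Split flows between the circuit and packet portions of a hybrid
    optical/electrical network. `flows` is a list of
    (flow_id, total_bytes) pairs; flows at or above the threshold are
    large enough to amortize circuit reconfiguration delay."""
    circuit, packet = [], []
    for flow_id, total_bytes in flows:
        (circuit if total_bytes >= circuit_threshold_bytes else packet).append(flow_id)
    return circuit, packet

circuit, packet = partition_flows([
    ("shuffle-1", 500 * 1024 * 1024),  # bulk data-analytics transfer
    ("rpc-7", 2 * 1024),               # small control-plane RPC
])
```

In practice the interesting research question is where this boundary lies for real analytics workloads, which is precisely what traffic characterization on the shared infrastructure would inform.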
|
2016 — 2019 |
Snoeren, Alex; Porter, George |
NeTS: Medium: Improving Network Performance and Efficiency Through Multi-Channel Network Links @ University of California-San Diego
This project seeks to challenge the conventional abstract model of a high-speed network link as a single, logical point of attachment. Instead, the proposed approach exposes the inherent parallelism that exists within the end host, network links, and the network fabric as a whole, to applications and the network control plane. The result is a network fabric consisting of a number of distinct physical networks that coexist within a single physical topology. By decoupling the channels making up network links, the project radically redesigns the network fabric to address today's requirements and challenges. A key hypothesis of this work is that composing multiple, potentially heterogeneous networks provides for greater scaling, performance, service quality, and manageability than maintaining the legacy fat-pipe link abstraction.
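One way to picture the decoupled-channel abstraction above is a minimal flow-to-channel pinning sketch (the helper below is illustrative, not from the project): each flow hashes to one of the link's constituent channels, preserving per-flow packet order while distinct flows exploit the channel-level parallelism.

```python
import zlib

def channel_for_flow(flow_key, num_channels):
    """Pin a flow to one of a link's constituent channels by hashing its
    5-tuple-style key. Packets of one flow stay on a single channel (so
    they are not reordered), while different flows spread across the
    decoupled channels of the link."""
    return zlib.crc32(flow_key.encode()) % num_channels

ch = channel_for_flow("10.0.0.1:443->10.0.0.2:5001", 4)
```

A control plane aware of heterogeneous channels could extend this by weighting the hash space, steering latency-sensitive flows to faster channels, which is closer to the per-network differentiation the project envisions.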
The proposed research will impact the broader community in four ways: (1) addressing the societal need for large-scale networking infrastructure to support next-generation clusters and data centers; (2) engaging with industry to help inform the design and construction of new devices; (3) interacting with other scientific communities through interdisciplinary research; and (4) engaging with graduate and undergraduate students to translate the resulting research into structured courses and hands-on learning experiences for traditionally under-represented student groups.
|
2019 — 2022 |
Snoeren, Alex; Porter, George (co-PI) |
CNS Core: Small: Designing Efficient Cloud Datacenter Network Fabrics @ University of California-San Diego
Cloud datacenter networks are tasked with providing connectivity between an ever-increasing number of end hosts whose link rates improve by orders of magnitude every few years. What network operators would ideally like is a single, full-bandwidth switch that could connect every endpoint at full rate. Such an idealized network would enable them to place jobs and data where it is convenient, without worrying about bandwidth bottlenecks, hotspots, and other network-induced limitations. Unfortunately, preserving this "big-switch" illusion of a single network with full bandwidth is increasingly cost-prohibitive and likely soon infeasible.
This project will explore an alternative method of constructing datacenter network fabrics based upon a provably optimal topological construct, an expander graph. If successful, the project will result in network fabrics that are more flexible, capable, and scalable than existing state-of-the-art approaches. This project will develop a family of cloud datacenter network topologies based on expander graphs that eliminate the capacity bottlenecks inherent in hierarchical Clos-based topologies while minimizing the bandwidth tax incurred due to indirect routing. A single, large expander-graph network topology can be constructed out of multiple, disjoint expander graphs; this project will show how judicious tenant placement can then provide both isolation and dynamic capacity while minimizing the bandwidth tax. Moreover, by employing reconfigurable network components (i.e., circuit switches), it is even possible to evolve the set of constituent expander graphs over various time scales, allowing cloud datacenter operators to better suit the needs of their current tenants. Indeed, if the timescales are sufficiently small (e.g., 100s of milliseconds) tenants may then choose to buffer traffic until a particularly favorable (set of) path(s) is available, further decreasing the overall bandwidth inefficiency or "tax". If the network topology evolves at a rapid rate, it is possible to choose, on a per-packet basis, whether to either (1) immediately send a packet over whatever static expander is currently instantiated, incurring a modest tax on this small fraction of traffic, or (2) buffer the packet and wait until a direct link is established to the ultimate destination, eliminating the bandwidth tax on the vast majority of bytes.
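The per-packet send-versus-buffer choice described above can be sketched as follows. The function names, the two-hop indirection threshold, and the bandwidth-tax formula as written are illustrative assumptions, not the project's actual policy.

```python
def bandwidth_tax(hops_direct, hops_taken):
    """Fraction of extra link capacity consumed by indirect routing: a
    packet traversing hops_taken links instead of the minimal
    hops_direct consumes (hops_taken - hops_direct) / hops_direct
    additional bandwidth."""
    return (hops_taken - hops_direct) / hops_direct

def dispatch(dst, direct_links, expander_path_len):
    """Per-packet choice in a rapidly reconfiguring fabric: send over a
    direct link if the current expander provides one (zero tax), send
    indirectly if the detour is short (modest tax on this packet), or
    buffer and wait for a favorable reconfiguration."""
    if dst in direct_links:
        return "send_direct"
    if expander_path_len <= 2:
        return "send_indirect"
    return "buffer"
```

For example, a two-hop detour to reach a destination one hop away doubles the capacity that packet consumes (a 100% tax on it), which is why eliminating indirection for the bulk of bytes, by waiting for a direct link, lowers the fabric's overall tax.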
This project will engage graduate and undergraduate students through structured courses, intense mentorship, and hands-on research activities through participation in the NSF-funded UC San Diego Early Research Scholars Program (ERSP).
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|