2000 — 2006
Estrin, Deborah (co-PI); Heidemann, John; Papadopoulos, Christos
Collaborative Simulation For Education and Research @ University of Southern California
This USC-ISI award supports work on a Collaborative Simulation for Education and Research (CONSER) project. The CONSER system involves development of infrastructure that addresses two purposes: (a) support for research activities in the development and evaluation of networking protocols, and (b) teaching in the areas of networking protocols and networking concepts. Based on a collaborative simulation environment, CONSER is intended to aid understanding of protocol behavior in complex operational environments, over a range of traffic conditions, and on a variety of media. The system will involve simulation and visualization support intended to aid researcher and student understanding, thus providing benefits in both research and teaching settings. Based on ns and nam, the work is expected to be well received by the research community and industry.
2000 — 2004
Mataric, Maja (co-PI); Govindan, Ramesh (co-PI); Sukhatme, Gaurav; Heidemann, John; Estrin, Deborah (co-PI)
Dynamic Adaptive Wireless Networks With Autonomous Robot Nodes @ University of Southern California
Multihop wireless capabilities are enabling communication and coordination among autonomous nodes in unplanned environments and configurations. At the same time, wireless channels present challenges of dynamic operating conditions, power constraints for autonomously-powered nodes, and complicated interactions between high-level behavior and lower-level channel characteristics (e.g., increased synchronized communication significantly degrades channel characteristics). The major goal of the research proposed here is the development, testing, and characterization of algorithms for scalable, application-driven, wireless network services using a heterogeneous collection of communicating mobile nodes. Some of these nodes will be autonomous (robots) in that their movements will not be human-controlled. The others will be portable, thus making them dependent on humans for transportation. While the focus of the work is on the mobile nodes, the project includes immobile computers on the network as well. The project emphasizes that most (though not all) of the mobile nodes will have modest sensing, computational, and communication resources. The chief scientific motivation behind the work is the design of robust, efficient, and scalable algorithms. The project hypothesizes that distributed algorithms that rely on local interactions have many compelling characteristics that lead to these properties. There is significant overlap between the problems of coordinating the autonomous mobile nodes that carry some of the sensors and the algorithms that direct the flow of information from source(s) to sink(s) in the network. Both sets of algorithms need to be carefully designed to improve robustness, efficiency, and scalability. As motivation, the project proposes that the experimental part of the research be conducted on a testbed which simulates some characteristics of an urban post-earthquake scenario in a building.
The sensors in the experiments will be distributed geographically (within the building) and linked by a wireless network. Many of the mobile nodes will be largely autonomous, serving as easily-accessible knowledge collectors and repositories, and exercising a wide range of independent options in the dispatch and control of information flow and resources. Other mobile nodes will be carried about the environment by people. The project will study issues of scale (how many sensor nodes does the application software accommodate), fault tolerance (how robust is the system to loss of sensors and/or communication) and efficiency (e.g. time vs. quality of service). As part of a one-year pilot study funded by NSF, the project has been conducting initial research in the issues underlying a system such as the one above. The project also recently received a substantial equipment grant from the Office of Naval Research to support the experimental portion of this work. The project has identified two key unsolved sub-problems that are relevant to the overall goals: localization and communication coverage. In this proposal the project discusses the broad research challenges in the area of communication and coordination of autonomous mobile nodes. The project then focuses on the two key problems as concrete questions that will be addressed in the research and describes a method involving simulation and experimentation to study them systematically.
2001 — 2005
Heidemann, John; Shinozuka, Masanobu
Itr/Si: Pervasive Monitoring and Control of Water Lifeline Systems For Disaster Recovery @ University of California-Irvine
This research applies networks of sensors and control devices to enhance the performance of civil infrastructure systems, particularly utility lifeline systems, under both emergency and daily operational conditions. Specific objectives of this research are to: 1) develop a conceptual system design capable of data acquisition, wireless data transmission, data processing/analysis and operational control of a water delivery system; 2) develop innovative neural network-based inverse analysis algorithms and software to determine in near real-time the extent and locations of damage or malfunction sustained by system components such as pipe segments, pipe joints, pumping stations, etc., and 3) use Memphis Light, Gas and Water (MLGW) Division's water system for a pilot case study to demonstrate the improvements offered by the system design. The system thus designed represents a new generation of vastly expanded and enhanced SCADA (supervisory control and data acquisition) systems currently used by utilities. In this expanded version, the hydraulic data are sensed and transmitted through mixed wired and wireless communications through the network. The network will automatically detect aggregation points where data from a region are accumulated and trigger local processing. These points will produce summaries that can then be passed on to central observers. When necessary, observers will be able to "drill down" on specific details with explicit queries and control.
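The aggregate-then-drill-down pattern described above (local aggregation points summarize a region's data, and a central observer issues explicit queries only when a summary looks suspicious) can be sketched in a few lines. This is an illustrative sketch only: the region names, pressure readings, and summary statistic are invented, not part of the proposed system design.

```python
# Sketch: regional aggregation with observer-initiated drill-down.
# Region names, readings, and the summary statistic are synthetic.

readings = {                        # region -> {sensor: pressure (kPa)}
    "north": {"p1": 310, "p2": 305, "p3": 120},   # p3 looks anomalous
    "south": {"p4": 300, "p5": 298},
}

def summarize(region):
    """Aggregation point: reduce a region's readings to one summary."""
    vals = readings[region].values()
    return {"region": region, "min": min(vals), "mean": sum(vals) / len(vals)}

def drill_down(region, threshold):
    """Explicit query from a central observer for suspect sensors."""
    return {s: v for s, v in readings[region].items() if v < threshold}

summary = summarize("north")
if summary["min"] < 200:            # suspicious summary triggers a query
    print(drill_down("north", 200)) # {'p3': 120}
```

The point of the pattern is that only summaries cross the network routinely; detailed data moves only on demand.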
The two geographically distant Principal Investigators provide expertise in structures/mechanics, lifeline systems analysis, and the socio-economic impact of system failure, and in computer operating systems, network protocols, and sensor networks. This combination of civil engineering and computer science expertise represents an ideal interdisciplinary coalition for this research. Each will support and advise one graduate student, and the two will communicate with each other through the Internet and frequently scheduled meetings that invite potential end users from utility organizations. Broad impacts are expected on frontier research in lifeline engineering, sensor network technology, simulation and neural network-based analysis of complex systems, and on the practice of lifeline operation and management.
2002 — 2005
Heidemann, John
Itr: Mac Protocols Specific For Sensor Networks (Macss) @ University of Southern California
Wireless sensor networking is an emerging technology that has a wide range of potential applications including environment monitoring, smart spaces, medical systems, and robotic exploration. While the technology is very promising, it raises serious challenges in network and system design. Sensor networks differ in many ways from traditional IP or voice networks, and have their own unique features and requirements. Although MAC protocols such as 802.11 and TDMA are ideal for wireless IP or voice, no MAC protocol today meets the needs of sensor networks.
This project will execute a systems-driven research program to address these problems through the development of sensor-network specific MAC protocols. Specifically, we are investigating:
ADAPTIVITY AND ENERGY EFFICIENCY: Sensor network MACs must be adaptive in several dimensions, including energy consumption, traffic loads, and deployment density. Energy is *the key limitation* for battery-powered sensor nodes. The researchers are investigating and will provide designs for ENERGY CONSERVATION approaches that modify node duty cycle to conserve energy while accommodating varying application traffic requirements. One approach to energy conservation is to trigger primary node radios with a paging channel, but this requires a second radio (with corresponding cost, space, and power requirements). Sensor nodes already must operate sensors full time, so the researchers are evaluating the use of sensors (acoustic, seismic, etc.) as a PHYSICAL SENSOR PAGING CHANNEL.
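As a rough illustration of why duty cycling matters so much for battery-powered nodes, the sketch below compares the average power of an always-on radio with a low-duty-cycle one. All the numbers (power draws, the 1% duty cycle) are illustrative assumptions, not measurements of any project hardware.

```python
# Sketch: energy use of a duty-cycled radio vs. an always-on radio.
# All numbers (power draws, duty cycles) are illustrative assumptions.

def radio_energy(duty_cycle, hours, active_mw=60.0, sleep_mw=0.1):
    """Average power (mW) and total energy (mWh) for a given duty cycle."""
    avg_mw = duty_cycle * active_mw + (1 - duty_cycle) * sleep_mw
    return avg_mw, avg_mw * hours

always_on, _ = radio_energy(1.0, 24)
low_duty, _ = radio_energy(0.01, 24)   # radio awake 1% of the time
print(f"always-on: {always_on:.1f} mW, 1% duty cycle: {low_duty:.2f} mW")
```

The roughly two-orders-of-magnitude gap in average power is what makes sleep scheduling worth the extra latency it introduces.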
MAC INTERACTION WITH THE PHYSICAL LAYER: With low-power, relatively unsophisticated radios, sensor networks applications and MAC protocols are very close to physical layer effects of radio propagation. A serious problem with current sensor applications is dealing with link error conditions such as packet loss and asymmetric communication. Energy-conserving MAC layers already keep track of neighbors, so the researchers will provide a BLACKLISTING SERVICE that allows the MAC to identify and exclude unusable links. The researchers also will study and report on radio POWER CONTROL AND APPLICATION INTERACTIONS to understand how MACs can control neighborhood size.
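A blacklisting service of the kind described might track per-neighbor delivery ratios and exclude links that fall below a threshold. The sketch below is a minimal illustration under invented parameters (window size, delivery threshold); it is not the project's actual design.

```python
# Sketch: a MAC-level link blacklist driven by observed delivery ratios.
# The window size and delivery threshold are illustrative assumptions.

from collections import deque

class LinkBlacklist:
    def __init__(self, window=20, min_delivery=0.5):
        self.history = {}            # neighbor -> deque of 0/1 outcomes
        self.window = window
        self.min_delivery = min_delivery

    def record(self, neighbor, delivered):
        h = self.history.setdefault(neighbor, deque(maxlen=self.window))
        h.append(1 if delivered else 0)

    def is_blacklisted(self, neighbor):
        h = self.history.get(neighbor)
        if not h or len(h) < self.window:
            return False             # not enough evidence yet
        return sum(h) / len(h) < self.min_delivery

bl = LinkBlacklist(window=5)
for ok in [True, False, False, False, False]:
    bl.record("node7", ok)
print(bl.is_blacklisted("node7"))   # True: delivery ratio 0.2 < 0.5
```

Because the MAC already tracks neighbors for sleep scheduling, this bookkeeping comes nearly for free.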
UNIQUE APPLICATION NEEDS OF SENSOR NETWORKS: Sensor networks are fundamentally different from Internet-style networks of peer nodes in several different ways. A first difference is that sensor networks are primarily quiescent, but they occasionally become very active when something is sensed. The researchers will develop a MAC PROTOCOL WITH MULTIPLE OPERATING MODES to allow the MACs to adjust to this "feast or famine" traffic load. In addition, energy conservation introduces a *directional bias* in node communication; sleep/wake-up schedules can either add latency to all directions equally, or can allow rapid communication in one direction and slower communication in the other. The researchers will evaluate how controllable MAC-LEVEL DIRECTIONAL BIAS INTERACTS WITH APPLICATIONS, exploiting it where possible.
Finally, the research community has a compelling need for a publicly available, freely modifiable Sensor-MAC protocol for experimentation and simulation. Evaluation of our ideas requires implementation and evolution of a MAC protocol. We will provide this implementation to other researchers for use over existing sensor network radios (with a reference implementation on the UC Berkeley Mote hardware) and in simulation (with a reference implementation in the ns-2 simulator). Thus, in addition to opening up sensor-network-specific MAC protocols as a new subfield of research, this work will serve as a catalyst for wider sensor network research that is sorely in need of an appropriate and modifiable MAC.
2004 — 2009
Heidemann, John; Ye, Wei
Nets-Noss: Sensor Networks For Undersea Seismic Experimentation (Snuse) @ University of Southern California
Abstract:
Sensor networks are today being deployed in scientific applications in fields and buildings, but their use underwater has been quite limited. Undersea sensor networks are either based on expensive wired devices or a small number of relatively large nodes with long-range acoustic communications. The Sensor Networks For Undersea Seismic Experimentation (SNUSE) project is developing fundamental tools to support small, low-cost, wireless undersea sensor nodes. With laboratory and testbed experimentation and simulation, the project investigates:
- Hardware for small, low-power, moderate-range acoustic telemetry for underwater sensor nodes;
- Time synchronization, localization, and energy-efficient MAC protocols for environments with high-delay acoustic communication;
- Application protocols for ultra-low duty cycle operation, including long-duration sleep and wakeup, and efficient application-level data caching and forwarding.
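To illustrate why time synchronization is hard with acoustic links, the sketch below uses an NTP-style two-way timestamp exchange to separate clock offset from the (large) propagation delay. The timestamps are synthetic, and the symmetric-delay assumption is a simplification of real acoustic channels.

```python
# Sketch: NTP-style two-way exchange to estimate clock offset despite
# long acoustic propagation delays (~0.67 s/km at ~1500 m/s in water).
# Timestamps below are synthetic; propagation is assumed symmetric.

def offset_and_delay(t1, t2, t3, t4):
    """t1,t4: sender's clock; t2,t3: receiver's clock."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    one_way = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, one_way

# Receiver's clock runs 5.0 s ahead; one-way acoustic delay is 0.67 s.
true_offset, prop = 5.0, 0.67
t1 = 100.0
t2 = t1 + prop + true_offset
t3 = t2 + 0.01                      # small turnaround time at receiver
t4 = t3 - true_offset + prop
print(offset_and_delay(t1, t2, t3, t4))   # ≈ (5.0, 0.67)
```

With RF, the propagation term is negligible; underwater it dominates, which is exactly why sync protocols must estimate it explicitly.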
This work is directly applicable to applications in undersea sensing, including contaminant and pollution monitoring, undersea micro- and macro-organism life tracking, and seismic monitoring of undersea oil fields. Although focused on undersea applications, a large portion of the research is also applicable to traditional sensor networks.
This work supports undergraduate and graduate research at USC, both directly and through sensor networking laboratory classes and the developing CiSoft Masters Degree in Smart Oil-field technology. The results of this work will be disseminated via technical papers and freely available hardware designs and software on the project web site.
2006 — 2010
Heidemann, John; Cho, Junghoo 'John' (co-PI); Hansen, Mark
Nets-Find: Sensor-Internet Sharing and Search @ University of Southern California
Sensor networks today are many wireless islands, each isolated and connected through the Internet to its owner, not easily usable by others. The goal of NeTS-FIND: Sensor-Internet Sharing and Search (SISS) is to allow owners to easily bridge these sensornet islands, sharing their data with anyone on the Internet, and to help guests discover and explore these shared sensor data. This research therefore seeks to meld evolving sensornet and Internet architectures, with new approaches to:
a. allow easy integration of new sensornets into the Internet and integration of sensornets bridged by the Internet;
b. manage disclosure of sensor data, supporting confidentiality, estimates of data quality, and visualization of trends;
c. support sensor discovery, linking and search.
The results of this work are expected to include new protocols for sharing sensor data streams, for archiving sensor data on the Internet, and for letting users search, aggregate, and process these streams. The PIs will explore this research by designing new algorithms, evaluating them through simulation and prototype implementation. The PIs expect to specify the protocols and elements that enable this work, enabling future optimization of individual components.
Beyond the immediate impact of this research, the goal of this research is to enable the "citizen scientist", allowing amateurs to contribute sensor data and process sensor streams.
2006 — 2010
Mitra, Urbashi (co-PI); Ortega, Antonio (co-PI); Heidemann, John; Papadopoulos, Christos
Nets-Nbd: Maltraffic Analysis and Detection in Challenging and Aggregate Traffic (Madcat) @ University of Southern California
Many compromised computers today generate maltraffic, such as denial-of-service (DoS) attacks, spyware reporting home, unauthorized applications, spam, and worms. Current defenses are becoming increasingly brittle. There are several reasons for this challenge: encryption limits packet content inspection, aggregation at the network edge limits the use of filtering and blacklisting due to potential collateral damage, increased traffic volumes allow maltraffic to hide, and applications are often cloaked through layered protocols (SOAP over HTTP or varying port allocation) or active concealment.
This proposal applies signal processing and detection theory to network traffic to detect maltraffic in these challenging scenarios. We will use features such as packet timing and frequency, careful design of the measurement and detection systems, and study of inherent behaviors in protocols to address these challenges.
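As a toy illustration of this signal-processing viewpoint, the sketch below applies a discrete Fourier transform to binned packet counts to expose a periodic traffic component of the kind that timing-based detection looks for. The traffic, bin size, and rates are synthetic, not drawn from the project's data or methods.

```python
# Sketch: detecting periodic structure in packet arrivals with a DFT,
# a toy stand-in for signal-processing-based maltraffic detection.
import cmath
import math

def dft_magnitudes(series):
    """Magnitude spectrum (first half) of a real-valued series."""
    n = len(series)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(series)))
            for k in range(n // 2)]

# 64 one-second bins of packet counts: a constant background rate plus
# a component oscillating with a period of 8 bins (8 cycles per window),
# standing in for periodic maltraffic such as bot beaconing.
series = [5 + 3 * math.cos(2 * math.pi * 8 * i / 64) for i in range(64)]
mags = dft_magnitudes(series)
peak = max(range(1, len(mags)), key=lambda k: mags[k])  # skip the DC term
print(peak)   # 8: the periodic component appears at frequency index 8
```

The appeal of the approach is that timing features survive encryption and port cloaking, since no packet contents are inspected.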
Broader Impact: The results of this work will include (a) the development of a systematic methodology for applying signal processing methods to network traffic; (b) the analysis of new signal representation and detection methods specific to maltraffic; and (c) the identification, understanding, and modeling of key identifying features and inherent behaviors of maltraffic and how they are shaped by the network. Our new approaches will yield a deeper understanding of network traffic, and will be tested with traces of real network traffic, resulting in new tools to combat these problems.
2007 — 2011
Heidemann, John; Ye, Wei
Cri: Crd: Collaborative Research: Open Research Testbed For Underwater Ad Hoc and Sensor Networks @ University of Southern California
Proposal #: CNS 07-08498. PI(s): Preisig, James C.; Freitag, Lee. Institution: Woods Hole Oceanographic Inst, Woods Hole, MA 02543-1041
Proposal #: CNS 07-09946. PI(s): Ye, Wei; Heidemann, John S. Institution: U Southern California, Los Angeles, CA 90089-1147
Proposal #: CNS 07-08420. PI(s): Stojanovic, Milica. Institution: Mass Inst Tech, Cambridge, MA 02139-4307
Proposal #: CNS 07-09005. PI(s): Cui, Jun-Hong (June); Rajasekaran, Sanguthevar; Shi, Zhijie; Willett, Peter K.; Zhou, Shengli. Institution: University of Connecticut, Storrs, CT 06269-1133
Proposal #: CNS 07-08938. PI(s): Levine, Brian; Kurose, James F. Institution: U of Massachusetts, Amherst, MA 01003-9242
Proposal #: CNS 07-08467. PI(s): Freitag, Lee. Institution: WHOI, Woods Hole, MA 02543-1041
Title: Collab Rsch: CRD/IAD: Open Research Testbed for Underwater Ad Hoc and Sensor Networks (ORTUN)
This collaborative project, developing the first open testbed infrastructure for the underwater networking community, enables open access with the capability to conduct experiments remotely. The infrastructure, based on open research platforms, consists of a testbed that enables wide and systematic experimental evaluation and comparison of underwater acoustic networks. The work, involving this rapidly deployable testbed that can be shared by the underwater networking community, aims to demonstrate the ability of the facility to facilitate field experiments. The project represents a higher-level collaboration that arose from two collaborative groups: one group is developing the facility, while the other works mainly on experiments utilizing it. The testbed is expected to be a buoy-based system that can be easily taken to different environments. When operational, these systems will be deployed 5 or 6 times a year. The infrastructure will consist of two types of nodes with different capabilities. The first type of node of the rapidly deployable testbed will offer a fixed physical layer capability, using acoustic modems such as the WHOI Micromodem or the ISI S-modem to implement a physical layer with limited reconfigurability, interfaced to a reconfigurable network processor. This network processor will support algorithm/protocol implementation and testing at higher network layers. The network functions on the fixed physical layer testbed will be hosted by a Gumstix processor, which will communicate with physical layer modems such as the WHOI Micromodem or USC/ISI S-modem via a serial port. Ten to fifteen fixed physical layer nodes will be built, including up to 3 gateway nodes. Each gateway node of the testbed will be equipped with wireless RF communication enabling real-time monitoring and control of network performance. The fixed physical layer nodes will be smaller and more easily deployed than the second type of node, the all-layer node.
The all-layer node is a more capable node that will ultimately support algorithm/protocol implementation and acoustic data collection at all networking layers. In addition to the equipment included in the fixed physical layer nodes (i.e., a gumstix network processor and the ability to support relatively fixed physical layer modems such as the WHOI Micromodem and the ISI S-modem), the all-layer nodes will also include a general purpose data acquisition system (D/A and A/D) with substantial disk storage and in-situ processing capability. The MIT r-modem software will be implemented on this general purpose hardware and, along with MATLAB, will enable user implementation and testing of algorithms and the gathering of acoustics data at the physical layer in addition to the testing at higher network layers that it will share in common with the fixed physical layer nodes. Three to five all-layer nodes will be built. The rapidly deployable testbed, using two types of nodes with varying capabilities, should significantly enhance research at all network layers while setting the stage for future infrastructure improvements.
Many research groups investigating fundamental questions about how to design such networked systems that utilize acoustic communications in complex underwater environments have had their overall effort significantly slowed by the lack of common means to test and compare protocols under realistic environmental conditions. This infrastructure responds to the need for consensus on analytic or simulation models for underwater networks where researchers need the ability to gather experimental data under real world conditions in order to make progress.
The network stack will be modular by design, with sockets used to enable cross-layer control and communication. The physical, MAC, network, and application layers will be populated with sample components so that users can test their own algorithms or protocols without having to populate the entire stack. Users will be able to write modules to test their own algorithms or protocols at different layers and selectively replace the sample modules with their own. While the development of the modular architecture and sample modules for the network stack will be done with close coordination between all participating institutions, the lead institutions for the layers that will be provided are: Physical Layer (MIT for the all-layer system, WHOI for the Fixed-PHY system), MAC Layer (USC/ISI), Network Layer (UConn, a geo-routing protocol), and Application Layer (UMass, a DTN routing service). The open characteristic of the testbeds and their usefulness for conducting research will be demonstrated by the members of the team (primarily UConn and UMass as described above) and a few selected outside participants. In addition, acoustic receptions suitable for physical layer research will be made available to the general research community via the Internet.
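The modular, selectively replaceable stack described above might look roughly like the following sketch, where a user swaps in their own network layer while reusing the sample modules elsewhere. The layer names and interfaces here are illustrative, not the actual ORTUN APIs.

```python
# Sketch: a pluggable network stack where one layer's module can be
# replaced while the rest are reused. Interfaces are illustrative.

class Layer:
    def __init__(self, name):
        self.name = name
    def send(self, payload):
        return f"{self.name}({payload})"   # each layer wraps the payload

class Stack:
    def __init__(self, layers):
        self.layers = layers        # ordered top (app) to bottom (phy)
    def replace(self, name, layer):
        self.layers = [layer if l.name == name else l for l in self.layers]
    def send(self, payload):
        for layer in self.layers:
            payload = layer.send(payload)
        return payload

stack = Stack([Layer("app"), Layer("net"), Layer("mac"), Layer("phy")])
print(stack.send("data"))           # phy(mac(net(app(data))))

class GeoRouting(Layer):            # a user-supplied network layer
    def send(self, payload):
        return f"geo[{payload}]"

stack.replace("net", GeoRouting("net"))
print(stack.send("data"))           # phy(mac(geo[app(data)]))
```

The design choice mirrors the proposal: a researcher populates only the layer they study and inherits working defaults everywhere else.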
Broader Impacts: This work enables the essential capability of research groups to examine fundamental research questions and their potential solutions in the real world. The infrastructure will directly benefit many on-going research projects in this field, and a large number of potential users in the community may benefit from this testbed infrastructure. In addition to the significant research impact, the infrastructure is expected to make a very strong educational impact as well, supporting classes that bring remote access to field experiments to students for whom traditional experiments would have been too costly. The infrastructure can accelerate research and education in the underwater networking field.
2008 — 2012
Mitra, Urbashi (co-PI); Heidemann, John; Ye, Wei
Mri: Development of An Always-Available Testbed For Underwater Networking Research @ University of Southern California
Proposal #: CNS 08-21750. PI(s): Ye, Wei; Heidemann, John S.; Mitra, Urbashi. Institution: University of Southern California, Los Angeles, CA 90089-1147. Title: MRI/Dev.: Development of An Always-Available Testbed for Underwater Networking Research
Project Proposed: This project, developing an underwater networking testbed instrument (DATURN) that will be always available and remotely accessible through the Internet as a shared infrastructure for the community, aims at supporting research across the whole network stack. The testbed satisfies the need for frequent iteration on algorithms and experiments required by researchers in the areas of MAC protocols, networking, and applications. Two kinds of acoustic modems are planned, allowing future modems from external users to be added later. Although the fixed testbed cannot represent all kinds of acoustic channels, the capability to support different and future modems enables researchers to explore how the acoustic channel affects upper layers and to compare protocols over different modems. A pluggable software architecture is envisioned, permitting algorithms to be dropped into an existing protocol stack and to reuse components that are outside the research focus. Standard APIs will be defined for the different layers, and at least one reference protocol implementation will be provided at each layer, allowing, for example, a researcher with a new transport protocol to leverage routing or MAC protocols contributed by other researchers. A library of protocols will be built over time, enabling comparative study of alternatives. At present there is a lack of common means to evaluate and compare protocols. Since no consensus currently exists on analytic and simulation models, this project provides a better experimental tool that enables community collaboration, protocol comparison, and reuse of prior work through experimental iteration and researcher iteration.
Broader Impacts: This project provides an essential infrastructure to the underwater networking community that is likely to redound to the benefit of society overall. The always-on nature of the testbed instrument also enables significant educational impact. At present, students can only carry out experiments on occasional deployments. Hence, this project will accelerate research and education in underwater networking.
2008 — 2010
Heidemann, John
Nets-Nbd-Sger: Map/Reduce For Network Traffic Analysis (Mr-Net Sger) @ University of Southern California
Award #0823774
This project explores a new class of parallel algorithms that process very large network datasets. The goal is to be able to analyze 2.7 billion pings to map the Internet address space, process six months of flow records to understand long-term traffic trends, and search a week's worth of packet headers to retroactively detect zero-day compromised machines. Each of these tasks requires efficient and economical processing of datasets in sizes from 50GB to several terabytes. This leap in dataset sizes of 100- to 1000-fold or more requires fundamentally different ways of handling network data than today's tcpdump and ethereal on a workstation. Two recent developments make this leap possible. First, Google has demonstrated that the map/reduce abstraction can be easily parallelized and run efficiently and cost-effectively over clusters of hundreds of commodity PCs. Map/reduce is the computation engine of Google's web search engine, is increasingly being used in other applications, and has prompted at least one open source implementation. Map/reduce is the key to processing huge network datasets. Second, recent programs such as PREDICT make massive network datasets available. PREDICT promises to make available packet header traces, address space scans, netflow records, dark address space traffic, and voice over IP (VoIP) call records from several large ISPs. At USC, researchers are collecting packet header traces and address space scans. PREDICT is the key to obtaining huge network datasets. This work is proposed as an SGER because it is both timely and, at this point, highly speculative. The researchers must characterize what network problems can be solved by map/reduce before a full proposal will be credible. This proposal is the key to demonstrating the potential for map/reduce processing of huge network datasets to change our understanding of the Internet.
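As a minimal illustration of the map/reduce style applied to network data, the sketch below counts packets per source address over a handful of synthetic flow records, running the map, shuffle, and reduce phases in-process. A real job would of course run distributed over a cluster, with the shuffle done by the framework.

```python
# Sketch: a map/reduce-style count of packets per source address over
# synthetic flow records; the field layout is an invented example.
from collections import defaultdict

records = [
    ("10.0.0.1", "192.0.2.5", 1400),      # (src, dst, bytes)
    ("10.0.0.2", "192.0.2.5", 60),
    ("10.0.0.1", "198.51.100.7", 40),
]

def mapper(rec):                    # emit (key, value) pairs per record
    src, _dst, _length = rec
    yield (src, 1)

def shuffle(pairs):                 # group values by key (framework's job)
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reducer(key, values):           # combine all values for one key
    return key, sum(values)

pairs = [p for rec in records for p in mapper(rec)]
result = dict(reducer(k, vs) for k, vs in shuffle(pairs).items())
print(result)                       # {'10.0.0.1': 2, '10.0.0.2': 1}
```

Because the mapper and reducer are both stateless and per-key, the same code parallelizes naturally across terabyte-scale traces.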
The intellectual merit of this work is to develop a preliminary understanding of how to use map/reduce style processing for network datasets. What algorithms are applicable? What problems parallelize well or poorly? What kind of compute clusters are needed? And more generally, how can networking researchers cope with gigabyte-to-terabyte datasets that are needed to describe a billion-user Internet?
The broader merit of this work is that it will lead to answers to both fundamental and practical questions facing the Internet. Considering these datasets, questions include: What does the Internet look like? At what rate is the Internet address space being consumed? How many Internet users connect with dynamic addresses? How can one respond to intrusions effectively? Can new techniques detect low-rate denial-of-service attacks, or spam generation in compromised servers? Can one trace back and detect botnet control networks? More important than these specific questions is the broader question of how one can understand properties of a billion-node Internet and the petabyte per day of traffic that flows over it.
2015 — 2016
Heidemann, John
Ci-P: Planning For Identity and Naming Experimentation Shared Testbed @ University of Southern California
Naming and identification are critical to the Internet today, with the Internet's Domain Name System (DNS) as the glue making the Internet usable by ordinary people, mapping human-readable names such as www.nsf.gov to Internet addresses such as 128.150.4.107. Originally a simple lookup mechanism in a low-risk environment, today naming has additional roles in the network ranging from content delivery to anti-spam systems. DNS spans many applications because it is flexible, universally deployed, and has robust infrastructure capable of supporting high traffic loads. But DNS today faces many challenges: needs for improved security and privacy and richer functionality; increasingly dynamic content; and orders-of-magnitude more names (at the top with DNS expansion and internationalization, and at the edges with the Internet of Things). Most critically, DNS is increasingly difficult to evolve, both due to the installed base of critical infrastructure and because of the increasing distance between the research community and the data and infrastructure needed to make credible contributions.
This one-year planning project will identify and provide conceptual designs for infrastructure that will counteract this ossification and catalyze research in Internet naming, identification, and trust. The project will bring together researchers on naming around shared infrastructure that supports: (1) Parallel Resolution Evaluation, with simultaneous and safe testing of experiments in the context of live, operational DNS, and (2) Instrumentation and Measurement that helps share real-world DNS query and performance data, employing both technical and legal methods for ethical sharing. Community input will be obtained via focus groups and workshops. The outcomes will be a clear statement of the community's infrastructure needs in this area, community-vetted designs for the infrastructure, and an assessment of the risks and costs associated with the infrastructure.
2017 — 2020
Heidemann, John; Mirkovic, Jelena; Hardaker, Wes
Cici: Rsarc: Ddos Defense in Depth For Dns @ University of Southern California
Distributed Denial-of-Service (DDoS) attacks continue to plague the Internet. Attacks have significant effects on social media, content and many other Internet services. Malicious actors are becoming increasingly sophisticated and are employing methods to hide their attack origins, amplify the size of their attacks, and use huge botnets that simply overwhelm their targets. This project directly addresses these advanced attacks by developing and deploying a defense in depth approach, called Deep Layers, to mitigate the types of DDoS attacks seen most recently. Deep Layers will be deployed to protect B-Root, the critical infrastructure that is one of thirteen providers of the Domain Name System (DNS) service for the top level (root) of Internet names that begin with www.
The Deep Layers solution to DDoS attacks integrates two approaches to filter spoofed traffic and two approaches to identify known-good traffic (hop-count filtering and prioritization of known-good clients), with the addition of a cloud-based scaling component to handle the largest attacks. The combination of these steps addresses an array of increasingly sophisticated attacks, ranging from those seen today to those possible in the future, significantly increasing DNS resilience to DDoS attacks. The project integrates and extends several recent, successful research approaches to DDoS defense and demonstrates their effectiveness on the B-Root critical infrastructure. The project demonstrates Deep Layers on the B-Root DNS server and makes it available to other DNS operators as open source software. The work substantially improves the reliability of the DNS critical infrastructure, benefiting all Internet users.
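Hop-count filtering, one of the techniques named above, exploits the fact that a spoofer cannot control the TTL decrements a packet actually experiences. A minimal sketch follows; the hop table and addresses are hypothetical, and real deployments learn per-source hop counts from verified traffic rather than a hand-built dictionary:

```python
def infer_hops(observed_ttl: int):
    """Infer hop count from an observed TTL, assuming a common OS initial TTL."""
    for initial in (32, 64, 128, 255):   # common initial-TTL defaults
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None                          # TTL above any known default: suspicious

def looks_spoofed(src_ip: str, observed_ttl: int, hop_table: dict) -> bool:
    """Flag a packet whose inferred hop count disagrees with the learned one."""
    expected = hop_table.get(src_ip)
    return expected is not None and infer_hops(observed_ttl) != expected

hop_table = {"192.0.2.7": 14}            # hypothetical table from known-good traffic
ok = looks_spoofed("192.0.2.7", 50, hop_table)      # 64 - 50 = 14 hops: consistent
bad = looks_spoofed("192.0.2.7", 240, hop_table)    # 255 - 240 = 15 hops: mismatch
```

A mismatch does not prove spoofing (routes change), which is why the abstract combines this with prioritization of known-good clients rather than using it alone.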
|
1 |
2019 — 2022 |
Heidemann, John Hardaker, Wes |
Ccri: Medium: Dns, Identity, and Internet Naming For Experimentation and Research (Diiner) @ University of Southern California
Naming and identification in the Internet are essential to find websites (e.g., www.nsf.gov) and other services. The Domain Name System (DNS), Identity, and Internet Naming for Experimentation and Research (DIINER) project proposes to accelerate research on Internet naming, identification, and the DNS by providing research infrastructure, data, and community building. The project will (1) provide a testbed and tools to allow experimentation on DNS data and to support gradual transition of new approaches from research into experimental use and ultimately to operation; (2) provide data about how DNS is used and how the DNS system and servers perform, in the context of a framework for privacy-sensitive anonymization and controlled data sharing; and (3) foster a collaborative research community by tightening the feedback loop between the creativity and perspectives of academia and the knowledge, real-world problems, and data that come from operating critical infrastructure, holding workshops about these topics and about these tools and new research methods. The DIINER project builds on the University of Southern California (USC) Information Sciences Institute (ISI) experiences both running operational DNS services such as B-Root and working with the research community to share data and provide research infrastructure. The anticipated outcome of DIINER is scientific progress on how to carry out research on Internet naming, identity, and DNS; improvements to the performance, reliability, security, and privacy of Internet naming, identity, and DNS as practiced today; and support of education and research at USC and in the community.
The Internet's DNS most commonly maps names to addresses (e.g., www.nsf.gov to 128.150.4.107), and its use has grown to include applications like anti-spam and Content Delivery Networks. With DNSSEC (Domain Name System Security Extensions), DNS protects data integrity and can ground trust systems such as X.509 certificates and Certificate Authorities. But Internet naming, identification, and DNS face many challenges. Security has changed as the Internet has moved from a low-risk academic experiment to a trillion-dollar marketplace, bringing threats from organized crime and nation states. DNS has also gathered great inertia, with a huge, change-resistant installed base ranging from millions of home routers to sophisticated commercial clusters. Its designation as "critical infrastructure" adds both technical and political inertia. These requirements compound technical challenges, such as minimizing latency, and often leave the research community distant from operational reality, without the data and infrastructure needed to make credible contributions.
The DIINER project proposes to meet these challenges and reverse DNS ossification by enabling new research in Internet naming and trust, and easing transition from research to operational deployment, while preserving stability. Its goal is to unite isolated researchers by growing an Internet naming and identification community around DIINER, a new shared research infrastructure providing: (1) parallel DNS resolution evaluation (PRE) to support safe testing of experiments within live, real-world deployed DNS, and (2) live instrumentation and measurement to share real-world DNS query and performance data, with responsibility supported by technical and legal methods. Today researchers are under-supported, with only limited DNS data available, often long after collection and with limited ability to share, and no support exists for real-world experiments at scale. USC ISI is uniquely prepared to lead this effort with operational responsibility for the B-Root DNS server, long-term involvement in networking research and graduate education, and independence from commercial interests. The DIINER approach spans the DNS ecosystem, from end-computers (stub resolvers), to organization-level recursive resolvers, to authoritative DNS servers. The proposed infrastructure will integrate with B-Root, with second-level-domain authoritative resolvers, and with a recursive resolver. Stakeholders include end users, Internet service providers (ISPs), and other kinds of service providers, from operators of public DNS services to commercial DNS providers. The DIINER project will release the tools it develops as open source, augmenting research-infrastructure-as-service with third-party deployments.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2020 — 2023 |
Heidemann, John |
Cns Core: Small: Event Identification in Evaluation of Internet Outages @ University of Southern California
The EIEIO (Event Identification and Evaluation of Internet Outages) project will develop new algorithms to (1) improve the coverage of Internet outage detection to millions of Internet Protocol version 4 (IPv4) networks (the primary current Internet) that are uncovered today, and extend coverage to Internet Protocol version 6 (IPv6), the newer protocol used by about 30% of America today; (2) analyze this raw data to determine identifiable outage events; and (3) relate these events to real-world causes using other datasets. The results of this work will be additional information about Internet reliability, allowing citizens to make informed choices of provider, governments to assess the services their citizens receive, and ISPs to evaluate their offerings and justify improvements. Data from this work will help society understand the impact of natural disasters (such as hurricanes) as they occur and will be made available to researchers at no cost.
Understanding Internet reliability is critically important to us. Recent methods have emerged that directly measure Internet-wide reliability, but the EIEIO (Event Identification and Evaluation of Internet Outages) project proposes three steps to move our understanding forward: (1) Improving IPv4 coverage to include millions of networks that cannot be reliably measured today, and extending outage detection to IPv6; (2) converting the millions of raw observations into identifiable events; (3) relating these events to real-world causes.
The intellectual merit of the EIEIO project is to fundamentally expand Internet outage detection in these three ways. The EIEIO project will develop new algorithms to increase IPv4 coverage with more sensitive analysis, and will examine passive and active measurements that extend to IPv6. The EIEIO project will develop clustering algorithms that group raw data into events at Internet scale (millions of events over months of data). The EIEIO project will improve confidence in these results through careful comparison of outage observations and events from multiple data sources, including different observation methods and external data sources.
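The event-identification step above, grouping raw per-network observations into events, can be illustrated with a minimal temporal clustering sketch. The interval representation and the 300-second merge gap are assumptions for illustration only; EIEIO's actual algorithms must handle millions of events over months of data:

```python
def cluster_outages(observations, gap: int = 300):
    """Merge per-network outage observations (start, end) in seconds into
    events whenever consecutive intervals fall within `gap` seconds."""
    events = []
    for start, end in sorted(observations):
        if events and start - events[-1][1] <= gap:
            events[-1][1] = max(events[-1][1], end)   # extend the current event
        else:
            events.append([start, end])               # begin a new event
    return events

# two nearby observations merge into one event; the distant one stands alone
events = cluster_outages([(0, 100), (150, 200), (1000, 1100)])
```

Turning millions of raw observations into a short list of events like this is what makes cross-checking against external data sources (news reports, weather records) tractable.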
The EIEIO project will help increase confidence in our ability to use fine-grain observations of Internet outages for IPv4 and IPv6, and will provide data that helps network architects change network designs to improve resilience. Results from the project could help citizens selecting network providers to make informed choices about their purchases, governments to assess the service their citizens receive, and ISPs to compare their offerings and justify improvements. Network outage data can assist first responders and citizens in understanding natural disasters (such as hurricanes) as they occur. This project will also quantify network reliability to assist policy makers in evaluating telecommunications policy and economics and in carrying out long-range planning. Finally, this data will motivate improvements by ISPs to Internet reliability, improving the Internet for all users. Data generated from this project will be made available to researchers at no cost and used to support education and research.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2020 — 2021 |
Heidemann, John |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Rapid: Measuring the Internet During Novel Coronavirus to Evaluate Quarantine (Rapid-Minseq) @ University of Southern California
The world is employing social distancing, work-from-home, and study-from-home to limit the spread of COVID-19 until early detection and a vaccine become available. Implementation of these policies varies across the U.S. and globally due to local circumstances. A common consequence is a huge shift in Internet use, with university dormitories and high schools emptying and home use increasing. The goal of this project is to observe this shift, globally, through changes in Internet address usage, allowing observation of early reactions to COVID-19 and, one hopes, a future shift back.
This project plans to develop two complementary methods of assessing Internet use by measuring address activity and how it changes relative to historical trends. The project will directly measure Internet address use globally based on continuous, ongoing measurements of more than 4 million IPv4 networks. The project will also directly measure Internet address use in network traffic at a regional Internet exchange point where multiple Internet providers interconnect. The first approach provides a global picture, while the second provides a more detailed but regional picture; together they will help evaluate measurement accuracy.
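Comparing current address activity against historical trends, as described above, can be sketched as a simple baseline test. The counts and the z-score framing are illustrative assumptions, not the project's published method:

```python
import statistics

def activity_shift(history, current):
    """Z-score of the current active-address count against a historical baseline."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return (current - mean) / sd if sd else 0.0

# hypothetical daily active-address counts for one measured network block
baseline = [1000, 1020, 990, 1010, 980]
z = activity_shift(baseline, 700)   # a large negative z suggests emptying networks
```

A dormitory block going quiet would show a strongly negative shift, while residential blocks would shift positive, which is the signal the project uses to observe work-from-home.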
This project will capture an improved global picture of how different parts of the world react to COVID-19, as seen through their use of the Internet. In particular, the results from this research will help researchers understand how network behavior changes relative to historical data, and will provide a new method, based on address usage, to characterize how people are staying home and to look for the onset and effectiveness of work-from-home and shelter-in-place.
The project website will be at https://ant.isi.edu/minceq. The project will make all new data generated by this project available at no cost to researchers at https://ant.isi.edu/datasets. The project website and data will persist indefinitely; as of 2020 the website already provides five years of archival data that serves as a baseline.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2021 — 2024 |
Heidemann, John Mirkovic, Jelena Kallitsis, Michael Hardaker, Wes |
Ccri: New: Classnet: Community Labeling and Sharing of Security and Networking Test Datasets @ University of Southern California
The Community Labeling and Sharing of Security and Networking Test datasets (CLASSNET) project will provide new, labeled, rich, and diverse datasets to the research community to support network and security research. The project will develop a framework for collaborative, community-driven enrichment and labeling of data, enabling the use of these datasets for machine learning (ML) in networking and security. Furthermore, the CLASSNET project will make data available to researchers through multiple methods, ensuring the privacy of data while enabling flexible data computation. The project will also generate diverse continuous (constantly, automatically updated) and curated (selected by humans) datasets for research use.
The CLASSNET project will innovate in the dimensions of data labeling, data distribution, and data sources. In data labeling, CLASSNET will provide a low-friction collaborative framework for sharing annotations among researchers. The framework will incentivize labeling with feedback mechanisms and user credits, and will support bulk, automatic, algorithmic labeling. In data distribution, CLASSNET will support multiple modes of data access, ranging from downloading anonymized data to processing data in the cloud, on provider machines, or via the code-to-data approach. Finally, CLASSNET data sources will provide new, diverse, continuous, and curated datasets that are useful for network and security research, including traffic packets and flows, network telescope data, Domain Name System (DNS) data, and Internet topology data.
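Anonymized download, one of the distribution modes mentioned above, can be illustrated with a simple keyed-hash pseudonymization sketch. This scheme is an assumption for illustration, not CLASSNET's specified design; production systems often prefer prefix-preserving anonymization (e.g., Crypto-PAn) so that subnet structure survives:

```python
import hashlib
import hmac
import ipaddress

def anonymize_ip(addr: str, key: bytes) -> str:
    """Map an IPv4 address to a stable pseudonym via a keyed hash.
    The same (address, key) pair always yields the same pseudonym,
    so flows remain linkable without revealing the real address."""
    digest = hmac.new(key, addr.encode("ascii"), hashlib.sha256).digest()
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

anon = anonymize_ip("192.0.2.7", b"secret-key")   # hypothetical address and key
```

Keyed hashing (rather than a plain hash) matters: without the secret key, an adversary could enumerate all 2^32 IPv4 addresses and reverse the mapping.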
The immediate impact of this project will be new types of labeled, curated, and continuous datasets that enable new security, networking, and ML research and education, impacting a large community. The broader impact of this data will be to foster research and education that make the Internet safer, more stable, and more secure, and that increase the community's knowledge about the Internet. With the Internet's importance for tele-work, tele-medicine, remote learning, e-commerce, and e-government, these improvements will have a broad societal impact. In addition, CLASSNET datasets will support data-driven exercises for graduate and undergraduate education, and new PhD research. The CLASSNET project's innovations in multiple pathways to data access, combined with the automated and incentivized enrichment framework, will improve the state of the art for responsible data sharing in related disciplines of information technology.
The project website will be https://ant.isi.edu/classnet/. The website will remain operational after the project completes.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2022 — 2023 |
Heidemann, John |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Imr: Ri-P: Safe and Flexible Experimental Dataset Access and Sharing-Planning (Safed-Asp) @ University of Southern California
The SAFED-ASP project (full title: IMR: RI-P: Safe And Flexible Experimental Dataset Access and Sharing--Planning) is an NSF cyberinfrastructure planning grant to work towards new methods that support federated data sharing while managing privacy. This work is critical to informing new research in protocol design, cybersecurity, and privacy with real-world data. However, privacy requirements make network data difficult to share for several reasons: (1) data should be anonymized, but to retain research value, that anonymization may require customization; (2) data may require explicit agreements between researchers and data providers; such agreements can augment anonymization and make expectations explicit; and (3) data providers and sponsors need to track data use to demonstrate and understand data utility and to support data expiration. This project will conduct outreach to understand community requirements in these areas, and will prototype methods and software to manage the lifecycle of dataset request approval, distribution, and evaluation.

This project plans to develop a roadmap for open-source software that supports the lifecycle of dataset distribution. This lifecycle includes metadata management, efficient handling of dataset requests and approvals, and flexible dataset fulfillment. The vision is to develop open-source software to support these activities, and a federated community of dataset providers that either self-host or use a shared platform.

The broader impact of this work is to pave the way to new approaches that will reduce friction in data sharing and improve responsibility and transparency in data handling. Broader dataset availability is key to applying machine learning and big-data techniques to networking and security problems.

The project website is https://ant.isi.edu/safed/. The platform and software will become available as they are developed and as they mature through a full proposal. Tools developed are expected to be hosted at this site and are also suitable for self-hosting.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
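The dataset-request lifecycle sketched above (request, approval, distribution, expiration) might be modeled as a small state machine. The states and transitions here are illustrative assumptions, not the SAFED-ASP design:

```python
# Illustrative dataset-request lifecycle; states/actions are hypothetical
TRANSITIONS = {
    "requested": {"approve": "approved", "reject": "rejected"},
    "approved":  {"deliver": "delivered"},
    "delivered": {"expire": "expired"},   # supports tracked data expiration
}

def advance(state: str, action: str) -> str:
    """Apply one lifecycle action, refusing any transition not in the table."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} from state {state!r}")

state = "requested"
state = advance(state, "approve")
state = advance(state, "deliver")
```

Making invalid transitions fail loudly is one way software can encode the explicit agreements and expiration tracking the abstract calls for.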
|
1 |
2022 — 2025 |
Heidemann, John |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Cns Core: Medium: a Traffic Map For the Internet @ University of Southern California
To understand the impact of Internet events---attacks, outages, degraded performance, or newly deployed infrastructure---one must understand how they affect people's use of Internet services. Researchers, network and service operators, regulators, policy makers, and economists need a map of what portion of Internet use depends on the networks, routes, and servers impacted by the events: an Internet traffic map.

Such a traffic map will inform other networking researchers by providing context to interpret measurement results and proposed improvements, ultimately contributing to improving the Internet for everyone. An Internet traffic map will also help regulators, policy makers, and economists assess the societal impact of networking outside computer science.

Decades of network research have studied components related to---but distinct from---a traffic map, including the autonomous system (AS) and router-level topologies, Internet exchange points (IXPs), content distribution network (CDN) deployments, and general traffic matrices. Internet service providers (ISPs) and content providers have their own high-quality views of their part of the Internet traffic map, but business and customer sensitivities prevent public release.

This project's goal is to provide a sharable, trusted traffic map. This traffic map will be quantified, with estimates of error; broad, with estimates of completeness; and sustainable, with regular updates. The project proposes to meet this need by:

• Measuring a traffic map that identifies which networks host Internet users, where popular services deploy their servers, the paths between users and services, and the relative activity levels on those paths. This map must be derived from replicable approaches and open data so it can be shared; doing so will require developing new techniques.
• Evaluating the accuracy of this map by comparing multiple, independent methods and testing against external information. Comparison against accurate but privileged information will establish confidence in open-source techniques.

• Providing regularly updated maps to the research community, plus tools that allow others to create their own maps.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |