2001 — 2004
Feigenbaum, Joan; Kannan, Sampath (co-PI)
Massive Data Streams: Algorithms and Complexity
Abstract: Massive data sets are increasingly important in many applications, including observational sciences, product marketing, and monitoring and operations of large systems. In network operations, raw data typically arrive in streams, and decisions must be made by algorithms that make one pass over each stream, throw much of the raw data away, and produce "synopses" or "sketches" for further processing. Moreover, network-generated massive data sets are often distributed: Several different, physically separated network elements may receive or generate data streams that, together, comprise one logical data set. The enormous scale, distributed nature, and one-pass processing requirements of the data sets of interest must be addressed with new algorithmic techniques. Two programming paradigms for massive data sets are "sampling" and "streaming." Rather than take time even to read a massive data set, a sampling algorithm extracts a small random sample and computes on it. By contrast, a streaming algorithm takes time to read all the input, but little more time and little total space. Input to a streaming algorithm is a sequence of items; the streaming algorithm is given the items in order, lacks space to record more than a small amount of the input, and is required to perform its per-item processing quickly in order to keep up with the unbuffered input. The investigators continue the study of fundamental algorithms for massive data streams. Specific problems of interest include but are not limited to the complexity of proving properties of data streams, the construction of one-pass testers of properties of massive graphs, and the streaming space complexity of clustering.
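The two paradigms can be illustrated with short, self-contained sketches. These are standard textbook algorithms shown only for illustration, not algorithms from the award itself: reservoir sampling extracts a uniform random sample in one pass, and the Misra-Gries summary tracks frequent items in small space.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """One pass, O(k) space: uniform random sample of k items from a
    stream of unknown length (classic reservoir sampling)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randrange(i + 1)  # keep new item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

def misra_gries(stream, k):
    """One pass, at most k-1 counters: any item occurring more than
    n/k times in a stream of length n is guaranteed to keep a counter."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            for key in list(counters):  # decrement all, evict zeroed counters
                counters[key] -= 1
                if not counters[key]:
                    del counters[key]
    return counters
```

Both routines read each item once and keep state far smaller than the input, which is the defining constraint of the streaming model described above.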

2002 — 2006
Feigenbaum, Joan
Itr: Foundations of Distributed Algorithmic Mechanism Design
In traditional theoretical computer science (TCS), computational agents are typically assumed either to be obedient, i.e., to follow the prescribed algorithm, or to be adversaries who "play against" each other. On the other hand, the strategic agents in game theory are neither obedient nor adversarial. Although one cannot assume that they will follow the prescribed algorithm, one can assume that they will respond to incentives. Thus, the economics literature traditionally stressed incentives and downplayed computational complexity, and the TCS literature traditionally did the opposite. The emergence of the Internet as a standard platform for distributed computation has radically changed this state of affairs: Ownership, operation, and use by many self-interested, independent parties give the Internet the characteristics of an economy as well as those of a computer. The emerging discipline of Distributed Algorithmic Mechanism Design (DAMD) lies in the intersection of TCS, economics, and networking. This project addresses the economic and computational foundations of DAMD. Fundamental problems addressed include but are not limited to notions of inherent tractability or intractability of DAMD problems, methods of approximation for DAMD problems found to be inherently intractable, and the extent to which agent privacy can be maintained in distributed algorithmic mechanisms.
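The idea that strategic agents respond to incentives has a canonical illustration that predates DAMD: the sealed-bid second-price (Vickrey) auction, sketched below. It is shown only as background on incentive compatibility, not as a mechanism from this award: because the winner pays the runner-up's bid rather than its own, no bidder can gain by misreporting its true value.

```python
def second_price_auction(bids):
    """Sealed-bid Vickrey auction. bids maps bidder -> bid.
    The highest bidder wins but pays only the second-highest bid,
    which makes truthful bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price
```

Note that the winner's payment depends only on the other bids, so shading one's own bid can change whether one wins but never the price paid upon winning.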

2002 — 2005
Feigenbaum, Joan; Shenker, Scott (co-PI); Krishnamurthy, Arvind (co-PI); Yang, Yang (co-PI)
Incentive-Compatible Designs For Distributed Systems
This project comprises research activities aimed at theoretical and practical results on mechanisms that are incentive-compatible, scalable, and distributed. Specifically, distributed algorithmic mechanism design informed by game theory is proposed for three related problems in networking: interdomain routing, web caching, and peer-to-peer file sharing. The research program on interdomain routing will develop a fundamentally new approach in which many of the routing-related incentive issues are handled by incentive-compatible protocols rather than bilateral contracts; such protocols can more effectively address the system-wide issues of efficient routing and conflicting policy requirements. The project will also apply recently developed techniques for digital-goods auctions to the peer-to-peer file-sharing problem and to the design of incentive-compatible caching mechanisms. This project will help achieve a better understanding of the behavior of large-scale, distributed information systems formed by autonomous components, such as the Internet, and will develop incentive-compatible algorithms for these systems accordingly.

2002 — 2006
Smith, Jonathan; Feigenbaum, Joan; Keromytis, Angelos
Gridlock: a New Scalable Approach to Unifying Computer and Communications Security
The GRIDLOCK hypothesis is that use of a globally specified and locally interpreted policy language for specification of access-control policy can provide a new, unified approach to securing network applications. In particular, this approach can be used to specify network access-control policies and host access-control policies in combination to provide "virtual private services." GRIDLOCK simultaneously provides more security to applications, greater scalability, and unification of network access control and host access control. Policies are specified in a new policy-expression language, modeled on the KeyNote trust-management language. This design supports compliance checking, with which credentials provided by a client can be validated to grant access to a resource. The applicability of this layer-crossing approach to multiple virtual private services is investigated.
The research focuses on the development of formal semantics for the unified access-control policy, as well as a rigorous experimental investigation, using multiple example applications. The expected results include both the new policy-expression language and the demonstration that a scalable access-control model for networked applications is practical.
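The flavor of compliance checking can be sketched in a toy model. This is not KeyNote's actual syntax or semantics, and the policy and credential names are hypothetical; policies and credentials are reduced to attribute sets, and signature verification is omitted:

```python
def compliant(policy, credentials):
    """Toy compliance check: access is granted iff every attribute the
    policy requires is asserted, with the required value, by the
    client's credentials."""
    asserted = {}
    for cred in credentials:
        asserted.update(cred["attributes"])  # a real checker would verify signatures
    return all(asserted.get(attr) == value
               for attr, value in policy["require"].items())

# hypothetical policy protecting one "virtual private service"
policy = {"resource": "payroll-db", "require": {"role": "hr", "mfa": True}}
creds = [{"issuer": "corp-ca", "attributes": {"role": "hr", "mfa": True}}]
```

The point of the real design is that the same policy language governs both network-level and host-level access decisions, so a single compliance check replaces two separately administered ones.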

2003 — 2010
Kannan, Ravindran (co-PI); Feigenbaum, Joan; Silberschatz, Abraham (co-PI)
Information Technology Research (Itr): Sensitive Information in a Wired World
Increasing use of computers and networks in business, government, recreation, and almost all aspects of daily life has led to a proliferation of sensitive data (i.e., data that, if used improperly, can harm data subjects or other relevant parties), and concern about the ownership, control, privacy, and accuracy of these data has become a top priority. Despite significant technical accomplishments in relevant research fields (e.g., cryptology and security, database systems, and data mining), there is no comprehensive, end-to-end technological infrastructure for handling sensitive data over the entire course of their lifetime, nor is there even widespread social agreement about the rights and responsibilities of major stakeholders in our data-intensive, networked world.
This is a multi-institutional, multi-disciplinary, multi-modal project that looks comprehensively at sensitive data in a networked world. There are two main academic centers of activity (Yale and Stanford), three smaller-scale academic participants (Stevens Institute of Technology, NYU, and the University of New Mexico), and substantial participation by non-academic partners, including technology companies (IBM, HP, and Microsoft), representatives of user communities (Citigroup, NIH, the Yale Center for Medical Informatics, the Census Bureau, and the Secret Service), and DC-based policy organizations (the Center for Democracy and Technology and the Electronic Privacy Information Center).
A major technical theme of the project is privacy-preserving data mining and, more generally, techniques for meeting the potentially conflicting goals of respecting individual rights and allowing law enforcement and other legitimate organizations to collect and mine massive data sets. Other technical agenda items include (1) accessibility and reliability of distributed data, (2) operating on encrypted databases, (3) remote control of data, (4) repelling hostile data, and (5) auditability of data-management systems. Because these technical goals are affected by lack of agreement about the meanings of basic terms, most notably "privacy," a major goal of the project is the development of a conceptual framework for the study of rights, responsibilities, and public policies focused on sensitive-data handling. This part of the project incorporates the notion of "contextual integrity," which considers both the context and the content of data sets in assessing sensitivity.
Projected outcomes of the project include a next generation of technology for handling sensitive information that is qualitatively better than the current generation's and an effective conceptual framework for policy making and philosophical inquiry into the rights and responsibilities of data subjects, data owners, and data users.

2003 — 2004
Feigenbaum, Joan
Workshop: Scalable Cyber-Security Challenges in Large-Scale Networks: Deployment Obstacles
This is a collaborative workshop between NSF/ANIR and DoE. Its purpose is to discuss cyber-security challenges in large-scale networks and to identify impediments and obstacles to deploying security on a large scale.
The workshop has the potential to develop a long-term research and development vision that not only addresses the technical impediments to large-scale deployment of cyber-security but also identifies the technical challenges and develops strategies to overcome them.

2004 — 2008
Feigenbaum, Joan; Shenker, Scott (co-PI); Bergemann, Dirk (co-PI)
An Economic Approach to Security
Proposal Number: 0428422
Title: An Economic Approach to Security
PI: Joan Feigenbaum
Abstract
Internet security is universally seen as an extremely important problem. Moreover, technical solutions developed over the last three decades represent deep and elegant intellectual contributions. Yet few of these solutions are in widespread use. Clearly something is amiss. It has recently been argued, by Anderson and others, that the missing link is economics: Only through understanding the incentives inherent in various security proposals can one decide which, if any, would actually lead to greater security. This research project is a three-year, multi-institutional, multi-disciplinary investigation of the economics of security in networked environments. Specific research topics include security of interdomain routing, adoptability of trusted platforms, and markets for private information. The intellectual merit and broader impact of the project are intertwined, both based on the potential not only to solve technical problems but also to develop general analytical techniques for evaluating candidate solutions to real security problems in a manner that gives adoption incentives their just due. If successful, the project will lead to greater actual security, rather than simply to more available security technology. Educational activity that integrates security, networking, and economics is also a major goal, one on which the investigators have experience at both the graduate and undergraduate levels.

2006 — 2009
Feigenbaum, Joan
Collaborative Research: New Privacy Frameworks For Collaborative Information Sharing
PI: Vitaly Shmatikov (The University of Texas at Austin)
Co-PI: Joan Feigenbaum (Yale University)
When collaborations involve sharing of sensitive data about individuals and organizations, these data must be protected from unauthorized searches, abuse, and misuse. Conventional interpretations of privacy as confidentiality and/or inaccessibility of any individual piece of information are inadequate in collaborative environments, where some collaborators may legitimately be allowed to access parts of the joint dataset, and no trust assumptions can be made about their computing platforms.
The goal of this project is to develop new concepts and frameworks for privacy in collaborative environments, focusing on global properties of the joint dataset such as security against unreasonable searches and abusive information harvesting. Techniques include provably secure data transformations that assure global and individual privacy properties after information has been released in response to a legitimate request. Another research objective is a theory of privacy that explicitly incorporates economic measures of information value. To enforce global privacy policies, this project will develop new cryptographic techniques for dataset obfuscation and sanitization, ensuring that only policy-compliant queries can be computed on the dataset after it has been transferred to the collaborators.
The main objective is to design privacy-preserving data transformations that are provably secure without unrealistic assumptions about "tamper-proof" software or hardware. Privacy technologies developed in the course of the project will enable important collaborative applications, ranging from joint analysis of patient data in multi-institution clinical trials to transaction monitoring by law enforcement agencies that complies with the citizens' Fourth Amendment right to be secure against unreasonable searches.
Project URL: http://www.cs.utexas.edu/~shmat/privacyframeworks/

2007 — 2010
Feigenbaum, Joan
Ct-Isg: Collaborative Research: Massive-Dataset Algorithmics For Network Security
As malicious attacks on computer systems increase in severity and sophistication, developing effective methods for protecting the Internet is among the most important challenges facing computer science today. Network-based security mechanisms offer both good coverage and the possibility of early threat detection, but they often conflict with the performance requirements of network elements because of the vast amounts of traffic data that must be analyzed. This project will apply massive-dataset (MDS) algorithmics to network security, bringing together two previously unconnected research areas. The objective is to achieve a qualitative improvement in network security by developing efficient, yet theoretically rigorous, algorithmic defenses that can be deployed at scale in modern networks.
The project addresses both fundamental algorithm-design problems and practical applications. MDS algorithmics provides a set of basic techniques for highly efficient (e.g., one-pass, small-space, polylog-time) analysis of large amounts of data. This project will investigate how these methods can be used for (1) online classification and property testing of packet streams, including efficient inference of streams generated by a mixture of stochastic sources, (2) detection of changes and anomalies in traffic patterns, and (3) development of computationally tractable models of traffic sources that support reasoning about a wide variety of adversarial behaviors and incorporate prior knowledge such as conventional intrusion-detection rules. The algorithmic toolkit developed in the course of the project will be applied to practical network-security problems such as recognizing denial of service activity, worm fingerprinting, and detecting botnets.
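One workhorse of the one-pass, small-space toolkit described above is the count-min sketch, outlined below. It is a standard MDS data structure shown for illustration, not claimed to be this project's specific algorithm: it maintains approximate per-item counts (e.g., packets per source address) in a few small arrays, and its estimates never undercount and overcount only when hash buckets collide.

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counts over a stream in O(width * depth) space."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{item}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # min over rows: never below the true count, rarely far above it
        return min(self.table[row][col] for row, col in self._buckets(item))
```

Per-packet work is a handful of hash evaluations regardless of traffic volume, which is what makes such sketches deployable at line rate for tasks like heavy-hitter and change detection.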

2007 — 2011
Feigenbaum, Joan
Collaborative Research: Sing: Foundations of Next-Generation Routing
The emergence of the Internet is one of the most profound shifts in focus in Computer Science since its inception. Traditionally, Computer-Science research has focused primarily on understanding how best to design, build, analyze, and program computers. Research focus has now shifted to the question of how best to design, build, analyze, and operate networks and the distributed applications that run on top of them. Satisfactorily answering these questions will require the development of a Theory of Networked Computation (ToNC) that is analogous to the Theory of (single-machine) Computation that Computer-Science researchers have already developed. In particular, it will be important to investigate the theoretical foundations of routing in next-generation networks. This work will complement ongoing experimental research by examining three foundational aspects of next-generation routing systems: (1) Policy-based, interdomain routing (focusing on distributed algorithmic mechanisms, payment protocols, solution concepts, and privacy), (2) New routing paradigms (focusing on the intrinsic properties of protocols that are not fully distributed, do not require consistent state, or do not use topology-dependent addressing), and (3) New measures of the complexity of routing protocols (focusing on the notion of dependency complexity recently put forth in the networking-research community). The PIs have consistently played a leading role in ToNC-community formation. The lead PI is the co-chair of the GENI Scientific Council and is thus positioned to pave the way for ToNC-community participation in GENI, which could result in the development of network services with novel functionality and provable properties.

2008 — 2009
Feigenbaum, Joan
Cyber Trust Meeting
The PI is hosting and helping to organize the 2008 Cyber Trust PI meeting at Yale University, March 16-18, 2008. Approximately 300 attendees are expected, including members of the PI community, NSF program managers, program managers from other Government agencies, representatives from the U.S. industry research community, and selected international visitors. This annual event serves:
- The PI community, by giving them the opportunity to present their research results through posters. The PIs are also given the unique opportunity to learn about NSF programs other than Cyber Trust, and about programs in other Government agencies and in U.S. industry that afford funding opportunities in Cyber Trust-related research.
- Other Government agencies, by informing them of the exciting research ongoing in the Cyber Trust program.
- Similarly, U.S. industry, by alerting it to exciting research results.
- NSF, primarily through breakout sessions, by discussions on future research challenges that will influence future solicitations.
The two days of the meeting include:
- A panel on future directions for cryptography.
- Sessions on privacy (both future research directions and industry needs), research programs of other Government agencies, research programs in leading U.S. companies that bear on Cyber Trust, international opportunities (with talks by program managers from leading European funding agencies), and the new CNCI multi-agency initiative on a bold new program in security.
- Breakout sessions on research challenges dealing with: privacy, security and privacy foundations, testbeds, the possibility of a security grand challenge competition, and an architecture for a future Internet that provides much improved security and privacy over the current Internet.

2010 — 2014
Feigenbaum, Joan
Tc: Small: Collaborative Research: Accountability and Identifiability
The World Wide Web and other networked information systems provide enormous benefits by enabling access to unprecedented amounts of information. However, for many years, users have been frustrated by the fact that these systems also create significant problems. Sensitive personal data are disclosed, confidential corporate data are stolen, copyrights are infringed, and databases owned by one government organization are accessed by members of another in violation of government policy. The frequency of such incidents continues to increase, and an incident must now be truly outrageous to be considered newsworthy. This project takes the view that when security violations occur, it should be possible to punish the violators in some fashion.
Although "accountability" is widely agreed to be important and desirable, there has been little theoretical work on the subject; indeed, there does not even seem to be a standard definition of "accountability," and researchers in different areas use it to mean different things. This project addresses these issues, the relationship between accountability and other goals (such as user privacy), and the requirements (such as identifiability of violators and violations) for accountability in real-world systems. This clarification of the important notion of accountability will help propel a next generation of network-mediated interaction and services that users understand and trust.
The project's technical approach to accountability as an essential component of trustworthiness involves two intertwined research thrusts. The first thrust focuses on definitions and foundational theory. Intuitively, accountability is present in any system in which actions are governed by well defined rules, and violations of those rules are punished. Project goals are to identify ambiguities and gaps in this intuitive notion, provide formal definitions that capture important accountability desiderata, and explicate relationships of accountability to well studied notions such as identifiability, authentication, authorization, privacy, and anonymity. The second thrust focuses on analysis, design, and abstraction. The project studies fundamental accountability and identifiability requirements in real-world systems, both technological and social. One project goal is to use the resulting better understanding of the extent to which accountability is truly at odds with privacy and other desirable system properties to design new protocols with provable accountability properties. Building on that understanding and insights gained in designing protocols, the project also addresses fundamental trade-offs and impossibility results about accountability and identifiability in various settings. The broader impacts of the work include not only engagement with students but also a new perspective on real world accountability in trustworthy systems.

2014 — 2018
Ford, Bryan (co-PI); Feigenbaum, Joan
Nets: Medium: Collaborative Research: An App-Centric Transport Architecture For the Internet
The Transmission Control Protocol (TCP) and its socket interface are used by most network applications today; viewed as a software program, it is likely the most popular one in the world. Now over thirty years old, this program is showing signs of age. Web, mobile, and embedded applications use it in unanticipated ways (many interacting flows of different lengths with different objectives); it does not properly support mobile devices with multiple networks; it does not allow an application to adapt quickly to changing network conditions; it does not work well in fast datacenter networks; and it has a fruitless, adversarial relationship with in-network "middleboxes".
App-Centric Transport (ACT) aims to solve these problems. Unlike today's transport architecture, ACT takes application objectives into account in making its transmission decisions. ACT uses a computer-generated transmission control program, which takes models of the workload and objectives to automatically synthesize protocols. This program (Remy) serves both as a way to implement protocols and as a design tool to answer fundamental questions about congestion control. ACT explores centralized control over transmission and path selection for datacenters, and uses machine learning classifiers to select the best network on mobile devices. It aims to resolve tussles between endpoint transport and in-network middleboxes by viewing middlebox interposition as a generalization of multi-path transport. Project participants are collaborating with Facebook and Google on some of these ideas. The education plan introduces protocol design contests where students can run and measure their protocols over trace data and compare their protocols on a leaderboard.
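For context on what a "transmission control program" decides, the sketch below shows the classic additive-increase/multiplicative-decrease (AIMD) rule used by standard TCP congestion control. It is background only, not the computer-synthesized Remy controller described above: grow the window by one segment per round trip when all packets are acknowledged, and halve it on loss.

```python
def aimd_step(cwnd, loss, add=1.0, mult=0.5):
    """One round-trip update of a TCP-style congestion window, in segments:
    additive increase on success, multiplicative decrease on a loss signal,
    never dropping below one segment."""
    return max(1.0, cwnd * mult) if loss else cwnd + add

# window trace over five round trips with a single loss in round 2
cwnd, trace = 10.0, []
for rtt in range(5):
    cwnd = aimd_step(cwnd, loss=(rtt == 2))
    trace.append(cwnd)
```

A synthesized controller replaces this single fixed rule with transmission decisions tuned to models of the workload and the application's objectives.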

2014 — 2018
Ford, Bryan (co-PI); Feigenbaum, Joan
Twc: Medium: Collaborative: Hiding Hay in a Haystack: Integrating Censorship Resistance Into the Mainstream Internet
Freedom and openness of the Internet are under threat. Government censors in non-democratic countries are deploying network filters to block sources of uncensored information, suppress dissent, and prevent citizens from using the Internet to exercise their human rights such as freedom of speech and freedom of assembly.
This project aims to deliver robust censorship-resistant communication systems for today's Internet and for future Internet architectures. The main principle behind this research is that censorship resistance must cease being an ad hoc, add-on component and instead become a first-class property of the networking infrastructure. In contrast to existing circumvention tools, which are easy to recognize and block, censoring the communication technologies developed by this project comes at a significant technical, economic, and social cost to the would-be censors.
The project encompasses several research thrusts. The first thrust is integrating support for censorship-resistant communications directly into popular networked services (for example, cloud storage and streaming video) that already provide encrypted channels and cannot be easily blocked without causing significant disruption. The second thrust is devoted to new multi-streaming transport protocols and ubiquitous proxying systems that, by design, hide the contents of transmitted information and thus resist censorship. The third thrust is the study of censorship risks in future Internet architectures, as well as the design and implementation of censorship resistance mechanisms for these architectures.
In summary, this project aims to produce censorship-resistant communication systems and networking protocols that help make free Internet accessible to millions of potential users, encouraging and fostering democracy and human rights throughout the world.

2019 — 2020
Feigenbaum, Joan
Student Travel Support For 2019 Symposium On Computer Science and Law
The inaugural Association for Computing Machinery Symposium on Computer Science and Law (ACM CSLAW-19) will be held in October 2019, in New York City. By launching efforts in this nascent field of study, the symposium will have impact on a broad range of societal issues, ranging from online security and privacy to Internet platform regulation to digital intellectual property protection. This award provides full or partial support for approximately 20 students to attend the symposium. A subcommittee of the symposium program committee will select a diverse group of students for travel support, based on applications in which students explain their interest in Computer Science and Law and the ways in which they expect the symposium to further their educational and professional development, and advisors address the quality of the students' work and the reason that travel support is needed for attendance.
The scope of the ACM CSLAW-19 symposium includes: Security, privacy, encryption, and surveillance; Cyber espionage, cyber war, and cyber diplomacy; Cyber crime, cyber law enforcement, and digital forensics; Freedom of expression online (or the lack thereof); Online market structure, platform monopolies, and antitrust law; Online government services; Digital intellectual property; Legal informatics; Automation of legal reasoning and legal services; Fairness, accountability, transparency, and ethics (FATE) in machine learning and data mining; Methodological compatibility and incompatibility between the discipline of computer science and the discipline of law; Educational imperatives and existing educational programs in Computer Science and Law. The main expected outcome of the symposium is a comprehensive report on the findings of break-out-group discussions. The report will explicate a rich agenda for research, practice, education, and fostering the interdisciplinary area of Computer Science and Law and make concrete suggestions about how the research community, ACM, funding agencies, universities, the technology industry, and other interested institutions can help advance this agenda.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2021 — 2023
Feigenbaum, Joan
Collaborative Research: Dass: Legally Accountable Cryptographic Computing Systems (Lachs)
Society is having a hard time governing digital systems, and poorly governed systems lead to gaps in society's willingness to trust these systems with sensitive or high-priority tasks. Part of this challenge arises from the fact that law and software address rules and behavior at very different levels of detail. Laws must be general in their application and thus leave interpretation and detailed requirements to the discretion of software developers. This leaves software developers, who cannot be expected to be legal experts, having to decide what is the proper technical design to comply with often-complex legal rules. For example, when a privacy law such as the General Data Protection Regulation (GDPR) requires that users have a right to delete their data, does that mean all data, including backups? Without clear answers to such questions, developers cannot be confident that they have successfully complied with legal requirements, and the public who use these systems then have little reason to trust them. This project's novelty is the introduction of design patterns that help software developers assemble components reliably and purposefully, in the knowledge that they meet policy requirements effectively and thus warrant the confidence of the public. The project's impact benefits society in two ways. First, by closing the abstraction gap between law and systems, methods and tools developed in this project help software developers build systems that comply with legal obligations. Second, the project furthers the development of a research community in computer science and law.
The LAChS (pronounced "lox") project makes two contributions toward a better understanding of how to build accountable software systems. First, policy concepts allow software developers to identify the functional aspects of the systems they are developing in order to assess whether the functions of the system are consistent with the policy constraints associated with the computations they are performing. Along with policy concepts, the project introduces policy standards -- functional descriptions of the requirements of law. Together, policy concepts and policy standards provide a software-engineering framework through which developers can more easily build systems that are accountable to legal requirements. Second, the project develops an integrated legal-technical methodology for assessing the accountability properties of a system with respect to a set of legal requirements. Prior work has generally sought to define accountability solely as a property of information systems. The project shows that a full understanding of accountability requires considering the properties of both law and computing systems. In sum, this research is premised on the view that, with respect to key societal priorities such as privacy, the law has actually made considerable progress in defining key rights for the digital age, while the underlying technology is still struggling to adapt to these challenges. Thus the project aims to bring clearer abstraction, modularization, and composability to legal and technical methodologies in order to better meet these challenges.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.