1992 — 1997
Impagliazzo, Russell
NSF Young Investigator: Small-Depth Boolean Circuits and Complexity-Theoretic Cryptography @ University of California-San Diego
This award funds continuing investigations into two long-term interests: the complexity of small-depth Boolean circuits, and the computational complexity foundations of cryptography. Computational complexity as a field strives to provide a mathematical foundation for computer science by developing techniques to evaluate the inherent difficulty of computational problems. The Boolean circuit seems to be the most natural and robust concrete model of computational complexity. Much work has gone into developing the theory of circuit complexity in the hope of resolving such questions as P vs. NP. There are strong connections between bounds on circuit depth and issues in the theory of parallel computation, logic (mostly proof theory), learning theory, and the theory of neural nets. Continuing projects in circuit complexity are: establishing links between circuit bounds and Frege proofs; examining the behavior of formulas under random restrictions; and studying small-depth threshold circuits (also known as neural nets). Of course, circuit complexity is a dynamic area, and specific projects may change with new results.
Work continues on the foundations of cryptography. Recent work in developing the structural complexity of cryptography has been so successful that perhaps all the major, tractable questions in the area have been resolved. Indeed, there seem to be some theoretical roadblocks to further progress in this direction. However, there are several important open questions regarding the relationship between average-case complexity and cryptography, and that between oblivious transfer and secret agreement. More importantly, a large gap remains between theory and practice. Another goal of the project is to develop theoretical tools that will actually lead to implementable cryptosystems, and to address practical questions such as virus protection, secure electronic transfer of funds, and the tension between privacy and security for electronic mail.
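To make "random restrictions" concrete, here is a minimal Python sketch (illustrative only, not from the award; the CNF representation and the survival probability p are assumptions of the sketch): each variable is independently fixed to a random constant with probability 1-p and left free with probability p, and the formula is simplified accordingly.

```python
import random

def random_restriction(clauses, num_vars, p=0.3, seed=0):
    """Apply a random restriction to a CNF formula.

    clauses: list of clauses, each a list of nonzero ints
             (literal v means variable v, -v means its negation).
    Each variable stays free with probability p; otherwise it is fixed
    to a random constant. Returns (restricted_clauses, assignment),
    where assignment maps fixed variables to 0/1.
    """
    rng = random.Random(seed)
    assignment = {}
    for v in range(1, num_vars + 1):
        if rng.random() >= p:                  # fix this variable
            assignment[v] = rng.randint(0, 1)

    restricted = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                value = assignment[v] if lit > 0 else 1 - assignment[v]
                if value == 1:                 # literal true: clause satisfied
                    satisfied = True
                    break
                # literal false under the restriction: drop it
            else:
                new_clause.append(lit)         # variable left free
        if not satisfied:
            restricted.append(new_clause)      # an empty clause means a contradiction
    return restricted, assignment

# Example: (x1 or not x2 or x3) and (not x1 or x2)
print(random_restriction([[1, -2, 3], [-1, 2]], num_vars=3))
```

Random restrictions of this kind are the engine behind switching-lemma arguments, which show that a small-depth formula typically collapses to a much simpler function after restriction.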
1998 — 2005
Rangan, Venkat; Belew, Richard (co-PI); Ferrante, Jeanne (co-PI); Pasquale, Joseph; Impagliazzo, Russell
CISE Research Infrastructure: The UCSD Active Web @ University of California-San Diego
EIA-98-02219, Joseph Pasquale, University of California, San Diego
UCSD is investigating system and application support issues for a next-generation World Wide Web, called the "Active Web." The Active Web is premised on support for active content, content that is rich in multimedia and in references to other objects, and for mobile agents, programs that can move about and execute on remote servers, carrying out requests at a distance on behalf of clients. These servers are no longer passive databases as in today's Web, but context-sensitive knowledge networks that contain all kinds of active content. Between the servers there is a constant exchange of agents, which add to, refine, interconnect, and maintain the consistency of the distributed content. In the Active Web, there is a high degree of resource sharing, usage is bought and sold as in a market economy, and security is paramount. This grant will allow UCSD to purchase large-scale computer and storage servers and a high-speed network that will connect the various laboratories and will form a small-scale Active Web prototype.
The project is taking a department-wide, coordinated approach, integrating research efforts in systems, security, multimedia, content-based search, scientific metacomputing, and tools for software/hardware design and analysis.
1998 — 2001
Paturi, Ramamohan (co-PI); Impagliazzo, Russell
Developing a Theory of Heuristics @ University of California-San Diego
Current theory does little to predict, explain, or analyze the empirically observed success of search and optimization heuristics for some domains of NP-complete problems and their failure for others. Without such a theory, implementers of heuristics have no guidance as to whether a particular heuristic method is suitable for a particular problem domain, what kind of performance to expect, or how to set the various parameters for better performance. Implementation is often done in a haphazard way based on little more than guesswork or trial and error. The following inter-related lines of theoretical research will be explored to address this gap: (1) Extending average-case complexity to consider reductions to and between problem distributions that arise naturally (in real life, testing, cryptanalysis, or combinatorics). (2) Extending average-case analysis techniques to a more general characterization of the relevant combinatorial properties of random problem instances from naturally arising distributions, in particular the properties of the associated search spaces for such problems. (3) For experimentally successful heuristic methods, identifying the combinatorial characteristics of problem instances that determine their performance. (4) Investigating more efficient worst-case algorithms for NP-complete problems, identifying those NP-complete problems with nontrivial worst-case algorithms, and exploring the connection between lower and upper bounds. Although the techniques used are solely mathematical, the motivation and justification come from the experimental literature.
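As one concrete example of a naturally arising distribution, the following sketch (hypothetical, not part of the award; the instance size and clause ratios are chosen only for illustration) samples random 3-SAT formulas at several clause-to-variable ratios and checks satisfiability by brute force. Experiments of this kind reveal the sharp drop in the fraction of satisfiable instances near a ratio of roughly 4.27, one of the combinatorial properties of random instances the project studies.

```python
import itertools
import random

def random_3sat(num_vars, num_clauses, rng):
    """Sample a random 3-CNF formula: each clause picks 3 distinct
    variables and negates each one independently with probability 1/2."""
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def satisfiable(clauses, num_vars):
    """Brute-force satisfiability check (fine for small num_vars)."""
    for bits in itertools.product([0, 1], repeat=num_vars):
        if all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 10, 40
for ratio in (3.0, 4.3, 5.5):
    sat = sum(satisfiable(random_3sat(n, int(ratio * n), rng), n)
              for _ in range(trials))
    # The satisfiable fraction falls sharply as the ratio crosses the threshold.
    print(f"clauses/variables = {ratio}: {sat}/{trials} satisfiable")
```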
1998 — 2000
Belew, Richard (co-PI); Impagliazzo, Russell
Empirical Analysis of Search Spaces Using Population-Based Sampling @ University of California-San Diego
Stochastic and population-based search methods (simulated annealing, Metropolis, WalkSAT, genetic algorithms) have been successful as optimization heuristics in a number of application areas. However, very little is known about which problems they succeed at, which instances they do well on, or how to make implementation choices to tailor the method to a given application. To address this, recent theoretical work on population-based methods will be used as a guide in an experimental study of the structure of search spaces. Population-based algorithms (in particular, go-with-the-winners local optimization) will be used to sample uniformly from the part of the search space that meets a certain minimal standard of optimality. Such samples will be used to discover combinatorial characteristics of search spaces, and then to predict and improve the performance of optimization heuristics. Social impact: There will be collaboration between theorists and experimenters, which will help forge ties between the two research communities. Graduate students will learn to balance theoretical skills, programming intuition, and concern for applications.
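To illustrate the sampling idea, here is a simplified go-with-the-winners sketch (not the project's code; the MAX-SAT objective, population size, and threshold schedule are assumptions for concreteness): particles perform a local random walk, and whenever the quality threshold is raised, particles that fall below it are replaced by copies of surviving particles, keeping the population concentrated on the part of the search space meeting the current standard.

```python
import random

def num_satisfied(clauses, assign):
    """Count clauses satisfied by a 0/1 assignment (variables are 1-indexed)."""
    return sum(any((assign[abs(l)] == 1) == (l > 0) for l in c) for c in clauses)

def go_with_the_winners(clauses, num_vars, pop_size=50, steps=50, seed=0):
    """Simplified go-with-the-winners sampler for MAX-SAT.

    Each round: every particle takes one random-walk step (flip one
    variable), the threshold is raised to the population median, and
    particles below the threshold are replaced by copies of survivors."""
    rng = random.Random(seed)
    pop = [{v: rng.randint(0, 1) for v in range(1, num_vars + 1)}
           for _ in range(pop_size)]
    threshold = 0
    for _ in range(steps):
        for particle in pop:                        # unbiased local move
            v = rng.randint(1, num_vars)
            particle[v] ^= 1
        scores = [num_satisfied(clauses, p) for p in pop]
        threshold = max(threshold, sorted(scores)[pop_size // 2])
        winners = [p for p, s in zip(pop, scores) if s >= threshold]
        if not winners:
            break
        pop = [dict(rng.choice(winners)) for _ in range(pop_size)]  # resample
    return pop, threshold

# Tiny demo on a hand-written formula (illustrative parameters only).
demo = [[1, 2, -3], [-1, 3, 4], [2, -4, 5], [-2, -5, 1], [3, 4, 5]]
pop, thr = go_with_the_winners(demo, num_vars=5)
print("quality threshold reached:", thr, "of", len(demo), "clauses")
```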
2001 — 2004
Paturi, Ramamohan (co-PI); Impagliazzo, Russell
Quantifying Intractability and the Complexity of Heuristics @ University of California-San Diego
Search and optimization problems are central to all areas of computer science and engineering. Finding the optimal layout for a VLSI circuit and finding the lowest-energy configuration of a crystal are both examples of optimization problems. While such problems are believed to be intractable, requiring exponential time to solve the worst-case instances, many heuristic methods have been observed to be relatively successful on instances that arise in different applications. This project addresses questions concerning the quantitative measures of the intractability of search and optimization problems, as opposed to qualitative notions such as NP-completeness. The following are some of the questions addressed in this project:
1. Which instances of optimization problems are the most intractable ones?
2. Exactly how difficult are these problems?
3. What are good heuristic methods for solving optimization problems? When and how well do they work?
4. Are specific non-complete problems such as factoring also intractable?
5. How much does randomness help in solving problems?
6. Are hard problems suitable for cryptographic applications? If so, what levels of security do they provide for these applications?
Unconditional answers to these questions would first require resolving the P versus NP question. However, this project will use two approaches to find the most likely answers to these questions. The first approach is to provide proofs resolving these issues under plausible complexity assumptions. The second approach is to examine restricted but powerful classes of algorithms that include the most successful heuristics for the problems under study. This approach will include attempts both to explain the success of such heuristics and to show limitations that can be used as a guide to the likely inherent complexity of the problems.
2003 — 2006
Impagliazzo, Russell; Micciancio, Daniele
ITR: Cryptography: From User Needs to Protocol Design @ University of California-San Diego
Cryptography is a powerful security tool that has not reached its full potential in applications. This project investigates approaches to making cryptography easier to use at multiple levels: easier for the protocol designer, for software applications, and for the policy maker. The project is naturally divided into three areas, roughly corresponding to the list above:
- The study of formal verification methods for protocol design. These methods will help the designer make sure that protocols meet the robust security standards accepted by the cryptographic and complexity theory communities.
- The design of versatile cryptographic primitives whose performance can be tuned to the needs of specific applications. This includes the analysis of cryptographic primitives with respect to modern efficiency and security measures, e.g., history independence, incrementality, forward security (illustrated by the sketch after this list), and combined uses of the above.
- The analysis of the security and privacy desires of individual users and policy makers, and the study of innovative solutions to their problems. For example, the project explores the design of identification cards that provide secure identification without violating the privacy of the users.
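Forward security means that compromising the current key should not expose data protected under earlier keys. A minimal, generic illustration (an assumed construction for this sketch, not the proposal's) is a one-way key ratchet:

```python
import hashlib
import hmac

def ratchet(key: bytes) -> bytes:
    """Derive the next epoch key via a one-way function, then discard the old key."""
    return hashlib.sha256(b"ratchet" + key).digest()

def tag(key: bytes, message: bytes) -> bytes:
    """Authenticate a message under the current epoch key."""
    return hmac.new(key, message, hashlib.sha256).digest()

# The epoch-0 key is assumed to be shared out of band (assumption of the sketch).
k = hashlib.sha256(b"initial shared secret").digest()
tags = []
for msg in [b"report-0", b"report-1", b"report-2"]:
    tags.append(tag(k, msg))
    k = ratchet(k)   # the old key is erased; keys only move forward

# An attacker who steals the current k cannot recompute earlier epoch keys
# (that would require inverting SHA-256), so past tags remain trustworthy.
```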
Broad impact: Security is vital to ensure public confidence in the information infrastructure, and cryptography is a potent tool for computer security. Yet, cryptography is currently under-utilized and frequently misused. By making cryptographic tools easier to use and understand, the project aims to facilitate increased and better informed use of cryptography. The project also aims to contribute to the national debate on appropriate uses of information technology, and, in particular, explore ways to protect individual privacy without damaging national security. Finally, the project will contribute to the education of cryptographic specialists who can communicate clearly with non-specialists, such as implementers and legal experts.
2005 — 2008
Paturi, Ramamohan (co-PI); Impagliazzo, Russell
Duality Between Complexity and Algorithms @ University of California-San Diego
Intellectual Merit: Algorithm design investigates the most efficient ways to solve specific computational problems, whereas complexity theory investigates relationships between general classes of computational problems. This proposal will investigate situations in which answering questions in complexity requires us to understand specific algorithmic problems, and designing efficient algorithms requires us to answer questions in complexity. Recently, there have been several results that expose the connection between complexity and algorithms. Examples include the connections between algebraic circuit lower bounds and polynomial-identity testing, between better exponential-time algorithms for k-SAT and circuit lower bounds, between limitations of widely used backtracking algorithms and proof complexity, and between constructions of error-correcting codes and constructions of pseudorandom generators. The proposed work will elaborate on these connections between combinatorial constructions, efficient algorithms, and complexity. It will use these connections to further our understanding of both algorithms and complexity. It will also seek new connections in the study of randomness in computing, proof complexity, the exact complexity of NP-complete problems, and formal models of algorithmic paradigms.
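One of the cited connections, polynomial-identity testing, has a simple randomized side that is easy to state concretely. The sketch below is illustrative only (the black-box interface and field choice are assumptions): it tests whether a polynomial, given as a black box, is identically zero by evaluating it at random points over a large prime field. By the Schwartz-Zippel lemma, a nonzero polynomial of total degree d vanishes at a random point with probability at most d divided by the field size; no comparably fast deterministic algorithm is known, and that gap is exactly the kind of algorithms-versus-lower-bounds question the proposal studies.

```python
import random

PRIME = (1 << 61) - 1   # a large Mersenne prime, used as the evaluation field

def probably_zero(poly, num_vars, degree_bound, trials=20, seed=0):
    """Randomized polynomial identity test (Schwartz-Zippel).

    poly: a black-box function taking a list of field elements and
          returning the polynomial's value (any int; reduced mod PRIME here).
    Returns True if every random evaluation is 0; a nonzero polynomial of
    total degree <= degree_bound slips through a single trial with
    probability at most degree_bound / PRIME."""
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(PRIME) for _ in range(num_vars)]
        if poly(point) % PRIME != 0:
            return False          # witness that the polynomial is nonzero
    return True

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero...
f = lambda p: (p[0] + p[1]) ** 2 - (p[0] * p[0] + 2 * p[0] * p[1] + p[1] * p[1])
print(probably_zero(f, num_vars=2, degree_bound=2))   # True
# ...while (x + y)^2 - (x^2 + y^2) is not.
g = lambda p: (p[0] + p[1]) ** 2 - (p[0] * p[0] + p[1] * p[1])
print(probably_zero(g, num_vars=2, degree_bound=2))   # False with high probability
```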
This proposal will investigate issues where algorithm design is key to new results in complexity. Such issues include:
- Which instances of optimization problems are the most intractable ones? Exactly how difficult are these problems?
- What are good heuristic methods for solving optimization problems? When and how well do they work?
- Can we distinguish between the powers of various general algorithmic methods (e.g., dynamic programming, greedy algorithms, backtracking, local search, linear-programming relaxation) for solving these problems?
- How much does randomness help in solving problems?
- What is the relationship between the theory of subexponential-time algorithms and fixed-parameter tractability? What other consequences would the existence of subexponential algorithms have for complexity and cryptography?
While complete answers to most of these questions will probably not be possible in the foreseeable future, researchers in complexity, including the PIs, have made substantial progress on all of them. In particular, it is becoming apparent that these questions are so interrelated that it is impossible to address any one issue in isolation. Instead, success will require a multi-pronged effort that reveals the interconnections, and uses progress in one direction to obtain similar progress on others.
Broader Impact: Search and optimization are central to computational issues throughout science and engineering. For example, finding the most probable folding of a protein, finding the smallest area of a VLSI chip, and finding the optimal way to classify data are all combinatorial optimization problems. The same algorithmic techniques are used to solve such problems in a wide variety of application domains. However, many of these techniques are heuristic in that the factors that determine their performance are not well understood. This is more than an academic issue, since the lack of understanding prevents users from matching application areas to the most suitable algorithmic techniques. The work in this proposal is intended to further this understanding and hence may indirectly lead to improvements in many diverse application domains. This proposal will also train graduate students to be top researchers and educators, like many of our alumni.
2007 — 2011
Impagliazzo, Russell |
CT-ISG: Amplifying Both Security and Reliability @ University of California-San Diego
Over the next few years, the basic architecture of the next generation of the Internet will be decided, hopefully with a much greater emphasis on security. In a network, security and communication are interdependent. Cryptographic tools will be required to ensure that communication is dependable despite hackers' denial-of-service attacks and other sabotage. On the other hand, a proper communications architecture could provide primitives that make powerful cryptographic tools for security, such as secure multiparty computation, efficient enough to be implemented. Unfortunately, reliable communication and computation in a network have traditionally been studied without consideration of security, and vice versa. This research examines a model of communication channels that includes security considerations in a robust way. In this model, protocols can be simultaneously evaluated for the two dual objectives of secrecy and reliability. This model unites previous approaches to these questions from the points of view of cryptography, distributed systems, and error correction. The model also unites information-theoretic techniques (with security based on the attacker's inability to access certain information) with complexity-theoretic approaches (based on the attacker's inability to solve intractable computational problems). Possible constructions of channels with stronger properties (increased privacy, authentication, or reliability) from those with weaker properties will be explored. For example, is it possible to take an arbitrary channel that gives only slightly more information to the intended receiver than to an attacker and use it to build a highly reliable and almost completely secret channel? Can we use a channel for secret, reliable communication to create a channel emulating a virtual broadcast? What is the relationship between secrecy and anonymity?
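To make the amplification question concrete, here is a toy simulation (purely illustrative; the channel model and all parameters are assumptions, not the project's constructions). Bits cross a noisy channel on which the intended receiver's error rate is somewhat lower than the eavesdropper's. Repetition with majority voting boosts the receiver's reliability, while taking the XOR of several independently transmitted bits shrinks the eavesdropper's remaining edge, the basic idea behind privacy amplification.

```python
import random

rng = random.Random(0)
P_RECEIVER, P_EAVESDROPPER = 0.1, 0.3    # assumed per-bit flip probabilities

def transmit(bit, flip_prob):
    """Binary symmetric channel: flip the bit with the given probability."""
    return bit ^ (rng.random() < flip_prob)

def send_block(bit, repetitions):
    """Send one bit `repetitions` times; each party takes a majority vote."""
    rx = [transmit(bit, P_RECEIVER) for _ in range(repetitions)]
    ev = [transmit(bit, P_EAVESDROPPER) for _ in range(repetitions)]
    return int(sum(rx) > repetitions // 2), int(sum(ev) > repetitions // 2)

def trial(num_xored_bits=15, repetitions=9):
    """The shared secret is the XOR of several independently sent bits."""
    secret = rx_guess = ev_guess = 0
    for _ in range(num_xored_bits):
        b = rng.randint(0, 1)
        r, e = send_block(b, repetitions)
        secret ^= b
        rx_guess ^= r
        ev_guess ^= e
    return secret == rx_guess, secret == ev_guess

trials = 2000
rx_ok = ev_ok = 0
for _ in range(trials):
    r, e = trial()
    rx_ok += r
    ev_ok += e
print(f"receiver recovers the secret:    {rx_ok / trials:.2f}")   # close to 1
print(f"eavesdropper recovers the secret: {ev_ok / trials:.2f}")  # close to 1/2
```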
2008 — 2013
Impagliazzo, Russell; Sarnak, Peter (co-PI); Wigderson, Avi; Bourgain, Jean (co-PI)
CDI Type II: Pseudorandomness @ Institute for Advanced Study
This proposal aims to transform research in mathematics and computer science by bringing together two major research tracks which have been progressing in parallel (albeit with nontrivial interaction). In mathematics, the object of study in many areas (including analysis, number theory, ergodic theory, and combinatorics) is captured by the question: "What random-like properties does a (deterministic) mathematical structure have?" In computer science, the object of study in many areas (including network theory, error correction, computational complexity, and derandomization) is captured by the question: "Can we deterministically and efficiently design objects with specific random-like properties?"
The PIs view these two tracks, in mathematics and computer science respectively, as analytic and synthetic approaches to understanding the same fundamental pseudorandomness phenomenon and its interaction with structure.
Computer science has mostly been a passive consumer of mathematics, relying on the mathematical analysis of structures to build desirable objects. We propose to transform this one-way use into a full-fledged collaboration, through the use of the computational view of randomness to analyze mathematical structures. Preliminary applications of this tool, many by the PIs, have already led to major breakthroughs, both in new understanding of mathematical objects and in the use of these objects as the basis for constructions in computer science. Examples of recent breakthroughs resulting from this cross-fertilization include work on expanders in Cayley graphs, extractors from sum-product theorems, arithmetic (and other) progressions in the primes, and the use of Gowers' norms in complexity. Many rely on the conceptual computational tool of pseudorandomness: the inability of any test in a given class to tell the difference between a random object and the object in question. This notion arose in complexity-theoretic approaches to cryptography, but has since had wide applicability in computer science (e.g., in derandomizing probabilistic algorithms).
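The computational notion at the end of the paragraph can be illustrated with a toy next-bit test (a hypothetical example, not from the proposal; the generator and the test are chosen only to make the point): a fixed test tries to predict each output bit from the previous one. Against truly random bits, no test succeeds with probability noticeably above 1/2; against a weak deterministic generator, here the low-order bit of a textbook linear congruential generator, a very simple test already wins.

```python
import secrets

def lcg_low_bits(seed, n, a=1103515245, c=12345, m=2**31):
    """Low-order bit of a textbook linear congruential generator."""
    bits, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        bits.append(x & 1)
    return bits

def true_random_bits(n):
    return [secrets.randbits(1) for _ in range(n)]

def alternation_test(bits):
    """A fixed 'next-bit' test: predict that each bit differs from the
    previous one, and report the fraction of correct predictions."""
    hits = sum(bits[i] != bits[i - 1] for i in range(1, len(bits)))
    return hits / (len(bits) - 1)

n = 10_000
print("LCG low bit:    ", alternation_test(lcg_low_bits(seed=42, n=n)))  # ~1.0
print("true randomness:", alternation_test(true_random_bits(n)))         # ~0.5
```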
2012 — 2017
Buss, Samuel (co-PI); Paturi, Ramamohan (co-PI); Impagliazzo, Russell
AF: Large: Collaborative Research: Exploiting Duality Between Meta-Algorithms and Complexity @ University of California-San Diego
Meta-algorithms are algorithms that take other algorithms as input. Meta-algorithms are important in a variety of applications, from minimizing circuits in VLSI to verifying hardware and software to machine learning. Lower bound proofs show that computational problems are difficult in the sense of requiring a prohibitive amount of time, memory, or other resource to solve. This is particularly important in the context of cryptography, where it is vital to ensure that no feasible adversary can break a code. Surprisingly, recent research by the PIs and others shows that designing meta-algorithms is, in a formal sense, equivalent to proving lower bounds. In other words, one can prove a negative (the non-existence of a small circuit to solve a problem) by a positive (devising a new meta-algorithm). This was the key to a breakthrough by PI Williams, proving lower bounds on constant depth circuits with modular arithmetic gates.
The proposed research will utilize this connection both to design new meta-algorithms and to prove new lower bounds. A primary focus will be on meta-algorithms for deciding if a given algorithm is 'trivial' or not, such as algorithms for the Boolean satisfiability problem. The proposed research will devise new algorithms that improve over exhaustive search for many variants of satisfiability. On the other hand, it will also explore complexity-theoretic limitations on how much improvement is possible, using reductions and lower bounds for restricted models. Satisfiability will provide a starting point for a more general understanding of the exact complexities of other NP-complete problems such as the traveling salesman problem and k-colorability. The proposal addresses both worst-case performance and the use of fast algorithms as heuristics for solving this problem.
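For concreteness, the gap between exhaustive search and a smarter algorithm is already visible in a few lines of code. The sketch below is illustrative only and is not the proposal's algorithms (those concern provably better exponential running times, on the order of 2 to the power cn for some c below 1): it contrasts brute force over all 2^n assignments with a basic DPLL-style backtracking search that uses unit propagation and prunes falsified branches.

```python
import itertools

def brute_force_sat(clauses, n):
    """Exhaustive search over all 2^n assignments."""
    for bits in itertools.product([0, 1], repeat=n):
        if all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses):
            return True
    return False

def dpll(clauses):
    """Basic DPLL: unit propagation plus branching on a literal."""
    while True:                                   # unit propagation
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                          # clause already satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return False                      # clause falsified
            new_clauses.append(reduced)
        clauses = new_clauses
    if not clauses:
        return True                               # all clauses satisfied
    lit = clauses[0][0]                           # branch on a literal
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

formula = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
print(brute_force_sat(formula, 3), dpll(formula))  # both agree: satisfiable
```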
This exploration will be mainly mathematical. However, when new algorithms and heuristics are developed, they will be implemented and the resulting software made widely available. This research will be incorporated into courses taught by the PIs, at both the graduate and undergraduate levels. Both graduate and undergraduate students will perform research as part of the project.
2019 — 2022
Impagliazzo, Russell; Lovett, Shachar (co-PI)
AF: Small: Finding Models of Data and Mathematical Objects @ University of California-San Diego
A central mystery in the philosophy of science is "the unreasonable effectiveness of mathematics." Why do natural phenomena seem to have simple, elegant mathematical models? The success of machine learning raises an analogous issue called "the unreasonable effectiveness of machine learning." Is the success of both mathematical science and machine learning due to some common elements in the phenomena studied? Is it due to the nature of the questions we ask? Or is it a mirage due to cognitive bias, from focusing on successful uses and ignoring failures? The aim of this proposal is to better understand this by relating it to a similar phenomenon in mathematics and theoretical computer science, where complex mathematical objects often have simple "models" that give accurate estimates of many quantities of interest. By relating concepts in pure mathematics to analogs in machine learning, the researchers plan both to use powerful machine-learning techniques to prove new mathematical results and to use mathematical concepts to extend the scope of machine learning.
More precisely, the research team will explore a deep connection between regularity lemmas in mathematics, hard-core theorems in complexity theory, and boosting in machine learning. At the core is the following problem: given a distribution D on some kind of data points and a class of hypotheses H, find a model M (namely, a distribution on the same underlying set) that is (i) indistinguishable from D with respect to the hypotheses in H; (ii) simple, in that it has a definition in terms of a small number of tests in H; and (iii) of as high entropy as possible given the conditions above. With this goal in mind, the proposal will address the following main questions (a toy version of the model-finding task is sketched after the questions below):
1. When do such simple, high-entropy models exist? In other words, what does it say about natural phenomena that they have simple, accurate models? What does it say about the questions being asked? This characterization will be studied both in terms of the structure of the distribution D and the class H.
2. What are the quantitative tradeoffs between the different criteria above (indistinguishability, simplicity, and entropy)?
3. When such models exist, can they be found efficiently? What algorithms would lead to the discovery of such models?
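A minimal algorithmic reading of the model-finding task (a sketch under assumed toy definitions, not the project's results) is a boosting-style loop: start from the maximum-entropy uniform model, and as long as some hypothesis in H distinguishes the model from the data distribution D by more than epsilon, multiplicatively reweight the model in the direction of that hypothesis. The loop ends with a model that no test in H can distinguish from D and that is defined by the small number of tests actually used, mirroring conditions (i) and (ii) above.

```python
import itertools
import math

def build_model(data_probs, hypotheses, eps=0.05, eta=0.1, max_rounds=500):
    """Multiplicative-weights sketch of a 'simple, high-entropy model'.

    data_probs: dict mapping each point of a finite domain to its probability
                under the data distribution D.
    hypotheses: list of 0/1-valued functions on the domain (the class H).
    Returns a model M with |E_M[h] - E_D[h]| <= eps for every h in H."""
    points = list(data_probs)
    weights = {x: 1.0 for x in points}            # uniform start = maximum entropy

    def expectation(probs, h):
        return sum(probs[x] * h(x) for x in points)

    for _ in range(max_rounds):
        total = sum(weights.values())
        model = {x: w / total for x, w in weights.items()}
        violated = None                           # find a distinguishing test
        for h in hypotheses:
            gap = expectation(data_probs, h) - expectation(model, h)
            if abs(gap) > eps:
                violated = (h, gap)
                break
        if violated is None:
            return model                          # indistinguishable w.r.t. H
        h, gap = violated
        sign = 1.0 if gap > 0 else -1.0
        for x in points:                          # multiplicative update toward h
            weights[x] *= math.exp(eta * sign * h(x))
    return model

# Toy domain: 4-bit strings; D puts extra mass on strings starting with 1;
# H contains the four coordinate projections.
domain = list(itertools.product([0, 1], repeat=4))
raw = {x: (2.0 if x[0] == 1 else 1.0) for x in domain}
z = sum(raw.values())
data = {x: p / z for x, p in raw.items()}
H = [lambda x, i=i: x[i] for i in range(4)]
model = build_model(data, H)
print(max(abs(sum(model[x] * h(x) for x in domain) -
              sum(data[x] * h(x) for x in domain)) for h in H))  # at most eps
```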
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025
Paturi, Ramamohan (co-PI); Impagliazzo, Russell
Collaborative Research: AF: Medium: Advancing the Lower Bound Frontier @ University of California-San Diego
The central problem in computational complexity theory is to develop the most efficient algorithms for important computational problems. To prove that an algorithm is the most efficient one possible, the holy grail of complexity theory is to prove unconditional lower bounds, showing that any algorithm, however clever or sophisticated, will inherently require a certain amount of resources (hardware or runtime) to solve a given problem. The goal of this research is to tackle the most basic challenge of proving circuit and runtime lower bounds for explicit problems, as well as similar challenges in proof and communication complexity lower bounds.

In more detail, this project is focused on two approaches. The first approach involves exploiting the two-way connection between circuit lower bounds and meta-algorithms, algorithms whose inputs or outputs are circuits or other algorithm descriptions. Important examples of meta-algorithms are: circuit satisfiability, where the input is a circuit and the output is whether it is satisfiable; proof search, where the input is an unsatisfiable formula and the output is a small refutation of it in a proof system; and PAC learning, where the input consists of labelled examples of an unknown target function to be output. Often, paradoxically, efficient algorithms for meta-computational problems such as these lead to improved lower bounds, and vice versa. The second approach is known as lifting, whereby lower bounds in a more complex model of computation are reduced to lower bounds in a simpler model. Through lifting and related ideas, circuit complexity has been related to proof complexity and communication complexity, and lower bounds have been improved in all three settings. The team of researchers will investigate a variety of ideas to extend and deepen these approaches to push past the current barriers in lower bounds. The research is also expected to develop improved algorithms for meta-algorithmic problems, which are of central importance for hardware and software verification, algorithmic learning, and a broad range of other applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
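As a toy illustration of the first meta-algorithm named above (illustrative only; the circuit encoding as a Python function is an assumption of this sketch), a circuit is given as a black-box Boolean function on n inputs, and the baseline meta-algorithm simply tries all 2^n inputs. The research aims at algorithms that beat this exhaustive baseline for structured circuit classes, since even modest speedups of that kind are known to translate into circuit lower bounds.

```python
import itertools

def circuit_sat(circuit, num_inputs):
    """Baseline meta-algorithm: decide satisfiability of a circuit,
    given as a Boolean function, by exhaustive search over all inputs."""
    for bits in itertools.product([False, True], repeat=num_inputs):
        if circuit(*bits):
            return bits          # a satisfying assignment is the witness
    return None

# Example circuit: (a AND NOT b) OR (b AND c)
print(circuit_sat(lambda a, b, c: (a and not b) or (b and c), 3))
```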