2002 — 2006 |
Hassibi, Babak |
PECASE: Multi-Antenna Communications: Information Theory, Codes and Signal Processing @ California Institute of Technology
It is now widely recognized that multiple antennas will figure prominently in future wireless communications systems, since they can significantly boost the channel capacity, as well as lower the probability of error, of a wireless communications link. However, before this promise can be realized in a practical communications system, several key research challenges must be addressed. This research studies several of the information-theoretic, coding-theoretic, and signal processing challenges encountered, as well as the impact of integrating their solutions into a multi-user wireless network. A common thread throughout is that the tools developed, as well as the results obtained, have implications well beyond multi-antenna communications, both in terms of the introduction of new mathematical methods and in terms of their applicability to more general communication problems.
The first research challenge addressed is information-theoretic: the actual channel capacity of a multi-antenna wireless link is known only under certain idealized conditions. For most realistic conditions, the channel capacity is unknown, and it is not clear how it depends on the speed of the fading, the number of antennas, and the SNR. Nor is it clear what the optimal transmission strategies should be or what the performance of training-based schemes is. This research will focus on these problems for continuously- and block-fading channels, where the analysis appears to be tractable and where the theory of random matrices plays a major role. The second challenge is that of designing space-time codes that deliver on the high data rates promised by theory, have good error performance, and lend themselves to efficient encoding and decoding.
Compared to conventional codes, the added spatial dimension adds a whole new twist to the code design problem, and a variety of information-theoretic, linear-algebraic, and group-theoretic ideas play a prominent role. The signal processing research challenge is to devise algorithms that are efficient, so that all the processing can be done in real time. Recent work by the researcher has analytically demonstrated that, for a wide range of rates and SNRs, polynomial-time maximum-likelihood decoding of several classes of space-time codes is possible. This research will fully pursue the implications of this result, both in terms of the design of new algorithms and codes, as well as in terms of understanding the tradeoffs between maximum-likelihood performance and computational complexity.
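The polynomial-time maximum-likelihood decoding mentioned above is typically achieved by a sphere-decoder-style search. The following is a minimal illustrative sketch, not the researcher's actual algorithm: exact ML decoding for a real linear model y = Hx + n with the entries of x drawn from a finite alphabet, where a QR decomposition turns the search into a pruned depth-first tree traversal.

```python
import numpy as np

def sphere_decode(H, y, alphabet):
    """Exact ML decoding of y = H x + n with x_i in `alphabet`, via a
    pruned depth-first search (sphere-decoder style).  Worst-case
    exponential, but typically fast at moderate-to-high SNR."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)            # H = Q R, R upper triangular
    z = Q.T @ y                       # ||y - H x|| = ||z - R x||
    best = {"x": None, "d2": np.inf}

    def search(level, x, d2):
        if d2 >= best["d2"]:          # prune: already outside the sphere
            return
        if level < 0:                 # reached a leaf: record new best
            best["x"], best["d2"] = x.copy(), d2
            return
        for s in alphabet:            # branch on each candidate symbol
            x[level] = s
            r = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, d2 + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]
```

For a BPSK alphabet {-1, +1}, once a good candidate is found the radius pruning lets the search visit only a small fraction of the 2^n leaves, which is the intuition behind the polynomial-time behavior over a wide range of rates and SNRs.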
This project was originally funded as a CAREER award, and was converted to a Presidential Early Career Award for Engineers and Scientists (PECASE) award in May 2004.
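For context, the idealized coherent setting referred to in the abstract, where the receiver knows the channel realization, admits the well-known ergodic capacity formula C = E[log2 det(I + (SNR/nt) H H*)], which is easy to estimate by Monte Carlo. The sketch below assumes i.i.d. Rayleigh fading and equal power allocation across transmit antennas; it is illustrative only.

```python
import numpy as np

def mimo_capacity(nt, nr, snr, trials, rng):
    """Monte Carlo estimate (bits/s/Hz) of the coherent ergodic capacity
    of an i.i.d. Rayleigh-fading MIMO channel with equal power allocation:
        C = E[ log2 det(I + (snr/nt) H H^*) ].
    `snr` is a linear (not dB) power ratio."""
    total = 0.0
    for _ in range(trials):
        # complex Gaussian channel, unit average power per entry
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)   # det is real and positive here
    return total / trials
```

Capacity grows roughly linearly in min(nt, nr), which is the multiplexing gain behind the "significantly boost the channel capacity" claim above.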
2003 — 2008 |
Schulman, Leonard (co-PI); Murray, Richard; Effros, Michelle (co-PI); Low, Steven (co-PI); Hassibi, Babak
ITR: Information Dynamics For Networked Feedback Systems @ California Institute of Technology
This project is developing a new framework for investigating the dynamics of information in complex, interconnected systems. The key technical thrusts of the project are: (1) real-time information theory, (2) robust control of networks, (3) packet-based control theory, and (4) computational complexity of network systems. Each of these thrusts explores aspects of information systems that must interact with the real world in a manner that requires careful control of the timing of the computation and the evolution of the information state of the system. While diverse in application, these thrusts share a common intellectual core that integrates computer science, control, and communications.
The results of the proposed research are being evaluated on two testbeds already in place at Caltech. The first is the Multi-Vehicle Wireless Testbed, which provides a distributed environment for control of 8-10 vehicles performing cooperative tasks in a real-time environment. The second is the WAN in Lab, a wide area network consisting of high-speed servers, programmable routers, electronic crossconnects, and long-haul fibers with associated optical amplifiers, dispersion compensation modules, and optical multiplexers.
The project is also developing elements of a curriculum that will provide training to students in information systems that blends communications, computation, and control. This includes integrating the research framework into a recently created course at Caltech on the principles of feedback and control, CDS 101, as well as developing a second course, IST 201, aimed at bringing together faculty and students interested in working on problems at the boundaries of these traditional disciplines.
2007 — 2011 |
Hassibi, Babak |
Entropy Vectors, Convex Optimization and Network Information Theory @ California Institute of Technology
This research studies network information theory from the viewpoint of entropic vectors and convex optimization. There is currently great interest in the problem of information transmission over wired and wireless networks. Information theory is well poised to have an impact on the manner in which future networks are designed and maintained, both because wired networks are ripe for the application of network coding and because wireless networks cannot be satisfactorily dealt with using conventional networking tools. The challenge is that most network information theory problems are notoriously difficult, and so the mathematical barriers that must be overcome are often quite high.
The approach adopted in this research is through the definition of the space of normalized entropic vectors, which differs slightly from that in the literature in that entropy is normalized by the logarithm of the alphabet size. This definition is more natural for determining the capacity region of networks and renders the closure of the resulting space convex (and compact), even under constraints imposed by channels internal to the network. For acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximizing a linear function over the set of channel-constrained normalized entropic vectors and some linear constraints. While not necessarily making the problem simpler, this approach certainly circumvents the "infinite-letter characterization", as well as the nonconvexity of earlier formulations, and exposes the core of the problem as that of determining the space of normalized entropy vectors. Much of the research therefore focuses on constructing computable inner and outer bounds to this space using tools from group theory, lattice theory, non-Shannon inequalities, and others.
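To make the central object concrete: the normalized entropic vector of n discrete random variables collects the joint entropies of all 2^n - 1 non-empty subsets, each divided by the log of the alphabet size q, per the definition above. A small illustrative computation (the subset-labeling convention is ours, not the project's):

```python
import numpy as np
from itertools import combinations

def entropy_vector(p, q):
    """Normalized entropic vector of n discrete random variables over a
    common alphabet of size q.  `p` is the joint pmf as an n-dimensional
    array of shape (q,) * n.  Each subset entropy is divided by log(q),
    following the normalization used in this project."""
    n = p.ndim
    h = {}
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            drop = tuple(set(range(n)) - set(S))   # marginalize out the rest
            marg = p.sum(axis=drop)
            pm = marg[marg > 0]                    # 0 log 0 = 0 convention
            h[S] = float(-(pm * np.log(pm)).sum() / np.log(q))
    return h
```

For two independent uniform bits and their XOR, every singleton has normalized entropy 1 while every pair, and the triple, has entropy 2, since any two of the three variables determine the third; such vectors are the points whose closure forms the convex space described above.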
2009 — 2013 |
Hassibi, Babak |
CPS: Small: Random Matrix Recursions and Estimation and Control Over Lossy Networks @ California Institute of Technology
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Many of the future applications of systems and control that will pertain to cyber-physical systems are those related to problems of (possibly) distributed estimation and control of multiple agents (both sensors and actuators) over networks. Examples include distributed sensor networks, control of distributed autonomous agents, collision avoidance, and distributed power systems. Central to the study of such systems is the study of the behavior of random Lyapunov and Riccati recursions (the analogy is to traditional LTI systems, where deterministic Lyapunov and Riccati recursions and equations play a prominent role). Unfortunately, to date, the tools for analyzing such systems are woefully lacking, ostensibly because the recursions are both nonlinear and random, and hence intractable if one wants to analyze them exactly.
The methodology proposed in this work is to exploit tools from the theory of large random matrices to find the asymptotic eigendistribution of the matrices in the random Riccati recursions when the number of states in the system, n, is large. In many cases, the eigendistribution contains sufficient information about the overall behavior of the system. Stability can be inferred from the eigenanalysis. The mean of the eigenvalues is simply related to the mean of the trace (i.e., the mean-square error of the system), whereas the support set of the eigendistribution says something about best- and worst-case performances of the system. Furthermore, a general philosophy of this approach is to identify and exhibit the universal behavior of the system, provided such a behavior does exist. Here, "universal" means behavior that does not depend on the microscopic details of the system (where losses occur, or what the exact topology of the network or the underlying distributions are), but rather on some simple macroscopic properties.
A main idea of the approach is to replace a high-dimensional matrix-valued nonlinear and stochastic recursion by a scalar-valued deterministic functional recursion (involving an appropriate transform of the eigendistribution), which is much more amenable to analysis and computation.
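As an illustration of the object being analyzed (a sketch in the style of Kalman filtering with intermittent observations, not the project's actual model), the following iterates a Riccati recursion in which the measurement update runs only when a Bernoulli packet arrival succeeds, and returns the eigenvalues of the resulting random error covariance:

```python
import numpy as np

def random_riccati(A, C, Q, R, p_arrival, steps, rng):
    """Riccati recursion with Bernoulli measurement losses: the time
    update always runs, but the measurement update occurs only when the
    observation packet arrives (probability `p_arrival`).  Returns the
    eigenvalues of the error covariance after `steps` iterations."""
    P = np.eye(A.shape[0])
    for _ in range(steps):
        P = A @ P @ A.T + Q                       # time update (always runs)
        if rng.random() < p_arrival:              # packet arrived this step
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P = P - K @ C @ P                     # measurement update
    return np.linalg.eigvalsh((P + P.T) / 2.0)    # symmetrize for stability
```

With stable A the eigenvalues remain in a bounded band between the fully-updated and never-updated fixed points; characterizing the limiting eigendistribution as the state dimension grows is exactly where the random-matrix machinery described above enters.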
The project will include course development and the recruitment of women and minority students to research. It will also engage undergraduate and underrepresented minority students in research through Caltech's SURF and MURF programs.
2010 — 2014 |
Hassibi, Babak |
CIF: Small: Information Flow in Networks: Entropy, Matroids and Groups @ California Institute of Technology
This research aims to develop an optimization-based approach to network information theory that goes well beyond current networking theory and practice. There is a great deal of recent interest in the problem of simultaneous information transmission among many users over wired and wireless networks. Information theory is well poised to have an impact on the manner in which such future networks are designed and maintained, both because wired networks are ripe for applications such as network coding (where information streams are actually combined rather than simply routed) and because wireless networks cannot be satisfactorily dealt with using conventional networking tools. The challenge is that even the simplest network information theory problems are notoriously difficult and, as a result, information theory has not been able to provide many tools to network practitioners. The research seeks to remedy this situation by developing tools for more effective network design.
While, in principle, it is possible to obtain the information-theoretic rates in wired networks via convex optimization over the space of entropy vectors, this effort is severely hampered by the fact that an explicit characterization of the entropic space does not appear to be within reach. To circumvent this, the research will consider frameworks that, while possibly suboptimal, apply to arbitrary networks, have reasonable complexity, and lend themselves to distributed implementation. The mathematical approach taken is four-fold and makes use of the representation theory of matroids (to design linear network codes), Markov chain Monte Carlo methods to distributedly design "good" network codes, group-theoretic techniques to construct nonlinear network codes from non-Abelian groups, and determinantal inequalities to study the entropic space.
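The idea that network coding combines information streams rather than simply routing them is captured by the classic butterfly network: two sources each send one bit to both sinks, and the single bottleneck edge carries the XOR of the two bits, a linear combination over GF(2). A toy sketch:

```python
def butterfly(b1, b2):
    """Linear network code on the butterfly network: each sink receives
    one source bit directly on a side edge, plus the coded symbol
    b1 XOR b2 relayed through the bottleneck edge.
    Returns (what sink 1 decodes, what sink 2 decodes)."""
    m = b1 ^ b2                  # coded symbol on the bottleneck edge
    sink1 = (b1, b1 ^ m)         # sink 1 sees b1 directly, XORs out b2
    sink2 = (b2 ^ m, b2)         # sink 2 sees b2 directly, XORs out b1
    return sink1, sink2
```

Both sinks recover both bits in a single use of the bottleneck edge, whereas pure routing would need two uses; this rate gain is what makes wired networks "ripe" for network coding, and the matroid and group-theoretic tools above generalize such codes to arbitrary networks.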
2014 — 2018 |
Hassibi, Babak |
CIF: Medium: Collaborative Research: Estimating Simultaneously Structured Models: From Phase Retrieval to Network Coding @ California Institute of Technology
In modern data-intensive science and engineering, researchers are faced with estimating models where the available observations are far fewer than the dimension of the model to be estimated. The key to the success of compressed sensing, matrix completion, and other problems of this type is to properly exploit knowledge about the "structure" of the model. While structures such as sparsity have been studied separately, the problem of "simultaneous structures" has been neglected, since practitioners implicitly assume that simply combining known results for each structure would solve the joint problem. Interestingly, the PIs recently proved that this approach can result in a significant gap.
This proposal will develop theory and computationally tractable methods for estimating simultaneously structured models with minimal observations. It combines (1) a top-down approach to understand the fundamental limitations based on the geometry of how structures interact, and (2) a problem-specific, bottom-up approach to exploit domain knowledge in constructing appropriate penalties. This work addresses a variety of applications including (1) sparse principal component analysis, a central problem in statistics seeking approximate but sparse eigenvectors, (2) sparse phase retrieval and quadratic compressed sensing in signal processing, and (3) code design for communications and network coding.
The ability to systematically derive structured models from data will have far-reaching impact on engineering challenges in the era of Big Data and ubiquitous computing. Handling models with multiple structures poses deep theoretical and computational challenges that this proposal focuses on. Applications in machine learning, signal processing, and network coding are discussed. The PIs will incorporate research results in their teaching, organize technical workshops to bring together mathematicians and engineers, and seek the involvement of undergraduate students in this work through summer research programs.
2014 — 2017 |
Hassibi, Babak |
CIF: Small: Structured Signal Recovery From Noisy Measurements Via Convex Programming: A Framework For Analyzing Performance @ California Institute of Technology
With the advent of ubiquitous sensing (multi-modal sensors, imaging systems and cameras, etc.), various complex social networks, and the deluge of health-care data (DNA sequences, micro-arrays, etc.), society is now officially in the era of Big Data. In such a setting, the ability to systematically and efficiently derive structured models, and recover reliable and actionable information, from barrages of high dimensional data will have far-reaching impact on engineering challenges and on everyday life. Unfortunately, the data is often noisy, inaccurate, or partially missing. This research will develop a comprehensive theory to assess the performance of a very wide class of algorithms designed for this purpose which are based on convex programming techniques. Such performance guarantees will assist practitioners in a wide array of applications in signal processing, machine learning, statistics and data analysis.
Recent years have witnessed some spectacular theoretical and algorithmic advances in convex optimization and compressed sensing that have changed how large noisy data sets are handled. Despite these successes, key challenges remain, including the need for a comprehensive theory that accurately predicts the performance of the algorithms and goes beyond the customary "order-wise" performance guarantees. The investigators will pursue an ambitious research program to give exact performance evaluations for a wide variety of convex-optimization-based signal recovery methods, including the classical LASSO and its variants. The framework can deal with a wide array of signal-to-noise ratios, different measurement matrix ensembles, and a variety of cost functions and signal structures. The techniques draw upon a host of ideas in high-dimensional geometry, statistics, and signal processing and are the culmination of a flurry of activity by several different research communities.
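As a concrete instance of the class of estimators analyzed, the LASSO, min_x (1/2)||Ax - y||^2 + lam*||x||_1, can be solved by proximal gradient descent (ISTA). This is a standard textbook solver, shown only to make the estimator concrete, not the investigators' analysis framework:

```python
import numpy as np

def lasso_ista(A, y, lam, steps=500):
    """Proximal gradient (ISTA) for the LASSO:
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    Each iteration takes a gradient step on the smooth term, then
    applies the soft-thresholding proximal operator of the l1 norm."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

With A equal to the identity, the minimizer is simply the soft-threshold of y, a useful sanity check; the point of the proposed theory is an exact (rather than order-wise) characterization of the error of such estimators across SNRs, measurement ensembles, and structures.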
2015 — 2018 |
Hassibi, Babak |
Coding For Networked Control Systems Over Lossy Links @ California Institute of Technology
The coming decade will witness an explosive growth in the number of devices that have wireless communication capabilities and that can sense, measure, actuate, interact, and communicate with each other in ad hoc and/or centralized fashions. Such devices will appear in automobiles, household appliances, wearable electronics, sensor networks, surveillance systems, various controllers, and many others. A common catch-all phrase for these emerging devices is the Internet-of-Things. One of the great opportunities that such a plethora of intelligent devices presents is that they can cooperate, coordinate, make joint decisions, and influence the environment they are in by taking joint actions. A main challenge in realizing this opportunity is that these multiple devices, or agents, must do so in a decentralized way while communicating over unreliable links. Although communication systems have always had to deal with unreliable links, traditional schemes designed for communication systems are no longer applicable because of the real-time nature of the control and sensing signals that need to be exchanged, the dynamically changing traffic patterns, and the possible mobility of the agents. The goal of this proposal is to facilitate the design and analysis of networked systems that interact and communicate with each other over noisy communication links; it builds on the investigators' prior work constructing the first efficiently decodable tree codes that can be used to reliably implement any distributed protocol over lossy erasure links.
How best to operate distributed heterogeneous agents over networks of lossy links is an area of active research. Questions include: What information should the agents exchange? How best should they quantize and encode their measurements or control signals to combat the uncertainties (delay or message drops) in the network? How can they facilitate cooperation and overcome competition? How can they do all this in a distributed fashion? These very challenging questions overlap with the conventional fields of communications, networking, and control and combine elements of each in novel ways. As a result, a theory that can deal with them in a holistic manner has yet to emerge. This proposal attempts the beginning of such a theory through the study of tree codes, a new paradigm in coding that allows the reliable implementation of interactive protocols over lossy links, as well as through the study of causal source coding. Tree codes were introduced nearly twenty years ago as a bridge between information theory and control, but did not gain much traction since explicit constructions with efficient decoding did not exist. The area of causal source coding is even less explored. The proposed research has five main thrusts: (1) the design of efficiently encodable and decodable tree codes for other classes of channels, such as AWGN and BSC, (2) the study of the performance of control systems and the interaction and tradeoffs between source coding, channel coding, and controller design, (3) the study of causal source coding, in particular causal transform codes and how they can be optimized in the control setting, (4) implementing distributed protocols over erasure links, and (5) determining metrics from control theory to guide the code design.