2001 — 2004
Low, Steven; Doyle, John (co-PI)
ITR/SI(CISE): Optimal and Robust TCP Congestion Control @ California Institute of Technology
The Internet is undergoing an overhaul unprecedented in size, diversity, and reach, with profound impact on all aspects of our scientific, social, economic and political life through the integration of networks of communication, transportation, entertainment, utilities, and finance. The stability and robustness of this vital infrastructure demand a rigorous theory to understand the current protocols and evolve them to meet emerging challenges. We propose to develop such a theory for TCP congestion control, and to use it to drastically improve the stability, robustness and optimality of the current protocols. A key insight is to view congestion control as a distributed asynchronous computation to maximize aggregate source utility over the Internet; different TCP and active queue management (AQM) schemes correspond to different utility functions and different algorithms to maximize them.

Our research has two components. First, we will develop a new theoretical model of TCP congestion control based on duality in optimization and multivariable robust control. The theory will clarify the role of source algorithms, such as Tahoe, Reno and Vegas, and of active queue management, such as DropTail, RED and REM, in the control of networks, and establish performance limits of the current protocols; it will explain the effect on stability when delay, topology, capacity, and load scale up; and it will provide conditions under which the feedback stability of TCP/AQM algorithms is invariant to these effects. Indeed, such a theory is already emerging from our recent work. Even in its current, preliminary stage, it provides a fundamental understanding of some widely observed performance and fairness behavior of the current protocols, and it uncovers new and surprising stability problems. For example, it shows that the current protocols become unstable and exhibit bifurcation when network capacity increases. Moreover, maintaining stability as capacity scales up arbitrarily imposes severe constraints on how sources adjust their rates (TCP) and what congestion information is fed back (AQM). The current protocol does not satisfy the condition for such stability invariance, and hence may be ill suited for future networks where, pulled by application demand and pushed by technological advances, capacity will be large.

The second component of our research is the design of practical TCP and AQM protocols based on the theory, and the development of prototypes and experiments to demonstrate their effectiveness. We will use the theory to identify the sources of instability in the current protocols when delay, network size, capacity, and traffic load scale up. We will design both enhancements that incrementally evolve the current protocols and drastically new protocols that have the strong robust-stability property promised by the theory. As a concrete application of our algorithms, we will apply them to improve TCP performance over wireless links, both because such links are ubiquitous and because they are likely to remain the most important bottlenecks in future networks.
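To make the duality view above concrete, the following is a minimal sketch (not the project's algorithms; the topology, utilities and step size are illustrative) of congestion control as distributed utility maximization: each source sets its rate from the total congestion price along its path, and each link updates its price from its load, as in dual gradient ascent.

```python
import numpy as np

# Minimal sketch of congestion control as distributed utility maximization
# (the duality view described above); not the project's algorithms.
# Topology, utilities (log => proportional fairness) and step size are illustrative.

R = np.array([[1, 1, 0],             # routing: link 0 carries sources 0 and 1
              [0, 1, 1]], float)     #          link 1 carries sources 1 and 2
c = np.array([10.0, 8.0])            # link capacities
p = np.zeros(2)                      # link prices (congestion signals)
gamma = 0.01                         # price step size (AQM-like update)

for _ in range(5000):
    q = R.T @ p                                       # path price seen by each source
    x = np.minimum(1.0 / np.maximum(q, 1e-6), 20.0)   # maximizer of log(x) - q*x, capped
    p = np.maximum(p + gamma * (R @ x - c), 0.0)      # dual ascent at each link

print("rates:", np.round(x, 2))                       # approximately proportionally fair
print("link loads:", np.round(R @ x, 2), "capacities:", c)
```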
2002 — 2005
Low, Steven; Newman, Harvey (co-PI); Doyle, John (co-PI); Bunn, Julian (co-PI)
STI: Multi-Gbps TCP: Data Intensive Networks For Science & Engineering @ California Institute of Technology
The project will develop and deploy "Multi-Gbps TCP" to enable Terascale data transfers across shared networks for data intensive science and engineering. This builds on a foundation of several years of theoretical development and in-depth simulations at Caltech. The testing, optimization and deployment will be accomplished using the regional, national and transoceanic research and production networks available to (and in some cases co-managed and operated by) the Caltech High Energy Physics (HEP) group together with their partners in Abilene, DataTAG, the Starlight-CERN-SURFNet "wavelength triangle", TeraGrid, AMPATH, CANARIE, CalREN and Pacific Light Rail. Work with IETF, ISOC and GGF, and submission of appropriate RFCs and documents during the course of the project, will ensure that the new techniques developed are consistent extensions of existing standards, and available to the worldwide research community.
The goal of this project is to develop practical TCP and AQM (Active Queue Management) schemes, with loss recovery tuning, that maintain stability, high utilization and negligible loss and delay as the network increases in capacity, size, and load. We aim to validate and deploy these techniques on HEP networks, the TeraGrid and other US backbones including Abilene, ESnet, CalREN, and MREN.
It is well-known that the additive-increase-multiplicative-decrease strategy with which TCP probes available capacity performs poorly at large window sizes due to serious dynamic and equilibrium problems. These problems must be solved to scale TCP to the high-bandwidth regime. The proponents have developed a mathematical theory that provides a fundamental understanding of the current protocols and exposes their stability problems in high-latency, high-capacity environments, and they have derived a new class of TCP/AQM algorithms that solve these problems. The central thrust of this work is to compensate for delay, capacity, and routing using the correct scaling. Fortunately, the network structure is such that there is sufficient information to allow the right scaling to be applied in a distributed and decentralized manner, while remaining compliant with the current protocol. These new algorithms, with TCP recovery tuning, will automatically rescale the parameters so as to maintain stability and optimize performance as capacity, delay, routing or loads change.
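As an illustration of the scaling idea (a sketch in the spirit of the delay-compensation argument above, not the project's exact algorithm), a delay-based window update can drive each flow toward keeping a fixed number of packets queued in the network, independently of link capacity; all constants below are illustrative.

```python
# Illustrative delay-based window update (a sketch in the spirit of the scaling
# argument above, not the project's exact algorithm): the window is nudged toward
# the point where the flow keeps roughly `alpha` packets queued in the network,
# independently of link capacity. All constants are illustrative.

def update_window(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One control interval: w is the congestion window in packets,
    base_rtt the propagation-delay estimate, rtt the latest measurement."""
    target = (base_rtt / rtt) * w + alpha        # fixed point: queued packets == alpha
    w_next = (1.0 - gamma) * w + gamma * target  # smoothed move toward the target
    return min(w_next, 2.0 * w)                  # never more than double per interval

# The window ramps up while the path is uncongested (rtt == base_rtt)
# and settles once queueing delay builds up (rtt > base_rtt).
w = 100.0
for rtt in (0.050, 0.050, 0.050, 0.055, 0.060):
    w = update_window(w, base_rtt=0.050, rtt=rtt)
    print(round(w, 1))
```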
In ns-2 simulations, the new algorithms have achieved 98% utilization with an equilibrium queue of less than 100 packets (0.08 ms queuing delay). In this project, we will implement and demonstrate these algorithms in the global HEP research and production networks. In addition, we will work both with the standards bodies, to help drive emerging standards, and with Grid software developers, to deploy the algorithms.
The project will leverage the theoretical work on TCP/AQM of Low and Doyle, the leadership and experience of Bunn and Newman in the development and operation of the international network for HEP (and the broader scientific community) over the last 20 years, and the extensive infrastructure of HEP networks to make a quick impact. The project's success will not only directly benefit the HEP community, but will also influence research methods in several fields by enabling the effective use of Grids, globally distributed Terascale computing, and distributed Petascale databases for the first time.
NSF recognizes that transfers of data files of 10^18 bytes (exabytes) are important. This project addresses the issue of how to do this in real operations.
2003 — 2009
Newman, Harvey; Doyle, John (co-PI); Psaltis, Demetri; Low, Steven; Yip, Steven
RI: Wide-Area-Network in a Laboratory @ California Institute of Technology
The development of robust and stable ultrascale networking, at 100 Gbps and higher speeds in the wide area, is critical to support the new generation of ultrascale computing and petabyte to exabyte datasets that promise to drive discoveries in fundamental and applied sciences of the next decade. Continued advances in computing, communication, and storage technologies, combined with the development of national and global Grid systems, hold the promise of providing the required capacities and an effective environment for computing and science. A key challenge to overcome is that some of the current network control and resource sharing algorithms cannot scale to this regime. The goal of the Caltech FAST Project is to develop theories and algorithms for the future ultrascale networks, implement and demonstrate them in state-of-the-art testbeds, and deploy them in communities that have a clear and urgent need today.
A critical component of this effort is an experimental infrastructure on which to develop, test, and demonstrate new algorithms at both high speed and large delay. It is impossible to conduct this research on network simulators or in the high performance wide-area networks (WAN) available today. The proposed project will build such a facility: WAN in Lab.
WAN in Lab will consist of four major building blocks: high speed servers, programmable routers, electronic crossconnects, and WDM equipment including long-haul fibers with associated optical amplifiers, dispersion compensation modules and optical multiplexers and demultiplexers.
This "wind tunnel" of networking has five features that combine to make it a truly unique facility in the world:
1. It is literally a WAN, not an emulation, providing both the high speed and the large distance critical for developing protocols for ultrascale networking.
2. It can be easily configured into logical networks of different topologies, link speeds and delays, and can stay cutting-edge by incorporating the latest advances in optical technologies, routers, or servers as needs arise and funds become available.
3. WAN in Lab will have built-in passive monitoring facilities that will complement end-to-end monitoring tools such as Web100. These are critical for resolving the many performance problems that arise only in live networks but are hard to debug from end-to-end measurements alone.
4. It is a network that can break, and is therefore suitable for risky experiments, possibly including those that involve reprogramming the routers.
5. It will be integrated with global high performance research networks and will be a shared resource for the networking community.
Broader impacts: WAN in Lab will be part of the Federated Emulab led by the University of Utah's Jay Lepreau and will be managed on a uniform software platform. It nicely complements the existing Emulab facility, which focuses on large scale (>200 servers) but low speed (100 Mbps network interface cards). Participation in the Federated Emulab makes WAN in Lab instantly available to the Emulab user community and useful to those projects that need both high capacity and large distance, as FAST does. This facility is expected to serve many more projects than the few that motivated it.
WAN in Lab is a key element in the overall pipeline of research, in-lab development, experimental networks for field trials, and production networks. We are engaged in this process on an ongoing basis to carry the next generation(s) of protocols into production, in the service of science and engineering. We have been working with the HENP community and their partners to achieve the transition to production. We are partnering closely with SURFnet, Starlight and, in the future, UKlight to provide a global 10 Gbps UltraLight testbed. These activities and WAN in Lab complement and leverage each other.
2003 — 2008
Schulman, Leonard (co-PI); Murray, Richard; Effros, Michelle (co-PI); Low, Steven; Hassibi, Babak (co-PI)
ITR: Information Dynamics For Networked Feedback Systems @ California Institute of Technology
This project is developing a new framework for investigating the dynamics of information in complex, interconnected systems. The key technical thrusts of the project are: (1) real-time information theory, (2) robust control of networks, (3) packet-based control theory, and (4) computational complexity of network systems. Each of these thrusts explores aspects of information systems that must interact with the real world in a manner that requires careful control of the timing of the computation and the evolution of the information state of the system. While diverse in application, these thrusts share a common intellectual core that integrates computer science, control, and communications.
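A toy example of the kind of question real-time information theory and packet-based control address (illustrative plant, quantizer and rates; not the project's results): stabilizing an unstable system when each measurement must be squeezed into a finite number of bits per step. The data-rate theorem says a scalar plant with pole a needs strictly more than log2(a) bits per step; the run below contrasts a rate at that limit with a comfortable one.

```python
import numpy as np

# Toy illustration of finite-rate (packet-based) feedback; not the project's results.
# The plant, noise level, quantizer range and rates are illustrative. With too few
# bits per step the loop need not remain bounded (data-rate theorem: need > log2(a)).

rng = np.random.default_rng(3)
a = 2.0                  # unstable open-loop pole
L = 10.0                 # quantizer range [-L, L]

def quantize(x, bits):
    levels = 2 ** bits
    cell = 2 * L / levels
    idx = np.clip(np.floor((x + L) / cell), 0, levels - 1)
    return -L + (idx + 0.5) * cell             # midpoint of the quantization cell

for bits in (1, 4):
    x, worst = 0.0, 0.0
    for _ in range(200):
        x_hat = quantize(x, bits)              # all the controller learns this step
        u = -a * x_hat                         # dead-beat control on the estimate
        x = a * x + u + rng.normal(0, 0.1)     # x_next = a*(x - x_hat) + noise
        worst = max(worst, abs(x))
    print(f"{bits}-bit feedback: worst-case |x| = {worst:.2f}")
```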
The results of the proposed research are being evaluated on two testbeds already at Caltech. The first is the Multi-Vehicle Wireless Testbed, which provides a distributed environment for control of 8-10 vehicles performing cooperative tasks in a real-time environment. The second is the WAN in Lab, a wide area network consisting of high speed servers, programmable routers, electronic crossconnects, and long haul fibers with associated optical amplifiers, dispersion compensation modules and optical multiplexers.
The project is also developing elements of a curriculum that will train students in information systems blending communications, computation, and control. This includes integration of the research framework into a recently created course at Caltech on the principles of feedback and control, CDS 101, as well as the development of a second course, IST 201, aimed at bringing together faculty and students interested in working on problems at the boundaries of these traditional disciplines.
2004 — 2007
Low, Steven; Chiang, Mung (co-PI)
CRCD/EI: Control and Optimization of Communication Systems @ California Institute of Technology
Over the last decade, an exciting paradigm has emerged that applies powerful control and optimization theories to the design and analysis of communication networks, with intellectual and practical impact well beyond the established frameworks of the 1980s. This trend has been driven both by new needs from the communications and networking fields and by recent advances in control and optimization theories. There is already a substantial body of research results and applications that needs to be transferred to university curricula and to industry. Moreover, new results are being created at a rapid pace, intensifying the need for students and engineers to understand and apply these new insights and techniques. The current project develops a new course on control and optimization of communication systems, and associated courseware, that teaches students to approach engineering problems as dynamic systems to be controlled and linear/nonlinear objectives to be optimized, and equips them with the ability to do so. It covers major communications and networking advances in this area and provides students with hands-on experience on practical problems through numerical and experimental projects. This interdisciplinary course has a flexible modular structure, and the mathematical background needed in each module is presented using a 'just-in-time' approach. The project develops instructional materials that can be used in other institutions and in industry. It vigorously pursues various paths of dissemination. It will transfer recent research advances into the mainstream curricula of CS and EE, and facilitate the spread of knowledge from academia to industry.
2005 — 2008
Low, Steven
NeTS-NR: Counter-Intuitive Behavior in General Networks @ California Institute of Technology
Proposal Number: 0435520
PI: Steven Low
Institution: California Institute of Technology
Title: Counter-Intuitive Behavior in General Networks
Abstract
It is well-known that counter-intuitive behavior can arise in a distributed system where agents optimize their own objectives. This project studies the intricate interactions in TCP networks. The emphasis is on general network topologies with multiple links and heterogeneous flows and on rigorous mathematical modeling and analysis. The project develops mathematical theory to understand the behavior of large scale networks, verifies these theoretical insights in simulations and real networks, and exploits them in practical design and operation of networks and protocols.
Preliminary results illustrate that interesting and counter-intuitive behavior arises only in a network setting where flows interact through multiple shared links in surprising ways. For example, the researchers have shown that increasing link capacity can reduce the aggregate throughput in a TCP network, and that, contrary to conventional wisdom, a fair bandwidth allocation is not necessarily inefficient. These results demonstrate that the formal approach taken in the project is indeed necessary and rewarding.
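Such claims can be probed numerically. Below is a minimal sketch (illustrative topology, capacities and fairness parameter; not the networks analyzed in the project) that solves an alpha-fair bandwidth allocation on a small multi-link network and reports aggregate throughput, so that one can vary capacities or the fairness parameter and observe the resulting interactions.

```python
import cvxpy as cp
import numpy as np

# Sketch for probing throughput/fairness behavior on a toy network (illustrative
# topology, capacities and fairness parameter; not the project's examples):
# solve the alpha-fair bandwidth allocation and report aggregate throughput.

def alpha_fair_allocation(R, c, alpha):
    x = cp.Variable(R.shape[1], nonneg=True)
    if alpha == 1.0:
        utility = cp.sum(cp.log(x))                             # proportional fairness
    else:
        utility = cp.sum(cp.power(x, 1 - alpha)) / (1 - alpha)  # general alpha-fairness
    cp.Problem(cp.Maximize(utility), [R @ x <= c]).solve()
    return x.value

R = np.array([[1, 1, 0],      # a long flow (source 1) shares both links
              [0, 1, 1]], float)
for cap in ([10.0, 8.0], [20.0, 8.0]):                          # vary one link's capacity
    x = alpha_fair_allocation(R, np.array(cap), alpha=1.0)
    print(cap, "rates:", np.round(x, 2), "total:", round(float(x.sum()), 2))
```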
The results of this project will help us better understand the behavior of large-scale networks such as the Internet and, in particular, help characterize paradoxical phenomena and derive guidelines to avoid harmful behavior in TCP networks.
Together with Professor Mung Chiang of Princeton EE, the PI is developing a new course on "Control and optimization of communication networks" at the introductory graduate and advanced undergraduate level. The results produced in the project will be directly incorporated into the new course. We will also leverage, and contribute to, the education and outreach activities of this initiative.
2006 — 2007
Low, Steven; Doyle, John (co-PI)
Collaborative Research: NeTS-NBD: Optimization and Games in Inter-Domain Routing @ California Institute of Technology
0520349 0520318
Inter-domain routing can be viewed as a means to implement economic relationships among competing ISPs. Examination of the technical details of the BGP routing protocol alone is insufficient to understand the current behavior, or predict the future requirements, of global connectivity without taking into account the dynamics of the economic relationships it implements.
A full account of the shortcomings of BGP, and of proposals to modify it, must be developed rigorously with a clear understanding of the kinds of economic relationships and service models that it can and cannot implement well. The goal of this project is to develop a theoretical framework, together with experimental capability, to understand, predict, and design the interplay between economics and the technologies that implement Internet connectivity. The principal investigators' (PIs) unique angle is an optimization perspective on inter-domain routing, where profit maximization, physical connectivity, and AS pricing interact with routing decisions, peering structure, resource constraints, and the traffic matrix. The PIs will develop mathematical models that capture the interactions between routing and economics, characterize the basic structures of BGP equilibrium and dynamics, discover and verify these properties in the operational Internet, and derive practical design and operation guidelines and algorithms.
Broader Impact: Through this research, the PIs will educate a new generation of network researchers with not only strong practical skills, but also the ability and the habit of applying rigorous mathematical techniques to solve a wide range of engineering problems. These researchers can make a unique and broad impact on both academia and industry.
2009 — 2013
Low, Steven; Murray, Richard (co-PI); Chandy, K. Mani; Doyle, John (co-PI); Candes, Emmanuel
NetSE: Large: A Theory of Network Architecture @ California Institute of Technology
This project is developing a theoretical foundation for the design of network architecture, which is essential to understanding highly evolved, organized, and complex networks, inspired by and with application to technological, biological, ecological, and social networks, and with strong connections to real-world data. Architecture involves the most universal, high-level, and persistent elements of organization, usually defined in terms of protocols, the rules that facilitate interaction between the diverse components of a network. Network technology promises to provide unprecedented levels of performance, efficiency, sustainability, and robustness in almost all technological, natural, and social systems. The "robust yet fragile" (RYF) feature of complex systems is ubiquitous, and a theoretical framework to manage the complexity/fragility spirals of our future infrastructures is critical.
The intellectual merit of this project is both theoretical and practical. A theory of architecture is crucial for the design of future networks and is at the heart of sustainability. Comparison of architectures from biology, ecology, and technology has identified a variety of common characteristic organizational structures. These observations will form the basis for a mathematical theory of architecture.
The broader impact of this project is through the application of network architecture design to multiple areas of engineering and through the development of a unified approach to teaching systems and complexity. The project involves a diverse collection of researchers, including women and underrepresented minorities. Research results are disseminated not only through domain-specific journals, but also in the broad-interest high-impact literature (e.g. Science, Nature, PNAS).
2013 — 2014
Low, Steven; Dhople, Sairaj; Giannakis, Georgios (co-PI)
Workshop On Control, Computing, and Signal Processing Challenges in Future Power Systems, to Be Held in Arlington, VA in November 2013 @ University of Minnesota-Twin Cities
Transformative research activities in the last decade focused on sustainable, robust, and reliable power systems have presented several paradigms such as smart grids, self-healing grids, and distributed generation. This workshop seeks to address the Control, Computing, and Signal Processing Challenges in Future Power Systems within the context of developing the tools for a system-level understanding of the impact of these emerging paradigms. It will promote cross fertilization of ideas with the aim of developing the underlying foundational science to realize next-generation electric generation, transmission, and distribution systems.
Intellectual Merit:
The workshop will bring together prominent researchers in relevant disciplines to address key questions relevant to future power systems, including: 1) What emerging control paradigms will improve the management of renewables and the charging of electric vehicles, with built-in resilience against failures and malicious attacks on critical infrastructure? 2) What is the role of optimization, big data analytics, and grid informatics in revitalizing power delivery subsystems and improving consumer satisfaction? 3) How can advances in statistical signal processing and machine learning be leveraged to improve situational awareness and system reliability? 4) How can we educate next-generation engineers about system-level challenges in future power systems?
Broader Impacts:
This workshop will engage several engineering disciplines to map out the foundational science for research in future power systems. The workshop will also emphasize the related educational aspects, primarily by leveraging the long tradition of excellence in power and energy systems education that the University of Minnesota has established through the NSF-supported Consortium of Universities for Sustainable Power.
2013 — 2017
Low, Steven; Wierman, Adam
CSR: Small: Collaborative Research: Data Center Demand Response: Coordinating the Cloud and the Smart Grid @ California Institute of Technology
Data centers have become both significant consumers of electricity and substantial sources of greenhouse gas emissions, and so a focus on energy-efficient computing has emerged. Until now, the most common approach for improving the sustainability of data centers has been a "local" one, e.g., installing rooftop PV panels. This project takes a different, more "global" approach: it allows data centers to contribute towards improving the sustainability of the electricity grid as a whole, in order to meet data centers' social responsibility as major energy consumers. Interestingly, engaging in such global sustainability efforts will, in turn, provide data centers with new opportunities to gain financial benefits.
This project will coordinate data centers with the smart grid through programs such as demand response, which allow the utilities to signal consumers to reduce or increase consumption as needed in order to stabilize the grid and tackle the unpredictability of the renewable energy resources. New resource management algorithms for data centers will be developed to facilitate their participation in demand response and other electricity market programs; and the responsiveness that data centers can provide the smart grid through such participation will be quantified and optimized.
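A toy version of such a resource-management decision (the workloads, costs and DR cap are illustrative, not the project's algorithms): choose the number of active servers to trade off energy cost against a performance penalty while respecting a power cap requested by the utility during a demand-response event.

```python
import numpy as np

# Toy data center provisioning under a demand-response cap (illustrative numbers,
# not the project's algorithms): choose the number of active servers to balance
# energy cost against a queueing-delay penalty, with power limited by a DR signal.

workload = 800.0          # request arrival rate (requests/s)
mu = 10.0                 # service rate of one server (requests/s)
p_server = 0.2            # power drawn by one active server (kW)
price = 0.15              # electricity price ($/kWh)
delay_weight = 5.0        # penalty weight on average delay
power_cap = 25.0          # cap requested by the utility during a DR event (kW)

def cost(m):
    if m * mu <= workload:                 # need enough capacity to stay stable
        return np.inf
    delay = 1.0 / (m * mu - workload)      # M/M/1-style mean delay proxy
    return price * p_server * m + delay_weight * delay

feasible = [m for m in range(1, 201) if m * p_server <= power_cap]
best = min(feasible, key=cost)
print("servers:", best, "power (kW):", best * p_server, "cost:", round(cost(best), 3))
```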
This proposal is truly interdisciplinary as it requires a deep understanding not only of data centers and cloud computing but also of electricity markets and the smart grid. In particular, the results will help data centers in terms of both their choice of which electricity market programs to participate in and how to participate in such programs. Additionally, this project will help utility companies design electricity market programs that encourage maximum data center responsiveness to better operate the grid and to reduce the price of electricity.
2015 — 2018
Low, Steven; Wierman, Adam
CPS: Synergy: Collaborative Research: Beyond Stability: Performance, Efficiency and Disturbance Management For Smart Infrastructure Systems @ California Institute of Technology
Infrastructure networks are the foundation of the modern world. Their continued reliable and efficient function without exhausting finite natural resources is critical to the security, continued growth and technological advancement of the United States. Currently these systems are in a state of rapid flux due to a collision of trends such as growing populations, expanding integration of information technology, and increasing motivation to adopt sustainable practices. These trends beget both exciting potential benefits and dangerous challenges. Added sensing, communication, and computational capabilities hold the promise of increased reliability, efficiency and sustainability from "smart" infrastructure systems. At the same time, new technologies such as renewable energy resources in power systems, autonomous vehicles, and software defined communication networks, are testing the limits of current operational and market policies. The rapidly changing suite of system components can cause new, unforeseen interactions that can lead to instability, performance deterioration, or catastrophic failures. Achieving the full benefits of these systems will require a shift from the existing focus on approaches that analyze each aspect of interest in isolation, to a more holistic view that encompasses all of the relevant factors such as stability, robustness, performance and efficiency, and takes into account the presence of human participants.
This project provides a research roadmap to construct analysis, design and control tools that ensure the seamless integration of computational algorithms, physical components and human interactions in next generation infrastructure systems. Although there has been a great deal of research on stability questions in large scale distributed systems, there has been little effort directed toward questions of performance, robustness and efficiency in these systems, especially those with heterogeneous components and human participants. This research employs coupled oscillator systems as a common modeling framework to (i) characterize stability and performance of infrastructure systems, and (ii) develop distributed controllers that guarantee performance, efficiency and robustness by isolating disturbances and optimizing performance objectives. Practical solutions require that the theory be tightly integrated with the economic mechanisms necessary to incentivize users to enhance system stability, efficiency and reliability; therefore the work will also include the design of economic controls. In order to ground the mathematical foundations, theory and algorithms described above, the results will be applied to three target infrastructure networks where coupled oscillator models have played a foundational role in design and control: power, communication, and transportation systems. This approach allows the development of cross-cutting, fundamental principles that can be applied across problem specific boundaries and ensures that the research makes an impact on these specific infrastructure networks. This project will also incorporate concepts into existing undergraduate and graduate courses.
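Since coupled oscillators are the common modeling framework named above, a minimal simulation sketch (illustrative ring topology and parameters) shows the kind of synchronization behavior on which these stability and performance questions are posed.

```python
import numpy as np

# Minimal coupled-oscillator (Kuramoto-type) simulation, the modeling framework
# named above; the ring topology, natural frequencies, coupling strength and
# step size are all illustrative.

rng = np.random.default_rng(0)
n, K, dt, steps = 10, 2.0, 0.01, 5000
omega = rng.normal(0.0, 0.5, n)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
A = np.zeros((n, n))                     # ring coupling graph
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

for _ in range(steps):
    dtheta = omega + K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * dtheta

r = abs(np.exp(1j * theta).mean())       # order parameter: 1 = fully synchronized
print("synchronization order parameter:", round(r, 3))
```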
2016 — 2019
Low, Steven
Design, Stability and Optimality of Cyber-Networks For Frequency Regulation in the Smart Grid @ California Institute of Technology
This research project aims at developing methods to help integrate clean energy into the electric power grid. We are at the cusp of a historic transformation of our energy system into a more sustainable form in the coming decades. One of the key technical challenges is balancing supply and demand at all times at all points of the grid. Traditionally, supply-demand balance is maintained by adapting generation to fluctuating demand. This will become ineffective in the future because sources of renewable generation of electricity, such as wind and solar power, are random and uncontrollable. On the other hand, there will be more and more distributed energy resources, such as electric vehicles, smart buildings, smart appliances, storage devices, and other power electronic controllers, that not only consume, but may also sense, compute, communicate and actuate. We will design methods for ubiquitous, continuous load-side frequency regulation to supplement the current generation-side control, prove the stability and optimality of these methods in a large network of distributed energy resources, and study their interaction with generator-side control.
A cyberphysical network such as a smart grid consists of a physical network governed by its own laws of physics and a cyber-network that senses, communicates, computes, and actuates on the physical network. Our starting point is the observation that the topology of the cyber-network may be different from that of the physical network. For instance, not every node in the physical network may have sensors or controllers, and this constrains the topology of the cyber-network. On the other hand, two nodes that are not directly connected in the physical network may be able to communicate, e.g., through a wireless channel, and this provides an extra degree of freedom for the design of the cyber-network. We will develop a theory to clarify the impact of the cyber-network on the stability and optimality of the physical network, design effective cyber-networks and frequency regulation algorithms based on the theoretical insights, and validate these designs through simulations.
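As a crude illustration of the load-side idea (a sketch under strong simplifying assumptions, not the project's design), controllable loads can adjust consumption in proportion to the locally measured frequency deviation; in a single-area swing-equation model this arrests the frequency drop after a loss of generation.

```python
# Crude single-area illustration of load-side frequency regulation (a sketch under
# simplifying assumptions, not the project's design): after a sudden generation
# deficit, controllable loads shed consumption in proportion to the locally measured
# frequency deviation. All numbers are illustrative and the network is collapsed
# to one aggregate swing equation.

M, D = 10.0, 1.0              # aggregate inertia and damping
k_load = 4.0                  # load-control gain (power shed per unit frequency drop)
gen_loss = 1.0                # sudden generation deficit (p.u.)
dt, steps = 0.01, 3000

freq_dev = 0.0                # frequency deviation from nominal
for _ in range(steps):
    load_response = k_load * freq_dev            # loads reduce demand when freq drops
    imbalance = -gen_loss - D * freq_dev - load_response
    freq_dev += dt * imbalance / M

print("steady-state frequency deviation:", round(freq_dev, 3))
print("expected:", round(-gen_loss / (D + k_load), 3))   # = -deltaP / (D + k_load)
```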
2016 — 2017
Low, Steven
PFI:AIR-TT: Optimal Adaptive Charging System @ California Institute of Technology
This PFI: AIR Technology Translation project develops software for adaptive electric vehicle (EV) charging networks and transitions it to the marketplace. We are at the cusp of a historic transformation of our energy system into a more sustainable form in the coming decades. Electrification of our transportation system will be an important component because, today, vehicles consume more than a quarter of our energy and emit more than a quarter of our energy-related carbon dioxide (CO2). Electrification will not only greatly reduce CO2 emission, but EVs can also be critical resources to help integrate renewable sources, such as wind and solar power, into our electric grid. One of the key enablers to mass EV adoption is the availability of smart charging networks.
This project will design a set of novel and sophisticated algorithms that optimize a network of EV chargers, implement them in software, and pilot them in a Caltech garage where the researchers have already installed a network of programmable EV chargers. It will serve as a prototype that validates the technology and business potential of next-generation adaptive charging network (ACN). Compared with state of the art in the EV charging industry, ACN will enable massive deployment of smart chargers and provide the same charging capacity at a fraction of required infrastructure costs.
This project addresses the following technology gaps as it translates from research discovery toward commercial application. At many workplaces in top EV cities in the US, there is a severe shortage of chargers relative to EVs; e.g., there is one charger for every 2-5 EVs. In the future, this ratio should be closer to 1:1. The bottleneck to a large-scale charging facility is, however, not the cost of electricity or chargers, but the limited capacity of the electricity distribution system, as well as, in city centers, the real estate. The leading chargers in the current marketplace charge at their peak rates whenever EVs are plugged in. They cannot be deployed at scale without a prohibitively expensive upgrade of the electricity distribution system. The technologies to be developed in this project will optimally schedule the charging of a network of adaptive chargers so as to satisfy the energy requirements of all EVs within their deadlines without exceeding the capacity of the electricity distribution system whenever possible, and to optimally and fairly allocate the available capacity among competing EVs otherwise. The ACN therefore maximally utilizes the most expensive resources in a charging ecosystem to provide a target charging capacity at a much lower infrastructure cost, creating a compelling value proposition.
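In simplified form, the scheduling problem described above is a small linear program (illustrative sessions, horizon and limits; not the deployed ACN algorithm): maximize delivered energy subject to per-charger rate limits, plug-in windows, requested energy, and the shared capacity of the distribution infrastructure.

```python
import cvxpy as cp
import numpy as np

# Simplified form of the scheduling problem described above (illustrative sessions,
# horizon and limits; not the deployed ACN algorithm): choose charging rates r[i, t]
# to maximize delivered energy subject to per-charger limits, plug-in windows,
# requested energy, and the shared capacity of the electricity distribution system.

T = 12                                      # time slots (e.g. 15 minutes each)
arrival = np.array([0, 2, 4])               # slot in which each EV plugs in
depart = np.array([8, 10, 12])              # slot by which each EV must be finished
demand = np.array([20.0, 15.0, 25.0])       # requested energy per EV (kW-slot units)
max_rate = 6.6                              # per-charger limit (kW)
capacity = 12.0                             # shared infrastructure limit (kW)

r = cp.Variable((3, T), nonneg=True)
constraints = [r <= max_rate,
               cp.sum(r, axis=0) <= capacity,   # never exceed the shared feeder
               cp.sum(r, axis=1) <= demand]     # never deliver more than requested
for i in range(3):                              # charge only while plugged in
    if arrival[i] > 0:
        constraints.append(r[i, :arrival[i]] == 0)
    if depart[i] < T:
        constraints.append(r[i, depart[i]:] == 0)

cp.Problem(cp.Maximize(cp.sum(r)), constraints).solve()
print("energy delivered per EV:", np.round(cp.sum(r, axis=1).value, 1), "of", demand)
```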
The project will apply tools from optimization theory, control and dynamical systems, and algorithm design. The focus is to develop optimization software that will be ready for commercialization at the end of the project. The project will involve undergraduate and graduate students. In addition to research and software development, the project participants will be exposed to entrepreneurship and technology transfer.
2016 — 2020
Low, Steven; Wierman, Adam (co-PI); Yue, Yisong (co-PI); Chandrasekaran, Venkat
AitF: Algorithmic Challenges in Smart Grids: Control, Optimization & Learning @ California Institute of Technology
This project will tackle the algorithmic challenges underlying the transformation of the power grid. Society is at the cusp of a historic transformation of our energy systems, driven by sustainability. Daunting challenges arise in the stable, reliable, secure, and efficient operation of the future grid, which will be much more distributed, dynamic, and open. This project will push the boundaries of control, optimization, and learning to develop practical solutions to some of these difficulties. It will advance the state of the art in both the science of general cyber-physical systems and its application to smart grids. It will support education and diversity through a tight integration of the research with educational courses and the training of female and minority students.
The theory and algorithms to be developed in this project will contribute directly towards the historic transformation of energy systems to a more sustainable future. Specifically, the project will focus on three core algorithmic challenges facing cyber-physical networks such as a smart grid: control, optimization, and learning. First, this project will develop an optimization-based approach to the design of feedback controllers for cyber-physical systems so that the closed-loop system is asymptotically stable, and every equilibrium point of the closed-loop system is an optimal solution of a given optimization problem. Second, this project will develop a new hierarchy of convex relaxations for exponential programs based on relative entropy optimization. This will immediately yield a fundamentally new approach for solving Optimal Power Flow (OPF) problems, which underlie numerous power system applications and are non-convex and NP-hard in general. Third, this project will develop methods to learn a policy that is near-optimal efficiently, despite not having access to the objective function at run time. This will allow power systems to "learn to optimize" in real time, addressing one of the biggest challenges in power systems -- that data about the system is too expensive or impossible to obtain in real time.
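For the first of these challenges, a canonical construction (a generic sketch, not the controllers to be developed in this project) is primal-dual gradient dynamics: a feedback law whose equilibria are exactly the KKT points of a chosen convex program.

```python
import numpy as np

# Generic sketch (not the controllers to be developed in the project) of the idea
# that closed-loop equilibria coincide with optimizers: primal-dual gradient
# dynamics for  minimize f(x) s.t. A x = b,  whose stationary points are the KKT
# points of the program. Problem data below are illustrative.

A = np.array([[1.0, 1.0]])
b = np.array([4.0])
Q = np.diag([1.0, 2.0])                 # f(x) = 0.5 x^T Q x

x = np.zeros(2)
lam = np.zeros(1)
dt = 0.01
for _ in range(20000):
    x = x - dt * (Q @ x + A.T @ lam)    # primal descent on the Lagrangian
    lam = lam + dt * (A @ x - b)        # dual ascent on the constraint violation

print("x:", np.round(x, 3))             # analytic optimum: x = (8/3, 4/3)
print("constraint residual:", round(float((A @ x - b)[0]), 6))
```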
2017 — 2020
Low, Steven
CPS: Medium: Collaborative Research: Demand Response & Workload Management For Data Centers With Increased Renewable Penetration @ California Institute of Technology
The confluence of two powerful global trends, (1) the rapid growth of cloud computing and data centers with skyrocketing energy consumption, and (2) the accelerating penetration of renewable energy sources, is creating both severe challenges and tremendous opportunities. Fast-growing renewable generation poses great operational challenges because it causes large, frequent, and random fluctuations in supply. Data centers, on the other hand, offer large flexible loads in the grid. Leveraging this flexibility, this project will develop fundamental theories and algorithms for sustainable data centers with the dual goal of improving data center energy efficiency and accelerating the integration of renewables in the grid via data center demand response (DR) and workload management. Specifically, the research findings will shed light on how data centers can participate in demand response while maintaining their performance, which will help data centers decide how to participate in power market programs. Further, the success of data center demand response will help increase renewable energy integration and reduce the carbon footprint of data centers, contributing to global sustainability. The PIs will leverage fruitful collaboration to eventually bring the research to bear on ongoing industry standardization and development efforts. The PIs teach courses spanning networks, games, smart grid and optimization, and are strongly committed to promoting diversity by providing research opportunities to underrepresented students.
Built on the PIs expertise on data centers and the smart grid, this project takes an interdisciplinary approach to develop fundamental theories and algorithms for sustainable data centers. The research tasks are organized under two well-coordinated thrusts, namely agile data center DR and adaptive workload management. The strategies and decisions of data center DR will be made based on the workload management algorithms that balance quality of service and energy efficiency and determine the supply functions. The workload management algorithms will optimize quality of service under the electric load constraints imposed by DR accordingly. This project will make three unique contributions: (1) new market programs with strategic participation of data centers in DR, instead of passive price takers, (2) fundamental understanding of the impacts of power network constraints on data center DR and new distributed algorithms for solving optimal power flow with stochastic renewable supplies, and (3) high-performance dynamic server provisioning and load balancing algorithms for large scale data centers under time-varying and stochastic electric load constraints and on-site renewable generation.
2019 — 2022
Low, Steven
EPCN: Learning Power Grids From Limited Measurements: Fundamental Limits and Practical Algorithms @ California Institute of Technology
This project develops methods to make the management of future power grids more efficient, secure, and robust. The development of an intellectual basis and practical algorithms for future energy systems is an important challenge and a goal of this project. Specifically, network topology and line parameters are critical information for the management of power systems. Even though errors in such information can significantly impact system operation, there is relatively little work on identifying them, especially when measurements are limited both temporally and spatially. This problem is even more severe in medium-voltage and low-voltage distribution grids, not only because of the lack of monitoring infrastructure today, but also because topology may change more frequently in response to changes in load or solar generation. Our goal is to develop a theory and algorithms for topology and line-parameter identification.
We will focus on cases where measurements are available only at limited locations or/and for a limited time period. These settings are difficult but realistic: e.g., most distribution systems today have measurements from the substations where a distribution grid interfaces with the bulk transmission grid, and from smart meters at end users, but not many measurements in between. Future applications may have to make identification decisions in near real-time based on limited numbers of samples. The proposed research consists of three thrusts. Thrust 1 (Learning with limited spatial measurements) focuses on the theory and algorithms when not all network nodes are observable, but sufficient samples can be collected over time from the observable nodes before an estimate has to be made. Thrust 2 (Learning with limited temporal measurements) focuses on the case where all network nodes are observable, but only a limited number of samples are available for identification. Thrust 3 (Integrated learning with limited measurements) integrates the theory and algorithms developed in Thrusts 1 and 2 into an overall identification system that is applicable where measurements are limited both spatially and temporally.
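A stripped-down version of the identification task (illustrative 4-bus network, DC/linearized model; not the project's estimators): with synchronized injection and voltage-angle samples at every bus, the susceptance-weighted Laplacian, and hence the topology and line parameters, can be recovered by least squares from P ≈ Bθ.

```python
import numpy as np

# Stripped-down illustration of grid identification from measurements (not the
# project's estimators): in a DC/linearized model, injections satisfy P = B @ theta
# with B the susceptance-weighted Laplacian. Given samples of (theta, P) at all
# buses, B (and hence topology and line parameters) follows by least squares.
# The 4-bus network and noise level are illustrative.

rng = np.random.default_rng(1)
lines = [(0, 1, 5.0), (1, 2, 4.0), (2, 3, 3.0), (0, 3, 2.0)]   # (from, to, susceptance)
n = 4
B_true = np.zeros((n, n))
for i, j, bij in lines:
    B_true[i, i] += bij; B_true[j, j] += bij
    B_true[i, j] -= bij; B_true[j, i] -= bij

samples = 200
theta = rng.normal(0, 0.1, (samples, n))                    # measured voltage angles
P = theta @ B_true.T + rng.normal(0, 0.01, (samples, n))    # noisy injections

B_est, *_ = np.linalg.lstsq(theta, P, rcond=None)           # solves theta @ B^T ~= P
B_est = B_est.T
print("max entry error:", round(np.abs(B_est - B_true).max(), 3))
print("recovered line 0-1 susceptance:", round(-B_est[0, 1], 2))
```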
The methods developed in the proposal are applicable to other network systems (social, communications, transportation, financial). The project integrates research with education including curriculum development and linking to entrepreneurship.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022
Low, Steven
CPS: TTP Option: Small: Adaptive Charging Network Research Portal @ California Institute of Technology
This project is motivated by challenges in future energy cyber-physical systems (CPS). It will build a one-of-a-kind research facility that will enable new CPS research on smart grid and electric vehicles. It will advance the state of the art in the clean-energy space and facilitate the development and adoption of these technologies. The result of this project will be integrated with the power system analysis course at California Institute of Technology (Caltech). Preliminary work on this project has already been used at Caltech and other places around the world for classroom projects as well as smart grid research. This project will equip a new generation of students with the passion and the interdisciplinary knowledge for tackling one of the biggest challenges of our time. A seed community of contributors and users has already formed organically. This project will grow this open-source community.
The PI's lab has been working with a Caltech startup and the Caltech Facilities department to design, deploy, and manage an Adaptive Charging Network (ACN) for electric vehicles at one of Caltech's parking garages. The primary goal of this project is to develop a suite of open-source software and hardware that will make the Caltech ACN not just a production facility, but also a live testbed for CPS researchers at, and outside, Caltech. A secondary goal is to use the ACN Research Portal to study the joint optimization of distributed energy resources to provide both charging service to electric vehicles and energy services to electricity markets and utility companies. The key idea is to treat the production ACN as a physical layer and create an intermediate layer, called the ACN Research Portal, that allows safe, secure and easy access to charging data, as well as the push of charging commands, for real-world testing of new algorithms on the ACN. It will make large, high-resolution charging data, a data-driven simulator and an experimental platform publicly available for the first time. This will allow CPS researchers to learn detailed statistical models, perform data analysis of real-world charging behavior, and easily and fairly compare control algorithms on a common platform driven by detailed real data. This is currently impossible.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Low, Steven; Wierman, Adam
Collaborative Research: CPS: Medium: Enabling DER Integration Via Redesign of Information Flows @ California Institute of Technology
This NSF CPS project aims to redesign the information structure utilized by system operators in today's electricity markets to accommodate technological advances in energy generation and consumption. The project will bring transformative change to power systems by incentivizing and facilitating the integration of non-conventional energy resources via a principled design of bidding, aggregation, and market mechanisms. Such integration will provide operators with the necessary flexibility to operate a network with high levels of renewable penetration. This will be achieved by a comprehensive bottom-up approach that will first identify the intrinsic cost of utilizing novel renewable resources and accommodate the operational ecosystem accordingly. The intellectual merits of the project include novel theories and algorithms for operating a vast number of distributed resources and testbed implementations of markets and controls. The project's broader impacts include K-12 and undergraduate programs, including in-class and extra-curricular STEM activities at Johns Hopkins and the Caltech WAVE summer research program.
Introducing distributed energy resources (DERs) at a large scale requires rethinking power grid operations to account for increased uncertainty and new operational constraints. The proposed research undertakes this task by overhauling the information structure that markets and grid controls utilize. We seek to characterize and shape how information is exchanged and used to manage the grid to improve efficiency, stability, and incentive alignment. The research is organized into three thrusts. Thrust 1 emphasizes the role of information in coordination. It seeks to characterize DER costs and constraints, designing bidding strategies tailored to convey information about the atypical characteristics of DER costs. Thrust 2 aims to develop aggregation strategies that efficiently manage resources by accounting for their cost and constraints, integrating DERs via an aggregate bid that protects sensitive user information and is robust to market manipulation. Finally, Thrust 3 characterizes the overall impact of DERs on operations. We will examine how user incentives that span across markets implicitly couple market outcomes and develop design mechanisms to mitigate inter-market price manipulation. We will also design pricing schemes that provide efficient DER allocation while preserving real-time operational constraints such as frequency regulation.
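A bare-bones sketch of the aggregation idea in Thrust 2 (illustrative devices and numbers, not the project's mechanisms): when each DER's flexibility is an interval of feasible power, the aggregate bid is the sum of the intervals, and a cleared aggregate setpoint can be disaggregated back to devices in proportion to their flexibility.

```python
# Bare-bones aggregation/disaggregation sketch (illustrative devices, not the
# project's mechanisms): each DER reports an interval of feasible power; the
# aggregate bid is the Minkowski sum of the intervals, and a cleared aggregate
# setpoint is split back to devices in proportion to their flexibility.

devices = {                      # name: (min_kw, max_kw)
    "ev_charger": (0.0, 6.6),
    "battery":    (-5.0, 5.0),   # negative = discharging
    "hvac":       (1.0, 4.0),
}

agg_min = sum(lo for lo, _ in devices.values())
agg_max = sum(hi for _, hi in devices.values())
print("aggregate feasible range (kW):", (agg_min, agg_max))

def disaggregate(setpoint_kw):
    """Split a cleared aggregate setpoint proportionally to each device's range."""
    assert agg_min <= setpoint_kw <= agg_max, "setpoint outside the offered range"
    frac = (setpoint_kw - agg_min) / (agg_max - agg_min)
    return {name: lo + frac * (hi - lo) for name, (lo, hi) in devices.items()}

print(disaggregate(8.0))         # e.g. the market clears the aggregate at 8 kW
```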
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Low, Steven; Wierman, Adam
Collaborative Research: CNS Core: Medium: Dynamic Data-Driven Systems - Theory and Applications @ California Institute of Technology
Modern computer systems must be continually optimized in a data-driven manner to maintain performance, even as their deployment and workload environments change. This holds for traditional systems like content delivery networks and emerging architectures such as edge/cloud systems. The design of dynamic data-driven systems requires both theoretical advancements and new systems architectures. A key challenge is a tradeoff between optimality, i.e., choosing an optimal deployment for the current environment in terms of performance and/or cost, and smoothness, i.e., ensuring that the deployment changes are not too costly at any point. This project seeks to develop tools at the intersection of machine learning and optimization that enable systems to balance between optimality and smoothness. Further, this project deploys and empirically evaluates these tools in the context of 360 video streaming as a representative case study.
Smoothness is not a traditional system performance measure, and so it is typically enforced only in ad hoc ways by existing systems. However, it is a crucial consideration for systems that seek to continuously optimize their configuration since the switching costs associated with changing configurations can be significant. Managing the tradeoff between optimality and smoothness in a rigorous fashion can lead to dramatic improvements; however, it is challenging since it requires a robust data-driven design that can determine whether it is worth incurring a switching cost in the present, without knowledge of the future environment. This project develops analytic tools that enable the design of algorithms for dynamic systems that balance optimality and smoothness through the integration of data-driven and optimization approaches. There are also planned test-bed deployment activities for 360 video streaming.
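A small sketch of that tradeoff (illustrative cost sequence, not the project's algorithms): at each step the system minimizes its current hitting cost plus a penalty on moving away from its previous configuration, with the penalty weight trading responsiveness for smoothness.

```python
import numpy as np

# Small sketch of the optimality-vs-smoothness tradeoff (illustrative costs, not
# the project's algorithms): at each step the system picks a configuration that
# minimizes the current "hitting" cost plus a switching penalty on moving away
# from its previous configuration. Larger beta means smoother but less responsive.

rng = np.random.default_rng(2)
targets = np.cumsum(rng.normal(0, 1.0, 50))      # drifting per-step cost minimizers

def run(beta):
    x, hit, switch = 0.0, 0.0, 0.0
    for v in targets:
        x_new = (v + beta * x) / (1.0 + beta)    # argmin (x - v)^2 + beta (x - x_prev)^2
        hit += (x_new - v) ** 2
        switch += (x_new - x) ** 2
        x = x_new
    return hit, switch

for beta in (0.0, 1.0, 10.0):
    hit, switch = run(beta)
    print(f"beta={beta:>4}: hitting cost={hit:7.2f}  switching cost={switch:7.2f}")
```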
The project will provide new foundational tools for the design of dynamic systems across multiple application areas. While we choose video streaming as our target application, the proposed fundamental research is applicable much more broadly. Notably, this project broadens the participation of underrepresented groups in STEM areas through programs at both K-12 and undergraduate levels. Planned activities include developing accelerated mathematics programs for middle-school students, summer programs for middle-school and high-school students, and summer research programs for undergraduate students.
This is a collaborative project with investigators from the University of Massachusetts Amherst, California Institute of Technology, and the State University of New York at Stony Brook. The results of this project will be maintained on the project website at https://groups.cs.umass.edu/hajiesmaili/soco/. These will include technical reports of the research findings, software prototypes of the algorithms designed, datasets, and experimental results collected for the 360 video streaming experiments.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.