2008 — 2011
Ying, Lei |
Collaborative Research: Nedg: Network Scheduling and Routing Under Partial Information Structure
Despite the clear success stories of wireless networks in the public, commercial, and governmental realms, recent work has shown that poor routing/scheduling can significantly impair their performance. We note that the unacceptable performance loss observed is not a consequence of a lack or deficiency of scheduling/routing algorithms in the literature; rather, the cause is the information structure (channel state, queue state, topology, etc.) that these algorithms assume. This proposal steers down a different path, initiating a conceptual shift toward the primary importance of information structure. We consider practical scenarios where only a small fraction of the network state can be explored. The goals of this proposal are two-fold: (a) characterizing the fundamental impact of partial/delayed network state information (NSI) on network throughput and other performance metrics such as delay and reliability, and (b) developing high-performance, distributed algorithms that can operate optimally subject to partial information. Broader Impact: We believe this work can contribute to the basic network science required to design high-performance scheduling and routing with limited NSI. We also plan to turn some of these questions into undergraduate research projects and to develop a new graduate-level course on the mathematics of communication-network design.
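For context, the full-information baseline that this line of work contrasts with is the classical MaxWeight rule, which schedules the link whose product of queue length and service rate is largest. The sketch below is that textbook rule under an assumed complete-NSI setting, not the partial-information algorithms the project proposes; all names are illustrative.

```python
def max_weight_schedule(queues, rates):
    """Classical MaxWeight: serve the link maximizing queue_length * rate.

    `queues[i]` is the backlog of link i; `rates[i]` is its current
    service rate. Assumes both are fully observed (complete NSI).
    """
    best, best_weight = None, -1.0
    for i, (q, r) in enumerate(zip(queues, rates)):
        if q * r > best_weight:
            best, best_weight = i, q * r
    return best
```

With partial or delayed NSI, the inputs to this rule are stale or missing, which is exactly the regime the proposal studies.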
2010 — 2015
Ying, Lei |
CAREER: Meeting Deadlines: Theories and Algorithms to Support Delay Constrained Communication in Wireless Networks
Wireless technology has emerged as a low-cost and infrastructure-free method to deploy communication networks and has inspired a wide range of applications such as wireless mesh networks for public safety, wireless sensor networks for unmanned surveillance, and vehicular networks for accident warnings. Many of these applications require effective delay control for desired performance, which, however, is one of the most difficult problems in wireless network design due to the inherent weaknesses of wireless communication such as limited bandwidth, channel fading, and interference.
In the past few years, a major breakthrough in wireless network research has been to harness the power of optimization theory and stochastic network theory for network design. These advances, however, shed little light on communication latency (or delay) because the focus is almost exclusively on the long-term throughput. This project takes a bold step to break away from today's throughput-first mentality, and embraces a delay-oriented approach where delay is a primary design objective, not a byproduct of throughput-oriented designs.
The expected results of this project include: (i) new network theories that quantify fundamental delay and throughput limits of wireless networks; (ii) transformative algorithms (functionalities spanning multiple network layers and their interactions) that are optimized for communications requiring delay guarantees; and (iii) distributed and low-complexity implementations. Theoretically, this project will lead to a union of stochastic analysis and optimization for quality of service control in wireless networks. Practically, this project will produce transformative algorithms for supporting delay constrained applications.
2010 — 2014
Ying, Lei |
NeTS: Medium: Collaborative Research: Modeling, Design and Emulation of P2P Real-Time Streaming Networks
Emerging trends suggest that a significant share of Internet traffic will be due to real-time streaming applications. However, given the unpredictable popularity of content and the significant swings in user demand, provisioning a centralized content distribution system that can guarantee the strict quality-of-service constraints needed for real-time streaming is very difficult. Therefore, this project studies modeling paradigms and systems design for utilizing end-user resources for real-time streaming in a peer-to-peer (P2P) fashion. The rationale is that P2P networks can dynamically adapt to changing network conditions and service requirements more easily than centralized content distribution systems.
This research effort takes a holistic approach in which, on one hand, we study fundamental performance limits of peer-to-peer streaming and design distributed algorithms to achieve these limits. On the other hand, we use these algorithms as building blocks to construct practical protocols that are implemented and validated on an emulation testbed. We bring together a rich set of analytical tools drawn from stochastic processes, optimization and control theory to merge them with real-life multimedia issues of video coding, frame dropping and trans-coding as well as practical implementation issues. Besides the technological impact of developing viable video-streaming ideas for an increasingly networked world, our project also informs and educates both students and the broader society with curriculum development, outreach and integration of under-represented groups.
2012 — 2016
Ying, Lei |
Collaborative Research: Resource Allocation in Clouds: A Stochastic Modeling and Control Perspective
Cloud computing services (such as Amazon EC2, Google AppEngine, and Microsoft Azure) are becoming ubiquitous and are starting to serve as the primary source of computing power for both enterprises and personal computing applications. A cloud computing platform (or simply, a cloud) can provide a variety of resources, including infrastructure, software, and services, to users in an on-demand fashion. Compared to traditional own-and-use approaches, cloud computing services eliminate the costs of purchasing and maintaining the infrastructures for cloud users, and allow users to dynamically scale computing resources up and down in real time based on their needs.
A cloud consists of a number of machines (computers), each with a certain amount of resources (CPU, RAM, hard disk space, etc.). Each machine can be subdivided into virtual machines, where each virtual machine (VM) behaves like a small machine with a certain amount of dedicated resources. When a user submits a job to the cloud, he or she requests a certain amount of resources from the cloud and the cloud responds by creating a VM with the required resources in a machine. The resource allocation problem is to figure out how to allocate jobs to machines. Further, when several jobs are waiting for service, the cloud must also decide which job to select for service next. The goal of this project is to design resource allocation algorithms for efficient operation of the cloud, and to design pricing mechanisms to maximize the cloud service provider revenue while providing good quality of service to competing users.
Intellectual Merit: The prior art in this area views the problem as a sequence of static problems: consider the jobs that are currently waiting for service and allocate them to machines by solving a combinatorial optimization problem. Such static approaches, which ignore the dynamic nature of the system, can lead to instability. Our viewpoint here is fundamentally different: we treat resource allocation as a dynamic stochastic network control problem. We will use queue-length information about waiting jobs as the feedback signal for resource allocation decisions such as routing jobs to machines and scheduling jobs on machines. To this end, we will answer a number of fundamental questions: What is the stability region of a cloud? Is there a tradeoff between computational complexity and stability? How can we characterize the performance of resource allocation algorithms beyond stability? And how should a cloud provider price its resources to maximize social welfare or profit? From a theoretical perspective, the novelty of the proposed approach lies in the design of control and performance-analysis algorithms that take computational complexity into account. Broader Impact: The PIs teach graduate-level courses spanning networks, games, control theory, and optimization. We were among the first to incorporate network applications in control courses and control applications in networking classes. The proposed project provides new opportunities for such cross-fertilization by opening up a new application area, namely cloud computing, for control-theoretic methodologies. The PIs have a strong record of advising undergraduate students and graduate students from underrepresented groups. We will continue our recruitment efforts from such student groups for this project as well.
We will also use specific opportunities for this purpose as applicable, such as the NSF Alliance Graduate Education and the Professoriate (AGEP) program, which is a coordinated effort by Iowa universities to recruit minorities, and the Graduate Minority Assistantship Program (GMAP), which provides funds for recruiting minority students on research assistantships.
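The queue-length-feedback idea described above can be illustrated with the textbook join-the-shortest-queue routing policy: an arriving job is sent to the machine with the fewest waiting jobs. This is a standard baseline, not the project's algorithm; the data layout below is an assumption for the sketch.

```python
import random

def join_shortest_queue(queues, job):
    """Route an incoming job to the machine with the shortest queue.

    `queues` is a list of lists; each inner list holds the jobs
    waiting at one machine. Ties are broken uniformly at random.
    Returns the index of the chosen machine.
    """
    shortest = min(len(q) for q in queues)
    candidates = [i for i, q in enumerate(queues) if len(q) == shortest]
    target = random.choice(candidates)
    queues[target].append(job)
    return target
```

Queue lengths here play exactly the role of the feedback signal: routing decisions react to the current backlog rather than solving a one-shot combinatorial problem.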
2012 — 2017
Ying, Lei |
NeTS: Small: Control of Partially Observable Wireless Networks: Fundamental Limits, Optimal Algorithms and Practical Implementation
State-dependent resource allocation is critical for improving network efficiency. Over the past two decades, remarkable progress has been made on the design of wireless networks with maximum throughput and low latency by using state-dependent resource allocation algorithms. However, most of these works assume the network is fully observable and the network state information is perfectly known. This assumption is becoming increasingly questionable because of multi-carrier technology, which makes it extremely expensive to obtain complete network state information, and because of transmission delays and measurement errors, which make it impossible to know the exact network state. As wireless networks become pervasive in our daily lives, new theories and algorithms are needed for partially observable wireless networks.
In this project, the PI models a partially observable wireless network as a partially observable Markov process, and then applies the framework of Markov decision processes (MDP). While important structural properties may be discovered using the MDP framework, finding optimal solutions is in general NP-hard. To overcome this difficulty, this project uses drift-based competitive analysis and drift-based large-deviations analysis for quantifying fundamental limits and deriving optimal or near-optimal algorithms. The project is expected to lead to breakthroughs in managing partially observable wireless networks. Fundamental limits of partially observable wireless networks, novel resource allocation algorithms with provable throughput and latency guarantees, and implementations on a real-world testbed will have a significant impact on the design and implementation of future wireless networks.
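The MDP framework the abstract invokes is easiest to see in the fully observed case, which is the baseline the partially observable setting generalizes (and where the NP-hardness mentioned above does not yet bite). The sketch below is standard value iteration on a generic finite MDP; the transition/reward arrays are assumptions for the illustration, not a model from the project.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration for a fully observed finite MDP.

    P[a][s][t] : probability of moving s -> t under action a
    R[a][s]    : expected one-step reward for action a in state s
    Returns the optimal value function and a greedy policy.
    """
    n_actions, n_states = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[a][s] = R[a][s] + gamma * sum_t P[a][s][t] * V[t]
        Q = R + gamma * np.einsum('ast,t->as', P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

In the partially observable case the state s is replaced by a belief distribution over states, which is what makes exact solutions intractable and motivates the drift-based analyses the project proposes.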
2015 — 2018
Ying, Lei; Zhang, Junshan; Kitchen, Jennifer
EARS: Joint Optimization of RF Design and Smartphone Sensing: From Adaptive Sniffing to Waze-Inspired Spectrum Sharing @ Arizona State University
The past decade has witnessed a skyrocketing demand for commercial wireless spectrum. These sharp increases in mobile traffic (particularly smartphone traffic) are projected to continue in the foreseeable future, creating an urgent need to develop innovative spectrum sensing and sharing technologies. The proposed research is inspired by Waze, a highly successful GPS-based mobile navigation application that provides real-time traffic information based on user-submitted travel times and route details. Inspired by Waze, this project is centered around the vision that future generations of smartphones will be able to carry out real-time spectrum sensing and sharing of the communication environment through built-in spectrum sniffers and shared usage information from mobile devices. This research will enable a paradigm shift from existing cognitive-radio design approaches to usage-aware spectrum sharing. Innovative implementation of a low-cost integrated broadband RF design for smartphone spectral sniffing, in combination with novel spectrum sharing techniques, will have a compelling and transformative impact on smartphone design and enable efficient spectrum sharing. The educational activities will develop a skilled workforce in this area of national need by inspiring and engaging middle and high school students in engineering.
Built on this vision, this project advocates joint design of radio frequency (RF) hardware and spectrum sensing for smartphones, aiming to enable usage-aware spectrum sharing with minimal dynamic-range and power-consumption requirements placed on the smartphone hardware. Under this common theme, the project consists of the following research and educational thrusts. I) Low-cost RF architecture for smartphone sensing. Thrust I focuses on low-cost, integrated, broadband RF design for spectrum sensing and data transmission. The spectral sniffer leverages shared usage information to detect RF signals from 1 GHz to 18 GHz with both high dynamic range and sensitivity, and the digital transmitter enables modulation-agnostic data transmission with low handset power consumption. Built on the low-overhead RF hardware design in this thrust, database-assisted spectrum access and distributed spectrum access are explored in Thrusts II and III, respectively. II) Waze-inspired database-assisted spectrum sharing. As in the Waze application, a database gathers spectrum usage information from smartphones and sends both real-time location-specific usage information and the set of potentially vacant channels in response to requests from individual users. Each user then carries out refined sensing, followed by data transmission. III) Waze-inspired usage-aware distributed spectrum sharing. Thrust III is dedicated to studying distributed spectrum sharing, where usage information is shared only among smartphone users in the vicinity. IV) Integrate research into education and outreach through a diverse set of activities that include curricular tasks and K-12 outreach.
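The "refined sensing" step in Thrust II is conventionally done with energy detection: compare the measured power in a channel against the noise floor. The sketch below is that standard primitive under assumed parameters (threshold, noise power); it says nothing about the project's actual RF sniffer design.

```python
import math

def channel_vacant(samples, noise_power, threshold_db=3.0):
    """Toy energy detector: declare a channel vacant when the measured
    average power is within `threshold_db` dB of the noise floor.

    `samples` are real-valued baseband samples; `noise_power` is the
    known (assumed) noise floor in the same units.
    """
    avg_power = sum(x * x for x in samples) / len(samples)
    margin_db = 10.0 * math.log10(avg_power / noise_power)
    return margin_db < threshold_db
```

A database-suggested channel would be probed this way before the user transmits on it.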
2016 — 2019
Ying, Lei |
Collaborative Research: Resource Allocation For Time-Critical Communications in Wireless Networks @ Arizona State University
Time-critical instantiations of wireless networks such as vehicular networks, robotic systems, real-time surveillance, and networked control impose stringent deadline requirements on packet transmissions. These emerging networks demand reliable and predictable transmissions over an unreliable, time-varying wireless medium to support applications such as warning messages, voice calls and video streaming. Despite remarkable progress in the design of wireless networks with maximum throughput and low delays over the last two decades, communication with hard deadlines in wireless networks remains a challenging open problem. Most existing works on delay performance in wireless networks focus on reducing average delays rather than guaranteeing hard deadlines. The goal of this project is to develop fundamental theories and distributed algorithms for transmitting packets with hard deadlines in wireless networks for time-critical applications. Technical advances in this project will contribute to the improvement of wireless systems for time-critical communications, including the Internet of Things, cyber-physical systems, emergency-response wireless networks, and sensor networks. The problem of supporting communications with hard deadlines in ad hoc wireless networks is very challenging because the capacity region of the network is arrival-dependent, and packet deadlines induce special types of spatial and temporal correlation. This project focuses on a comprehensive resource allocation solution for time-critical communications in wireless networks by utilizing the following four novel techniques: (1) a virtual-link approach to achieve spatial-temporal resource allocation that guarantees end-to-end hard deadlines, (2) a deadline-aware random access algorithm, (3) a virtual-frame approach to deal with the infinite temporal correlation among packets, and (4) deadline-aware PHY/MAC resource allocation.
By exploiting these transformative technical approaches, the project will (1) design distributed algorithms for mission-critical wireless networks, (2) design resource allocation algorithms that support hard end-to-end deadlines under physical-layer models beyond the collision model, and (3) design joint routing/scheduling algorithms for multihop traffic flows with end-to-end deadline constraints. Education is a core component of this project. During the course of the project, research and education will be integrated by incorporating the new theories and algorithms developed here into graduate-level courses. Every effort will be made to involve undergraduates and students from under-represented groups in this project.
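The simplest point of reference for deadline-aware scheduling is the classical earliest-deadline-first (EDF) policy on a single link: always send the pending packet whose deadline is closest, and drop packets that can no longer make it. This is textbook background for the abstract above, not the project's multihop algorithms; the one-packet-per-slot timing model is an assumption of the sketch.

```python
import heapq

def edf_schedule(packets, now=0):
    """Earliest-deadline-first over a single link.

    `packets` is a list of (deadline, name) pairs; one packet is
    transmitted per unit of time starting at `now`. A packet that
    cannot finish by its deadline is dropped. Returns (sent, dropped).
    """
    heap = list(packets)
    heapq.heapify(heap)  # min-heap ordered by deadline
    sent, dropped, t = [], [], now
    while heap:
        deadline, name = heapq.heappop(heap)
        if deadline < t + 1:        # cannot finish before its deadline
            dropped.append(name)
        else:
            sent.append(name)
            t += 1
    return sent, dropped
```

The hard part the project addresses is that over a shared, fading, multihop wireless medium, per-link EDF is no longer sufficient: deadlines couple decisions across links and across time.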
2016 — 2019
Zhang, Junshan (co-PI); Ying, Lei
TWC SBE: Small: Towards an Economic Foundation of Privacy-Preserving Data Analytics: Incentive Mechanisms and Fundamental Limits @ Arizona State University
The commoditization of private data has been trending up, as big data analytics is playing a more critical role in advertising, scientific research, etc. It is becoming increasingly difficult to know how data may be used, or to retain control over data about oneself. One common practice of collecting private data is based on "informed consent", where data subjects (individuals) decide whether to report data or not, based upon who is collecting the data, what data is collected, and how the data will be used. This model is becoming untenable, with vague privacy policies and a behind-the-scenes data brokerage market becoming the norm. In practice, there are two fundamental issues that need to be addressed: (i) data subjects have no control of data privacy after transferring private data to the data collector; and (ii) the data collector has sole ability to protect users' private data. This project takes a new, market-based approach: data subjects control their own data privacy by reporting noisy data, and data collectors provide incentives in exchange for receiving more accurate data. This research will enable a paradigm shift from the traditional practice of informed consent for private data collection to a market-based approach where data collectors have only the fidelity of data needed, reducing the potential damage from data breach and giving data subjects greater control over use of their private data.
In particular, the problem under consideration is studied in a game-theoretic setting, for general private data models and for a variety of privacy notions, with a focus on quantifying two fundamental tradeoffs: the tradeoff between cost and accuracy from the data collector's perspective, and the tradeoff between reward and privacy from a data subject's perspective. The research tasks include (i) devising effective incentive mechanisms for data collectors to collect quality data (controlled by individuals) at minimum cost; and (ii) developing privacy-preserving reporting algorithms that maximize data subjects' payoffs by taking both payment and privacy loss into account. New theories and mechanisms developed in this project will be integrated into undergraduate and graduate courses.
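The "report noisy data" idea above is conventionally instantiated with the standard Laplace mechanism from differential privacy: the subject adds Laplace noise whose scale grows as the privacy parameter epsilon shrinks, directly trading fidelity (what the collector pays for) against privacy. This is the classical mechanism, offered only as an illustration of the tradeoff; the project's mechanisms and privacy notions may differ.

```python
import math
import random

def laplace_noisy_report(value, sensitivity, epsilon, rng=random):
    """Report `value` plus Laplace(0, sensitivity/epsilon) noise.

    Smaller epsilon -> more noise -> stronger (epsilon-differential)
    privacy for the data subject, but lower fidelity for the collector.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

In the market framing of the abstract, the collector's payment schedule would effectively purchase a larger epsilon (less noise) from willing subjects.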
More information about this project can be found at the project homepage http://inlab.lab.asu.edu/data-privacy/
2017 — 2020
Zhang, Junshan; Ying, Lei
CPS: Medium: Collaborative Research: Demand Response & Workload Management For Data Centers With Increased Renewable Penetration @ Arizona State University
The confluence of two powerful global trends, (1) the rapid growth of cloud computing and data centers with skyrocketing energy consumption, and (2) the accelerating penetration of renewable energy sources, is creating both severe challenges and tremendous opportunities. Fast-growing renewable generation poses serious operational challenges, since it causes large, frequent, and random fluctuations in supply. Data centers, on the other hand, offer large flexible loads in the grid. Leveraging this flexibility, this project will develop fundamental theories and algorithms for sustainable data centers with the dual goal of improving data center energy efficiency and accelerating the integration of renewables in the grid via data center demand response (DR) and workload management. Specifically, the research findings will shed light on data center demand response while maintaining performance, which will help data centers decide how to participate in power-market programs. Further, the success of data center demand response will help increase renewable energy integration and reduce the carbon footprint of data centers, contributing to global sustainability. The PIs will leverage fruitful collaborations to bring the research to bear on ongoing industry standardization and development efforts. The PIs teach courses spanning networks, games, smart grid and optimization, and are strongly committed to promoting diversity by providing research opportunities to underrepresented students.
Built on the PIs' expertise in data centers and the smart grid, this project takes an interdisciplinary approach to develop fundamental theories and algorithms for sustainable data centers. The research tasks are organized under two well-coordinated thrusts, namely agile data center DR and adaptive workload management. The strategies and decisions of data center DR will be made based on the workload management algorithms that balance quality of service and energy efficiency and determine the supply functions. The workload management algorithms will in turn optimize quality of service under the electric load constraints imposed by DR. This project will make three unique contributions: (1) new market programs with strategic participation of data centers in DR, instead of as passive price takers, (2) fundamental understanding of the impacts of power network constraints on data center DR and new distributed algorithms for solving optimal power flow with stochastic renewable supplies, and (3) high-performance dynamic server provisioning and load balancing algorithms for large-scale data centers under time-varying and stochastic electric load constraints and on-site renewable generation.
2017 — 2020
Tong, Hanghang (co-PI); Ying, Lei
III: Small: Towards a Theoretical Foundation For Diffusion Source Localization @ Arizona State University
Diffusion processes have been used to model many real-world phenomena, including rumor spreading on the Internet, epidemics in human beings, emotional contagion through social networks, and even gene regulatory processes. Diffusion source localization is to identify the source(s) of a diffusion process based on observations such as the states of the nodes and a subset of timestamps at which the diffusion process reaches the nodes. The solutions to this problem can answer a wide range of important questions and have significant societal and economic impacts. For example, epidemic diseases are great threats to global health. The 2009 H1N1 virus alone resulted in 151,700 to 575,400 deaths globally. Locating an epidemic source can help identify the transmission media of the disease. This project develops fundamental theories and effective algorithms for fast and accurate diffusion source localization in large-scale networks and with partial information. The results have immediate applications for identifying patient zero in epidemiology, for tracking the spreading of computer viruses/malware in cyber security, for locating the sources of leaked classified information or rumors in social networks, for identifying infusion hubs of human diseases, etc. Existing research on social networks almost exclusively focuses on deriving realistic but mathematically tractable network models and diffusion models. The problem of locating diffusion sources in realistic networks has not been well studied. The key to accurately locating the diffusion source is to identify characteristics of infection subnetworks that are unique "signatures" of the source. By identifying and leveraging unique source signatures, this project advances the state of the art of diffusion source localization by addressing the following three challenges: (1) On the theory side, this project establishes the fundamental limits of source localization for realistic networks.
(2) On the algorithm side, this project develops a suite of effective and scalable diffusion source detection algorithms whose theoretical properties are well-understood. (3) From the evaluation perspective, this project comprehensively evaluates the proposed source detection algorithms using both simulation studies and real application scenarios.
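One classical heuristic from this literature, offered purely as an illustration of what a source "signature" can look like: on tree-like graphs, the source tends to sit near the center of the infected subnetwork, so a simple estimator picks the infected node minimizing the sum of hop distances to all other infected nodes (distance centrality, a relative of rumor centrality). This sketch is the textbook heuristic, not the project's algorithms.

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` in an undirected graph (adjacency dict)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distance_center(adj, infected):
    """Estimate the source as the infected node minimizing the sum of
    hop distances to all other infected nodes (distance centrality)."""
    def score(u):
        dist = bfs_distances(adj, u)
        return sum(dist[v] for v in infected)
    return min(infected, key=score)
```

On a path of infected nodes, for instance, this estimator returns the middle node, matching the intuition that infection spreads outward from the source.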
2018 — 2021
Ying, Lei |
NeTS: Small: Collaborative Research: Towards Adaptive and Efficient Wireless Computing Networks @ Arizona State University
Today's mobile devices are not merely smart, they are becoming intelligent as artificial-intelligence applications such as Facebook Caffe2 and Google TensorFlow Lite are pushed into mobile devices and as mobile devices are integrated into the cloud-fog-mobile architecture. This calls for efficient and adaptive computing/communication co-design of wireless networks to optimize application-level latency (including both communication latency and computing times) and to achieve energy efficiency (considering the energy consumed by both communication and computing). This project develops fundamental theories and novel architectures for low-latency, energy-efficient, and computing-centric wireless networks to support emerging mobile-intelligence applications. Theories and algorithms developed by the PIs are continually integrated into the undergraduate and graduate courses taught at the two universities. This project also provides hands-on experience with state-of-the-art wireless technologies to undergraduate and high school students.
Computing/communication co-design, while new for wireless networks, is a central topic in data center networks. However, the proposed solutions, while inspiring, are not directly applicable to wireless computing networks because of unique features of wireless networks such as wireless interference, channel fading and limited energy. This project focuses on provably optimal mechanisms that dynamically and adaptively schedule computing tasks and data transmissions to meet application-level performance requirements, and consists of three interdependent thrusts: (i) Optimal computing/communication co-design, which develops mathematical models and theoretical limits of wireless computing networks; (ii) Robust computing/communication co-design, which achieves the desired performance with imperfect state information and under unavoidable short-term system overload; and (iii) Learning-aided adaptive computing/communication co-design, which further improves the performance of wireless computing networks by leveraging both historical data and predictable user behaviors.
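The application-level latency objective above couples communication time and computing time. The simplest possible instance of that coupling is a one-shot offloading decision: run a task locally, or ship its input over the wireless link to a faster remote server. The sketch below is a toy model with hypothetical parameters, not the project's mechanisms.

```python
def should_offload(job_cycles, data_bits, local_cps, remote_cps, link_bps,
                   local_j_per_cycle, tx_j_per_bit):
    """Offload a computing task iff it lowers end-to-end latency without
    spending more energy than local execution.

    Latency = transfer time + compute time; energy compares local
    CPU energy against radio transmission energy.
    Returns (offload?, local_latency, remote_latency).
    """
    local_latency = job_cycles / local_cps
    remote_latency = data_bits / link_bps + job_cycles / remote_cps
    local_energy = job_cycles * local_j_per_cycle
    tx_energy = data_bits * tx_j_per_bit
    offload = remote_latency < local_latency and tx_energy < local_energy
    return offload, local_latency, remote_latency
```

Even in this toy model the answer flips with the channel rate `link_bps`, which is exactly why wireless fading and interference make co-design harder than in data center networks.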
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2018 — 2021
Ying, Lei |
SpecEES: Collaborative Research: Leveraging Randomization and Human Behavior For Efficient Large-Scale Distributed Spectrum Access @ University of Michigan Ann Arbor
An explosion of low-cost wireless devices promises new applications and services in diverse domains, including health, transportation, energy, manufacturing, and entertainment. This project focuses on developing energy- and spectrum-efficient, distributed multi-access strategies for dynamic and large-scale wireless networks under the stringent energy and delay requirements expected in emerging applications. This work will enable the development of a multitude of technologies that can improve the life of society at large. For example, it can support the next generation of communication technologies for large-scale Internet of Things (IoT) applications and autonomous vehicle applications. Moreover, education is a core component of this project. New theories and algorithms developed in this project are integrated into graduate-level courses at the three universities. Undergraduate and graduate students are involved in the project through undergraduate capstone and master's graduation projects at The Ohio State University.
This project explores the fundamental energy and spectrum-efficiency tradeoff of distributed spectrum access methods, and develops adaptive and correlated strategies that embrace and control randomness with efficiency guarantees for dynamic users with delay-sensitive traffic. In addition, the design incorporates humans into the loop by observing how humans react in simple multi-access games, providing simple human behavior models and simple human-perceived quality metrics, and by designing methods that can adapt to unexpected events or actions. A combined analysis and implementation approach of this project exploits high-dimensionality in the system while also overcoming difficulties for large-scale implementation and testing. In particular, the project develops mean-field techniques and analyses for large-scale spectrum access. Novel real-world experimentation strategies developed in this project emulate large-scale system operation in a small testbed by utilizing the simplification due to our randomized solutions and the integration of the aforementioned mean-field methods.
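The randomized multi-access the project "embraces and controls" has a classical reference point: slotted ALOHA, where each of n users transmits independently with probability p and a slot succeeds only when exactly one user transmits. A small helper makes the efficiency tradeoff concrete; this is textbook background, not the project's adaptive or mean-field strategies.

```python
def aloha_throughput(n_users, p):
    """Per-slot success probability in slotted ALOHA: exactly one of
    n_users transmits, each independently with probability p."""
    return n_users * p * (1.0 - p) ** (n_users - 1)
```

Maximizing over p gives p = 1/n, and as n grows the best achievable throughput tends to 1/e (about 0.368), the large-system regime where mean-field analysis of the kind described above becomes natural.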
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025
Ying, Lei |
Collaborative Research: CIF: Small: Nonasymptotic Analysis For Stochastic Networks and Systems: Foundations and Applications @ Regents of the University of Michigan - Ann Arbor
The confluence of cloud computing, machine learning/artificial intelligence and Internet-of-things/sensor technology is transforming society in unprecedented ways, and leading to innovations in autonomous systems, healthcare, bioinformatics, social networks, online and in-store retail industry, and education. Breakthrough developments in these widely disparate fields use machine learning and cloud computing with millions of servers to aid data-driven decision-making using terabytes of data, some in real-time and some offline. At the heart of these large-scale machine-learning and cloud-computing applications are stochastic dynamical systems of enormous scale. Analyzing and optimizing such systems are often difficult because of the size and the unknown statistical description of the underlying randomness in such systems. This project is aimed at understanding the performance of large stochastic systems by developing a new analytical method that synthesizes tools from probability, machine learning, and stochastic networks, and will lead to new advances in the design of fast and more efficient computing systems for training large-scale machine-learning models, while yielding new fundamental insights into deep reinforcement-learning algorithms. The project will contribute to education and workforce development by integrating the theories and algorithms into the graduate-level courses and by involving undergraduate and students from underrepresented groups in the research. <br/><br/>This project develops a new analytical method for obtaining non-asymptotic bounds using Lyapunov drift analysis. The method combines drift analysis with ideas from Stein's method, dimensionality reduction from state-space collapse and properties of reproducing kernel Hilbert spaces, as appropriate. 
The project leverages three key ideas to advance the state of the art: Stein's method for choosing appropriate Lyapunov functions to study mean-field limits, identifying lower-order models using the notion of state-space collapse, and using moment-generating functions or characteristic functions as test functions to obtain higher-moment bounds on the performance of stochastic systems. Over the course of this project, the method is applied to two applications: (i) robust, ultra-low-latency computing networks supporting complex machine-learning jobs with concurrent and dependent tasks processed in heterogeneous server farms; and (ii) deep reinforcement learning, deriving new performance bounds for neural temporal-difference learning and for Actor-Critic algorithms.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
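As a toy illustration of how a Lyapunov drift condition yields a non-asymptotic moment bound (a minimal sketch in the spirit of the abstract, not the project's actual method), consider a discrete-time single-server queue with Bernoulli arrivals and service. Setting the steady-state drift of the Lyapunov function V(q) = q² to zero gives the explicit bound E[q] ≤ E[(a−s)²] / (2(μ−λ)); the parameters below are illustrative.

```python
import random

def drift_bound(lam, mu):
    """Non-asymptotic mean-queue-length bound from the Lyapunov drift
    of V(q) = q^2: E[q] <= E[(a - s)^2] / (2*(mu - lam))."""
    return (lam + mu - 2 * lam * mu) / (2 * (mu - lam))

def simulate_mean_queue(lam, mu, steps=200_000, seed=0):
    """Empirical mean queue length of q' = max(q + a - s, 0) with
    Bernoulli(lam) arrivals and Bernoulli(mu) service offers."""
    rng = random.Random(seed)
    q, total = 0, 0
    for _ in range(steps):
        a = rng.random() < lam   # arrival this slot?
        s = rng.random() < mu    # service offered this slot?
        q = max(q + int(a) - int(s), 0)
        total += q
    return total / steps

lam, mu = 0.3, 0.5
bound = drift_bound(lam, mu)            # = 1.25 for these rates
empirical = simulate_mean_queue(lam, mu)
print(f"drift bound {bound:.3f} vs simulated mean {empirical:.3f}")
```

The simulated mean (about 0.75 for these rates) sits below the drift bound of 1.25; sharpening such bounds via Stein's method and state-space collapse is the kind of refinement the abstract describes.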
|
0.933 |
2022 — 2024 |
Ying, Lei |
Collaborative Research: Towards a Theoretic Foundation For Optimal Deep Graph Learning @ Regents of the University of Michigan - Ann Arbor
Graph learning has become a cornerstone of numerous real-world applications, such as social media mining, brain connectivity analysis, computational epidemiology, and financial fraud detection. Graph neural networks (GNNs) are an important and emerging family of deep graph learning models. By producing a vector representation of graph elements, GNNs have largely streamlined a multitude of graph learning problems. The vast majority of existing works require a given graph, including its topology, the associated attribute information, and labels for (semi-)supervised learning tasks, as part of the input to the corresponding learning model. Despite tremendous progress, a theoretical foundation of optimal deep graph learning is still missing, a gap that this project aims to fill. The outcomes of this project have broader impacts on education and society. The results of this project enrich the curriculum as well as summer outreach programs at the participating institutions, and are further disseminated to the community in a variety of formats to create synergies and advance understanding across disciplines. This project benefits a variety of high-impact graph-learning applications, including recommendation, power grids, neural science, team science and management, and intelligent transportation systems.
This project examines the fundamental role of the input data, including graph topology, attributes, and optional labels, in graph neural networks. There are three research thrusts. The first thrust seeks to understand how sensitive the GNN model is to the input graph, how to quantify the uncertainty of the GNN model, and how that impacts its generalization performance. The second thrust develops algorithms to optimize the initially provided graph so as to maximally boost the generalization performance of a given GNN model. The third thrust develops active learning methods based on deep reinforcement learning with entropy regularization to optimally obtain additional labels that further improve the GNN model. This project investigates new theoretical foundations for the sensitivity, uncertainty, and generalization performance of graph neural networks. It develops new algorithms for learning optimal graphs and for active GNNs with better efficacy, whose fundamental limits, including sample complexity, generalization error bounds, optimality, and convergence rates, are well understood.
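To make concrete how a GNN "produces a vector representation of graph elements" from topology and attributes, here is a minimal, self-contained sketch of one mean-aggregation message-passing layer in pure Python; the graph, features, and the 2×2 weight matrix are hypothetical, chosen only for illustration.

```python
def gnn_layer(adj, feats, weight):
    """One mean-aggregation message-passing layer:
    h_v' = relu(W @ mean(features of v and its neighbors))."""
    def matvec(W, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]
    out = {}
    for v, nbrs in adj.items():
        group = [feats[v]] + [feats[u] for u in nbrs]   # self + neighbors
        mean = [sum(col) / len(group) for col in zip(*group)]
        out[v] = [max(0.0, z) for z in matvec(weight, mean)]  # ReLU
    return out

# Tiny triangle-plus-leaf graph with 2-d node attributes.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.0, 0.0]}
W = [[1.0, -1.0], [0.5, 0.5]]
h = gnn_layer(adj, feats, W)
print(h[3])  # leaf node's embedding after one round of message passing
```

The project's first thrust asks, in effect, how embeddings like `h` change as the input `adj` and `feats` are perturbed; the second thrust optimizes `adj` itself.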
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.933 |
2022 — 2025 |
Anastasopoulos, Achilleas (co-PI) [⬀] Ying, Lei Kim, Hun-Seok |
Swift: Instantaneous Feedback-Based Adaptive Communications and Networks @ Regents of the University of Michigan - Ann Arbor
This research program investigates instantaneous feedback-based adaptive communications (IFAC) for next-generation wireless communication systems. IFAC enables sub-bit-level real-time adaptation of the wireless communication signal to exploit dynamic tradeoffs in spectral efficiency, reliability, and latency based on instantaneous feedback from the receiver. Most conventional systems are designed for message-level link adaptation based on the average noise power, without utilizing sub-bit-level noise observations. Such message-level feedback schemes are suboptimal for emerging and future applications that require ultra-reliable low-latency communication (URLLC) in massive machine-type communication networks, where the number of devices can be much greater than in today's consumer mobile cellular networks. This research program will investigate the feasibility of instantaneous feedback-based wireless communication systems that attain performance closer to the theoretical channel capacity and reliability limits, outperforming state-of-the-art conventional approaches. IFAC has great potential as a critical enabling technology for next-generation (beyond-5G) URLLC, which targets 100-microsecond latency and 99.99999% reliability for short message transmission. Attaining such stringent URLLC goals with IFAC can enable various classes of new applications, including autonomous micro-robots, intelligent automated factories, and virtual/augmented-reality systems for people with disabilities. The proposed research program includes undergraduate- and graduate-level curriculum development as an integral part.
It will contribute new course materials covering interdisciplinary topics that range from wireless communication theory and data-driven reinforcement learning to real-time hardware system prototyping.

The proposed IFAC scheme is inspired by an early result of Viterbi in 1965 that utilizes instantaneous feedback for reliable communication. The term "instantaneous" indicates that the feedback latency is negligible, as it is significantly shorter than the average symbol length. In Viterbi's original work, instantaneous feedback was a theoretical assumption that was not realizable in practical systems at the time. The proposed research aims to demonstrate the feasibility of end-to-end real-time systems, showing that instantaneous feedback is now possible thanks to advances in digital integrated-circuit technology, which has improved by an astonishing nine orders of magnitude (a factor of a billion) since 1965. IFAC is a rate-less scheme that adapts to the channel realization at the sub-bit level to optimize spectral efficiency, error rate, and/or latency given a target outage probability. This research program will study various IFAC schemes in which the instantaneous feedback decision at the receiver is formulated as a Markov decision process or reinforcement-learning problem with an application-specific optimization criterion. IFAC can enable highly adaptive, online-learning-based methods for massive machine-type communication, accomplishing efficiency and performance unattainable with conventional wireless communication schemes.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
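As a toy illustration of a sub-bit-level feedback decision rule (a classical SPRT-style stopping rule, not the proposed IFAC scheme itself; channel parameters are hypothetical), the receiver below observes repeated noisy copies of a bit over a binary symmetric channel, updates a log-likelihood ratio after each sub-bit observation, and feeds back "stop" the instant its confidence crosses a threshold, so the effective symbol length adapts to the channel realization.

```python
import math
import random

def sprt_receive(bit, p, threshold, rng):
    """Receive repeated noisy copies of `bit` over a BSC(p); after each
    sub-bit observation, update the log-likelihood ratio (LLR) and signal
    'stop' (instantaneous feedback) once |LLR| >= threshold."""
    step = math.log((1 - p) / p)       # LLR contribution of one observation
    llr, uses = 0.0, 0
    while abs(llr) < threshold:
        rx = bit ^ (rng.random() < p)  # channel flips the bit with prob p
        llr += step if rx == 1 else -step
        uses += 1
    return int(llr > 0), uses          # decision, channel uses consumed

rng = random.Random(1)
p, threshold, trials = 0.1, 4.0, 2000
errors, total_uses = 0, 0
for _ in range(trials):
    bit = rng.randrange(2)
    decoded, uses = sprt_receive(bit, p, threshold, rng)
    errors += decoded != bit
    total_uses += uses
print(f"error rate {errors/trials:.4f}, mean channel uses {total_uses/trials:.2f}")
```

Raising the threshold trades latency (more channel uses) for reliability, which is exactly the spectral-efficiency/reliability/latency tradeoff the abstract says IFAC exploits via an MDP or reinforcement-learning formulation.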
|
0.933 |