1986 — 1987
Iyer, Ravishankar
Engineering Research Equipment Grant: Investigation of Lisp Machine Architecture Reliability and Performance @ University of Illinois At Urbana-Champaign
1987 — 1989
Wah, Benjamin (co-PI); Iyer, Ravishankar; Banerjee, Prithviraj
Engineering Research Equipment Grant: Algorithm Development and Performance Evaluation of Hypercube Multiprocessors @ University of Illinois At Urbana-Champaign
Researchers at the University of Illinois-Urbana will purchase a hypercube multiprocessor. The equipment will be used for research in computer science, including the following three research projects: (1) Design and implementation of sophisticated computer-aided design tools for VLSI that can run efficiently on the Hypercube multiprocessor. Specifically, tools will be developed for cell placement, wire routing, timing, logic and fault simulation, circuit extraction from mask layouts, and automatic test generation. (2) Intelligent heuristic searches for artificial intelligence applications. Strategies for parallel processing of heuristic searches on the Hypercube will be developed. Also to be considered will be program restructuring for parallel evaluation of logic programs. (3) Performance evaluation of the Hypercube system.
1999 — 2003
Iyer, Ravishankar
Adaptive Software-Implemented Fault Tolerance For Networked Systems @ University of Illinois At Urbana-Champaign
This project will conduct an experimental study to investigate developing a set of general-purpose, fault tolerance services in a networked environment. The research will focus on designing a software-implemented fault tolerance (SIFT) layer, Chameleon, which will provide fault tolerance services to user applications, manage user processes across the network, provide rapid error detection, and initiate recovery from errors in the hardware, the operating system, applications, and the SIFT layer. In other words, Chameleon will protect all of the key components in a distributed system, including itself.
Our primary objective in developing Chameleon is to define and demonstrate the SIFT architecture, which includes static and dynamic reconfigurability and an extensive suite of error detection and recovery protocols in an integrated environment.
We will investigate ways to make Chameleon both statically and dynamically reconfigurable. Dynamic reconfigurability will allow Chameleon to change the level of reliability services that it provides during the lifetime of the target. In addition, Chameleon will be constructed to facilitate the creation of new fault tolerance techniques for the hardware, operating system, and applications. For static reconfigurability, we envision providing a library of fault tolerance techniques from which a customized fault tolerance solution can be provided to a target application. A specific solution will be composed of a range of distributed error detection and recovery techniques to provide the level of dependability required by the application.
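To make the library idea concrete, here is a minimal Python sketch (our illustration, not Chameleon's actual code; all technique names and levels are hypothetical) of composing a per-application fault tolerance solution from a library of detection and recovery techniques:

    # Minimal sketch of composing a fault tolerance solution from a
    # library of techniques, in the spirit of Chameleon's static
    # reconfigurability. All names are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Technique:
        name: str
        kind: str           # "detection" or "recovery"
        dependability: int  # coarse dependability level it supports

    LIBRARY = [
        Technique("heartbeat", "detection", 1),
        Technique("checksum", "detection", 2),
        Technique("restart", "recovery", 1),
        Technique("checkpoint_rollback", "recovery", 2),
        Technique("process_replication", "recovery", 3),
    ]

    def compose(required_level: int) -> list[Technique]:
        """Pick the detection and recovery techniques that match the
        application's required dependability level."""
        return [t for t in LIBRARY if t.dependability <= required_level]

    if __name__ == "__main__":
        for t in compose(required_level=2):
            print(t.kind, "->", t.name)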
2000 — 2004
Sanders, William; Iyer, Ravishankar; Hwu, Wen-Mei (co-PI); Lumetta, Steven (co-PI)
ITR: Experimental Validation of Large-Scale Networked Software Systems @ University of Illinois At Urbana-Champaign
Large-scale networked software systems are hard to design, and even more difficult to validate. Validation of such systems is increasingly important, since they are more and more being called on to perform critical functions. This validation difficulty stems from the inherent complexity of these systems and from the fact that they are often designed to adapt to variable workloads and operating conditions at the process, node, and network levels. Incorrect operation during periods of dynamic adaptation can lead to unpredictable and potentially hazardous consequences. In order to ensure that such systems operate correctly in critical environments, one must perform validations to confirm that they will function reliably in the presence of faults/failures, have predictable performance, and will continue to operate when intrusions occur. Validation of multiple behavior dimensions (e.g., reliability/availability, performance, and survivability) is also critical. This research will develop the theory, methodology, and tools necessary to experimentally validate the reliability/availability, performance, and survivability of large-scale networked software systems. The intention is to develop a comprehensive framework for experimentally validating large-scale networked software systems. Taken as a whole, this work will provide a sound and fundamental approach to the validation of networked software and applications.
2004 — 2009
Kumar, Panganamala; Iyer, Ravishankar; Vaidya, Nitin; Veeravalli, Venugopal (co-PI); Bernhard, Jennifer (co-PI)
RR: Wireless Wind Tunnel: A Testbed For Experimental Evaluation of Wireless Networks @ University of Illinois At Urbana-Champaign
This project evaluates protocols for wireless networks and develops techniques for scaling physical environments by deploying an anechoic chamber for interference control, forming a testbed referred to as the Wireless Wind Tunnel (WWT). Uses of the testbed focus on: evaluation of wireless protocols (WPs) in controlled environments; development of channel models suitable for simulation-based evaluation of WPs; and evaluation of techniques for scaling the physical environment to facilitate realistic wireless experiments. The WWT addresses limitations of purely computer-based evaluations, which stem from the present insufficient understanding of channel and system models for wireless networks: these models are not well understood, and brute-force accurate simulation of the wireless environment is at present too complex. Existing hardware testbeds suffer from one or both of the following shortcomings: experiments often cannot be repeated, due to interference by other wireless devices operating in the same frequency range, and the parameters of the experiment (such as the mobility patterns of the mobiles and scatterers in the environment) are not fully controllable.
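For context, one standard channel model of the kind such a testbed could help calibrate is the log-distance path-loss model with log-normal shadowing. The following Python sketch is our own illustration, not project code, and its parameter values are arbitrary:

    # Log-distance path-loss model with log-normal shadowing:
    # PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma
    import math, random

    def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0, sigma=4.0):
        """Path loss in dB at distance d (meters).
        d0: reference distance; pl_d0: loss measured at d0 (dB);
        n: path-loss exponent; sigma: shadowing std. dev. (dB)."""
        shadowing = random.gauss(0.0, sigma)
        return pl_d0 + 10.0 * n * math.log10(d / d0) + shadowing

    if __name__ == "__main__":
        for d in (1, 10, 50, 100):
            print(d, "m:", round(path_loss_db(d), 1), "dB")

Calibrating the exponent n and shadowing variance sigma against controlled chamber measurements is exactly the kind of task for which repeatable, interference-free experiments matter.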
Broader Impact: This work impacts the education mission, including coursework, laboratories, and student projects. The testbed will serve as a demonstration tool. New educational opportunities involving experimental research will open up, providing better training and motivation. Facilities will be made available to a larger pool of researchers. Additional impact is expected on communications systems in practice.
2004 — 2008
Sanders, William; Iyer, Ravishankar; Adve, Vikram (co-PI); Plante, Raymond
NGS: A Compiler-Enabled Model- and Measurement-Driven Adaptation Environment For Dependability and Performance @ University of Illinois At Urbana-Champaign
Abstract: CNS-0406351
Next-generation parallel and distributed computing must be dependable and have predictable performance in order to meet the requirements of increasingly complex scientific and commercial applications. The large-scale nature and changing user requirements of such applications, coupled with the changing fault environment and workloads in which they must operate, dictate that their dependability and performance must be managed in an online fashion, reacting to changes in anticipated and observed faults, demands placed on the system, and changes in specified dependability, performance, and/or functional requirements. This project will create a compiler-enabled model- and measurement-driven adaptation environment that allows distributed applications to perform as expected despite faults that may occur. Achieving those capabilities will require fundamental advances in and synergistic combinations between 1) compiler-based flexible dependability mechanisms, 2) efficient online model-based prediction and control, and 3) measurement-driven and compiler-enabled early error detection. The project will validate and apply the adaptation environment by using it for two important applications from the scientific and commercial domains: the CARMA (Combined Array for Research in Millimeter-Wave Astronomy) image pipeline, a data-intensive Grid application for radio astronomy, and iMobile, an enterprise-scale mobile services platform.
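One way to picture the proposed environment is as a closed loop in which runtime measurements feed an online model whose predictions drive adaptation. The following schematic Python sketch is our own illustration under that assumption; none of the components are the project's actual mechanisms:

    # Schematic monitor -> model -> adapt loop, illustrating online,
    # measurement-driven adaptation. All components are hypothetical.
    import random

    def measure():
        """Stand-in for a runtime measurement (e.g., an error indicator)."""
        return random.random()

    def predict_failure_prob(history, window=5):
        """Toy online model: moving average of recent error indicators."""
        recent = history[-window:]
        return sum(recent) / len(recent) if recent else 0.0

    def adapt(p, threshold=0.7):
        """Switch to a more protective configuration when risk is high."""
        return "checkpoint+replicate" if p > threshold else "baseline"

    history = []
    for step in range(10):
        history.append(measure())
        p = predict_failure_prob(history)
        print(step, round(p, 2), adapt(p))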
2006 — 2011
Sanders, William; Iyer, Ravishankar; Hwu, Wen-Mei; Nahrstedt, Klara (co-PI)
CRI: A Configurable, Application-Aware, High-Performance Platform For Trustworthy Computing @ University of Illinois At Urbana-Champaign
This project develops a laboratory to support application-aware hardware for trustworthy computing. It investigates new application-aware methods to provide customized levels of trust (specified by the application) via an integrated approach involving re-programmable hardware and novel compiler methods to extract security and reliability properties, supported by a configurable OS and middleware. The work enables ground-breaking experimental research in creating large-scale, demonstrably trustworthy cluster computing platforms for on-demand/utility computing and/or adaptive enterprise computing. The infrastructure augments a cluster of computers, each with hardware and software support allowing certain application functions to be executed in silicon. The facility supports innovative research in new software that takes advantage of the reconfigurable logic available from the Trusted ILLIAC system, a validation system considered the cornerstone for quantitative assessment of alternative designs and solutions. Exploring customized trust models via an integrated approach involving compiler, hardware, OS, and middleware, the cluster architecture includes programmable hardware on which many designs can be tested or optimized for applications without the costs of new chips. The Trusted ILLIAC supports a rich set of research projects that span online hardware-software assessment, efficient programming environments for heterogeneous multiprocessor systems, software bug detection, hardware validation, configurable trust-providing mechanisms, automated fault management, online model-based adaptation strategies, middleware support for trustworthiness, application-based placement of detectors, and smart card utilization.
Broader Impact: Trusted ILLIAC represents a fundamental change in how computing is accomplished (i.e., direct representation of tasks in silicon), enabling that paradigm by merging the new architecture with existing cluster and operating system functionality. In the field of trustworthiness, it provides customizable computing technology to the broader community of students, researchers, and institutions, enabling the creation of integrated trustworthy computing testbeds. The infrastructure benefits technology transfer efforts from research to real-world environments, enabling collaborations with government and industry developers to determine how trustworthy hardware assists and how software stacks can be integrated into products.
2010 — 2014
Iyer, Ravishankar; Kalbarczyk, Zbigniew
TC: Small: Data-Driven Analysis of Security Attacks in Large-Scale Systems @ University of Illinois At Urbana-Champaign
Despite sophisticated monitoring tools for runtime detection of intruders and techniques designed to protect computing systems from a wide range of attacks, attackers continually penetrate even well-protected systems. Attack data from real, large-scale production environments (National Center for Supercomputing Applications (NCSA) at Illinois, in this work) are used as a basis for characterizing and modeling attacker behavior and for uncovering deficiencies of the monitoring infrastructure. Increased understanding of attacks arising from these analysis and modeling activities significantly contributes to improvements in secure systems analysis and design. The analyses uncover new and realistic attack scenarios that can guide the design of enhancements to improve system protection against malicious activities at every level. Understanding real attack patterns and classes through detailed forensics pinpoints the open holes in a network/system and characterizes attacker behavior. In-depth study of the data allows investigating actions and intentions of the attacker, and creates a foundation for the design of an automated tool to assist in data collection, analysis, and response. The size and variety of the data enable a flexible framework to be developed that can incorporate insights gained from attacks yet unseen.
This research produces sound methods for automated (semi-automated) analysis of large populations of data on security attacks and develops tools to facilitate the analysis and detection. The goals are to understand the attack patterns, establish comprehensive models to capture attacker behavior, and use the models to enable development of techniques for rapid detection of malicious tampering with the system.
2013 — 2017
Slagell, Adam; Iyer, Ravishankar
TWC: Option: Medium: Collaborative: Semantic Security Monitoring For Industrial Control Systems @ University of Illinois At Urbana-Champaign
Industrial control systems differ significantly from standard, general-purpose computing environments, and they face quite different security challenges. With physical "air gaps" now the exception, our critical infrastructure has become vulnerable to a broad range of potential attackers. In this project we develop novel network monitoring approaches that can detect sophisticated semantic attacks: malicious actions that drive a process into an unsafe state without exhibiting any obvious protocol-level red flags. In one thrust, we conduct a measurement-centric study of ICS network activity, aimed at developing a deep understanding of operational semantics in terms of actors, workloads, dependencies, and state changes over time. In a second thrust, we develop domain-specific behavior models that abstract from low-level protocol activity to its semantic meaning according to the current state of the processes under control. Our goal is to integrate these models into operationally viable, real-time network monitoring that reports unexpected deviations as indicators of attacks or malfunction. A separate "Transition to Practice" phase advances our research results into deployment-ready technology by integrating them into the open-source Bro network monitor. Overall, our work will improve the security and safety of today's critical infrastructure by providing effective, unobtrusive security monitoring tailored to its specific semantics. In addition, we tie a number of educational activities to the research and involve students at all levels.
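To illustrate the difference between protocol-level and semantic monitoring, consider the following toy Python sketch (our own illustration; the safe envelope and process model are hypothetical): every command below is a well-formed protocol message, yet the monitor flags the one whose effect would drive the process outside its safe operating envelope.

    # Toy semantic monitor: each command is protocol-legal, but the
    # monitor checks whether applying it would drive the process state
    # outside its safe envelope. Illustrative only; real models would
    # be derived from observed ICS traffic and process semantics.
    SAFE_PRESSURE = (1.0, 8.0)   # hypothetical safe envelope (bar)

    def semantically_safe(current_pressure, commanded_delta):
        projected = current_pressure + commanded_delta
        return SAFE_PRESSURE[0] <= projected <= SAFE_PRESSURE[1]

    commands = [+0.5, +1.0, +4.0]   # each one is a well-formed write
    pressure = 5.0
    for delta in commands:
        if not semantically_safe(pressure, delta):
            print("ALERT: command", delta, "would leave the safe envelope")
        else:
            pressure += delta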
2013 — 2017
Sinha, Saurabh (co-PI); Lumetta, Steven; Robinson, Gene (co-PI); Iyer, Ravishankar; Jongeneel, Cornelis Victor
MRI: Development of a Novel Computing Instrument For Big Data in Genomics @ University of Illinois At Urbana-Champaign
Proposal #: 13-37732
PI(s): Lumetta, Steven S.; Iyer, Ravishankar; Jongeneel, Cornelis Victor; Robinson, Gene E.; Sinha, Saurabh
Institution: University of Illinois - Urbana-Champaign
Title: MRI/Dev.: Novel Computing Instrument for Big Data
Project Proposed: This project, developing CompGen, an instrument that adopts a hardware-software co-design approach, aims to provide a
- Vehicle for biologists and computer scientists to collaborate and develop new algorithms that are significantly faster and more accurate at a scale essential for handling the data deluge;
- Software framework and tool set for algorithm development that support diverse data analysis and visualization;
- Framework for developing accelerators and mapping to heterogeneous computational resources and hierarchical database storage.
Promising technologies include emerging die-stacked and non-volatile memory technologies as well as accelerators (GPUs, FPGAs, APUs). The project brings together a multidisciplinary team of geneticists, bioinformatics specialists, computer and algorithm designers, and data mining experts. The research to be enabled includes a wide and eclectic variety of problems with direct impact on health and social issues. Some directions include understanding the impact of climate change on gene expression and ecosystems, bringing genetic analysis into medical clinics, identifying effective antibiotics, and exploring socio-genomic relations between stress, depression, and genetics among low-income African-American mothers. CompGen provides an environment that enables managing and processing genomic information and developing new algorithms. The instrument brings disruptive computing architectures and algorithmic techniques to facilitate analysis of genomic data while providing high accuracy, resilience to errors, and scalability with growing volumes of data. It enables addressing the challenges of scale and diversity in genomic data through the development of new algorithms, models, and statistical methods. The instrument development focuses on reduction of data volume, optimization of the storage hierarchy, identification and implementation of computational primitives, data visualization, mathematical toolkit optimization, and performance and reliability assessment. These developments are expected to lead to new computational structures and hardware/software architectures that can be incorporated into hierarchical databases as well as heterogeneous processors for data analysis, compression, and optimization.
Broader Impacts: In addition to serving many areas, CompGen will serve as a tool for educating students and professionals in efficient ways to process and analyze genomic data and in handling big data in general. The instrument will serve multidisciplinary classes in which students gain hands-on research experience and introductory classes that expose students to applications and tools. Existing outreach and education programs will be utilized to publicize the instrument. Plans include Open House events attracting thousands of visitors, Coursera courses, and minority outreach workshops. A mentoring tool, Mytri, will be used for networking among female students. Moreover, the CompGen design will be made available to others, fundamentally changing the methods by which big datasets are handled in genomics research.
To this effect, an R&D consortium of hospitals, companies, and universities has been established to help identify needs, provide sources of data, act as early adopters, and ensure that new technologies are transferred smoothly into widespread use.
2014 — 2015
Iyer, Ravishankar
I/UCRC Planning Grant: Computing and Genomics - An Essential Partnership For Biology Breakthroughs @ University of Illinois At Urbana-Champaign
The "I/UCRC for Computing and Genomics - An Essential Partnership for Biology Breakthroughs," proposed by the U. of Illinois at Urbana-Champaign, the U. of Chicago, and the Mayo Clinic, will enhance research, education, and entrepreneurship while performing important technology transfer by bringing together an interdisciplinary team of industry partners from computer systems, health care/pharmaceuticals, and life sciences working in collaboration with genomics experts to address the colossal big-data challenge. The application of genomics across the life sciences industry is currently challenged by an inadequate ability to generate, interpret, and apply genomic data quickly and accurately for a wide variety of applications. One challenge has been that of integrating thought and market leaders across what had historically been orthogonal industries: those involved with the computer sciences and those involved with the biological sciences. With the advent of next-generation sequencing technology, those industries are now interdependent and have a critical need to synthesize and coordinate activities at the interface of computing and genomics. The participating sites propose to establish a collaborative environment that improves the applicability, timeliness, efficiency, and accuracy of the computational infrastructure to address pressing genome-based challenges. The CompGen consortium's vision is to engineer and optimize the computing systems needed by industry for genome analysis.
The CompGen Center will address the experimental process for genomic data. A variety of questions on health and social problems will become addressable, enabling much-needed biological and healthcare breakthroughs. Outcomes will enrich research infrastructure, develop the next generation of leaders in engineering and science, improve the quality of the workforce, and involve international partners. Collaborations will produce artifacts such as new algorithms, optimizations, and statistical models, in turn driving the design of the computing enterprise. The goal is to generalize those artifacts to drive the design and evaluation of computational models and hardware/software co-designed architectures, tightly coupled with new memory and computing technologies for scalability and accuracy.
2015 — 2018
Slagell, Adam; Withers, Alexander; Iyer, Ravishankar; Kalbarczyk, Zbigniew; Butler, Randal
SI2-SSE: AttackTagger: Early Threat Detection For Scientific Cyberinfrastructure @ University of Illinois At Urbana-Champaign
The cyberinfrastructure that supports science research (such as the cyberinfrastructure that provides access to unique scientific instrumentation like a telescope, an array of highly distributed sensors placed in the field, or a computational supercomputing center) faces the daunting challenge of defending against cyber attacks. Modest- to medium-sized research project teams have little cyber security expertise with which to defend against increasingly diverse, advanced, and constantly evolving attacks. Even larger facilities that have security expertise are often overwhelmed with the amount of security log data they need to analyze in order to identify attackers and attacks, which is the first step to defending against them. The challenges of the traditional approach to identifying an attacker are amplified by the lack of tools and time to detect attacks skillfully hidden in the noise of ongoing network traffic. The challenge is not necessarily in deploying additional monitoring, but in identifying this malicious traffic by utilizing all available information found in the plethora of security, network, and system logs that are already being actively collected. This project proposes to build and deploy an advanced log analysis tool, named AttackTagger, that is needed in research environments, can scale to address the dramatic increase in security log data, and can detect emerging threat patterns in today's constantly evolving security landscape. AttackTagger will make science research in support of national priorities more secure.
AttackTagger will be a sophisticated log analysis tool designed to find potentially malicious activity, such as credential theft, by building factor graph models for advanced pattern matching. AttackTagger will integrate with existing security software so as to be easily deployable within existing security ecosystems and to offload processing and computational work onto better-suited components. It can consume a wide variety of system and network security logs. AttackTagger accomplishes advanced pattern matching by utilizing a factor graph model, a type of probabilistic graphical model that can describe complex dependencies among random variables using an undirected graph representation, specifically a bipartite graph. The bipartite graph representation consists of variable nodes representing random variables, factor nodes representing local functions (or factor functions), and edges connecting the two types of nodes. Variable dependencies in a factor graph are expressed using a global function, which is factored into a product of local functions. In the security domain, factor graphs are more flexible for defining relations among events and user states than Bayesian network and Markov random field approaches. Specifically, factor graphs allow capturing sequential relations among events and enable integration of external knowledge, e.g., expert knowledge or a user profile.
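As a concrete illustration of the factorization, the following toy Python sketch (our own construction; the factor tables are invented and are not AttackTagger's model) scores hidden user-state sequences as a product of observation factors, which tie each event to the concurrent user state, and a transition factor, which captures the sequential relation between consecutive states:

    # Toy factor-graph scoring: hidden user states linked to observed
    # events by observation factors and to each other by a transition
    # factor. The global function is the product of the local factors.
    from itertools import product

    STATES = ["benign", "suspicious", "malicious"]
    EVENTS = ["login_anomaly", "download_sensitive"]

    def f_obs(event, state):          # observation factor (invented values)
        table = {
            ("login_anomaly", "benign"): 0.2,
            ("login_anomaly", "suspicious"): 0.6,
            ("login_anomaly", "malicious"): 0.2,
            ("download_sensitive", "benign"): 0.1,
            ("download_sensitive", "suspicious"): 0.3,
            ("download_sensitive", "malicious"): 0.6,
        }
        return table[(event, state)]

    def f_trans(prev_state, state):   # transition factor (invented values)
        return 0.7 if prev_state == state else 0.15

    # Pick the most likely hidden state sequence by enumeration,
    # which is fine for toy-sized sequences.
    best = max(
        product(STATES, repeat=len(EVENTS)),
        key=lambda seq: (
            f_obs(EVENTS[0], seq[0]) * f_obs(EVENTS[1], seq[1])
            * f_trans(seq[0], seq[1])
        ),
    )
    print("most likely state sequence:", best)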
2015 — 2018
Slagell, Adam; Withers, Alexander; Iyer, Ravishankar; Butler, Randal; Marsteller, James
CICI: Secure Data Architecture: Shared Intelligence Platform For Protecting Our National Cyberinfrastructure @ University of Illinois At Urbana-Champaign
This research is expected to significantly enhance the security of campus and research networks. It addresses the emerging security challenge of open, unrestricted access to campus research networks, but beyond that it lays the foundation for an evolvable intelligence sharing network with the very real potential for national-scale analysis of that intelligence. Further, it will supply cyber security researchers with a rich real-world intelligence source upon which to test their theories, tools, and techniques. The research will produce a new kind of virtual security appliance that will significantly enhance the security posture of open science networks, so that advanced high-performance network-based research can be carried out free of the performance lags induced by more traditional security controls.
This research will integrate prior research results, expertise, and security products from both the National Science Foundation and the Department of Energy to advance the security infrastructure available for open science networks, aka Science DMZs. Further, the effort will actively promote sharing of intelligence among Science DMZ participants as well as with national academic computational resources and organizations that wish to participate. Beyond meeting the security needs of campus-based DMZs, the effort will lay the foundation for an intelligence sharing infrastructure that will provide a significant benefit to the cybersecurity research community, making possible the collection, annotation, and open distribution of national-scale security intelligence to help test and validate ongoing security research.
2015 — 2018
Iyer, Ravishankar; Kalbarczyk, Zbigniew
CI: New: Collaborative Research: Computer System Failure Data Repository to Enable Data-Driven Dependability @ University of Illinois At Urbana-Champaign
Dependability has become a requisite property for many of the computer systems that surround us or work behind the scenes to support our personal and professional lives. Heroic progress has been made by computer systems researchers and practitioners working together to build and deploy dependable systems. However, an overwhelming majority of this work is not based on real, publicly available failure data. Unfortunately, there exists no open failure data repository for any recent computing infrastructure that is large enough, diverse enough, and detailed enough about the infrastructure and the applications that run on it.
This project will address this pressing need. The research team appreciates that this effort is challenging on many levels. Failure data are considered sensitive and are usually unveiled only before the trusting eyes of a small subset of the people at the organization. As part of a current one-year planning grant, this team has collected specific requirements for the repository from a wide audience, collected failure and usage data from the largest centrally managed computing cluster at Purdue, and performed preliminary analysis to reveal the workload usage patterns. The goal of this full-scale project is to collect data from a variety of computational infrastructure at the two participating universities, and from several of the NSF-funded large cyberinfrastructure projects.
The project will collect, curate, and present public failure data of large-scale computing systems in a repository called FRESCO. The data sets will include static information, dynamic information about the workloads, and failure information for both planned and unplanned outages. The data collection from production machines will have to obey several practical constraints -- no changes to the workload, little performance perturbation, and minimal changes to the operating system. Further, the data have to be sanitized for removing sensitive information and processed to make it interpretable by a broad group of researchers. This project will also provide analysis tools to answer certain commonly occurring questions, such as the correlation between workload and failure and the performance implications of using one library over another, as well as an intuitive graphical front-end which will allow people to explore the data sets and download the relevant ones.
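As a toy example of the kind of commonly occurring question such tools could answer, the following Python snippet (synthetic data; FRESCO's real schema is far richer) computes the correlation between per-epoch workload and failure counts:

    # Toy workload/failure correlation on synthetic per-epoch data.
    # statistics.correlation requires Python 3.10+.
    from statistics import correlation

    workload = [0.2, 0.5, 0.9, 0.4, 0.95, 0.7]   # utilization per epoch
    failures = [0,   1,   3,   1,   4,    2  ]   # failures per epoch

    print("workload/failure correlation:",
          round(correlation(workload, failures), 2))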
Widespread use of the data and the associated analysis tools will give computer systems researchers an unprecedented ability to do data-driven research and offer computing infrastructure providers an analytic-driven capability to run more efficient reliable infrastructures.
2016 — 2021
Hudson, Matthew (co-PI); White, Bryan (co-PI); Hwu, Wen-Mei (co-PI); Robinson, Gene (co-PI); Iyer, Ravishankar
I/UCRC: Computing and Genomics - An Essential Partnership For Biology Breakthroughs @ University of Illinois At Urbana-Champaign
The application of genomics across the life sciences industries is currently challenged by an inadequate ability to generate, interpret, and apply genomic data quickly and accurately for a wide variety of applications. Major innovations in the applicability, timeliness, efficiency, and accuracy of computational genomic methods are needed, and these innovations will develop best when an interdisciplinary team of scientists, engineers, and physicians from academia and industry, spanning computer systems, health care/pharmaceuticals, and life sciences, works together. The University of Illinois at Urbana-Champaign (UIUC) and the Mayo Clinic are building on their longstanding collaboration to form the Center for Computational Biotechnology and Genomic Medicine (CCBGM), which will bring together their excellence in computing, genomic biology, and patient-specific individualized medicine. Working closely with industry, the CCBGM's multidisciplinary teams will use the power of computational genomics to advance pressing societal issues, such as enabling patient-specific cancer treatment, understanding and modifying microbial communities in diverse environments related to human health and agriculture, and supporting humanity's rapidly expanding need for food by improving the efficiency of plant and animal agriculture. The CCBGM will leverage UIUC's long-standing prowess in large-scale parallel systems, big data analytics, and hardware and software system design to develop new technologies that enable future genomic breakthroughs. A key element of the Center's vision is to advance breakthroughs at the interface of biology and computing to transform health-care delivery while enhancing efforts that focus on the health science needs of underrepresented minorities.
The CCBGM will bring together an interdisciplinary team to address the colossal genomic data challenge. Academia/industry partnerships will enhance research, education, and entrepreneurship while performing important technology transfer. The Center will achieve transformational computing innovations on three fronts. (1) It will innovate computing and data management to deal with issues of scaling to the ever-growing volume, velocity, and variety of genomic data. It will concentrate initially on scaling the computation of epistatic interactions (interactions between two or more genes or DNA variants) in genome-wide association study data, generating lists of genomic features that are maximally predictive of phenotypes, and information-compression algorithms for genomic data storage and transfer. (2) It will revolutionize the generation of actionable intelligence from multimodal structured and unstructured data, to generate knowledge from big data. The emphasis will be on the processing and integration of genomic and multi-omic data, and on the merging of unstructured phenotypic data with information from curated data sources (e.g., electronic medical records, annotation databases). The integration of these diverse data types will improve discovery research, predictive genomics, diagnostics, prognostics, and theranostics. Application areas include targeted cancer therapy, pharmacogenomics, crop improvement, and predictive microbiome analysis. (3) It will achieve systems innovation by designing computer systems specially suited for computational genomics, providing unprecedented speed and energy efficiency while preserving the accuracy of the analytics. The systems will be used to quantify and improve the accuracy of detecting genomic variation and, more generally, to optimize computing architectures for the execution of genome analysis workflows.
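To see why scaling the computation of epistatic interactions is hard, note that testing all pairs of n variants is quadratic in n. The following toy Python sketch (our own illustration on synthetic genotypes; real analyses use proper statistical tests) scores every SNP pair with a crude interaction statistic:

    # Brute-force pairwise epistasis scan on synthetic 0/1/2 genotypes:
    # all pairs of n SNPs is O(n^2), which is the scaling bottleneck.
    from itertools import combinations
    import random

    random.seed(0)
    n_snps, n_samples = 6, 100
    geno = [[random.randint(0, 2) for _ in range(n_samples)]
            for _ in range(n_snps)]
    pheno = [random.randint(0, 1) for _ in range(n_samples)]

    def interaction_score(i, j):
        """Crude association between the product term g_i*g_j and phenotype."""
        term = [geno[i][s] * geno[j][s] for s in range(n_samples)]
        cases = [t for t, p in zip(term, pheno) if p == 1]
        ctrls = [t for t, p in zip(term, pheno) if p == 0]
        return abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))

    best = max(combinations(range(n_snps), 2),
               key=lambda ij: interaction_score(*ij))
    print("top-scoring SNP pair:", best)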
2016 — 2019
Iyer, Ravishankar; Kesavadas, T.; Kalbarczyk, Zbigniew
CPS: Breakthrough: Towards Resiliency in Cyber-Physical Systems For Robot-Assisted Surgery @ University of Illinois At Urbana-Champaign
Since 2000, surgical robots have been used in over 1.75 million minimally invasive procedures in the U.S. across various surgical specialties, including gynecological, urological, general, cardiothoracic, and head and neck surgery. Robotic surgical procedures promise decreased complication rates and morbidity, due to the minimally invasive nature of the procedures. A detailed analysis (also reported to the FDA) of the adverse events associated with the surgical robot indicates that despite the increased number of robotic procedures and their greater utilization, the rate of adverse events has remained relatively steady over the last 14 years. Even though current surgical robots are designed with safety mechanisms in mind, in practice several significant challenges exist in enabling timely and accurate detection and mitigation of adverse incidents during surgery. Toward this goal, the project will address (i) an in-depth analysis of incident causes, which takes into account the interactions among the system components, human operators, and patients; (ii) resiliency assessment of the robotic systems in the presence of realistic safety hazards, reliability failures, and malicious tampering; and (iii) continuous monitoring for detection of safety, reliability, and security violations to ensure patient safety.
The intellectual merit of this work lies in: (i) systems-theoretic approach driven by real data on safety hazards and medical equipment recalls, to identify causes leading to violation of safety constraints at different layers of the cyber and physical system-control-structure; (ii) creation of a unique safety hazard simulation engine to perform injections into robot control software and emulate realistic safety hazard scenarios in a virtual environment; (iii) an adaptive method for rapid detection of events that lead to safety violations, based on continuous monitoring of human operator actions, robot state, and patient status, in conjunction with a probabilistic graph-model that captures dependencies between the causal factors leading to safety hazards; and (iv) experimental validation using the real robot to assess monitoring and protection mechanisms in the presence of realistic safety hazards, reliability faults, and security exploits (recreated using safety hazard simulation engine). The broader impact of the project is a methodology for design and resiliency assessment of a larger class of control cyber-physical systems, which involve humans in the on-line decision making loop. Application of the methodology to robot-assisted surgery demonstrates the strength and practicality of the approach and is likely to attract interest from areas of academia and industry in which cyber-physical systems are either a subject of study or the basis for delivering a service (e.g., transportation or electric power grids). This project's educational outreach encompasses strategies for broadening participation in multi-disciplinary projects spanning medicine and engineering.
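The following toy Python sketch illustrates the safety-hazard-injection idea in the abstract (entirely our own construction; the project's simulation engine targets real robot control software in a virtual environment): a corrupted command is injected into a simulated control loop, and a monitor must block it before it produces an unsafe motion.

    # Sketch of safety-hazard injection into a simulated control loop:
    # perturb a command and check that the monitor blocks the unsafe
    # motion. All values and components are hypothetical.
    SAFE_LIMIT_MM = 2.0   # max allowed instrument movement per step

    def controller(target_mm):
        return target_mm          # nominal commanded motion

    def inject_fault(command_mm):
        return command_mm * 10    # emulate a corrupted command

    def monitor(command_mm):
        return abs(command_mm) <= SAFE_LIMIT_MM

    for step, target in enumerate([0.5, 1.0, 0.8]):
        cmd = controller(target)
        if step == 1:             # inject the hazard at step 1
            cmd = inject_fault(cmd)
        if not monitor(cmd):
            print("step", step, ": unsafe command", cmd, "mm blocked")
        else:
            print("step", step, ": command", cmd, "mm executed")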
2018 — 2021
Iyer, Ravishankar; Kalbarczyk, Zbigniew
SaTC: Core: Small: Data-Driven Study of Attacks On Cyber-Physical Infrastructure Supporting Large Computing Systems @ University of Illinois At Urbana-Champaign
This project addresses security attacks that: (i) masquerade as failures and (ii) are delivered via self-learning malware that monitors the target system and launches the attack at a time and system location to have a maximal impact, by injecting a strategic failure. The target systems are cyber-physical systems (CPS) that manage or control large computing enterprises (e.g., the cooling or power distribution of high-performance system or cloud infrastructure). In this scenario, the vulnerability of the cyber-physical system acts as a weak point that lowers the security barrier of an otherwise well-protected system. This project addresses a full range of issues, including: (i) design of self-learning malware; (ii) launch vectors for failure injection attacks; and (iii) mitigation and defense against such attacks. The research draws on data from a cyber-physical system that supports the cooling of the Blue Waters supercomputing system at the University of Illinois.
The intellectual merit of this work lies in: (i) Development of scientifically sound methods to jointly study reliability failures of, and malicious attacks on, a cyber-physical system infrastructure that provides critical services for the uninterruptible operation of a large computing infrastructure. Possible advances in cyber-attacks in the context of indirectness, automation (driven by self-learning and adaptive malware), and reduced barriers to unauthorized entry to the system are considered; (ii) Study of feasibility of deploying attacks through self-learning malware (inserted into the auxiliary CPS), which takes advantage of a low-security barrier. In cyber-physical systems, the real-time control sequences rely on online measurements from the sensors distributed across the system. By monitoring those measurements one can build smart malware that is able to learn (interpret) the system state and then trigger an attack sequence at an opportune time; (iii) Develop attack mitigation methods that can be integrated with an existing intrusion detection system and combined with the monitoring from the cyber and physical layers of the tenant system; (iv) Validation of the mitigation strategies using a data-driven simulation testbed that emulates the cyber-physical system behavior and enables experimentation with different attack scenarios. While we focus on a large computing infrastructure, the problem has broader implications, e.g., the proliferation of "Internet of Things" devices creates an environment for attackers to exploit vulnerabilities in such entities as a stepping stone for launching attacks against highly-valuable assets. In the broader context of cyber security, this project aims to identify potential advances in threats by demonstrating the feasibility of masquerading a security problem as a reliability failure. Demonstrating a proof-of-concept on a potential threat and presenting an effective mitigation method will promote the development of detection and mitigation techniques against new threats.
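As a sketch of the mitigation direction, the following toy Python example (our own illustration; the thermal model and thresholds are invented) applies a residual-based check of the kind that could be combined with an existing intrusion detection system: each sensor reading is compared against a physics-informed prediction, and sustained deviations are flagged whether they come from a failure or from tampering.

    # Residual-based anomaly check for a cooling-loop sensor: compare
    # readings against a toy physics-informed prediction and flag
    # sustained deviations. All models and values are hypothetical.
    def predicted_temp(load):
        return 20.0 + 15.0 * load    # toy thermal model

    readings = [(0.5, 27.6), (0.6, 29.1), (0.6, 45.0), (0.7, 47.2)]
    THRESH, consecutive = 5.0, 0
    for load, measured in readings:
        residual = abs(measured - predicted_temp(load))
        consecutive = consecutive + 1 if residual > THRESH else 0
        if consecutive >= 2:
            print("ALERT: sustained anomaly, residual",
                  round(residual, 1), "degrees")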
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021
Iyer, Ravishankar; Hwu, Wen-Mei (co-PI); Nahrstedt, Klara (co-PI); Kramer, William; Xu, Tianyin
PPoSS: Planning: Inflight Analytics to Control Large-Scale Heterogeneous Systems @ University of Illinois At Urbana-Champaign
The goal of this project is to fundamentally reinvent the design of the system, from hardware to application, using fast, novel inflight analytics to control and optimize large-scale heterogeneous computer systems to meet the performance and resiliency requirements of emerging applications such as data mining, artificial intelligence, and individualized medicine. Towards that goal, advanced machine-learning (ML) methods along with domain knowledge will be employed to support real-time system-state estimation and decision-making, including resource management, congestion/failure detection and mitigation, preemptive intrusion detection, and configuration management. Innovations across the system stack will be needed to achieve optimal results by taking full advantage of contextual information collected from multiple layers of the system and adapting rapidly to the deployment environment, workloads, and application requirements. ML-driven inflight analytics methods, developed in this effort, will be demonstrated on a heterogeneous "rack-scale" computing system, with the ultimate future objective of scaling up the framework to a warehouse-scale computing system.
The project will be organized around the following research activities. (i) Work with noisy and incomplete telemetry data (e.g., hardware telemetry, OS-level logs, and application-level traces) available from monitors across the system stack to perform system-state estimation (e.g., resource utilization). Telemetry data are often noisy and inconsistent in terms of semantics, modalities, and time granularities, making systems only partially observable. Bayesian deep-learning models will be developed to accurately capture system states and cope with data noise and incompleteness. (ii) Design models and algorithms for practical inflight analytics that make decisions (e.g., on scheduling or failure mitigation) based on the estimated system state to enhance system performance, reliability, and security. Such a framework will consist of an ensemble of interdependent ML models based on partially observable Markov decision processes (POMDPs) augmented with domain knowledge (e.g., interconnect topology) and trained in real time. (iii) Synthesize hardware accelerators for fast, low-cost inflight analytics. Toward that end, a compiler and a runtime framework will be developed that take high-level declarative probabilistic programs (i.e., the POMDPs), automatically compile them onto accelerators, and plan their execution across heterogeneous hardware (FPGAs, ASICs, and CPUs/GPUs). (iv) Assess the trustworthiness of inflight analytics. For that, a trust-assessment framework will be created to evaluate resiliency to failures and attacks due to residual imperfections of heterogeneous components, input uncertainty, and the use of stochastic ML algorithms. While in the planning stage, this project will focus on the design of inflight analytics in the context of rack-scale systems. The methods and algorithms developed will be useful in helping smaller-scale sites with limited resources manage their systems more efficiently. Students involved in this project will have a rare opportunity to participate in the design of heterogeneous ML-driven systems with broad applicability. The integration of ML methods and algorithms into real systems can be attractive to a diverse range of individuals, including underrepresented minority students. The goal is to raise awareness of scientific and engineering challenges in the design and deployment of next-generation computing systems to support emerging applications.
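At the core of state estimation under partial observability is the POMDP belief update, b'(s') ∝ O(o|s') Σ_s T(s,s') b(s). The following minimal Python sketch (our own illustration with invented toy matrices) maintains a belief over two hidden system states from coarse latency observations:

    # Minimal discrete Bayes-filter belief update for a POMDP: the
    # system is only partially observable, so the analytics maintain a
    # belief over hidden states. Toy transition/observation matrices.
    STATES = ["healthy", "congested"]
    T = {("healthy", "healthy"): 0.9, ("healthy", "congested"): 0.1,
         ("congested", "healthy"): 0.3, ("congested", "congested"): 0.7}
    # P(observation | state); observation is a coarse latency reading
    O = {("low_latency", "healthy"): 0.8, ("low_latency", "congested"): 0.2,
         ("high_latency", "healthy"): 0.2, ("high_latency", "congested"): 0.8}

    def update(belief, obs):
        """b'(s') proportional to O(obs|s') * sum_s T(s,s') * b(s)."""
        new = {s2: O[(obs, s2)] *
                   sum(T[(s1, s2)] * belief[s1] for s1 in STATES)
               for s2 in STATES}
        z = sum(new.values())
        return {s: v / z for s, v in new.items()}

    belief = {"healthy": 0.5, "congested": 0.5}
    for obs in ["high_latency", "high_latency", "low_latency"]:
        belief = update(belief, obs)
        print(obs, "->", {s: round(p, 2) for s, p in belief.items()})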
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.