1975 — 1978
Robinson, Lawrence (co-PI); Levitt, Karl
The Verification of Hierarchically Structured Programs |
1991 — 1992
Archer, Myla; Levitt, Karl
1991 International Workshop on the HOL (Higher Order Logic) Verification System and Its Applications, Davis, CA, August 26-28, 1991 @ University of California-Davis
HOL is a general theorem proving system developed at the University of Cambridge. Developed as a mechanization of a higher-order logic, HOL is now being used in a variety of applications ranging from hardware design verification to reasoning about communication protocols. The purpose of this workshop is to bring together the HOL user community to exchange ideas about the future directions for HOL. The workshop topics include: progress in integrating HOL into more traditional design and development environments; scrutiny of HOL from logicians expert in higher-order logics; application of HOL as a formal methods tool in verification.
2003 — 2006
Bishop, Matt; Levitt, Karl
Balancing Privacy and Analysis in Data Sanitization @ University of California-Davis
Proposal #: CCR-0311671; Title: Balancing Privacy and Analysis in Data Sanitization; PI: Matt Bishop
When handling an intrusion, one organization may need to share logs and network traces with other organizations. However, the data to be shared may contain sensitive information that the first organization does not wish to disclose to the other parties. One solution is to sanitize the data by removing the sensitive information, which protects the privacy of the organization. However, the information removed may be essential to the analysis that the other organizations must perform. This research explores the tension between privacy and security analysis. The goals of this research are to: (1) develop a sanitization language to describe the requirements for privacy and security analysis in such a way that the requirements can be automatically checked for inconsistencies; (2) determine the conditions, if any, under which "perfect sanitization" can occur; and (3) examine the problem of sanitizing a dynamic data set that changes as the sanitization proceeds. This will involve developing and testing the sanitization language both on data extracted from a network and on files containing student grades; the former will provide examples of both static and dynamic data. Perfect sanitization will be studied by translating the sanitization process into functions and analyzing their ranges and domains.
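To make the last point concrete, the sketch below treats a sanitizer as a function over log records, in the spirit of the range-and-domain analysis described above. It is an illustration only: the field names, the salted-hash pseudonym scheme, and the choice of which fields to keep are all assumptions, not the project's actual design.

```python
import hashlib

# Hypothetical sketch: a sanitizer as a function over log records.
# "Perfect sanitization" would require that the sanitized output still
# supports the intended analysis while the mapping back to identities
# is not recoverable. Field names here are illustrative assumptions.

def sanitize(record: dict, salt: bytes) -> dict:
    """Replace identifying fields with consistent pseudonyms,
    keep fields needed for security analysis."""
    def pseudonym(value: str) -> str:
        # One-way, salted mapping: same input -> same pseudonym,
        # so correlation across records survives sanitization.
        return hashlib.sha256(salt + value.encode()).hexdigest()[:12]

    return {
        "src_ip": pseudonym(record["src_ip"]),  # privacy: hide identity
        "dst_port": record["dst_port"],         # analysis: keep ports
        "timestamp": record["timestamp"],       # analysis: keep timing
    }

records = [
    {"src_ip": "203.0.113.7", "dst_port": 22, "timestamp": 1000},
    {"src_ip": "203.0.113.7", "dst_port": 22, "timestamp": 1001},
]
sanitized = [sanitize(r, salt=b"secret") for r in records]
# Both records map to the same pseudonym, so a scan from one source
# is still visible to the analyst without revealing the real address.
assert sanitized[0]["src_ip"] == sanitized[1]["src_ip"]
```

In this framing, asking whether "perfect sanitization" is possible amounts to asking whether a function exists whose range preserves everything the analyst needs while collapsing everything an adversary could use to recover identities.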
The significance of this work is in the balance of privacy and security. Previous work focuses on sanitizing data in an ad hoc manner rather than analyzing the balance between privacy and security and allowing the sanitizers to choose among particular requirements when the needs of privacy and security analysis conflict. If successful, the results can be applied in a wide variety of fields in which privacy and analysis (not just security analysis) must be balanced.
2003 — 2007
Rowe, Jeffrey; Bishop, Matt; Levitt, Karl
ITR: A Semantic-Based Approach For Automated Response to Attacks @ University of California-Davis
Proposal #: 0313411; Title: A Semantic-Based Approach for Automated Response to Attacks; PI: Karl Levitt
Historically, computer security efforts have focused on relatively simple prevention mechanisms, on detecting attacks that are not prevented, and on manual efforts to stop attacks and/or clean up afterwards. These procedures are ineffective when faced with fast-moving, programmed attacks, or in jumping ahead of an attacker using a complex series of malicious procedures. In these cases, automated response procedures are required. Practitioners, however, are wary of automatic response because of the high false positive rates generated by today's intrusion detection systems. If automated response systems were allowed to block detected attacks, many legitimate transactions would be accidentally blocked. Furthermore, an attacker could exploit the automated response to launch a denial of service attack. This project will investigate a new approach to automated response reasoning. The approach is based upon a semantic model of the effects that both attacks and possible responses have on the system. Responses are planned with the goal of removing attacker capabilities without affecting the critical capabilities of the system. A successful response system will stop fast-moving programmed attacks automatically without hindering the normal operations of the protected system.
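A toy sketch of this planning idea follows: capabilities are modeled as plain sets, and a response is chosen because it denies what the attack requires while touching nothing the mission needs. Every capability name and candidate response here is invented for illustration; the project's actual semantic model is far richer.

```python
# Hypothetical sketch of semantics-based response selection: each
# candidate response removes some capabilities from the system. Pick
# a response that denies the attacker's required capabilities while
# preserving capabilities critical to the mission. All names invented.

attack_needs = {"outbound_smtp", "write:/tmp"}
critical = {"inbound_http", "read:/var/db"}

responses = {
    "block_all_traffic": {"outbound_smtp", "inbound_http", "write:/tmp"},
    "block_smtp": {"outbound_smtp"},
    "mount_tmp_readonly": {"write:/tmp"},
}

def plan(attack_needs, critical, responses):
    viable = []
    for name, removed in responses.items():
        stops_attack = bool(removed & attack_needs)
        collateral = len(removed & critical)
        if stops_attack:
            viable.append((collateral, len(removed), name))
    # Prefer responses with no collateral damage, then minimal scope
    # (ties broken alphabetically by the tuple comparison).
    return min(viable)[2] if viable else None

print(plan(attack_needs, critical, responses))  # -> "block_smtp"
```

Under this framing, "block_all_traffic" stops the attack but is rejected because it also removes a critical capability, which is exactly the false-positive collateral damage the abstract warns about.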
2003 — 2007
Wu, Shyhtsun; Rowe, Jeffrey; Bishop, Matt; Levitt, Karl
NRT: Collaborative: Testing and Benchmarking Methodologies For Future Network Security Mechanisms @ University of California-Davis
Networks and computer systems are becoming increasingly attractive targets to large-scale programmed attacks such as worms and Distributed Denial of Service attacks (DDoS), which can compromise a vast number of vulnerable targets in a few minutes. Critical end-user applications vulnerable to such attacks include e-commerce, e-medicine, command-and-control applications, video surveillance and tracking, and many other applications. While there is a growing body of research techniques, prototypes, and commercial products that purport to protect these applications and the network infrastructure on which they rely, there is little existing scientific methodology by which to objectively evaluate the merits of such claims. Moreover, thorough testing of a defense system for worms or for attacks on the infrastructure cannot be evaluated safely on a live network without affecting its operation.
To make rapid advancements in defending against these and future attacks, the state of the art in the evaluation of network security mechanisms must be improved. This will require the emergence of large-scale security testbeds coupled with new standards for testing and benchmarking that can make these testbeds truly useful. Current shortcomings and impediments to evaluating network security mechanisms include lack of scientific rigor; lack of relevant and representative network data; inadequate models of defense mechanisms; and inadequate models of both the network and the transmitted data (benign and attack traffic). The latter is challenging because of the complexity of interactions among traffic, topology, and protocols.
The researchers propose to develop thorough, realistic, and scientifically rigorous testing frameworks and methodologies for particular classes of network attacks and defense mechanisms. These testing frameworks will be adapted for different kinds of testbeds, including simulators such as NS, emulation facilities such as Emulab, and both small and large hardware testbeds. They will include attack scenarios; attack simulators; generators for topology and background traffic; data sets derived from live traffic; and tools to monitor and summarize test results. These frameworks will allow researchers to experiment with a variety of parameters representing the network environment, attack behaviors, and the configuration of the mechanisms under test.
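As one example of the "attack simulators" component, the sketch below is a minimal discrete-time model of a random-scanning worm, in the spirit of classic worm-propagation analyses. The vulnerable population, scan rate, and initial infection count are illustrative assumptions, not parameters from the project.

```python
# Minimal random-scanning worm model (a sketch only). Each infected
# host probes `scan_rate` random IPv4 addresses per tick; a probe
# infects if it lands on a vulnerable, not-yet-infected host. A
# deterministic expected-value approximation keeps the sketch short.

ADDRESS_SPACE = 2**32
vulnerable = 350_000      # assumed vulnerable population
scan_rate = 100           # probes per infected host per tick
infected = 10             # assumed initial infections

for tick in range(121):
    susceptible = vulnerable - infected
    p_hit = susceptible / ADDRESS_SPACE   # chance one probe finds a target
    probes = infected * scan_rate
    new = min(susceptible, round(probes * p_hit))
    infected += new
    if tick % 20 == 0:
        print(f"t={tick:3d} infected={infected}")
```

Even this toy model reproduces the sigmoid growth curve that makes programmed attacks outpace manual response, which is the motivation for benchmarking defenses against generated attack traffic rather than waiting for live incidents.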
In addition to developing testing frameworks, the researchers propose to validate them by conducting tests on representative network defense mechanisms. Defense mechanisms of interest include network-based Intrusion Detection Systems (IDS); automated attack traceback mechanisms; traffic rate-limiting to control DDoS attacks; and mechanisms to detect large-scale worm attacks. Conducting these tests will require incorporating real defense mechanisms into a testbed, and applying and evaluating frameworks and methodologies. Conducting these tests will also help ensure that the testbed framework allows other researchers to easily integrate and test network defense mechanisms of their own.
The research team includes experts in security, networking, data analysis, software engineering, and operating systems who are committed to developing these challenging integrated testing frameworks.
Intellectual Merit: The development of testing methodologies for network defense mechanisms requires significant advances in our understanding of network attacks and the interactions between attacks and their environment, including: deployed defense technology, traffic, topology, protocols, and applications. It will also require advances in our understanding of metrics for evaluating defenses.
Education: The research into testing methodologies for network defense mechanisms will involve graduate students and provide new curriculum material for universities.
Broader Impact: By providing new testing frameworks, the work will accelerate improvements in network defense mechanisms and facilitate their evaluation and deployment. The researchers will hold yearly workshops to disseminate results and obtain community feedback.
2003 — 2007
Gertz, Michael; Levitt, Karl
Security Analysis and Re-Engineering of Databases @ University of California-Davis
Many of today's mission-critical databases have not been designed with a particular focus on security aspects such as integrity, confidentiality, and availability. Even if security mechanisms were used during the initial design, these mechanisms are often outdated due to new requirements and applications, and do not reflect current security policies, thus leaving openings for insider misuse and intrusion. The proposed research is concerned with analyzing various security aspects of mission-critical (relational) databases that are embedded in complex information system infrastructures. We propose four complementary avenues of research: (1) models and techniques to profile the behavior of mission-critical data stored in databases, (2) algorithms to correlate (anomalous) data behavior to application/user behavior, (3) techniques to determine and model user profiles and roles from behavioral descriptions, and (4) the integration of techniques, algorithms, and mechanisms into a security re-engineering workbench for (relational) databases. Two major themes form the core of the proposed approaches. First, the analysis of database vulnerabilities and violations of security paradigms is data-driven, i.e., the behavior of the data is analyzed and modeled before it is correlated to users and applications. Second, we introduce the concept of an access path model to uniformly model and correlate data flow and access behavior among relations, users, and applications. This model allows security personnel to fully inspect database security aspects in complex settings in a focused, aspect- (policy-) driven fashion.
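A minimal sketch of the profiling theme might look like the following: learn which relations each account normally touches, then flag accesses outside the learned profile. The schema and users are invented for the example; a real profile along the lines described above would cover attributes, value distributions, and full access paths, not just relation names.

```python
from collections import defaultdict

# Illustrative sketch of behavior profiling for database misuse
# detection: build a per-user profile of normally accessed relations
# from a training window, then check new accesses against it.

training = [
    ("alice", "orders"), ("alice", "customers"),
    ("bob", "payroll"), ("alice", "orders"),
]

profile = defaultdict(set)
for user, relation in training:
    profile[user].add(relation)

def check(user: str, relation: str) -> str:
    return "ok" if relation in profile[user] else "ANOMALOUS"

print(check("alice", "orders"))   # ok
print(check("alice", "payroll"))  # ANOMALOUS: outside alice's profile
```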
2003 — 2007
Wu, Shyhtsun; Rowe, Jeffrey; Bishop, Matt; Pandey, Raju (co-PI); Levitt, Karl
Verification of Security Run-Time Verification Systems @ University of California-Davis
Proposal #: 0341734; Title: Verification of Security Run-Time Verification Systems; PI: Karl N. Levitt; Co-PI: Raju Pandey
Through runtime verification (RTV), a system's behavior is checked against a specification. Specifications are almost always easier to create, and more efficient to execute, than code, making RTV an appealing approach to determine whether a system is performing as expected. This project addresses new aspects of RTV: (1) detecting security incidents using the technique of specification-based intrusion detection originally developed at UC Davis; (2) checking that response measures taken to mitigate an attack in progress and to effect recovery perform as expected; (3) using automated formal methods (i.e., ACL2) to verify the specifications for RTV with respect to overall security policies (policies will be developed for Unix privileged programs and for numerous network protocols); (4) seeking a general approach to RTV that unifies security and fault tolerance; and (5) using the NASA testbed to run experiments on an RTV (security attacks and natural faults) to identify the components of the specifications that are needed in practice and to develop metrics for the effectiveness of RTVs, e.g., false positive and false negative rates.
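To illustrate item (1), the sketch below encodes the allowed behavior of a hypothetical privileged program as a small state machine over abstract events and flags any trace that leaves the specification. The events and transitions are invented for the example; real specifications in this line of work cover the system calls of Unix privileged programs and protocol messages.

```python
# A minimal sketch of specification-based runtime verification: the
# allowed behavior is a state machine over abstract events, and the
# monitor flags any trace that leaves the specification. Event names
# and transitions are assumptions made for illustration.

SPEC = {
    ("start", "open_config"): "configured",
    ("configured", "drop_privileges"): "unprivileged",
    ("unprivileged", "serve_request"): "unprivileged",
}

def monitor(trace):
    state = "start"
    for event in trace:
        nxt = SPEC.get((state, event))
        if nxt is None:
            return f"VIOLATION: {event!r} not allowed in state {state!r}"
        state = nxt
    return "trace conforms to specification"

# A conforming run, then one that serves requests before dropping root.
print(monitor(["open_config", "drop_privileges", "serve_request"]))
print(monitor(["open_config", "serve_request"]))
```

Note the appeal the abstract describes: the specification says only what must hold, not how the program computes, so the monitor detects violations regardless of whether the implementation is buggy or under attack.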
2004 — 2008
Rowe, Jeffrey; Levitt, Karl; Yoo, S. J. Ben; Akella, Venkatesh (co-PI)
Collaborative Research: NeTS-NR: High-Performance Next Generation Networking Technology and Systems For Heterogeneous Networks @ University of California-Davis
This project investigates next generation network technology and systems geared towards high-performance internetworking across heterogeneous networks. The proposed approach adopts an overarching architecture across subnetwork boundaries while taking into account the underlying technologies such as wireless, satellite, optical, and electrical wireline networks. In particular, this project explores new networking protocols and programmable network elements for enhancing end-to-end performance, providing flexible programmability, and achieving intelligent network control and management. We will investigate the following three key research topics: (1) design of segmented transport protocols, which can be aware of the underlying network technologies and adaptive to various subnetwork domains; (2) design, simulation, and testing of intelligent programmable network elements (IPNEs), which are middle boxes located at the boundary of heterogeneous network domains to provide impedance matching between the diverse networks and to support segmented transport protocols; and (3) design of an intelligent network control and management (NC&M) system, which will conduct an Observe-Analyze-Act cycle to measure network status, analyze possible implications, and take proactive actions. This project will pay special attention to simulation studies of segmented transport protocols in heterogeneous networks, the functional design of IPNEs with programmable/adaptive features, and the development of an NC&M system that works with IPNEs. By addressing these issues, this project will provide flexible protocols and programmable network elements for high-performance heterogeneous networks, and set a solid framework for the deployment of next generation networking technologies and systems.
2005 — 2008
Wu, Shyhtsun; Rowe, Jeffrey; Olshausen, Bruno (co-PI); Chuah, Chen-Nee (co-PI); Levitt, Karl; Yoo, S. J. Ben
Collaborative Research: NeTS-NBD: Intelligent and Adaptive Networking For the Next Generation Internet @ University of California-Davis
This project investigates next generation network technology and systems capable of understanding and learning the high-level perspective of the network. The proposed approach pursues a new cognitive intelligent networking paradigm that maintains the success of today's Internet while incorporating cognitive intelligence in the network: a new networking technique that gives the network the ability to know what it is being asked to do, so that it can progressively take care of itself as it learns more. In particular, we explore a new networking architecture and network elements that will lead to a future network with (a) improved robustness and adaptability, (b) improved usability and comprehensibility, (c) improved security and stability, and (d) reduced human intervention for operation and configuration. This project pursues a set of comprehensive studies that seek innovations through the design and modeling of a new brain-reflex cognitive intelligence architecture, an intelligent programmable network elements architecture, and an intelligent network control and management design.
Broader Impact: The team approach covering neuroscience, data mining, computer science, systems engineering, artificial intelligence, and networking will provide rich opportunities for students to learn beyond their primary fields of study. New courses developed by the faculty members will disseminate the new material covering neuroscience and information technology.
2005 — 2009
Wu, Shyhtsun; Rowe, Jeffrey; Bishop, Matt; Levitt, Karl; Pandey, Raju (co-PI); Balasubramanyam, Poornima
NeTS-NOSS: SNIDS: Sensor Network Intrusion Detection Systems @ University of California-Davis
This research is directed at filling a critical need in building secure sensor networks. Using techniques other than cryptography, the research develops intrusion detection and response systems (IDRS) that are suited especially to the challenges of a deployed wireless sensor network.
Specific topics being pursued include (i) developing lightweight specification-based IDS detectors that apply to various sensor network protocols, (ii) employing formal reasoning and verification techniques on these detectors to develop a formal framework for proving that, for a given set of assumptions, the IDS will trigger an alarm whenever the policy is violated, regardless of the correctness of the protocol or its implementation, (iii) building a cooperative, distributed, lightweight ID architecture to explore issues in data acquisition, aggregation, correlation and analysis as well as appropriate dissemination of IDS alerts and response directives suitable for this domain, and (iv) employing the NSF-NeTS UCDavis SENSES project software infrastructure that facilitates the task of developing, deploying and managing sensor network applications in order to generate IDS schemas from the above developed security specifications. This will be used to develop common IDS components for building the IDS middleware. The key here is to develop very fine-grained and scalable components that can be synthesized to fit on a wide range of devices.
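As a concrete illustration of item (i), a lightweight specification-based detector can be as simple as a set of stateless per-packet checks that any overheard packet must satisfy. The protocol fields, neighbor table, and constants below are assumptions made for the example, not the format of any real sensor protocol.

```python
# Sketch of a lightweight specification-based detector for a
# hypothetical sensor routing protocol: cheap per-packet checks
# suitable for a resource-constrained node. All fields are invented.

NEIGHBORS = {3, 7, 12}    # node IDs this sensor can actually hear
MAX_HOPS = 16

def check_packet(pkt: dict) -> list:
    violations = []
    if pkt["sender"] not in NEIGHBORS:
        violations.append("sender is not a known neighbor")
    if not (0 < pkt["hop_count"] <= MAX_HOPS):
        violations.append("hop count out of range")
    if pkt["hop_count"] <= pkt["claimed_parent_hops"]:
        violations.append("hop count must exceed the claimed parent's")
    return violations

pkt = {"sender": 5, "hop_count": 2, "claimed_parent_hops": 2}
print(check_packet(pkt))
# ['sender is not a known neighbor',
#  "hop count must exceed the claimed parent's"]
```

The formal-framework goal in item (ii) then amounts to proving that any protocol run violating the policy must trip at least one such check, independent of the protocol implementation.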
This research will lead to the development and deployment of new architectures and new IDRS methodologies suitable for the realization of effective and evaluatable security for sensor networks.
2010 — 2015
Levitt, Karl; Peisert, Sean
TC: Small: Collaborative Research: Symbiosis in Byzantine Fault Tolerance and Intrusion Detection @ University of California-Davis
Two principal components for providing protection in large-scale distributed systems are Byzantine fault tolerance (BFT) and intrusion detection systems (IDS). BFT is used to implement strictly consistent replication of state in the face of arbitrary failures, including those introduced by malware and Internet pathogens. Intrusion detection refers to a broad set of services that detect events that could indicate the presence of an ongoing attack. IDSs are far from perfect: they can both miss attacks and misinterpret benign events as malicious. In addition, IDSs are themselves vulnerable to attack. These two components approach different parts of system security. Each, however, has the potential to improve the other, which is the theme of this project. The integration of these two efforts, at both the fundamental and system levels, has proven elusive. Fault-tolerant distributed algorithms have been designed to use failure detectors for some time, but only as an abstraction. Intrusion detection has been, for the most part, a service that gives some general improvement in system security. Marrying these two approaches could be a large step towards making BFT a truly practical approach in multisite systems, and gives a novel way to integrate multiple IDSs to improve security in a multisite system with nonuniform and varying trust.
Some examples of such benefits are: (1) Any evidence gathered by BFT about suspicious behavior can be useful to an IDS, since it could indicate that the system has been compromised. (2) Information from an IDS can be used by BFT to influence its behavior towards the servers of the replicated service. This could, for example, allow BFT to stop using a site even though the service has not (yet) been affected, or to assume a more benign set of failures for a site that appears to be well managed. (3) The way that BFT reacts to suspicious behavior is a complex policy that could, at least in part, be moved to the IDS; doing so would allow the policy to be tuned. (4) A further detection method is to compare the internal suspicions of BFT with the external suspicions of the IDS. (5) BFT can be used to detect and cope with attacks on an IDS. (6) An IDS can confirm that parties in a BFT set are behaving according to the BFT protocol, which can improve the performance of a BFT system. This research explores the potential of a merged system by developing a version of BFT for wide-area networks that is designed with several IDSs as part of the architecture. The IDS will serve as a suspicion detector that allows BFT to define sets of sites that trust each other, and can thus use a lower-latency protocol among them. The IDSs will use BFT to agree upon detection states to make more useful detections. Information collected by BFT will be used by the IDS to detect malicious behavior. And, where possible, BFT and IDS will check each other to increase the detection power of the system. A prototype of the system will be implemented, and a simple synthetic application to measure performance and sensitivity to a set of simulated attacks will be built.
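The sketch below illustrates points (2) and (4) in miniature: per-site suspicion scores from the replication layer and from an external IDS are fused into one trust decision, and disagreement between the two views is itself reported as a signal. The sites, scores, and threshold are all invented for the example.

```python
# Hypothetical fusion of internal (BFT) and external (IDS) suspicion.
# A site joins the trusted set, eligible for the lower-latency
# protocol, only if neither component finds it suspicious; when the
# two views disagree, that disagreement is flagged (point 4 above).

bft_suspicion = {"siteA": 0.0, "siteB": 0.1, "siteC": 0.8}  # protocol evidence
ids_suspicion = {"siteA": 0.1, "siteB": 0.7, "siteC": 0.1}  # sensor alerts

def trusted_set(bft, ids, threshold=0.5):
    trusted = set()
    for site in bft:
        if max(bft[site], ids[site]) < threshold:
            trusted.add(site)
        elif (bft[site] < threshold) != (ids[site] < threshold):
            print(f"flag {site}: BFT and IDS disagree")
    return trusted

print(trusted_set(bft_suspicion, ids_suspicion))  # {'siteA'}
```

Here siteB worries only the IDS and siteC worries only BFT; both disagreements are surfaced, while only siteA qualifies for the trusting, lower-latency configuration.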
2011 — 2014
Wu, Shyhtsun; Levitt, Karl
GENI: EAGER: GENI Experiments to Explore Adoption of New Security Services @ University of California-Davis
This EAGER project will carry out three classes of experiments on GENI, all related to security and privacy and all involving low-level network facilities. Of particular interest is the issue of adoptability: will real users (system administrators) accept novel security solutions and under what circumstances? Three experiments are proposed to address these challenges.
The first experiment focuses on a monitoring system to detect infrastructure attacks. The protocol WATCHERS is known to be able to detect almost all attacks on routers, but its behavior under realistic network traffic is unknown. This experiment will ask questions such as: What is the effect of monitoring on the infrastructure itself? How do the benefits of monitoring, weighed against its cost, affect the adoption of the service by autonomous network entities?
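For readers unfamiliar with WATCHERS, its core idea is conservation of flow: transit traffic that enters a correctly behaving router should also leave it. A minimal sketch of that check follows; the counters and tolerance are illustrative, and a real deployment must account for legitimate loss, fragmentation, and counter synchronization.

```python
# Sketch of the conservation-of-flow check behind WATCHERS: compare
# transit traffic entering a router with transit traffic leaving it,
# and flag routers that absorb or drop significantly more than an
# assumed tolerance allows.

def check_router(bytes_in_transit: int, bytes_out_transit: int,
                 tolerance: float = 0.05) -> str:
    if bytes_in_transit == 0:
        return "no transit traffic observed"
    loss = (bytes_in_transit - bytes_out_transit) / bytes_in_transit
    if loss > tolerance:
        return f"SUSPECT: {loss:.0%} of transit traffic vanished"
    return "conserves flow"

print(check_router(1_000_000, 990_000))  # conserves flow
print(check_router(1_000_000, 400_000))  # SUSPECT: 60% ... vanished
```

The experiment's cost question then becomes measurable: how much counter traffic and router CPU does maintaining these per-neighbor tallies consume under realistic load?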
The second experiment is focused on attack mitigation with modified infrastructure services. Specifically, they will investigate how incremental adoption of a DNS protocol modification might affect the global domain name service when both standard and modified protocols operate simultaneously. Key questions are: Does a new infrastructure attack mitigation scheme interfere with the vulnerable service in widespread use? How do the new and old services compete with one another during the adoption phase?
The third experiment focuses on distributed, private online social networking. The PIs propose to explore the deployment of secure and privacy-flexible p2p-client platforms for migrating from a centralized social network to a decentralized peer-to-peer one. The PIs propose to develop a social caching/name-resolution server, analogous to DNS for IP networks, to assist the p2p clients (or super peers) connected through GENI as they manage their privacy settings for their communities of interest. Key questions are: How can superpeers identify communities of interest and other social groups? Can they interface with centralized social networking frameworks, like Facebook? Can they aid in protecting the privacy of their constituent clients?
As for broader impacts, GENI will be used for experimentation in six security-related university classes. The PIs also participate in the UC Davis COSMOS (Computer Security, Privacy, and Cybervillainy) program, which provides high-achieving high school students the opportunity to explore advanced topics in math and science in a university setting. Laboratory experimentation makes up 30% of the COSMOS program curriculum, for which GENI will serve as an ideal platform.
2011 — 2015
Rowe, Jeffrey; Levitt, Karl
TC: Small: Collaborative Research: An Argumentation-Based Framework For Security Management @ University of California-Davis
Computer users are increasingly faced with decisions that impact their personal privacy and the security of the systems they manage. The range of users confronting these challenges has broadened from the early days of computing to include everyone from home users to administrators of large enterprise networks. Privacy policies are frequently obscure, and security settings are typically complex. Missing from the options presented to a user is a decision support mechanism that can assist her in making informed choices, one that, among other things, presents the consequences of the decisions she is asked to make.
This work introduces formal argumentation as a framework for helping users make informed decisions about the security of their computer systems and the privacy of their electronically stored information. Argumentation, a mature theoretical discipline, provides a mechanism for reaching substantiated conclusions when faced with incomplete and inconsistent information. It provides the basis for presenting arguments to a user for or against a position, along with well-founded methods for assessing the outcome of interactions among the arguments. An elegant theory of argumentation has been developed based on meta-rules characterizing relationships between arguments, and rules for argument construction and evaluation have been devised for specific domains such as medical diagnosis. This project investigates argumentation as the basis for helping users make informed security- and privacy-related decisions about their computer systems. Three specific aims are addressed: 1) implement an inference engine that reasons using argumentation; 2) facilitate security management through an argumentation inference engine, a rule base specialized for security management, and sensors providing security alerts, all enhanced with an interactive front-end; and 3) reason about the consistency and completeness of domain knowledge as it evolves. To understand the kinds of domain-specific inference rules required, diverse security applications are studied, such as determining whether an attack imperils a particular system, finding the root cause of an attack, deciding on appropriate actions to take in the presence of an uncertain diagnosis of an attack, and deciding on privacy settings. Emerging from this project will be a prototype that advances the practice of usable security. The team is working with organizations responsible for the security administration of large enterprise networks and will make the prototype tools available to these organizations. The team is also working with everyday users from a cross-section of community members. Curricular modules that cover the intersection of argumentation and security are being developed and shared.
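One well-studied piece of machinery such an engine could build on is Dung's abstract argumentation: arguments attack one another, and the grounded extension (the most skeptical set of acceptable arguments) can be computed by iterating the characteristic function. The sketch below does exactly that over a toy security scenario; the arguments and attack relation are invented for illustration.

```python
# Grounded semantics for an abstract argumentation framework, applied
# to a toy security-management decision. An argument is acceptable if
# every attacker is itself attacked by an already-accepted argument.

attacks = {
    ("patch_now", "stay_online"),              # patching forces downtime
    ("ids_alert_is_fp", "patch_now"),          # alert may be a false positive
    ("correlated_alerts", "ids_alert_is_fp"),  # multiple sensors agree
}
arguments = {a for pair in attacks for a in pair}

def grounded(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        new = {a for a in arguments - accepted
               if all(any((d, b) in attacks for d in accepted)
                      for b in attackers[a])}
        if not new:
            break
        accepted |= new
    return accepted

print(sorted(grounded(arguments, attacks)))
# ['correlated_alerts', 'patch_now'] -> the engine would recommend patching
```

Unattacked arguments ("correlated_alerts") are accepted first; they defeat the false-positive objection, which in turn reinstates "patch_now", exactly the kind of substantiated, explainable conclusion the abstract describes presenting to a user.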
2012 — 2016
Rowe, Jeffrey; Levitt, Karl
TWC: Medium: Collaborative: Towards Securing Coupled Financial and Power Systems in the Next Generation Smart Grid @ University of California-Davis
For nearly 40 years, the United States has faced a critical problem: increasing demand for energy has outstripped the ability of the systems and markets that supply power. Today, a variety of promising new technologies offer a solution to this problem. Clean, renewable power generation, such as solar and wind, is increasingly available. Hybrid and plug-in electric vehicles offer greater energy efficiency in transportation. The power grid that manages the generation, transmission, and distribution of electric power, however, was designed and constructed in the 1960s and is ill-suited to handle these emerging energy technologies. Operating the electrical grid with power sources of random and uncertain availability, like solar and wind, requires new sensing and control methods. Widespread use of plug-in/hybrid electric vehicles (PHEVs) will not only require far greater power capacity, but will also radically change the peak usage profile, with large evening demand that cannot be shifted. To address this problem, our current power grid must be upgraded with a control system that uses the full power of modern sensor and computing technology to increase efficiency. This new power grid, with an integrated, modern IT control plane, is commonly referred to as the Smart Grid; it uses distributed control, customer integration, and market-based control mechanisms.
It is critical to build security features into this Smart Grid from the beginning: to ensure fairness, to provide warnings of misuse, to provide control algorithms that minimize damage from malicious behavior, and, most importantly, to provide robustness and high availability of power delivery even in the presence of bad-faith actors. This project develops methods to achieve security in power and market delivery. This entails a study of economic market models with stability as one objective, but also in consideration of new sources of power and usage on both the producer and the consumer sides. To achieve security, the following techniques are used synergistically: vulnerability discovery by formal analysis; online monitoring with anomaly and specification-based intrusion detection; and recovery and reconstitution by feedback control. Uniquely, this project emphasizes security enhancements at both the market level and the system level, requiring separate state-estimation models. These seemingly disparate domains are unified through mapping functions among the states of the respective models. By integrating the two control models, future Smart Grids can detect and respond to activity, either malicious or caused by natural disturbances, that threatens either level; the unification of the models permits the investigation of attacks that may impact both levels. Results of this work would lead to a secure and reliable Smart Grid architecture that is robust in the face of attacks on both the power delivery and market control systems. The inherent cross-disciplinary nature of the research will educate future researchers to be conversant in both cyber-security and the associated economic issues, through co-advising between the departments of Computer Science and Economics at both UC Davis and Pennsylvania State University, and through course modules developed under this work, again involving both campuses. Results will be transitioned through partnerships with PG&E, SMUD, the West Davis Village, and other utilities in California, Pennsylvania, and Connecticut.
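To give one concrete building block for the system-level detection described above, the sketch below runs the classic bad-data test from power-system state estimation: estimate the state by least squares, then apply a chi-square test to the measurement residuals. The two-bus linearized model, noise level, and spoofed reading are all illustrative assumptions, not the project's models.

```python
import numpy as np

# Classic bad-data detection in (linearized) power-system state
# estimation: fit the state by least squares, then test whether the
# measurement residuals are consistent with assumed meter noise.

np.random.seed(1)

H = np.array([[1.0,  0.0],   # measurement model: z = H x + noise
              [0.0,  1.0],
              [1.0, -1.0]])  # line flow depends on the angle difference
sigma = 0.01                 # assumed meter noise (per unit)

def estimate_and_test(z, threshold=3.84):  # chi-square, m-n=1 dof, 5% level
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    residual = z - H @ x_hat
    J = float(residual @ residual) / sigma**2
    return "bad data suspected" if J > threshold else "measurements consistent"

clean = H @ np.array([0.10, 0.05]) + np.random.normal(0, sigma, 3)
spoofed = clean.copy()
spoofed[2] += 0.2            # falsified line-flow reading

print(estimate_and_test(clean))    # measurements consistent
print(estimate_and_test(spoofed))  # bad data suspected
```

The mapping functions the abstract describes would link flagged states in this physical model to states of the market model, so that a falsified meter reading can be checked for financial as well as physical impact.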