1986 — 1988
Appel, Andrew
Implementation of an Efficient Reducer for Lambda Expressions
1988 — 1990
Appel, Andrew
Unifying Compile-Time and Run-Time Evaluation
Programming languages with higher-order functions (like ML or Scheme) have a flexible abstraction mechanism, but implementations of these languages almost always evaluate the abstraction at runtime. This project examines a way to use the same abstraction mechanism (higher-order functions) for both compile-time and runtime abstraction. This has the advantage that the programmer needs to use only one method; more importantly, the programmer no longer needs to constantly decide whether phrases are to be evaluated at compile time or at runtime.
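A minimal OCaml sketch of the idea (illustrative names, not from the project): the same higher-order function serves both roles, and a partial application to a statically known argument is a candidate for compile-time evaluation, while applications to runtime values are evaluated as usual.

    (* One abstraction mechanism, two evaluation times. *)
    let rec power (n : int) : int -> int =
      if n = 0 then fun _ -> 1
      else fun x -> x * power (n - 1) x

    let cube = power 3               (* argument known statically: a candidate
                                        for compile-time specialization to
                                        fun x -> x * x * x *)
    let at_runtime k x = power k x   (* same abstraction, evaluated at run
                                        time for a dynamic k *)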
1990 — 1992
Appel, Andrew
Standard ML of New Jersey Software Capitalization
The language ML has received much attention during the past few years among programming language theorists because of an interesting set of capabilities. A "Standard ML" language, suitable for implementation, has been defined. An implementation supporting this language is known as "Standard ML of New Jersey." The purpose of this project is to further develop the compiler and to assist in its maintenance and distribution.
1990 — 1992
Appel, Andrew
Using Immutable Types for Debugging and Parallelism
In many programming languages, especially those with a "functional" style of programming, the vast majority of objects are not modified after they are created. The ML language is unusual in that the compile-time typechecker can distinguish in advance objects that may be modified from objects that may not. This advance information about immutability can be used by implementations in several ways: by optimizing compilers, where it simplifies aliasing calculations; by garbage collectors, where it can improve locality of reference; by parallelizing compilers, where it simplifies sharing of data; and by debuggers, where it allows efficient replay of program execution. Because so few imperative languages provide compile-time guarantees of immutability, there has not been much research into the uses of immutability. There are many ways that compile-time guarantees can be translated into run-time savings; several of these will be studied.
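A small OCaml sketch of the distinction the ML typechecker draws (standard ML-family behavior, not project-specific code): in-place update is possible only through mutable types such as ref, so everything else is statically known to be immutable.

    let xs : int list = [1; 2; 3]   (* immutable: may be freely shared by
                                       parallel tasks, and a debugger can
                                       log it once for replay *)
    let r : int ref = ref 0         (* mutable: the only kind of object whose
                                       updates an implementation must track *)
    let () = r := !r + 1            (* updates go through ref types only *)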
1992 — 1996
Appel, Andrew
Optimization of Space Usage
While most optimizing compilers attempt to minimize the execution time or the size of programs, little has been attempted in the area of minimizing the storage needed by programs during execution. This project develops a framework for reasoning about data size in compiled programs, and the investigator intends to formulate space-saving optimizations and implement them in Standard ML of New Jersey.
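One flavor of the problem, sketched in OCaml under the assumption that closure representation causes the retention (the project's actual optimizations may differ): a closure can keep a large structure live even though it needs only a small derived value.

    let leaky (big : int array) : unit -> int =
      fun () -> Array.length big        (* the closure retains all of [big] *)

    let frugal (big : int array) : unit -> int =
      let n = Array.length big in       (* hoist the needed value... *)
      fun () -> n                       (* ...so [big] can be collected *)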
1998 — 2002
Appel, Andrew; Felten, Edward
Tools, Interfaces, and Access Control for Secure Programming
It is becoming increasingly common for mutually untrusting software components to be parts of the same program; examples include Java applets, ActiveX controls, Component Object Model (COM) objects, and extensible operating systems. Methods and tools are created that make it easier for programmers to write software components that will function securely when linked with potentially hostile components. This is done by combining research results on several topics: information hiding and language design, hierarchical modularity, dynamic linking, and access control. One result of this work is a set of tools that let programmers precisely control the visibility and accessibility of pieces of their software, allowing different views of a software component to be presented to different software clients depending on the provenance of the client code. Software tools are implemented and distributed to support this for programs written in the Java and ML languages.
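A minimal OCaml sketch of the "different views" idea (hypothetical names; the project's tools targeted the Java and ML module systems): one implementation, two signatures, and untrusted clients are linked against the restricted view only.

    module type Full = sig
      type t
      val create  : unit -> t
      val balance : t -> int
      val deposit : t -> int -> unit
    end

    module type Read_only = sig
      type t
      val balance : t -> int
    end

    module Account : Full = struct
      type t = { mutable balance : int }
      let create () = { balance = 0 }
      let balance a = a.balance
      let deposit a n = a.balance <- a.balance + n
    end

    (* The view handed to untrusted client code: deposit is invisible. *)
    module Untrusted_view = (Account : Read_only with type t = Account.t)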
1999 — 2003
Appel, Andrew
Applying Compiler Techniques to Proof-Carrying Code
Proof-carrying code (PCC) is an exciting new development in mobile-code security. An untrusted code provider provides a native-code program along with a proof of safety; the code consumer can check the proof before executing the program. Since proof-checking is a simple and easily mechanized process, the trusted computing base of this system can be quite small. However, PCC is an immature technology and faces several challenges: keeping proof size small, proving safety of complex operations such as allocating and initializing data structures, and making the code consumer's safety policy universal and language-independent. This project will address each of these issues, applying science and technology developed for compilers to the area of program verification.
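A sketch of the consumer's side of the PCC protocol, in OCaml with hypothetical types (the real artifacts are native machine code and logical proofs): the point is that only the small proof checker must be trusted, not the code producer.

    type native_code = bytes   (* untrusted machine code from the provider *)
    type proof = bytes         (* serialized safety proof shipped with it *)

    let run_untrusted
        ~(check_proof : native_code -> proof -> bool)  (* small, trusted *)
        ~(execute : native_code -> unit)
        (code : native_code) (p : proof) : unit =
      if check_proof code p
      then execute code        (* proof checked against the safety policy *)
      else failwith "rejected: safety proof does not check"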
2002 — 2006
Appel, Andrew; Walker, David (co-PI)
Collaborative Research: High-Assurance Common Language Runtime
The proposed research focuses on the design and implementation of new technologies for building high-confidence component software systems. The new technology is directly relevant to improving security of commercial virtual machines such as the Java virtual machine (JVM) and Microsoft's .NET Common Language Runtime (CLR). The work concentrates on three areas:
1. High-level specifications for low-level software. General and flexible logic-based type systems (LTS) are being designed. The type systems are derived from the Certified Binaries technology developed by the authors and extend the scope, expressiveness, and precision of the verification techniques used in current JVM and CLR implementations (a toy illustration of such expressive typing follows this list).
2. A high-assurance virtual machine. Using the authors' Foundational Proof-Carrying Code technology, higher-assurance, validated implementations of the JVM or CLR infrastructure are being built. The authors are engaged in technology transfer of their ideas to a virtual machine being built at Intel.
3. Resource certification. New technologies for specifying, composing, and verifying advanced properties such as resource bounds on memory and network bandwidth are being developed. These properties are crucial for safe and secure interoperation between untrusted components in large-scale systems.
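As a toy illustration of item 1 (OCaml GADTs standing in for the project's logic-based type systems, which are far richer): types can carry enough information that a safety check needs no run-time test at all -- the type is the certificate.

    type z                        (* type-level zero *)
    type 'n s                     (* type-level successor *)

    (* Lists indexed by their length. *)
    type ('a, 'n) vec =
      | Nil  : ('a, z) vec
      | Cons : 'a * ('a, 'n) vec -> ('a, 'n s) vec

    (* [head] accepts only provably non-empty vectors; the Nil case is
       ruled out statically, so no run-time check is needed. *)
    let head : type a n. (a, n s) vec -> a = function
      | Cons (x, _) -> x

    let one = head (Cons (1, Nil))    (* [head Nil] is a compile-time error *)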
2006 — 2011
Appel, Andrew; Clark, Douglas (co-PI); Martonosi, Margaret (co-PI); August, David (co-PI); Walker, David
CT: Well-Typed Trustworthy Computing in the Presence of Transient Faults
In recent decades, microprocessor performance has been increasing exponentially, due in large part to smaller and faster transistors enabled by improved fabrication technology. While such transistors yield performance enhancements, their lower threshold voltages and tighter noise margins make them less reliable, rendering processors that use them more susceptible to transient faults caused by energetic particles striking the chip. Such faults can corrupt computations, crash computers, and cause heavy economic damages. Indeed, Sun Microsystems, Cypress Semiconductor and Hewlett-Packard have all recently acknowledged massive failures at client sites due to transient faults.
This project addresses several basic scientific questions: How does one build software systems that operate on faulty hardware, yet provide ironclad reliability guarantees? For what fault models can these guarantees be provided? Can one prove that a given implementation does indeed tolerate all faults described by the model? Driven in part by the answers to these scientific questions, this project will produce a trustworthy, flexible and efficient computing platform that tolerates transient faults. The multidisciplinary project team will do this by developing: (1) programming language-level reliability specifications so consumers can dictate the level of reliability they need, (2) reliability-preserving compilation and optimization techniques that improve the performance of reliable code while ensuring correctness, (3) automatic, machine-level verifiers so compiler-generated code can be proven reliable, (4) new software-modulated fault tolerance techniques at the hardware/software boundary to implement the reliability specifications, and finally (5) microarchitectural optimizations that explore trade-offs between reliability, performance, power, and cost.
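A minimal sketch of the oldest software fault-detection idea, redundant execution with comparison, written in OCaml for illustration (the project itself works at the compiler and microarchitecture level, not in source code):

    (* Run a pure computation twice and compare; a single transient fault
       is unlikely to corrupt both runs in the same way. Soundness here
       assumes [f] has no side effects. *)
    let protected (f : unit -> int) : int =
      let r1 = f () in
      let r2 = f () in
      if r1 = r2 then r1
      else failwith "transient fault detected: redundant results disagree"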
2006 — 2009
Appel, Andrew
End-to-End Source-to-Object Verification of Interface Safety
The purpose of this research is to strengthen the internal protection barriers in software built from components and run in software virtual machines such as Sun's Java and Microsoft's .Net. In such software, programmers can design the interfaces between components--using standard techniques such as abstraction, object orientation, and information hiding--to limit the damage that rogue components can do. Rogue components are those that are designed maliciously or, more commonly, that have bugs making them vulnerable to attack and take-over. However, the protection barriers in Java and .Net (type systems that limit access by one component to data belonging to another component) are themselves vulnerable to attack if the type-checkers and compilers that implement them have bugs.
In order to ensure that no compiler bugs can cause security vulnerabilities, the researchers will develop formal models to relate type systems at the source language--where programmers reason about them to design their protection interfaces--to the machine language that actually executes. The researchers will construct machine-checkable specifications and design ways to construct compilers with machine-checkable proofs of security.
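A sketch of the source-level barrier in question, in OCaml with hypothetical names: the abstract type makes tokens unforgeable in well-typed source code, and the research asks how to guarantee that compiled machine code cannot forge one either.

    module Token : sig
      type t                                  (* abstract: cannot be forged *)
      val issue : secret:string -> t option
      val use   : t -> unit
    end = struct
      type t = unit
      let issue ~secret =
        if secret = "letmein" then Some () else None   (* illustrative check *)
      let use () = print_endline "access granted"
    end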
2009 — 2013
Appel, Andrew
TC: Large: Collaborative Research: Combining Foundational and Lightweight Formal Methods to Build Certifiably Dependable Software
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This research focuses on combining foundational and lightweight formal methods to verify the safety, security, and dependability of large-scale software systems. Foundational approaches (to formal methods) emphasize expressiveness and generality, but they have not been successfully scaled to large-scale systems. Lightweight approaches emphasize automation and scalability, but they are less effective on low-level programs and at providing explicit proof witnesses. The PIs are developing new high-level and low-level lightweight methods and adapting them into various foundational verification systems. More specifically, at the high level, the PIs are developing lightweight dependent-type systems and separation-logic shape analysis, and connecting current methods such as conventional type systems, slicing analysis, and flow analysis; at the low level, the PIs are designing specialized provers and decision procedures to verify low-level programs and to support certified linking of heterogeneous components. If successful, this research will dramatically improve the scalability of foundational verification systems and provide new powerful technologies for building trustworthy software systems. Combining lightweight and foundational approaches also provides a common thread that will pull different verification communities together and unify them into a single framework.
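As a concrete instance of what a separation-logic shape analysis can establish, here is the textbook specification of a pointer swap (standard notation, not the project's own syntax):

    \{\, x \mapsto a \;*\; y \mapsto b \,\}\ \ \mathtt{swap}(x, y)\ \ \{\, x \mapsto b \;*\; y \mapsto a \,\}

The separating conjunction * asserts that x and y are distinct heap cells, which is exactly what licenses local, scalable reasoning about aliasing.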
2014 — 2017
Appel, Andrew
SHF: Medium: Collaborative Research: Principled Optimizing Compilation of Dependently Typed Languages
The project focuses on a form of software engineering with program verification: writing programs with logical, machine-checkable proofs of correctness, reliability, or security. Such proofs can be done for programs written in conventional programming languages, but the "proof theory" of these languages is complex and difficult, which makes verification time-consuming and expensive. New, "dependently typed" programming languages use principles of functional programming and advanced type systems to achieve a smoother and cleaner proof theory, so that verifying the correctness and safety of software (written in these new languages) is much easier. In this project, the research is to build compilers for these new languages that are efficient and are verified correct.
The intellectual merits are in the challenges and opportunities that occur in dependently typed, purely functional contexts, such as the Coq proof assistant. The exciting opportunities that arise only in this context are the ability to choose evaluation order for any sub-term (since Coq's language is pure and total), the ability to open a compiler to user-specified (and certified) rewrite rules that support application-specific optimization, and the ability of programmers to use the Calculus of Inductive Constructions (CiC) proof theory of Coq to verify their programs. The challenges we face, unique to this setting, include the need to determine and erase those terms that are computationally irrelevant. A major foundational challenge is attempting to preserve types (and hence proofs) as we perform lowering transformations, such as conversion to continuation-passing style (CPS) or static single assignment (SSA). Our strategy is to use type and proof-preserving compilation where possible, and where not, to prove a simulation-based notion of correctness. The broader impacts will include (1) pedagogical materials on verified functional programming in Coq; (2) training of graduate students; and (3) improvements in software engineering practice, in enabling practical verified functional programming.
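A minimal OCaml sketch of one lowering pass named above, conversion to continuation-passing style (ordinary terms, not dependently typed ones; the research problem is carrying the types and proofs through such passes):

    (* Direct style. *)
    let add_direct x y = x + y

    (* CPS: the continuation [k] receives the result explicitly. *)
    let add_cps x y (k : int -> 'r) : 'r = k (x + y)

    (* Evaluation order is now explicit in the chaining of continuations. *)
    let sum3 x y z (k : int -> 'r) : 'r =
      add_cps x y (fun xy -> add_cps xy z k)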
2015 — 2020
Appel, Andrew
Collaborative Research: Expeditions in Computing: The Science of Deep Specification
In our interconnected world, software bugs and security vulnerabilities pose enormous costs and risks. The Deep Specification ("DeepSpec", deepspec.org) project addresses this problem by showing how to build software that does what it is supposed to do, no less and (just as important) no more: No unintended backdoors that allow hackers in, no bugs that crash your app, your computer, or your car. "What the software is supposed to do" is called its specification. The DeepSpec project will develop new science, technology, and tools--for specifying what programs should do, for building programs that conform to those specifications, and for verifying that programs do behave exactly as intended. The key enabling technology for this effort is modern interactive proof assistants, which support rigorous mathematical proofs about complex software artifacts. Project activities will focus on core software-systems infrastructure such as operating systems, programming-language compilers, and computer chips, with applications such as elections and voting systems, cars, and smartphones.
Better-specified and better-behaved software will benefit us all. Many high-profile security breaches and low-profile intrusions use software bugs as their entry points. Building on decades of previous work, DeepSpec will advance methods for specifying and verifying software so they can be used by the software industry. The project will include workshops and summer schools to bring in industrial collaborators for technology transfer. But the broader use of specifications in engineering also requires software engineers trained in specification and verification--so DeepSpec has a major component in education: the development of introductory and intermediate curriculum in how to think logically about specifications, how to use specifications in building systems-software components, or how to connect to such components. The education component includes textbook and on-line course material to be developed at Princeton, Penn, MIT, and Yale, and to be available for use by students and instructors worldwide. There will also be a summer school to train the teachers who can bring this science to colleges nationwide.
Abstraction and modularity underlie all successful hardware and software systems: We build complex artifacts by decomposing them into parts that can be understood separately. Modular decomposition depends crucially on the artful choice of interfaces between pieces. As these interfaces become more expressive, we think of them as specifications of components or layers. Rich specifications based on formal logic are little used in industry today, but a practical platform for working with them will significantly reduce the costs of system implementation and evolution by identifying vulnerabilities, helping programmers understand the behavior of new components, facilitating rigorous change-impact analysis, and supporting maintainable machine-checked verifications that components are correct and fit together correctly. This Expedition focuses on a particularly rich class of specifications, "deep specifications." These impose strong functional correctness requirements on individual components such that they connect together with rigorous composition theorems. The Expedition's goal is to focus the efforts of the programming languages and formal methods communities on developing and using deep specifications in the large. Working in a formal proof management system, the project will concentrate particularly on connecting infrastructure components together at specification interfaces: compilers, operating systems, program analysis tools, and processor architectures.
2020 — 2023
Appel, Andrew
SHF: Small: VeriFFI -- Formally Verified Functional+C Programs
Formal deductive verification -- machine-checked proofs of program correctness using tools such as proof assistants -- is an effective way of building very high assurance secure components, such as operating-system kernels and cryptographic communications primitives. Existing logics, tools, and methods for program verification apply either to low-level languages (such as C) or high-level functional languages. But real-world systems are programmed in a mix of languages: low-level imperative languages for components such as operating-system (OS) kernels and cryptographic primitives; high-level languages for most applications, for OS-management code, for cryptographic protocols, and for the runtime systems of the high-level languages. This project develops methods and tools for formal verification of high-level language programs linked with low-level programs, proved correct to a unified specification that crosses the boundary with no gaps. The high-level language is the functional programming language inside the Coq proof assistant, verified using Coq's logic, and compiled (provably correctly) with the CertiCoq compiler. The low-level language is C, verified using the Verified Software Toolchain (VST), and compiled (provably correctly) using the CompCert verified C compiler. The new Verified Foreign Function Interface (VeriFFI) exploits synergies between CertiCoq's toolchain and VST/CompCert to enable such unified verifications.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
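An OCaml analogy for the boundary that VeriFFI verifies (the actual toolchain pairs Coq with C): a high-level functional program calling a low-level primitive across a foreign-function interface. The C stub name is hypothetical and would have to be linked in separately.

    external c_sha256 : bytes -> bytes = "caml_c_sha256"  (* hypothetical C stub *)

    let hash_greeting () : bytes =
      c_sha256 (Bytes.of_string "hello")
    (* VeriFFI's goal is one specification covering both sides of such a
       call, with machine-checked proofs and no gap at the boundary. *)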
2022 — 2025
Appel, Andrew
Collaborative Research: FMitF: Track I: Formally Verified Numerical Methods
When computer algorithms calculate the behavior of physical systems -- planets, self-driving cars, rockets, wi-fi antennas, medicines -- some of the mathematical equations can't be solved exactly. Thus, the algorithms approximate the answers as precisely as needed. Numerical analysis is the science of calculating how accurate these algorithms are. The more precision is needed in a certain type of computation, the more time it takes to compute the answer. When one absolutely needs to know in advance how accurate the answers will be, traditional numerical methods are hard to combine into a complete and trustworthy solution. This project's novelties are (1) to integrate all the reasoning about numerical software into a single system, (2) to connect all the levels of reasoning, from mathematics down to software code, producing an end-to-end guarantee of correctness and accuracy, and (3) to produce that guarantee in the form of a mathematical theorem whose proof can be checked fully automatically. The project's broader significance and importance are both scientific and economic: the novel methods and tools developed can (1) prevent software bugs that can be extremely expensive to diagnose by conventional means; (2) allow faster algorithms to be used (when one can now prove that they are accurate); and (3) assure the safety of control software in safety-critical applications such as aircraft and vehicle control.
The project takes a layered approach to foundational verification of correctness and accuracy of numerical software -- that is, formal machine-checked proofs about programs (not just algorithms), with no assumptions except specifications of instruction-set architectures. The researchers build, improve, and use appropriate tools at each layer: proving in the real numbers about discrete algorithms; proving how floating-point algorithms approximate real-number algorithms; reasoning about C program implementation of floating-point algorithms; and connecting all proofs end-to-end in the Coq proof assistant.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
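The floating-point layer of such proofs typically rests on the standard rounding-error model of numerical analysis (textbook notation, not project-specific), where u is the unit roundoff, 2^-53 for IEEE double precision:

    \mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta), \qquad |\delta| \le u, \qquad \circ \in \{+, -, \times, \div\}

Composing such per-operation bounds through an entire algorithm, and then relating the result to its C implementation, is what the end-to-end Coq proofs mechanize.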