There is a growing awareness, both in industry and academia, of the crucial role of formally proving the correctness of safety-critical portions of systems. If one is to prove that the high-level specification is correctly implemented at the lower level, one needs to verify the compiler that performs the translation. Verifying the correctness of modern optimizing compilers is challenging because they rely on complex hardware and on sophisticated analysis and optimization algorithms. The introduction of new microprocessor architectures, such as the EPIC class exemplified by the IA-64, places an even heavier responsibility on optimizing compilers.
Translation validation is a novel approach that offers an alternative to the verification of translators in general and compilers in particular. According to the translation validation approach, rather than verifying the compiler itself, one constructs a validation tool which, after every run of the compiler, formally confirms that the target code produced on that run is a correct translation of the source program.
The talk will describe the formal notion of a target program being a correct translation of a source program, and the methods by which such a relation can be formally established. We begin with structure-preserving optimizations, which admit a clear mapping of control points in the target program to corresponding control points in the source program. Most high-level optimizations belong to this class. For such transformations, we apply the well-known method of simulation, which relates the execution between two target control points to the corresponding source execution. A more challenging class of optimizations does not guarantee such a correspondence; for this class, we present specialized approaches, which we call meta-rules, that establish their correctness. Typical optimizations in this class are loop splitting and fusion, loop tiling, and loop interchange.
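To make the second class concrete, here is a small OCaml sketch of loop interchange (the example and names are ours, not from the talk). Both functions compute the same total, but their intermediate states visit the matrix elements in different orders, so no control point of the transformed nest maps back to one of the original, which is exactly what defeats the simulation method.

    (* Original nest: row-major traversal of an integer matrix. *)
    let sum_row_major (a : int array array) m n =
      let s = ref 0 in
      for i = 0 to m - 1 do
        for j = 0 to n - 1 do
          s := !s + a.(i).(j)
        done
      done;
      !s

    (* After loop interchange: the same elements, column by column.
       The final result agrees, but the intermediate sums differ. *)
    let sum_col_major (a : int array array) m n =
      let s = ref 0 in
      for j = 0 to n - 1 do
        for i = 0 to m - 1 do
          s := !s + a.(i).(j)
        done
      done;
      !s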
Type-safety proofs for programming languages have traditionally assumed the safety of the runtime services. To minimize the trusted computing base, we need to eliminate those assumptions. One such runtime service is memory management. Previous work in this area accounted only for a restricted form of memory management based on regions.
Building on an idea by Wang and Appel, we present a lambda-calculus with regions and intensional type analysis and show how it can be used to provide a simple and provably type-safe stop-and-copy tracing garbage collector. We explore the ways in which intensional type analysis allows us to work around some of the restrictions of Wang and Appel's approach.
This is joint work with Zhong Shao and Bratin Saha.
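As a rough illustration of how intensional type analysis can drive a collector, consider the following OCaml sketch (our own construction, far simpler than the region calculus of the talk). A datatype of type representations lets the copy routine dispatch on the shape of the value it traverses, and the types guarantee that copying preserves well-typedness.

    (* Type representations: the "intensional" data the collector inspects. *)
    type _ rep =
      | Int  : int rep
      | Pair : 'a rep * 'b rep -> ('a * 'b) rep

    (* One stop-and-copy step: deep-copy a value, guided by its type
       representation, as if moving it from an old region to a new one.
       The GADT ensures the copy has the same type as the original. *)
    let rec copy : type a. a rep -> a -> a = fun r v ->
      match r with
      | Int -> v
      | Pair (ra, rb) -> let (x, y) = v in (copy ra x, copy rb y)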
The proofs of "traditional" proof-carrying code (PCC) are type-specialized in the sense that they require axioms about a specific type system. In contrast, the proofs of foundational PCC explicitly define all required types and explicitly prove all the required properties of those types, assuming only a fixed foundation of mathematics such as higher-order logic. Foundational PCC is both more flexible and more secure than type-specialized PCC.
For foundational PCC we need semantic models of type systems on von Neumann machines. Previous models have been either too weak (lacking general recursive types and first-class function pointers), too complex (requiring machine-checkable proofs of large bodies of computability theory), or not obviously applicable to von Neumann machines. Our new model is strong, simple, and works either in lambda-calculus or on Pentiums.
This is joint work with David McAllester.
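For flavor, here is a sketch, in our own notation, of the indexed style of model this line of work develops: a type is a set of pairs of an approximation index and a value, and the function type quantifies only over strictly smaller indices, which is what lets general recursive types be defined by ordinary induction.

    % A type \tau is a set of pairs (k, v), read: "v behaves like an
    % element of \tau for k more computation steps"; such sets are
    % closed under decreasing k. The arrow type is then, roughly:
    (k, \lambda x.e) \in \sigma \to \tau \iff
        \forall j < k.\ \forall v.\ (j, v) \in \sigma \implies e[v/x] :_j \tau
    % where e' :_j \tau means: whenever e' reaches an irreducible e''
    % in i < j steps, (j - i, e'') \in \tau. Recursive types unfold by
    % induction on the index: (k, v) \in \mu F \iff (k, v) \in F^{k+1}(\emptyset).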
Cryptography is information hiding. Polymorphism is also information hiding. So is cryptography polymorphic? Is polymorphism cryptographic?
To investigate these questions, we define the _cryptographic lambda-calculus_, a simply typed lambda-calculus with shared-key cryptographic primitives. Although this calculus is simply typed, it is powerful enough to encode recursive functions, recursive types, and dynamic typing. We then develop a theory of relational parametricity for our calculus as Reynolds did for the polymorphic lambda-calculus. This theory is useful for proving equivalences in our calculus; for instance, it implies the non-interference property: values encrypted by a key cannot be distinguished from one another by any function ignorant of the key. We close with an encoding of the polymorphic lambda-calculus into the cryptographic calculus that uses cryptography to protect type abstraction.
Our results shed new light on the relationship between cryptography and polymorphism, and offer a first step toward extending programming idioms based on type abstraction (such as modules and packages) from the civilized world of polymorphism, where only well-typed programs are allowed, to the unstructured world of cryptography, where friendly programs must cohabit with malicious attackers.
This is joint work with Benjamin Pierce.
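To hint at how such an encoding can work, here is an OCaml sketch of dynamic sealing, an untyped mechanism that plays the role of shared-key encryption (the implementation trick via a shared reference cell is standard folklore; the names are ours).

    (* A sealed value is an opaque thunk; only the matching key can
       extract its contents. *)
    type sealed = unit -> unit

    (* Generate a fresh key: a seal/unseal pair sharing a private store. *)
    let new_key (type a) () : (a -> sealed) * (sealed -> a) =
      let store : a option ref = ref None in
      let seal v () = store := Some v in
      let unseal s =
        store := None;
        s ();                          (* deposit the sealed value, if ours *)
        match !store with
        | Some v -> v
        | None -> failwith "wrong key"
      in
      (seal, unseal)

For example, let (seal_int, unseal_int) = new_key () creates a fresh key at type int; unseal_int recovers what seal_int hid, while a value sealed under any other key leaves this key's store empty and the attempt fails. This is the non-interference property in miniature: ignorant of the key, one sealed value is indistinguishable from another.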
Constraint propagation is the most widely used constraint-solving technique in constraint programming. Because there is a wide variety of domains and propagation rules, it is difficult to implement efficient constraint solvers in low-level languages. We present a rule-based high-level language, called APEARL, for constructing reactive systems in general and constraint propagators in particular. APEARL combines the goal-oriented execution model of logic programming with the event-driven execution model. A subset of the language has been implemented and integrated with B-Prolog (www.probp.com). The language has been used successfully to construct constraint solvers for trees, finite domains, Booleans, and sets. It has also been used extensively to implement the constraint-based graphics library of B-Prolog.
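APEARL itself is rule-based, so the following OCaml sketch shows only the underlying propagation idea (our own example, not APEARL code): a bounds propagator for the constraint x + y = z over integer intervals, run to a fixpoint in the spirit of event-driven execution, where a propagator re-fires whenever a bound it watches changes.

    (* An integer interval variable. *)
    type ivar = { mutable lo : int; mutable hi : int }

    (* Narrow v to [lo, hi]; report whether anything changed. *)
    let tighten v lo hi =
      let lo' = max v.lo lo and hi' = min v.hi hi in
      let changed = lo' > v.lo || hi' < v.hi in
      v.lo <- lo'; v.hi <- hi';
      if v.lo > v.hi then failwith "inconsistent";
      changed

    (* One firing of the propagator for x + y = z. *)
    let prop_sum x y z =
      let c1 = tighten z (x.lo + y.lo) (x.hi + y.hi) in
      let c2 = tighten x (z.lo - y.hi) (z.hi - y.lo) in
      let c3 = tighten y (z.lo - x.hi) (z.hi - x.lo) in
      c1 || c2 || c3

    (* Re-fire until no bound moves: the propagation fixpoint. *)
    let rec to_fixpoint x y z = if prop_sum x y z then to_fixpoint x y z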
Local CPS conversion is a compiler transformation for improving the code generated for nested loops by a direct-style compiler. The transformation combines CPS conversion with lightweight closure conversion, which allows the compiler to merge the environments of nested recursive functions. This merging, in turn, allows the backend to use a single machine-level procedure to implement the nested loops. Preliminary experiments with the Moby compiler show the potential for significant reductions in loop overhead as a result of Local CPS conversion.
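As a rough OCaml sketch of the idea (our own example; the Moby compiler works on its intermediate representation): in direct style the inner loop returns into the outer loop, so the nested functions need separate environments and frames; after CPS conversion every call is a tail call and the return point is an explicit continuation, which the compiler can then treat as a local join point and compile both loops as jumps within one procedure.

    (* Direct style: inner nests inside outer and returns to it, so the
       call (inner 0 acc) is not a tail call. *)
    let sum_matrix m n get =
      let rec outer i acc =
        if i >= m then acc
        else
          let rec inner j acc =
            if j >= n then acc else inner (j + 1) (acc + get i j)
          in
          outer (i + 1) (inner 0 acc)
      in
      outer 0 0

    (* After local CPS conversion (sketched): the loops are mutually
       recursive, all calls are tail calls, and inner's return point is
       the explicit continuation k. *)
    let sum_matrix_cps m n get =
      let rec outer i acc k =
        if i >= m then k acc
        else inner i 0 acc (fun acc' -> outer (i + 1) acc' k)
      and inner i j acc k =
        if j >= n then k acc
        else inner i (j + 1) (acc + get i j) k
      in
      outer 0 0 (fun acc -> acc)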