This instance of NJPLS will take place at Princeton University.
All are welcome and there is no registration fee. To register, email me at wmansky@cs.princeton.edu.
Travel instructions may be found here. The meeting will be held in the Convocation Room of the Friend Center, which is adjacent to the Computer Science Building.
9:00am | Breakfast |
9:30am | Talks |
10:30am | Break |
11:00am | Talks |
12:15pm | Lunch |
1:45pm | Talks |
3:15pm | Break |
3:30pm | Talks |
This talk presents gradual liquid types, an extension of refinement types with gradual refinements that range over a finite, pre-defined set of predicates. This range restriction allows for an algorithmic inference procedure in which all (finitely many) concretizations of the gradual refinements are exhaustively checked. Because the search is exhaustive, we have at hand the concretizations (i.e., the concrete refinements) that render the program gradually type safe. We observe that these concrete refinements capture all the potential inconsistencies of non-gradual refinement type checking, and thus can be used for error explanation. We implemented gradual liquid types in GuiLT, a tool that interactively presents all safe concretizations to the user, and we used it for the user-aided migration of three commonly used list-manipulation Haskell libraries from Haskell to Liquid Haskell.
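For readers unfamiliar with refinement types, the following sketch shows the style of annotation involved. The gradual "?" refinement syntax and the example predicate set here are illustrative assumptions, not necessarily GuiLT's concrete syntax.

```haskell
module GradualSketch where

-- An ordinary Liquid Haskell refinement type (the {-@ ... @-}
-- annotations are comments to GHC; Liquid Haskell checks them):
-- safeHead may only be applied to provably non-empty lists.

{-@ safeHead :: {v:[a] | len v > 0} -> a @-}
safeHead :: [a] -> a
safeHead (x:_) = x

-- Under gradual liquid types, a programmer migrating plain Haskell
-- code could instead write a gradual refinement, sketched here as ?:
--
--   {-@ safeHead :: {v:[a] | ? } -> a @-}
--
-- The checker then exhaustively tries every predicate in a finite,
-- pre-defined set (e.g. len v > 0, len v >= 0, ...) and reports the
-- concretizations under which the program is gradually type safe.
```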
QWIRE is a circuit generation language for writing quantum programs. Semantically, every quantum circuit is a function on density matrices, which represent distributions over quantum states. QWIRE is unique in that it provides a function to translate a circuit directly to its mathematical denotation. We use this translation to mechanically prove properties about quantum programs using the Coq proof assistant.
In this talk, we will introduce the syntax and semantics of QWIRE along with some examples of formally verified quantum programs. These examples will illustrate the kinds of proof techniques most relevant to quantum verification. No prior knowledge of quantum programming or quantum mechanics is expected.
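As a rough illustration of the density-matrix semantics mentioned above, here is a minimal plain-Haskell sketch (not QWIRE or Coq code) of how a one-qubit gate acts on a density matrix:

```haskell
module DensitySketch where

import Data.Complex (Complex, conjugate)
import Data.List (transpose)

type Matrix = [[Complex Double]]

mmul :: Matrix -> Matrix -> Matrix
mmul a b = [[sum (zipWith (*) r c) | c <- transpose b] | r <- a]

-- Conjugate transpose (dagger).
dagger :: Matrix -> Matrix
dagger = map (map conjugate) . transpose

-- A unitary gate U sends a density matrix rho to U rho U†;
-- composing gates composes these functions.
applyGate :: Matrix -> Matrix -> Matrix
applyGate u rho = u `mmul` rho `mmul` dagger u

-- The Hadamard gate and the density matrix of the |0> state.
hadamard :: Matrix
hadamard = [[s, s], [s, negate s]] where s = 1 / sqrt 2

rho0 :: Matrix
rho0 = [[1, 0], [0, 0]]

-- applyGate hadamard rho0 yields the density matrix of |+>.
```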
This talk presents TaskProf, a profiler that identifies parallelism bottlenecks in task parallel programs. TaskProf leverages the structure of a task parallel execution to attribute work at a fine grain to the various parts of the program, and it uses hardware performance counters for these fine-grained measurements to minimize perturbation. TaskProf's profiling executions themselves run in parallel on multicore machines. TaskProf's causal profile lets users estimate the improvement in parallelism from optimizing a region of code even before a concrete optimization is known. We have used TaskProf to isolate parallelism bottlenecks in twenty-three applications that use the Intel Threading Building Blocks library, and, guided by TaskProf, we have designed parallelization strategies in five of these applications that increase their parallelism by an order of magnitude.
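The work/span model underlying this kind of parallelism profiling can be sketched in a few lines; the Haskell below is an illustrative model only (TaskProf itself targets C++ programs using Intel TBB):

```haskell
module WorkSpanSketch where

-- A task parallel execution forms a series-parallel tree: measured
-- work at the leaves, composed either sequentially or in parallel.
data Task = Leaf Double      -- measured work, e.g. in cycles
          | Seq Task Task    -- one after the other
          | Par Task Task    -- forked tasks joined at the end

-- Total work: everything executed.
work :: Task -> Double
work (Leaf w)  = w
work (Seq a b) = work a + work b
work (Par a b) = work a + work b

-- Span: the critical path, i.e. the time with unlimited cores.
span' :: Task -> Double
span' (Leaf w)  = w
span' (Seq a b) = span' a + span' b
span' (Par a b) = max (span' a) (span' b)

-- Inherent parallelism of the execution; attributing work and span
-- to source regions is what pinpoints the bottlenecks.
parallelism :: Task -> Double
parallelism t = work t / span' t
```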
Cache contention in the form of false sharing and true sharing arises when threads overshare cache lines at high frequency. Such oversharing can reduce or negate the performance benefits of parallel execution. Prior systems for detecting and repairing cache contention lack efficiency in detection or repair, contain subtle memory consistency flaws, or require invasive changes to the program environment.
In this talk, we introduce a new way to combat cache line oversharing via the Thread Memory Isolation (TMI) system. TMI operates completely in userspace, leveraging performance counters and the Linux ptrace mechanism to tread lightly on monitored applications, intervening only when necessary. TMI's compatible-by-default design allows it to scale to real-world workloads, unlike previous proposals. TMI introduces a novel code-centric consistency model to handle cross-language memory consistency issues. TMI exploits the flexibility of code-centric consistency to efficiently repair false sharing while preserving strong consistency model semantics when necessary. TMI has minimal impact on programs without oversharing, slowing their execution by just 2% on average. We also evaluate TMI on benchmarks with known false sharing, and manually inject a false sharing bug into the leveldb key-value store from Google. For these programs, TMI provides an average speedup of 5.2x and achieves 88% of the speedup possible with manual source code fixes.
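To make the false-sharing phenomenon concrete, here is a small Haskell sketch. It assumes the vector package, 8-byte Ints, a 64-byte cache line, and a build with -threaded; TMI itself targets native binaries, and this example is not TMI code.

```haskell
module FalseSharingSketch where

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM_)
import qualified Data.Vector.Unboxed.Mutable as M

-- Each thread hammers one Int slot. Slots 0 and 1 share a cache
-- line, so the line ping-pongs between cores; slots 0 and 8 sit 64
-- bytes apart and do not. The increments are not atomic -- this is
-- a timing demo, not a correct concurrent counter.
bump :: M.IOVector Int -> Int -> IO ()
bump v i = replicateM_ 10000000 (M.modify v (+ 1) i)

run :: Int -> Int -> IO ()
run i j = do
  v    <- M.replicate 16 0
  done <- newEmptyMVar
  _ <- forkIO (bump v i >> putMVar done ())
  _ <- forkIO (bump v j >> putMVar done ())
  replicateM_ 2 (takeMVar done)

main :: IO ()
main = run 0 1  -- false sharing; compare wall time against: run 0 8
```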
Commonly used cryptography libraries such as OpenSSL and Google's BoringSSL contain more than eighty thousand lines of heavily hand-optimized assembly code. Ensuring the correctness of such optimized code after even a small modification is a challenge. In this talk, we show how to address this challenge with an automatic checker that proves the equivalence of the heavily optimized assembly code and a specification of the algorithm in our domain-specific language. We demonstrate how to formalize both the specification of a cryptographic algorithm and the semantics of x86 instructions in this language. Using our tool, we have shown the functional correctness of BoringSSL's implementations of SHA-256, ChaCha20, and AES-128.
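As an illustration of what a functional specification of such an algorithm can look like, here is the ChaCha20 quarter round (per RFC 8439) written as a pure Haskell function; the talk's actual domain-specific language is its own notation, not Haskell:

```haskell
module ChaChaSpec where

import Data.Bits (rotateL, xor)
import Data.Word (Word32)

-- The ChaCha20 quarter round (RFC 8439) as a pure function over
-- machine words. Additions wrap modulo 2^32, as Word32 does.
quarterRound :: (Word32, Word32, Word32, Word32)
             -> (Word32, Word32, Word32, Word32)
quarterRound (a0, b0, c0, d0) =
  let a1 = a0 + b0; d1 = (d0 `xor` a1) `rotateL` 16
      c1 = c0 + d1; b1 = (b0 `xor` c1) `rotateL` 12
      a2 = a1 + b1; d2 = (d1 `xor` a2) `rotateL` 8
      c2 = c1 + d2; b2 = (b1 `xor` c2) `rotateL` 7
  in (a2, b2, c2, d2)
```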
Program analyses necessarily make approximations, which often leads them to derive relatively few true alarms interspersed among many false alarms. We propose a new approach to interactively guide program analyses towards true alarms and away from false alarms. Our approach associates each alarm with a confidence value in its ground truth by performing Bayesian inference on a probabilistic model derived from the analysis rules. In each iteration, the user inspects the alarm with the highest confidence and labels its ground truth, and the approach recomputes the marginal confidences of all alarms given this feedback. It thereby maximizes the return on the user's effort in inspecting each alarm. We have implemented our approach for program analyses specified in the logic programming language Datalog. Experiments on two state-of-the-art analyses---a static data race detector for Java and a static taint analysis for Android apps---show significant improvements across a range of metrics, including false alarm rates, area under the curve (AUC), and the number of iterations needed to discover all true alarms.
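A heavily simplified sketch of the interaction loop may help fix intuitions. The toy model below, where each alarm hinges on a single "cause", is an assumption for illustration only; the real system performs marginal inference over a probabilistic model derived from Datalog derivations.

```haskell
module AlarmRankSketch where

import Data.List (maximumBy)
import Data.Ord (comparing)

-- Toy model: each alarm is produced by one "cause" (say, a
-- derivation through one imprecise analysis rule) and is a true
-- alarm iff its cause is sound.
type Cause = Int
data Alarm = Alarm { alarmId :: Int, cause :: Cause } deriving Show

prior :: Cause -> Double
prior _ = 0.5

-- Posterior that a cause is sound given labeled alarms. In this toy
-- model one label settles every alarm sharing the cause; the real
-- system recomputes marginals over the full model.
posterior :: [(Alarm, Bool)] -> Cause -> Double
posterior feedback c =
  case [ok | (a, ok) <- feedback, cause a == c] of
    []       -> prior c
    (ok : _) -> if ok then 1 else 0

confidence :: [(Alarm, Bool)] -> Alarm -> Double
confidence feedback = posterior feedback . cause

-- One interaction step: show the user the unlabeled alarm we are
-- currently most confident is true (assumes a non-empty list).
nextAlarm :: [(Alarm, Bool)] -> [Alarm] -> Alarm
nextAlarm feedback = maximumBy (comparing (confidence feedback))
```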
A powerful automatic simplification mechanism is an important part of any interactive theorem prover, but Coq's built-in simplifiers, simpl and compute, are not very powerful. Here we describe an ML plugin that significantly improves on these simplifiers. Our library contains customized simplification algorithms for functions with associative and/or commutative operators, transitive closure, contextual rewriting, and user-extensible conditional term rewriting. In addition, several optimizations were added: expressions are interned into a single master DAG, giving improved caching; function symbols and variable names can also be interned; and natural numbers can be interned as members of the ML int type, avoiding Coq's unary representation.
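The DAG-interning optimization mentioned above is essentially hash-consing. Here is a minimal sketch of the idea in Haskell (the plugin itself lives in ML, and its actual data structures surely differ):

```haskell
module InternSketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Source terms as trees; repeated subterms are stored repeatedly.
data Term = Var String | App String [Term]

-- Interned nodes refer to children by id, so each distinct subterm
-- exists once in a master DAG and per-node results can be cached.
data Node = NVar String | NApp String [Int] deriving (Eq, Ord)

type Dag = (Map Node Int, Int)  -- node -> id, next fresh id

intern :: Term -> Dag -> (Int, Dag)
intern (Var x)    dag = node (NVar x) dag
intern (App f ts) dag = let (ids, dag') = go ts dag
                        in node (NApp f ids) dag'
  where
    go []     d = ([], d)
    go (u:us) d = let (i,  d1) = intern u d
                      (is, d2) = go us d1
                  in (i : is, d2)

-- Look a node up in the DAG, allocating a fresh id if it is new.
node :: Node -> Dag -> (Int, Dag)
node n (m, fresh) = case Map.lookup n m of
  Just i  -> (i, (m, fresh))
  Nothing -> (fresh, (Map.insert n fresh m, fresh + 1))
```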
Java 8 default methods, which allow interfaces to contain (instance) method implementations, are useful for the skeletal implementation software design pattern. However, it is not easy to transform existing software to exploit default methods: doing so requires analyzing complex type hierarchies, resolving multiple-inheritance issues, reconciling differences between class and interface methods, and analyzing tie-breakers (dispatch precedence) with overriding class methods, all while preserving type-correctness and semantics. In this talk, we present an efficient, fully automated, type-constraint-based refactoring approach that assists developers in taking advantage of enhanced interfaces in their legacy Java software. The approach features an extensive rule set that covers the various corner cases where default methods cannot be used. To demonstrate applicability, we implemented our approach as an Eclipse plug-in, applied it to 19 real-world Java projects, and submitted pull requests to popular GitHub repositories. The results indicate that the approach is useful for migrating skeletal implementation methods to interfaces as default methods, sheds light on the pattern's usage, and gives language designers insight into how this new construct applies to existing software.
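For a Haskell-flavored analogy (the talk itself is strictly about Java), a type class default method plays a role similar to a Java 8 interface default method: an implementation is hoisted out of the concrete instances, which may still override it.

```haskell
module DefaultMethodSketch where

-- A type class default method: instances get isEmpty for free but
-- may override it, much as Java classes inherit or override an
-- interface's default method.
class Collection c where
  size    :: c a -> Int
  isEmpty :: c a -> Bool
  isEmpty = (== 0) . size   -- the hoisted "skeletal" implementation

newtype Stack a = Stack [a]

instance Collection Stack where
  size (Stack xs) = length xs
  -- isEmpty is inherited from the default
```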
Designing and prototyping new features is important in industrial projects. I will present our experience using the Coq proof assistant as a prototyping environment for building a query compiler used in IBM's ODM Insights product. I will discuss the pros and cons of using Coq for this purpose and describe our methodology for porting the compiler to Java for product integration.
The initial Coq prototype has evolved into Q*cert (https://querycert.github.io/), an open-source platform for the specification, verification, and implementation of query compilers. The platform includes support for SQL and OQL and can generate code targeting JavaScript, Spark, and Cloudant. Internally, it relies on familiar database intermediate representations, notably the nested relational algebra and calculus. The platform also comes with a simple but extensible query optimizer.
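To give a flavor of the algebraic rewriting such an optimizer performs, here is a toy query IR with one textbook rewrite; Q*cert's actual intermediate representations and rule sets are far richer, and this is not its code.

```haskell
module QueryIRSketch where

-- A toy query IR in the spirit of a (nested) relational algebra.
data Pred  = Conj Pred Pred | Eq String String deriving Show
data Query = Table String
           | Select Pred Query        -- filter rows
           | Project [String] Query   -- keep named columns
           deriving Show

-- A textbook rewrite, applied bottom-up:
--   Select p (Select q e)  ==>  Select (p AND q) e
fuse :: Query -> Query
fuse (Select p e) = case fuse e of
  Select q e' -> Select (Conj p q) e'
  e'          -> Select p e'
fuse (Project fs e) = Project fs (fuse e)
fuse q = q

-- fuse (Select p (Select q (Table "t")))
--   == Select (Conj p q) (Table "t")
```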
Haskell programmers are often confounded by unexpected behaviors related to lazy evaluation. Nevertheless, they frequently make bold claims about the strictness behavior of their functions! To the authors' knowledge, the tools available for observing the evaluation of Haskell programs have so far been exclusively extrinsic to Haskell, and they are often intrusive to the running program. While it is useful to observe evaluation "from afar," manipulating such information from within Haskell offers new opportunities for pedagogy and automated testing, among other things.
We present work in progress: a library for dynamically reifying the strictness behavior of a Haskell function; a generic programming library for writing executable specifications of that strictness behavior; and an integration with QuickCheck for using those specifications in property-based random testing.
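As a taste of the problem space, here is a crude, extrinsic strictness probe of the sort the abstract contrasts with; the talk's library instead reifies this information within Haskell, compositionally, and hooks it into QuickCheck.

```haskell
module StrictnessSketch where

import Control.Exception (SomeException, evaluate, try)
import System.IO.Unsafe (unsafePerformIO)  -- tolerable in a test oracle

-- Apply the function to 'undefined' and observe whether forcing the
-- result (to weak head normal form) diverges. This is the blunt,
-- "from afar" style of observation; it says nothing about deeper or
-- partial demand on the argument.
forcesArgument :: (a -> b) -> Bool
forcesArgument f = unsafePerformIO $ do
  r <- try (evaluate (f undefined `seq` ()))
         :: IO (Either SomeException ())
  pure (either (const True) (const False) r)

-- forcesArgument (\x -> const () x)  == False  (lazy in x)
-- forcesArgument (\x -> x `seq` ())  == True   (strict in x)
--
-- With QuickCheck, such observations can become property-based
-- strictness specifications, e.g.
--   prop = forcesArgument (\xs -> length (xs :: [Int]))
```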