This run of NJPLS will be in-person only, with no accommodations for hybrid or remote participation. If circumstances change (e.g., new variants or university policy changes), we will simply cancel the event.
Per UMD policy, masks will not be required, but they are strongly recommended while indoors.
Please register by Friday, October 14th!
Getting to UMD
There are multiple ways to get to the Iribe building:
- Train (Amtrak, Regional Rail) to New Carrollton and Uber (~15 minutes) to Iribe
- Train (Amtrak, Regional Rail) to Union Station, Metro (Red line to Fort Totten, Green/Yellow line to College Park), and Shuttle 104 to Iribe
- If you're driving, you can use the (paid) parking at the Xfinity visitor lot or on Regents Drive
Program
10:00am – 11:00am | Breakfast (catered)
11:00am – 12:00pm | Ben Greenman (Brown); Joseph W. Cutler (UPenn); Jie Zhou (University of Rochester)
12:00pm – 1:30pm | Lunch (catered)
1:30pm – 2:30pm | Harrison Goldstein (UPenn); Benjamin Quiring and Justin Frank (UMD); Shiwei Weng (Johns Hopkins)
2:30pm – 3:00pm | Coffee break
3:00pm – 4:00pm | Cyrus Liu (Stevens); Grant Iraci and Cheng-En Chuang (Buffalo); Aaron Bembenek (Harvard)
4:00pm – 4:15pm | Break
4:15pm – 5:15pm | Henry Blanchette and Jacob Prinz (UMD); Santiago Cuellar (Galois); Zachary Flores (TwoSix Technologies)
5:15pm – 5:30pm | Business Meeting
5:30pm | Drinks and dinner out on the town
Abstracts
Little Tricky Logic: Misconceptions in the Understanding of LTL
Ben Greenman (Brown)
This will be an **interactive** talk that demonstrates specific ways
in which linear temporal logic (LTL) formulas can be tricky to
write. The findings are based on studies conducted over the past two
years with researchers and learners. This work is a first step
toward better LTL tools and the design of new logics. In addition,
the methods that we used can be adapted to other domains.
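As one illustration of the kind of subtlety involved (a standard LTL distinction, not necessarily one of the study's findings), the formulas

    G F p   ("p holds infinitely often")
    F G p   ("eventually p holds forever")

use the same operators but describe very different behaviors, and swapping the order of the temporal operators is an easy mistake to make when writing specifications.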
Bunched and Ordered Types for Stream Processing
Joseph W. Cutler (UPenn)
Existing high-level programming models for stream processing
applications attempt to mirror the programming style of
non-streaming data. In doing so, these approaches lose track
of the rich structure of streams themselves. However, more
recent work attempts to capture that structure in terms of
types, based on a model of streams as partially-ordered
multisets (pomsets). In this talk, I will present work towards
a new programming model for stream processing, derived by
viewing streams from this type-theoretic perspective in which
the pomset ordering is explicit. Concretely, I present the
beginnings of a new core calculus for stream processing and
describe how it takes logical inspiration from an ordered
variant of the logic of bunched implications.
Sandbox Unsafe Code in Rust Programs with Fast Whole-program Analysis
Jie Zhou (University of Rochester)
Rust is a memory-safe language. However, there are security
escapes: programmers can directly write unsafe Rust code or
call unsafe C library code inside Rust's unsafe blocks. Unsafe
code runs in the same address space as the safe part, and thus
memory safety errors in the unsafe code can cause arbitrary
memory access vulnerabilities in the whole address space. In
this talk, I will describe how we use a lightweight,
summary-based whole-program analysis on Rust's MIR code to find
unsafe memory objects and unsafe memory accesses.
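To make the risk concrete, here is a minimal, purely illustrative Rust sketch (constructed for this page, not taken from the talk) of how a memory safety error inside an `unsafe` block can corrupt data owned by safe code in the same address space:

```rust
fn main() {
    let mut buf = [0u8; 4];
    let secret: u64 = 42; // "safe" data living elsewhere on the stack

    let p = buf.as_mut_ptr();
    unsafe {
        // Out-of-bounds write: index 8 is past the end of `buf`.
        // This is undefined behavior; it may silently overwrite unrelated
        // stack data (such as `secret`), crash, or appear to work.
        *p.add(8) = 0xff;
    }

    println!("buf = {:?}, secret = {}", buf, secret);
}
```

Sandboxing the unsafe code aims to confine the effects of such errors so they cannot reach memory owned by the safe part of the program.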
Some Problems with Properties
Harrison Goldstein (UPenn)
Property-based testing research often focuses on performance,
improving input selection strategies to find bugs as quickly as
possible. This work is incredibly important, but improving testing
effectiveness is only part of the battle. In this talk, I'll
present ongoing research that explores another dimension of
property-based testing: usability. I will discuss a
semi-structured interview study in which participants described
their experiences using property-based testing in industry. I'll
highlight important themes from the conversations that may serve
as inspiration for future research, and along the way I'll also
introduce the uninitiated to the basic methods of qualitative
research.
Generating Well-typed Programs that Use their Arguments
Benjamin Quiring (UMD)
Previous work generates well-typed programs that tend not to use
their arguments. We present a method that generates well-typed
programs which use their arguments 99% of the time.
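As a purely hypothetical illustration of the distinction (not an example from the talk), written here in Rust: both functions below are well-typed, but only the second actually depends on its arguments.

```rust
// A generator biased toward constants can easily produce terms like this:
// well-typed, but both arguments are ignored.
fn ignores_args(_x: i32, _y: i32) -> i32 {
    7
}

// A generator steered toward variable uses produces terms like this instead.
fn uses_args(x: i32, y: i32) -> i32 {
    x + 2 * y
}

fn main() {
    println!("{} {}", ignores_args(1, 2), uses_args(1, 2));
}
```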
Higher-order Demand-driven Bounded Model Checking
Shiwei Weng (Johns Hopkins)
This talk covers the theory, design, and implementation of a
demand-driven bounded model checker targeting functional
languages. It is follow-up work to our ICFP'20 demand-driven
symbolic evaluator.
DrNLA: Dual Rewriting for Branching-Time Verification of Non-Linear Arithmetic Programs
Cyrus Liu (Stevens)
Time-constrained actions are common in practical systems; e.g.,
distributed applications are often composed of cooperative tasks
respecting time coherence. Branching-time logics are well known
for specifying behaviors of programs over time, including the
ability to express choices from a given state, e.g., whether
some choice or every choice leads to an intended outcome.
Consequently, reasoning about conditional branching is crucial
to branching-time verification techniques, and when branch
conditions go beyond linear expressions, branching-time
verification tools report unknown or, worse, unsound results. To
mitigate the burden of reasoning about nonlinear expressions
that contain higher-order polynomials, we introduce a new method
of converting programs with nonlinear arithmetic (NLA) into
equivalent programs with linear arithmetic (LIA) via a technique
we call Dual Rewriting.
Our Dual Rewriting approach discovers a linear replacement for an
NLA boolean expression b (e.g., as found in conditional branching)
through a combination of dynamic learning and static validation of
learning results. We dynamically infer candidate invariants based
on sample traces at program locations involving polynomial
behaviors, and use static analysis to validate those
candidates. These candidates may not hold for all traces, so we
present a refinement algorithm that produces a Boolean
combination of linear expressions capturing the original
polynomial behavior. As a result, we iteratively explore and
construct linear expressions for both the positive and negative
sides of b. The final replacement is a Boolean combination of
linear expressions that exploits the program context and trades
off Boolean complexity for NLA complexity and, in doing so, puts
more static verification tools within reach.
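As a tiny constructed example of the idea (ours, not one taken from the paper): in a context where the program guarantees that an integer variable x satisfies x >= 0, the NLA branch condition

    x * x < 4

is, in that context, equivalent to the LIA condition

    x <= 1

on the positive side of b, with x >= 2 covering the negative side; a verifier that cannot reason about x * x can then analyze both branches using linear arithmetic alone.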
We implemented our work in a new tool, DrNLA, which performs the
analysis to replace NLA conditional expressions with equivalent
LIA conditions. We evaluate DrNLA on 92 CTL programs with NLA,
showing that the transformed programs enable tools such as
FuncTion and T2 to verify branching-time (CTL) properties of
programs they could not previously handle.
Communication-based Semantics for Prioritized Concurrent ML
Grant Iraci and Cheng-En Chuang (Buffalo)
We introduce a new semantic framework for Concurrent ML extended
with priorities. Introducing thread priorities to Concurrent ML
opens a number of questions about how to support CML's selective
communication and first-class communication protocols. We examine
these issues and formalize our extension by reasoning about the
collection of pending potential communications, which we call
actions. Instead of maintaining channel structures, we look only
at the set of actions in the system and the valid pairings
(i.e. communications) between them. This new view provides a clear
place to insert priorities without overly complicating the
formalism.
From SMT to ASP: Solver-Based Approaches to Solving Datalog Synthesis-as-Rule-Selection Problems
Aaron Bembenek (Harvard)
This talk presents three solver-based algorithms for the Datalog
synthesis-as-rule-selection problem: given a set of candidate
Datalog rules, choose a subset that fits an input-output
example. Building on prior work based on CEGIS, our progression
of solutions explores the space of interactions between SAT/SMT
and Datalog. Along the way, we identify Datalog programs as
monotonic SMT theories, which enjoy particularly efficient
interactions in SMT; our plugins for popular SMT solvers make it
easy to load an arbitrary Datalog program into the SMT solver as a
custom monotonic theory. The ultimate solution in our
progression—which uses off-the-shelf answer set programming
(ASP)—achieves significant speedups (~9x geomean) over the prior
state-of-the-art solution while synthesizing higher quality
programs.
Zypr: A New Approach to Structural Editing
Henry Blanchette and Jacob Prinz (UMD)
Existing structure editors often make it difficult to rearrange
programs, a problem known as high viscosity. Although these
editors offer advanced actions for making structure-preserving
changes, many edits that are straightforward in a text editor
remain very complicated in structured editors.
In this talk, we present a new generic structured editing system
that preserves structure like other structured editors but is as
simple and low-viscosity as a text editor. The system handles
tree-like grammars by allowing the user to select sub-nodes of
the program (as in other structured editors) as well as zippers
into sub-nodes of the program (our contribution). This style of
selection is general enough to encapsulate every action the user
can make in our editor. We argue that these two forms of
selection are analogous to cursor positions and span selections,
respectively, in text editing.
Abstract DSL Composition in Coq: Ideas, Obstructions, and Successes
Zachary Flores (TwoSix Technologies)
Our goal is to take a mathematical definition of a DSL, discuss
what it means to compose DSLs within our framework in Coq along
with our motivations for doing so, and describe the current
obstructions to finishing this project.
Publicly disclosing program vulnerabilities without revealing the exploits
Santiago Cuellar (Galois)
We report on the development of Cheesecloth, a practical
framework that uses zero-knowledge proofs to verify the
existence of program vulnerabilities without revealing the
vulnerabilities themselves. We'll discuss the formal methods
tools we use to make this process practical and to support
real-world vulnerabilities in programs such as FFmpeg (a memory
overflow) and OpenSSL/Heartbleed (non-interference).
Talk proposals
Talk proposals were due on Wednesday, September 28th.
Contact info
You can join the NJPLS mailing list to stay up-to-date on future events. If you have questions about this run of NJPLS at UMD, contact Leo Lampropoulos.