Effective language parallelization requires the solution of three problems: parallelism detection, parallelism selection, and parallel resource management. In the presence of irregular dynamic data and higher-order procedures, static compile-time parallelization does not--and often cannot--provide practical solutions to these problems. Dynamic parallelization augments conventional compile-time analyses with run-time information. This allows program evaluation to dynamically adapt to actual parallelism present at run time.
Dynamic-parallelization techniques for automatic parallelism detection and selection have been designed and implemented for higher-order dynamic languages (such as ML). In this talk, I will describe two dynamic techniques: the first detects parallelism in procedures that build and modify DAGs; the second uses the run-time sizes of dynamic data to select, for parallel evaluation, program expressions whose computation requirements match the granularity of the target machine.
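As a rough illustration of the second technique, the sketch below (in OCaml, standing in for ML) maps over a tree and spawns parallel work only when the run-time size of the data suggests the computation will amortize the task overhead. The names par_map, size, and grain, and the use of OCaml 5's Domain module, are illustrative assumptions on our part, not the speaker's implementation.

    (* A minimal sketch of size-based parallelism selection, assuming
       OCaml 5's Domain module; names are illustrative. *)

    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

    (* Run-time size of the dynamic data; a real system would cache sizes
       in the nodes rather than recompute them at every choice point. *)
    let rec size = function
      | Leaf _ -> 1
      | Node (l, r) -> size l + size r

    (* Hypothetical granularity threshold matched to the target machine. *)
    let grain = 1024

    (* Evaluate the two subtrees in parallel only when their combined
       run-time size suggests the work will amortize the spawn cost; a
       production version would draw tasks from a pool rather than spawn
       a fresh domain per split. *)
    let rec par_map f t =
      match t with
      | Leaf x -> Leaf (f x)
      | Node (l, r) when size l + size r >= grain ->
          let handle = Domain.spawn (fun () -> par_map f l) in
          let r' = par_map f r in
          Node (Domain.join handle, r')
      | Node (l, r) -> Node (par_map f l, par_map f r)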
It is well known that adding side effects to functional languages changes the operational equivalences of the language. I will discuss a new language construct, encap, that forces imperative pieces of code to behave purely functionally, i.e., without any visible side effects. The coercion operators developed are proved correct for purely sequential languages; the correctness result requires the construction of fully abstract models for sequential call-by-value languages and the formulation of a weak form of "monad" for call-by-value languages. I will mainly focus on how to use encap as a semantic tool for reasoning about programs with side effects.
(This is joint work with Ramesh Viswanathan.)
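As a behavioral illustration only: code wrapped by an encap-like coercion may mutate state it allocates itself, yet remains observably pure to its callers. The OCaml sketch below is ours; the real encap is a language construct whose correctness is established semantically, not a library function, and OCaml's type system does not check the discipline the comments describe.

    (* Sketch of the intent behind encap. Here it is just the identity on
       thunks; the discipline (enforced semantically in the work above,
       not by OCaml's types) is that the thunk mutates only state it
       allocates itself, so no side effect is visible to callers. *)
    let encap (thunk : unit -> 'a) : 'a = thunk ()

    (* An imperative in-place reversal that behaves purely functionally:
       it copies its input into a fresh array, mutates only that copy,
       and lets only the immutable result escape. *)
    let pure_rev (xs : 'a list) : 'a list =
      encap (fun () ->
        let a = Array.of_list xs in
        let n = Array.length a in
        for i = 0 to n / 2 - 1 do
          let t = a.(i) in
          a.(i) <- a.(n - 1 - i);
          a.(n - 1 - i) <- t
        done;
        Array.to_list a)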
We consider flow analysis of a parallel language that supports dynamic process creation and in which tasks communicate exclusively via first-class mutable shared locations; the sequential core of the language is a higher-order call-by-value functional language with data constructors. The analysis is defined via an abstract interpretation of the language.
The first part of the talk motivates the utility of such an analysis and presents the definition of an abstraction function that tracks inter- and intra-thread control and dataflow for programs written in our language.
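To give a flavor of what such an abstraction function might look like, here is a sketch over our own simplified value domain (not the definition from the talk): concrete values map to finite sets of creation sites, and the abstract store joins, per allocation site, everything any thread may write there, which is where inter-thread dataflow appears.

    (* Illustrative only: a simplified value domain and abstraction
       function in the monovariant style; the talk's definition is richer. *)

    type label = string              (* creation sites: lambda, ref, spawn points *)

    type value =
      | Clos of label                (* closure, named by its lambda site *)
      | Loc of label                 (* mutable shared location, named by its allocation site *)
      | Con of string * value list   (* data-constructor value *)

    module A = Set.Make (String)     (* an abstract value: a finite set of sites *)

    (* alpha maps a concrete value to the set of sites it may stem from *)
    let rec alpha : value -> A.t = function
      | Clos l | Loc l -> A.singleton l
      | Con (_, vs) ->
          List.fold_left (fun s v -> A.union s (alpha v)) A.empty vs

    (* The abstract store joins, per allocation site, everything any
       thread may write there; since tasks communicate only through
       shared locations, inter-thread dataflow is tracked here. *)
    type astore = (label, A.t) Hashtbl.t

    let abs_write (st : astore) (l : label) (v : value) =
      let old = try Hashtbl.find st l with Not_found -> A.empty in
      Hashtbl.replace st l (A.union old (alpha v))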
In the second part, we sketch a more general framework in which a range of flow analyses with different cost/accuracy tradeoffs can be expressed. The goal of such a parametrizable framework is to provide an efficient and intuitive foundation within which important compile-time optimizations for high-level parallel systems can be investigated.
(This is joint work with Stephen Weeks.)
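One plausible shape for such a framework, sketched in OCaml functor style (the signature and all names here are ours, not the framework from the talk): the analysis is parameterized over an abstract domain, and swapping domains trades cost for accuracy, richer domains buying precision at the price of a slower fixpoint.

    (* Illustrative sketch: a generic fixpoint solver parameterized over
       an abstract domain. Termination assumes a finite-height domain and
       a monotone transfer function. *)

    module type DOMAIN = sig
      type t                         (* abstract values *)
      val bottom : t
      val join : t -> t -> t
      val leq : t -> t -> bool       (* used to detect the fixpoint *)
    end

    module Analysis (D : DOMAIN) = struct
      (* Solve for one abstract value per program point by Kleene
         iteration; [transfer] reads the current facts and produces a
         new fact for a point. *)
      let solve ~(points : int list)
                ~(transfer : (int -> D.t) -> int -> D.t) : int -> D.t =
        let facts = Hashtbl.create 64 in
        let get p = try Hashtbl.find facts p with Not_found -> D.bottom in
        let changed = ref true in
        while !changed do
          changed := false;
          List.iter
            (fun p ->
              let v = D.join (get p) (transfer get p) in
              if not (D.leq v (get p)) then begin
                Hashtbl.replace facts p v;
                changed := true
              end)
            points
        done;
        get
    end

    (* A cheap instantiation: abstract values as sets of creation sites;
       a costlier, more accurate domain would slot in the same way. *)
    module Sites = struct
      module S = Set.Make (Int)
      type t = S.t
      let bottom = S.empty
      let join = S.union
      let leq = S.subset
    end

    module SiteAnalysis = Analysis (Sites)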