Pedicini, M., Quaglia, F. (2000). A parallel implementation for optimal lambda-calculus reduction. In Proceedings of the 2nd International ACM SIGPLAN Conference on Principles and Practice of Declarative Programming (PPDP '00), pp. 3-14. Association for Computing Machinery (ACM). doi:10.1145/351268.351270.
A parallel implementation for optimal lambda-calculus reduction
Pedicini M.; Quaglia F.
2000-01-01
Abstract
In this paper we present a parallel implementation of Lévy's optimal reduction for the λ-calculus [11]. In an approach similar to Lamping's [10], we base our work on a graph reduction technique known as directed virtual reduction [3], which is a restriction of Danos-Regnier virtual reduction [4]. The parallel implementation relies on a strategy for directed virtual reduction, namely half combustion, which we introduce in this paper. The implementation embeds both a message aggregation technique, which reduces the communication overhead, and a fair policy for distributing dynamically generated load among the processors. The aggregation technique is mandatory because the granularity of the computation is fine. With this technique we obtain a linear speedup close to 80% of the ideal one on a shared memory multiprocessor. This result demonstrates the viability of parallel implementations of optimal reduction.
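The message aggregation idea mentioned in the abstract can be illustrated with a short sketch: since each reduction step is fine-grained, issuing one physical communication per step would be dominated by per-message overhead, so outgoing units are instead buffered per destination and sent in batches. The C fragment below is a minimal, self-contained illustration of this generic batching pattern, not the paper's actual implementation; all names (Buffer, enqueue, flush, send_batch, BATCH, NPROC) are hypothetical, and the physical send is simulated with a print.

    /* Sketch of message aggregation for a fine-grained computation:
     * units are buffered per destination and flushed in batches. */
    #include <stdio.h>

    #define NPROC 4   /* number of destination processors (assumed) */
    #define BATCH 4   /* units aggregated per physical send (assumed) */

    typedef struct {
        int units[BATCH];  /* pending fine-grained work units */
        int count;         /* how many are currently buffered */
    } Buffer;

    static Buffer outbox[NPROC];

    /* Stand-in for the real communication primitive (e.g. a
     * shared-memory queue): one "physical" send per batch. */
    static void send_batch(int dest, const int *units, int n) {
        printf("send to %d: %d units\n", dest, n);
    }

    static void flush(int dest) {
        if (outbox[dest].count > 0) {
            send_batch(dest, outbox[dest].units, outbox[dest].count);
            outbox[dest].count = 0;
        }
    }

    /* Enqueue one fine-grained unit; a physical send happens only
     * when the batch is full, cutting overhead by roughly BATCH. */
    static void enqueue(int dest, int unit) {
        outbox[dest].units[outbox[dest].count++] = unit;
        if (outbox[dest].count == BATCH)
            flush(dest);
    }

    int main(void) {
        for (int u = 0; u < 20; u++)
            enqueue(u % NPROC, u);  /* fine-grained units, round-robin */
        for (int d = 0; d < NPROC; d++)
            flush(d);               /* drain remainders at the end */
        return 0;
    }

In this toy run each destination receives five units but only two physical sends (one full batch of four, then a remainder of one), which is the kind of overhead reduction the aggregation technique aims at.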