Parallel Algorithms

Introduction

Fundamental physical limitations on processing speed will eventually force high-performance computations to be performed using multiple processing units. This has motivated the study of a variety of alternative computing paradigms, including parallel algorithms, which use multiple processors to solve problems more efficiently than is possible with one processor alone.

My research in this area has concerned the design and analysis of parallel algorithms for a variety of algebraic problems, mostly involving polynomial and matrix computations.

Note: While parallel architectures continue to be an active research area, the kind of research on parallel algorithms that I have been involved in does not seem to be. I am not currently pursuing research in this area.

References

  1. J. Já Já
    An Introduction to Parallel Algorithms
    Addison-Wesley, 1992

This is a very readable introduction to the design and analysis of parallel algorithms. It has been used as the text for the graduate course on parallel algorithms here, and it includes both an introduction to the basic techniques used in parallel algorithm design and analysis and a survey of parallel algorithms for a wide variety of problems, including a chapter on parallel algorithms for various algebraic problems.

  2. E. Kaltofen and V. Pan
    Processor Efficient Parallel Solution of Linear Systems over an Abstract Field
    Proceedings, 3rd Annual Symposium on Parallel Algorithms and Architectures
    New York, New York, 1991, pp. 180–191

    E. Kaltofen and V. Pan
    Processor-Efficient Parallel Solution of Linear Systems II: The Positive Characteristic and Singular Cases
    Proceedings, 33rd Annual Symposium on Foundations of Computer Science
    Pittsburgh, Pennsylvania, 1992; pp. 714–723

These papers include a breakthrough result: the first known processor-efficient, highly parallel algorithms that can be used to solve systems of linear equations, and to perform various related matrix computations, over arbitrary fields. (A rough statement of what "processor-efficient" and "highly parallel" mean appears after this list of references.) The algorithms are important in themselves, since they have been used to establish efficient parallel algorithms for a variety of related problems, but the design and analysis techniques contained in these papers are also worth studying in their own right.

  3. J. Reif (Ed.)
    Synthesis of Parallel Algorithms
Morgan Kaufmann, 1993

This collection includes articles on parallel algorithms for problems in a variety of areas, including several articles on algebraic problems; these are more advanced than the material one would find in an introductory text.
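
Since the terms are used repeatedly on this page, here is a rough statement of the standard definitions as I use them (my paraphrase, not a quotation from the papers above). For a problem of input size n, let S(n) denote the running time of the fastest known sequential algorithm, and let T(n) and P(n) denote the parallel running time and the number of processors used by a parallel algorithm. Then the algorithm is "highly parallel" and "processor-efficient," respectively, if

    T(n) = (\log n)^{O(1)} \quad\text{and}\quad P(n) = n^{O(1)},
    P(n) \cdot T(n) = S(n) \cdot (\log n)^{O(1)}.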

Publications

  1. Very Fast Parallel Matrix and Polynomial Arithmetic

    W. Eberly
    Very Fast Parallel Matrix and Polynomial Arithmetic
    Proceedings, 25th Annual Symposium on Foundations of Computer Science
Singer Island, Florida, 1984; pp. 21-30

    W. Eberly
    Very Fast Parallel Polynomial Arithmetic
    SIAM Journal on Computing 18 (1989), pp. 955–976
    Please send me email to obtain a copy of this paper.

    W. Eberly
    Very Fast Parallel Band Matrix Arithmetic
    Information and Computation 116 (1995), pp. 117–127
    Available online at ScienceDirect.

These papers presented algorithms for polynomial and matrix computations that reduced the parallel time needed to solve these problems while using a polynomial number of processors. Like many parallel algorithms presented around this time, they were not processor-efficient and are therefore unlikely to be of any practical interest. (A toy illustration of the general idea of trading processors for parallel time appears after this list of publications.)

  2. Very Fast Parallel Hermite Interpolation

    W. Eberly
    Logarithmic Depth Circuits for Hermite Interpolation
    Journal of Algorithms 16 (1994), pp. 335–360
    Available online at ScienceDirect.

A new parallel algorithm for Hermite interpolation was presented, reducing the parallel time required to solve this problem. The algorithm used a polynomial number of processors but was not processor-efficient, so it is unlikely to be of practical interest. (A small sequential sketch of the Hermite interpolation problem itself appears after this list of publications.)

  3. Processor-Efficient Computation of an Independent Subset

    W. Eberly
    Efficient Parallel Independent Subsets and Matrix Factorizations
    Proceedings, 3rd IEEE Symposium on Parallel and Distributed Processing
    Dallas, Texas, 1991; pp. 204-211

This paper included the first known processor-efficient parallel algorithm for the computation of a maximal linearly independent subset of a given set of vectors over a field. (A sequential sketch of this problem appears after this list of publications.)

  4. Processor-Efficient Parallel Band Matrix Computations

    W. Eberly
    On Efficient Band Matrix Arithmetic
Proceedings, 33rd Annual Symposium on Foundations of Computer Science
Pittsburgh, Pennsylvania, 1992; pp. 457-463

    W. Eberly
    Efficient Parallel Computations for Singular Band Matrices
    Proceedings, 6th ACM-SIAM Symposium on Discrete Algorithms
    San Francisco, California, 1995; pp. 132-138

These papers presented new randomized, processor-efficient parallel algorithms for various band matrix computations. As a pleasant by-product, new sequential algorithms for some of these problems were also obtained; these are asymptotically more efficient than previously known algorithms when fast matrix multiplication is used. (A sketch of the simplest sequential band matrix computation, solving a tridiagonal system, appears after this list of publications.)

  5. Processor-Efficient Matrix Inversion

    W. Eberly
    Processor-Efficient Parallel Matrix Inversion over Abstract Fields: Two Extensions
    Proceedings, 2nd International Symposium on Parallel Symbolic Computation
    Maui, Hawaii, 1997; pp. 38-45

This paper presented two minor extensions of the work of Kaltofen and Pan on processor-efficient matrix inversion. A new, somewhat simpler version of the algorithm for computing the inverse of a nonsingular matrix was given, along with a version of the algorithm for solving nonsingular systems of linear equations over very small fields that reduces the work required for this computation to match the bounds established by Kaltofen and Pan for the large field case.

  6. Somewhat Cheaper Parallel Linear Algebra

W. Eberly (with a Discussion of Work by X. Chen)
    Somewhat Cheaper Parallel Linear Algebra
    Talk Presented at ACA 2002
    Volos, Greece, June 25, 2002
    Slides: [ Postscript ] [ PDF ]

In this talk, techniques are discussed that slightly reduce the work required to perform various computations in linear algebra in the small field case, matching bounds established for the large field case, provided that one is willing to accept a probability of error that is bounded by a constant.

A technique is also presented that avoids the use of binary search for a few problems, allowing factors of log n in the running time to be replaced by factors of log log n.
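
The following sketch relates to publication 1 above. It is only a toy, purely sequential simulation of a balanced-tree schedule for multiplying many polynomials: the number of rounds is logarithmic in the number of polynomials, and the products formed within a round are independent of one another, so they could be assigned to separate processors. It illustrates the general idea of trading processors for parallel time; it is not the construction from the papers above, and the function names (poly_mul, product_in_rounds) are mine.

    def poly_mul(p, q):
        """Multiply two polynomials given as coefficient lists (lowest degree first)."""
        result = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                result[i + j] += a * b
        return result

    def product_in_rounds(polys):
        """Multiply a list of polynomials in rounds, pairing them up as in a balanced
        binary tree; all of the products formed within a round are independent."""
        rounds = 0
        while len(polys) > 1:
            polys = [poly_mul(polys[i], polys[i + 1]) if i + 1 < len(polys) else polys[i]
                     for i in range(0, len(polys), 2)]
            rounds += 1
        return polys[0], rounds

    # Example: (x - 1)(x - 2) ... (x - 8), computed in 3 rounds of independent products.
    factors = [[-k, 1] for k in range(1, 9)]
    product, rounds = product_in_rounds(factors)
    print(rounds)       # 3
    print(product[0])   # constant term: 8! = 40320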
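
The following sketch relates to publication 2 above. It shows what the Hermite interpolation problem asks for: a polynomial matching prescribed values and derivatives at given points. The sketch uses the classical sequential method of divided differences with repeated nodes and exact rational arithmetic; it is not the parallel algorithm from the paper, and the function names (hermite_newton, eval_newton) are mine.

    from fractions import Fraction
    from math import factorial

    def hermite_newton(data):
        """Newton-form coefficients of the Hermite interpolant, computed by
        divided differences with repeated nodes.

        data: list of (x, [f(x), f'(x), ..., f^(m-1)(x)]) pairs with distinct x.
        Returns (z, c) with p(t) = sum_k c[k] * prod_{j<k} (t - z[j]).
        """
        deriv = {Fraction(x): [Fraction(v) for v in values] for x, values in data}
        z = [x for x in deriv for _ in deriv[x]]   # each node repeated once per condition
        n = len(z)
        dd = [deriv[x][0] for x in z]              # stage 0: f[z_i] = f(z_i)
        coeffs = [dd[0]]
        for k in range(1, n):
            new = []
            for i in range(n - k):
                if z[i + k] != z[i]:
                    new.append((dd[i + 1] - dd[i]) / (z[i + k] - z[i]))
                else:
                    # z_i, ..., z_{i+k} all coincide: use the prescribed k-th derivative.
                    new.append(deriv[z[i]][k] / factorial(k))
            dd = new
            coeffs.append(dd[0])
        return z, coeffs

    def eval_newton(z, coeffs, t):
        """Evaluate the Newton form at t by Horner's rule."""
        result = Fraction(0)
        for k in reversed(range(len(coeffs))):
            result = result * (Fraction(t) - z[k]) + coeffs[k]
        return result

    # Example: match f(x) = x^3 and f'(x) = 3x^2 at x = 1 and x = 2.  The interpolant
    # has degree at most 3, so it must be x^3 itself.
    data = [(1, [1, 3]), (2, [8, 12])]
    z, coeffs = hermite_newton(data)
    print([int(eval_newton(z, coeffs, t)) for t in range(5)])   # [0, 1, 8, 27, 64]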
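
The following sketch relates to publication 3 above. It shows the problem being solved: given a list of vectors over a field, select a maximal subset of them that is linearly independent. The sketch is the obvious sequential greedy method based on Gaussian elimination over the rationals; it is not the processor-efficient parallel algorithm from the paper, and the function name (independent_subset) is mine.

    from fractions import Fraction

    def independent_subset(vectors):
        """Return the indices of a maximal linearly independent subset of the
        given vectors, found by greedy incremental Gaussian elimination."""
        basis = []     # reduced rows kept so far (each has zeros in earlier pivot columns)
        chosen = []    # indices of the vectors selected so far
        for index, v in enumerate(vectors):
            row = [Fraction(x) for x in v]
            for b in basis:
                pivot = next(j for j, x in enumerate(b) if x != 0)
                if row[pivot] != 0:
                    factor = row[pivot] / b[pivot]
                    row = [x - factor * y for x, y in zip(row, b)]
            if any(x != 0 for x in row):
                basis.append(row)
                chosen.append(index)
        return chosen

    # Example: the third vector is the sum of the first two, so it is skipped.
    vectors = [(1, 0, 2), (0, 1, 1), (1, 1, 3), (0, 0, 5)]
    print(independent_subset(vectors))   # [0, 1, 3]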
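
The following sketch relates to publication 4 above. A band matrix is one whose nonzero entries all lie within a fixed distance of the main diagonal; the papers concern parallel algorithms for computations with such matrices. As a point of reference only, here is the classical sequential elimination for the simplest case, a nonsingular tridiagonal system; it is not the parallel algorithm from the papers, and the function name (solve_tridiagonal) is mine.

    from fractions import Fraction

    def solve_tridiagonal(lower, diag, upper, rhs):
        """Solve a tridiagonal linear system by sequential forward elimination
        and back substitution.

        lower : subdiagonal entries   a_1, ..., a_{n-1}
        diag  : diagonal entries      b_0, ..., b_{n-1}
        upper : superdiagonal entries c_0, ..., c_{n-2}
        rhs   : right-hand side       d_0, ..., d_{n-1}
        Assumes no zero pivots are encountered during the elimination.
        """
        n = len(diag)
        b = [Fraction(x) for x in diag]
        d = [Fraction(x) for x in rhs]
        for i in range(1, n):
            m = Fraction(lower[i - 1]) / b[i - 1]
            b[i] -= m * Fraction(upper[i - 1])
            d[i] -= m * d[i - 1]
        x = [Fraction(0)] * n
        x[n - 1] = d[n - 1] / b[n - 1]
        for i in range(n - 2, -1, -1):
            x[i] = (d[i] - Fraction(upper[i]) * x[i + 1]) / b[i]
        return x

    # Example: a 4 x 4 diagonally dominant tridiagonal system whose solution is (1, 2, 3, 4).
    lower, diag, upper = [1, 1, 1], [4, 4, 4, 4], [1, 1, 1]
    rhs = [6, 12, 18, 19]   # computed from the known solution (1, 2, 3, 4)
    print(solve_tridiagonal(lower, diag, upper, rhs))   # the exact solution, as Fractions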

