Tutorial: THE ALGEBRAIC SYNTHESIS OF ALGORITHMS

----------------------------------------------------------------------

Speaker: Erich Kaltofen
         Dept. of Mathematics and Dept. of Computer Science
         North Carolina State University, USA

Abstract: In my tutorial, I will discuss several symbolic computation
techniques for synthesizing algorithms for algebraic problems:

----------------------------------------------------------------------

Part 1 (50 minutes + questions)

The transposition principle:

The reverse mode of automatic differentiation, discovered as early as
1971, shows that all partial derivatives of a rational function can be
computed in the same space and time complexity as the function itself.
For example, the reverse mode yields efficient algorithms for the
matrix inverse from efficient algorithms for the determinant.

Bordewijk's 1956 transposition principle (related to Tellegen's theorem
from circuit theory) states that a linear arithmetic circuit computing
a matrix-times-vector product can be reversed to obtain the transposed
matrix times a vector. The principle is in fact a special case of the
reverse mode of automatic differentiation. Recently, the principle has
been employed to design fast algorithms for weighted power sums, for
determining the minimal polynomial of an algebraic number via power
projections, for solving transposed Vandermonde systems, etc.

Part 1 of the tutorial will cover the proof of the principle,
complexity-theoretic considerations, and several applications.
(Illustrative code sketches of the reverse mode and of the
transposition principle appear below, after Part 2.)

----------------------------------------------------------------------

Part 2 (50 minutes + questions)

Elimination of divisions:

The tutorial presents our recent application of Strassen's 1973 method
for removing divisions from arithmetic circuits that compute
polynomials, such as the LU matrix factorization circuit that computes
the determinant.

Valiant (1979) studies determinants of matrices whose entries are
variables or scalars. He shows that every polynomial computable by an
arithmetic formula of size s is the determinant of such a matrix of
dimension s+1. P. Koiran and I consider the problem of expressing
polynomials given as a fraction of two formulas, or as a fraction of
two such determinants, and show that those polynomials can again be
computed as the determinant of a matrix with variables and scalars as
entries whose dimension is polynomial in the input formula size or
matrix dimension.

Part 2 of the tutorial focuses on arithmetic formula and circuit
transformations and on division-free circuits for the determinant.
(An illustrative division-elimination sketch appears below.)
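----------------------------------------------------------------------

The sketches below are small Python illustrations of the techniques
named in the abstract; the class, function, and variable names in them
(Var, grad, run_forward, run_transposed, Trunc, det_division_free) are
illustrative and are not part of the tutorial material.

First, one standard tape-based realization of the reverse mode from
Part 1: every arithmetic operation is recorded once on a tape, and a
single backward sweep then returns all partial derivatives with only a
constant-factor overhead over evaluating the function itself.

class Var:
    """One node of the computation, recorded on a shared tape."""
    def __init__(self, value, tape=None):
        self.value = value
        self.adjoint = 0.0
        self.parents = []   # list of (parent Var, local partial derivative)
        self.tape = tape if tape is not None else []
        self.tape.append(self)

    def _new(self, value, parents):
        v = Var(value, self.tape)
        v.parents = parents
        return v

    def __add__(self, other):
        return self._new(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __sub__(self, other):
        return self._new(self.value - other.value, [(self, 1.0), (other, -1.0)])

    def __mul__(self, other):
        return self._new(self.value * other.value,
                         [(self, other.value), (other, self.value)])

    def __truediv__(self, other):
        return self._new(self.value / other.value,
                         [(self, 1.0 / other.value),
                          (other, -self.value / other.value ** 2)])

def grad(output):
    """One backward sweep over the tape: all partial derivatives at once."""
    output.adjoint = 1.0
    for node in reversed(output.tape):
        for parent, local in node.parents:
            parent.adjoint += local * node.adjoint

# f(X) = det(X) for a 2x2 matrix X.  The gradient d(det)/d(x_ij) is the
# transposed adjugate, i.e. det(X) times the transposed inverse -- the
# "matrix inverse from the determinant" example mentioned in Part 1.
tape = []
x00, x01, x10, x11 = (Var(v, tape) for v in (3.0, 7.0, 2.0, 5.0))
det = x00 * x11 - x01 * x10
grad(det)
print(det.value)                                   # 1.0
print([v.adjoint for v in (x00, x01, x10, x11)])   # [5.0, -2.0, -7.0, 3.0]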
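Next, the transposition principle itself, on linear straight-line
programs: such a program consists of additions and multiplications by
scalars over a set of registers.  If it maps the input registers x to
output registers y = A*x, then replaying its instructions in reverse
order with adjoints maps z to the transposed product A^T*z with
essentially the same operation count (scalar multiplications are
preserved exactly).  The encoding below assumes each instruction writes
to a fresh register.

# A linear straight-line program is a list of instructions:
#   ('add', k, i, j)  meaning  r[k] = r[i] + r[j]
#   ('scl', k, c, i)  meaning  r[k] = c * r[i]
# Each instruction is assumed to write to a fresh register.

def run_forward(prog, nregs, x, in_regs, out_regs):
    """Evaluate the program: returns y = A*x."""
    r = [0.0] * nregs
    for reg, val in zip(in_regs, x):
        r[reg] = val
    for op in prog:
        if op[0] == 'add':
            _, k, i, j = op
            r[k] = r[i] + r[j]
        else:
            _, k, c, i = op
            r[k] = c * r[i]
    return [r[reg] for reg in out_regs]

def run_transposed(prog, nregs, z, in_regs, out_regs):
    """Replay the program backwards with adjoints: returns A^T*z."""
    a = [0.0] * nregs
    for reg, val in zip(out_regs, z):
        a[reg] = val
    for op in reversed(prog):
        if op[0] == 'add':
            _, k, i, j = op
            a[i] += a[k]
            a[j] += a[k]
        else:
            _, k, c, i = op
            a[i] += c * a[k]
    return [a[reg] for reg in in_regs]

# y = A*x for A = [[2, 3], [0, 5]]: registers 0,1 are inputs, 4,5 outputs.
prog = [('scl', 2, 2, 0), ('scl', 3, 3, 1), ('add', 4, 2, 3), ('scl', 5, 5, 1)]
print(run_forward(prog, 6, [1.0, 1.0], [0, 1], [4, 5]))     # [5.0, 5.0] = A*x
print(run_transposed(prog, 6, [1.0, 0.0], [0, 1], [4, 5]))  # [2.0, 3.0] = A^T*z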
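Finally, a minimal sketch of the division-elimination step from Part 2,
applied to the 2x2 determinant as computed by Gaussian elimination (the
LU circuit).  The circuit is evaluated at M(z) = I + z*(A - I), so
every pivot has constant term 1 and each division by a pivot can be
replaced by a truncated geometric series in z; since det(M(z)) has
degree at most n in z, truncating modulo z^(n+1) loses nothing, and
setting z = 1 recovers det(A).

N = 3  # keep the coefficients of z^0, z^1, z^2 (degree-2 determinant)

class Trunc:
    """Polynomials in z truncated modulo z^N."""
    def __init__(self, coeffs):
        self.c = (list(coeffs) + [0] * N)[:N]

    def __add__(self, other):
        return Trunc([a + b for a, b in zip(self.c, other.c)])

    def __sub__(self, other):
        return Trunc([a - b for a, b in zip(self.c, other.c)])

    def __mul__(self, other):
        out = [0] * N
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j < N:
                    out[i + j] += a * b
        return Trunc(out)

    def geom_inverse(self):
        # The constant term is 1 by construction, so
        # 1/(1 + h) = 1 - h + h^2 (mod z^3) -- no division needed.
        h = Trunc([0] + self.c[1:])
        one = Trunc([1])
        return one - h + h * h

def det_division_free(A):
    """det(A) for a 2x2 matrix A, computed without any divisions."""
    # M(z) = I + z*(A - I), entries as truncated polynomials in z
    M = [[Trunc([1 if i == j else 0, A[i][j] - (1 if i == j else 0)])
          for j in range(2)] for i in range(2)]
    # Gaussian elimination; the only "division" is a geometric series
    p1 = M[0][0]
    p2 = M[1][1] - M[1][0] * M[0][1] * p1.geom_inverse()
    det_z = p1 * p2
    return sum(det_z.c)          # evaluate at z = 1

A = [[3, 7], [2, 5]]
print(det_division_free(A), A[0][0] * A[1][1] - A[0][1] * A[1][0])   # 1 1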