Title: A new algebra core for the "minimal form" problem

Abstract

The demands of large-scale algebraic computation have led to the development of many new algorithms for manipulating algebraic objects in computer algebra systems. For instance, parallel versions of many important algorithms have been discovered. Simultaneously, more effective symbolic representations of algebraic objects have been sought. Also, while some clever techniques have been found for improving the speed of the algebraic simplification process, little attention has been given to the issue of restructuring expressions, or transforming them into "minimal forms." By "minimal form," we mean that form of an expression that involves a minimum number of operations. In a companion paper, we introduce some new algorithms that are very effective at finding minimal forms of expressions. These algorithms require algebraic and combinatorial machinery that is not readily available in most algebra systems. In this paper we describe a new algebra core that begins to provide the necessary capabilities.
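To make the notion of a "minimal form" concrete, here is a small illustrative sketch (not the paper's machinery): an expression is modeled as a tuple tree, and restructuring it, for example into Horner form, reduces the operation count. The tree encoding and the `count_ops` helper are our own illustrative constructions, not part of the paper.

```python
def count_ops(expr):
    """Count arithmetic operations in a tiny expression tree.
    Nodes are ('+', a, b) or ('*', a, b); leaves are numbers or symbols."""
    if isinstance(expr, tuple):
        return 1 + sum(count_ops(arg) for arg in expr[1:])
    return 0

# Expanded form of (x + 1)^3:  x*x*x + 3*x*x + 3*x + 1  -> 8 operations.
expanded = ('+',
            ('+',
             ('+',
              ('*', ('*', 'x', 'x'), 'x'),
              ('*', 3, ('*', 'x', 'x'))),
             ('*', 3, 'x')),
            1)

# The same polynomial restructured into Horner form:
# ((x + 3)*x + 3)*x + 1  -> 5 operations.
horner = ('+', ('*', ('+', ('*', ('+', 'x', 3), 'x'), 3), 'x'), 1)

print(count_ops(expanded))  # 8
print(count_ops(horner))    # 5
```

The restructured form computes the same polynomial with fewer operations, which is exactly the kind of saving that matters at the scale of expressions the abstract describes.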

@article{osti_5254684,
  title = {A new algebra core for the ``minimal form'' problem},
  author = {Purtill, M. R. and Oliveira, J. S. and Cook, Jr., G. O.},
  abstractNote = {The demands of large-scale algebraic computation have led to the development of many new algorithms for manipulating algebraic objects in computer algebra systems. For instance, parallel versions of many important algorithms have been discovered. Simultaneously, more effective symbolic representations of algebraic objects have been sought. Also, while some clever techniques have been found for improving the speed of the algebraic simplification process, little attention has been given to the issue of restructuring expressions, or transforming them into ``minimal forms.'' By ``minimal form,'' we mean that form of an expression that involves a minimum number of operations. In a companion paper, we introduce some new algorithms that are very effective at finding minimal forms of expressions. These algorithms require algebraic and combinatorial machinery that is not readily available in most algebra systems. In this paper we describe a new algebra core that begins to provide the necessary capabilities.},
  place = {United States},
  year = {1991},
  month = {dec}
}

It is widely appreciated that large-scale algebraic computation (performing computer algebra operations on large symbolic expressions) places very significant demands upon existing computer algebra systems. Because of this, parallel versions of many important algorithms have been successfully sought, and clever techniques have been found for improving the speed of the algebraic simplification process. In addition, some attention has been given to the issue of restructuring large expressions, or transforming them into "minimal forms." By "minimal form," we mean that form of an expression that involves a minimum number of operations in the sense that no simple transformation on the expression leads to a form involving fewer operations. Unfortunately, the progress that has been achieved to date on this very hard problem is not adequate for the very significant demands of large computer algebra problems. In response to this situation, we have developed some efficient algorithms for constructing "minimal forms." In this paper, the multi-stage algorithm in which these new algorithms operate is defined and the features of these algorithms are developed. In a companion paper, we introduce the core algebra engine of a new tool that provides the algebraic framework required for the implementation of these new algorithms.
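The definition above is one of local minimality: an expression is in "minimal form" when no simple transformation reduces its operation count. A generic way to realize such a definition is greedy local search over rewrite rules, sketched below with a single illustrative rule (reverse distributivity). This is our own toy sketch, not the multi-stage algorithm of the paper; the tree encoding, `rewrites`, and `improve` are all hypothetical names.

```python
def count_ops(expr):
    """Nodes are ('+'|'*', left, right); leaves are numbers or symbols."""
    if isinstance(expr, tuple):
        return 1 + sum(count_ops(a) for a in expr[1:])
    return 0

def rewrites(expr):
    """Yield results of a 'simple transformation' applied at the root:
    here, just factoring (reverse distributivity): a*b + a*c -> a*(b + c)."""
    if isinstance(expr, tuple) and expr[0] == '+':
        l, r = expr[1], expr[2]
        if (isinstance(l, tuple) and isinstance(r, tuple)
                and l[0] == '*' and r[0] == '*' and l[1] == r[1]):
            yield ('*', l[1], ('+', l[2], r[2]))

def improve(expr):
    """Greedy local search: apply any operation-reducing rewrite until no
    simple transformation lowers the count, i.e. a locally minimal form."""
    changed = True
    while changed:
        changed = False
        for cand in rewrites(expr):
            if count_ops(cand) < count_ops(expr):
                expr, changed = cand, True
                break
    return expr

expr = ('+', ('*', 'a', 'b'), ('*', 'a', 'c'))   # a*b + a*c: 3 operations
print(improve(expr))                              # ('*', 'a', ('+', 'b', 'c')): 2 operations
```

A fixed point of such a loop satisfies the abstract's definition by construction, though an effective system would need a much richer rule set applied at every subtree, which is where the hard algorithmic work lies.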

The capacitated minimal spanning tree problem (CMST) is directly related to network design and appears as an important problem, e.g., in the design of local access networks. Previous results with a multicommodity flow formulation indicate that "information" used from a related hop model might be used to design stronger formulations for the CMST. In this talk we present several formulations for the CMST which involve arc variables with a hop index, which counts the number of arcs between the root and the corresponding arc. We present an extended single-commodity flow formulation and use previous results to transform the original formulation into another equivalent one with fewer constraints. Numerical results are presented which compare the lower bounds given by the new formulations with lower bounds given by previously known formulations.

This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. These routines are important in the solution of eigenproblems. The paper focuses on how building blocks are used to create higher-level library routines. Results are presented that demonstrate the scalability of the reduction routines. The most commonly used building blocks in ScaLAPACK are the sequential BLAS, the Parallel Block BLAS (PB-BLAS), and the Basic Linear Algebra Communication Subprograms (BLACS). Each of the matrix reduction algorithms consists of a series of steps, in each of which one block column (or panel), and/or block row, of the matrix is reduced, followed by an update of the portion of the matrix that has not been factorized so far. This latter phase is performed using distributed Level 3 BLAS routines, and contains the bulk of the computation. However, the panel reduction phase involves a significant amount of communication, and is important in determining the scalability of the algorithm. The simplest way to parallelize the panel reduction phase is to replace the appropriate Level 2 and Level 3 BLAS routines appearing in the LAPACK routine (mostly matrix-vector and matrix-matrix multiplications) with PB-BLAS routines.
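The panel-then-update structure described above is common to all blocked factorizations, not just the reductions discussed in the paper. As a self-contained illustration of the pattern (using blocked LU without pivoting rather than the Hessenberg/tridiagonal/bidiagonal reductions, and plain NumPy rather than the distributed PB-BLAS), a sketch might look like this:

```python
import numpy as np

def blocked_lu(A, nb=2):
    """Blocked LU without pivoting, illustrating the panel/update pattern:
    a Level-2 panel factorization followed by a Level-3 trailing update.
    Assumes the factorization exists without pivoting (e.g. A is
    diagonally dominant)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # Panel phase: unblocked elimination confined to the current
        # block column (matrix-vector work; the communication-heavy
        # phase in the distributed setting).
        for j in range(k, k + kb):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:k+kb] -= np.outer(A[j+1:, j], A[j, j+1:k+kb])
        if k + kb < n:
            # Trailing update (bulk of the flops, Level 3 BLAS territory):
            # triangular solve for the U12 block row ...
            L11 = np.tril(A[k:k+kb, k:k+kb], -1) + np.eye(kb)
            A[k:k+kb, k+kb:] = np.linalg.solve(L11, A[k:k+kb, k+kb:])
            # ... then a rank-kb GEMM update of the trailing submatrix.
            A[k+kb:, k+kb:] -= A[k+kb:, k:k+kb] @ A[k:k+kb, k+kb:]
    return A  # L (unit lower) and U packed in one array
```

In ScaLAPACK the panel loop's matrix-vector operations and the trailing GEMM would go through the PB-BLAS over the BLACS; the serial skeleton above only shows why the trailing update dominates the flop count while the panel dominates the communication.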