Hello!
Just a few short comments and ideas...
Don Clugston Wrote:
> No, I don't have the perfect Matrix. I just want to explore the question of
> whether it exists.
> There are a plethora of C++ matrix libraries, all of which are far from perfect;
> they all involve trade-offs between various language limitations.
Apart from these C++ matrix libs, I've come to appreciate these array/matrix APIs:
http://numpy.scipy.org/
Fortran 95 arrays
MATLAB arrays
Maple linear algebra APIs
> In particular, the usage of expression templates creates all kinds of evil.
> > Requirements:
> 1. Must be compatible with future multi-dimensional array syntax such as
> double [7,5] m;
As has already been mentioned: is the target of this Matrix only 2-dimensional? In mathematical terms, a matrix is always a 2-dimensional array with a proper algebra. For example, AFAIK the transpose operation is not defined for a 3-dimensional array.
> 2. Must be theoretically capable of optimal performance.
> 3. Needs to support slices. This means we also need to support strided
> vectors. Therefore we also need matrix slices, and matrix references.
> Probably distinguishing between the T[] and T[new] concepts.
> 4. For storage we need packed, triangular, symmetric, banded, and sparse
> matrix support. (Possibly even 'calculated' matrix support, for
> user-defined matrix types which generate entries on-demand).
> 5. Can be generalized to tensors.
What does this mean? AFAIK tensors are not a mere dimensional generalization of matrices. Tensors are "geometric" entities that are independent of the coordinate system. Besides, all the interplay between covariant and contravariant components is something you don't get with matrices. Zeroth-order tensors are scalars, and first-order tensors are vectors. We can use matrices to represent scalars, various covariant/contravariant vectors, and second-order tensors. I would like to see a Matrix type, but one that respects the mathematical abstraction it is supposed to represent.
> 6. Nice error messages.
> > I now have enough experience with my BLADE library to be confident that all of these features can be achieved in existing D. In particular, the crucial requirement 2 can be completely decoupled from the others.
> > But, there are many requirements (dozens?) which are not on my brief list, and perhaps they cannot all be met simultaneously, even in theory. Yet I'm not convinced that it's impossible. What are the other requirements?
IMHO, when implementing (classic) mathematical abstractions, their primitive conceptual design should be retained.
Pablo

I'm fond of the Blitz++ approach:
1. An array is a view of memory, with dimension, strides, and (possibly) non-zero bases. This design allows multiple, independent views of the same memory.
2. Memory allocation itself is ref counted.
Benefits:
* This design allows a slice, or view to be an lvalue.
* Plays nicely with Python - semantics are similar to numpy/Numeric. Specifically, I can return a view as a Python object; Python can hold it, and ref counting keeps the underlying memory alive.
* Generalized views. For example real(complex_array) is a view, and is an
lvalue.
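A minimal Python sketch of that design (illustrative only, not Blitz++'s actual classes): a strided view over shared storage is an lvalue, and ordinary reference counting keeps the underlying memory alive as long as any view exists. The "generalized view" point is shown by treating the real parts of an interleaved complex array as a stride-2 view.

```python
class View:
    """A 1-D strided view over a flat list of floats."""
    def __init__(self, data, offset, length, stride):
        self.data, self.offset, self.length, self.stride = data, offset, length, stride
    def __getitem__(self, i):
        return self.data[self.offset + i * self.stride]
    def __setitem__(self, i, value):   # views are lvalues: writes go to shared storage
        self.data[self.offset + i * self.stride] = value
    def __len__(self):
        return self.length

# Complex numbers stored as interleaved (re, im) pairs: (1+2i), (3+4i).
storage = [1.0, 2.0, 3.0, 4.0]
real = View(storage, offset=0, length=2, stride=2)   # "real(complex_array)" as a view
imag = View(storage, offset=1, length=2, stride=2)

real[1] = 9.0                          # writing through the view mutates storage
# storage is now [1.0, 2.0, 9.0, 4.0]
```

As long as `real` or `imag` is alive, Python's reference counting keeps `storage` alive, which is the same property the post describes for returning views to Python.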

Craig Black wrote:
> > What are strided arrays? Are they similar to sparse arrays?
Say you have a two-dimensional array and you want to slice a specific column. By design, the elements are not adjacent in memory so a normal slice won't work. I believe such a slice is called a strided array.
Sean

Hi, regarding the organisation of matrices maybe it'd pay to take a look at how Lush is doing them, as it's designed (and known) to do them quite well.
I'll quote from the Lush tutorial:
Matrices are really composed of two separate entities:
* the storage or srg, which contains the following fields: a pointer to the actual data, an element type identifier (and its size), and flags that indicate whether the data is writable and whether it is in RAM or memory-mapped from a file.
* the index or idx, which points to a srg and contains the following fields: the number of dimensions of the tensor, the size in each dimension, the address increment from one element to the next in each dimension (called the modulo), and the offset of the first tensor element in the storage. This structure allows multiple idx to point to the same srg, thereby allowing the data to be accessed in multiple ways without copying.
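A rough Python sketch of that srg/idx split (the class names mirror the tutorial's terms but the code is otherwise hypothetical): one storage, several indexes, and operations like transpose become new indexes over the same storage with no copying.

```python
class Srg:                              # the storage: just owns the data
    def __init__(self, data):
        self.data = data

class Idx:                              # the index: shape, strides ("modulo"), offset
    def __init__(self, srg, shape, strides, offset=0):
        self.srg, self.shape, self.strides, self.offset = srg, shape, strides, offset
    def __getitem__(self, ij):
        addr = self.offset + sum(i * s for i, s in zip(ij, self.strides))
        return self.srg.data[addr]
    def transposed(self):
        # A new Idx over the same Srg -- no data is copied.
        return Idx(self.srg, self.shape[::-1], self.strides[::-1], self.offset)

srg = Srg(list(range(6)))                      # 2x3 matrix: [[0,1,2],[3,4,5]]
a = Idx(srg, shape=(2, 3), strides=(3, 1))
at = a.transposed()
assert a[(1, 2)] == 5 and at[(2, 1)] == 5      # same storage, two views
```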
regards, frank

Oskar Linde wrote:
> Don Clugston wrote:
>> No, I don't have the perfect Matrix. I just want to explore the question of whether it exists.
> > A very interesting topic indeed. I will just give some quick thoughts to start with. First of all, a matrix is (generally) 2-dimensional, but there are many uses for higher-dimensional structures as well. Also, the 1 dimensional vector is a useful structure that could be handled in the same generic framework. Is the target just 2-dimensional matrices or also higher dimensional arrays?
> > The matrix as a mathematical structure is quite different in semantics and use from generic multi-dimensional arrays and should perhaps be treated separately.
> > To avoid confusion, I will use "size" to refer to the number of elements of a matrix/array and "dimensionality" to refer to the number of dimensions of an array (a vector is 1-dimensional, a matrix is 2-dimensional).
> > First, there is the separation of compile time and run time. One can identify (at least) 3 different needs, with different requirements for run-time/compile-time dynamics:
> > * Small static matrices/arrays (where the size and dimensionality are known at compile time)
> * Dynamic matrices/arrays (where the sizes are determined at run-time, but dimensionality is fixed at compile time)
> * Fully dynamic multidimensional arrays (where the dimensionality is determined at run time)
> > The third case could probably in most cases be left out,
I implemented the third case in Murray (nee MultiArray)
www.dsource.org/projects/multiarray
Mainly because I originally wanted an easy way to port my NumPy code to D.
But I have to agree that after spending a lot of time on it, I would go with #2 if I were starting over. Fix the number of dimensions at compile time.
> but the first two make an interesting separation. Knowing the size at compile time is mostly an advantage for smaller matrices (eg 4x4, as commonly used for geometric translations). Larger matrices (eg 1000x1000) probably gain much less from compile time known sizes. There is also the matter of template bloat that means that compile time quantities are best left to the smaller numbers.
For #1, I've never had a need for anything besides 2x2, 3x3, 4x4. Maybe occasionally things like 2x3 to represent affine transformations, but then interpretation starts getting in the way. You can't handle it like a generic 2x3 matrix. So to me, for the compile-time-size case, you can pretty much get by with just a fixed set of classes. Of course, if you can generate those efficiently from a big template, that's great. But there are usually a lot of special cases. If you look at a geometry-oriented linalg lib like Helix you can find many little differences in the API between Matrix22 and Matrix33. They can be taken care of with static ifs, but the question is whether the loss of readability is worth it in the end.
> >> Requirements:
>> 1. Must be compatible with future multi-dimensional array syntax such as
>> double [7,5] m;
>> 2. Must be theoretically capable of optimal performance.
> > Seems reasonable.
> >> 3. Needs to support slices. This means we also need to support strided
>> vectors. Therefore we also need matrix slices, and matrix references.
> > Yes, both strided views and non-strided references are definitely needed.
> >> Probably distinguishing between the T[] and T[new] concepts.
> > Such a distinction is definitely of an advantage. For large matrices, ownership of memory becomes important, as such structures may be too large to be able to rely on the GC.
Sorry, I forgot what this distinction was. Can someone remind me?
>> 4. For storage we need packed, triangular, symmetric, banded, and sparse
>> matrix support. (Possibly even 'calculated' matrix support, for
>> user-defined matrix types which generate entries on-demand).
> > For generality, at least sparse (and possibly calculated) are useful for vectors and higher dimensional arrays too.
>
>> 5. Can be generalized to tensors.
Numbers 4 and 5 are a bit conflicting. There is no "triangular storage" for anything other than 2-D arrays. Same goes for the commonly used compressed matrix formats: they are 2D-specific.
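The 2-D-specific nature of packed triangular storage is easy to see from its index formula; a small sketch (assuming row-major packing of the lower triangle):

```python
# Packed lower-triangular storage: row i holds i+1 elements, so
# element (i, j) with j <= i sits at flat index i*(i+1)//2 + j.
# The formula is inherently tied to two subscripts.
def packed_index(i, j):
    assert j <= i, "only the lower triangle is stored"
    return i * (i + 1) // 2 + j

# A 3x3 lower-triangular matrix needs only 6 slots instead of 9.
packed = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
assert packed[packed_index(2, 1)] == 5.0   # element (2, 1)
```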
Convenient usage for linear algebra also suggests that opMul should do linear algebraic multiplication, but that doesn't make sense for tensors. Changing opMul to do different things depending on the dimension seems inconsistent.
My feeling is that the generic N-D array concept needs to be separated from the linear algebra concepts.
However, I think all these needs can be accommodated by layering things appropriately.
* first, Storage;
* then, an N-D array using some specific storage;
* then, Matrix using a 2-D array, and Vector using a 1-D array.
Implementations of the storage concept can be dimension-specific (like CCS sparse), or generic (like strided memory).
N-D array will have only operators that make sense for N-D arrays without reference to specific interpretations. So opMul would be element-wise multiplication if anything.
Matrix and Vector can implement the linear algebraic rules.
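A toy sketch of that layering (all names hypothetical): the generic array gets only interpretation-free operations, so `*` is element-wise there, while the algebraic product lives only on Matrix.

```python
class NDArray:
    def __init__(self, data, shape):
        self.data, self.shape = list(data), shape
    def __mul__(self, other):
        # Element-wise multiplication: the only '*' that makes
        # sense for a generic N-D array with no interpretation.
        assert self.shape == other.shape
        return NDArray([x * y for x, y in zip(self.data, other.data)], self.shape)

class Matrix(NDArray):
    def matmul(self, other):
        # The linear-algebraic product lives only at this layer.
        (n, m), (m2, p) = self.shape, other.shape
        assert m == m2
        out = [sum(self.data[i*m + k] * other.data[k*p + j] for k in range(m))
               for i in range(n) for j in range(p)]
        return Matrix(out, (n, p))

a = Matrix([1, 2, 3, 4], (2, 2))
b = Matrix([5, 6, 7, 8], (2, 2))
assert (a * b).data == [5, 12, 21, 32]          # element-wise
assert a.matmul(b).data == [19, 22, 43, 50]     # linear algebra
```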
> Yet I'm not convinced that it's impossible. What are the other
>> requirements?
> > Naturally, different element types (as well as UDTs).
I think the UDT part in particular is difficult to do well without reference return values. Even now I think with a complex matrix you have the problem that you can't set the real part of an element using opIndexAssign:
mat[3,2].re = 5; // oops, you just set the real part of a copy!
Or incrementing any type of matrix (+= 1.0). But that's old news.
I think you can achieve something decent using current D, but the syntax will not be ideal. And doing all the above will be a very large effort.
I started collecting a list of possible libraries to draw inspiration from over at the Murray forum: http://www.dsource.org/forums/viewtopic.php?t=3307&sid=b05ae6bad8b91d51286b0096cd4ef9d2
--bb

0ffh wrote:
> > Hi, regarding the organisation of matrices maybe it'd pay to take a look at how Lush is doing them, as it's designed (and known) to do them quite well.
> I'll quote from the Lush tutorial:
> > Matrices are really composed of two separate entities:
> > * the storage or srg which contains the following fields: a pointer to the
> actual data , an element type identifier (and the size thereof) , and flags
> that indicate if the data is writable or not , if it is in RAM or
> memory-mapped from a file.
> > * the index or idx points to a srg and contains the following fields: the number of dimensions of the tensor , the size in each dimension , the address increment from one element to the next in each dimension (called modulo ) , and the offset of the first tensor element in the storage . This structure allows multiple idx to point to the same srg thereby allowing the data to be accessed in multiple ways without copying.
>
That sounds like the way NumPy in Python does it too. And maybe Blitz++ as well?
--bb

Bill Baxter wrote:
> That sounds like the way NumPy in python does it too. And maybe Blitz++ as well?
Hmm, upon reading the last few posts on this topic I decided that I was
probably way behind with this comment... I should read more first.
Seems this way of handling things is widely used; the more reason to
suspect there might be something to it. =)
regards, frank

Bill Baxter Wrote:
[...]
> > The third case could probably in most cases be left out,
> > I implemented the third case in Murray (nee MultiArray)
> www.dsource.org/projects/multiarray
> > Mainly because I originally wanted an easy way to port my NumPy code to D. But I have to agree that after spending a lot of time on it, I would go with #2 if I were starting over. Fix the number of dimensions at compile time.
> > > but the first two make an interesting separation. Knowing the size at compile time is mostly an advantage for smaller matrices (eg 4x4, as commonly used for geometric translations). Larger matrices (eg 1000x1000) probably gain much less from compile time known sizes. There is also the matter of template bloat that means that compile time quantities are best left to the smaller numbers.
> > For #1, I've never had a need for anything besides 2x2,3x3,4x4. Maybe occasionally things like 2x3 to represent affine transformations, but then interpretation starts getting in the way. You can't handle it like a generic 2x3 matrix. So to me for the compile time size case, you can pretty much get by with just a fixed set of classes. Of course if you can generate those efficiently from a big template, then that's great. But there are usually a lot of special cases. If you look at a geometry-oriented linalg lib like Helix you can find many little differences in the API between Matrix22 and Matrix33. They can be taken care of with static ifs, but the question is if the loss of readability it's worth it in the end.
In 2D Finite Elements, other fixed-size small matrices are common (6x6, 9x9,
depending on the type of elements you use), as well as some non-square ones.
[...]
> >> 4. For storage we need packed, triangular, symmetric, banded, and sparse matrix support. (Possibly even 'calculated' matrix support, for user-defined matrix types which generate entries on-demand).
> > > > For generality, at least sparse (and possibly calculated) are useful for vectors and higher dimensional arrays too.
> >
> >> 5. Can be generalized to tensors.
> > Numbers 4 and 5 are a bit conflicting. There is no "triangular storage" for anything other than 2-D arrays. Same goes for the commonly used compressed matrix formats: they are 2D-specific.
Well, that becomes something like "pyramidal" structure in 3D. They are quite common in 3D partial differential equations, like Laplace solvers, or in Fluid Mechanics.
> > Convenient usage for linear algebra also suggests that opMul should do linear algebraic multiplication, but that doesn't make sense for tensors. Changing opMul to do different things depending on the dimension seems inconsistent.
For tensors we'd need something like ., :, and x: dot product, double dot product, and cross product. It may seem inconsistent, but it is math :-)
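For concreteness, here is a small Python sketch of those three products on plain nested lists (no particular API implied): dot is a single contraction, double dot is A:B = sum_ij A_ij * B_ij, and cross is the usual 3-D product.

```python
def dot(u, v):
    # Single contraction of two vectors.
    return sum(a * b for a, b in zip(u, v))

def double_dot(A, B):
    # Double contraction of two 2nd-order tensors: sum_ij A_ij * B_ij.
    return sum(dot(ra, rb) for ra, rb in zip(A, B))

def cross(u, v):
    # The 3-D cross product.
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

assert dot([1, 2, 3], [4, 5, 6]) == 32
assert double_dot([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == 70
assert cross([1, 0, 0], [0, 1, 0]) == [0, 0, 1]
```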
Ciao
Tom
> > My feeling is that the generic N-D array concept needs to be separated from the linear algebra concepts.
> > However, I think all these needs can be accommodated by layering things appropriately.
> > first Storage
> then N-D array using some specific storage
> then Matrix using 2-D array, and Vector using 1-D array.
> > Implementations of the storage concept can be dimension-specific (like CCS sparse), or generic (like strided memory).
> > N-D array will have only operators that make sense for N-D arrays without reference to specific interpretations. So opMul would be element-wise multiplication if anything.
> > Matrix and Vector can implement the linear algebraic rules.
> > > Yet I'm not convinced that it's impossible. What are the other
> >> requirements?
> > > > Naturally, different element types (as well UDT).
> > I think the UDT part in particular is difficult to do well without
> reference return values. Even now I think with a complex matrix you
> have the problem that you can't set the real part of an element using
> opIndexAssign:
> mat[3,2].re = 5; // oops, you just set the real part of a copy!
> > Or for increment for any type of matrix (+= 1.0). But that's old news.
> > > I think you can achieve something decent using current D, but the syntax will not be ideal. And doing all the above will be a very large effort.
> > I started collecting a list of possible libraries to draw inspiration from over at the Murray forum: http://www.dsource.org/forums/viewtopic.php?t=3307&sid=b05ae6bad8b91d51286b0096cd4ef9d2
> > --bb

TomD wrote:
> Bill Baxter Wrote:
> [...]
>>> The third case could probably in most cases be left out,
>> I implemented the third case in Murray (nee MultiArray)
>> www.dsource.org/projects/multiarray
>> Mainly because I originally wanted an easy way to port my NumPy code to D.
>> But I have to agree that after spending a lot of time on it, I would go with #2 if I were starting over. Fix the number of dimensions at compile time.
>>>>> but the first two make an interesting separation. Knowing the size at compile time is mostly an advantage for smaller matrices (eg 4x4, as commonly used for geometric translations). Larger matrices (eg 1000x1000) probably gain much less from compile time known sizes. There is also the matter of template bloat that means that compile time quantities are best left to the smaller numbers.
>> For #1, I've never had a need for anything besides 2x2,3x3,4x4. Maybe occasionally things like 2x3 to represent affine transformations, but then interpretation starts getting in the way. You can't handle it like a generic 2x3 matrix. So to me for the compile time size case, you can pretty much get by with just a fixed set of classes. Of course if you can generate those efficiently from a big template, then that's great. But there are usually a lot of special cases. If you look at a geometry-oriented linalg lib like Helix you can find many little differences in the API between Matrix22 and Matrix33. They can be taken care of with static ifs, but the question is if the loss of readability it's worth it in the end.
> > In 2D Finite Elements some other fixed size small arrays are common, 6x6, 9x9, depending on the type of elements you use, also some non-square ones.
And wouldn't you know it. Right after I wrote that I sat down and started working on some code where I really needed 2x3 matrices. Doh! I guess I just never noticed needing them before.
> [...]
>>>> 4. For storage we need packed, triangular, symmetric, banded, and sparse
>>>> matrix support. (Possibly even 'calculated' matrix support, for
>>>> user-defined matrix types which generate entries on-demand).
>>> For generality, at least sparse (and possibly calculated) are useful for vectors and higher dimensional arrays too.
>> >
>>>> 5. Can be generalized to tensors.
>> Numbers 4 and 5 are a bit conflicting. There is no "triangular storage" for anything other than 2-D arrays. Same goes for the commonly used compressed matrix formats: they are 2D-specific.
> > Well, that becomes something like "pyramidal" structure in 3D.
> They are quite common in 3D partial differential equations, like
> Laplace solvers, or in Fluid Mechanics.
> >> Convenient usage for linear algebra also suggests that opMul should do linear algebraic multiplication, but that doesn't make sense for tensors. Changing opMul to do different things depending on the dimension seems inconsistent.
> > For Tensors we'd need something like ., :, and x, dot product, double dot
> product and cross product. It may seem inconsistent, but it is math :-)
So the thing to do seems to be to have different wrappers for the underlying storage methods that expose different operations.
Or just different sets of functions that treat the storage in different ways. linalg.dot, tensor.dot, ...
It's quite a large software engineering problem. But I'm looking forward to the day when Don is done writing it. :-)
--bb
