I am writing a D library based on some of the stuff in SLATEC, and I've
come to a point where I need to decide on a way to manipulate vectors
and matrices. To that end, I have some ideas and questions I would like
comments on from the community.
Ideally, I want to restrict the user as little as possible, so I'm
writing heavily templated code in which one can use both library-defined
vector/matrix types and built-in arrays (both static and dynamic). My
reasons for this are:
a) Different problems may benefit from different types. Sparse
matrices, dense matrices, triangular matrices, etc. can all be
represented differently based on efficiency and/or memory requirements.
b) I hope that, at some point, my library will be of such a quality
that it may be useful to others, and in that event I will release it.
Interoperability with other libraries is therefore a goal for me, and a
part of this is to let the user choose other vector/matrix types than
the ones provided by me.
c) Often, for reasons of both efficiency and simplicity, it is
desirable to use arrays directly.
My first question goes to those among you who do a lot of linear algebra
in D: Do you think supporting both library types and arrays is worth
the trouble? Or should I just go with one and be done with it?
A user-defined matrix type would have opIndex(i,j) defined, and to
retrieve elements one would write m[i,j]. However, the syntax for
two-dimensional arrays is m[i][j], and this means I have to put a lot of
static ifs around my code, in order to check the type every time I
access a matrix. This leads me to my second question, which is a
suggestion for a language change, so I expect a lot of resistance. :)
Would it be problematic to define m[i,j,...] to be equivalent to
m[i][j][...] for built-in arrays, so that arrays and user-defined types
could be used interchangeably?
(And, importantly, is there anyone but me who thinks they would benefit
from this?)
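To make the difference concrete, here is a minimal sketch (not actual library code) of the two indexing styles:

```d
import std.stdio;

// A user-defined dense matrix type with opIndex(i,j).
struct Matrix
{
    double[] data;        // row-major storage
    size_t rows, cols;

    double opIndex(size_t i, size_t j)
    {
        return data[i*cols + j];
    }
}

void main()
{
    auto m = Matrix([1.0, 2.0, 3.0, 4.0], 2, 2);
    writeln(m[1, 0]);       // user-defined type: m[i,j]

    auto a = [[1.0, 2.0], [3.0, 4.0]];
    writeln(a[1][0]);       // built-in array of arrays: m[i][j]
}
```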
-Lars


How about just making a lightweight array-view class that provides
your interface, but manipulates an underlying m[] passed to the
constructor by the user?
Another point is that if you are willing to write code like
index(m,i,j) inside your lib instead of m[i,j], then you only need the
static conditional inside the index() function.
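A sketch of such an index() helper might look like this (untested, using std.traits):

```d
import std.traits : isArray;

// One access function for both kinds of matrix:
// index(m, i, j) hides the syntactic difference.
auto index(M)(M m, size_t i, size_t j)
{
    static if (isArray!M)
        return m[i][j];   // built-in array of arrays
    else
        return m[i, j];   // user-defined type with opIndex(i,j)
}
```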
--bb


Those are good points. In both cases I guess the function call can be
inlined, so there is little or no performance hit.
-Lars

I do a lot of linear work with economic models in D at work. For this
reason I created a small matrix and vector package that makes use of the
ATLAS library. Most of the time I don't know the sizes of the matrices
and vectors that I am working with until runtime. Because of this, and
to keep the memory contiguous, the backend of my package is implemented
as a dynamic single-dimensional array. Because of this, and the fact
that static arrays cannot exceed 16 MB (according to the D1 docs), I
would suggest just working with opIndex(i,j) instead of the arrays
themselves.
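For illustration, the row-major indexing such a backend uses is simple (a sketch, not the actual package code):

```d
// A dense matrix backed by a single contiguous dynamic array.
struct DenseMatrix
{
    double[] data;
    size_t rows, cols;

    this(size_t r, size_t c)
    {
        rows = r;
        cols = c;
        data = new double[](r * c);   // one contiguous allocation
    }

    // Row-major element access: (i,j) -> data[i*cols + j]
    ref double opIndex(size_t i, size_t j)
    {
        return data[i*cols + j];
    }
}
```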
Just my 2 cents,
JC

I do a lot of linear work with economic models in D at work. For this
reason I created a small matrix and vector package that makes use of the
ATLAS library. Most of the time I don't know the sizes of the matrices
and vectors that I am working with until runtime. Because of this, and
to keep the memory contiguous, the backend of my package is implemented
as a dynamic single-dimensional array.

Is your library available online? It would be helpful to be able to take
a look at it.


Lars,
I would be happy for you to take a look at it. If you will send me a private
e-mail to this address, I'll get my package to you.
JC

I am writing a D library based on some of the stuff in SLATEC, and I've
come to a point where I need to decide on a way to manipulate vectors
and matrices. To that end, I have some ideas and questions I would like
comments on from the community.
Ideally, I want to restrict the user as little as possible, so I'm
writing heavily templated code in which one can use both library-defined
vector/matrix types and built-in arrays (both static and dynamic). My
reasons for this are:
a) Different problems may benefit from different types. Sparse
matrices, dense matrices, triangular matrices, etc. can all be
represented differently based on efficiency and/or memory requirements.

I use all of the above. It would be great to have them all within an
integrated framework.

b) I hope that, at some point, my library will be of such a quality
that it may be useful to others, and in that event I will release it.
Interoperability with other libraries is therefore a goal for me, and a
part of this is to let the user choose other vector/matrix types than
the ones provided by me.

Yes please. It would be great if you considered submitting it to Phobos.

c) Often, for reasons of both efficiency and simplicity, it is
desirable to use arrays directly.

Yah.

My first question goes to those among you who do a lot of linear algebra
in D: Do you think supporting both library types and arrays is worth
the trouble? Or should I just go with one and be done with it?

If you go templated you don't need to explicitly support built-in arrays
- they'll just work.

A user-defined matrix type would have opIndex(i,j) defined, and to
retrieve elements one would write m[i,j].

Yah.

However, the syntax for
two-dimensional arrays is m[i][j], and this means I have to put a lot of
static ifs around my code, in order to check the type every time I
access a matrix.

No, that's not a two-dimensional array; it's an array of arrays. If you
want to make your lib work with arrays of arrays, you could easily build
a little wrapper around it (e.g. JaggedMatrix).
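Such a wrapper could be as small as this sketch (JaggedMatrix is just an illustrative name):

```d
// A thin wrapper that makes an array of arrays answer to m[i,j].
struct JaggedMatrix(T)
{
    T[][] data;

    ref T opIndex(size_t i, size_t j)
    {
        return data[i][j];
    }
}
```

With that, `auto m = JaggedMatrix!double([[1.0, 2.0], [3.0, 4.0]]);` lets `m[1, 0]` forward to `data[1][0]`.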

This leads me to my second question, which is a
suggestion for a language change, so I expect a lot of resistance. :)
Would it be problematic to define m[i,j,...] to be equivalent to
m[i][j][...] for built-in arrays, so that arrays and user-defined types
could be used interchangeably?
(And, importantly, is there anyone but me who thinks they would benefit
from this?)

I am writing a D library based on some of the stuff in SLATEC, and I've
come to a point where I need to decide on a way to manipulate vectors
and matrices. To that end, I have some ideas and questions I would
like comments on from the community.
Ideally, I want to restrict the user as little as possible, so I'm
writing heavily templated code in which one can use both
library-defined vector/matrix types and built-in arrays (both static
and dynamic). My reasons for this are:
a) Different problems may benefit from different types. Sparse
matrices, dense matrices, triangular matrices, etc. can all be
represented differently based on efficiency and/or memory requirements.

I use all of the above. It would be great to have them all within an
integrated framework.

I don't think I'm the man to provide you with that, at least not yet. I
have next to no experience with high-performance linear algebra. This is
a major part of the reason why I want to let people choose for
themselves what matrix types/libraries they want to use in conjunction
with my stuff.
Therefore I am, for now, focusing on other things. I am about halfway
done writing D versions of the QUADPACK routines, and I have started
work on MINPACK. It is for the latter that some basic linear algebra
functionality is needed.
You could take a look at Bill Baxter's MultiArray library, which
contains wrappers for several linear algebra libraries, as well as
storage formats for different types of matrices. Also, I think, it works
with Don's BLADE library. (Kudos to Don for that awesome name, by the
way -- it's a lot cooler than BLAS.) Both are written in D1, though.

b) I hope that, at some point, my library will be of such a quality
that it may be useful to others, and in that event I will release it.
Interoperability with other libraries is therefore a goal for me, and
a part of this is to let the user choose other vector/matrix types
than the ones provided by me.

Yes please. It would be great if you considered submitting it to Phobos.

I agree that a set of decent vector and matrix types should be put into
phobos. But the packages I mention above are perhaps more suited for a
dedicated scientific library, don't you think?
Right now I use D1, but I've been looking at the new phobos lately, and
I have to say that the combination D2+phobos2 is a very enticing one.
Writing array wrappers has never been easier than with the new "alias
this" feature. :)
If it weren't for the lack of a 64-bit compiler for Linux I would switch
immediately. As it is, I am seriously considering using the 32-bit one,
even though it just feels... wrong.
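For example, an array wrapper with alias this can be as simple as this (hypothetical) sketch:

```d
import std.math : sqrt;

// A vector wrapper; thanks to alias this it can be passed
// anywhere a double[] is expected.
struct Vector
{
    double[] data;
    alias data this;

    double norm()
    {
        double s = 0;
        foreach (x; data)
            s += x * x;
        return sqrt(s);
    }
}
```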
Actually, I tried compiling my lib with the latest 32-bit DMD2.
Immediately after pressing enter, memory usage went through the roof and
my computer became completely unresponsive. It took me forever to kill
the dmd process. I guess it has something to do with the heavy use of
templates. Has anyone else experienced this?

c) Often, for reasons of both efficiency and simplicity, it is
desirable to use arrays directly.

Yah.

My first question goes to those among you who do a lot of linear
algebra in D: Do you think supporting both library types and arrays
is worth the trouble? Or should I just go with one and be done with it?

If you go templated you don't need to explicitly support built-in arrays
- they'll just work.

Yeah, vectors work fine, especially now that D has built-in vector
operations. The problem is with matrices, as described in the following.
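For example, the built-in array operations let one write:

```d
void main()
{
    double[] a = [1.0, 2.0, 3.0];
    double[] b = [4.0, 5.0, 6.0];
    auto c = new double[](3);

    c[] = a[] + b[];   // element-wise addition
    c[] *= 2.0;        // element-wise scaling in place
}
```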

A user-defined matrix type would have opIndex(i,j) defined, and to
retrieve elements one would write m[i,j].

Yah.

However, the syntax for two-dimensional arrays is m[i][j], and this
means I have to put a lot of static ifs around my code, in order to
check the type every time I access a matrix.

No, that's not a two-dimensional array; it's an array of arrays. If you
want to make your lib work with arrays of arrays, you could easily build
a little wrapper around it (e.g. JaggedMatrix).

OK, technically it may not be what is called a two-dimensional array.
But it's the closest we've got, no? Or perhaps that would be a
rectangular (static) array, but those still have the [][] indexing syntax.

This leads me to my second question, which is a suggestion for a
language change, so I expect a lot of resistance. :)
Would it be problematic to define m[i,j,...] to be equivalent to
m[i][j][...] for built-in arrays, so that arrays and user-defined
types could be used interchangeably?
(And, importantly, is there anyone but me who thinks they would
benefit from this?)

This wouldn't harm, but it would be a special case.

I know, and possibly an esoteric one, at that. But it would make it
easier to create drop-in array replacements.
-Lars

I am writing a D library based on some of the stuff in SLATEC, and I've
come to a point where I need to decide on a way to manipulate vectors
and matrices. To that end, I have some ideas and questions I would like
comments on from the community.
Ideally, I want to restrict the user as little as possible, so I'm
writing heavily templated code in which one can use both library-defined
vector/matrix types and built-in arrays (both static and dynamic). My
reasons for this are:
a) Different problems may benefit from different types. Sparse
matrices, dense matrices, triangular matrices, etc. can all be
represented differently based on efficiency and/or memory requirements.
b) I hope that, at some point, my library will be of such a quality
that it may be useful to others, and in that event I will release it.
Interoperability with other libraries is therefore a goal for me, and a
part of this is to let the user choose other vector/matrix types than
the ones provided by me.
c) Often, for reasons of both efficiency and simplicity, it is
desirable to use arrays directly.
My first question goes to those among you who do a lot of linear algebra
in D: Do you think supporting both library types and arrays is worth
the trouble? Or should I just go with one and be done with it?

I'd say it's worth the trouble.

A user-defined matrix type would have opIndex(i,j) defined, and to
retrieve elements one would write m[i,j]. However, the syntax for
two-dimensional arrays is m[i][j], and this means I have to put a lot of
static ifs around my code, in order to check the type every time I
access a matrix. This leads me to my second question, which is a
suggestion for a language change, so I expect a lot of resistance. :)

I consider m[i][j] to be a jagged array, which is logically different from
matrix types. (I.e., it's not necessarily rectangular, etc.)

Would it be problematic to define m[i,j,...] to be equivalent to
m[i][j][...] for built-in arrays, so that arrays and user-defined types
could be used interchangeably?


I am writing a D library based on some of the stuff in SLATEC, and I've
come to a point where I need to decide on a way to manipulate vectors
and matrices. To that end, I have some ideas and questions I would like
comments on from the community.
Ideally, I want to restrict the user as little as possible, so I'm
writing heavily templated code in which one can use both
library-defined vector/matrix types and built-in arrays (both static
and dynamic). My reasons for this are:
a) Different problems may benefit from different types. Sparse
matrices, dense matrices, triangular matrices, etc. can all be
represented differently based on efficiency and/or memory requirements.

Yes, the problem is that, depending on the representation, some
operations might not be efficient, or might not even be worth
implementing, especially for sparse matrices.
This field is quite difficult. I think Bill's idea of having your own
wrapper is probably the easiest; a second possibility would be to have
template arguments for the operations you use.
You could even have a template parameter that acts just as a way to
"select" the correct function to call at compile time. Doable, but I
think it would need some thinking to get right.
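Such compile-time selection might be sketched like this (hypothetical; nonZeros is an assumed interface):

```d
enum Storage { dense, sparse }

// The Storage parameter only selects which branch gets compiled in.
double trace(Storage s, M)(M m)
{
    double t = 0;
    static if (s == Storage.dense)
    {
        // Dense: random indexing is cheap.
        foreach (i; 0 .. m.rows)
            t += m[i, i];
    }
    else
    {
        // Sparse: visit only the stored elements.
        // (nonZeros is a hypothetical iteration interface.)
        foreach (i, j, v; m.nonZeros)
            if (i == j)
                t += v;
    }
    return t;
}
```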

b) I hope that, at some point, my library will be of such a quality
that it may be useful to others, and in that event I will release it.
Interoperability with other libraries is therefore a goal for me, and a
part of this is to let the user choose other vector/matrix types than
the ones provided by me.

Good. Apart from Bill's sparse matrix lib and Don's BLADE lib (which it
seems you already found), maybe you can also take a look at my blip lib:
http://dsource.org/projects/blip
It provides dense multidimensional arrays that can call LAPACK through
nice-to-use wrappers: eigv for eigenvalues, dot for dot products, easy
subslicing, basically never copying unless really needed...
It needs the svn version of Tango, though...

c) Often, for reasons of both efficiency and simplicity, it is
desirable to use arrays directly.
My first question goes to those among you who do a lot of linear
algebra in D: Do you think supporting both library types and arrays is
worth the trouble? Or should I just go with one and be done with it?

I'd say it's worth the trouble.

If, for example, you need just matrix-vector multiplication, I would
try to abstract that away.
If your needs are more complex, and you manage to abstract them away
nicely, then go for it; but beware of trying to abstract everything away
before beginning, as the problem is common and well known, and there are
different approaches that are not compatible with each other... Start
with what you use, and try to abstract that away.

A user-defined matrix type would have opIndex(i,j) defined, and to
retrieve elements one would write m[i,j]. However, the syntax for
two-dimensional arrays is m[i][j], and this means I have to put a lot
of static ifs around my code, in order to check the type every time I
access a matrix. This leads me to my second question, which is a
suggestion for a language change, so I expect a lot of resistance. :)

I consider m[i][j] to be a jagged array, which is logically different
from matrix types. (I.e., it's not necessarily rectangular, etc.)

Would it be problematic to define m[i,j,...] to be equivalent to
m[i][j][...] for built-in arrays, so that arrays and user-defined types
could be used interchangeably?

Me too, but that's a bigger language change. At least there ought to be
a matrix type in the standard library.

This is a prime example of something that sparse matrices might not
implement so efficiently. Looping over the indices, maybe row-wise or
column-wise, is more likely to always be efficient; random indexing...
mhh, not so sure.
So think about your needs and start from there, but yes, try to keep
algorithms that need just matrix-vector multiplication abstract.

(And, importantly, is there anyone but me who thinks they would benefit
from this?)