Well, I know this's been savaged before, but I've not had the displeasure of
using it for a long time.
Alas, I have to say AA+in stinks.
Isn't D supposed to be avoiding pointers?
How can something like the following be anything but a hideous warty piece of
crap?
Field findField(char[] fieldName)
in
{
    assert(null !== fieldName);
}
body
{
    Field *pfield = (fieldName in m_values);
    return (null === pfield) ? null : *pfield;
}
Plus it violates KISS - it's only reasonable to do two things in one if they
fit. And they clearly don't in this case.
'in' should just evaluate to true or false. And we should come up with another
mechanism for testing presence *and*
getting the value in one.
Ooo. How about aa having the following 'properties':
    $aa$[key]             => returns value or inserts
    .lookup(key, out value) => returns true and sets value if key in aa,
                               otherwise returns false and leaves value as is
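For what it's worth, the proposed lookup semantics (test presence and fetch in a single probe) can be sketched in Python, since D's AA syntax isn't available here; the helper name and the returned tuple are illustrative stand-ins for the 'out' parameter:

```python
# Hypothetical helper mirroring the proposed aa.lookup(key, out value):
# returns (True, value) if the key exists, else (False, None). Python has
# no 'out' parameters, so the pair plays that role.
def lookup(aa, key):
    sentinel = object()            # distinguishes "absent" from a stored None
    value = aa.get(key, sentinel)  # single probe of the table
    if value is sentinel:
        return False, None
    return True, value

ages = {"bob": 42}
found, age = lookup(ages, "bob")
# found is True, age is 42; lookup(ages, "fred") gives (False, None)
```

The sentinel object is what lets the helper report presence and value in one probe even when the stored value is itself a "falsy" one.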

Alas, I have to say AA+in stinks.
Isn't D supposed to be avoiding pointers?

I have this eerie feeling that "readf" will end up using pointers too ?
And with the lack of movement on the "variadic out" front, it has to...

'in' should just evaluate to true or false. And we should come up with another
mechanism for testing presence *and* getting the value in one.

With the usual amount of ignorance, one can use "in" as returning bool?
(Of course, you would still have to do a second lookup to get the value.)
I suggested that hash[key] should stay the h*ll away from setting keys
that don't exist and just return .init, but lots of people hated it...
I think I will just continue with: (key in hash) ? hash[key] : key.init
and "optimize later". Besides, aren't hash lookups supposed to be fast?
--anders
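Anders' `(key in hash) ? hash[key] : key.init` workaround costs two probes of the table; a Python transliteration (with 0 standing in for `.init`) makes the single-probe alternative explicit:

```python
# Anders' workaround, transliterated: two probes of the table, one for the
# membership test and one for the fetch.
hash_ = {"a": 1}
key = "b"
v1 = hash_[key] if key in hash_ else 0   # 0 stands in for int.init

# Single-probe equivalent: get() tests and fetches at once.
v2 = hash_.get(key, 0)
# v1 == v2 == 0, and nothing was inserted as a side effect
```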

I suggested that hash[key] should stay the h*ll away from
setting keys that don't exist and just return .init, but lots of
people hated it...

I am on your side.
However, I just believe in the psychological phenomena of
understanding. And one of them is that most people, once involved in
a more complex piece of thinking and having then understood it, just do
not want to throw it away, because they have invested energy into this
understanding.
Most people will not accept that the right way to reason about it
is to conclude that, because they had to invest energy, the
introduction of this feature prolongs the time of learning D, and that
this prolonging is only justified if the costs for it are paid back in
the long run.
Diminishing the costs is a major goal of the design of D. But I have
never ever seen costs calculated.
-manfred

Excellent philosophy.
I'd disagree with both of you that it is in any way appropriate for the [] to
return a default in the case where an
entry does not exist.
There are three options:
1. return .init
2. insert an element
3. throw an exception
2. is what C++'s map does, which is quite a mistake (albeit one that could be
ameliorated, given C++'s (current) greater
expressiveness). 1. is a huge mistake, and I guess that's why no
language/library (or at least none I'm aware of) has
taken that option. (The exception of sorts is Ruby, which returns nil.)
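The three options line up one-to-one with standard Python dict idioms, which makes a handy comparison sketch (Python used here purely for illustration):

```python
from collections import defaultdict

# Plain dict subscript: option 3 -- a missing key raises.
d = {}
try:
    d["ghost"]
except KeyError:
    missing_raises = True

# dict.get: option 1 -- return a default, and nothing is inserted.
assert d.get("ghost", 0) == 0 and "ghost" not in d

# defaultdict: option 2, C++ std::map-style -- reading a missing key
# silently default-constructs and *inserts* it.
auto = defaultdict(int)
v = auto["ghost"]
assert v == 0 and "ghost" in auto   # the key now exists as a side effect
```

Python itself picked option 3 for the bare subscript and relegated options 1 and 2 to opt-in APIs, which is essentially the split being argued for in this thread.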
IMO, 3. is the only sane solution. It's the same problem with the stinky crap
of opCmp. Look at the rubbish I've had to
write to get comparable Fields:
class Field
{
    . . .

    /// \name Comparison
    /// {
public:
    int opCmp(Object rhs)
    {
        Field f = cast(Field)(rhs);
        if(null === f)
        {
            throw new InvalidTypeException("Attempt to compare a Field with an instance of another type");
        }
        return opCmp(f);
    }

public:
    int opCmp(Field rhs)
    {
        return (this === rhs) ? 0 : std.string.cmp(m_name, rhs.m_name);
    }
    /// }
What should one do in the case where a Field is asked to compare itself against
_anything_ else? Return 0? -1? +1?
Assert? There's no sensible answer. The only half-sensible solution is to throw
an exception, but that still stinks.
What if, for some reason, someone wants to store a heterogeneous collection and
do weird things with it? Frankly, I find
stuff like this a f-ing joke, and it makes me truly hate D. Or, rather, it
makes me resolve to never use arrays or
associative arrays, and only go for DTL with its compile time type enforcement.
(Of course, it's not ready yet ...)
By the way, I've an article in this month's DDJ about operator []= for C++,
which discusses many of these issues, and
mistakes in the C++ standard library design (which we have to hope we don't
replicate in D!!).
Cheers
Matthew
"Manfred Nowak" <svv1999 hotmail.com> wrote in message
news:d0ha2h$23kt$1 digitaldaemon.com...

[...]

There are three options:
1. return .init
2. insert an element
3. throw an exception
[...]
IMO, 3. is the only sane solution.

I dislike 3 also; it would require code like:

    Field f;
    try {
        f = array["bob"];
    } catch (UnknownKeyException e) {
    }

to get an item from an AA without a double lookup. Basically I worry there
will be a lot of unnecessary try/catch blocks, making the code harder to
read/follow.
The best solution IMO is a method call, e.g.

    bool contains(KeyType key, out ValueType value);

where it returns true/false and assigns 'value' if found.
The code looks much neater:

    if (array.contains("bob", f)) {
    }

It can almost be done with templates, i.e. it requires implicit
instantiation and the array-method feature to be applied.

    template contains(KeyType, ValueType) {
        bool contains(ValueType[KeyType] array, KeyType key, out ValueType value) {
            // ..etc..
        }
    }

    char[char[]] array;
    char[] value;
    contains!(char[],char[])(array, "bob", value);

Imagine the above line with implicit instantiation and the array-method
feature...

    array.contains("bob", value);

perfect!
Regan
Regan

There are three options:
1. return .init
2. insert an element
3. throw an exception
[...]
IMO, 3. is the only sane solution.

I dislike 3 also, it would require code like:
[...]
to get an item from an AA without a double lookup.

Not so. I proposed only a few hours ago that 'in' should be redefined to be a
boolean, and we should add:

    bool $aa$.lookup(in key, out value);

No double lookup.
No ambiguity.
Highly useful.
Highly efficient.

There are three options:
[...]
IMO, 3. is the only sane solution.

I dislike 3 also, it would require code like:
[...]
to get an item from an AA without a double lookup.

Not so. I proposed only a few hours ago that in should be redefined to be
a boolean, and we should add

I saw that post; I proposed the same thing months ago, so we're in agreement
then.
Where does the exception come into it?
Regan

There are three options:
[...]
IMO, 3. is the only sane solution.

I dislike 3 also, it would require code like:
[...]
to get an item from an AA without a double lookup.

Not so. I proposed only a few hours ago that in should be redefined to be a
boolean, and we should add

I saw that post; I proposed the same thing months ago, so we're in agreement
then.
Where does the exception come into it?

The exception only occurs when using the subscript operator and the item does
not exist.

[...]
Where does the exception come into it?

The exception only occurs when using the subscript operator and the item
does not exist

Ahh.. ok, if the 'contains' method is present, then I agree the exception
is the way I'd go.
Regan

[...]
There are three options:
1. return .init
2. insert an element
3. throw an exception

My vote would be for 3, since looking up a key that isn't in the array is the AA
equivalent of indexing a dynamic array outside of its bounds. Since indexing out
of bounds throws an exception (in debug mode), so should an invalid key lookup.
Plus the exception that gets thrown should be general enough to cover both
cases. I had to invent such an exception for MinTL but I'd really really love
to just reuse a general "illegal index" exception from std.
-Ben
ps - Matthew, did you have too much coffee today? Your posts are ... intense.

[...]
There are three options:
1. return .init
2. insert an element
3. throw an exception

My vote would be for 3 since looking up a key that isn't in the array is the AA
equivalent to indexing a dynamic array outside of its bounds. Since indexing
out
of bounds throws an exception (in debug mode) so should invalid key lookup.

Well said!

Plus the exception that gets thrown should be general enough to cover both
cases. I had to invent such an exception for MinTL but I'd really really love
to just reuse a general "illegal index" exception from std.

-Ben
ps - Matthew, did you have too much coffee today? Your posts are ... intense.

Well, I've had very little sleep in the last 48 hours, and I'm stupendously
late in the preparation of the May
instalment of my CUJ column - going to be about Open-RJ/D, which I've just sent
(along with documentation!!!) to
Walter - and I've released 7 libraries in the last two weeks, and I'm 3 weeks
behind the most recent deadline for
starting the writing of my next two books, and my last client's got *more* work
for me.
So, I guess I'm probably being more than a little strange.
Sorry if it's getting old. Am off to grab some sleep now, so enjoy the
cessation, and hope for a more sane barrage
tomorrow. (Now that Open-RJ/D is done, I'll be busy on other things for a few
days, so maybe the volume'll be down at
least.)
Cheers
Matthew

Plus the exception that gets thrown should be general enough to cover both
cases. I had to invent such an exception for MinTL but I'd really really love
to just reuse a general "illegal index" exception from std.

No problem. I'd be happy to get *an* exception and really happy to get a
standard exception.

-Ben
ps - Matthew, did you have too much coffee today? Your posts are ... intense.

[...]

No need to explain - it was just a surprise to be drinking my morning coffee and
open up the newsgroup and see this barrage of posts. The more the better,
though. A flame-war or two in the morning is a good way to get the day started :-)

My vote would be for 3 since looking up a key that isn't in the array is the AA
equivalent to indexing a dynamic array outside of its bounds. Since indexing
out
of bounds throws an exception (in debug mode) so should invalid key lookup.

I disagree. An out of bounds array index is an exceptional case because
the size of the array is, usually, a known quantity - i.e. all members
of the set of numbers from 0...n are valid, and n *must* be known during
the allocation of the array. Unassigned keys in a hashmap are not an
exceptional case because for hashmaps there is no valid set of keys (in
the general case) that is guaranteed to be known at runtime. Besides
which, hashmaps are quite often used as a form of registry where the case
of unassigned keys is common.
I'm in the camp that advocates throwing exceptions only in exceptional
circumstances, and not as a general catch-all failure signal. If you
imagine a world where exceptions are never caught, what are some failure
cases which, the majority of the time, would be an acceptable reason to
terminate the program? Any answers you come up with are good candidates
to define as 'exceptional'. In my book, hashmap lookup failures do not
pass that test.
Just to push my point further, there is not one case where an out of
bounds array index is acceptable. And in every case the developer would
certainly want to know if an attempt to index an out of bounds array
were made. In our imaginary world of uncaught exceptions, this is a
great time to terminate the program. Yet there are a large number of
hashmap use cases where you would not want your application to terminate
because a particular key wasn't assigned. Imagine a plugin system that
loads plugins on demand. Of course the user should be notified that a
particular plugin isn't available to load - but does that pass the test
as an acceptable reason to terminate the application? Not in the general
case; it is in the application-specific domain and therefore it is not
exceptional. And when a failure case is not exceptional, then a testable
value (a boolean or null, for example) should be returned instead so
that the application can decide whether or not to terminate.
True, you can choose to catch an exception and do nothing with it, but
in my book that is poor design, particularly in a language like D which
has no mechanism that requires you to catch exceptions, or even declare
that a particular method or operation throws one. It's too easy to
overlook an exception here or there. If my production software
terminates, it had better be due to an exceptional case.
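Mike's plugin-registry argument can be sketched like so (Python for illustration; the `Plugin`/`find_plugin` names are hypothetical, not from the thread):

```python
# A registry where a missing key is an ordinary, expected outcome:
# return None rather than raising, so callers branch instead of catching.
class Plugin:
    def __init__(self, name):
        self.name = name

def find_plugin(registry, name):
    return registry.get(name)   # None signals "not available"

registry = {"zip": Plugin("zip")}
plugin = find_plugin(registry, "tar")
if plugin is None:
    status = "plugin not available"   # normal control flow, no try/except
else:
    status = "loaded " + plugin.name
```

The caller decides what "not available" means for the application; nothing forces it to terminate or to wrap the lookup in an exception handler.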

My vote would be for 3 since looking up a key that isn't in the array
is the AA equivalent to indexing a dynamic array outside of its bounds.
[...]

I disagree. An out of bounds array index is an exceptional case [...]
If my production software terminates, it had better be due to an
exceptional case.

While I tend to agree with you here.. I think that if there is a way to
query and retrieve the value, i.e.

    bool contains(KeyType key, out ValueType value);

then it is acceptable to throw an exception on:

    array["key"]

because, given the former, the latter makes an assumption that the value
exists. So, when it doesn't exist that assumption is false, and that is an
exceptional case.
Regan

Hear, hear!
It just doesn't make sense to wrap a try/catch around hashmap lookups, where a
missing entry is perfectly legitimate. The performance hit would be even less
attractive.
In article <d0j1fc$ueh$1 digitaldaemon.com>, Mike Parker says...

Ben Hinkle wrote:

[...]

It just doesn't make sense to wrap a try/catch around hashmap lookups,
where a
missing entry is perfectly legitimate. The performance hit would be
even less
attractive.

I don't recall a single post in the last week that's even hinted at such
a thing. Naturally that would be total crap. The throwing [] has been
suggested as an improvement to the semantics of the existing subscript
operation _in combination with_ the addition of a non-throwing,
boolean-returning 'in' and a non-throwing, boolean-returning
lookup(in key, out value) method.
Can we please all either get back on point, or get off the thread?! This
stuff's bordering on the puerile.

In article <d0j1fc$ueh$1 digitaldaemon.com>, Mike Parker says...
[...]

"Exceptions are for exceptional cases" ~ a non-existent entry in a HashMap is
not exceptional in and of itself. Period.

It just doesn't make sense to wrap a try/catch around hashmap lookups,
where a
missing entry is perfectly legitimate. The performance hit would be
even less
attractive.

I don't recall a single post in the last week that's even hinted at such
a thing. Naturally that would be total crap.

Guess I missed a whole lot of posts.

The throwing [] has been
suggested as an improvement to the semantics of the existing subscript
operation _in combination with_ the addition of a non-throwing,
boolean-returning 'in' and a non-throwing, boolean-returning
lookup(in key, out value) method.

OK. Thanks for the catchup. You'll perhaps forgive me for noticing that three
separate and distinct means to 'lookup' an AA entry, and the fact that they
apparently have differing behaviour, might seem a tad odd and unwieldy? Perhaps
we should stick to the lookup() method?

Can we please all either get back on point, or get off the thread?! This
stuff's bordering on the puerile

<cough> Ahem. Is it really? Or are you having a particularly bad day?

In article <d0j1fc$ueh$1 digitaldaemon.com>, Mike Parker says...
[...]

"Exceptions are for exceptional cases" ~ a non-existent entry in a HashMap
is not exceptional in and of itself. Period.

Correct.
But trying to get the value of a non-existent entry is an exceptional
condition. Axiomatic.

[...]
You'll perhaps forgive me for noticing that three
separate and distinct means to 'lookup' an AA entry, and the fact that they
apparently have differing behaviour, might seem a tad odd and unwieldy?

Better to have 10 simple, well-defined, discoverable, sensible things
that just do their jobs in the way any sane person would expect than to
shoehorn many disjoint tasks into one.

Perhaps
we should stick to the lookup() method?

I really don't see the big deal.
int[char[]] vice;
1. Ask if something exists, without wanting to get hold of it _now_
if("arse" in vice)
{
... do something in response to knowing that your "arse" is in
the "vice"
2. Ask if something exists, and get hold of it if it does
int value;
if(vice.lookup("arse", value))
{
... do something with the value of your "arse"
3. "Knowing" that something exists, simply get hold of it
int value = vice["arse"];
If vice does not have your "arse" in it, then it throws an
exception. What else can it sensibly do?
I use associative containers in several languages, including C++
(auto-inserts) and Ruby (returns default), and am constantly hrumphing
around and cursing stupid language designers. Associative containers in
these languages do hidden things. Therefore they're shite. Period.
The above scheme does *nothing* hidden, therefore it is _not_ shite.
Might not be to everyone's tastes, but doing nothing hidden is more
important. And __puh-leezze__ no-one say that throwing an exception is
(necessarily) hidden behaviour. If its part of the [] operators
contract, then it ain't hidden!
Incidentally, I couldn't give a rat's about the syntax. (I'd be quite
happy if AAs lost much/all of their in-built-ness.)
For example, in std.openrj, a record has the following methods:
bool hasField(); // aka 'in'
Field findField(); // aka lookup()
Field getField(); // aka []
This avoids the slight complexity of lookup because a Field, being of
class type, can be represented in its does-not-exist form as a null.
Naturally, in DTL the containers cannot do the same. (I forget exactly
how they do it at the mo ... blush)
btw, Record also provides operator [], which just calls getField().
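
For what it's worth, the null-returning trick described above can be
sketched in a few lines. The names here are illustrative stand-ins, not
the actual std.openrj declarations: the point is only that when the value
type is a class, "does not exist" is representable as null, so neither a
pointer nor an out parameter is needed.

```d
// Sketch only: Field and findField are assumed stand-ins,
// not the real std.openrj signatures.
class Field
{
    char[] name;
}

// With a class-typed value, a failed lookup can simply yield null.
Field findField(Field[char[]] fields, char[] fieldName)
{
    Field* p = (fieldName in fields);
    return (p is null) ? null : *p;
}
```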

Can we please all either get back on point, or get off the thread?! This
stuff's bordering on the puerile.

<cough> Ahem. Is it really? Or are you having a particularly bad day?

Not bad, but obviously somewhat out of kilter.
I know you're up to it, though, my rumbustious friend.

I use associative containers in several languages, including C++
(auto-inserts) and Ruby (returns default), and am constantly hrumphing
around and cursing stupid language designers. Associative containers in
these languages do hidden things. Therefore they're shite. Period.
The above scheme does *nothing* hidden, therefore it is _not_ shite.
Might not be to everyone's tastes, but doing nothing hidden is more
important. And __puh-leezze__ no-one say that throwing an exception is
(necessarily) hidden behaviour. If it's part of the [] operator's
contract, then it ain't hidden!
Incidentally, I couldn't give a rat's about the syntax. (I'd be quite
happy if AAs lost much/all of their in-built-ness.)

Aye; I'm with you, on all the above.

<cough> Ahem. Is it really? Or are you having a particularly bad day?

Not bad, but obviously somewhat out of kilter.
I know you're up to it, though, my rumbustious friend.

Naturally ~ I can certainly dish it out in profuse conglomerations
(talking of which: does anyone recall an old Dudley & Moore skit about "going to
sea on one of Winston Churchill's nose bogeys" ?)

My vote would be for 3, since looking up a key that isn't in the array
is the AA equivalent to indexing a dynamic array outside of its bounds.
Since indexing out of bounds throws an exception (in debug mode), so
should invalid key lookup.

I disagree. An out of bounds array index is an exceptional case
because the size of the array is, usually, a known quantity - i.e. all
members of the set of numbers from 0...n are valid, and n *must* be
known during the allocation of the array. Unassigned keys in a hashmap
are not an exceptional case because for hashmaps there is no valid set
of keys (in the general case) that is guaranteed to be known at
runtime. Besides which, hashmaps are quite often used as a form of
registry where the case of unassigned keys is common.

Well, we must use associative containers in much different ways. What
you've described simply does not tally with my experience and use
patterns.

I'm in the camp that advocates throwing exceptions only in exceptional
circumstances, and not as a general catch-all failure signal.

Same here. And trying to get a value out of an associative container
that does not exist is exactly that, exceptional. (Or it certainly
should be.)

If you imagine a world where exceptions are never caught, what are
some failure cases which, the majority of the time, would be an
acceptable reason to terminate the program?

Anything that violates a program's design should result in its
termination.

Any answers you come up with are good candidates to define as
'exceptional'.

Absolute nonsense.
I'm bugging out now, because I fear we're so far apart that there's no
point wasting our breaths. :-(

Same here. And trying to get a value out of an associative container
that does not exist is exactly that, exceptional. (Or it certainly
should be.)

I just can't agree with this. A missing key is simply a missing key. See
below.

Absolute nonsense.
I'm bugging out now, because I fear we're so far apart that there's no
point wasting our breaths. :-(

Too bad, because I think this is something that needs to be discussed.
Associative arrays are an integral part of the language that will be
used frequently. And maybe the fact that you and I are so far apart
indicates that other people are going to have extremely different views
on the issue as well.
The problem, as I see it, is twofold. First, is the [] operator. This
causes people to view aa's in the same light as normal arrays. From that
perspective, I can understand how aa["missing key"] can be construed as
functionally equivalent to array[out_of_bounds_index]. I see the former
as being more like a substitute for Hashmap.get("missing key") in Java,
which always returns null and which I have never heard anyone argue
should throw an exception instead.
The second problem is that aas allow any sort of value to be stored. If
it allowed only class objects, then we wouldn't need the pointer syntax
which 'in' currently returns (which seems to be the bit that ignited the
discussion in the first place). But that's nasty. I surely don't want to
wrap my struct instances and integrals in a class just to put them in an aa.
In my opinion, aas should function thusly:
boolean contains = (key in aa);
int* val = aa["key"]; // return null if missing
I don't see a problem with using pointers as return values, as it
eliminates the requirement that all aa values be objects, and pointers
are a part of the language anyway.
Perhaps I'm wrong viewing associative arrays in the same light as Java's
Hashmaps. But in my mind it's a natural way to look at it. What else are
they if not hashmaps?

In my opinion, aas should function thusly:
boolean contains = (key in aa);
int* val = aa["key"]; // return null if missing
I don't see a problem with using pointers as return values, as it
eliminates the requirement that all aa values be objects, and pointers
are a part of the language anyway.

And it would make more sense, than the current situation:
bool contains = cast(bool) (key in aa);
int* val = key in aa; // return null if missing
It's just that some people tend to hate pointers...

Perhaps I'm wrong viewing associative arrays in the same light as Java's
Hashmaps. But in my mind it's a natural way to look at it. What else are
they if not hashmaps?

Technically it does not have to be a Hash, but it is definitely a Map...
--anders

Same here. And trying to get a value out of an associative container
that does not exist is exactly that, exceptional. (Or it certainly
should be.)

I just can't agree with this. A missing key is simply a missing key. See
below.

I'm with Matthew on this one. Assuming there is a method:
bool contains(KeyType key, out ValueType value);
that returns true/false and sets value when found.
Then array["key"] makes an assumption (that the key exists), and if it's
false, that is an exception.

Absolute nonsense.
I'm bugging out now, because I fear we're so far apart that there's no
point wasting our breaths. :-(

Too bad, because I think this is something that needs to be discussed.
Associative arrays are an integral part of the language that will be
used frequently. And maybe the fact that you and I are so far apart
indicates that other people are going to have extremely different views
on the issue as well.
The problem, as I see it, is twofold. First, is the [] operator. This
causes people to view aa's in the same light as normal arrays. From that
perspective, I can understand how aa["missing key"] can be construed as
functionally equivalent to array[out_of_bounds_index].

This is not why I want [] to throw an exception.

I see it the former being more like a substitute for
Hashmap.get("missing key") in Java, which always returns null and which
I have never heard anyone argue should being throwing an exception
instead.

The two are similar in that they're both implementations of the same
concept or idea, however the Java hashmap is limited in scope to objects,
which is why it can return null. The same is not true for D's AA's.

The second problem is that aas allow any sort of value to be stored. If
it allowed only class objects, then we wouldn't need the pointer syntax
which 'in' currently returns (which seems to be the bit that ignited the
discussion in the first place). But that's nasty. I surely don't want to
wrap my struct instances and integrals in a class just to put them in an
aa.

Indeed.

In my opinion, aas should function thusly:
boolean contains = (key in aa);
int* val = aa["key"]; // return null if missing
I don't see a problem with using pointers as return values, as it
eliminates the requirement that all aa values be objects, and pointers
are a part of the language anyway.

That is one solution, however I think a much better solution is:
bool contains(KeyType key, out ValueType value);
1. no pointers.
2. can get and/or check for a value in one operation.
3. reads well:
if (array.contains("bob",value)) {
}
given that, I believe [] should throw an exception.

Perhaps I'm wrong viewing associative arrays in the same light as Java's
Hashmaps. But in my mind it's a natural way to look at it. What else are
they if not hashmaps?

I agree, they're the same in concept, just not in implementation.
Regan

Same here. And trying to get a value out of an associative container
that does not exist is exactly that, exceptional. (Or it certainly
should be.)

I just can't agree with this. A missing key is simply a missing key. See
below.

I'm with Matthew on this one. Assuming there is a method:
bool contains(KeyType key, out ValueType value);
that returns true/false and sets value when found.
Then array["key"] makes an assumption (that the key exists), and if it's
false, that is an exception.

This proposal is pretty nice. At the risk of going even more off-topic we
should add a method to remove an item from the AA instead of "abusing"
delete. I had guessed before that the reason Walter chose to use delete was
to avoid adding a method to AAs. But now there are two good methods to add
to AAs and so I hope he considers sacrificing the aesthetic beauty of
method-less AAs for them.
The "in" operator should stay since it would be annoying to have to pass a
dummy value parameter just to see if a key is in the array.
So, my preferred AA api:
bit opIn(Key key) // possibly return value* instead
Value opIndex(Key key) // throws on missing key
bit contains(Key key, out Value value)
void remove(Key key) // ignores missing key
... rest as before except "delete" is removed...
This would make opIn the only obstacle to writing a user type that mimics
builtin AAs.
-Ben
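
A user type mimicking this proposed surface could look roughly as below.
This is a sketch under the thread's assumptions: 'contains' and 'remove'
are the proposed names, not existing built-ins, opIn is not something
user types can currently define, and removal still goes through the
delete syntax being complained about (here at least wrapped away).

```d
// Sketch of the proposed AA API on a wrapper struct.
struct Map(Key, Value)
{
    Value[Key] data;

    // opIndex: throws on a missing key, per the proposal.
    Value opIndex(Key key)
    {
        Value* p = (key in data);
        if (p is null)
            throw new Exception("missing key");
        return *p;
    }

    // contains: non-throwing presence test plus retrieval in one call.
    bool contains(Key key, out Value value)
    {
        Value* p = (key in data);
        if (p is null)
            return false;
        value = *p;
        return true;
    }

    // remove: ignores a missing key, hiding the 'delete' wart.
    void remove(Key key)
    {
        if (key in data)
            delete data[key]; // the era's removal syntax, wrapped away
    }
}
```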

ack - let me add another one:
Value* insert(Key key) // lookup and insert if not present
That way in the example at the end of
http://www.digitalmars.com/d/arrays.html the expression
dictionary[word]++;
would become
(*dictionary.insert(word))++;
To me even though it looks uglier it's more explicit about what is really
going on.
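
Until something like insert() exists, the word-count example has to pay
for an extra lookup. This sketch shows the cost the proposal would
remove (countWord is a made-up helper name):

```d
// Word-frequency counting without the proposed insert():
// probe with "in", then either bump in place or insert explicitly.
int[char[]] dictionary;

void countWord(char[] word)
{
    int* p = (word in dictionary);
    if (p !is null)
        (*p)++;                // key present: reuse the first lookup
    else
        dictionary[word] = 1;  // key absent: a second lookup to insert
}
```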

Same here. And trying to get a value out of an associative
container that does not exist is exactly that, exceptional. (Or it
certainly should be.)

I just can't agree with this. A missing key is simply a missing key.
See below.

I'm with Matthew on this one. Assuming there is a method:
bool contains(KeyType key, out ValueType value);
that returns true/false and sets value when found.
Then array["key"] makes an assumption (that the key exists), and if
it's false, that is an exception.

This proposal is pretty nice. At the risk of going even more off-topic
we should add a method to remove an item from the AA instead of
"abusing" delete.

Not off topic. I think no-one's mentioned it because it is such an
obvious and total wart. I'd confidently say that aa/delete is the
(current) worst part of D. Anyone think of anything more stupid??
I'm just expecting Walter to kill it quietly one day, to avoid the shit
storm of protest if he doesn't.
Or maybe I'm being naive. In which case I'd better get my shovel ...

My vote would be for 3, since looking up a key that isn't in the array is
the AA equivalent to indexing a dynamic array outside of its bounds.
Since indexing out of bounds throws an exception (in debug mode), so
should invalid key lookup.

I disagree. An out of bounds array index is an exceptional case because
the size of the array is, usually, a known quantity - i.e. all members
of the set of numbers from 0...n are valid, and n *must* be known during
the allocation of the array.

Dynamic arrays grow and shrink all the time. Same thing with adding and removing
keys from an AA. The only difference (conceptually) is that dynamic arrays have
a continuous block of integer keys. But I don't see why that matters.

Unassigned keys in a hashmap are not an
exceptional case because for hashmaps there is no valid set of keys (in
the general case) that is guaranteed to be known at runtime.

Same for dynamic arrays. An empty array should have no valid lookups. Let me put
it this way, when you look up the phone number of someone in the phone book and
it isn't there, do you dial a random number, or make up a phone number and write
it into the phone book? I doubt it.

Besides
which, hasmaps are quite often used as a form of registry where the case
of unassigned keys is common.

ok, that's fine. For those cases one can use the function that doesn't throw -
what we know as "in" today. The question here is what to do with expressions of
the form aa[key] that must return something of the value type.
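
The non-throwing lookup being alluded to is the pattern this thread keeps
coming back to; a minimal sketch for an int-valued AA (safeGet is an
illustrative name, not a built-in):

```d
// Probe with "in" first; fall back to .init on a miss.
// No insertion happens and nothing is thrown.
int safeGet(int[char[]] aa, char[] key)
{
    int* p = (key in aa);
    return (p is null) ? int.init : *p;
}
```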

I'm in the camp that advocates throwing exceptions only in exceptional
circumstances, and not as a general catch-all failure signal. If you
imagine a world where exceptions are never caught, what are some failure
cases which, the majority of the time, would be an acceptable reason to
terminate the program? Any answers you come up with are good candidates
to define as 'exceptional'. In my book, hashmap lookup failures do not
pass that test.

True, throwing exceptions shouldn't be done willy-nilly. So what do you
propose? I take it you are comfortable with the current behavior? I'm

Just to push my point further, there is not one case where an out of
bounds array index is acceptable. And in every case the developer would
certainly want to know if an attempt to index an out of bounds array
were made. In our imaginary world of uncaught exceptions, this is a
great time to terminate the program. Yet there are a large number of
hashmap use cases where you would not want your application to terminate
because a particular key wasn't assigned.

Again, there are other ways of safely looking up a key that might not be in the
array.

Imagine a plugin system that
loads plugins on demand. Of course the user should be notified that a
particular plugin isn't available to load - but does that pass the test
as an acceptable reason to terminate the application? Not in the general
case, it is in the application specific domain and therefore it is not
exceptional. And when a failure case is not exceptional, then a testable
value (a boolean or null, for example) should be returned instead so
that the application can decide whether or not to terminate.

ahh - I think I see the miscommunication. That last sentence says something
about returning a testable value - which to me is what "in" is for. We are
talking about two different functions. I totally agree there should be an
exception-free key lookup. It's just that aa[key] is not that function.

True, you can choose to catch an exception and do nothing with it, but
in my book that is poor design, particularly in a language like D which
has no mechanism that requires you to catch exceptions, or even declare
that a particular method or operation throws one. It's too easy to
overlook an exception here or there. If my production software
terminates, it had better be due to an exceptional case.

I disagree. An out of bounds array index is an exceptional case because
the size of the array is, usually, a known quantity - i.e. all members
of the set of numbers from 0...n are valid, and n *must* be known during
the allocation of the array.

Dynamic arrays grow and shrink all the time. Same thing with adding and
removing keys from an AA. The only difference (conceptually) is that
dynamic arrays have a continuous block of integer keys. But I don't see
why that matters.

Maybe I'm missing something, but the only way in D for a dynamic array
to grow or shrink is to set the length property, correct? That means the
last index is always known - the index (n + 1) always points to an area
of memory beyond the end of the array.
I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays. From this
perspective, it's easy to draw the conclusion that aa["missing key"]
is invalid and exceptional. But if D had a hashmap class instead, would
hashmap.get("missing key") still be viewed as an exceptional case?

Maybe I'm missing something, but the only way in D for a dynamic array
to grow or shrink is to set the length property, correct? That means the
last index is always known - the index (n + 1) always points to an area
of memory beyond the end of the array.

You can also implicitly set .length, by appending stuff with ~=

I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays. From this
perspective, it's easy to draw the conclusion that aa["missing key"]
is invalid and exceptional.

It's also a sense of how you view variables that have not been set...
Are they implicitly set to a known usable value, or are they off limits?
int i;
// what is the value of i ? Is it 0 (int.init), or is it
// TryingToUseAnUninitializedVariableException ?
int[] a;
a.length = 10;
a[0] = 1;
a[1] = 2;
a[2] = 3;
// what is the value of a[3] ? Is it 0 (int.init), or is it
// TryingToUseAnUninitializedVariableException ? The index
// is within the bounds [0..9], so it's not OutOfBounds
int[int] aa;
aa[0] = 1;
aa[1] = 2;
aa[2] = 3;
// what is the value of aa[3] ? :) is the "bounds" of the hash
// the current keys [0,1,2], or is every possible key (int) ?
// depending on the definition, it *might* be OutOfBounds...
Currently, all of these return 0 (that is: the .init value
for variables and arrays, and a zero-filled value for AAs)
It's just that the last one above, with the AA, also has a nasty
*side effect* of also adding the key that was used for lookup ?
And that is why it was reported as a bug in D. (several times)
Having it throw an exception would NOT be a good way to fix it.
Removing the side effect would be enough. I have not heard of
anyone actually *using* this side effect, to set the value...

Note that it doesn't even use the .init value, just zeroes...
Which means that char's get 0x00 and not 0xFF, floats here
get 0.0f instead of float.nan, and so on... Which sucks (too).
My thoughts: (still)
Please make it use .init, and *avoid* creating new elements ?
--anders
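
The side effect under discussion can be demonstrated directly, assuming
the pre-fix semantics described above, where a plain read through []
inserts the missing key:

```d
// A read of a missing key through [] silently adds the key...
int[int] aa;
aa[0] = 1;
int x = aa[3];
assert((3 in aa) !is null); // key 3 now exists, zero-filled value

// ...whereas probing with "in" leaves the AA untouched.
int[int] bb;
bb[0] = 1;
int* p = (3 in bb);
assert(p is null);
assert(bb.length == 1);     // nothing was inserted
```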

Currently, all of these return 0 (that is: the .init value
for variables and arrays, and a zero-filled value for AAs)
It's just that the last one above, with the AA, also has a nasty
*side effect* of also adding the key that was used for lookup ?

I think it's nasty also.

And that is why it was reported as a bug in D. (several times)
Having it throw an exception would NOT be a good way to fix it.

I think throwing an exception is better than returning type.init.
I think it's better because it will highlight an invalid assumption
immediately instead of propagating the bug further down the source to
another location where you get unexpected (but consistent, thanks to
type.init) behaviour.
But wait, you say, it's not an invalid assumption, I _want_ the value _or_
type.init, to which I answer, use 'contains':
aa.contains("bob",value);
value is an 'out' parameter, and will be set to type.init.
As long as 'contains' exists I will be happy.
bool contains(KeyType key, out ValueType value);
bool contains(KeyType key);
Regan

Maybe, but remember that you can write non-OOP code in D too... :-)
And I think such an Exception would be better off hidden in a
HashMap class library, instead of in the core language spec ?

But wait, you say, it's not an invalid assumption, I _want_ the value
_or_ type.init, to which I answer, use 'contains':
aa.contains("bob",value);
value is an 'out' parameter, and will be set to type.init.
As long as 'contains' exists I will be happy.
bool contains(KeyType key, out ValueType value);
bool contains(KeyType key);

And I think such an Exception would be better off hidden in a
HashMap class library, instead of in the core language spec ?

Why? What's the difference between a hashmap class library and built in
AA's?

But wait, you say, it's not an invalid assumption, I _want_ the value
_or_ type.init, to which I answer, use 'contains':
aa.contains("bob",value);
value is an 'out' parameter, and will be set to type.init.
As long as 'contains' exists I will be happy.
bool contains(KeyType key, out ValueType value);
bool contains(KeyType key);

But that's how it works now ? Just using a pointer,
instead of: a "boolean" (bit) and an out reference.

I don't want to use pointers (I can, I just dont want to). I want
'contains' built into AA's. Failing that I want implicit template
instantiation and the array method feature, so I can write a template to
add contains to AA's myself.
In short I want to be able to say:
if (aa.contains("bob",value)) {
}
Regan
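
Without implicit instantiation or the array-method rewrite, the closest
approximation today is an explicitly instantiated function template; a
sketch, where the name 'contains' is the proposed one, not an existing
built-in:

```d
// Free-function 'contains': presence test and retrieval in one call,
// without exposing a pointer to the caller.
template contains(Key, Value)
{
    bool contains(Value[Key] aa, Key key, out Value value)
    {
        Value* p = (key in aa);
        if (p is null)
            return false;
        value = *p;
        return true;
    }
}
```

Called as contains!(char[], int)(aa, "bob", value) rather than the
wished-for aa.contains("bob", value).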

Yeah, we just differ in opinion on whether missing keys are errors ?
If they are, then an exception is OK... Something like ArrayBoundsError.

And I think such an Exception would be better off hidden in a
HashMap class library, instead of in the core language spec ?

Why? What's the difference between a hashmap class library and built in
AA's?

Objects ? :-)

In short I want to be able to say:
if (aa.contains("bob",value)) {
}

I like the current aa["bob"] syntax, just don't like the side effect.
The above looks too much like a function call, to be a built-in op ?
But I'm all for discussing/improving the syntax of "in" and "delete"
--anders

Why? What's the difference between a hashmap class library and
built in AA's?

Objects ? :-)

? I'm not following...

A class library implies that it would be using classes
(or structs, if you want to do C++ish performance hacks)
The built-in arrays are not classes, any more than
strings are, so they would not use the same syntax ?
For a HashMap library, then a "contains" method with
parameters is perfectly alright and could even throw
exceptions if the class library designer so wished...
But a built-in type is more about operators and such ?

The above looks too much like a function call, to be a built-in op ?
But I'm all for discussing/improving the syntax of "in" and "delete"

I like "in".

With the pointer and all ? Maybe one could tweak the
language so that "in" can return either/or, and not
require a cast to be assigned to a bool/bit variable...
Then you would only have to know about the "special feature"
when doing performance tricks to avoid any double lookups ?

I think "delete" should be done as contains is done above, i.e.
aa.delete("bob");
or perhaps it should be called "remove"?

I'm not sure you could even use the "delete" keyword, if you wanted?
"remove" would work great for a library, along with get/put methods...
I think I will hack together a simple Object-storing Map/HashMap,
just to see what it would look like in D ? (much like my String)
Still think "out" to be a great pun. :-)
value = "bob" in aa; => aa.contains("bob",value);
value = "bob" out aa; => aa.remove("bob", value);
--anders

Why? What's the difference between a hashmap class library and
built in AA's?

Objects ? :-)

A class library implies that it would be using classes
(or structs, if you want to do C++ish performance hacks)
The built-in arrays are not classes, any more than
strings are, so they would not use the same syntax ?

Ahh. I follow now. Yes, the syntax would be different.

For a HashMap library, then a "contains" method with
parameters is perfectly alright and could even throw
exceptions if the class library designer so wished...

If it returns bool, then there is no need to.
Or rather, I wouldn't.

But a built-in class is more about operators and such ?

We can emulate the built-ins to a large degree. Not perfectly, but
perhaps that will change?

The above looks too much like a function call, to be a built-in op ?
But I'm all for discussing/improving the syntax of "in" and "delete"

With the pointer and all ?

I have no problem saying "if (a in b)" where "a in b" equates to a pointer.
I would prefer it returned bool, and I had contains.

Maybe one could tweak the
language so that "in" can return either/or, and not
require a cast to be assigned to a bool/bit variable...

Interesting idea, by checking the lhs type perhaps? That sounds a bit like
picking an overload based on return type.

Then you would only have to know about the "special feature"
when doing performance tricks to avoid any double lookups ?

It would be best if no 'tricks' were required to get good performance.
I think if there are clearly 3 methods, for clearly different purposes
we'll get that.

I think "delete" should be done as contains is done above, i.e.
aa.delete("bob");
or perhaps it should be called "remove"?

I'm not sure you could even use the "delete" keyword, if you wanted?
"remove" would work great for a library, along with get/put methods...
I think I will hack together a simple Object-storing Map/HashMap,
just to see what it would look like in D ? (much like my String)

Maybe, but remember that you can write non-OOP code in D too... :-)
And I think such an Exception would be better off hidden in a
HashMap class library, instead of in the core language spec ?

But wait, you say, it's not an invalid assumption, I _want_ the value _or_
type.init, to which I answer, use
'contains':
aa.contains("bob",value);
value is an 'out' parameter, and will be set to type.init.
As long as 'contains' exists I will be happy.
bool contains(KeyType key, out ValueType value);
bool contains(KeyType key);

But that's how it works now ? Just using a pointer,
instead of: a "boolean" (bit) and an out reference.

Come on. The pointer stuff is just a joke. A hack that'll make people
question the sanity, or at least the intelligence, of the language
designers. I can't believe you don't see that, never mind be concerned
about it.

But that's how it works now ? Just using a pointer,
instead of: a "boolean" (bit) and an out reference.

Come on. The pointer stuff is just a joke. A hack that'll make people
question the sanity, or at least the intelligence, of the language
designers. I can't believe you don't see that, never mind be concerned
about it.

I do see it. I think it was simpler when they were
separate (like in DMD before version 0.107, or in
the Map interface of the Java Collections library)
The current "in" expr is a performance hack, nothing else...
I actually prefer to not use the pointer, just like a "bool".
My point was just that if you want both-at-once, it's there ?
As long as D uses the C++ way of setting non-existent elements,
or if it will switch to the proposed way of throwing Exceptions,
I need to use "in" before accessing the array as a workaround.
Otherwise I would use it after, *if* I cared about the difference...
(between keys that don't exist, and keys that map to the .init value)
It seems that other people are using AA's in other ways, so I let them.
--anders

I disagree. An out of bounds array index is an exceptional case
because the size of the array is, usually, a known quantity - i.e. all
members of the set of numbers from 0...n are valid, and n *must* be
known during the allocation of the array.

and removing
keys from an AA. The only difference (conceptually) is that dynamic
arrays have
a continuous block of integer keys. But I don't see why that matters.

Maybe I'm missing something, but the only way in D for a dynamic array
to grow or shrink is to set the length property, correct? That means the
last index is always known - the index (n + 1) always points to an area
of memory beyond the end of the array.
I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays. From this
perspective, it's easy to draw the conclusion that aa["missing key"]
is invalid and exceptional. But if D had a hashmap class instead, would
hashmap.get("missing key") still be viewed as an exceptional case?

Yes or No. :)
If we assume the hashmap class has (at least) these methods:
class hashmap(KeyType,ValueType) {
//to get value
ValueType get(KeyType key) {..}
//to query existence and get value
bool contains(KeyType key, out ValueType value) {..}
//to query existence.
bool contains(KeyType key) {..}
}
One could argue for 'get' throwing an exception, because there is a
'contains' method which you should use if you're not 100% certain the key
exists.
One could argue for 'get' returning type.init, because there is a
'contains' method which can be used if it's important whether it exists or
not.
Truth be told, I would be happy with either method, so long as I get my
'contains' method in the form:
bool contains(KeyType key, out ValueType value) {..}
Regan
p.s. This is assuming nothing changes WRT char[].init stopping me from
telling an "empty" string apart from an "undefined" string, as I think
this is important.

I disagree. An out of bounds array index is an exceptional case because
the size of the array is, usually, a known quantity - i.e. all members
of the set of numbers from 0...n are valid, and n *must* be known during
the allocation of the array.

Dynamic arrays grow and shrink all the time. Same thing with adding and
removing
keys from an AA. The only difference (conceptually) is that dynamic arrays have
a continuous block of integer keys. But I don't see why that matters.

Maybe I'm missing something, but the only way in D for a dynamic array
to grow or shrink is to set the length property, correct? That means the
last index is always known - the index (n + 1) always points to an area
of memory beyond the end of the array.
I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays.

This looks like pure supposition. Or at least I suppose it is. Do you have
evidence to support this?

I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays.

This looks like pure supposition. Or at least I suppose it is. Do you have
evidence to support this?

Just some comments I have read in this thread, such as in one of your posts:
"I'd disagree with both of you that it is in any way appropriate for
the [] to return a default in the case where an entry does not exist."
Couple that with the fact that I have never heard anyone ask for
exceptions to be thrown from Hashmap.get() in Java and you may see where
I draw the conclusion. My interpretation is that when thinking of a
Hashmap class vs. D's associative arrays one might have different
expectations, even if the functionality is conceptually the same - is
Hashmap.get("key") not the same as aa["key"]?

I said in another post that the [] used by associative arrays causes
people to view them in the same light as normal arrays.

This looks like pure supposition. Or at least I suppose it is. Do you
have evidence to support this?

Just some comments I have read in this thread, such as in one of your
posts:
"I'd disagree with both of you that it is in any way appropriate for
the [] to return a default in the case where an entry does not exist."

And from Anders (just so I don't single out Matthew!):
"I guess it boils down to whether you consider an
empty array to be full of valid lookups, or not...
To me, a dynamic array is full of .init values
so then it makes sense that associative arrays
should also be full of .init values as well."

Couple that with the fact that I have never heard anyone ask for
exceptions to be thrown from Hashmap.get() in Java and you may see where
I draw the conclusion. My interpretation is that when thinking of a
Hashmap class vs. D's associative arrays one might have different
expectations, even if the functionality is conceptually the same - is
Hashmap.get("key") not the same as aa["key"]?

Are you really overriding the method in your example?
In main() you call foo() from an object of type C with a type C. Does
the method name as well as type need to be the same for an override?
I'm not sure about this, because you new'd with a B(), and B inherits or
implements C. You can tell I've been writing way too much Java recently
and not enough D.
I probably would have expected this to print "C".
BA
John Demme wrote:

I thought that one needed to manually override the method in any
situation where the child class's method's parameter was a child of the
parent's method's parameter... ie:
import std.stdio;

class C {
    void foo(C c) {
        writefln("C");
    }
}

class B : C {
    void foo(B b) {
        writefln("B");
    }
}

void main() {
    C b = new B();
    b.foo(b);
}
Prints "C", when I think it should print "B". Is this something that D
will do eventually, or no?
John
Walter wrote:

Prints "C", when I think it should print "B". Is this something that D
will do eventually, or no?

What you're asking for is Java style overloading. D does C++ style
overloading. There's been heated debate about which is better.

I think the issue is about overriding not overloading. It's probably some
form of "covariance" but I'm not an expert on that stuff. The function call
in main() uses C as the declared type
C b = new B();
b.foo(b);

I thought that one needed to manually override the method in any situation
where the child class's method's parameter was a child of the parent's
method's parameter... ie:
import std.stdio;

class C {
    void foo(C c) {
        writefln("C");
    }
}

class B : C {
    void foo(B b) {
        writefln("B");
    }
}

void main() {
    C b = new B();
    b.foo(b);
}
Prints "C", when I think it should print "B". Is this something that D
will do eventually, or no?

It would be very hard to do with the standard vtable mechanism (I think).
The call b.foo(b) compiles into basically "take the vtable for b and call
the first function in the C section" since the type of b is C and the first
function of C is foo. To implement what you suggest using that same
mechanism the function B.foo would have to be stored in the slot for C.foo -
which is illegal since users can pass *any* object of type C to C.foo but
not B.foo (if I have that written straight). So I would imagine some sort of
double dispatching would be needed to implement what you suggest.
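The double dispatch the poster alludes to can be sketched in C++ (the names here, such as overlapWith, are my own invention for illustration, not anything proposed in the thread): the first virtual call dispatches on the left operand, and inside the derived class's override a second virtual call recovers the right operand's type through overload resolution.

```cpp
#include <string>

struct Circle;

struct TwoDImage {
    virtual ~TwoDImage() {}
    // First dispatch: virtual on the left-hand object.
    virtual std::string overlap(TwoDImage& other) {
        return other.overlapWith(*this);  // *this is seen as TwoDImage&
    }
    // Second dispatch: the overload chosen reveals the caller's static type,
    // and the virtual call resolves it against the right operand's dynamic type.
    virtual std::string overlapWith(TwoDImage&) { return "generic"; }
    virtual std::string overlapWith(Circle&)    { return "generic"; }
};

struct Circle : TwoDImage {
    std::string overlap(TwoDImage& other) override {
        return other.overlapWith(*this);  // *this is seen as Circle&
    }
    std::string overlapWith(Circle&) override {
        return "circle-circle fast path";  // taken only when both sides are Circles
    }
};
```

Only when both operands are dynamically Circles does the fast path fire; every mixed pairing falls back to the generic base implementation, which is exactly the safety property the vtable-slot argument above demands.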


You are correct... please let me amend my previous statement to: I have
done *similar* things in Java... I was writing a Tree-based map, and
used overriding and overloading... It may have been that an interface
was being used, and the structure was different. I don't recall the
details.
John
xs0 wrote:

I'm not familiar about this argument. What advantages are there to
*not* doing this? I can't think of any, whereas the advantages I see
(and have used in Java) are many.

What I don't get is, if you know at the time you are writing the code that
you want to create an instance of B, why are you declaring it to be a C?
The way I read (note: *read*) the code in the main routine, is that you are
declaring that 'b' is an instance of a C class, but that you are creating a
new B class instead. Then you are saying 'b.foo(b)' which reads that you
are calling the 'foo' method in whatever class 'b' is (ie. a C), passing
itself as a parameter.
In this reading, why would one expect the foo method in class B to be
called? What is it I'm not understanding? Can you give a real example where
this makes sense to do?
I would have thought that ...

class TwoDImage {
    void foo(TwoDImage c) {
        writefln("C");
    }
}

class Circle : TwoDImage {
    void foo(Circle b) {
        writefln("B");
    }
}

void main() {
    TwoDImage b = new Circle();
    b.foo(b);
}
--
Derek
Melbourne, Australia
8/03/2005 11:06:45 AM

Here's a good example:

import std.stdio;

class TwoDImage {
    TwoDImage overlap(TwoDImage i) {
        // Returns the overlap of the two images.
        writefln("Potentially long calculation");
        return null;
    }
}

class Circle : TwoDImage {
    TwoDImage overlap(Circle c) {
        // Does the same thing as super.overlap,
        // but is able to calculate the overlap much more efficiently.
        writefln("Trivial calculation");
        return null;
    }
}

void main() {
    Circle a = new Circle();
    Circle b = new Circle();
    PrintOverlap(a, b);
}

void PrintOverlap(TwoDImage a, TwoDImage b) {
    TwoDImage i = a.overlap(b);
    // Graphically print the overlap
}
The above prints "Potentially long calculation".
Since the PrintOverlap function cannot and should not know anything
about Circle, letting Circle override and overload TwoDImage is the
only way to optimize this calculation. Currently the only way to do
this is with a hack:
import std.stdio;

class TwoDImage {
    TwoDImage overlap(TwoDImage i) {
        // Returns the overlap of the two images.
        writefln("Potentially long calculation");
        return null;
    }
}

class Circle : TwoDImage {
    // HACK:
    override TwoDImage overlap(TwoDImage i) {
        if (cast(Circle)i)
            return this.overlap(cast(Circle)i);
        else
            return super.overlap(i);
    }

    TwoDImage overlap(Circle c) {
        // Does the same thing as super.overlap,
        // but is able to calculate the overlap much more efficiently.
        writefln("Trivial calculation");
        return null;
    }
}

void main() {
    Circle a = new Circle();
    Circle b = new Circle();
    PrintOverlap(a, b);
}

void PrintOverlap(TwoDImage a, TwoDImage b) {
    TwoDImage i = a.overlap(b);
    // Graphically print the overlap
}
This makes a lot of sense to me. I don't suppose I have everyone
convinced? Or, more importantly, Walter convinced? It doesn't seem
like a hard thing for the compiler to do, as all it needs to do is
either add the method that directly overrides, or add a small bit to the
beginning of an already existing one. I.e., if Circle.overlap(TwoDImage i) were

override TwoDImage overlap(TwoDImage i) {
    // Do some other, but not as optimized, calculation
    return null;
}

the other Circle.overlap would be called as well.
Comments? Questions? Suggestions? Anecdotes? Potions?
John Demme

should be:

class Circle extends TwoDImage
{
    TwoDImage overlap(TwoDImage i)
    {
        // Does the same thing as super.overlap,
        // but is able to calculate the overlap much more efficiently.
        System.out.println("Trivial calculation");
        return null;
    }
}
let's see the D version... it also works!
seems D supports OOP after all.
Ant

You clearly don't get what I'm saying. Go back and (re-)read the posts.
I'm aware of how to get it to work. A simple override works fine. It's
when a subclass has a method by the same name, accepting a parameter
that is a subclass of the class that is the parameter of the parent's
method, that I think it should be called when that method is called with
that subclass as a parameter. I realize that it's not hard to make it
work by creating another function to handle it, but I see it as a hack.
I think one of those was perhaps one of the worst sentences ever
written... and my g/f is an english major... If she saw it she'd
probably kill me.
John Demme

IMO, 3. is the only sane solution. It's the same problem with the stinky
crap of opCmp. Look at the rubbish I've had to write to get comparable
Fields:

You only need opCmp(Object) if you are using builtin operations like .sort.
If you're going to use < or > operators with Fields, then opCmp(Field) is
all you need.

should work across the language. Of course, that's just MHO :)

Indeed. I'm pretty sure that I wrote DTL 0.2 to not use this disgusting hack.
(I *HATE* things whose behaviours one
cannot predict at compile time (and therefore cannot predict at all). To be
fair to D, this is a flaw of Java and .NET
also, and harder to obviate in them.) Even if I remember incorrectly (IRI??), I
will ensure that DTL 0.3 does so. Then I
will never have to write, or at least use, an opCmp(Object) ever again.

Maybe it is a philosophy only. But if one has a look at these
curious threads on how to denote the `.length' property of arrays,
one would otherwise like to declare it as some sort of mass
hysteria.
Several intelligent representatives of mankind disputing over an
object which is the equivalent of sparing a thousandth of the
needed typing work. And it seems to be the typing only, as if in
programming this part of the whole task is the most important.
This is such a tininess of the whole task of programming language
design that I could just not believe that under economic pressure
a board of programmers would come together several times over a
period of 15 months to discuss how to save themselves one minute of
typing every two days of their job.
-manfred


Me thinks ye are a dour sort.
http://www.cogsci.princeton.edu/cgi-bin/webwn?stage=1&word=fun
It's only a bunch of guys sitting around after work, having a beer or two,
and talking about their favourite sporting team. It's not important; just a
bit of fun to unwind and prepare for the next real day. No one actually
expects this to be a life-changing or world-saving discussion. Walter will
sift through the chaff and rubbish, and form his own opinion and it will,
in all probability, be "alright-ish".
--
Derek Parnell
Don't Worry - Be happy, mon.

Nice. Now I understand why Walter stopped editing the news column. I
invite you to help D with your wisdom by categorizing threads, or only
single posts, as what you think they are.
This "would be a big boost to the D community", as Walter said.
-manfred

Nice. Now I understand, why Walter stopped editing the news-column. I
invite you to help D with your wisdom by categorizing threads or only
single posts as what you think they are.

You talking to me or the community in general? I'll assume its me for the
moment.
So...Thank you for this great honour, and vote of confidence in my
editorial prowess. But it is with a heavy heart that I must decline your
most generous offer. If not for the weight of the world impinging on my
already crowded lifestyle, I would jump, nay, rush headlong, into the role
of Advisor to the D News Editor. I cannot think of a greater accolade. But
with respect, good sir, I can perceive in these few missives of yours
already, that your editorial skills are more than a match for the task at
hand. I bid you 'Fine Sailing'.
Now *that* was an example of 'rubbish'.
Would you also like an example of 'chaff'?
--
Derek Parnell
Melbourne, Australia
13/03/2005 12:13:43 AM


I suggested that hash[key] should stay the h*ll away from
setting keys that don't exist and just return .init, but lots of
people hated it...

[...]
I am on your side.

I agree. I have a very hard time trying to figure out how anyone ever
could disagree with this! Hands off the key!

However, I just believe in the psychological phenomena of
understanding. And one of them is that most people, once involved in
a more complex piece of thinking and having then understood it, just do
not want to throw it away, because they have invested energy into this
understanding.

Heck, I wouldn't actually bet on Stroustrup being too happy with C++,
but hey, he's gotta know this. I know a lot of guys who used to be
averse to digital circuits, precisely because they had to throw away a
lot of hard work learning the (inherently more complex) analog thinking.
It pissed them off to see the young guys doing all kinds of wizardry in
the back room, and impressing the crap out of the bosses -- with only
having read a chip catalog. It was so wrong.
As to Stroustrup (I admire the guy, but he's so precisely in the middle
of what you said!), it must be horrible for him, and the thousands (both
with C++, or totally in other fields) of others, who've amassed a huge
knowledge and who've become icons.

Most people will not accept that the right way to reason about it
is to conclude that, because they had to invest energy, the
introduction of this feature prolongs the time of learning D, and
that this prolonging is only justified if its costs are paid back
in the long run.

Nothing is so bad that it doesn't have a silver lining too: at all
times, precisely this fact has made it very hard for new ideas to become
accepted. Looking at the whole, it may be just as well. Were the
resistance not "unduly hard", then a lot of half-worthy ideas would have
been accepted -- before their downsides had been scrutinized. (Most of
us have already had a taste of this in IT: the Hype Of Du Jour comes and
gets replaced even before you get to the bookstore to buy the book on it.)

Diminishing the costs is a major goal of the design of D. But I have
never ever seen costs calculated.

I don't know any hard facts, say for D, except a gut feeling. For
example, I'm an old man and I am horribly slow at writing code. It takes
me a week to write something kids half my age whip up in an afternoon.
(I just hope I have fewer bugs and the code is better thought out. What
can I do. :-) And maybe, maybe it takes less maintenance in the long run.)
Anyhow, with C++ (which I, after all am more familiar with, since many
more moons back, than with D), I code at least 4 times slower. And the
code is full of bugs.
The difference being that the bugs in my C++ code I blame on the
language, whereas my D bugs are genuinely my own doing.

How can something like the following be anything but a hideous warty piece of
crap?
Field findField(char[] fieldName)
in
{
    assert(null !== fieldName);
}
body
{
    Field* pfield = (fieldName in m_values);
    return (null === pfield) ? null : *pfield;
}

(just have to go back to C strings... but since we're reneging on that
garbage collector and using C's malloc, why not go back to them too!)

Field findField(char* fieldName)
/* don't need these useless things
in
{
    assert(null !== fieldName);
}
body */
{
    return (fieldName = cast(char*)(fieldName in m_values)) == null
        ? null : *cast(Field*)fieldName;
}
isn't it beautiful! All one statement, no tmp vars ;-)
Finally a language with all the power of C and all the ease-of-use of C,
which is, in turn, a language with all the power of assembly and all the
ease-of-use of assembly! The circle is now complete.
But seriously, guys -- this sort of ad-hoc syntactic salt has made things go
a bit out of hand lately -- esp. that obnoxious overriding of length
variables inside [] -- broke half my old vector code.

Am I the only one that dislikes "in" for having two completely different
meanings (contracts / AA)?
Wouldn't it be 'nicer' to minimize this kind of keyword collisions by
choosing them more carefully?
Yes, I know the parser understands it. When I suggested VC2005.NET's
"context dependent keywords" (keywords are only keywords when they appear in
the right places) a few months ago, I only got replies that went something
like "it results in unreadable code".
L.

While I'm with you in spirit (at least from what I'm assuming about you <g>), I
don't personally agree in this specific
case.
Having multiple _disjoint_ meanings for a particular word doesn't trouble me.
It's when those meanings can get confused
that it's troubling.
"Lionello Lunesu" <lio lunesu.removethis.com> wrote in message
news:d0h87e$21ol$1 digitaldaemon.com...


Having multiple _disjoint_ meanings for a particular word doesn't trouble
me. It's when those meanings can get confused that it's troubling.

It doesn't trouble me either! The point is that there doesn't seem to be a
fixed position on these kind of issues.
Two 'in's that mean something completely different is okay with me. But
what's up with all that talk about "$ instead of length because 'length'
appears a lot in project x", "make x a keyword..." or "please don't make x a
keyword" etc.
If it's okay to have two "in" commands (statements or whatever you call 'm),
then it shouldn't be a problem to have any 'keywords' double in the
language. We should be able to do "int int = 6;" because of the completely
different meanings and positions of the two "int"s </extreme_example>. Yes,
it'll be unreadable, but only because the programmer wants it to be.
I just like to generalize things. Or am I exaggerating again?
L.

Two 'in's that mean something completely different is okay with me. But
what's up with all that talk about "$ instead of length because 'length'
appears a lot in project x", "make x a keyword..." or "please don't make x a
keyword" etc.

I dislike this a lot also. I see three distinct uses of the 'in' keyword:
X 'in' collection, 'in' as a modifier for parameters, and 'in' as a
contract specifier.
I like Matthew's suggestion: rename it contains or lookup.

T value;
if (aa.contains(x, value)) doSomething(value);

But
what's up with all that talk about "$ instead of length because 'length'

I dunno, this length nonsense is depressing (length is bad enough, but $?
<shudders>). Why can't we all agree that I'm smartest and that negative
numbers look awesome as offsets from the end! ;)
Charlie
"Lionello Lunesu" <lio lunesu.removethis.com> wrote in message
news:d0hvqf$2qgb$1 digitaldaemon.com...


Am I the only one that dislikes "in" for having two completely different
meanings (contracts / AA)?

Wouldn't it be 'nicer' to minimize this kind of keyword collisions by
choosing them more carefully?

I think it should either be different, or a metaphor-mixing slugfest!
(as another part of the glorious C/C++ legacy and earlier precedent)
This was why I suggested the syntax "key out hash" to remove it... :-)
Seemed to make more sense than the current recycled "delete" keyword ?
It's a general problem with D that it has a lot of things built-in; but
some are as yet so poorly implemented that they make you wish that they
hadn't been - but just provided in a separate runtime library instead ?
(That might have been a bit harsh, and I do hope they all get fixed...)
--anders

Am I the only one that dislikes "in" for having two completely different
meanings (contracts / AA)?

Wouldn't it be 'nicer' to minimize this kind of keyword collisions by choosing
them more carefully?

I think it should either be different, or a metaphore-mixing slugfest!
(as a another part of the glorious C/C++ legacy and earlier precedence)
This was why I suggested the syntax "key out hash" to remove it... :-)
Seemed to make more sense than the current recycled "delete" keyword ?

Yeah. Using delete to remove an element from the AA is an unequivocal stink. It
must die.

It's a general problem with D that it has a lot of things built-in; but
some are as yet so poorly implemented that they make you wish that they
hadn't been - but just provided in a separate runtime library instead ?
(That might have been a bit harsh, and I do hope they all get fixed...)

No, you've hit the nail on the head. Some things could be really well received,
but for the way they're (currently)
implemented.

This was why I suggested the syntax "key out hash" to remove it... :-)
Seemed to make more sense than the current recycled "delete" keyword ?

Yeah. Using delete to remove an element from the AA is an unequivocal stink.
It must die.

"delete" makes sense in Perl, where objects are instead DESTROYed ?
In D, with a hash full of object references or pointers, it doesn't.
And while "in" is a nice performance hack, it isn't very pretty... ?
(it made sense while it returned bit, but not really with a pointer)
Combining exists-in-AA and get-from-AA into one operation sounds
like a case of premature optimization, and just a workaround for
the confusing behaviour (horrible bug) of the hash[key] expression.
(my condolences to C++'s map, if that is where the critter was born?)
Throwing an exception instead of just setting it would be just as bad,
I would still have to use the workaround: (key in aa) ? aa[key] : null
I'm used to http://java.sun.com/j2se/1.4.2/docs/api/java/util/Map.html:

boolean containsKey(Object key);
Object get(Object key);

But of course, that can only store full Classes and not primitive types.
(but get returns null, both for null values and for nonexistent keys...)
Perl is similar, it also returns blank values on nonexistent hash keys.
There you can use the "defined" operator to distinguish between the cases.
So for me, rather naturally, I want the same hash behaviour in D too ?
But D is not Java and D is not Perl, so I'll live with the workarounds.
I do think it would make sense to continue and equate "unset variable"
with ".init value", since that's what the rest of the D language uses.
--anders

IMHO, reading from an AA by a nonexistent key must not write any data to the AA.
Let it return the default value, but NO WRITE.
(Sorry for repeating, but I don't know other ways to change current D
implementation details.)

IMHO, reading from an AA by a nonexistent key must not write any data to the AA.
Let it return the default value, but NO WRITE.
(Sorry for repeating, but I don't know other ways to change current D
implementation details.)

This idea has its own problems, primarily with value types. Take 'int'
for example:

int[char[]] array;
int value;

array["bob"] = 5;
value = array["fred"];

at this point value == 0, and we have no idea whether it existed in the AA
or not. So, we have to use 'in', e.g.

int[char[]] array;
int value;

array["bob"] = 5;
if ("fred" in array) {
    value = array["fred"];
}

but now we're doing a double lookup.
So, to solve this, we return a pointer from 'in' (as we currently have)
OR we use a method call, e.g.

array.contains("fred", value);

which returns true/false, and sets value if found.
I cannot see why you'd want anything else.
Regan
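The contains method Regan proposes (hypothetical in D at the time) maps directly onto what C++'s std::map::find already provides: one lookup yielding an iterator that, like the pointer returned by D's 'in', can be both tested and dereferenced. A rough C++ sketch of the same out-parameter idea:

```cpp
#include <map>
#include <string>

// Single-lookup membership test with an out-parameter, in the spirit of
// the proposed aa.contains(key, value): returns true and sets 'value'
// if the key is present, otherwise returns false and leaves it alone.
bool contains(const std::map<std::string, int>& aa,
              const std::string& key, int& value) {
    std::map<std::string, int>::const_iterator it = aa.find(key);  // one lookup
    if (it == aa.end())
        return false;      // key absent: value untouched
    value = it->second;    // key present: set the out-parameter
    return true;
}
```

The caller pays for exactly one tree traversal either way, which is the whole point of the pointer-returning 'in' that this method call would replace.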

but, now we're doing a double lookup.
So, to solve this, we return a pointer from 'in' (as we currently have)
OR...
we use a method call, eg.
array.contains("fred",value);
which returns true/false, and sets value if found.
I cannot see why you'd want anything else.

The method call is ok with me.
But, this should be separate, totally separate, from setting it!
We cannot afford to keep things like this. Period.
In a newcomer's eye, this sticks out so badly, he'll think the rest of the
language is the same. _We_ know it's not, but how do you convince them?
The syntax and words are a trivial matter, e.g.:

bool array.found("fred", value);
bool array.set("fred", value);
bool array.get("fred", value);

But since this (or some other naming) hasn't emerged yet, AT LEAST the
newcomers (young and seasoned alike) can't help but wonder about the
internals being shaky. Heck, soon I'll be too.
And nothing keeps us from using array.set("fred", value) to BOTH query
and set at the same time. But the other two simply have to exist also.

I think D (and rumor has it C++'s "map"?)
are the only ones that set keys on get...

C++ certainly does. And it STINKS!!!!!!!!!!!!!!
(For anyone that's interested, you can read all about my _reasoned_ arguments
against it in April's DDJ, in my article "C++ and operator []=".)
Walter, I say get some caffeine, gird your loins and brace yourself for a
barrage with more fervour and passion than the warnings debate. This is at
once a howling stinker, and an opportunity to genuinely improve on C++
(rather than doing something new and different).

C++'s map does it for sure; why, I have no idea - seems like a horrible idea
to me as well. Just wanted to throw my vote in the 'No AA Writing on
lookup' camp.
Charlie
"Anders F Björklund" <afb algonet.se> wrote in message
news:d0icp0$86l$1 digitaldaemon.com...

Walter wrote:

We cannot afford to keep things like this. Period.
In a newcomers eye, this sticks out so bad, he'll think the rest of the
language is the same. _We_ know it's not, but how do you convince them?

C++'s map does it for sure , why I have no idea, seems like a horrible idea
to me as well. Just wanted to throw my vote in the 'No AA Writing on
lookup' camp.

Throwing exceptions on missing keys is *equally* bad,
in my opinion. It would also stop me from using hashes
the way that I am used to, and I'd have to continue with
the workaround that's currently needed due to setting.
I guess it boils down to whether you consider an
empty array to be full of valid lookups, or not...
To me, a dynamic array is full of .init values
so then it makes sense that associative arrays
should also be full of .init values as well.
In the end, I'll just continue to write code like today.
value = (key in hash) ? hash[key] : null;
It's also the only form that has survived for a while,
even if it does do a double lookup in the hash table.
(but actually using "key.init" instead of null above
does not work, due to a horrible init-related bug)
--anders

Aye. Mango avoids all this for the most part via a library-based HashMap
instead. I remain one of the principal detractors of the built-in AA,
for all kinds of reasons. The only thing going for the latter is the way
in which it avoids the need for assignment casts (cos' the compiler
knows the types already). That certainly has /some/ value, but it's not
clear just how much.
Anders F Björklund wrote:

bamb00 wrote:

Javascript does it just that way, too.

Does what ? Set keys on lookup ? No, it doesn't.


C++'s map does it for sure , why I have no idea, seems like a horrible
idea
to me as well. Just wanted to throw my vote in the 'No AA Writing on
lookup' camp.

Throwing exceptions on missing keys is *equally* bad,
in my opinion. It would also stop me from using hashes
the way that I am used, and I'd have to continue with
the workaround that's currently needed due to setting.
I guess it boils down to whether you consider an
empty array to be full of valid lookups, or not...
To me, a dynamic array is full of .init values
so then it makes sense that associative arrays
should also be full of .init values as well.
In the end, I'll just continue to write code like today.
value = (key in hash) ? hash[key] : null;

Or you could simply write:
contains(key,value);
as the 'out' param value will be set to type.init (which is null for
arrays, object etc)

It's also the only form that has survived for a while,
even if it does do a double lookup in the hash table.
(but actually using "key.init" instead of null above
does not work, due to a horrible init-related bug)

data_type& operator[](const key_type& k)

Returns a reference to the object that is associated with a particular
key. If the map does not already contain such an object, operator[]
inserts the default object data_type().

Still, it should at least set it to the .init value !
And I myself would prefer the new key to *not* get added;
my rationale was that it is just an uninitialized value...
Sorta like a dynamic array, after you set a big .length.
Matthew (and others) meant that it was an OutOfBounds ?
(and thus should throw an exception, in debugging builds)
This in turn depends on the definition of an AA's "bounds":
is it a) the current keys, or b) every possible key value?
--anders
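For what it's worth, the std::map documentation quoted above is easy to verify: C++'s operator[] really does insert a default-constructed value on a plain read, which is exactly the insert-on-lookup behaviour the thread objects to. A small C++ probe (the helper function names are mine), with find() as the non-mutating alternative:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Reads a missing key through operator[] and reports the map's size
// afterwards: it comes back 1, because the "read" silently inserted
// ("missing", 0) -- int's default value -- into the map.
std::size_t size_after_bracket_read() {
    std::map<std::string, int> m;
    m["missing"];           // a plain read that writes
    return m.size();        // 1, even though we never assigned anything
}

// The same probe using find(): a pure lookup, no insertion takes place.
std::size_t size_after_find() {
    std::map<std::string, int> m;
    m.find("missing");      // returns end(), map stays empty
    return m.size();        // 0
}
```

This is the C++ precedent that D's aa[key]-inserts-on-read behaviour appears to have inherited, and the find() variant plays the same role as the (key in aa) ? aa[key] : ... workaround discussed above.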