I'm a bit leery of adopting this feature (it has been discussed). To me
"in" implies a fast operation and substring searching isn't quite it.
One thing that could be done is to allow "in" with literal arrays to
their right:
if (x in ["abcde", "asd"]) { ... }
The size of the operand is constant, known, and visible.

The "in" is useful to tell if an item is present in a collection (dynamic
arrays, fixed-sized arrays, sorted arrays, tuple, typetuple, hash set, tree
set, hash map, tree map, trees, graph, deque, skip list, finger tree, etc), and
it's useful to tell if a substring is present in a string. Those two purposes
are very commonly useful in programs.
Despite those purposes are very common, there is no need for an "in" syntax. In
dlibs1 I have created an isIn() function that was good enough.
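For reference, a minimal sketch of what such a helper can look like in today's D
(the actual dlibs1 code isn't shown in this thread, so the signature is an
assumption): it uses the type's own fast 'in' when one exists, and falls back to
a linear scan otherwise.

import std.algorithm : canFind;

// Hypothetical isIn(): prefer the container's own 'in' (e.g. AAs),
// otherwise do an O(n) search.
bool isIn(T, C)(T item, C collection)
{
    static if (is(typeof(item in collection)))
        return cast(bool)(item in collection); // fast path
    else
        return canFind(collection, item);      // linear fallback
}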
D supports the "in" operator for associative arrays, and in operator
overloading (my Set()/set() implementation supports the "in" operator), so it's
natural for people to want that operator to be usable for other kinds of
collections too, like arrays. Maybe those people have used Python, where the in
operator (__contains__) is widely used.
Regarding generic programming, Python doesn't have templates, but it has
dynamic typing, which serves purposes similar to generic programming. To a
Python function that uses the "in" operator you may pass a list, an
associative array, or any other object that supports __contains__. In this case
the Python code performs an O(n) or O(1) operation according to the dynamic
type given to the function.
I can live without the "in" operation for arrays and strings (where it would be
useful to look for substrings too). So one solution is not to change the D
language, and not add support for "in" to arrays.
Another solution is just to accept O(n) as the worst complexity for the "in"
operator. I don't see the problem with this.
Another solution is to support the "in" operator for dynamic arrays too, and
define a new attribute, like complexity(), plus an enum that allows you to specify
the worst-case complexity. So associative arrays are annotated with
complexity(O.constant), while the function that searches for items/substrings in
arrays/strings is complexity(O.linear). At compile time the generic code is
then able to query the computational complexity of the "in" operator
implementation. The problem is that the compiler today can't force functions
annotated with complexity(O.constant) to actually not perform a linear search
(but I don't think that's a problem; today if the Range protocol asks an
operation to not be worse than O(n ln n) the compiler doesn't enforce it).
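A sketch of how such an annotation might look with user-defined attributes (a
feature D gained later; the names complexity and O are the ones proposed above,
not an existing API):

enum O { constant, log, linear }

struct complexity { O worstCase; }

@complexity(O.constant)
bool fastLookup(int key, bool[int] set) { return (key in set) !is null; }

@complexity(O.linear)
bool slowLookup(int key, int[] arr)
{
    foreach (x; arr) if (x == key) return true;
    return false;
}

// Generic code can then query the annotation at compile time:
import std.traits : getUDAs;
static assert(getUDAs!(fastLookup, complexity)[0].worstCase == O.constant);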
I don't like the idea of allowing the "in" operator for sorted arrays, tuples,
fixed-sized arrays, skip lists, maps, sets, and trees while disallowing it for
dynamic arrays. It looks a bit silly. People will then spend the next ten years
asking for "in" to be extended to unsorted dynamic arrays and substring search
too, you can bet on it.
Bye,
bearophile

I think it's better to know that in always has logarithmic or better
complexity. Otherwise, there is always canFind (which needs a new name :-)

Another solution is just to accept O(n) as the worst complexity for
the "in" operator. I don't understand what's the problem in this.

That means we'd have to define another operation, i.e. "quickIn" that
has O(log n) bound.

Why?
I can't say I've ever cared about the big-O complexity of an operation.
All I care about is that it's "fast enough", which is highly
context-dependent and may have nothing to do with complexity. I can't
see myself replacing my 'int[]' arrays with the much slower and bigger
'int[MAX_SIZE]' arrays just to satisfy the compiler. I shouldn't have
to. The type system shouldn't encourage me to.
I think it's an abuse of the type system to use it to guarantee
performance. However, if I wanted the type system to provide
performance guarantees, I would need a lot more language support than a
convention that certain operations are supposed to be O(n). I'm talking
performance specification on *all* functions, with a compile-time error
if the compiler can't prove that the compiled function meets those
guarantees. And *even then*, I would like to be able to use an O(n)
implementation of 'in' where I know that O(n) performance is acceptable.
--
Rainer Deyke - rainerd eldwood.com

I can't say I've ever cared about the big-O complexity of an operation.

Then you don't understand how important it is. Here is an example of
how caring about the big O complexity cut the runtime of dmd to about 1/3:
http://d.puremagic.com/issues/show_bug.cgi?id=4721
big O complexity is very important when you are writing libraries. Not
so much when you are writing applications -- if you can live with it in
your application, then fine. But Phobos should not have these problems
for people who *do* care.
What I'd suggest is to write your own function that uses in when
possible and find when not possible. Then use that in your code.
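A hedged sketch of such a wrapper (the name and the detection tests are my
assumptions, not an existing Phobos API):

import std.algorithm : canFind;

// Use the fast 'in' when the container provides one; otherwise fall
// back to a linear canFind over the container or its range.
bool fastestContains(C, E)(C container, E element)
{
    static if (is(typeof(element in container)))
        return cast(bool)(element in container); // lg(n) or better by convention
    else static if (is(typeof(container[])))
        return canFind(container[], element);    // linear scan over c[]
    else
        return canFind(container, element);      // linear scan
}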

Personally, I'd vote for inclusion of such a function in Phobos. The big
problem with C++ containers was and is the absence of a uniform interface.
Typedefing those templates to save typing was good, but it didn't
completely solve the problem of interchangeability. E.g., you make a
change from list to vector and suddenly realize that remove() doesn't
work anymore.
Having a uniform interface for insertion/removal/lookup is plainly awesome.
What if Phobos defined a function, say, E* contains(C,E)(C container, E
element) that'd behave as the 'in' operator as best it could for type C?
It'd fit frequent demands without making big promises - in other words, it
would still be largely useful.
Yes, I understand that a good (performance-wise) generic solution isn't
that simple to discover and implement. But having an easily accessible
(i.e. standard), though maybe not all that efficient, solution is a good
thing nevertheless.

Another solution is just to accept O(n) as the worst complexity for
the "in" operator. I don't understand what's the problem in this.

That means we'd have to define another operation, i.e. "quickIn" that
has O(log n) bound.

Why?
I can't say I've ever cared about the big-O complexity of an operation.

Complexity composes very badly. Issues tend to manifest at moderate
sizes and may make things unbearable at large sizes. I'm really grateful
I'm at a workplace where the exclamation "Damn! I was waiting like an
idiot for this quadratic append!" is met with understanding nods from
workmates who've made the same mistake before.
As an example, only last week I was working on cramming a sort of an
index of the entire Wikipedia on one machine. I was waiting for the
indexer which ran slower and slower and slower. In the end I figured
there was only _one_ quadratic operation - appending to a vector<size_t>
that held document lengths. That wasn't even the bulk of the data and it
was the last thing I looked at! Yet it made the run time impossible to
endure.

But that is not a matter of library interface, is it? It's a matter of
algorithm/container choice. It's not the push_back that was slow in the
end, it was std::vector (yes, that's arguable, but the point is that the
container defines the rules for its methods, not vice versa).

I think it's an abuse of the type system to use it to guarantee
performance. However, if I wanted the type system to provide
performance guarantees, I would need a lot more language support than a
convention that certain operations are supposed to be O(n). I'm talking
performance specification on *all* functions, with a compile-time error
if the compiler can't prove that the compiled function meets those
guarantees. And *even then*, I would like to be able to use an O(n)
implementation of 'in' where I know that O(n) performance is acceptable.

std.container introduces the convention that O(n) methods start with
"linear".

I find such a convention useful indeed, though it brings a fact to the
surface: if we need to emphasize methods that make strong promises about
complexity with prefixes/suffixes, or say by putting them into a separate
module, then why don't we have a non-emphasized counterpart that doesn't
make strong promises but fits a wider range of containers? After all,
std.algorithm offers different search mechanisms with varying complexity
(e.g. find() vs boyerMooreFinder()).
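For example, both entry points already coexist in std.algorithm: the generic
linear search is spelled out directly, while the faster one takes a finder
object (a small usage sketch; integer arrays are used because the Boyer-Moore
overload wants a random-access haystack):

import std.algorithm : find, boyerMooreFinder;

void example()
{
    auto hay = [1, 2, 3, 4, 5, 3, 4];
    auto r1 = find(hay, [3, 4]);                   // plain linear subsequence search
    auto r2 = find(hay, boyerMooreFinder([3, 4])); // Boyer-Moore search
}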

Then, don't use it in std.algorithm or any other code that needs
guaranteed complexity, just like now. I don't see the problem with a
generic "in" operator, nobody would be forced to use it.

What do you suggest for fast lookup in a container?

What is being used now? How can having "in" and not using it (just like now) in
functions requiring guaranteed complexity be worse than not having it?
The only drawback I can see to having an "in" operator with all containers is
that some programmers would not read the documentation and would use it expecting
it to be fast. But then that also happens with many other language constructs, and
some programmers will write crappy algorithms anyway.

Then, don't use it in std.algorithm or any other code that needs
guaranteed complexity, just like now. I don't see the problem with a
generic "in" operator, nobody would be forced to use it.

What do you suggest for fast lookup in a container?

What is being used now? How can having "in" and not using it (just like now)
in functions requiring guaranteed complexity be worse than not having it?

If I write a generic algorithm, I can use opIn and assume it is fast. If
I don't need the speed, I can use canFind over the container's range
instead. If we say opIn can be slow, the fast version goes away.

But that is not a matter of library interface, is it? It's a matter of
algorithm/container choice. It's not the push_back that was slow in the
end, it was std::vector (yes, that's arguable, but the point is that the
container defines the rules for its methods, not vice versa).

Except that when you're dealing with generic code which has to deal with
multiple container types (like std.algorithm), you _need_ certain complexity
guarantees about an operation, since it could happen on any container that it's
given. Using operator in in an algorithm could be perfectly reasonable if it
had O(1) or O(log n) complexity but completely unreasonable if it had O(n)
complexity. So the algorithm is then reasonable with some containers and not
others, whereas if in were restricted to O(1) or O(log n), it could be used by
the algorithm without worrying that a container would be used with it which
would make it an order of complexity greater, or worse.
If it were strictly a matter of writing code which directly used a particular
algorithm and container type, then you could know what the complexities of the
algorithm and of the operations on the container are, but once you're dealing
with generic code, operations need to have complexity guarantees regardless of
their container, or the complexity of the algorithm will vary with the
container type.
And if that algorithm is used in yet another algorithm, then it gets that much
worse. It quickly becomes easy to miss the place where what you're trying to
do becomes an order of complexity greater, because the container that you
selected was an order of complexity greater than other containers on an
operation that an algorithm uses, buried in code somewhere.
- Jonathan M Davis

Yet still, generality ends at some point. You can't devise every
possible algorithm for all possible types and have it carry a set-in-stone
complexity independent of those types.
Take std.range.popFrontN(). It's generic, and it's used in other
algorithms. Yet it has O(1) complexity for ranges that support slicing,
and O(n) for those that do not. And that's before taking into account the
complexity of the slicing operation, or the complexity of stepping through
the range.
Or std.algorithm.find(). It's basically O(n), but then again, when using
it, one should also consider the complexity of the predicate used.
The same happened in the case Andrei described: the algorithm was
O(n) judging from the description (performing n insertions into a
container). But the container itself blew it up into quadratic time
because of its own insertion algorithm.
What I mean is you'll always have algorithms that perform
differently for different containers, and you'll always have to choose
the containers that best fit your needs. Generic code is not only about
efficiency: you'd at least expect it to work for different types.
Replacing vector with list in Andrei's case would probably solve the
problem at the cost of losing random access (together with contiguous
storage). Which means it'd also require changing code if random access
was in use somewhere. Having a generic random-access function
at(Container, index) (offering complexity that varies with the container
used: e.g. O(1) for arrays, O(n) for lists) would save you that maintenance.
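A minimal sketch of that at() idea over ranges (the exact interface is my
assumption; the post doesn't pin one down):

import std.range : isRandomAccessRange, dropExactly;

// O(1) where random access exists, O(n) where only iteration does.
auto at(R)(R r, size_t index)
{
    static if (isRandomAccessRange!R)
        return r[index];                   // direct indexing
    else
        return r.dropExactly(index).front; // walk forward index steps
}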

[...]
What I mean is you'll always have algorithms that will perform
differently for different containers, and you'll always have to choose
containers that best fit your needs [...]

All true. However, the point is that operations need to have a known
complexity. If in is known to be O(1), the algorithms can safely use it, knowing
that it's O(1). Any container that can't do it in O(1) doesn't implement in. You
could, on the other hand, have in be O(n) (which doesn't stop containers from
implementing it as O(1), since that's better than O(n)), and then any algorithm
that uses it has to assume that it could be O(n) and therefore may need to avoid
it. The real question is what complexity we want to define it as. At the moment,
the only containers to use it are the built-in associative arrays, and for them,
it's O(1). Since find() is already going to be O(n), it seems to me that in
should be better than O(n) - be it O(1) or O(log n) - and Andrei appears to
agree, but others don't, hence this long discussion...

Ahh, I think I get the perspective now, though I had to reread the whole
thread two times. Thank you.

Then, don't use it in std.algorithm or any other code that needs
guaranteed complexity, just like now. I don't see the problem with a
generic "in" operator, nobody would be forced to use it.

That kind of "documentation" is useless, it doesn't prevent use, and it
doesn't feel right to the person who accidentally uses it. When I call
sort(x);
and it performs horribly, am I going to blame x or sort? Certainly, I'll
never think it's my own fault :)
-Steve

True! And that's the only drawback I see in generalizing "in", but there are
many things in programming languages that don't feel right when you don't
know the language well. That doesn't mean that D should be the "programming for
dummies on rails with a constant automated tutor included" language; if I
read the site correctly, it is meant to be a practical language with the ability
to shoot yourself in the foot.
Still, I don't understand how generalizing "in" could affect std.algorithm et
al. if they only use "in" for AAs, just like now.

Still, I don't understand how generalizing "in" could affect std.algorithm et
al if they only use "in" for AAs, just like now.

Suppose I would like to use a faster AA (for some cases) and define opIn with
good lookup behavior. Then the algorithms in Phobos would think my opIn is
crap - what now? All code using regular AAs could have just been replaced by my
super-duper user-defined AA if it were not for this generalizing of opIn. So we
would need another operation that replaces the previously fast opIn and make
Phobos use that instead. But why bother, if we have a perfectly good one to
begin with?

True! And that's the only drawback I see in generalizing "in", but there
are many things in programming languages that don't feel right when you
don't know the language well. That doesn't mean that D should be the
"programming for dummies on rails with a constant automated tutor
included" language; if I read the site correctly, it is meant to be a
practical language with the ability to shoot yourself in the foot.

Absolutely. And one of its trademarks is being as fast as C. Now, C clearly
does not have the 'in' operator, but it is a goal in D that the obvious way
to do something should be fast and correct.

Still, I don't understand how generalizing "in" could affect
std.algorithm et al if they only use "in" for AAs, just like now.

Because if 'in' is available for other uses for other containers, one would
be tempted to use it also there. The alternative is to put it in the coding
standards:
43. Thou shalt not use magic numbers.
44. Thou shalt not use 'in', as it may be slow as heck.
45. Thou shalt not write spaghetti code. Nor macaroni.
This would make the feature useless.
--
Simen

That kind of "documentation" is useless, it doesn't prevent use, and it
doesn't feel right to the person who accidentally uses it. When I call
sort(x);
and it performs horribly, am I going to blame x or sort? Certainly,
I'll
never think it's my own fault :)
-Steve

True! And that's the only drawback I see on generalizing "in", but there
are many things in programming languages that doesn't feel right when
you don't
know the language well. That doesn't mean that D should be the
"programming for dummies on rails with a constant automated tutor
included" language; if I
read well the site, it is mean to be a practical language with the
ability to shot yourself in the foot.
Still, I don't understand how generalizing "in" could affect
std.algorithm et al if they only use "in" for AAs, just like now.

Let's move off of in for a minute, so I can illustrate what I mean. in is
kind of a non-universal operator (other languages define it differently).
Let's try indexing.
If you see the following code:

for (int i = 0; i < x.length; i++)
    x[i]++;

you might think it's O(n). But let's say the author
of x didn't care about complexity, and just felt like [] should be for
indexing, no matter what the cost. Then x could possibly be a linked
list, and each index operation is O(n), then this block of code becomes
O(n^2), and your performance suffers. You may not notice it, it might be
"acceptable", but then somewhere down the road you start calling this
function more, and your program all of a sudden gets really slow. What
gives? Let's say you spend 1-2 hours looking for this and find out that
the problem is that indexing x is linear, you can change the loop to this:
foreach (ref i; x)
    i++;
and all of a sudden your performance comes back. Maybe the author of x
puts right in his docs that indexing is an O(n) operation. You might
grumble about it, and move on. But if this happens all the time, you are
just going to start blaming the author of x more than your own
incompetence. It's one of those things where user interface designers get
it, and most engineers don't -- people have certain flaws, and a big one
is not reading the manual. Making the interface as intuitive as possible
is very important for the success of a product.
But what if we made the language such that this *couldn't possibly happen*
because you don't allow indexing on linked lists? This has the two very
good properties:
1. It makes the user more aware of the limitations, even though the syntax
is harder to use (hm.. it doesn't let me use indexing, there must be a
good reason).
2. It makes *even experienced users* avoid this bug because they can't
possibly compile it.
#2 is what I care about most. As an experienced developer, I still might
make the above mistake (look at Walter's mistake on the compiler that I
mentioned earlier). We don't want to set minefields for experienced
developers if we can help it. Yes they can shoot themselves in the foot,
but we don't want to remove the safety on the gun so they are *more
likely* to shoot themselves in the foot.
-Steve

Pelle Wrote:
The only drawback I can see to having an "in" operator with all
containers is that some programmers would not read the documentation and
use it expecting it to be fast. But then that also happens with many
other language constructs and some programmers will write crappy
algorithms anyway.

This is pretty much it. I would never use a language which is designed for
"some programmers".
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

Then, don't use it in std.algorithm or any other code that needs
guaranteed complexity, just like now. I don't see the problem with a
generic "in" operator, nobody would be forced to use it.

That kind of "documentation" is useless, it doesn't prevent use, and it
doesn't feel right to the person who accidentally uses it. When I call
sort(x);
and it performs horribly, am I going to blame x or sort? Certainly,
I'll never think it's my own fault :)
-Steve

Sure, write some random strings and compile it; if it doesn't compile, you
can always blame Walter, right?
If documentation is useless, so are most of the programmers - you've got to
accept it :)
The question is: should this affect compiler design? If you think it should,
you can't write a single line that calls "some other guy"'s code; it
doesn't matter if he uses "in" or "out", operators or just simple functions.
Thanks!
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

I can't say I've ever cared about the big-O complexity of an operation.

Then you don't understand how important it is.

Let me rephrase that. I care about performance. Big-O complexity can
obviously have a significant effect on performance, so I do care
about it, but only to the extent that it affects performance. Low big-O
complexity is a means to an end, not a goal in and of itself. If 'n' is
low enough, then an O(2**n) algorithm may well be faster than an O(1)
algorithm.
I also believe that, in the absence of a sophisticated system that
actually verifies performance guarantees, the language and standard
library should trust the programmer to know what he is doing. The
standard library should only provide transitive performance guarantees,
e.g. this algorithm calls function 'f' 'n' times, so the algorithm's
performance is O(n * complexity(f)). If 'f' runs in constant time, the
algorithm runs in linear time. If 'f' runs in exponential time, the
algorithm still runs.
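As a trivial illustration of such a transitive guarantee (a sketch, not a
Phobos function):

// each() calls f exactly once per element, so its cost is
// O(n * complexity(f)) - and that is all it can promise.
void each(T)(T[] arr, void delegate(ref T) f)
{
    foreach (ref x; arr)
        f(x);
}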

big O complexity is very important when you are writing libraries. Not
so much when you are writing applications -- if you can live with it in
your application, then fine. But Phobos should not have these problems
for people who *do* care.
What I'd suggest is to write your own function that uses in when
possible and find when not possible. Then use that in your code.

The issue is that algorithms may use 'in' internally, so I may have to
rewrite large parts of Phobos. (And the issue isn't about 'in'
specifically, but complexity guarantees in general.)
--
Rainer Deyke - rainerd eldwood.com

Another solution is just to accept O(n) as the worst complexity for
the "in" operator. I don't understand what's the problem in this.

That means we'd have to define another operation, i.e. "quickIn" that
has O(log n) bound.

Why?
I can't say I've ever cared about the big-O complexity of an operation.

Then you don't understand how important it is.

If big O complexity is so important, then why does everyone use
quicksort (which is O(n**2)) and not heap sort or merge sort (which
are O(n*log(n)))?
Jerome

Because of its average-case big O, and because it dwarfs heap/merge sort in the
constant factor. But in fact it's not totally true that everyone uses quicksort
pur sang: most sort implementations deal with quicksort's worst case, by
switching to heapsort for example. And they do that exactly because of this
worst-case behavior.

I can't say I've ever cared about the big-O complexity of an operation.

quicksort (which is O(n**2)) and not heap sort or merge sort (which
are O(n*log(n)))?
Jerome

Because on average quicksort is faster than heap sort, and uses much less
space than merge sort. Also, trivial guards can be put in place to avoid
running quicksort in a worst case (pre-sorted data) scenario.

Jerome

In fact, guards can be put to ensure that the _expected_ (not average,
not best-case) complexity is O(n log n). This makes the risk of hitting
a worst-case scenario negligible in a principled manner.
http://en.wikipedia.org/wiki/Quicksort#Randomized_quicksort_expected_complexity
http://docs.google.com/viewer?a=v&q=cache:MdBVR26N5UsJ:www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/lecture_notes/lect0123.pdf+randomized+quicksort&hl=en&gl=us&pid=bl&srcid=ADGEESi3GTSxfHWkeb_f14H0pkbigduS94qJVc9XLQ7aPa6lPUJ5JZbggI0izFe3ogiVOJCYcVkGtdumaS9hBvrGw0-TA_yZQj2qd1-AEudKyEWEGXnO4sTwqCZL95OpFkdFHDF2WXFV&sig=AHIEtbT1R0q5RIR4rob17QUKlYVl90vXyQ
Currently std.algorithm.getPivot picks the middle of the range as the
pivot, but I made it modular such that it can be improved in a number of
ways (i.e. randomized, median-of-3, median-of-5, etc).
Andrei
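For illustration, a hedged sketch of a median-of-3 strategy (std.algorithm's
actual getPivot is internal and may well differ):

import std.functional : binaryFun;

// Index of the median of the first, middle, and last elements.
size_t medianOf3(alias less = "a < b", R)(R r)
{
    alias lt = binaryFun!less;
    immutable size_t a = 0, b = r.length / 2, c = r.length - 1;
    if (lt(r[a], r[b]))
        return lt(r[b], r[c]) ? b : (lt(r[a], r[c]) ? c : a);
    else
        return lt(r[a], r[c]) ? a : (lt(r[b], r[c]) ? c : b);
}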

Currently std.algorithm.getPivot picks the middle of the range as the
pivot, but I made it modular such that it can be improved in a number of
ways (i.e. randomized, median-of-3, median-of-5, etc).

Isn't median of 3 a better default?
Bye,
bearophile

There is no "best". They are all trade-offs between the time taken to
select the pivot and the suitability of the pivot.
Middle - Fastest, not very good
Median-of-3 - Slower, but slightly better
Median-of-5 - Slower still, but even better
Randomized - Speed depends on the random number algorithm. Suitability
is random, so could be good, could be bad, but on average it's good
enough. Has the added quality that it's practically impossible to devise
worst-case input for it (you'd need to know the generator and seed),
whereas for the other 3, a malicious user could provide data that gives
you O(n^2) complexity.

Also: in-place sorting for the median cases.
Andrei

Could you elaborate? I don't see how you lose in-place sorting with the
random and middle selection cases.

You just take median-of-some and also sort those elements right away
before returning the pivot. It's a minor improvement.
Andrei

Introsort (a quicksort variant) degrades to heapsort for sorts that are going
quadratic (it tracks the recursion depth and transitions when the depth is more
than a desired threshold for the size of the range). The sort routine I wrote
for Tango uses this approach plus median-of-3, the insertion sort fallback, and
the quicksort variant that separately tracks equal values. All told, it's as
fast as or faster than everything else I've tested, even for contrived
degenerate cases. Perhaps I'll convert it to use ranges and see how it does.
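A hedged sketch of the depth-tracking scheme (not the Tango code, which isn't
shown in the thread):

import std.algorithm : swap;

// Quicksort that gives up and heapsorts a partition once the recursion
// depth exceeds ~2*log2(n), bounding the worst case at O(n log n).
void introSort(T)(T[] a)
{
    int limit = 0;
    for (size_t n = a.length; n > 0; n >>= 1) ++limit;
    quickLoop(a, 2 * limit);
}

void quickLoop(T)(T[] a, int depthLimit)
{
    while (a.length > 1)
    {
        if (depthLimit-- == 0) { heapSort(a); return; }
        immutable p = partition(a);
        quickLoop(a[0 .. p], depthLimit); // recurse into the left part
        a = a[p + 1 .. $];                // iterate on the right part
    }
}

size_t partition(T)(T[] a)
{
    swap(a[a.length / 2], a[$ - 1]); // middle element as the pivot
    size_t store = 0;
    foreach (i; 0 .. a.length - 1)
        if (a[i] < a[$ - 1]) swap(a[i], a[store++]);
    swap(a[store], a[$ - 1]);
    return store;
}

void heapSort(T)(T[] a)
{
    // build a max-heap, then repeatedly move the max to the end
    foreach_reverse (i; 0 .. a.length / 2) siftDown(a, i, a.length);
    foreach_reverse (end; 1 .. a.length)
    {
        swap(a[0], a[end]);
        siftDown(a, 0, end);
    }
}

void siftDown(T)(T[] a, size_t root, size_t end)
{
    for (size_t child; (child = 2 * root + 1) < end; root = child)
    {
        if (child + 1 < end && a[child] < a[child + 1]) ++child;
        if (!(a[root] < a[child])) return;
        swap(a[root], a[child]);
    }
}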

Currently std.algorithm.getPivot picks the middle of the range as the
pivot, but I made it modular such that it can be improved in a number of
ways (i.e. randomized, median-of-3, median-of-5, etc).

It would be nice if it used insertion sort for small ranges as well. There's
also a quicksort variant that partitions equal-to elements separately from
less-than and greater-than values, which is faster for ranges with a lot of
equal elements (and no slower for ranges without).
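A sketch of that fat-pivot (three-way) partition: elements equal to the pivot
are grouped in the middle and excluded from the recursion.

import std.algorithm : swap;

// Dutch-national-flag partition: a[0 .. lo] < pivot, a[lo .. hi] == pivot,
// a[hi .. $] > pivot; only the strict sides are recursed into.
void quickSort3(T)(T[] a)
{
    if (a.length < 2) return;
    T pivot = a[a.length / 2];
    size_t lo = 0, i = 0, hi = a.length;
    while (i < hi)
    {
        if (a[i] < pivot)      swap(a[i++], a[lo++]);
        else if (pivot < a[i]) swap(a[i], a[--hi]);
        else                   ++i;
    }
    quickSort3(a[0 .. lo]);
    quickSort3(a[hi .. $]);
}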

I can't say I've ever cared about the big-O complexity of an operation.

Then you don't understand how important it is.

Let me rephrase that. I care about performance. Big-O complexity can
obviously have a significant effect on performance, so I do care
about it, but only to the extent that it affects performance. Low big-O
complexity is a means to an end, not a goal in and of itself. If 'n' is
low enough, then an O(2**n) algorithm may well be faster than an O(1)
algorithm.

You'd be surprised at how important it is. I remember competing on
TopCoder when they just barely added C++ as a possible language. I
remember wondering "how are all the Java guys going to even compete with
C++?" But the big-O complexity was always way way more important than the
performance differences of native vs. JVM'd code.
The thing about big-O complexity is that it gives you a good idea of how
well your library will perform under defined circumstances. And libraries
must be completely aware and rigid about it, or else you have situations
where things are nice and easy syntax-wise but perform horribly when
actually used. What you end up with when libraries don't care about it
are "mysterious" slowdowns, or cases where people just start blaming the
language for being so slow.
Imagine if a database backend developer said "you know, who cares about
big-O complexity, I think linear performance is good enough, most people
have small databases anyways" who would ever use that backend? This is
akin to how phobos needs to strive for the best performance.

The issue is that algorithms may use 'in' internally, so I may have to
rewrite large parts of Phobos. (And the issue isn't about 'in'
specifically, but complexity guarantees in general.)

You have two options: define opIn how you want if you don't care about
complexity guarantees (not recommended), or define a wrapper function that
uses the best available option, which your code can use.
Despite Phobos defining opIn to require lg(n) or better complexity, it
does not restrict you from defining opIn how you want (even on arrays if
you wish). I personally find the tools Phobos provides completely
adequate for the tasks you would need to do.
-Steve

Another solution is to support the "in" operator for dynamic arrays too,
and define a new attribute, like complexity(), plus an enum that allows
you to specify the worst-case complexity. So associative arrays are annotated
with complexity(O.constant), while the function that searches for
items/substrings in arrays/strings is complexity(O.linear). At
compile time the generic code is then able to query the computational
complexity of the "in" operator implementation. The problem is that the
compiler today can't force functions annotated with
complexity(O.constant) to actually not perform a linear search (but I
don't think that's a problem; today if the Range protocol asks an operation
to not be worse than O(n ln n) the compiler doesn't enforce it).

Good idea. It would make a nice use case for user-defined attributes in D3.
Making the language aware of complexity specifically doesn't buy much; all
you need is:

__traits(getAttribute, opIn, complexity).bigOh == O.constant

--
Tomek

Or:

__traits(getAttribute, opIn, complexity).bigOh == O.linear * O.log

bigOh could be e.g. a struct with an overloaded multiplier.
But you brought up something interesting -- how to bind N, M with
different properties of function arguments; big oh expressions can get
quite complex, e.g.
void completeSort(alias less = "a < b", SwapStrategy ss =
SwapStrategy.unstable, Range1, Range2)(SortedRange!(Range1,less) lhs,
Range2 rhs);
"[...] Performs Ο(lhs.length + rhs.length * log(rhs.length)) (best case)
to Ο((lhs.length + rhs.length) * log(lhs.length + rhs.length))
(worst-case) evaluations of swap."
Even if the attribute properties could see the arguments, how to deal with
things like lhs.length + rhs.length? It has to be inspectable at
compile-time. One idea is to store the expression's abstract syntax tree
(we want AST macros in D3 anyway)... But I got a feeling we're heading for
an overkill :)
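A sketch of that multiplier idea (the names are taken from the snippet above;
everything else is my assumption):

// BigOh tracks exponents of n and log n; '*' adds them, so
// O.linear * O.log reads as O(n log n).
struct BigOh
{
    int nExp, logExp;
    BigOh opBinary(string op : "*")(BigOh rhs) const
    {
        return BigOh(nExp + rhs.nExp, logExp + rhs.logExp);
    }
}

struct O
{
    enum constant = BigOh(0, 0);
    enum log      = BigOh(0, 1);
    enum linear   = BigOh(1, 0);
}

static assert(O.linear * O.log == BigOh(1, 1)); // i.e. O(n log n)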

Even if the attribute properties could see the arguments, how to deal
with things like lhs.length + rhs.length? It has to be inspectable at
compile-time. One idea is to store the expression's abstract syntax tree
(we want AST macros in D3 anyway)... But I got a feeling we're heading
for an overkill :)

Basically, it's the challenge of determining algorithmically whether an
arbitrary algorithm given arbitrary input will eventually halt or carry
on running forever.
The point is that the halting problem is known to be unsolvable. The
standard proof of this is as follows. Suppose the halt analyser
algorithm we seek exists. Call it WillHalt(Algorithm, Input). Then we
can consider WillHalt(Algorithm, Algorithm).
Then we can define a new algorithm, LoopIfHaltsOnItself(Algorithm),
defined as

if WillHalt(Algorithm, Algorithm) then
    loop forever
else
    return

Now try to analyse the outcome of LoopIfHaltsOnItself(LoopIfHaltsOnItself).
Personally, I think it's a shame that the halting problem can't be
solved. If it could, we could use it to solve many mathematical
problems that have, as it happens, remained unsolved for centuries.
Stewart.

Personally, I think it's a shame that the halting problem can't be
solved. If it could, we could use it to solve many mathematical problems
that have as it happens remained unsolved for centuries.

But solving those problems would mean nothing in that hypothetical
situation, because, for the halting problem to be solvable, it would
require that P <=> ~P, so any "theorem" would be meaningless.
Besides, I don't care to think about universes where P <=> ~P :-)

Good idea. It would make a nice use case for user-defined attributes in
D3. Making the language aware of complexity specifically doesn't buy
much, all you need is:
__traits(getAttribute, opIn, complexity).bigOh == O.constant
--
Tomek

Doesn't the language have to be aware of this if it's supposed to work
with ordinary arrays?

I don't think so, there can always be uniform syntax wrappers (there's
already a bunch of them in std.array):

complexity(O.constant)
size_t length(T)(T[] arr) {
    return arr.length;
}

or special cases similar to hasLength!string.
--
Tomek

http://en.wikipedia.org/wiki/Halting_problem
Basically, inspecting the AST in search of the complexity might have an
infinite (or at least, arbitrarily high) complexity itself. It is likely
possible in some situations, but in the general case, not so much.
Also, consider the while loop. It may have an arbitrarily complex
termination condition, making it hard or impossible to find the
complexity. Example, with added complexity by omitting the source of
foo:

extern bool foo();

void bar(bool delegate() dg) {
    while (1) {
        if (dg()) {
            break;
        }
    }
}

bar(&foo);

--
Simen

Personally, I think it's a shame that the halting problem can't be
solved. If it could, we could use it to solve many mathematical problems
that have, as it happens, remained unsolved for centuries.
Stewart.

Or more poetically,

No general procedure for bug checks succeeds.
Now, I won't just assert that, I'll show where it leads:
I will prove that although you might work till you drop,
you cannot tell if computation will stop.
For imagine we have a procedure called P
that for specified input permits you to see
whether specified source code, with all of its faults,
defines a routine that eventually halts.
You feed in your program, with suitable data,
and P gets to work, and a little while later
(in finite compute time) correctly infers
whether infinite looping behavior occurs.
If there will be no looping, then P prints out 'Good.'
That means work on this input will halt, as it should.
But if it detects an unstoppable loop,
then P reports 'Bad!' -- which means you're in the soup.
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here's the trick that I'll use -- and it's simple to do.
I'll define a procedure, which I will call Q,
that will use P's predictions of halting success
to stir up a terrible logical mess.
For a specified program, say A, one supplies,
the first step of this program called Q I devise
is to find out from P what's the right thing to say
of the looping behavior of A run on A.
If P's answer is 'Bad!', Q will suddenly stop.
But otherwise, Q will go back to the top,
and start off again, looping endlessly back,
till the universe dies and turns frozen and black.
And this program called Q wouldn't stay on the shelf;
I would ask it to forecast its run on itself.
When it reads its own source code, just what will it do?
What's the looping behavior of Q run on Q?
If P warns of infinite loops, Q will quit;
yet P is supposed to speak truly of it!
And if Q's going to quit, then P should say 'Good'
-- which makes Q start to loop! (P denied that it would.)
No matter how P might perform, Q will scoop it:
Q uses P's output to make P look stupid.
Whatever P says, it cannot predict Q:
P is right when it's wrong, and is false when it's true!
I've created a paradox, neat as can be --
and simply by using your putative P.
When you posited P you stepped into a snare;
Your assumption has led you right into my lair.
So where can this argument possibly go?
I don't have to tell you; I'm sure you must know.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.
You can never find general mechanical means
for predicting the acts of computing machines.
It's something that cannot be done. So we users
must find our own bugs. Our computers are losers!
- Geoffrey K. Pullum
Even if the attribute properties could see the arguments, how to deal
with things like lhs.length + rhs.length? It has to be inspectable at
compile-time. One idea is to store the expression's abstract syntax tree
(we want AST macros in D3 anyway)... But I got a feeling we're heading
for an overkill :)

Just as soon as we solve the halting problem, eh?

What's the halting problem?

http://en.wikipedia.org/wiki/Halting_problem
It's a classic problem in computer science. Essentially what it comes down
to is that you can't determine when - or even if - a program will halt
until it actually has. It's why stuff like file transfer dialogs can never
be totally accurate. At best, you can estimate how long a file transfer
will take based on the current progress, but you can't _know_ when it will
complete. It's even worse for algorithms where you can't even estimate how
much work there is left. And of course, you don't even necessarily know
that the program _will_ halt. It could be an infinite loop for all you
know.
- Jonathan M Davis
