Throughout that page it shows examples of iterations where the order is
not actually defined; the index still has to advance by one, but the
compiler gains the ability to create a few threads and execute
iterations in parallel. That's what I was actually talking about. I
would hate for that code to be valid, and would prefer to use the usual
while, do-while, for and foreach loops but with an attribute to enable
the optimization, e.g.:
unordered for(i = 0; i < 10; ++i)
{
// anything here is independent of the previous iterations
// and I don't care if the compiler splits it across CPU cores
}
Anyway, I still think this is of lower priority than other things like
the cent data type, but it is rated rather high.

C-style for is a pretty terrible construct for this. How can it be
unordered when you've SPECIFIED that i increases linearly?
Does that mean it's equivalent to this?
unordered for(i = 9; i >= 0; --i)
{
...
}
If so, then 60% of that statement is redundant. So let's try using
foreach instead.
unordered foreach( i ; 0..10 )
{
...
}
Much better. But now the compiler has to prove that it's actually
possible to run this in arbitrary order. The easiest way to do that is
to check whether the body of the foreach is pure or not. The problem
there is that it CAN'T be pure; if it's pure, then the loop cannot
produce any results.
I think we have a better chance stealing ideas from the functional
world, like parallel map, or map-reduce.
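To illustrate the idea in Python, which has map and reduce built in
(this is a conceptual sketch, not D syntax):

```python
from functools import reduce

# A pure function: the result depends only on its argument.
def square(i):
    return i * i

# map makes no promise about evaluation order, so a sufficiently
# clever implementation is free to run the calls in parallel.
squares = list(map(square, range(10)))

# reduce then folds the per-iteration results into one value.
total = reduce(lambda a, b: a + b, squares, 0)
print(total)  # 285
```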

Because we've relinquished direct control over iteration and our
functions are pure, the compiler is free to rewrite both those last two
statements any way it pleases. It could distribute the workload across
hardware threads, or even distribute it across machines; it doesn't matter.
Another potential way of doing this would be to make the following valid:
auto results = foreach( i ; 0..10 )
{
...
continue 2*i;
}
Assuming the body of the foreach is pure, that might work. How you'd
write a reduce with a loop, I have no idea. :P
The nice thing about using functional constructs is that they're just
functions; they can be implemented in a library without having to touch
the language itself.
-- Daniel
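A sketch of that library point, again in Python rather than D: a
parallel map needs no language support at all, just a function
(ThreadPoolExecutor here stands in for whatever thread machinery such a
library would actually use):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, xs, workers=4):
    # fn is assumed pure, so the calls may run in any order and on
    # any thread; ex.map still returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(fn, xs))

print(parallel_map(lambda i: 2 * i, range(10)))
```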

If the foreach body consists of a pure function call, then the foreach
is reorderable and parallelizable.
Considering that, you could save a keyword and use 'pure' rather than
'foreach'.
If you like the idea, send Walter a patch for this.

I *don't* like this; I was pointing out that it doesn't work.
Let's say the body is pure. Pure means that it only depends on its
arguments and doesn't use or mutate any non-immutable external state.
So you do lots of computations based on i. Now, how do you store the
result?
You *can't*, because that would involve changing state external to the loop.
As Tim said, you can go to lock-based programming. But that's just
reinventing the problem we have today with multithreading: it's
virtually impossible to do it right.
The point is that by the time you've munged foreach or whatever into a
state suitable for automatic parallelisation, you've likely just gone
and reinvented the map function, so you might as well just cut to the
chase and use that. That plus pure functions should make it virtually
impossible to get it *wrong*.
-- Daniel

I have no problems with that code there.
http://all-technology.com/eigenpolls/dwishlist/index.php?it=10

Daniel mentioned part of the problem with that. The other problem I have
with it is that in most cases it mixes two different levels of abstraction.
You can't look at the code and immediately see the "what" without first
analysing the "how". If I look at that code, the *first* thing I comprehend
should not be "make 'i' go from 0 to 10, and then do something with 'i'", it
should be, "Ok, this is calculating a single particular value from an array,
i.e., min, max, sum, doesBlahBlahExist, indexOfBlahBlah, whatever", or "This
is transforming (or outputting, or inputting) the content of a collection",
or "combining two collections in some way" etc. That's the stuff I want to
know, the "what". The "how" belongs elsewhere, at an entirely different
level of abstraction. That's the whole point of having functions in the
first place, separating that "what" from the "how", otherwise we'd all be
writing asm. (Of course, comments can be used to explain the intended
"what", but self-documenting code is better whenever possible.)

C-style for is a pretty terrible construct for this. How can it be
unordered when you've SPECIFIED that i increases linearly?

Compilers are intelligent.

Does that mean it's equivalent to this?
unordered for(i = 9; i >= 0; --i)
{
...
}
If so, then 60% of that statement is redundant. So let's try using
foreach instead.
unordered foreach( i ; 0..10 )
{
...
}
Much better. But now the compiler has to prove that it's actually
possible to run this in arbitrary order.

The idea is that iteration two doesn't have to come after iteration
one, since you tell the compiler that the order doesn't matter. You are
also adding extra syntax, and it would be best if D didn't do that,
because before you know it, it will become too ambiguous for the
compiler; not that this alone is ambiguous, though.

The easiest way to do this is
to check to see whether the body of the foreach is pure or not. Problem
there is that it CAN'T be pure; if it's pure, then the loop cannot
produce any results.

Pure shouldn't be required. If you need to, you can put your locks in
place, but pure is too restrictive. Anyway, pure exists so the compiler
can parallelise anyway. Most of the benefit of pure is that parallelism
will be there automatically by default.
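For what it's worth, the lock-based alternative being argued for looks
something like this Python sketch; it does work, but every write to
shared state has to remember the lock, which is exactly the fragility
Daniel is pointing at:

```python
import threading

total = 0
lock = threading.Lock()

def body(i):
    global total
    value = i * i       # the pure part of the iteration
    with lock:          # the impure part: mutating shared state
        total += value  # without the lock, this is a data race

# Run the ten "iterations" on separate threads, in no fixed order.
threads = [threading.Thread(target=body, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 285
```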