Reposting, as I didn't have my name show up properly due to Outlook Express
configuration fun, and any threaded readers are going to camouflage my
response to a nearly year-old post :)
"Marcio" <mqmnews123 sglebs.com> wrote in message
news:ecipfc$15v4$1 digitaldaemon.com...

I'm new here, but this somewhat old thread starting with this .ppt is very
interesting to me, since I work with the UE3 engine as middleware. It's
pretty much dead on as far as what the future is going to be, and that is
a massively parallel/threaded environment. Those that do not make the
transition are going to have very noticeably 'rigid' game simulations that
are far less capable. The only problem, from my point of view, is that there
isn't a single good language to make the leap with. Sure, you can have a truly
expert programmer get your threading right with the current tools available,
but the reality is that the current tools and languages available for making
threaded programs are pretty bad. In addition, you end up with only 'one
guy' who is responsible for far too much code, since he has to wrangle
everyone else's broken stuff into shape, and always on a rather ridiculous
timeline.
So how do games, threading, and D all tie into this?
Ok, so I've been following D as a lurker off and on. From the outside, as a
C++ programmer, D looks great to me. I literally get more angry working
with C++ every day, and it's all because doing anything 'cool' in C++,
particularly templates, requires jumping through hoops. Jumping through
hoops is really the reality of having to deal with language deficiencies.
So looking at D, the initial impression is 'sweet,' I want to write something
in that. Except, from my world, there are several huge problems:
Problem A: Garbage collection is a dealbreaker. But not because it exists,
or even because it is forced as a default; we definitely want garbage
collection. It is a dealbreaker because of how it behaves. Several
behaviors make it such a strong negative: primarily the unknown frequency
of collections, the duration of collections, and the fact that all our
threads get suspended. The more hardcore games run at a required 60 fps
(Gran Turismo, God of War, etc.). This means all threads are executing a
full cycle in 16.6 ms. How much time do we want the GC to spend? The answer
is 0 ms. Even spending 1 ms on any kind of function is pretty damn slow in
a game engine. If it starts pushing 5 or 10 ms, it starts impacting input
response and noticeably hitches the rendering, since this hitch generally
isn't every single frame. Consistency of the frame rate matters a lot. In
fact, consistency matters so much that collecting could take 2 ms of every
15 and we would be OK with it, as long as it was predictable, so we could
budget the game around it. Even if it would only really need 10 ms every
5 minutes, that is unacceptable, because a collector taking 10 ms is a
dealbreaker.
Problem B: Threading support. The language of the future addresses
threading head on, understanding that the number of cores on CPUs is
rapidly going to be in the triple digits and beyond. The chip makers have
hit a wall, and the gains are going to come predominantly from core
increases and memory I/O catching back up to the CPUs. Eventually the line
between CPUs and GPUs will blur quite a bit. This means we need to write
threadable code safely, efficiently, without jumping through hoops, and
without really worrying about it a whole lot. If our CPU-based languages
fail at this, we are going to end up writing game-physics and raytracing
code on the GPU instead, via stuff like NVIDIA's GPGPU efforts, which
essentially means stealing GPU performance to make up for the inability to
take advantage of a massively parallel CPU architecture. Languages that are
created, or are modified, to make this leap cleanly will be the dominant
players of the future. I believe that if D makes this leap, it will be one
of them. Judging by the progress of C++0x, I believe it has lost the
agility necessary to make this transition and will be superseded by another
language.
My gut feeling says that if the threading issues are dealt with up front,
it would help a lot with garbage collection, since the requirements of
moving data efficiently in a massively threaded environment would drive
evolution in the GC system. Even if the end result is simply that you can
construct isolated threads with private GC heaps, and that data must be
marshalled to them as if they were in another process space, it would be an
improvement, because at least then the GC doesn't slow down every single
thread, and thin threads can collect very, very fast.
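[Editor's note: the isolated-threads-with-marshalled-messages model described
above is roughly what D 2.0's std.concurrency later adopted, years after this
post. A minimal sketch, assuming a D2 compiler with Phobos; the compute()
helper is hypothetical:]

```d
import std.concurrency;
import std.stdio;

// Hypothetical "job" logic; the point is the isolation boundary.
int compute(int job) { return job * 2; }

// The worker owns its own stack (and, in the isolated-heap model the post
// describes, would own its own GC heap); only messages cross the boundary.
void worker()
{
    receive((int job) { ownerTid.send(compute(job)); });
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(21);               // marshal the job in
    writeln(receiveOnly!int()); // marshal the result out: prints 42
}
```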

I basically agree with the GC issues.
If it were up to me, I'd integrate a separate mode into the GC, in which it
is only run in debug mode - and breaks on collection! Basically, I'd not use
it as a collector per se, but as a tool to make manual memory cleaning easier.
Apart from that, I agree D is not quite ready for a massively parallel future
- but the strength of the language is such that it can be made to be ready,
without requiring any in-depth changes.
Take the following example.
foreach (foo; parallel(bar)) { /* do stuff with foo */ }
Looks neat? It can be made to work _today_, with D 1.0 or 2.0, GDC or DMD,
without requiring _any changes to the compiler_, using exclusively language
features (about one page of code) - and even without any significant runtime
overhead! :D
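[Editor's note: this construct did eventually become standard. D2's
std.parallelism (a Phobos addition that postdates this post) provides exactly
this opApply-based parallel foreach:]

```d
import std.parallelism;
import std.stdio;

void main()
{
    auto bar = new int[1_000];

    // Chunks of iterations run on worker threads from the default task
    // pool; the loop body must not touch shared state without locking.
    foreach (i, ref foo; parallel(bar))
    {
        foo = cast(int) i * 2;
    }

    writeln(bar[10]); // prints 20
}
```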
And there's a decent amount of multithreading extensions for D already. Take
a look at StackThreads or DCSP on http://assertfalse.com/projects.shtml ,
all implemented using a minimum of machine-specific code, and working fine
(I think. I hope. :p )
From my (admittedly overoptimistic and fanboyish) perspective, even without
threading built into the language, D is quite prepared for a
massively-multithreaded future. :)
--downs

I can see where you are coming from and appreciate your enthusiasm.
However, I can also see Sean Cavanaugh's point about threading capabilities
being overly complex. I can see that D is trying to address thread support
using libraries. There are many classes in Tango that work toward this
purpose. And they seem to be clean and capable, but IMO not a huge leap
forward when compared to how threading is done in other modern languages.
The problem with threading is complexity. For example, there are many
classes in Tango that accommodate concurrency: Thread, Atomic, Barrier,
Condition, Mutex, ReadWriteMutex, Semaphore. To someone like myself, who is
not exactly a concurrency expert, this can be quite overwhelming. How can
we make it simpler for programmers? Perhaps it can't be simplified any
further and the best we can do is documentation, tutorials, etc.
But I think there are ways to make it easier. I am a fan of the Concur
project. I think at least some of the abstractions that Sutter and friends
have identified can be implemented in D with libraries. Some may not be
implementable with libraries, but may require support in the compiler.
Whatever the case, I think D's compiler/standard libraries should be
extended to deliver the features that Sutter is promoting.
-Craig
"downs" <default_357-line yahoo.de> wrote in message
news:f7aigu$1uif$1 digitalmars.com...

<snip>

<snip> And they seem to be clean and capable, but IMO not a huge leap
forward when compared to how threading is done in other modern languages.

Yup. I feel that these classes are building blocks for something more
comprehensive, rather than an end in themselves.

<snip> To someone like myself, who is not exactly a concurrency expert,
this can be quite overwhelming. How can we make it simpler for programmers?

It can be simplified further, but tutorials help anyway. One idea would
be to perform in-process messaging with the clustering package. It's a
bit heavyweight compared to, say, DCSP, but I like that it largely
eliminates the differences between in-process and out-of-process
concurrency. Futures are another option, and they aren't terribly
difficult to implement.
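[Editor's note: as a rough illustration of how small a future can be, here
is a toy sketch using only core.thread; this is a hypothetical illustration,
not any existing library's API:]

```d
import core.thread;
import std.stdio;

// A toy future: run fn on a fresh thread; get() joins and returns the value.
class Future(T)
{
    private T value;
    private Thread thread;

    this(T delegate() fn)
    {
        thread = new Thread({ value = fn(); });
        thread.start();
    }

    T get()
    {
        thread.join();   // rendezvous with the computation
        return value;
    }
}

void main()
{
    auto f = new Future!int(delegate int() { return 6 * 7; });
    writeln(f.get()); // prints 42
}
```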

<snip> I am a fan of the Concur project. I think at least some of the
abstractions that Sutter and friends have identified can be implemented in
D with libraries.

You might want to look at Mikola Lysenko's DCSP:
http://www.assertfalse.com/projects.shtml
Concur is heavily based on Hoare's CSP model, so if you're familiar with
Concur then DCSP may not be too much of a stretch.
Sean

<snip>
Problem A: Garbage collection is a dealbreaker... primarily the unknown
frequency of collections, the duration of collections, and the fact that
all our threads get suspended.
<snip>

I think this is one area where D will improve quite a bit over time.
Personally, the GC I am most interested in is IBM's Metronome (a Java
GC), and I'm hoping that a similar approach will be possible with D.

Problem B: Threading support. <snip> Judging by the progress of C++0x, I
believe it has lost the agility necessary to make this transition and will
be superseded by another language.

D is in a better position than C++ in this respect, and things will only
get better. So far, the greatest obstacle has been available time to
develop such tools.

<snip> Even if the end result is simply that you can construct isolated
threads with private GC heaps, and that data must be marshalled to them as
if they were in another process space, it would be an improvement.

The static data region is still an issue, since it is obviously shared
between threads. But it may be that either something could be done with
multiple specialized allocators, or simply linking against a custom GC
if you're willing to code to a certain model. That said, do some
Googling for Metronome, which I mentioned above. It's a hard realtime
GC and would be perfect for games, since they have fairly
well-established runtime requirements, memory use, etc.
Sean

<snip>
Even if it would only really need 10 ms every 5 minutes, that is
unacceptable, because a collector taking 10 ms is a dealbreaker.

I think this is one area where D will improve quite a bit over time.
Personally, the GC I am most interested in is IBM's Metronome (a Java
GC), and I'm hoping that a similar approach will be possible with D.

I was going to point out this GC: at first, GCs for Java were not very
real-time compatible either, but IBM has made one which seems quite good
(I don't know if Sun has an RT GC also).
So it's not really a language issue but a language implementation issue.
Of course, to really use those multicore computers for games, not only
should the GC be real-time compatible (which is already a tough nut to
crack), it should also be distributed.
So it could take many years before the D runtime is up to the task. But as
this issue is common to any language using a GC (even Metronome is not
SMP-friendly, if I read correctly 'between the lines' of IBM's articles),
in the meantime the only solution is to disable the GC in the real-time
part of your program; it's still better than using C++.
Regards,
renoX
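[Editor's note: renoX's workaround, disabling the GC in the real-time part
of the program, is directly expressible. A sketch using D2's core.memory;
D1 exposed similar switches via std.gc:]

```d
import core.memory;
import std.stdio;

void main()
{
    GC.disable();              // no implicit collections from here on
    scope (exit) GC.enable();

    foreach (frame; 0 .. 3)
    {
        auto scratch = new ubyte[16 * 1024]; // allocations still succeed
        scratch[0] = 1;                      // ... simulate, render ...
    }

    GC.collect();              // collect at a point *we* budgeted for
    writeln("frames done");
}
```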

<snip>
Problem A: Garbage collection is a dealbreaker. But not because it exists,
or even because it is forced as a default; we definitely want garbage
collection. It is a dealbreaker because of how it behaves.
<snip>

I doubt that many class-A games would use garbage collection if they had
the possibility (ie, if the language supported it), even if the GC was a
very good one, Java-VM-like. The need for performance is too great for
that. And yes, maybe an app using a very good GC can be faster than a
normal manually-memory-managed app (Walter's words, not mine, according
to his GC page), but I doubt any GC could ever beat a well-optimized
manually-memory-managed app.

Problem B: Threading support. The language of the future addresses
threading head on. <snip>

This has been said countless times, and I think everyone (in D and the
overall programming community) acknowledges it. What happens is that no one
really knows yet how to make parallelism and concurrency easier to do. So
there is really no point in asking for D to be better at this if the way
*how* to do it is not yet known.
I read in a recent article (I think it came from Slashdot, but I'm not
sure) that a new programming paradigm is needed to make concurrency easier,
just in the same way as OO (and class encapsulation) improved on the
previous data-abstraction paradigm to make code cleaner and easier to
write; just in the same way as structured programming (ie, using
functions/scopes/modules) improved on the previous paradigm of
sequential/global/goto-using code, so to speak.
--
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D

I read in a recent article (I think it came from Slashdot, but not sure)
that a new programming paradigm is needed to make concurrency easier <snip>

Thanks for the link Brad, this is a great article. I've never really
looked into Erlang, but it sounds like it's the type of language I've
been thinking we'll end up with (based on message-passing, but a bit
evolved from that). I'm still not convinced that a functional language
is required for this, but it is certainly a more natural fit. Seems
like the greatest obstacle for Erlang would be getting schools to teach
functional programming again--it's a bit of a chicken-and-egg problem.
Sean

<snip> I'm still not convinced that a functional language is required for
this, but it is certainly a more natural fit. <snip>

Interesting. I wonder how Erlang's scalability and performance compare to
other functional languages.

<snip>
Interesting. I wonder how Erlang's scalability and performance compare to
other functional languages.

Um, I think it's something like "stupid-good", or some other such technical
term. Other than Termite Scheme, I believe it's the only one with such
lightweight processes, to the point where you can spawn near a million
on one box without bogging down. Some CL dialects have green threads, which
are lighter-weight than OS ones, but I'm not sure how they compare to
Erlang's. Termite was admittedly not as robust on the distributed side of
things, which is what helps scalability considerably; however, subsequent
releases have gotten better.
I got the sense (although I'm not sure yet) that Paul Graham's Arc
language hasn't given concurrency as much consideration as I think it
needs. Hopefully that will change...
I would really enjoy a language that had the Lisp look & feel, but with
all the concurrency primitives, distributed nature, and fault tolerance
of Erlang. Oh, and pattern matching a la Prolog. I don't think I'm
asking for much ;)
BA

Thanks for the link Brad, this is a great article. I've never really
looked into erlang, but it sounds like it's the type of language I've
been thinking we'll end up with (based on message-passing, but a bit
evolved from that). I'm still not convinced that a functional language
is required for this, but it is certainly a more natural fit.

<snip>

The video is a bit old (that, or the people who made it were being
silly), but it's really not bad. From it I learned that Erlang can do
realtime programming, can handle errors locally without much explicit
coding to do so (or so it seemed), and supports dynamic code loading.
Their conclusions at the end also suggest that it's much easier to write
such programs in Erlang than in C. Having worked in that particular field
before, I found the demo to be pretty interesting.
Sean

<snip>

Hum, again Erlang, interesting. I had heard a bit about it before, in an
article (again, I don't remember where) comparing Apache with a web server
built in Erlang. On a multicore machine Erlang did much better, because of
its massively parallel capabilities, etc.
This makes Erlang very interesting, but one must then ask questions
like: What restrictions does Erlang's approach have? Does it have
disadvantages in other areas or aspects of programming? Is it good as a
general-purpose programming language, or is it best only when doing
concurrent applications? Can any of its ideas be applied to imperative
languages like D, Java, C#, etc.?
I personally am not looking deep into this (I've never had occasion to
study concurrency in depth so far); I'm just pointing out that a lot of
things have to be considered, and I have a feeling that there must be some
downside to Erlang, or otherwise everyone else would be trying to bring
Erlang aspects into their languages. Or maybe Erlang is just gaining
momentum. Time will tell.
--
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D

It is also nice to know that Erlang has been used for desktop software (and
not just parallel computing, servers...). The open-source Wings3D modeller
is written in Erlang. www.wings3d.com

Hum, again Erlang, interesting. I had heard a bit about it before, in an
article (again, I don't remember where) comparing Apache with a web server
built in Erlang. On a multicore machine Erlang did much better, because of
its massively parallel capabilities, etc.

This makes Erlang very interesting, but one must then ask questions
like: What restrictions does Erlang's approach have? Does it have
disadvantages in other areas or aspects of programming? Is it good as a
general-purpose programming language, or is it best only when doing
concurrent applications? Can any of its ideas be applied to imperative
languages like D, Java, C#, etc.?

My (very limited) experience with Erlang on a (very small) pet project
is that you can't just transliterate a Java or C# app to Erlang. The
paradigm is too different; you have to redesign the entire app to
benefit from Erlang's features. But once you start to get comfortable
with it, you become several times more productive.

I personally am not looking deep into this (I've never had occasion to
study concurrency in depth so far); I'm just pointing out that a lot of
things have to be considered, and I have a feeling that there must be some
downside to Erlang, or otherwise everyone else would be trying to bring
Erlang aspects into their languages. Or maybe Erlang is just gaining
momentum. Time will tell.

Some of its concurrency features could be implemented as a library.
gen_server and family is a Template Method using callbacks + green
threads + error recovery on steroids.
What I'd really like to see in D is the bit syntax and pattern matching.
They are very useful for implementing binary protocols and parsers. But
I'm not holding my breath.

There's very little that's black and white when it comes to the lines
between processes and threads (both kernel and user space). I worked
with a thread library that allowed migration of individual threads
between _processes_. It's the only one like that I've ever seen, but it
worked, and quite elegantly (it was unfortunately proprietary). It's
relatively easy to produce an M:N model that combines kernel-level
threads with user-space threads.
Erlang is indeed a very interesting system, if for no other reason than
that it's fairly unique. I wish I could find 3-6 months to really get my
hands dirty with it, but alas, that's behind another dozen or so
projects. Anyone wanna be my lackey, I mean research assistant? The
pay would be crappy, the housing not free, and the hours sucky.. but
it'd be fun, honest!
Later,
Brad

Now that I'm learning about Erlang I'm discovering that it seems to work
a lot like how I wanted to approach concurrency in D, so I'm definitely
going to try and find some time to play with it.
Sean

The Scala developers have tried to implement something similar to Erlang
as a Scala library, 'Actors' [1].
Because of some Scala features (especially pattern matching and symbols as
method names), Scala code can look a lot like Erlang.
But attempts to implement a message-passing model in a general-purpose
language (like Java, Scala, C++ or D) lead to some drawbacks. First,
active entities will be represented as threads, so to pass a message from
one thread to another it is necessary to use some synchronization
technique (like mutexes and condition variables). But the price of
synchronizing threads via OS mechanisms is high, so the maximum throughput
is limited [2]. In contrast, the Erlang VM uses its own synchronization
mechanisms, which are cheaper than the OS's. Another limitation is the
number of parallel threads each platform supports.
Next, processes in Erlang are isolated, so the Erlang VM can easily wipe
out any broken or dangling process without any interference with other
processes' data. With OS threads in C++/D/Java programs it is not so easy:
even if we terminate a thread somehow, there could be damaged or
inconsistent data to which other threads refer.
Next (in addition to the previous point), Erlang is a pure functional
language, so there are no global variables or shared data between
processes; Erlang simply doesn't allow processes to modify some shared
variable. In an imperative language it is very hard to write in such a
style, because some hidden link between two threads' data can be
introduced by mistake.
Next, the reliability of Erlang programs depends heavily on a specific
mechanism of the Erlang VM: notification of process termination. If two
processes are linked to one another, the Erlang VM sends special messages
about the linked process's termination, with a description and additional
information.
Next, Erlang is a very successful and handy DSL for communication between
isolated processes. Special syntax and pattern matching make Erlang
programs small and readable. In statically typed languages without pattern
matching, sending and receiving messages could be much more verbose.
Just my $0.02.
[1] http://lamp.epfl.ch/~phaller/doc/ActorsTutorial.html
[2] http://lampwww.epfl.ch/~odersky/papers/jmlc06.pdf
--
Regards,
Yauheni Akhotnikau

All good points. And I concede that it would be difficult to achieve
the level of concurrency used in Erlang applications in an imperative
language like D. But I do believe that the basic style of programming
could be used with reasonable results. Processes wouldn't be quite as
"throw-away" as in Erlang, which would have an impact on error handling
and such, but the proper message-oriented design could do fairly well
anyway. CSP, for example, assumes a more heavy-weight process model.
It doesn't span networks so well, but it at least shows that similar
approaches to parallelism meet with reasonable results in imperative
languages.
But my post was really about Erlang anyway :-) It sounds very
interesting and I may well end up wishing I had a project in which to
use it. I'd certainly rather layer Erlang atop D than atop C anyway.
Sean

It's true. I've been doing this for at least the last five years -- we
have developed our own agent-oriented framework in C++ and have used it in
several projects. Agents interoperate with one another only via messages.
Agents handle messages as events (special object methods), and a special
entity, the dispatcher, dispatches agent events to one of its worker
threads. Some agents can share a single worker thread; some agents can own
their own thread (active agents).
But as a consequence of using C++, our code is much more verbose than
Erlang :(
That approach changes your way of thinking completely. So now I'm a
message-passing addict :))
But because message-passing in C++ is more expensive than in Erlang, it is
necessary to divide the application into rather big parts (agents). All
communication inside those parts (agents) is made via ordinary
synchronized calls, but there is almost no traditional multithreaded
programming in the agent implementations. For example, our biggest project
developed with that framework now consists of nearly two hundred agents
and more than 90 threads.
--
Regards,
Yauheni Akhotnikau

Heh, guess I've been using message-based concurrency for a while without
knowing there was anything special about it. But, if I'm understanding
this right, in an imperative language, if there _is_ any state shared
between actors, you still need explicit locking, neh?

In our framework, explicit locking is needed only if agents work on
different dispatcher threads (for example, if two active agents share some
data). But in the majority of cases cooperative agents work on the same
dispatcher thread, so they cannot run in parallel and do not need any
locking at all.


Sean Cavanaugh wrote:
I doubt that many class-A games would use garbage collection if they had
the option (ie, the language supported it), even if the GC was a
very good one, like the Java VM's. The need for performance is too great
for that. And yes, maybe an app using a very good GC can be faster than a
normal manually-memory-managed app (Walter's words, not mine, according
to his GC page), but I doubt using any GC could ever beat a well
optimized manually-memory-managed app.

I think it depends on the app design. Without garbage collection,
sharing data between threads can be quite expensive. For example,
boost::shared_ptr uses an atomic operation to adjust its reference
counter, which is typically more than 70 cycles if a LOCK operation is
used on x86 (in truth, I think they've optimized it to use a spin-lock,
which is more efficient but more complicated to get right). But I do
agree that explicit allocation and deletion only is more efficient than
allocation and deletion combined with the occasional GC sweep (for
obvious reasons).

This has been said countless times, and I think everyone (in D and the
overall programming community) acknowledges that. What happens is that
no one really yet knows how to make parallelism and concurrency easier
to do. So there is really no point in asking for D to be better at this,
if the way *how* to do it is not yet known.

Well, there are a lot of ways to make it easier than explicit
manipulation of mutexes and such--some of the involved research dates
back to the early 60s--but even with these alternate methods,
concurrency isn't easy.

I read in a recent article (I think it came from Slashdot, but not sure)
that a new programming paradigm is needed to make concurrency easier,
just in the same way as OO (and class encapsulation) improved on the
previous data abstraction paradigm to make code cleaner and easier to
write. Just in the same way as structured programming (ie, using
functions/scopes/modules) improved on the previous paradigm of
sequential/global/goto-using code, so to speak.

This is what I think needs to happen. Concur and such are an
improvement, but they still require the programmer to do a lot
explicitly. Ultimately, we need a fundamental change in the way we do
multithreaded programming if we want our applications to scale on future
architectures.
Sean


It may very well be true that we need something that isn't available yet.
However, I don't think we should wait for something better than Concur.
Concur in its current form is way better than anything offered by today's
OOP languages. I think we should pursue implementing these abstractions
now. If something better presents itself, then we can leverage that as
well.
-Craig


Well, there are a lot of ways to make it easier than explicit
manipulation of mutexes and such--some of the involved research dates
back to the early 60s--but even with these alternate methods,
concurrency isn't easy.

Hum, like conditional variables?

I was thinking of Agents. Hoare's CSP is fairly old as well--I think
the original paper was published in the mid-late 70s. Condition
variables are just a building-block, along with mutexes, semaphores, etc.
Sean

Well, there are a lot of ways to make it easier than explicit
manipulation of mutexes and such--some of the involved research dates
back to the early 60s--but even with these alternate methods,
concurrency isn't easy.

Murphy's Law #NaN: Concurrent programming is hard.
Might it be the case that there is something fundamental about concurrent
programming that makes it difficult for most, if not all, people to work
with? Might it be that the normal[*] human brain just can't think that way?
[*] think Rain Man <g>

Murphy's Law #NaN: Concurrent programming is hard.
Might it be the case that there is something fundamental about
concurrent programming that makes it difficult for most, if not all,
people to work with? Might it be that the normal[*] human brain just
can't think that way?

I think the issue has more to do with the legacy of old decisions made
for the sake of efficiency and the difficulty with which the result of
these decisions scale as parallelism increases. Near as I can tell,
message-passing never became terribly popular in the 80s largely because
mutually exclusive access to shared data required less memory overhead,
and because it could be more easily done in library code for existing,
popular programming languages (ie. C).
But perhaps you're right in that people tend to be self-centered in how
they approach problems. A recipe for baking a cake, for example,
assumes a single baker in that it consists of a series of sequential
steps from beginning to completion. Most programs are written the same
way. But a more accomplished cook quickly learns that steps can be
performed out of order, and kitchen staffs delegate different portions
of the cooking process to different individuals to increase throughput.
For comparison, both mutual exclusion and message-passing delegate tasks
to multiple distinct workers. But the way each operates is subtly
different. Mutual exclusion can be thought of as having a single shared
program state, and mutexes and such are a means of protecting this state
from corruption. By comparison, message-passing has no shared program
state. Each distinct worker could exist within the same process, a
different process, or on another machine entirely. So rather than the
kitchen somehow delegating work to various chefs and micro-managing
their interaction (the mutually exclusive approach), the chefs each go
about their assigned tasks and interact whenever they need an
ingredient (the message-passing approach).
I think the important shift in mindset regards how to deal with common
resources. Typically, the mutually exclusive approach implies that
workers queue up and take turns utilizing the resource. Only one person
can use an oven at any given time, for example. The message-passing
equivalent would be to designate a specific worker for baking cakes.
When a cake is prepared, it is left on a table, and the baker takes
cakes off the table as ovens are available and cooks them, placing the
completed product on another table when the cakes are done.
So in conclusion, I think that the message-passing approach is the way
teams of people work together cooperatively, while mutual exclusion is
more like a person working on a task who suddenly finds himself
surrounded by other people. In the former case, concurrency is planned
from the outset, while in the latter case, concurrency is more of a
contingency mechanism. I don't think either one is inherently
incompatible with how people think, but message-passing does require a
bit more consideration or planning than mutual exclusion.
Sean

But a more accomplished cook quickly learns that steps can be
performed out of order, and kitchen staffs delegate different
portions of the cooking process to different individuals to
increase throughput.

This seems to be the main aspect. Current designers and coders are used
to playing their music as one-man bands.
Concurrency requires everyone to upgrade to a conductor of equally
skilled one-man bands. This includes the ability to plan for the right
equipment, to account for an unstable number of available skilled
personnel, and to plan for the absence of the conductor itself.
The sad part is that quasi-single-CPU concurrency seems much harder to
master than massive concurrency.
-manfred

Well, there are a lot of ways to make it easier than explicit
manipulation of mutexes and such--some of the involved research dates
back to the early 60s--but even with these alternate methods,
concurrency isn't easy.

Murphy's Law #NaN: Concurrent programming is hard.
Might it be the case that there is something fundamental about
concurrent programming that makes it difficult for most, if not all,
people to work with? Might it be that the normal[*] human brain just
can't think that way?
[*] think Rain Man <g>

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.
At a minimum, science fiction is right on target with your comment. In
the Ghost in The Shell (Standalone Series), there is the occasional
reference to an "Autistic Mode" that some cyber-brains have. So
throughout the story, you have some of these cyborgs flipping that
switch whenever they need some Rain Man style insight to a given
situation - like searching the internet as one would drink from a
firehose, or performing wide-area surveillance via 100+ cameras at
once. If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently out-of-reach
for normal people, despite the fact that some people are just born that
way.
Given that cybernetic brain augmentation is a long way off, I think
we're stuck trying to develop a better way to express the concurrent
world in the common tongue of us "flat-landers".
$0.02:
But if you ask me what's needed, I think it comes down to the fact that
concurrency is between the code and data, not just in the code. So
either the developer needs to balance those two, or the compiler needs
to know more about your data in order to parallelize things.
Algol-family languages (C, D, Java, etc.) are all in the first
category, hence the nature of this thread. Erlang is an example of the
latter, and benefits mostly from being a functional language (and from
being purpose-built for parallelization).
I really think that we have the tools we need. If we were to teach the
compiler how to perform some calculus on data structures when they're
handled in iteration, it's reasonable to assume that it can take steps
to parallelize things for us - this would get us about half-way to the
kind of stuff functional languages can pull off. The D2.0 additions for
invariance and const-ness will probably help here.
--
- EricAnderton at yahoo

Well, there are a lot of ways to make it easier than explicit
manipulation of mutexes and such--some of the involved research dates
back to the early 60s--but even with these alternate methods,
concurrency isn't easy.

Murphy's Law #NaN: Concurrent programming is hard.
Might it be the case that there is something fundamental about
concurrent programming that makes it difficult for most, if not all,
people to work with? Might it be that the normal[*] human brain just
can't think that way?


[*] think Rain Man <g>

There are moments where I wish I could think *like* Rain Man, especially
when it comes to concurrency.
At a minimum, science fiction is right on target with your comment. In
the Ghost in The Shell (Standalone Series), there is the occasional
reference to an "Autistic Mode" that some cyber-brains have. So
throughout the story, you have some of these cyborgs flipping that
switch whenever they need some Rain Man style insight to a given
situation - like searching the internet as one would drink from a
firehose, or performing wide-area surveillance via 100+ cameras at
once. If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently out-of-reach
for normal people, despite the fact that some people are just born that
way.

Interesting. In Vernor Vinge's "A Fire Upon the Deep" (if I remember
correctly), there are people who take drugs for basically the same
purpose. They're ship operators and such--jobs that require inhuman
focus to perform optimally.

But if you ask me what's needed, I think it comes down to the fact that
concurrency is between the code and data, not just in the code. So
either the developer needs to balance those two, or the compiler needs
to know more about your data in order to parallelize things. Algol
family languages (C, D, Java, etc.) are all in the first category, hence
the nature of this thread. Erlang is an example of the latter, and
benefits mostly from being a functional language (and from being
purpose-built for parallelization).
I really think that we have the tools we need. If we were to teach the
compiler how to perform some calculus on data structures when they're
handled in iteration, it's reasonable to assume that it can take steps
to parallelize things for us - this would get us about half-way to the
kind of stuff functional languages can pull off. The D2.0 additions for
invariance and const-ness will probably help here.

Hm... I guess the purpose would be some sort of optimal COW mechanism
for shared data, or is there another use as well? It's an intriguing
idea, though I wonder if such a scheme would make the performance of
code difficult to analyze.
Sean

I really think that we have the tools we need. If we were to teach the
compiler how to perform some calculus on data structures when they're
handled in iteration, it's reasonable to assume that it can take steps
to parallelize things for us - this would get us about half-way to the
kind of stuff functional languages can pull off. The D2.0 additions
for invariance and const-ness will probably help here.

Hm... I guess the purpose would be some sort of optimal COW mechanism
for shared data, or is there another use as well? It's an intriguing
idea, though I wonder if such a scheme would make the performance of
code difficult to analyze.
Sean

Your guess is as good as mine. I was just making the observation that
the major hurdle is that we're adopting techniques that are
deliberately explicit, to overcome the fact that the D compiler is
unaware of the problem; the degree of specificity that is required can
be very unwieldy. In contrast, the clear winners in this area are
languages that are /implicitly/ parallelizable by design, so clearly we
need to move in that direction instead. :)
Really, what I'm thinking of is a way to say "give me your best shot,
or tell me why you can't parallelize this". The parallel() suggestion
for foreach (I forget by whom) is a good example of this. Adding a
"shared" modifier for classes and typedefs might be another.
Like you suggest, a modified CoW would be a good start. At a minimum,
if the GC were more thread aware, we could do smarter things inside and
outside the compiler.
--
- EricAnderton at yahoo

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.

[...]

If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently
out-of-reach for normal people, despite the fact that some people are
just born that way.

I have wondered if this is something like incomputability with regards to
a Turing machine. Might the normal brain be like a Turing machine and the
autistic brain be something like a brain not limited in the same way? Given
that some people can, for instance, identify large primes in near constant
time, I'd say this is a distinct possibility.
At the risk of sounding politically incorrect: does anyone know of an autistic
person who might be interested in learning programming?

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.

[...]

If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently
out-of-reach for normal people, despite the fact that some people are
just born that way.

I have wondered if this is something like incomputability with regards
to a Turing machine. Might the normal brain be like a Turing machine and
the autistic brain be something like a brain not limited in the same
way? Given that some people can, for instance, identify large primes in
near constant time, I'd say this is a distinct possibility.
At the risk of sounding politically incorrect: does anyone know of an
autistic person who might be interested in learning programming?

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.

[...]

If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently
out-of-reach for normal people, despite the fact that some people are
just born that way.

I have wondered if this is something like incomputability with regards
to a Turing machine. Might the normal brain be like a Turing machine
and the autistic brain be something like a brain not limited in the
same way? Given that some people can, for instance, identify large
primes in near constant time, I'd say this is a distinct possibility.
At the risk of sounding politically incorrect: does anyone know of an
autistic person who might be interested in learning programming?

Autism is not synonymous with savantism, which is what you were
thinking of.

No, I was not necessarily thinking of savantism. More generally I was
thinking about people whose brains function abnormally. This covers
autism, savantism, insanity, psychopathy, and genius[*], among others.
Autism just happens to be (if I understand correctly) the more profound
and general (savantism seems to be a subset) abnormality with regards to
the type of intellect associated with CS style tasks.
* If you ask me, insanity is where the brain works differently and it
gets in the way; genius is where the brain works differently and it
helps. They are not mutually exclusive and in fact probably correlate
quite well.

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.

[...]

If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently
out-of-reach for normal people, despite the fact that some people are
just born that way.

I have wondered if this is something like incomputability with
regards to a Turing machine. Might the normal brain be like a Turing
machine and the autistic brain be something like a brain not limited
in the same way? Given that some people can, for instance, identify
large primes in near constant time, I'd say this is a distinct
possibility.
At the risk of sounding politically incorrect: does anyone know of an
autistic person who might be interested in learning programming?

Autism is not synonymous with savantism, which is what you were
thinking of.

No, I was not necessarily thinking of savantism. More generally I was
thinking about people whose brains function abnormally. This covers
autism, savantism, insanity, psychopathy, and genius[*], among others.
Autism just happens to be (if I understand correctly) the more profound
and general (savantism seems to be a subset) abnormality with regards to
the type of intellect associated with CS style tasks.
* If you ask me, insanity is where the brain works differently and it
gets in the way; genius is where the brain works differently and it
helps. They are not mutually exclusive and in fact probably correlate
quite well.

Well, this conversation was centering on extraordinary abilities that
made a human be able to do computer-like calculations (processing large
amounts of input instantly, calculating primes, etc.).
That is savantism, not autism. Autism is a wide variety of mental and
behavioral disorders (of which savantism is a part), most of them
not very pleasant or even programming-friendly. You should check
Wikipedia and the web, because autism is kind of a fuzzy term, and it
took me a while to start understanding it, since it seems some people
use the term with slightly different (and possibly incorrect) meanings.
For instance, one of my AI/Agents teachers used the term autism as if it
meant "not processing external output", which is hardly autism. I also
don't think autism in general is a "type of intellect associated with CS
style tasks", although some of its sub-disorders may be (which then?).
(There was a fellow in the D NG some time ago who had Asperger's
Syndrome, an autism-spectrum disorder, but again, autism is not what you
were looking for)
--
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D

Well, this conversation was centering on extraordinary abilities that
made a human be able to do computer-like calculations (processing
large amounts of input instantly, calculating primes, etc.).
That is savantism,

Unless I'm totally missing something (and I did check out Wikipedia),
savantism is too narrow a term for what I'm thinking of.

not autism. Autism is a wide variety of mental and
behavioral disorders (which savantism is a part of), most of them are
not very pleasant or even programming-friendly. You should check
wikipedia and the web, because autism is kinda of a fuzzy term, and it
took me a while to start understanding it, since it seems some people
use the term with slightly different (and possibly incorrect)
meanings.

[...]

I also don't think autism in general is a "type of intellect associated
with CS style tasks", although some of its sub-disorders may be (which
then?).

OK, I'll grant that autism is too wide a term for what I was thinking of
(and even so it might still not encompass what I'm thinking of).
I think you actually used the correct term at the top, "extra-ordinary
abilities", but only if taken literally as not normal. I may be
measuring a cloud with a micrometer here, but I still think it would be
interesting to see how an abnormal mind would approach some of these
problems.

There are moments where I wish I could think *like* Rain Man,
especially when it comes to concurrency.

[...]

If nothing else, it illustrates that there's something
extraordinary about such abilities that may be permanently
out-of-reach for normal people, despite the fact that some people are
just born that way.

I have wondered if this is something like incomputability with regards
to a Turing machine. Might the normal brain be like a Turing machine and
the autistic brain be something like a brain not limited in the same
way? Given that some people can, for instance, identify large primes in
near constant time, I'd say this is a distinct possibility.

I agree. There's a whole range of "brain temperaments" that give rise
to all kinds of "ab-normal" behaviors like this. Autism is one.
Synesthesia is another.
I saw this one program about a savant (of the non-idiot variety) who was
a "visual-numerical synesthete": he could read a number and would see
its "shape" in his mind's eye. By focusing on various facets of the
shape and color, he could determine all kinds of things without using
math: odd/even, prime, factors, etc. When asked to use clay to model
these shapes, it was found not to be a hoax, and that his reckoning of
these numbers was highly regular and uniform. Fascinating stuff.
So the real question becomes: if the real top-tier* insights are
permanently out of reach for us "mere mortals", how do we teach a
program to garner these kinds of insights (for parallelism and
optimization) for us instead?
(*I think we can all agree that parallelism is not inherently difficult
to grasp. But for sizable programs, where to split things up can be a
very tough problem to solve correctly.)
--
- EricAnderton at yahoo

I doubt that many class-A games would use garbage collection if they had
the possibility (ie, the language supported it), even if the GC was a
very good one, Java VM like. The need for performance is too great for
that.

I hate to call you out, but.. I happen to be working for a class-A games
company who are writing their latest title in C#.

I doubt that many class-A games would use garbage collection if they had
the possibility (ie, the language supported it), even if the GC was a
very good one, Java VM like. The need for performance is too great for
that.

I hate to call you out, but.. I happen to be working for a class-A games
company who are writing their latest title in C#.

Yeah, it's not always wise to make broad statements like that. But I think
he's still right if you are talking about most game engine developers. I've
talked with a few of them myself and they are OBSESSED with performance.
You would think garbage collection was a four-letter word. That's not to
say game developers wouldn't use a scripting language or jitted language for
the game itself. But usually it's a combination of C++ and scripted or
jitted code. Your company may be the exception. Was your game engine
written entirely in C#? I'm also curious, what genre is this game?
-Craig

Craig Black Wrote:
But usually it's a combination of C++ and scripted or jitted code. Your
company may be the exception. Was your game engine written entirely in
C#? I'm also curious, what genre is this game?

Sorry. I'm NDA'd.
Obviously this is all a bit experimental. And there is some C++ too, but
the C# is doing more of the heavy lifting than you might at first expect.
Of course, it's really the graphics hardware doing most of the real work ;-)
My point stands though. And I suspect we may be at the front of a new wave..