I am only a few days old in playing with D2 and the whole shared stuff, so I am
probably wrong about something.
You should probably declare your example class MyValue synchronized instead of
shared. That implies the class is shared too, and this way all its methods are
synchronized. In D1 you could mix synchronized and non-synchronized methods in a
class; in D2 it's all or nothing. This way you don't need the _lock var in your
example.
So this would work (I guess):
synchronized class MyValue {
    int inc() {
        return _value++;
    }
    int get() {
        return _value;
    }
    private int _value;
}

shared MyValue sharedVal;

void main() {
    sharedVal = new shared(MyValue);
}
I noticed that in D1 synchronized methods of the same class share the same lock,
while in this D2 example (when the whole class is declared synchronized), each
method has its own lock.

Also, is there any documentation on the actual semantics of shared?
http://www.digitalmars.com/d/2.0/attribute.html is a blank on the subject, and
the "migrating to shared" article only talks about simple global state. What
are the actual semantics of shared classes, and how do they interact with other
code? For instance, after much banging of my head against the desk, I finally
wrote a working implementation of a simple shared multi-reader var. Obviously
there are better ways to do a simple shared incrementing counter, this is just
a first experiment working toward a shared mutable 512MB trie data structure
that we have in our app's current C++ implementation:

So I can maybe understand the cast(shared) in the ctor. But I have to admit I
have absolutely no idea why I had to cast away the shared attribute in the
inc/get methods. Is there any documentation on what's really going on in the
compiler here? It's a shared method, accessing a shared instance var, why the
cast? Is the compiler upset about something in the definition of ReadWriteMutex
itself?
Also, how would one implement this as a struct? My postblit op generates
compiler errors about casting between shared/unshared MyValue:

I recognize the possible race conditions here, but there has to be *some* way
to implement a postblit op on a shared struct?
I hope this doesn't come across as empty complaining, I'm happy to help
improve the documentation if I can.

It probably wasn't very clear from my simplified example, but I'm looking to
create a shared-reader-one-writer scenario. If I declare MyValue synchronized,
only one thread can be inside the get() method at a time, which defeats the
shared-reader requirement. Imagine this is a much larger, more complex data
structure, where get() requires walking through multiple levels of a tree and a
binary search at the last level.
-- Brian

Yup, I get it. But there is one point in it: a write is not an atomic operation,
in the sense that get() might return half-written data, right?

Have you tried core.sync.rwmutex? Also, please remember that CREW locks
are not composable and can easily lead to deadlocks.

Afaik, the current rwmutex is a wrapper around two separate mutexes (one
for readers, one for writers) and you have to decide whether readers or
writers get precedence, meaning that either all writers in the queue have
to wait if just one reader comes up, or all readers in the queue have to
wait if a single writer comes up.
This is very unlike the behaviour I would like to see; I would expect
readers and writers to be in the same queue, meaning the only difference
between the rw and the normal mutex would be that all subsequent readers
in the queue can read at the same time.
/Max

ReadWriteMutex exposes a read and a write interface, but there certainly aren't
two actual mutexes underneath. It's true that the implementation doesn't
explicitly maintain a queue, but this is intentional. If readers and writers
in the queue have different thread priorities set, those priorities should be
honored, and it's pointless to write all that code in druntime when the OS
takes care of it for us. Instead, those waiting for access to the mutex all
block on a condition variable and whoever wakes up first wins. It's up to the
OS to make sure that thread priorities are honored and starvation doesn't occur.

I have been burned before by things like priority inheritance chaining, and
other ways that thread priorities can be elevated for potentially long periods
of time.
Priority inheritance chaining goes like this:
Thread low locks mutex A, then mutex B.
Thread high tries to lock mutex B, elevating low's priority to high's so
that high can get the mutex quickly.
When thread low releases mutex B (letting high get it), the OS has
trouble figuring out what low's priority should now be, and leaves it
elevated until it releases all mutexes it still holds (mutex A in this case).
Low is now running at a high priority, preventing thread medium from
getting any CPU.
This scenario happened to me with VxWorks some time back, and is the
reason I no longer do much work at all while I have a mutex locked. I am
confident that it is a real problem to this day.
--
Graham St Jack

That can only happen on real-time OSes, where high priority threads
prevent low priority ones from running at all. On non-real-time OSes
like Windows, Linux, *BSD and MacOS, low priority threads will always
get some CPU cycles too, and AFAIK thread priorities are never elevated
in the way you describe.
That being said, it is always a good practice to spend as little
time as possible holding a lock (whether a mutex or a file lock or
whatever).
Jerome
--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
