Almost, it's very simple, like the plain old function pointer as
callback in C. The main differences are:
- when an object whose member functions are connected gets destroyed,
there is no problem, no segfaults.
- signals and slots are not as tightly coupled to a specific class as in
your example. Be it through introspection or IFTI, class B needs to know
nothing about A except where it can connect, meaning less coupling
between classes. I think that's all there is to it in essence.

Almost, it's very simple, like the plain old function pointer as
callback in C. The main differences are:
- when an object whose member functions are connected gets destroyed,
there is no problem, no segfaults.
- signals and slots are not as tightly coupled to a specific class as in
your example. Be it through introspection or IFTI, class B needs to know
nothing about A except where it can connect, meaning less coupling
between classes. I think that's all there is to it in essence.

Yeah, that's pretty much it; everything can be done with templates etc.
What we don't really have yet is standardized introspection. One of the
nice things about Qt is that it provides a typesafe callback, which C++
doesn't have. D has the wonderful delegate, so at least we get built-in
typesafe callbacks. But on my D wishlist is introspection, or a built-in
messaging system, or at least a standardized implementation in Phobos.
It's not a big deal by any means; I just think people who have used
Qt can see how S&S would be Very Cool in D.
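The typesafe-callback idea being discussed can be sketched in a few lines. This is an illustrative Python sketch (the `Signal` class and names are hypothetical, standing in for a D delegate-based implementation):

```python
class Signal:
    """Minimal signal: holds callbacks, invokes them all on emit."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def disconnect(self, slot):
        self._slots.remove(slot)

    def emit(self, *args):
        # copy the list so slots may disconnect during emission
        for slot in list(self._slots):
            slot(*args)

received = []

def on_value(x):
    received.append(x)

sig = Signal()
sig.connect(on_value)
sig.emit(42)
print(received)  # [42]
```

The point of the loose coupling: `sig` knows nothing about the receiver except that it is callable with the emitted arguments.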

Almost, it's very simple, like the plain old function pointer as
callback in C. The main differences are:
- when an object whose member functions are connected gets destroyed,
there is no problem, no segfaults.

Doesn't garbage collection automatically take care of that?

- signals and slots are not as tightly coupled to a specific class as in
your example. Be it through introspection or IFTI, class B needs to know
nothing about A except where it can connect, meaning less coupling
between classes. I think that's all there is to it in essence.

I think that is easily handled with a naming convention - call A.connect().

Almost, it's very simple, like the plain old function pointer as
callback in C. The main differences are:
- when an object whose member functions are connected gets destroyed,
there is no problem, no segfaults.

Doesn't garbage collection automatically take care of that?

Yes, but you'd have a reference to a dead object in your delegate array
when it gets deleted, and then you emit a signal. You can of course
remove that reference when you want to delete an object, but then you
have to track them and lose some ease of use. Or let some language /
library do it for you.
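One way a library can "do it for you" is to hold the connections weakly and prune dead entries at emit time. A Python sketch using `weakref.WeakMethod` (illustrative only; a D implementation would hook the GC or destructors instead):

```python
import weakref

class Signal:
    """Holds weak references to bound methods; entries whose
    receiver has died are skipped and pruned at emit time."""
    def __init__(self):
        self._slots = []

    def connect(self, bound_method):
        self._slots.append(weakref.WeakMethod(bound_method))

    def emit(self, *args):
        live = []
        for ref in self._slots:
            slot = ref()          # None if the receiver is gone
            if slot is not None:
                slot(*args)
                live.append(ref)
        self._slots = live        # prune the dead entries

class Receiver:
    def __init__(self):
        self.got = []
    def on_event(self, x):
        self.got.append(x)

sig = Signal()
r = Receiver()
got = r.got            # keep the log alive past the receiver
sig.connect(r.on_event)
sig.emit(1)
del r                  # receiver "deleted"
sig.emit(2)            # no dangling call; the entry is pruned
```

After the second emit the dead slot has been dropped automatically, which is exactly the ease-of-use point made above.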

- signals and slots are not as tightly coupled to a specific class as
in your example. Be it through introspection or IFTI, class B needs to
know nothing about A except where it can connect, meaning less
coupling between classes. I think that's all there is to it in essence.

I think that is easily handled with a naming convention - call A.connect().

Yes, this is also a convention among different libraries (I like the
~= syntactic sugar though; C# events use the += operator, btw). Given
this convention, some boilerplate code, and a way to delete objects
without needing to manually call A.disconnect(&foo), there are two
things missing, but they could be left out:
- Signals as a separate struct or class instead; no need for a signal to
be limited to being a member of some class.
- It should work for all callable types.

Almost, it's very simple, like the plain old function pointer as
callback in C. The main differences are:
- when an object whose member functions are connected gets destroyed,
there is no problem, no segfaults.

Doesn't garbage collection automatically take care of that?

Yes, but you'd have a reference to a dead object in your delegate array
when it gets deleted, and then you emit a signal. You can of course
remove that reference when you want to delete an object, but then you
have to track them and lose some ease of use. Or let some language /
library do it for you.

I believe this was one of the reasons people have requested weak
references for D. Though really, the same thing can be accomplished by
registering a proxy class instead of a reference to the class to be
signaled. I do this all the time in C++.
Sean
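The proxy idea Sean describes can be sketched like this (Python, names hypothetical): the signal only ever stores a small proxy object, and destroying the receiver merely detaches the proxy instead of leaving a dangling reference behind.

```python
class Proxy:
    """Forwards calls to a target method until detached. The signal
    holds only the proxy, never the receiver itself."""
    def __init__(self, target_method):
        self._method = target_method

    def detach(self):
        self._method = None

    def __call__(self, *args):
        if self._method is not None:
            self._method(*args)

log = []

class Window:
    def on_signal(self, msg):
        log.append(msg)

w = Window()
p = Proxy(w.on_signal)
slots = [p]                      # the "signal" stores only proxies

for s in slots:
    s("first")
p.detach()                       # done when Window is destroyed
for s in slots:
    s("second")                  # safely ignored, no dangling call
print(log)  # ['first']
```

The cost is one extra indirection per connection; the benefit is that nobody ever has to walk the signal's slot list at destruction time.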

Here's the mixin. Actually, 3 of them, one each for 0 arguments, 1
argument, and 2 arguments. I added a disconnect() function. Note how
trivial it is to use - no need for preprocessing.

Nice, this is also a good option imho, even though it lacks a few
features. To be fair, the preprocessor of Qt adds a lot more stuff than
this. See
http://www.scottcollins.net/articles/a-deeper-look-at-signals-and-slots.html
for a comparison.
I would mix this into a struct, alias emit to opCall, provide a clear
function (remove all delegates) and maybe opApply. Then you can have
something like this:

class Button
{
    Signal!() onClicked;

    void processInput()
    {
        if (/*code to detect clicky*/)
            onClicked();
    }
}

Button clicky = new Button;
Popup hello = new Popup("hello");
clicky.onClicked.connect(&hello.msg);
// or clicky.onClicked.connect(&hello.msg, hello) if connections are made safe.
// or: clicky.onClicked ~= hello.msg;

If and when D function pointers and delegates become compatible
(will they?), it will get even better for this simple solution.

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF (or,
better yet, const size_t weakxor = -1), then XORing again before and
after you need to operate on them?
Just thought I'd toss that out there.
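The XOR trick round-trips like this (Python integers standing in for 32-bit pointer values; as the follow-ups point out, the scheme has real problems with an actual GC):

```python
WEAKXOR = 0xFFFFFFFF  # i.e. cast(size_t)-1 on a 32-bit target

def hide(ptr):
    # XOR so a conservative scan no longer sees a valid address
    return ptr ^ WEAKXOR

def reveal(hidden):
    # XOR is its own inverse: reveal(hide(p)) == p
    return hidden ^ WEAKXOR

p = 0x0804B010                 # some hypothetical heap address
h = hide(p)
assert reveal(h) == p          # round-trips exactly
assert h != p                  # the stored value differs from p
```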

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF (or,
better yet, const size_t weakxor = -1), then XORing again before and
after you need to operate on them?
Just thought I'd toss that out there.

You're very devious! I like that idea. (All you need to do is set the
least significant bit to 1.)

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF (or,
better yet, const size_t weakxor = -1), then XORing again before and
after you need to operate on them?
Just thought I'd toss that out there.

You're very devious! I like that idea. (All you need to do is set the
least significant bit to 1.)

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF
(or, better yet, const size_t weakxor = -1), then XORing again before
and after you need to operate on them?
Just thought I'd toss that out there.

You're very devious! I like that idea. (All you need to do is set the
least significant bit to 1.)

Spoke too soon. That won't work.

Inverting the LSB will not, because the result is still a valid pointer
into the object. Inverting the MSB should always work; well, it should
work in the sense that it does not prevent the object from being deleted.
But how can it be tested that the pointer is still callable?

Inverting the LSB will not, because the result is still a valid pointer
into the object. Inverting the MSB should always work; well, it should
work in the sense that it does not prevent the object from being deleted.
But how can it be tested that the pointer is still callable?

I don't think the MSB will work, either, as there's no guarantee the GC
pool won't straddle the boundary.

Inverting the LSB will not, because the result is still a valid pointer
into the object. Inverting the MSB should always work; well, it should
work in the sense that it does not prevent the object from being deleted.
But how can it be tested that the pointer is still callable?

I don't think the MSB will work, either, as there's no guarantee the GC
pool won't straddle the boundary.

If you consider that you probably don't want your 'hidden' pointers to
be valid for objects they /didn't/ point to either, this gets harder...
I don't think such a simple scheme (XORing with something) can be
guaranteed to work in the general case, unless you assume the GC pool
spans at most half the address space. [1]
Once it gets to be over 2GB (on x86) I think there's basically no way to
make this work.
[1]: If you *do* assume that, (void* p){ return 2 * start_of_gc_pool -
cast(size_t)p; } should provide unique values guaranteed not to point to
the GC pool as long as the original one did. And feeding the returned
value back to it will return the original.
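The footnote's mapping is an involution: applying it twice returns the original pointer, and, under the stated assumption that the pool spans at most half the address space, its image lies outside the pool. A quick sketch with 32-bit wraparound (`START` is a hypothetical pool start):

```python
START = 0x40000000             # hypothetical start of the GC pool
MASK = (1 << 32) - 1           # emulate 32-bit size_t wraparound

def mirror(p):
    # reflect p around START: mirror(mirror(p)) == p
    return (2 * START - p) & MASK

p = 0x41234567
assert mirror(mirror(p)) == p  # feeding it back returns the original
assert mirror(p) != p
```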
Maybe you could try splitting the pointer up in two parts, stored
separately? (i.e. use more than size_t.sizeof bytes to store it)
This could be literally, storing the upper half and lower half of the
address in different ints.
Another option is also two ints: one pseudo-random, the other pointer
XOR the first.
Yet another one (I like this one, it's pretty much guaranteed to work):
Find some (ptr_bits/2)-bit address range that's guaranteed to not
contain valid pointers. IIRC, both Windows and Linux use the upper GB or
so for kernel address space, so the GC pool should never be located
there on these OSs. Other OSs probably have something similar, if
perhaps in a different location.
Then just store the upper and lower halves in separate ints, whose upper
halves ensure the total value is guaranteed to be in the OS-reserved part
of the address space (e.g. set the upper 16 bits to 1s, the lower 16 to
the parts of the pointer stored).
Another variant of the "kernel-reserved address space" approach I just
thought of: if you know that the OS the program is compiled on reserves
the top 1 GB of address space for itself, store the top two bits of the
pointer separately, set those bits to one in your stored pointer, and
restore them before returning. Simpler, and it only uses 34 bits on a
32-bit computer. Of course, memory allocation granularity means you'll
likely still allocate at least 5 or 8 bytes and thus still "waste" some
bytes.
Or you could "just" implement introspection and update the GC to ignore
non-pointers. Then cast the pointer to a size_t for storage so the GC
ignores it :). This one will probably be the most work, but will also
gives some side-benefits[2]. (I believe it's a long-standing feature
request...)
[2]: Or is it the other way around and is this a side-benefit of
implementing introspection? Not sure :).

The problem in general with hiding pointers is that it'll break with a
moving garbage collector. You could work around this by 'pinning' the
objects, but pinning objects long term is a bad idea.

That's true of course. It's also true, though, that the current GC
*doesn't* move, so it'll work for now. I think as long as a
WeakReference class (or struct) that works with the current GC is
provided in the same library as the GC itself, it'll be fine.
In the general case, such a class *will* have to be tailored to the GC
or the other way around.
I believe the latter is what Java does: it has a WeakReference (IIRC)
class that the GC recognizes. A moving GC could still modify the pointer
contained in such a class, while not considering it for reachability of
the pointed-to object (and setting it to null when that object is
collected).
I think that should also be implementable in Phobos, actually...
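The behavior being described, a reference the collector knows about but does not count for reachability, nulled when the target dies, is exactly what Python's `weakref` module provides, so it makes a convenient executable illustration of the semantics a Phobos WeakReference would want:

```python
import weakref

class Node:
    pass

n = Node()
w = weakref.ref(n)        # tracked by the runtime, but does not
                          # keep the target reachable
assert w() is n           # dereference while the target is alive
del n                     # last strong reference gone; collected
assert w() is None        # the reference was nulled, not dangling
```

Note this relies on CPython's immediate reclamation; the contract (null on collect, never dangle) is the part that carries over.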

The problem in general with hiding pointers is that it'll break with a
moving garbage collector. You could work around this by 'pinning' the
objects, but pinning objects long term is a bad idea.

How about a GC allocation area that isn't searched for valid pointers
but is still collected?
Thomas

The problem in general with hiding pointers is that it'll break with a
moving garbage collector. You could work around this by 'pinning' the
objects, but pinning objects long term is a bad idea.

How about a GC allocation area that isn't searched for valid pointers
but is still collected?

Couldn't we use malloc/free + RAII for that? ...auto_ptr<>?

No. The trick is that this area is collected (and updated by a moving
GC), but isn't considered while looking for pointers into the "normal"
area.
Thomas

The problem in general with hiding pointers is that it'll break with a
moving garbage collector. You could work around this by 'pinning' the
objects, but pinning objects long term is a bad idea.

How about a GC allocation area that isn't searched for valid pointers
but is still collected?

Couldn't we use malloc/free + RAII for that? ...auto_ptr<>?

No. The trick is that this area is collected (and updated by a moving
GC), but isn't considered while looking for pointers into the "normal"
area.

I've been experimenting a bit and came up with the attached stuff.
I copy-pasted 'main' here to explain my point. SafePtr is a template
class that wraps a pointer. It has a custom (de)allocator using
malloc/free, so it's not scanned by the GC.
I create an object, Test, on the heap and have just one reference to it,
the one on the stack. I also set the pointer in the SafePtr instance to
that same object, but that occurrence of the pointer is not scanned so
doesn't count. When setting the reference to the object to null (and
forcing a full collect cycle) the GC _will_ collect the object
(even though we still had another reference to it in SafePtr).
As for the notification, the Test class keeps a list of pointers to
IOnDelete, which it iterates in its destructor. SafePtr implements
IOnDelete.OnDelete and resets its reference.
L.
void main()
{
    auto SafePtr!(Test) x = new SafePtr!(Test);
    Test test = new Test;

    // Let the safe-pointer point to the test object
    x.ptr = test;

    // Now remove the (last) reference to the test object
    test = null;

    // We must do some operation here to get rid of any references on the stack
    printf("%p\r", test); // overwrite stack

    // Let the GC do a full collect (will collect the test object)
    std.gc.fullCollect();

    // The safe-pointer will have been notified
    if (!x.ptr)
        printf("Target was deleted by GC\n");
}

But how can you tell if _ptr is valid or not? If you can't, it's pretty
useless, as you can never safely dereference it...

Check my other post. There, I've declared an interface IOnDelete with 1
method OnDelete. The pointer-wrapper implements the interface and sets its
pointer to null. The objects pointed-to would have to keep a list of
IOnDelete's, though.
L.
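The IOnDelete scheme can be sketched in Python. Since Python has no deterministic destructor we can rely on here, a hypothetical explicit `delete()` function stands in for D's `delete` / the destructor running the notifications:

```python
class SafePtr:
    """Holds a target and nulls itself when told the target died."""
    def __init__(self):
        self.ptr = None

    def on_delete(self, obj):          # the IOnDelete.OnDelete role
        if self.ptr is obj:
            self.ptr = None

class Test:
    def __init__(self):
        self._listeners = []

    def register(self, listener):
        self._listeners.append(listener)

def delete(obj):
    # stand-in for the destructor: notify every registered listener
    for listener in obj._listeners:
        listener.on_delete(obj)

x = SafePtr()
t = Test()
t.register(x)
x.ptr = t
delete(t)              # "destructor" runs; SafePtr is notified
assert x.ptr is None
```

The trade-off matches the post: the pointed-to objects must carry the listener list themselves.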

No. The trick is that this area is collected (and updated by a moving
GC), but isn't considered while looking for pointers into the "normal"
area.

I made some experimental changes to add a per-block "scan through" bit
to the DMD GC to indicate whether a memory block may contain pointers or
not. It works quite well, but granularity is per block, so if you had
something like this:
class C {
    C strong;
    C weak;
}
There is no way to tell the GC to simply ignore the weak reference--it's
all or nothing.
Sean

No. The trick is that this area is collected (and updated by a moving
GC), but isn't considered while looking for pointers into the "normal"
area.

I made some experimental changes to add a per-block "scan through" bit
to the DMD GC to indicate whether a memory block may contain pointers or
not. It works quite well, but granularity is per block, so if you had
something like this:
class C {
    C strong;
    C weak;
}
There is no way to tell the GC to simply ignore the weak reference--it's
all or nothing.
Sean

Cool, I'm interested in a patch! What are you using, by the way:
sizeof < 4? Or something smarter? Even a simple sizeof would suffice for
me; at least it'll prevent the GC scanning strings and such.
L.

No. The trick is that this area is collected (and updated by a moving
GC), but isn't considered while looking for pointers into the "normal"
area.

I made some experimental changes to add a per-block "scan through" bit
to the DMD GC to indicate whether a memory block may contain pointers
or not. It works quite well, but granularity is per block, so if you
had something like this:
class C {
    C strong;
    C weak;
}
There is no way to tell the GC to simply ignore the weak
reference--it's all or nothing.

Cool, I'm interested in a patch! What are you using, by the way:
sizeof < 4? Or something smarter? Even a simple sizeof would suffice for
me; at least it'll prevent the GC scanning strings and such.

Less than (void*).sizeof, which equates to the same thing. I'll try to
find the time to get a patch off to Walter.
Sean

The problem in general with hiding pointers is that it'll break with a
moving garbage collector. You could work around this by 'pinning' the
objects, but pinning objects long term is a bad idea.

How about a GC allocation area that isn't searched for valid pointers
but is still collected?
Thomas

I wish we had such a GC area.
This could make the GC more efficient as well as solving weak references
(at least weak refs on the heap). You would have two allocation
functions in the GC (ideally exposed for user code, no more malloc
please): one that allocates on a heap that is scanned for pointers, and
one that allocates on the unscanned heap. The unscanned heap is still
swept, of course. Then whenever the compiler sees something like
int[] a = new int[4096]; it will make sure that array data is allocated
on the unscanned heap, since its type implies it will not contain
pointers. Now the GC doesn't have to scan all 4096*4=16384 bytes of
memory contained by that array, which in some cases will massively speed
up the mark phase of a collection.
Currently there is some saving grace in the fact that when you use C
libraries like SDL, a lot of your data will end up in the C heap, which
accomplishes the same speed boost. But that still has D reliant on the
C heap, and said data isn't garbage collected unless you use wrappers or
something :( Ultimately this will bite us if we write libraries in D
that use large data structures that contain no pointers (e.g. graphics
libs), so for example a D port of SDL would kinda suck right now unless
it used malloc.
(end of sales speech for GC optimization/modification)
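The two-heap idea reduces to a per-block scan flag: the mark phase follows pointers only inside scannable blocks, but every block is still eligible for sweeping. A toy mark-and-sweep sketch in Python (all names hypothetical, not the DMD GC's actual structures):

```python
class Block:
    def __init__(self, scan, refs=()):
        self.scan = scan          # False for e.g. int[] payload data
        self.refs = list(refs)    # outgoing pointers, if any
        self.marked = False

def collect(roots, heap):
    # Mark: traverse from the roots, but never look *inside*
    # no-scan blocks; that is the whole optimization.
    stack = list(roots)
    while stack:
        b = stack.pop()
        if b.marked:
            continue
        b.marked = True
        if b.scan:
            stack.extend(b.refs)
    # Sweep: every block, scanned or not, is considered.
    live = [b for b in heap if b.marked]
    for b in live:
        b.marked = False          # reset for the next cycle
    return live

ints = Block(scan=False)          # like new int[4096]: swept, not scanned
obj = Block(scan=True, refs=[ints])
garbage = Block(scan=False)       # unreferenced no-scan block
assert garbage not in collect([obj], [ints, obj, garbage])
```

The unreferenced no-scan block is still reclaimed, while the 16 KB of integer data never costs mark-phase time.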

Yes, it is a hack, and an awful one. I think Frits and Thomas have it
right in suggesting support for a 'weak pointer' that the GC updates for
moves, but doesn't scan for roots.

Don't forget it should be nulled when the object is deleted[1].
Otherwise you have a pointer that's valid for the lifetime of the object
but dangles around afterwards.
[1] This may be trickier than it seems.
The only way I can think of that works for deletion by the user and by
any GC[2] would probably be to keep track of which weak pointers point
to an object in the object itself.
Java probably has it a bit easier in this regard, since it doesn't have
a 'delete' statement. If you only need to worry about the GC, it's
possible to make sure the GC deletes objects only after scanning
everything while keeping a list of weak pointers found. Though I'm not
sure how Java implementations do this, just guessing here.
[2] Maybe a specific GC can make this easier? I don't know, but I don't
think so. I think 'manual' deletions are probably the hardest to deal with.

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF (or,
better yet, const size_t weakxor = -1), then XORing again before and
after you need to operate on them?
Just thought I'd toss that out there.

Couldn't you also do weak pointers by XORing them with 0xFFFFFFFF (or,
better yet, const size_t weakxor = -1), then XORing again before and
after you need to operate on them?
Just thought I'd toss that out there.

What happens the day we're halfway up the virtual memory space?

Then we might keep other dead objects alive. That's life with a
conservative GC. Even mundane variables like ints in your code today
can cause objects to be kept around past their expiration date.

Ok, before anyone jumps on me, this has all been discussed in
http://www.digitalmars.com/d/archives/28456.html
Looks like the deletion problem is a real issue. Let me think about it a
bit.

Hmm. A "big" SS implementation (see previous thread here "Dissecting the
SS") c.f. like the 111 case, has more deletion related issues.
In a non-trivial application one can almost take it for granted that
there's instance pooling going on, too.
A robust implementation would guarantee that any way the observer gets
"removed" guarantees it also ceases to exist as an observer. This includes
- getting deleted
- simply not being referred to anymore
- getting moved to the unused instances pool

Looks like the deletion problem is a real issue. Let me think about it
a bit.

Hmm. A "big" SS implementation (see previous thread here "Dissecting the
SS") c.f. like the 111 case, has more deletion related issues.
In a non-trivial application one can almost take it for granted that
there's instance pooling going on, too.
A robust implementation would guarantee that any way the observer gets
"removed" guarantees it also ceases to exist as an observer. This includes
- getting deleted
- simply not being referred to anymore
- getting moved to the unused instances pool

I like what I see. But there is a problem: a signal is hereby identified
by its types only. In a real-world scenario many signals will have the
same types. Both a keyUp and a keyDown signal will probably want to send
a key code of the same type.
I see that as a minor problem though; it would work just like the
target/action mechanism of Cocoa, where the majority of UI controls have
a single "signal". In reality a button rarely needs more than "onClick",
a text field "onChange", etc.
But still, the exceptional events would need to be handled in some way.
I would suggest going down the object-delegate route just as Cocoa does.
A simple TextField could be:

interface TextFieldDelegate {
    bool shouldChange(TextField, char[]);
    void didChange(TextField);
}

And then the TextField class has a delegate getter/setter of this
interface type. If no delegate is set then all calls are simply
ignored; if set, then the TextField will call them when appropriate.
For simplicity you do not want more than a single delegate, but if a
control has, say, 10 delegate methods in its delegate interface, then
you would not want to implement dummies for them all.
May I therefore suggest interfaces with "optional" methods. Something
like this:

interface TextFieldDelegate {
    optional bool shouldChange(TextField, char[]);
    optional void didChange(TextField);
}

You would then need the ability to query the availability of an optional
method. I guess something like this (somewhere in the TextField class
using the delegate interface):

void doStuff() {
    if (_delegate && _delegate.implements(void didChange(TextField))) {
        _delegate.didChange(this);
    }
}

I guess unimplemented methods would have null pointers in the method
tables. So this test stage could be skipped for most cases, as the
compiler could simply skip calling the method if it gets a null pointer
when fetching the function pointer.
// Fredrik Olsson
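The "null VMT entry" test described above corresponds, in a dynamic language, to looking the method up and checking for absence. An illustrative Python sketch of the same optional-delegate pattern (names hypothetical, mirroring the D pseudocode):

```python
class TextField:
    def __init__(self):
        self.delegate = None

    def do_stuff(self):
        # the null-method-table test, spelled as a getattr check
        if self.delegate is not None:
            did_change = getattr(self.delegate, "did_change", None)
            if did_change is not None:
                did_change(self)

calls = []

class PartialDelegate:
    # implements only one of the "optional" methods
    def did_change(self, field):
        calls.append(field)

tf = TextField()
tf.do_stuff()                    # no delegate set: silently ignored
tf.delegate = PartialDelegate()
tf.do_stuff()                    # optional method found and called
assert calls == [tf]
```

An unimplemented optional method (say a hypothetical `should_change`) simply fails the lookup and is skipped, with no dummy implementations needed.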

I like what I see. But there is a problem: a signal is hereby identified
by its types only. In a real-world scenario many signals will have the
same types. Both a keyUp and a keyDown signal will probably want to send
a key code of the same type.

I like what I see. But there is a problem: a signal is hereby
identified by its types only. In a real-world scenario many signals
will have the same types. Both a keyUp and a keyDown signal will
probably want to send a key code of the same type.

I don't understand the problem.

Let's say you have a UI control that can emit two signals, Click and
DoubleClick, both sending the mouse button as argument:

enum MouseButton { LEFT = 0, RIGHT = 1, MIDDLE = 3 };

class MyControl {
    mixin Signal!(MouseButton);

    void myActualClick(MouseButton mb) {
        ...
        emit(mb);
    }

    void myActualDoubleClick(MouseButton mb) {
        ...
        emit(mb);
    }
}

For the signal targets it will be impossible to tell a click from a
double click. Unless you pass more arguments, but then you kind of
lose the simple idea of connecting to listen to a single event signal.
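Worth noting: the ambiguity goes away if each event gets its own signal *instance* rather than one mixin identified by its argument types. An illustrative Python sketch (hypothetical minimal `Signal` class):

```python
class Signal:
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self, *args):
        for slot in list(self._slots):
            slot(*args)

LEFT = 0

class MyControl:
    def __init__(self):
        self.clicked = Signal()         # two separate instances,
        self.double_clicked = Signal()  # same argument types

events = []
c = MyControl()
c.clicked.connect(lambda mb: events.append(("click", mb)))
c.double_clicked.connect(lambda mb: events.append(("double", mb)))
c.clicked.emit(LEFT)
c.double_clicked.emit(LEFT)
assert events == [("click", LEFT), ("double", LEFT)]
```

Targets connect to the specific event they care about, so no extra discriminating argument is needed.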
But I think this is more easily solved using "informal interfaces" that
can have optional methods, with object delegates listening for the
events instead of complex S&S.
enum MouseButton { LEFT = 0, RIGHT = 1, MIDDLE = 3 };

interface MyControlDelegate {
    optional void click(MyControl, MouseButton);
    optional void doubleClick(MyControl, MouseButton);
    optional bool shouldEnable(MyControl) = true;
}

class MyControl {
    MyControlDelegate delegate;
    ...

    void myActualClick(MouseButton mb) {
        ...
        delegate.click(this, mb);
    }

    void myActualDoubleClick(MouseButton mb) {
        ...
        delegate.doubleClick(this, mb);
    }

    void myActualTestForEnabled() {
        this.enabled = delegate.shouldEnable(this);
    }
}

class MyActualDelegate : MyControlDelegate {
    bool shouldEnable(MyControl) {
        return today() is TUESDAY;
    }
}

MyControl cnt = new MyControl();
cnt.delegate = new MyActualDelegate(); // Add "automagic" enabling.
The delegates will probably not be such specific objects, but rather
some larger business-logic objects.
So an "informal interface" is an interface of methods that could be
implemented, not an interface of methods that must be implemented. The
methods are virtual, so testing for implementation should be as easy as
comparing against null in the VMT.
// Fredrik Olsson

So an "informal interface" is a interface of methods that could be
implemented, not an interface of methods that must be implemented. The
methods are virtual, so testing for implementation should be as easy as
comparing for NULL in the VMT.

I like what I see. But there is a problem: a signal is hereby
identified by its types only. In a real-world scenario many signals
will have the same types. Both a keyUp and a keyDown signal will
probably want to send a key code of the same type.

I don't understand the problem.

Let's say you have a UI control that can emit two signals, Click and
DoubleClick, both sending the mouse button as argument:

enum MouseButton { LEFT = 0, RIGHT = 1, MIDDLE = 3 };

class MyControl {
    mixin Signal!(MouseButton);

    void myActualClick(MouseButton mb) {
        ...
        emit(mb);
    }

    void myActualDoubleClick(MouseButton mb) {
        ...
        emit(mb);
    }
}

For the signal targets it will be impossible to tell a click from a
double click. Unless you pass more arguments, but then you kind of
lose the simple idea of connecting to listen to a single event signal.

For the signal targets it will be impossible to tell a click from a
double click. Unless you pass more arguments, but then you kind of
lose the simple idea of connecting to listen to a single event signal.

One problem with this setup is that here every observee "knows" about
all its observers.
If, on top of this, we want to give the observers the ability to
unregister themselves (e.g. before getting destroyed), the observer has
to know about all the observees.
This essentially creates a network with pointers.
Having instead an external entity to handle SS reduces drastically the
number of needed connections.

Having instead an external entity to handle SS reduces drastically the
number of needed connections.

Having a global entity do this has some advantages, but some significant
disadvantages. The biggest is handling things in the presence of DLLs
and shared libraries.

Ehh, "does not compute: add information"!
I'm not pursuing a global entity for its own sake; I just can't see any
other way to reduce the number of interconnections.
And, especially, I wouldn't ever have expected to see it as a
disadvantage with DLLs. (Or SLs.)
Please enlighten.

Having instead an external entity to handle SS reduces drastically
the number of needed connections.

Having a global entity do this has some advantages, but some
significant disadvantages. The biggest is handling things in the
presence of DLLs and shared libraries.

Ehh, "does not compute: add information"!
I'm not pursuing a global entity for its own sake; I just can't see any
other way to reduce the number of interconnections.
And, especially, I wouldn't ever have expected to see it as a
disadvantage with DLLs. (Or SLs.)
Please enlighten.

An external entity would be a global, singleton, entity. Since DLLs (and
shared libraries) might be shared with other languages, they'll need
their own global entity. But if there are multiple D DLLs, then there
are multiple global entities. Who's in charge? Fixing this is not
impossible, it's just added complexity and risk of bugs, and I'm not
sure it will reduce interconnections anyway (because it'll need a fast
reverse lookup anyway).

All this Signals&Slots business (which I also admit to having zero
experience with) makes me think of the Actions concept I worked into my
hypothetical GUI library, based on a similar concept found (with
incomplete implementation, last I checked) in Java's Swing GUI. An
'Action' is an object representing a behavior (or, well, "action" :))
of the program, and has three faculties: storage of metadata, such as a
name, associated resources, etc; generation of Presenters, such as
toolbar buttons and menu items; and binding to Performers -- callbacks
that do the work of the Action. Some snips to (hopefully) make it
clearer:

# // bind this.open(ActionContext) to an appropriate Action
# Action["OpenFile"].append(&open);

Note that we need only refer to the Action instance by its name, and
note also the Context class which is sent as the only parameter. This
would encapsulate any additional data needed by the Performer, and can
also be subclassed for custom data. (In theory, anyhow.)

# // retrieve a menu item Presenter for an Action
# auto item = Action["SaveAs"].presenter(new MenuItem);

Surely also self-explanatory.
In addition to the .append() method for adding Performers, there is a
.prepend() -- included for completeness, but it could be useful -- a
.clear() which unbinds all Performers, and a .set() which is the same as
clearing and then appending. Actions (in my hypothetical GUI, mind you)
would be triggered by component objects, usually in response to an event
from the underlying system's GUI concept. (Messages in Windows, for
example.) All the library user's code need do is bind Performers to
Actions and generate appropriate components by asking Actions for their
Presenters. The program then, essentially, runs itself.
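The Action registry described above can be sketched in Python (all names hypothetical; `__class_getitem__` stands in for the `Action["OpenFile"]` lookup-by-name syntax):

```python
class Action:
    """Named registry of actions; Performers are callbacks taking
    a context argument."""
    _registry = {}

    def __init__(self, name):
        self.name = name
        self._performers = []

    def __class_getitem__(cls, name):
        # Action["OpenFile"] returns the one instance for that name
        return cls._registry.setdefault(name, cls(name))

    def append(self, performer):
        self._performers.append(performer)

    def clear(self):
        self._performers = []

    def trigger(self, context=None):
        # called by a component (e.g. a menu item Presenter)
        for performer in self._performers:
            performer(context)

opened = []
Action["OpenFile"].append(lambda ctx: opened.append(ctx))
Action["OpenFile"].trigger("ctx")
assert opened == ["ctx"]
```

Note that, as in the post, user code refers to the Action only by name; the component that eventually triggers it needs no compile-time knowledge of the Performers.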
How does this idea relate to Signals&Slots? I really want to understand
what exactly makes S&S so valuable. Is it essentially just a standard
for convenience? (Which wouldn't necessarily be a bad thing, but that's
all I can figure it to be.) Or does it inherently open up some new
capability I'm not aware of?
-- Chris Nicholson-Sauls

It's close, but check out the signature of Trolltech's connect method:

bool connect(
    const QObject * sender, const char * signal,
    const QObject * receiver, const char * method,
    Qt::ConnectionType type = Qt::AutoCompatConnection
);

The key difference is that the target method is specified by a *string*.
That's the main difference between what Qt has and the S&S
implementations people generally come up with for C++ (or D).
Every QObject subclass has a QMetaObject member.
http://doc.trolltech.com/4.1/qmetaobject.html
QMetaObject has interesting methods like

int indexOfMethod ( const char * method ) const
int indexOfProperty ( const char * name ) const
int methodCount () const
QMetaMethod method ( int index ) const

for looking up parts of the class by name and dynamic introspection.
That's the part that requires running their "moc" tool, the
Meta-Object Compiler. It scans through headers and picks out that sort
of information.
Ok, you're probably now saying, "yeah, but that's not statically
typesafe, and my implementation is!" You're right; sometimes you do
want static type safety. But sometimes you'd rather have loose dynamic
coupling and runtime type safety.
Here's where I get a little hand-wavy, but this dynamic binding is very
useful for writing GUIs (and generally any component system that needs
loose coupling). QtDesigner is Trolltech's GUI builder:
http://www.trolltech.com/products/qt/features/designer
It takes advantage of all the introspection capabilities offered by the
QMetaObject that lives in every component. You can point it to a gui
widget you wrote, and it immediately can show all that widget's
properties, signals, and signalable methods (slots), and you can add
that widget to your GUI and start hooking methods together.
Also it means that at run-time, you can safely try to connect to slots
that may or may not be there. If the target doesn't have that slot, no
harm done. And you don't need to know anything about the object at
compile time other than that it's a QObject. Loose coupling.
I think you can get similar results in pure C++ with a lot of templates
plus the requirement that users call some sort of method for every
function or property they want to have dynamically callable:
registerSlot(foo, "foo(int,int)")
I think the CEGUI library (www.cegui.org.uk) is now using something like
that approach. But obviously it requires a lot less maintenance if that
is handled for you automatically, because in C++ the place you call the
registerSlot() method always ends up being separated from the place
where you actually declare the foo method. Qt's "slots:" decorator
keyword basically lets you "register" the method at the place of
declaration by tagging it with one word.
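To make that concrete, here is a minimal sketch of the registerSlot() idea in
plain C++. The SlotTable/Receiver names and the "setValue(int)" signature
string are invented for illustration; the point is that slots are looked up
by string at run time, and connecting to a missing slot fails harmlessly.

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch of the registerSlot() approach described above. The slot
// signature string ("setValue(int)") is just a lookup key, as in Qt.
struct SlotTable {
    std::map<std::string, std::function<void(int)>> slots;

    void registerSlot(const std::string& sig, std::function<void(int)> fn) {
        slots[sig] = std::move(fn);
    }

    // Invoking a slot that was never registered is harmless: it just
    // reports failure instead of crashing.
    bool invoke(const std::string& sig, int arg) {
        auto it = slots.find(sig);
        if (it == slots.end()) return false;
        it->second(arg);
        return true;
    }
};

struct Receiver {
    int last = 0;
    SlotTable table;
    Receiver() {
        // In C++ this registration call is separate from the method
        // declaration -- exactly the maintenance burden noted above.
        table.registerSlot("setValue(int)", [this](int v) { last = v; });
    }
};
```

Note how the caller needs no compile-time knowledge of Receiver beyond the
fact that it carries a slot table — the loose coupling discussed above.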
All this is not to say that Qt S&S is the best way. Qt's design is
constrained ultimately by having to work with C++. Hence the separate
"moc" compiler. In the end Qt's QMetaObject provides a certain, fairly
limited amount of dynamic functionality. But as pointed out in the
other thread, something like Objective-C provides a much more general
messaging mechanism. From that you can easily build Qt-like S&S or a
dozen other loose coupling solutions.
I think railroading Qt's S&S into a language is the wrong approach.
What goes into the language should be a more general mechanism on top of
which schemes like dynamic S&S can be easily built.
--bb

I think railroading Qt's S&S into a language is the wrong approach.
What goes into the language should be a more general mechanism on top
of which schemes like dynamic S&S can be easily built.

I agree, and thanks for letting me know about the string matching.
That'll become possible in D later when it gets more introspection
abilities.

Some thoughts about introspection:
The most basic introspection would simply be, for each class and struct
Typeinfo, add a pointer to a string that's just a concatenation of names
and mangled types.
[name]\0[mangleof]\0[name]\0[mangleof]\0...[name]\0[mangleof]\0\0.
Since we have .alignof and .sizeof, this would allow all data members to
be identified; and would allow code to be developed that could do
serialization stuff. It would also be reasonably compact.
And an identical treatment for the functions in the vtable (just need to
maintain the same order of functions). Given a string XXX, you could
search for a function named "slotXXX" in the manglelist, and call the
corresponding entry in the vtable.
It wouldn't deal with static functions (where you need the address as
well as the name and type info)
I guess the challenging issue is to make sure that functions that aren't
referenced don't get type info stored? I imagine those dynamic
languages have trouble discarding unused functions at link time. I think
you'd need to tell the compiler "don't discard this function even if you
think it's not used, it's only referenced in a text string".
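To show the proposed [name]\0[mangleof]\0...\0\0 layout in action, here is a
small C++ sketch of walking such a table. The member names and "mangled"
type strings in memberTable are made up for illustration; only the scanning
logic matters.

```cpp
#include <cstring>

// Invented member table: (name, mangled type) pairs, each NUL-terminated,
// with an empty name marking the end -- the layout proposed above.
const char memberTable[] = "x\0i\0y\0i\0label\0Aya\0\0";

// Returns the mangled type string for `name`, or nullptr if absent.
const char* lookupMangle(const char* table, const char* name) {
    const char* p = table;
    while (*p) {                      // empty name terminates the table
        const char* n = p;
        p += std::strlen(p) + 1;      // skip past the name
        const char* mangle = p;
        p += std::strlen(p) + 1;      // skip past the mangled type
        if (std::strcmp(n, name) == 0) return mangle;
    }
    return nullptr;
}
```

With the offsets recoverable from .alignof/.sizeof, a lookup like this is
enough for serialization-style code to find a member by name.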

The most basic introspection would simply be, for each class and struct
Typeinfo, add a pointer to a string that's just a concatenation of names
and mangled types.
[name]\0[mangleof]\0[name]\0[mangleof]\0...[name]\0[mangleof]\0\0.
Since we have .alignof and .sizeof, this would allow all data members to
be identified; and would allow code to be developed that could do
serialization stuff. It would also be reasonably compact.
And an identical treatment for the functions in the vtable (just need to
maintain the same order of functions). Given a string XXX, you could
search for a function named "slotXXX" in the manglelist, and call the
corresponding entry in the vtable.

I think generating an array of TypeInfos would be better, because
they're easier to manipulate. TypeInfo instances are also singletons,
which potentially could make it smaller than the mangle strings.
Three pieces of info are needed for each member:
- name
- typeinfo
- offset
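As a rough C++ analogue of that record layout (using std::type_info as a
stand-in for D's TypeInfo singletons, and an invented Point struct), the
per-member array might look like this:

```cpp
#include <cstddef>
#include <string>
#include <typeinfo>

// Invented example type to introspect.
struct Point {
    int x;
    int y;
    double weight;
};

// One record per member: name, a type_info pointer (a singleton per
// type, so cheap to share), and the member's offset.
struct MemberInfo {
    const char* name;
    const std::type_info* type;
    std::size_t offset;
};

const MemberInfo pointMembers[] = {
    {"x",      &typeid(int),    offsetof(Point, x)},
    {"y",      &typeid(int),    offsetof(Point, y)},
    {"weight", &typeid(double), offsetof(Point, weight)},
};

// With name + offset, generic code (e.g. serialization) can reach a
// field without compile-time knowledge of the struct. Returns 0 if the
// named member is missing or not an int.
int readIntMember(const Point& p, const char* name) {
    for (const MemberInfo& m : pointMembers)
        if (std::string(m.name) == name && *m.type == typeid(int))
            return *reinterpret_cast<const int*>(
                reinterpret_cast<const char*>(&p) + m.offset);
    return 0;
}
```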

It wouldn't deal with static functions (where you need the address as
well as the name and type info)
I guess the challenging issue is to make sure that functions that aren't
referenced don't get type info stored? I imagine those dynamic
languages have trouble discarding unused functions at link time. I think
you'd need to tell the compiler "don't discard this function even if you
think it's not used, it's only referenced in a text string".

The bloat might be bad enough that the full introspection info would
only be generated for specified classes, say, ones that inherit from a
special interface class.

Also it means that at run-time, you can safely try to connect to slots
that may or may not be there. If the target doesn't have that slot, no
harm done. And you don't need to know anything about the object at
compile time other than it's a QObject. Loose coupling.

Loose coupling also means that you can easily make a GUI in, say, some
kind of XML file. In this file the interface is defined, along with its
connections. An object schema, if you like. But then you would need to
be able to pass classes around as values, like class references in
Object Pascal:

// pseudocode, after Object Pascal's "class of SomeClass" references
SomeClass createAndInit(class of SomeClass aClass) {
    SomeClass foo = new aClass();
    foo.doComplexStuff();
    return foo;
}

Heaven-sent for tools. Having a UI tool that manipulates an XML file is
way better than a UI tool that creates and modifies actual code.
Especially when the user comes along and modifies this code by hand
later. And having localization in retargetable text files is just
genius.
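In languages without class references, the same effect is usually emulated
with a string-keyed factory table — which is also exactly what an XML-driven
GUI loader needs. A sketch, with the Widget/Button names invented for
illustration:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical widget base class for the sketch.
struct Widget {
    virtual ~Widget() {}
    virtual std::string name() const = 0;
};

struct Button : Widget {
    std::string name() const override { return "Button"; }
};

// Registry mapping class names (as they would appear in the XML file)
// to factory functions -- a stand-in for Pascal's class references.
std::map<std::string, std::function<std::unique_ptr<Widget>()>> factories = {
    {"Button", [] { return std::unique_ptr<Widget>(new Button()); }},
};

// An XML loader would call this for each element it encounters.
std::unique_ptr<Widget> createFromName(const std::string& cls) {
    auto it = factories.find(cls);
    return it == factories.end() ? nullptr : it->second();
}
```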
Hmm... writing a new UI framework, is that a smart idea? There are
already dozens.
// Fredrik Olsson

I think railroading Qt's S&S into a language is the wrong approach.
What goes into the language should be a more general mechanism on top of
which schemes like dynamic S&S can be easily built.

Yeah, I was thinking about this the other day too, when talking about
"hooks". To be more concrete, I think it would be a great feature to
allow some of the hooking that modern debuggers do - e.g. from now on,
execute this bit of code at entry or exit of a given function. In the context
of S&S as discussed in this thread, such functionality could allow
already written functions to start being used as either signals or slots
without requiring source code modifications to their definition. Signals
would be created by some library that hooks the end of the emitting
function and the GC issue could be solved by hooking the destruction of an
object (searching based on its address).
Undoubtedly there would be many other cool options and a lot of
synergies with the unit-testing functionality for debugging.
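The exit-hook idea can be sketched in plain C++ by wrapping an existing
function so that listeners run after it returns, turning it into a signal
emitter without touching its definition. The function and hook names here
are invented; a real implementation would patch the call site or the
function itself rather than require a wrapper.

```cpp
#include <functional>
#include <vector>

// Listeners to run when the hooked function exits.
std::vector<std::function<void(int)>> exitHooks;

// Pre-existing function, written with no knowledge of signals.
int computeArea(int w, int h) {
    return w * h;
}

// Hooked wrapper: runs every exit hook with the result, so the original
// function effectively "emits a signal" on return.
int computeAreaHooked(int w, int h) {
    int result = computeArea(w, h);
    for (auto& hook : exitHooks) hook(result);
    return result;
}
```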