1.
Why are these two method headers identical?
const Foo some_function() {
and
Foo some_function() const {
?
IMO that isn't consistent. IMO only the last is valid.
With this example, it is somewhat understandable.

Some would argue only the *first* should be valid:
----
@safe const @property nothrow
Foo some_function();
----
Basically, yeah, you have the option of putting qualifiers before
or after.

I think the reason is the same as above. If the return value is const
you need to use parentheses. I think that the syntax would conflict with
the const-method syntax otherwise.
The reason const is allowed after the parameter list is that it can be
a bit confusing to have two consts next to each other:
class Foo
{
    const const(Foo) foo() {}
}
--
/Jacob Carlborg
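[Editor's note: to make the ambiguity concrete, here is a small D sketch contrasting the two placements; it is illustrative only, not from the thread.]

```d
class Foo
{
    // 'const' on the left qualifies the *method*, exactly like
    // 'Foo f1() const' would -- it does NOT make the return type const:
    const Foo f1() { return null; }

    // To return a const Foo, the parentheses are required:
    const(Foo) f2() { return null; }

    // Both a const method and a const return type:
    const(Foo) f3() const { return null; }
}
```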

const(Foo) but ref Foo. This is an inconsistency, if you ask me.
So why is there such a huge difference between C++ and D?
Why isn't it simply const Foo instead of const(Foo)?


Forgot to say: the reason the ref(Foo) syntax isn't used is that there
can be no conflict, since methods cannot be declared as ref.
--
/Jacob Carlborg

Good point. I had completely forgotten that that is possible.
But it seems that no one is interested in my second proposal. :)

Similar things have been discussed a lot. And some people think that
similar non-null annotations are a good idea.
Bye,
bearophile

Then: what is the problem with introducing such a shorthand?
Solving the problem with a struct is, IMO, a really _bad_ idea,
because then you have to pack the object into the struct (or create
the object with a function that returns this struct, just as "scoped"
currently does) and only _then_ can you pass it to the function/method.
That is the same effort as solving it with preconditions.
Completely unnecessary work, which such a shorthand could avoid.

I believe at least part of the explanation is that Walter wants NotNull to
be implemented in a library. That's part of the reason for introducing
@disable this().
Now, one could certainly argue that int? should be translated by the compiler
into NotNull!int, with the implementation living in druntime.
--
Simen

bearophile:
Yes, I know, but I see no answers there.
Simen Kjaeraas:
That's exactly what I mean. Foo? or Foo! would be converted into
NotNull!Foo.
I wrote a quick and dirty solution: http://dpaste.dzfl.pl/400079cb
which converts this code:
[code]
import std.stdio;

class Foo {
public:
    void echo() const {
        writeln("My Name is Foo.");
    }
}

void foo(Foo! f) {
    f.echo();
}

void bar(Foo! f) {
    f.echo();
}

void main() {
    Foo f = new Foo();
    Foo f2;
    foo(f);
    bar(f2);
    foo(new Foo());
    bar(null);
}
[/code]
into: http://dpaste.dzfl.pl/d9375eeb
It is not perfect, but it is a first step. It would be desirable if the
dmd compiler could do something like this on its own.
What are the chances that something like this happens?

I believe at least part of the explanation is that Walter wants NotNull
to be implemented in a library. That's part of the reason for introducing
@disable this().

Yes, I remember part of the discussions. And I agree that
generally it's better to put meta-features in a language that
allow library code to implement the desired features.
That's why I recently said in the main D newsgroup that
built-in vector ops may be better replaced by library code
(what's missing is some built-in trick to avoid the creation of
intermediate arrays in complex expressions).
But implementing good non-null types in library code is hard
(rather harder than implementing vector ops in library code on
library-defined vectors). I think @disable isn't enough to cover
what Spec# shows good non-null types are meant to be.
Bye,
bearophile

Nope.
But, that's just a simple assertion. If we'd had real non-nullable types,
we could remove the check completely, because you'd know it held a valid
object.
Now, a library solution has certain limitations a built-in solution would
not - for instance, new X would return non-nullable.
--
Simen
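[Editor's note: for readers following along, here is a minimal sketch of what such a library NotNull could look like. This is illustrative only, using @disable this() and alias this; it is not Simen's actual implementation.]

```d
import std.exception : enforce;

struct NotNull(T) if (is(T == class))
{
    private T payload;

    @disable this();  // no default construction, so no null default

    this(T value)
    {
        enforce(value !is null, "NotNull constructed from null");
        payload = value;
    }

    // Forward all member access to the wrapped reference.
    alias payload this;
}

class Foo { void echo() { } }

// Callers must prove non-nullness at the boundary; inside,
// no null check is ever needed.
void useIt(NotNull!Foo f) { f.echo(); }

void main()
{
    auto f = NotNull!Foo(new Foo);
    useIt(f);
}
```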

Really? I didn't know that. Surely its non-null design looks quite
refined and thought-out. But maybe, as you say, it's still not enough.
Do you have a link showing where it is unsound?
Thank you,
bye,
bearophile


Google for "freedom before commitment". The example is in the paper.
Apparently the unsoundness was recently fixed though -- the faulty
assignment in question is now correctly rejected by the online Spec#
compiler.

const(Foo) but ref Foo. This is an inconsistency, if you ask me.
So why is there such a huge difference between C++ and D?
Why isn't it simply const Foo instead of const(Foo)?

Sadly, the reason is consistency. const is an attribute just like pure or
nothrow, and you can do both
pure Foo func() {}
and
Foo func() pure {}
as well as
pure { Foo func() {} }
and
pure : Foo func() {}
If
const Foo func() {}
made Foo const rather than func, it would be inconsistent with the other
attributes, and if const on func were only legal on the right (as in C++),
then it would be inconsistent with the others. Many of us think that
const Foo func() {}
should just become illegal, inconsistency or not, because of all of this
confusion, but Walter doesn't buy into that.
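[Editor's note: for reference, the four placements mentioned above look like this in D; a compilable sketch, all declaring pure functions.]

```d
pure int f1() { return 1; }   // attribute on the left
int f2() pure { return 2; }   // attribute on the right

pure                          // attribute block
{
    int f3() { return 3; }
}

pure:                         // attribute label: applies to what follows
int f4() { return 4; }
```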

2.
What about a shorthand for debug assertions?
E.g. to avoid null references (yes, if you're a Windows user,
you hate them):

Walter's stand on this is that the OS gives you null-dereferencing detection -
i.e. segfaults and access violations. He's not going to add extra syntax for
it.
- Jonathan M Davis


That is a huge mistake. My OS prints only a terse "Access
violation", and so I have to search for my little null reference by
myself. I _must_ debug for a ridiculous null reference. That costs
time. I don't even have a filename or a line number, only the
message that something went wrong; everything else I have to find
by myself. So much time for such small mistakes.
Why would Walter want a language which doesn't support good error
handling?
D has no support for detecting unused variables or unused imports, and
not even for null references. Why should anyone use D instead of any
other language?
If I have a big project and I use many objects and one of them
becomes null, what then? Should the user really step through
thousands of lines of code because D prints only "Access Violation"
without any further information? Or should I use the same
principle as Java and write pre- and postconditions every time
again? I don't see any reason why anybody should build a big
project with D rather than with another language if the error
handling and null-safety support remain as they are. Sorry.
To reject even such a handy shorthand is incomprehensible to me.


Because a debugger will show you exactly where the problem is. So, why add
checking that the OS already does for you? That's his logic. There are plenty
of cases where that really isn't enough (e.g. you get a segfault on a server
application without core dumps turned on when it's been running for 2 weeks),
but it is for all the types of programs that Walter works on, so that's the
way he thinks.


Honestly, I think that you're blowing null pointer dereferences way out of
proportion. In my experience, they're rare, and I have to wonder what you're
doing if you're seeing them that often.
That being said, what I think we're likely to end up with is a signal handler
in druntime which prints out a stacktrace when a segfault occurs (and does
whatever the Windows equivalent would be on Windows). That way, you don't have
to have null checks everywhere, but you still get the debug information that
you need. But no one has done that yet.
- Jonathan M Davis

I also get null references (and every time I hate D a bit more),
but mostly it's my classmates and other friends to whom I've shown D.
And most of them have already gone back to C++ or C#. And I can
understand them.
If you want D to be taken seriously at some point (and that is
reached only by winning more people over to D), then perhaps you
should do something for usability.
Such small, handy shorthands are easy to implement and even easier
to understand than a stack trace.


On Windows, an access violation (from a null pointer or other
causes) is an exception that is thrown and can even be caught.
On Linux, a segfault is a signal that just kills the program,
it doesn't work like a regular exception.
The Windows exceptions can do pretty stack traces, including
on null derefs, if you have some debugging library installed...
and I've done it before, but I don't remember the link right now.
It's something from Microsoft.


Linux also dumps the state into a core file. So I have to wonder
what the problem is; you would have all the information at hand.

Only if core dumps are enabled... but I think someone did
a Linux stack trace signal handler somewhere for D, but
it never got merged into druntime. (What it'd do is print
out some info before exiting, instead of just saying
"segmentation fault". Still not an exception, but a little
more immediately helpful).
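[Editor's note: such a handler is straightforward to sketch. The following is an illustrative Posix-only D snippet, not what druntime actually ships; the function names here are invented.]

```d
// Report SIGSEGV before exiting, instead of the bare
// "Segmentation fault" message.
import core.stdc.signal : signal, SIGSEGV;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : _Exit;

extern (C) void onSegfault(int sig) nothrow @nogc @system
{
    // Only async-signal-safe calls belong in a signal handler.
    fprintf(stderr, "Caught signal %d: probable null dereference\n", sig);
    _Exit(1);
}

void installSegfaultReporter()
{
    signal(SIGSEGV, &onSegfault);
}
```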


And that is the explicit way, with pre- and postconditions as in Java,
which I want to avoid.
I see that most of you prefer to write try/catch or use the Java
principle of explicit pre- and postconditions.
Time will tell whether D gains enough users this way to be taken
seriously.
But this is what Java and C# already have.

Me again.
What would be the problem if I wrote something for that shorthand and
dmd only had to invoke it before compilation begins?
My little test program works with VisualD.
I go to the build events, and there I write into "Pre-Build Command":
[quote]not_null main.d #t[/quote]
and into "Post-Build Command":
[quote]del main.d
rename clone_main.d main.d[/quote]
Of course I have to find a way to generate this for all included
files rather than doing it manually, as in
not_null a.d #t
not_null b.d #t
But if dmd would do this, e.g. with a compiler flag like
"-notnull", it would lighten my workload a lot.
Here is my current example code: http://dpaste.dzfl.pl/8d41468a
It replaces Class? obj statements and generates two files. The
normal file is changed to valid D code which can compile. The
"original" code with the Class? obj statements is copied into
clone_filename.d.
I know it isn't perfect, but maybe it is a beginning.

Me again.
What would be the problem if I wrote something for that shorthand and
dmd only had to invoke it before compilation begins?

Doing stuff like that makes your code completely unportable. It's _bad_
practice. Don't go and try to redesign the language if you want to be playing
nice with other people. If you can do something completely within the
language, then that's different (other people may still hate what you're up to,
but at least they can compile it), but don't use a preprocessor unless you
really don't care about anyone else ever using your code but you, and even
then, I'd argue against it, because if you get into the habit of doing that,
you're screwed when you actually have to interact with other D programmers.
- Jonathan M Davis

Therefore I hope that it will be officially added to D.
Otherwise, of course, I'll use it only for projects between me and my
fellow students.
I don't know what is wrong with this shorthand. So why not give
it a try?
I'm absolutely sure that Walter will _never_ add real
non-nullable references.
All that will maybe come are further structs in std.algorithm
which bloat your code just as assertions do.


I don't even know when the last time I dereferenced a null pointer or null
reference was. It almost never happens to me. I really think that if you're
seeing very many null dereferences, you're doing something fundamentally wrong
with your code. At minimum, it indicates that you're not unit testing enough,
since if you do that right, it'll catch the logic errors which give you null
pointers/references very quickly.

I'm absolutely sure that Walter will _never_ add real
non-nullable references.
All that will maybe come are further structs in std.algorithm
which bloat your code just as assertions do.

We will get a NotNull struct at some point (probably in std.typecons). It'll
statically prevent assignments from null where it can and use assertions where
it can't. Adding something to the language doesn't buy you much more than that
anyway. At this point, any new language feature must meet a very high bar, and
if we can do it in the library instead, we will. D is incredibly powerful and
is already plenty complex, so we'll take advantage of that power where we can
rather than trying to change the language further. D arguably has too many
features as it is.
And as big a deal as you seem to think that this is, the _only_ C-based
language that I'm aware of which has non-nullable references as part of the
language is C#. So, while they may have their uses, it's actually very
uncommon to have them, and since we can add a library type to do it, we can fix
the problem without altering the language.
- Jonathan M Davis

Adding something to the language doesn't buy you much more than
that anyway.

In the case of not-nullability, this isn't true. Integrating
not-null in the type system allows the language to do things you
can't do with NotNull, like:
// x is a nullable class reference
if (x == null) {
    ...
} else {
    // here the type system sees x as not null.
}
There are some other similar things you can't do with NotNull. In
my enhancement request about not-nullability there are references
to articles that explain the situation.

D arguably has too many features as it is.

I don't agree; the number of features is not important. What's
important is how cleanly and intelligently they are designed, how
cleanly they interact with the other features, etc.

And as big a deal as you seem to think that this is, the _only_
C-based language that I'm aware of which has non-nullable
references as part of the language is C#.

This is not true. Scala, Rust, some new Java-derived languages,
and other modern languages have non-nullable references. In
practice I think most or all new languages coming out now have
this feature. In my opinion, in a few years programmers will expect
to have it in all languages that are not too old and that
support some kind of nullable reference.
Bye,
bearophile

Adding something to the language doesn't buy you much more than
that anyway.

In the case of not-nullability, this isn't true. Integrating
not-null in the type system allows the language to do things you
can't do with NotNull, like:
// x is a nullable class reference
if (x == null) {
    ...
} else {
    // here the type system sees x as not null.
}

??? What does it matter if the type system knows whether a pointer is null
unless it's trying to warn you about dereferencing null? It's not checking for
it. If we had null checks built in, that would buy you something, but we
don't, and we're not going to, if nothing else because Walter is completely
against it.

There are some other similar things you can't do with NotNull. In
my enhancement request about not-nullability there are references
to articles that explain the situation.

D arguably has too many features as it is.

I don't agree; the number of features is not important. What's
important is how cleanly and intelligently they are designed, how
cleanly they interact with the other features, etc.

There's always a cost to having more features. The more there are, the more
that you have to know, and the more that it takes to learn the language.
Having the features be well-designed definitely helps, and for the most part,
I'm fine with the number of features that D has, but there probably are a few
that ideally would be dropped but can't be at this stage (as was discussed not
all that long ago in a big thread on what language features weren't useful),
and adding more does come at a cost. A particular feature may be worth the
cost that it brings, but the more features that you have, the more value each
additional feature must bring to the table.

And as big a deal as you seem to think that this is, the _only_
C-based language that I'm aware of which has non-nullable
references as part of the language is C#.

This is not true.

Actually, it is. I said "that I'm aware of." I didn't say that there weren't
others, just that I didn't know of any others. But out of the mainstream C-
based languages, it's definitely rare, much as it may be becoming less rare as
new languages come along.

Scala, Rust, some new Java-derived languages, and other modern
languages have non-nullable references. In practice I think most or
all new languages coming out now have this feature. In my opinion,
in a few years programmers will expect to have it in all languages
that are not too old and that support some kind of nullable
reference.

It's not necessarily a bad feature, but I do think that it's highly overrated,
and regardless, there's no way that it's being added to D at this point in its
life cycle. Maybe they'll be added in D3, but I wouldn't expect to see them
before then at the earliest. The push right now is to use the language that we
have to get things done rather than trying to constantly add features and
tweak existing ones. There are probably some features that we wouldn't even
have now if we had taken that approach earlier (e.g. base two literals).
- Jonathan M Davis

??? What does it matter if the type system knows whether a pointer
is null unless it's trying to warn you about dereferencing null?

In the else branch the state of the type of x is not-null, so, for
example, in the else branch you are allowed to call a function
that only accepts non-null references with the x variable.
Not-nulls integrated in the type system also make sure you have
well-initialized variables in class constructors in the presence of
inheritance and other complexities.
It also statically requires you to test for null before
dereferencing a nullable class reference, and so on.
Those are the fruits that a good not-null implementation gives
you. You can't do all this with the NotNull toy. NotNull solves
only the easy part of the whole problem, and it's a small part.

Bye,
bearophile
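[Editor's note: the difference can be sketched in D. With a library wrapper, the programmer must repeat the proof the branch already established; the NotNull and assumeNotNull names below are hypothetical, for illustration only.]

```d
// Hypothetical wrapper and helper, for illustration only.
struct NotNull(T) { T payload; alias payload this; }

NotNull!T assumeNotNull(T)(T x)
{
    assert(x !is null);   // the runtime re-check a flow-sensitive
    return NotNull!T(x);  // type system would make unnecessary
}

class Foo { }

void takesNotNull(NotNull!Foo f) { }

void client(Foo x)  // x is a nullable reference
{
    if (x !is null)
    {
        // With flow-sensitive non-null types, `takesNotNull(x)` would
        // compile directly here; with a library type we must convert:
        takesNotNull(assumeNotNull(x));
    }
}
```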

I found this an interesting read. The implementation likely doesn't interfere
much with other language features (in a sense like @trusted does with
templated functions that take potentially unsafe code as parameters).
I especially like how the compiler _statically_ knows that x is not null in
the else case. The runtime cost is moved from every call on x to a single
if statement! It could require a little logic to parse complex conditions
like "if (a == 1 && !x != null)", though ;)
--
Marco

Doing stuff like that makes your code completely unportable. It's _bad_
practice. Don't go and try to redesign the language if you want to be
playing nice with other people. If you can do something completely within
the language, then that's different (other people may still hate what
you're up to, but at least they can compile it), but don't use a
preprocessor unless you really don't care about anyone else ever using
your code but you, and even then, I'd argue against it, because if you
get into the habit of doing that, you're screwed when you actually have
to interact with other D programmers.
- Jonathan M Davis

I can give them the "clone_*.d" files, which contains valid D
code. No problem.

And that is the explicit way, with pre- and postconditions as in Java,
which I want to avoid.
I see that most of you prefer to write try/catch or use the Java
principle of explicit pre- and postconditions.
Time will tell whether D gains enough users this way to be taken seriously.
But this is what Java and C# already have.

This is a NotNull I just implemented. It is designed to create a strict
division between things that can be null, and those that cannot. The idea
being that the programmer should be aware of it when he needs to convert
between them, and whole call graphs can more easily be made free of
null checks.
--
Simen


Foo f = new Foo();
some_function(NotNull!Foo(f)); // <- explicit conversion; and because it's
                               //    a struct, it's better to pass it by ref.
// ---
Foo f = new Foo();
some_function(f);
// ...
void some_function(Foo f) in {
    assert(f !is null); // <- explicit. Unnecessary writing effort.
} body {
A struct as a solution to avoid null references is a bad solution. It is
a nice toy, but as a solution it is crap. If I have to pack my object
into a struct which ensures that it is not null, what's the difference
from using only structs and avoiding classes? Why should I first
initialize my object and then put it into a struct if I could just use
only structs?
That isn't comprehensible to me.

This is a NotNull I just implemented. It is designed to create a strict
division between things that can be null, and those that cannot. The
idea
being that the programmer should be aware of it when he needs to convert
between them, and whole call graphs can more easily be made free of
null checks.

Foo f = new Foo();
some_function(NotNull!Foo(f)); <-explicit conversion and because it's a
struct it's better to deliver it by ref.

The conversion from a pointer to a struct containing a pointer should be
without cost when compiling with optimizations on. The effect is exactly
the same as with a pointer, which I hope you don't habitually pass by
reference.

A struct as solution to avoid not null references is a bad solution. It
is a nice play tool but as solution it is crap. To pack my object into a
struct with ensures that it is not null, what's the difference if i use
only structs and avoid classes? Why should i initialize first my object
and put it then into a struct if i can even use only structs?
That isn't comprehensible to me.

Huh? I believe you have misunderstood something here. The struct is a form
of smart pointer. It behaves like a pointer does, and lets you have
polymorphism, inheritance and all that stuff that comes with classes.
Granted, I have found a few issues with the version I posted (mostly to do
with subclassing). Most have been fixed in this version, but some are
unfixable until issue 1528 has been resolved.
--
Simen


Many of us think that
const Foo func() {}
should just become illegal, inconsistency or not, because of all of this
confusion, but Walter doesn't buy into that.

Like monarch_dodra said, this is also a style favored by some:
pure @property const
int foo() {
    //...
}
Having to write
pure @property
int foo() const {
    //...
}
at the very least feels weird.
int foo()
const pure @property {
}
could work, I guess. But it feels backward.

Personally, I _always_ put the attributes on the right-hand side save for the
ones which exist in C++ and Java (which is pretty much just the access
specifiers, static, override, and final), and I think that it's ugly and
confusing to have them on the left, but that's a matter of personal
preference. const on the other hand constantly causes issues because - unlike
the others - it can be applied to the return type as well. And the question
comes up often enough that I think that it's a real problem and one that
merits making putting it on the left illegal. At minimum, making it illegal on
the left without other attributes between it and the return type should be
illegal IMHO (though that could cause even more confusion depending on the
error message, since then it might be confusing why you could put it on the
left but only in some positions). That change isn't going to happen at this
point, but I think that we'd be better off if it were.
- Jonathan M Davis