I have eliminated a lot of NPEs by using semi-immutable objects with member variables marked 'final', and asserting that they're non-null in the c'tor. It's not perfect but it tends to fail faster and make the root cause obvious.
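A minimal sketch of that pattern, using a hypothetical `PlayerScore` class: final fields plus `Objects.requireNonNull` in the constructor, so a bad null fails immediately with a message naming the culprit instead of surfacing as an NPE much later.

```java
import java.util.Objects;

// Hypothetical "semi-immutable" value object: references are validated
// once, in the constructor, so the failure happens at the root cause.
public final class PlayerScore {
    private final String playerName;   // never null after construction
    private final int score;

    public PlayerScore(String playerName, int score) {
        // requireNonNull throws immediately, with the field name in the message
        this.playerName = Objects.requireNonNull(playerName, "playerName");
        this.score = score;
    }

    public String playerName() { return playerName; }
    public int score() { return score; }
}
```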

I'll have to give one of the design-by-contract annotation processors a go at some point - I worry you could waste a lot of time adding constraints rather than getting things done, but you can abuse any tool if you really try.

So you tweak a bit of code to add a constraint... then some other code without DBC which it relies upon violates that constraint. So you go through and attempt to put constraints in there, too... and the whole thing snowballs and you end up spreading contracts throughout your code until you get to code which you have no control over and are forced to either suppress DBC or write constraining wrappers around it all.

Yes, contracts will infect a great deal of the rest of your code in much the same way that static types do. If you end up having to work around the contracts you stick in your code, then perhaps you need to review why you're violating the contracts in the first place, or whether you were really providing that contract at all in every case.

As for nulls and asserts and sentinels and parameter checking and all that ... those are all runtime checks. The purpose of @NotNull and its moral equivalents is to do it all at compile time.
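To make the distinction concrete, here is a rough sketch of how a @NotNull-style annotation is declared and used. The annotation below is defined locally just to keep the example self-contained; real projects would pull in javax.annotation.Nonnull (JSR-305), JetBrains' @NotNull, or the Checker Framework's @NonNull, and rely on a static analyser (IDE or annotation processor) to flag violations before the code ever runs.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NotNullSketch {
    // Stand-in for a real nullability annotation; carries no behaviour
    // itself - the enforcement comes from an external checker.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.PARAMETER})
    @interface NotNull {}

    // A static checker would reject any call site that might pass null here.
    static int length(@NotNull String s) {
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello"));   // fine
        // length(null);  // flagged at compile time by the analyser, not at runtime
    }
}
```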

NPEs are very annoying to track down, but I agree... they are not a problem except for beginners who are new to the language, or experts who completely forgot to initialize something. Seriously, it is because Java is "Object" based that everything in it is capable of causing these exceptions. But when I moved to the language, I got used to it, just like I got used to not having access to pointers, or to having to name my class exactly the same as my file.

If you are creating a bunch of NPEs anyway, you really don't have any business coding. No program works well when it creates a bunch of references you aren't going to use. Such bad coding practice of not initializing your Objects/variables is why there are so many segmentation faults and heap errors in C/C++. How are programmers supposed to learn anything if we aren't punished for writing bad code?

Funnily, I am in favor of an annotation that gives this functionality (like @Nonnull). It is a very good idea. However, I don't think it should be the default, because it produces lazy code where programmers don't have to code defensively. Leave that notation for people who want to make their code behave in a certain way.

If there is one thing I like Java for, it is that it makes good programmers. The language has very high standards when it comes to design and how it was written. The fact that it actively prevents us from causing segmentation faults is one of the best fail-safe designs I've ever witnessed in a language to date. NPEs are not a problem at all. They get beginners writing better code faster. That is something I can always get behind.

If you are creating a bunch of NPEs anyway, you really don't have any business coding.

I wouldn't normally respond, but it goes on and on with this sentiment. This sort of argument has been advanced against things as basic as memory protection, and some advocates of "duck typed" languages even now put it forth against static typing: to prevent errors, just Be A Better Programmer. It's facile and bankrupt. No compiler technology has ever advanced due to moral condemnation of programmers who just aren't manly and robust enough to deal with Things As They Are.

Programmers have to take a little bit of responsibility for the code they write. We can't just bandage and cover up bad coding design because "we are not manly enough". No computer language is perfect, and no computer language is ever going to be perfect. We are the ones responsible for creating the next generation of programmers. If we make it all "rainbows and butterflies and roses", then how are we going to get robust, workable code?

People learn how to code better through failure. That is a fact of life. I would rather throw someone an NPE and have them learn to be better, than bandage the error and have them post on a forum "Why isn't this working?" or "Can you help me find the error in this code?". Bandages make life harder for everyone because they don't produce stack traces. It is frustrating for you, me, and the entire community when code is made to hide its errors.

We don't fix anything this way; all we do is move compiler errors to logic errors. People have to learn to program, and compiler errors are the best way to learn. The fact that only that one statement was singled out means there is a lot of passion in this subject and your programming skills are way above novice. So, please, take a step back and try to remember how it was when you were a beginner. The best way to learn is from mistakes... always.

I simply don't agree that you throw out any notion of contracts that enforce against a whole class of mistakes, statically, before the program is ever run, simply because of some notion of "learning through mistakes". And neither does anyone writing in anything above assembly.

So, I see that the "human element" is completely "null" from coding practice then. If many computer science people are thinking this way, then we are just setting up another language for failure. Of course, you want to prevent errors. All good coders want to prevent errors.

However, you can't code against errors if errors never show up.

It is like putting yourself in a bubble so you would not get disease. There are fail-safes for code... and then there is just overdoing it. NPEs are something very trivial that any good programmer can easily prevent. Seriously, why should we ensure that programmers will not get errors for failing to initialize Objects? That is bad coding practice in "all languages". It isn't just for assembly.

Threads like these, of which we have a bunch, always end up stating the obvious and feeding endless derailments over semantics that nobody really cares about but everybody insists be phrased correctly from their own point of view. In the end everybody gets tired of the discussion, and nothing of value was added.

I was wrong about everybody getting tired, it seems.


I simply don't agree that you throw out any notion of contracts that enforce against a whole class of mistakes, statically, before the program is ever run, simply because of some notion of "learning through mistakes". And neither does anyone writing in anything above assembly.

Well, the problem is that null reference problems are a subset of a larger class of errors, not the other way around. Not using nulls to your advantage just masks one of the most telltale symptoms of logic errors when working with Objects. Put your hand on a hot stove and you feel pain. Take away the burning sensation and you have a worse problem. You're just as likely to come close to accidentally burning yourself either way, but now, without the sensation of heat or pain, you're going to hurt yourself sooner and more often.

There is a need to strike a balance between compile-time and run-time checks. Java does a better job than most (all?) other languages in doing that, with a few exceptions. ("Optional" methods in interfaces and not providing read-only interfaces to collections, for example.) Using null in a program is not a problem though. It's very useful. Java's interfaces and single-inheritance model are very good for defining contracts to humans. Static typing helps the compiler and other programmers understand your code better. There's a huge benefit to both the human and the computer. Knowing how an interface works isn't the compiler's job though. Enforcing just one part of that contract, such as nullability, wouldn't provide any marginal benefit to the compiler and would not eliminate the necessity for the human to refer to a class's documentation.

Edit: Speaking of assembly and null pointers. I've programmed in assembly on a system that used addresses 0 through 127 for debugging purposes. There is nothing an assembler could do to prevent you from writing to those addresses, but it would have been nice if the hardware had treated writes to addresses 0 and 1 as invalid. Not only did it fail to fail fast, it failed silently; and when you tried to debug code suffering from a "null pointer" write, it would screw up the behavior of the debugger and act differently than if you ran it without a debugger.
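The point above about Java lacking read-only collection interfaces can be illustrated with the standard library: `Collections.unmodifiableList` returns a view that can only enforce immutability at run time, by throwing, because there is no compile-time read-only `List` type.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ReadOnlyViewDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("Gosling");
        List<String> view = Collections.unmodifiableList(names);

        System.out.println(view.get(0)); // reading through the view is fine
        try {
            view.add("Cas");             // compiles, but fails at run time
        } catch (UnsupportedOperationException e) {
            System.out.println("caught UnsupportedOperationException");
        }
    }
}
```

The compiler is perfectly happy with `view.add(...)`: the contract "this list is read-only" lives only in the documentation and the runtime check, which is exactly the compile-time vs run-time trade-off being described.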

You may attempt to make a counterargument that expected exceptions (oxymoronic) need to be handled correctly, but then you have to assume that any method call can fail at any time, for any reason, through the long list of unchecked exceptions that can already be thrown anyway (e.g. NPE, CCE, IOOBE, OOME, etc.).

The final nail in the coffin is that C# does without them, and they have mysteriously suffered a grand total of 0 problems as a result.

Cas

I'm kinda ambivalent about checked exceptions. Yes, they add a bit of boilerplate. Is that the end of the world? No, but it is pretty ugly. Is there a good side? Yup - sometimes being forced to deal with an exception is a reminder about something you've forgotten. But that said, most of the time it's just a PITA.

I think that checked exceptions are clever - it's smart getting the compiler to tell you what might be thrown from library code - but in practice it's usually just extra work. When you want to know that you've handled all expected kinds of exceptions, they're great. The other 95% of the time, not so much.

The most important fact about Kotlin is that they're trying to keep it simple. It has everything you'd expect from a modern language, plus a few clever extras, but it's still something a Java dev can pick up easily. It's for the exact opposite reason that I think Scala won't ever become mainstream.

Yes! Exactly! I had a look at Scala and Clojure and I really like them both (esp. Clojure as I love Lisp) but Kotlin's the only one I can see working for Java programmers as a whole.

Threads like these, of which we have a bunch, always end up stating the obvious and feeding endless derailments over semantics that nobody really cares about but everybody insists be phrased correctly from their own point of view. In the end everybody gets tired of the discussion, and nothing of value was added.

I was wrong about everybody getting tired, it seems.

Apparently!

@Cas: Yes, I think Design by contract could be an awesome feature for Java. It might even solve the few cases where you do want checked exceptions, perhaps? Maybe you could assert that a method will or will not propagate certain classes of exceptions?

Without wishing to hijack this thread (which I'm finding quite interesting) with a further discussion on pixel bit operations, that example from ra4king is wrong (doesn't do clamping). I recommend this old thread which contains loads of working pixel blend modes based on bit shift operations. Maybe start a new thread if you want to discuss further - I'd be tempted to claim it's off-topic, though this thread seems to be pretty much everything goes.

If you are creating a bunch of NPEs anyway, you really don't have any business coding. No program works well when it creates a bunch of references you aren't going to use. Such bad coding practice of not initializing your Objects/variables is why there are so many segmentation faults and heap errors in C/C++. How are programmers supposed to learn anything if we aren't punished for writing bad code?

Funnily, I am in favor of an annotation that gives this functionality (like @Nonnull). It is a very good idea. However, I don't think it should be the default, because it produces lazy code where programmers don't have to code defensively. Leave that notation for people who want to make their code behave in a certain way.

If there is one thing I like Java for, it is that it makes good programmers. The language has very high standards when it comes to design and how it was written. The fact that it actively prevents us from causing segmentation faults is one of the best fail-safe designs I've ever witnessed in a language to date. NPEs are not a problem at all. They get beginners writing better code faster. That is something I can always get behind.

Ahaaarrr, I actually disagree with every part of this post. As I sup another freshly drawn pint of foaming virtual ale, I counter with:

1. Sproingie is dead right about telling people to just be better at it. There's no point in trying to be some sort of righteous idealist. If you were right then nobody's programs would ever crash, would they, because we're all perfect. It would seem that the empirical evidence points to exactly the opposite conclusion: we are highly fallible. Let a machine do the job of telling me whether I'm doing it wrong or right. The longer I do this (32 years and counting) the more I wish computers told me I was doing things wrong sooner rather than later. And the only people who get punished by programs that crash are users, not developers.

2. null object pointers are critically important. null means something. It means, this is pointing at nothing. I have maybe not allocated it. Quite probably I do not want to waste the memory, because memory is indeed still a finite resource. I specifically make use of the null "pattern" for things such as lazy instantiation for expensive-to-construct objects and things which may take up a lot of RAM. It's fine, for example, to have a million objects, but what if each of those million objects was forced to have some reference in each of 4 fields which were effectively useless? You'd have to point them instead at some stupid NullThingy instance which threw... RuntimeExceptions on every method you tried to use it for probably, because it's not supposed to be there. It's doable but means every class effectively needs a NullInstance which throws exceptions when any of its methods are called... hideous. null is a trivial solution to a real problem: saving space and being trivial to detect (OS signal).

3. Another interesting tidbit about @Notnull - research quite a few years back on Java programs discovered that the majority of cases where an object was referenced actually assumed @Notnull rather than @Nullable. There was therefore a reasonable school of thought that leads us to thinking the default should be @Notnull (or rather, undecorated), and you'd specifically have to annotate with @Nullable to allow nulls otherwise.
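The lazy-instantiation use of null from point 2 can be sketched roughly like this (the `TerrainTile`/`ExpensiveMesh` names are made up for illustration): the field costs one reference word while null, and the expensive object is only built on first use.

```java
public class TerrainTile {
    private ExpensiveMesh mesh;   // null doubles as "not built yet"

    ExpensiveMesh mesh() {
        if (mesh == null) {           // build lazily, on first access
            mesh = new ExpensiveMesh();
        }
        return mesh;                  // subsequent calls reuse the instance
    }

    // Stand-in for something costly to construct and heavy in RAM.
    static class ExpensiveMesh {
        final int triangles = 1_000_000;
    }
}
```

A million `TerrainTile`s that never touch their mesh pay only for a million null references, which is precisely the space argument being made; a mandatory `NullMesh` sentinel object per field would buy nothing over the null check.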

There was therefore a reasonable school of thought that leads us to thinking the default should be @Notnull (or rather, undecorated), and you'd specifically have to annotate with @Nullable to allow nulls otherwise.

That's exactly the problem with Java right now; you can't have @NotNull as the default. We're at a point where we have a "dumb" Java compiler and we're supposed to use tools (intelligent IDEs, bytecode transformers, etc) for everything. Ok, that's fair, but the only way to protect myself from passing null to something that expects non-null, is to explicitly annotate that something with @NotNull. But that's like 95% of the codebase! So, we end up using @Nullable only and not @NotNull at all, to avoid the code mess, and only gain half the benefit of compile-time null safety.

Assuming we don't want to use another language like Kotlin, this could be solved with an IDE that supports a "null-safe Java mode". When it's on, @NotNull doesn't exist; everything is annotated with it automatically. You use @Nullable where necessary. While you're at it (time to order more virtual ale?), make everything final as well (except methods/classes ofc) and have a @Mutable annotation to indicate mutability.

Yes, that'd be a great option for Eclipse to support. <edit> Not sure about the mutability idea - may as well put the const keyword to use if you're going to go that far. And look what a mess that seems to have made of C++.

No need to go all the way to const. It would just be very convenient if all primitives/references were immutable by default. Maybe I should have said @NonFinal and not @Mutable.

This is what I think:

- You want to initialize class fields in the constructor and never change their values (more immutable classes = good). This means final fields by default.
- Method code that changes the value of passed arguments is confusing and a frequent source of bugs. This means final method arguments by default.
- Local variables that get assigned more than once are relatively rare and again may lead to confusion (e.g. when coupled with long if/then/else chains). One might say that mutable loop variables are very common, but there's no reason to worry about that these days with the enhanced for loop, forEach, map/reduce, etc. This means final local variables by default.

Anyway, only the first one might be a bit problematic with injection frameworks, but in general it should lead to cleaner code.
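The three rules above can be sketched in ordinary Java today, just with the `final` keyword written out explicitly everywhere the proposal would make it implicit (the `Circle` class is a made-up example):

```java
public final class Circle {
    private final double radius;          // field: assigned once, in the ctor

    public Circle(final double radius) {  // parameter: cannot be reassigned
        this.radius = radius;
    }

    public double area() {
        final double squared = radius * radius; // local: single assignment
        return Math.PI * squared;
    }
}
```

The noise of repeating `final` on every declaration is the whole argument for flipping the default: the code that deviates (a genuinely mutable field or variable) is rare enough that it should be the thing needing a marker.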

- You want to initialize class fields in the constructor and never change their values (more immutable classes = good). This means final fields by default.

I understand the viewpoint, and mostly agree with it, and in a new language it would be a great move. It's never going to happen in Java, and it would be a bad idea if it did, because it would change the semantics of the language.

I can definitely see the benefit in compile time annotations for classes / fields though, so that warnings are produced if fields aren't final and not marked - that's something I could definitely envisage using.

Slight aside on the importance of final fields - I was interested to find out during the recent thread on double-checked locking that final fields have different assignment semantics in the new Java 5+ memory model. They can never be seen from another thread in an invalid state (unlike mutable fields).
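A small sketch of that guarantee (JSR-133, the Java 5+ memory model), with a hypothetical `Position` class: once the constructor completes, any thread that obtains a reference to the object is guaranteed to see its final fields fully initialised, without any synchronization.

```java
public final class Position {
    final int x;   // final fields: safely published by the JMM
    final int y;

    Position(int x, int y) {
        this.x = x;
        this.y = y;
    }
    // Were x and y non-final, another thread could legally observe the
    // object with them still zero, unless access were synchronized or
    // the reference were published via a volatile/safe mechanism.
}
```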

It's fine, for example, to have a million objects, but what if each of those million objects was forced to have some reference in each of 4 fields which were effectively useless? You'd have to point them instead at some stupid NullThingy instance which threw... RuntimeExceptions on every method you tried to use it for probably, because it's not supposed to be there.

Null is still a reference, it's just an all-zero bit pattern (it doesn't have to be, but that's how every JVM does it). And it throws NPE for every method you try to use it on. Scala's None is a global value, so it's not taking up any extra space other than the single None object. And since None is a subclass of Option[T], but a sibling of Some[T], and Option[T] is of course a different class than T, there's never any danger of mixing them up -- and it's the compiler that tells you when you do, not the runtime.

Scala doesn't actually try to solve the null problem globally -- null is still there, and any reference can still be null. It's just not used that much because there are type-safe alternatives. Option isn't a panacea either: for one, it doesn't solve the problem of "where did this null come from", since now it's "everything's resulting in None, where did this None come from?". It just makes handling it a lot more explicit and, if you use it monadically, easier to swap out with something better like Validation.

If your problems only involve positive integers and you have no subtract or divide operation, you too would have a right to fret over seeing a zero involved. Compare by analogy to operations involving only known valid objects that return other valid objects. When you assert that condition, it's nice to have the compiler prove that it's true.
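For readers staying in Java: `java.util.Optional` (since Java 8) is a rough analogue of the Option type discussed above. The `findUser` method and its data are invented for illustration; the point is that absence becomes explicit in the signature, even though, like Scala's Option, it doesn't remove null from the language.

```java
import java.util.Optional;

public class OptionalDemo {
    // The return type itself says "there may be no result".
    static Optional<String> findUser(int id) {
        return id == 42 ? Optional.of("Cas") : Optional.empty();
    }

    public static void main(String[] args) {
        // The caller must decide what "absent" means, instead of risking an NPE.
        String name    = findUser(42).orElse("anonymous");
        String missing = findUser(7).orElse("anonymous");
        System.out.println(name + ", " + missing); // prints: Cas, anonymous
    }
}
```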

Threads like these, of which we have a bunch, always end up stating the obvious and feeding endless derailments over semantics that nobody really cares about but everybody insists be phrased correctly from their own point of view. In the end everybody gets tired of the discussion, and nothing of value was added.

I was wrong about everybody getting tired, it seems.

I'm late for this response, but I really have to say: every time I log in to JGO, I check this thread. I'm not getting tired of this... I'm so surprised this is not a flame war...

The thread just gives me sympathy for James Gosling and the other language designers. All the devs at Sun telling them Do This, Do That... A pretty thankless task. All I know is I found C and C++ stressful and Java a joy to work with. As a self-taught programmer, I feel I can do creative things with Java, while with the C-team I spent my time worrying about memory allocation and pointer math.
