Ah, but if I was attempting to be misleading (for some strange reason) I wouldn't have pointed to a presentation that contradicts a "bulletpoint" from some other presentation that I misinterpreted. I assumed this point was dead from spasi's numbers. I wish there was an easy way to track features that make it into mainline.

@cas: structs - That hand-wavy presentation says version 10+! So if all goes well, in 2017! (Excuse me if I don't hold my breath... but at least they're thinking about it... finally.)

I'm fine with #2 as is, but things like if (obj instanceof Foo) { ... } still requiring an explicit cast to Foo inside the block are silly.
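For the record, a minimal Java sketch of the redundancy being complained about (the Animal/Dog classes are made up for illustration):

```java
// Hypothetical classes to illustrate the instanceof-then-cast redundancy.
class Animal {}

class Dog extends Animal {
    String bark() { return "woof"; }
}

public class InstanceofDemo {
    static String describe(Animal a) {
        if (a instanceof Dog) {
            // The compiler already proved 'a' is a Dog on this branch,
            // yet Java still demands the explicit cast:
            Dog d = (Dog) a;
            return d.bark();
        }
        return "silent";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Dog()));    // woof
        System.out.println(describe(new Animal())); // silent
    }
}
```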

All these spinoffs of Java like Kotlin and Scala share one particularly irksome trait, which is that they keep changing things for the sake of changing things. E.g. the removal of semicolons to terminate statements, or the use of the word "fun" to introduce a function declaration when we already have a way. Why can't someone make Java++, where all the lexical bits of Java are preserved and all the annoying, unnecessary syntax that bothers us is fixed?

Semicolons in Kotlin are optional. Also, once you start writing code without semicolons, you also start wondering why the hell you're forced to type them in Java. They're ok for the really few cases where they're necessary and totally useless everywhere else. In properly formatted Java code anyway.

Keywords like "fun" are needed once you move types to the right hand side of declarations. I'm no expert on compilers, but I think left vs right isn't simply a matter of aesthetics, it enables more powerful/flexible features to be built.

2. Automatic casting, e.g. Object x = ...; String y = x; // no need to write the narrowing cast as it HAS to be a String to run
3. All the little things in Coin I've been waiting for, like ?.
4. Iterable arrays
5. That nifty default implementation thing for interfaces

works without an explicit cast. As far as the compiler is concerned, the foo variable is a String inside that if block and an Object outside. The compiler is clever enough to automatically cast much more complex usages than this simple example.

3. Kotlin's null-safety is top of the line. Elvis, safe calls, safe casts are all available.

4. Object arrays can be used as Iterable<T> in Kotlin (primitive ones cannot).

5. Extension functions are much more powerful than default interface implementations.

2. Kotlin has safe-casts. ... As far as the compiler is concerned, the foo variable is a String inside that if block and an Object outside. The compiler is clever enough to automatically cast much more complex usages than this simple example.

hmm ... yeah, I like the Kotlin way! Java doing automatic casts within an instanceof block I'd be happy with. Forcing the developer to deal with what happens if the cast fails is a good thing.

The difference, so far as I can make out, is this: by forcing explicit casts we insist that the developer patronise the compiler, asserting that he/she knows what he/she is doing. Without explicit casting we are effectively telling the compiler that we are unaware this cast operation might fail, and are therefore poor stupid humans.

However, the runtime catches the problem.

The same argument goes for checked vs. unchecked exceptions. The whole idea of forcing people to catch checked exceptions is daft, especially when it ends up with this appearing everywhere:


try {
    ...
} catch (Exception e) {
    e.printStackTrace(System.err); // This is not handling, this is IGNORING
    // or worse... TODO: handle exception but output nothing until someone writes some code
}

Again, as all exceptions are caught and meticulously detailed by the runtime, all we're doing is polluting readable code with failure handling everywhere when there is actually already a perfectly robust failure handling mechanism in the JVM. As an example of just how lame this is look no further than RemoteException.

You may attempt to make a counterargument that expected exceptions (oxymoronic) need to be handled correctly, but then you have to assume that any method call can fail at any time for any reason through the long list of unchecked exceptions that can already be thrown anyway (eg. NPE, CCE, IOOBE, OOME, etc).

Final nail in the coffin is that C# does without them and they have mysteriously suffered a grand total of 0 problems as a result.

Almost every one of those wishlist features is in Scala (including structs as of 2.10, though getting them to always be stack-allocated is now in the JVM's court). The "elvis operator" is really flatMap over Option, since null itself is really a second-class citizen.

Final nail in the coffin is that C# does without them and they have mysteriously suffered a grand total of 0 problems as a result.

Actually, every language that isn't Java does fine without checked exceptions -- as far as I can tell, Java's the only one that has them. OCaml has a compiler feature that lets you turn them on much in the same way as asserts, and I think Ada has something similar, and that's really how Java should have done it too.

I'm not sure I agree they're that similar. Personally I think checked exceptions have their place, but unfortunately are far too widely used. Don't have too much of a problem with them when they're about outward facing code (user input, IO, etc.) Beats dealing with error codes!

The documentation also seems to imply Option is nothing but typesafe null, and that makes me wonder if they'll recognize the utility of monads the way C# has. With a for-comprehension or a LINQ select, I can use the same code to deal with failure synchronously or asynchronously, with or without error messages, with or without alternatives. I just change the type that goes in, and my flow control logic itself is all polymorphic.

I spent some time last night implementing some basic monads in Kotlin. It was pretty straightforward with a few wrapper classes and a bunch of extension methods. The way I understand it (please let me know if I'm wrong), monads are really a design pattern, not a language feature. Handy features like LINQ and the yield operator make the code easier to read, but they're not really needed to have monads in a language. Anything that can be done with yield can also be done with forEach/map/flatMap. LINQ is nice of course, but it's as far away as you can get from Java.
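To make "monads as a design pattern" concrete, here's a rough plain-Java sketch of the same idea (the Opt class and its method names are invented for illustration, not from any library) — just a wrapper type with a unit constructor and flatMap:

```java
import java.util.function.Function;

// A minimal Option-style monad sketched in plain Java.
// Class and method names here are invented for illustration.
final class Opt<T> {
    private final T value; // null encodes "none"

    private Opt(T value) { this.value = value; }

    static <T> Opt<T> some(T value) { return new Opt<>(value); } // unit
    static <T> Opt<T> none() { return new Opt<>(null); }

    // The monadic bind: apply f only if a value is present,
    // otherwise propagate the empty state untouched.
    <R> Opt<R> flatMap(Function<T, Opt<R>> f) {
        return value == null ? Opt.none() : f.apply(value);
    }

    T orElse(T fallback) { return value == null ? fallback : value; }
}

public class OptDemo {
    public static void main(String[] args) {
        String ok = Opt.some(21)
                .flatMap(n -> Opt.some(n * 2))
                .flatMap(n -> Opt.some("got " + n))
                .orElse("nothing");
        String failed = Opt.some(21)
                .flatMap(n -> Opt.<Integer>none()) // failure short-circuits
                .flatMap(n -> Opt.some("got " + n))
                .orElse("nothing");
        System.out.println(ok);     // got 42
        System.out.println(failed); // nothing
    }
}
```

No yield, no LINQ — the chaining behaviour comes entirely from flatMap, which is the point being made above.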

Kotlin is really young; even JetBrains won't be using it in production for at least a few more months. They're still working on "low-level" features (e.g. data classes at the moment). The most important fact about Kotlin is that they're trying to keep it simple. It has everything you'd expect from a modern language, plus a few clever extras, but it's still something a Java dev can pick up easily. It's for the exact opposite reason that I think Scala won't ever become mainstream.

I spent some time last night implementing some basic monads in Kotlin. It was pretty straightforward with a few wrapper classes and a bunch of extension methods. The way I understand it (please let me know if I'm wrong), monads is a design pattern really, not a language feature. Handy features like LINQ and the yield operator make the code easier to read, but they're not really needed to have monads in a language. Anything that can be done with yield can also be done with forEach/map/flatMap. LINQ is nice of course, but it's as far away as you can get from Java.

True, monads aren't necessarily a language feature, but Scala has for-comprehensions, C# has LINQ, and of course Haskell has do-notation and they go a long way toward making monads actually pleasant to use -- and I might add, simple. If Java wanted to do something similar, I imagine it could simply extend its foreach syntax to anything that can be fmap'd ... not that that sort of thing would ever happen, mind you.

(EDIT: For those not familiar with monads: A monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor...just in case you were curious )

It makes me feel good that Java only evolves slowly. In actual fact it hardly changed at all in the early 2000s and it has evolved fairly rapidly in recent years. The introduction of generics and annotations prove that Java can still grow and stay relevant.

Well, they're either burritos or elephants (web-search that if you're motivated)... I'll let people with recent experience attempt to explain. I suppose the thinking here is that functionality which would allow writing monads is the point, not the monads themselves... but I think that would require pattern matching/transformations, which (as much as I love them) I don't think would be a reasonable addition to Java.

If those doSomethings return an Option, then anytime any of them return None, the whole thing will return None (notice we didn't need to do any checking). If it's a Validation or Either, then a failure will propagate through the same way as None. If it's a Future, then each one of those steps is a Future on the previous Future's result. A list will run each step on all the previous results (what we normally think of 'map' meaning). And there's a bunch more monads where those came from like Reader, Writer, and of course Haskell's infamous sin bin, IO.
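In Java terms, that None-propagation is what java.util.Optional (Java 8+) gives you: each step runs only if the previous one produced a value, and an empty result falls straight through the rest of the chain. The pipeline steps below (parse, reciprocal) are made up for the sketch:

```java
import java.util.Optional;

public class ChainDemo {
    // Hypothetical pipeline steps; each can "fail" by returning empty.
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    static Optional<Integer> reciprocal(int n) {
        return n == 0 ? Optional.empty() : Optional.of(100 / n);
    }

    static String run(String input) {
        // No explicit null/empty checks anywhere: an empty result at any
        // step propagates straight through the rest of the chain.
        return parse(input)
                .flatMap(ChainDemo::reciprocal)
                .map(n -> "result=" + n)
                .orElse("failed");
    }

    public static void main(String[] args) {
        System.out.println(run("4"));   // result=25
        System.out.println(run("0"));   // failed (reciprocal step)
        System.out.println(run("abc")); // failed (parse step)
    }
}
```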

The gist of it is, Monads are about having a single way to glue effects together, where "effect" depends on the context, whether it's something that could fail anonymously (Option), fail with a result (Either/Validation), have multiple results (List), depend on another value supplied "just in time" (Reader), collect information "on the side" such as warnings or summaries (Writer), or any other "computational context" on a type into which you plug functions that work on that type.

Now I admit it's nice to be able to use a hybrid language where you don't have to string together effects in monads (especially IO), but it's damn handy to have when you want to. I mostly use them to string together Futures in my fully async codebase that because of monads and the syntax sugar for them, doesn't actually look one bit async except for one teensy timeout handler at the end.

@Riven (Reply #111) - That is extremely disappointing. Taking that particular course of action is like replacing the head of a hammer with an extra claw because someone was trying to remove nails with the wrong side of the hammer. Like I said, the copy constructor and intern() method are there for a reason. It must have been an intentional design decision, and I had faith that whoever had the authority to make that decision would understand the tool they were attempting to modify a little better than that.

I almost wrote a long rant about developers thinking they can add new features in a vacuum, without considering their side effects or incompatibility with stronger features of a language. It's even worse that they would disregard backwards compatibility for something as dumb as that (it's not considered a memory leak if you set a StringBuilder or Collection's capacity too large, after all; you're supposed to copy and trim those too if you're going to store them long term and need to conserve space), yet not sacrifice backwards compatibility for serious annoyances like type erasure or the lack of unboxed primitive type parameters in Generics. I imagine there are less controversial and more practical changes you could make if backwards compatibility could be ignored everywhere.

@Princec (Reply #141)
1. That's nice, but would it be necessary if the default visibility of fields and methods were better?
2. That would be pretty scary to encounter in the tall grass without any pokeballs. Up casting seems to be relatively rare with Generics in my experience of reading and writing code. It's also a symptom of other problems with class designs. Maybe not worth losing the ability to catch certain errors or read the programmer's intent from only the source code (vs code + comments).
3. I don't know what that is, but I already said some of the proposals scare me.
4. Thought that was already the case. It definitely should be if it's not though...
5. I don't know what you mean, but it reminded me of an idea I once had. I think I gave up the idea because it looked too much like multiple inheritance. What about a mustoverride keyword or annotation for non-abstract class functions? Occasionally I've wanted to force a sub class to explicitly override a method (even if it only called the super function) but couldn't use abstract because I wanted a default function and a non-abstract class. This would have been useful for me in things like initialize(), reset(), dispose() methods in some game engines.
6. That's been near the top of my wish list ever since I started doing A-star searching and complicated vector calculations. I thought about hacking together a translating compiler that added them using groups of primitives for assignments and function calls and ByteBuffers or parallel arrays for struct arrays, but I don't know how you could return multiple values from a function. I'd rather that be supported in the JVM for performance reasons and to avoid another source of type erasure in addition to Generics.
7. Don't know if that's good or unhelpful. Does that require 5?
8. I can't envision what features would be needed or what syntax could be used, but it's probably worth standardizing.

@Princec (Reply #146)
That's because C++ uses special constructors and overloaded assignment operators to handle conversions like that. It makes less sense in Java's (reference-centric) type system than in C++'s (value-centric) type system. It's a different style of coding. C++'s type conversions make code much more complicated without providing much benefit. Plus, your example would not work in C++ either: you have to call the c_str or copy functions, although you could say str = char_array; instead of string = new String(charArray); if the assignment types were swapped.

@delt0r (Reply #147)
Not exactly. It's much worse. Calls to virtual functions on C++ object pointers behave nearly the same as calls on Java references to Java objects. (String)obj does not change obj; it just asserts that it is of type String. Primitive casts in both languages work the same way: casting a double to a long truncates the number in both cases, and you have to do something else to reinterpret the bits of the double as a long. Primitives in C++ can be turned into any other primitive using implicit casting. (Groan.) Other conversions can also be made implicit, but you have to create an overloaded assignment operator function for each type. C++ complicates things by allowing you to do both, hiding the meaning of type conversions. There's also something called slicing, where passing a type by value will give you a different version of the data and different virtual functions than passing the sub class to a function by reference.
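A quick Java illustration of that distinction: a (long) cast on a double converts the value (truncation toward zero), while reinterpreting the same bits is a separate library call, not a cast.

```java
public class CastDemo {
    public static void main(String[] args) {
        double d = 3.99;

        // A primitive cast converts the VALUE: truncation toward zero.
        long truncated = (long) d;    // 3
        long negTrunc = (long) -3.99; // -3, not -4

        // Reinterpreting the BITS of the same double is a different
        // operation entirely, done via a library call rather than a cast.
        long bits = Double.doubleToLongBits(d);

        System.out.println(truncated);
        System.out.println(negTrunc);
        System.out.println(Long.toHexString(bits));
    }
}
```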

Speaking of casts as operations. I would actually like to see things like (int), (short), (long), etc replaced with int(x), short(x), long(x), etc. And conversion from floating points to integers be done by global floor(y), ceil(y), and truncate(y) functions. Primitive casts are basically functions anyway, and sometimes it would save two keystrokes. But what about getting rid of casts entirely? Maybe you could find alternatives to up casting or eliminate the need to do that. No class cast exceptions, no instanceof, and you could pass a reference to a ReadOnlyType and know it could never be cast to a ReadWriteType. I'm not sure about that, though.

@Spasi (Reply #151)
I've programmed in languages with and without semicolons. It's not a deal breaker either way. When it's habitual it doesn't waste time, and it can be used as a sanity check and a hint to help your editor format your code as you type. Line continuation characters and writing multiple statements on one line were a little distracting without having that habit, but that was a long time ago for me.

You're right about right hand type declarations. The main example I can think of is separating function modifier from return type modifiers (like const), but not many languages need that. Things like fun or var could be dropped in some languages, but the syntax highlighting probably helps.

Any language addressing null "safety" is making a mistake. Null only means something bad in C and C++, and the billion-dollar mistake in those languages was letting people treat pointers like ints. C++ is one of very few languages that define objects by their value instead of their identity. Most variables have an empty/default value... I guess the rationale behind calling null a bad thing is that it has different semantics as a "default" pointer than the standard default values. Getting rid of "null" doesn't get rid of the possibility of 0x0 being invalid memory, or 0x01 being equally invalid, or unaligned addresses being incorrect, or having dangling pointers, or having dead pointers, or pointers to incorrect variables, or pointers that accidentally get assigned an int value. It's one of those things I've thought over a lot but still can't see a convincing argument for from other points of view. When you have a language which does NOT mix value and pointer semantics and uses exact references, you don't have that problem; you tend to have an entirely different set of problems in those languages, normally related to using uninitialized values or not obeying the preconditions of a function.

NullPointerExceptions are the type of thing that annoys you when you're a total noob to a language. Then you realize that using meaningless dummy objects, breaking preconditions of a function, and forgetting to assign a value to a variable are a bad thing no matter what language you're using.

@princec (Reply #153)
Can you explain why checked exceptions are a problem? I've heard it a lot, and everyone seems to gloss over the fact that runtime exceptions (the only ones which are actually controllable and preventable by the programmer) don't need to be checked. I've seen it in every anti-Java rant, but the argument that the programmer is a big boy seems to be a straw man, given that they pretend all exception types are checked. We're grown-ups. It's not like connections ever get dropped or other programs modify the file system while my own program is running.

Also glossed over is the fact that it's easy to say "throws IOException" for functions that actually do have a chance of throwing it. Just attach the appropriate throws clause to all your helper methods and put your try-catch block in the one entry method that in turn calls the IO-related helpers. Plus there's the fact that you can do try-finally, also never mentioned. The point is that most places where you have to handle a checked exception in Java you also have to handle in any other language, and it's actually shorter to do using throws or try-finally than with a ton of if statements or lots of try-catch-rethrows.
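As a sketch of that pattern (the helper method names are invented): helpers just declare throws and let the exception propagate, while the single entry point owns the try-catch, with finally guaranteeing cleanup either way.

```java
import java.io.IOException;

public class ThrowsDemo {
    // Helper methods simply declare the checked exception and let it
    // propagate; no try-catch clutter at every call site.
    static String readHeader(boolean fail) throws IOException {
        if (fail) throw new IOException("connection dropped");
        return "header";
    }

    static String readBody(boolean fail) throws IOException {
        if (fail) throw new IOException("disk vanished");
        return "body";
    }

    // The single entry point handles the failure once, and uses
    // finally to guarantee cleanup on success and failure alike.
    static String load(boolean fail) {
        try {
            return readHeader(fail) + "+" + readBody(fail);
        } catch (IOException e) {
            return "error: " + e.getMessage();
        } finally {
            // e.g. close streams here; runs whether or not the IO failed
        }
    }

    public static void main(String[] args) {
        System.out.println(load(false)); // header+body
        System.out.println(load(true));  // error: connection dropped
    }
}
```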

Getting rid of "null" doesn't get rid of the possibility of 0x0 being invalid memory, or 0x01 being equally invalid, or unaligned addresses being incorrect, or having dangling pointers, or having dead pointers, or pointers to incorrect variables, or pointers that accidentally get assigned an int value.

That's what I meant. I was referring to C++. Though I may have been making a bad argument because I forgot the billion dollar mistake wasn't originally C or C++. I don't know the type system of ALGOL W, so I can't say if it made sense for that particular language. My point was that null pointer issues are normally caused by other problems, mainly uninitialized variables and improperly assigned variables, so it doesn't matter if you take null references or null pointers away.

Edit: Can someone direct me to information on "the one billion dollar mistake"?

Quote

The null reference was invented by C.A.R. Hoare in 1965 as part of the Algol W language. Hoare later (2009) described his invention as a "billion-dollar mistake":

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

Is he taking credit/blame for all languages with null references or just one? The language is nearly 50 years old. Is that number already adjusted for inflation? If he really was working with null references and not null pointers, I wonder why there were vulnerabilities. Depending on how you define safe, that could mean different things. If he meant only accessing valid memory of objects, then it was merely a failure to implement error checking, like not doing bounds checking on an array. That could have been fixed easily. If he meant being able to access any object at any time with the expectation that the data stored in it is meaningful, then that's not fixable by making null pointers disappear. Unless you're working with raw data, there's no universal solution. You're going to have problems of the same class as null pointer exceptions for at least one operation.

NullPointerExceptions are one of those things that bug noobs. So are type errors. And so are unrecognized class errors. I knew people that half-jokingly said "why can't I use import *.*; to fix all my compiler errors?" while learning Java. Maybe these three things are why PHP and JavaScript are so popular.

Am I the only one that never had NPE problems? If I ever got an NPE, even in my older n00bish days, I would go to the line number, use reasonable deduction to find out what exactly was null and where, and easily figure out my mistake. Was I special or are newbies conditioned to be dumber these days......?

Everybody asking in public whether they are special, are special in their very own way.

Other languages hide the 'null' concept from the developer, like PHP, where it is either resolved to zero, an empty array, an empty string, false, or whatever it can throw at you, excluding an exception of course. To me, that only exacerbates the problem, as the program will chug along without telling the developer that some unexpected state was reached, leading to corrupted data and silent failures.

(Uncaught) NPEs are nothing special; they indicate that wrong assumptions were made, which can cause any sub-type of Throwable to be thrown. Any such problem can be solved using 'reasonable deduction' - as in: 'understanding the code'. It's not special, solving it is not special, reducing the occurrence of unexpected exceptions is not special; it's part of the skillset required to be a reasonably qualified programmer.

It's also normal to have a skewed sense of reality in that you do not understand the skills of others that you do not master, and frown upon those that do not have the skills you mastered. This leads to thinking of others being dumber, while everybody more skilled than you has skills 'nobody needs anyway'. In the end everybody deems themselves significantly above average intelligent. I hope, after initially being a bit vague about it, this fully explains how you are not special.

Having said that... regarding the NPE problem: the alternative solution to this problem is telling the compiler which variables (including primitives) can be nullable, either using annotations or using special syntax (not available in Java). IMHO annotations for this problem cause an explosion in code verbosity worse than generics, and should be avoided.


C# suffers one big problem as a result: when I'm using a library, I can never get a reliable list of what exceptions I might need to handle. The main value of checked exceptions is documentation.

Absolutely! In fact, I'd call having them as a defined part of a method signature a bit more than just documentation, and a good thing too.

I disagree completely with princec's earlier statement that expected exceptions are oxymoronic. Rare but expected failures are exactly where checked exceptions make some sense. I like this article, in particular the table on page 2, which I think defines well the places where checked exceptions should be used. I also like it because it shares my view that checked exceptions are useful but overused!

(I remember the solution: use a ByteBuffer and get int[] and byte[] from it and vice versa. I don't remember why I'm not using it - maybe because bytes are signed, or because it was slow.) Update: checked - bytes are signed, and I can't convert a ByteBuffer to and from a char array for unsigned bytes.

#3: I want to be able to have more than one parent for a class; interfaces don't always help. (Doesn't need explanation ^^)

Other languages hide the 'null' concept from the developer, like PHP, where it is either resolved to zero, an empty array, an empty string, false, or whatever it can throw at you, excluding an exception of course. To me, that only exacerbates the problem, as the program will chug along without telling the developer that some unexpected state was reached, leading to corrupted data and silent failures.

PHP is a language where false, 0, [], "", "false", "zero", and "0" can sometimes (but not consistently) be used interchangeably. I once tried to make a SQL abstraction layer library in that language. Something to add enough type safety and validation to prevent exploits and handle type conversions. It worked, but it gave me a lot of insight on how terrible the language is if you're trying to build something secure and easily maintainable or if you just have pride in your work. I found so many other problems in the language that I never used it again once I finished all my ongoing projects.

NPEs are nothing special, they indicate that wrong assumptions were made, which can cause any sub-type of Throwable to be thrown. Any such problem can be solved using 'reasonable deduction' - as in: 'understanding the code'. It's not special, solving it is not special, reducing the occurrence of unexpected exceptions is not special, it's part of the skillset required to be a reasonably qualified programmer.

They're no different than any other runtime exception. You have ArrayIndexOutOfBoundsExceptions, but almost no one advocates getting rid of arrays. (The sad thing is that some people actually do... some languages make everything an associative array or a list.)

It's also normal to have a skewed sense of reality in that you do not understand the skills of others that you do not master, and frown upon those that do not have the skills you mastered. This leads to thinking of others being dumber, while everybody more skilled than you has skills 'nobody needs anyway'.

Perspective always skews your sense of reality. I notice I sometimes sounded elitist in this topic, but I think some level of disgust is justified. Someone commented on one type of programming language feature involving a theological argument, but I think everything here can be debated on its merits. I didn't post it because it was part of a rant I deleted, but I wrote something along the lines that programming languages are just tools. It's better to have multiple tools that handle some things perfectly and most other things very well than to have one tool that handles nothing perfectly, that is awkward to use, that is dumbed down, and sacrifices effectiveness for ease of use. People that want to mandate that an existing language change to accommodate something they're used to from another language take the skills and language features other types of programmers have mastered for granted. Some things are even mutually exclusive with existing features, such as adding run time duck typing to statically typed languages or pointers to something that only uses exact references. It's okay to have multiple languages, but I think infiltrating the design team and converting programmers of another language is too aggressive and won't serve anyone well.

Having said that... regarding the NPE problem: the alternative solution to this problem is telling the compiler which variables (including primitives) can be nullable, either using annotations or using special syntax (not available in Java). IMHO annotations for this problem cause an explosion in code verbosity worse than generics, and should be avoided.

What bit pattern would you use to represent null primitives? Raw data shouldn't be nullable, because that forces you to add a layer of indirection or restrictions. I'm a little confused about what the merits of non-nullable types would be. Would full support for contracts, or the ability to use compile-time asserts, accomplish the same thing? I can imagine something like Kotlin's safe casts, but I don't know why you would stop at null references, or whether that would even have a good return on investment on its own. They're not a major problem. Maybe requiring all fields to be initialized the same way final fields and local variables are would be along the same lines: saying "it doesn't make sense to give this a default value, null or not; you need to explicitly initialize it to something."
