>Unfortunately, it is not really possible to make Kotlin behave the same way. Apple uses a bridging mechanism to connect Objective-C and Swift binaries, while Kotlin uses the same bytecode as Java. For a simpler mental picture, imagine Objective-C and Swift connected side by side, and Kotlin and Java as a stack with Kotlin on top. I presume it would be quite challenging to provide proper compatibility with Java while transforming all nullable Kotlin values to Optional and vice versa, especially in areas as tight as Java reflection.

This is not correct. The Kotlin compiler could treat every parameter and return value that doesn't have a nullability annotation as an implicitly unwrapped optional (Type!). It could even support all of the popular nullability annotations. There is no requirement to choose just one.

The beauty of IUO is that you can ignore the nullability and you have the same amount of safety you have today: if you touch a null value you get an exception (or abort in Swift's case). The benefit is you can insert a null check in one place which then propagates through the rest of your Kotlin (or Swift) code.
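This is in fact close to what Kotlin already does for unannotated Java APIs via platform types. A minimal sketch, assuming a standard JVM where `user.home` is set (the unset property name below is an arbitrary hypothetical key):

```kotlin
// System.getProperty returns a platform type (String!) to Kotlin: the
// compiler lets us assign it straight to a non-null String and inserts a
// runtime check at the assignment, much like an IUO in Swift.
fun propOrDefault(name: String): String {
    // Declaring the result nullable moves the null handling to one place;
    // everything downstream is then statically known to be non-null.
    val value: String? = System.getProperty(name)
    return value ?: "default"
}

fun main() {
    val home: String = System.getProperty("user.home") // checked here, not later
    println(home.isNotEmpty())
    println(propOrDefault("definitely.not.set.hypothetical"))
}
```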

Yeah, agreed, that's not correct. Kotlin has to compile to Java bytecode, true, but it does not overlay Java the way e.g. Groovy does. Kotlin and Java very much exist side by side in exactly the same way Swift and Objective-C do: they (can) target the same machine but have different language semantics.

The Chris Lattner quote at the end explains the fundamental difference. Kotlin is an interloper in a Java world - that includes the JVM, a VM designed to run Java.

Swift integrates and interoperates with the Objective-C runtime but is not based on it. Apple also controls all the relevant bits and pieces, and they can bridge, compile to, shim, wrap, etc. however they want. They can pick the design approach and fiddle with all the parts to make them fit. This isn't a luxury Kotlin has.

That would have been true a decade ago. After the advent of .NET and the CLI's ability to support multiple languages, Sun started moving the JVM away from being Java-specific and began including features to support other languages. Your criticism hasn't been correct for years.

It is not a criticism as much as it is a fact, no less true than ever. The JVM is designed to run Java. Other languages also compile to Java bytecode, and Sun and Oracle have made some helpful changes.

But the VM is OVERWHELMINGLY designed around the requirements of Java and to be performant for Java. Try compiling a non-OO language to fast bytecode without spending a lot of time considering what the analogous Java code would compile to.

Java bytecode is so close to Java, you can decompile it almost directly to readable Java source. Try that with JRuby or Clojure or Scala and see how close you land.

I'm not sure I understand this; it seems you're responding to some perceived 'criticism' rather than to things Lattner mentions. Yes, a few things have been tweaked in the JVM to make life easier for non-Java languages. Fundamentally, though, its semantics are closely tied to Java's (or vice versa, or both, or something!). You aren't going to write a (sanely performing) JVM language that has a different memory-management model, or value types, or TCO, at least not until the JVM supports those things, and even then, hopefully in a way that matches what you have in mind for your language. This isn't so much a criticism as a non-controversial fact.

Up until recently it would have been non-controversial. With Graal/Truffle it's possible to run many languages on the JVM while bypassing bytecode entirely. Those language implementations usually run as fast as, or much faster than, other runtimes. That includes LLVM bitcode doing manual memory management, which can run at least some benchmarks about as fast as gcc would.

First you said 'not correct for years', now you're talking about stuff that's still in development and is not really the JVM anyone actually uses. I don't think you're making a good faith argument (or simply didn't click through Lattner's bit the first time around) - you just don't want to be wrong on the internet. That's fine but it's not an interesting conversation.

D'oh! My bad, apologies. But it's the dual of that argument and similarly orthogonal. A future JVM may become some other, more general VM, but that doesn't change the design constraints of Kotlin, its relationship to Java, and how that relationship is fundamentally different from Swift's relationship to Objective-C. Kotlin wasn't designed to run on Truffle or with Truffle in mind.

Right, but there are a lot of unrelated, orthogonal and potentially true things. I don't understand the point, if the topic is 'things that influenced the design of Kotlin or Swift and stuff we say about them'.

Apache Groovy 1.x was dynamically typed only and focused on scripting, glue code for the JVM, and syntactic compatibility with Java, so it could be said to "overlay Java". Since Groovy 2.0, however, it changed tack: static typing targeted at Android development, not keeping up with syntax changes in Java such as lambdas from Java 8, and trying to compete with Java instead of complementing it. It no longer "overlays Java", and in fact it lost out to Kotlin and Scala in its effort to compete with Java. Groovy should have stuck to its knitting instead of changing direction every few years.

They could have gone hardcore and said any non-primitive, unannotated platform type is nullable, but that would have made interop really ugly. And all of the null checks would have muddied up code and added (admittedly minimal) runtime costs.

I want something in between Kotlin and Scala. I want Option as a real type that is treated as a max-single-element collection and can be flattened with the same APIs as other collections. But I want no runtime penalty. People in Rust are so lucky to have zero-cost abstractions for these things. I suppose I'd need better compile-time support (even more than Scala macros) and whole-program optimization (i.e. cross-JAR) to get a zero-cost Option on the JVM. Things like Scala Native use LLVM, where surely Option operations are inlined or otherwise optimized away.
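For the ergonomic half of this wish (not the zero-cost half), Kotlin's stdlib already offers the "max-single-element collection" view via `listOfNotNull` and `mapNotNull`. A sketch; note this does allocate a list, so it is not the zero-cost version Rust gets:

```kotlin
// Treat a nullable value as a 0-or-1 element collection, so the ordinary
// collection APIs (map, flatMap, fold, ...) apply uniformly.
fun doubleIfPresent(x: Int?): List<Int> = listOfNotNull(x).map { it * 2 }

// Flatten a collection of nullables with the same APIs.
fun flattenNullables(xs: List<Int?>): List<Int> = xs.mapNotNull { it }

fun main() {
    println(doubleIfPresent(5))                    // [10]
    println(doubleIfPresent(null))                 // []
    println(flattenNullables(listOf(1, null, 3)))  // [1, 3]
}
```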

I wrote a zero-allocation `Option`-style (monadic) data structure for Scala a while ago [1]. Unlike all the previous attempts, it supports the distinction between `None`, `Some(None)`, `Some(null)`, `Some(Some(None))`, etc., which is what allows it to remain monadic. The surprise is that it does not use `null` as the `None` value. The downside is that `toString()` is altered: `Some("hello").toString()` returns `hello`, not `Some(hello)`.

There was an experiment to use it as a replacement for the implementation of `scala.Option` in the dotty compiler code base [2], but it is inconclusive so far; it should be tried directly in the collections library.

Oh, primitive types do get one level of boxing (on the JVM): a `Double` becomes a `java.lang.Double`. But it doesn't become a `Some` with a `java.lang.Double` inside, so we still save one allocation. It is not possible to remove that box without compiler support, and even then not in all cases (return values, for example), because `Double` contains a finite number of values (2^64) while `Option[Double]` has 2^64 + 1 values.

And in that implementation, `None.toString == "None"`, `Some(None).toString == "Some(None)"`, etc. Although that could be changed.

Seems like one could use the wasted digit of signed numbers to store options rather than have an asymmetric range (two's complement) or positive/negative zero... i.e., have a bit string that indicates none/some?

It is monadic if the monad laws about it hold. `null` has nothing to do with it. In the context of my unboxed option, `null` is like `5`: one of many primitive values, uninterpreted by the abstraction, and therefore it does not break the abstraction.

If you think my unboxed option is not monadic, please provide a counter-example to one of the monad laws.
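The laws in question can be spot-checked mechanically. Below is a sketch for Kotlin's nullable types with a flatMap-style `bind` (a hypothetical helper, not stdlib, and a different structure from the commenter's unboxed Scala option, since Kotlin cannot represent `Some(null)` or nesting):

```kotlin
// Nullable-type bind: propagate null, otherwise apply f.
fun <A : Any, B : Any> A?.bind(f: (A) -> B?): B? = if (this == null) null else f(this)

fun checkLaws(): Boolean {
    val f: (Int) -> Int? = { if (it > 0) it * 2 else null }
    val g: (Int) -> Int? = { it + 1 }
    val m: Int? = 5

    // Left identity: unit(a).bind(f) == f(a)
    val leftIdentity = (5 as Int?).bind(f) == f(5)
    // Right identity: m.bind(unit) == m
    val rightIdentity = m.bind<Int, Int> { it } == m
    // Associativity: (m.bind(f)).bind(g) == m.bind { a -> f(a).bind(g) }
    val associativity = m.bind(f).bind(g) == m.bind { a -> f(a).bind(g) }
    return leftIdentity && rightIdentity && associativity
}

fun main() = println(checkLaws())
```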

> Instead you just get a crash if it is null which is worse in my opinion

Only in the same instances you would in Java-to-Java, so the ergonomics aren't improved or reduced. And really, this only helps on returns anyway. Marking every Java param nullable gets you nothing if the implementer didn't handle it well.

> This already happens automatically for parameters to a Kotlin function; check out the Kotlin Intrinsics checks.

Yup, and I don't like it. So even public Kotlin-to-Kotlin calls suffer. I haven't checked in a while, but I would like an option to use annotations only and skip those top-of-method checks.

> Only in the same instances you would in Java-to-Java. So the ergonomics aren't improved or reduced.

This isn't correct. For example, say you have a String that was passed as a param and you're shoving it into a JSONObject. In Java, if it's null, nothing bad happens (beyond shoving a null value into your JSON). In Kotlin, shoving a null into a JSONObject won't crash in the put operation; you'd get the crash as soon as the method is called and it runs the Intrinsics check.
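A sketch of that timing difference. Kotlin compiles an `Intrinsics.checkNotNullParameter` call into the prologue of public functions with non-null parameters, so a null that sneaks past the type system fails when the method is called, not later where the value is used. `uncheckedNull` here is a contrived helper (not a real API) that exploits type erasure to smuggle the null through, standing in for a null arriving through Java interop:

```kotlin
// Exploits erasure: no runtime check happens at this cast.
@Suppress("UNCHECKED_CAST")
fun <T> uncheckedNull(): T = null as T

fun putIntoJson(key: String, value: String): String {
    // The compiler-inserted null check on `value` runs before this body.
    return "{\"$key\": \"$value\"}"
}

fun callWithNull(): String = try {
    putIntoJson("k", uncheckedNull())
    "no exception"
} catch (e: Exception) {
    // Thrown by the entry check, not by any use of `value` inside the body.
    "failed on entry"
}

fun main() = println(callWithNull())
```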

I've measured the null checks, and other people have too... the runtime performance cost is negligible, so disabling them would get you nothing useful. If it did, Kotlin would have made the compiler flag that disables them (which exists) public... but until someone comes up with an actual reason other than "I feel like it's better", that won't change.

I have reasons. For example, I need as few instructions as possible between a JNI call and the next line (I am setting up a SingleStep JVMTI callback as part of my work on a fuzzer), and those intrinsics don't help. It's gotten to the point on my advanced JVM projects where I have to drop into Java all the time to avoid this stuff, get the proper MethodHandle.invokeExact semantics, etc.

I'm tired of being surprised by the bytecode that Kotlin generates. You could argue javac has some magic too, but not nearly as much, and it's pretty well spelled out, whereas you won't find docs for all of these intrinsics.

It's even worse when things just get dismissed as "I feel like it's better", and then when you provide reasons, people say they're not normal. Languages (and their proponents) should not treat the end user (i.e. the dev) like they are stupid, or should at least give them the option to turn things off. The trade-off of hiding this stuff is not worth it... if anything, it's the language people doing the "I feel like it's better".

Null checks are effectively free on the JVM. One advantage of using null-as-real-null that is rarely discussed by functional programming fans is that null is well supported by the hardware: just keep the bottom pages unmapped, and an attempt to access a null pointer triggers a hardware fault. If you don't do it that way, then you have to do the null checks with branches in software, which bloats your code footprint and, now with Spectre, perhaps requires slower code too.

As for your point with regard to options in Scala, there were some libraries that used marker values for none (e.g. NaN for double) and then used value classes to provide what is basically a zero-alloc option type. I've done my own internal versions for similar purposes. They work well, with the caveat that value classes in Scala have their own set of issues.

It's not that they normalize them per se, it's just that NaN bit patterns are mostly unspecified. I think the JVM spec (or the JLS, I forget) implies that you can't really rely on NaN bit patterns being retained. Double.longBitsToDouble mentions that some processors might not return a double with the same bit pattern as was passed in.
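The marker-value approach can be sketched in Kotlin with an inline value class. Storing the raw bits in a `Long`, rather than passing a special NaN through `Double`-typed code, sidesteps the bit-pattern-preservation concern above, though a real double whose raw bits collide with the marker remains the inherent caveat. `OptDouble` and its marker constant are hypothetical names for illustration:

```kotlin
// Zero-allocation optional double: a single Long, no wrapper object.
@JvmInline
value class OptDouble(private val bits: Long) {
    companion object {
        // An arbitrary quiet-NaN payload reserved to mean "none".
        private const val NONE_BITS = 0x7FF8_DEAD_BEEF_0001L
        val NONE = OptDouble(NONE_BITS)
        fun of(d: Double) = OptDouble(java.lang.Double.doubleToRawLongBits(d))
    }
    val isNone: Boolean get() = bits == NONE_BITS
    fun orElse(default: Double): Double =
        if (isNone) default else java.lang.Double.longBitsToDouble(bits)
}

fun main() {
    println(OptDouble.of(3.5).orElse(0.0))  // 3.5
    println(OptDouble.NONE.orElse(0.0))     // 0.0
}
```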

I know you have gotten a TON of replies demonstrating better ways to check whether a value is null or not.
I believe they are all missing the point. In a language with monads, Options should not be part of a function's signature unless they are important to the logic of that function. Your given example should look like:

func1(a: i32, z: i32) -> i32 {
    return a + z
}

I know it's not one-to-one, but the idea is there. You would then use a tiny bit of glue code to combine everything to get what you want. For example, if you have two nullables as parameters, you use liftM2; if you have one nullable, you use liftM; or perhaps you just want to reduce a structure, so you reach for foldM, etc. If your monadic code has to constantly figure out what monad it is in, you aren't buying yourself much, and I can see why you wouldn't find monads valuable then. And if a function explicitly needs an Option, then it must be important and must be taken into consideration by the caller. I just don't think callers should be forced to consider Options where they aren't needed.

I wanted to mention that I often see similar statements with almost identical code comparisons to the one you made. I believe it has to do with retraining oneself to think functionally instead of imperatively. I'm curious about your background.

For good measure, here's another example in Haskell:

func1 a b = fromMaybe 0 (liftM2 (+) a b)

And an uglier, but fun, point-free version:

func1 = (fromMaybe 0 .) . liftM2 (+)

I believe real-world examples would hold up better, because the glue code would only appear where it is needed.
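The same glue-code idea can be rendered in Kotlin, with nullable types standing in for Maybe (`liftM2` here is a hypothetical helper, not stdlib): the core function stays oblivious to nullability, and a single combinator adapts it where nullable inputs appear.

```kotlin
// The core logic knows nothing about nullability.
fun func1(a: Int, z: Int): Int = a + z

// Lift a binary function to accept nullable arguments, yielding null
// if either argument is null (the liftM2 of the Maybe monad).
fun <A : Any, B : Any, R : Any> liftM2(f: (A, B) -> R): (A?, B?) -> R? =
    { a, b -> if (a != null && b != null) f(a, b) else null }

fun main() {
    val lifted = liftM2(::func1)
    println(lifted(2, 3) ?: 0)     // 5
    println(lifted(2, null) ?: 0)  // 0
}
```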

At least for Android Kotlin programming, things are a lot better with the release of API level 27 and the support libraries. They have placed the nullability annotations all over the place. Nullability annotations have good use cases in Java too.

I would think optional types, with everything else being non-nullable, are the way forward for Java too.

Well, the null issues in Java libraries have always been there. I don't really think it's a fault of Kotlin and its tooling that those bugs aren't caught by the compiler, as all unannotated Java object types come through to Kotlin as platform types rather than checked optionals.

Maybe in time most of the necessary open source projects will be pure Kotlin ones, but for now I believe annotating Java code with null constraints is certainly a boon for everyone and should be encouraged.

I agree that the tooling is getting better. Some of the popular libraries are still figuring out the right patterns to apply to make them more "Kotlin friendly". Tools like the Checker Framework [1], JSR 308 [2], and compiler plugins like Traute [3] can also help with the safety issues on the Java side. Kotlin is beginning to understand these more, and some of the basic annotations are supported, but the tooling is only going to improve with all of the backing that Kotlin currently has.

Mostly true under the hood; optional value types are tagged unions (I didn't know this before reading this article and checking for myself), but optional reference types are still, as you'd expect, nullable pointers.[1]

The Java compiler might not know whether a variable is safe to pass as null, but that is not unknowable. On every thread similar to this one, I have plugged Coverity as a tool that can and will flag spots like this. That is, it won't just say "it is possible this will be a null pointer exception"; instead it will say "setting this value to null will lead to this dereference of a null". It can look magical sometimes, because it will trace pretty deeply into your code.

Which is just my way of saying over and over again that tooling can improve that does not require a complete rewrite of your software.

One major difference is that Kotlin can tell you about nullability problems by simply analysing the method body, without going any further down the execution tree. This is a lot more reliable, and fast enough for real-time IDE error reporting, compared to doing symbolic execution and diving into every method call to figure out all the possible paths.
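That body-local analysis in miniature: Kotlin's compiler smart-casts `s` after the null check using nothing but this function's own control flow, with no symbolic execution into callees needed.

```kotlin
fun describe(s: String?): Int {
    if (s == null) return 0
    // s is smart-cast to String here; .length needs no `!!` or `?.`
    return s.length
}

fun main() {
    println(describe(null))     // 0
    println(describe("hello"))  // 5
}
```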

Same with Rust: it can detect multithreading issues from the surface layer, whereas in other languages, detecting data races and contention requires serious tracing analysis as well as a lot of runtime profiling.

Ah, please don't take my point as dismissing the new tools either. Indeed, I would hope that synergy between all these tools leads to better tools for us all. I just have a pet peeve about the attitude of jumping rather quickly to "rewrite it all in a new language" without realizing you can bring a lot of this into existing codebases.

The issue on the Java side is more one of compatibility than technical feasibility. IDEA and Eclipse have been able to infer potential null issues for a while now, but it's difficult to retrofit those features into a compiler that must also accept code written 15 years ago.

The claim seems to be that the tool will always tell you definitively whether a null dereference will happen, and never say "maybe". The Halting Problem is trivially reducible to this (just put an intentional null deref in front of each halt).

Not necessarily the same point being made. At least, they are subtly different, to my understanding.

It can tell you that, "if this line is reached, and it was called from this path (with evidence on how this could happen), it will dereference a null." It is not saying that "running this program will guarantee give you a null dereference."

The halting problem is more in line with "this program will terminate on all possible inputs", which is more expansive. It is trivial to show that some programs won't terminate on a particular input. The question is whether you can do it for all inputs, no?

Rice's theorem is a bit more direct here: C is Turing-complete, some C programs dereference null pointers, and some don't. Therefore, the question of whether an arbitrary C program dereferences a null pointer is in general undecidable.

But I don't think the OP was claiming it was possible to do this perfectly, just that it's possible to write very useful tools. You will always have either false positives or false negatives (or, I suppose, inputs where the tool just hangs).

Turing completeness is the bane of static analysis, but that doesn't make it a fruitless endeavor.

I wouldn't say that Rice's theorem is the bane of static analysis. It is what makes the field interesting. If the problems were not undecidable, then we wouldn't have to do so many interesting and challenging things to make working tools.

I think you may be misapplying the Halting Problem. The problem simply asks whether it can be determined that a program, given a set of inputs, will eventually halt. Detecting an attempt to use a null pointer can be deterministic at compile time, but it's not telling you whether your program will halt.

A tool that can reliably tell whether a given program will hit a null dereference can also trivially be repurposed to solve the Halting Problem. Thus, no such perfect deref tool can exist. The best you can claim is that there's a tool with few false positives and false negatives. But the original post seems to claim that there's a perfect one.

No, it can't. You are just playing around with a problem you don't understand (null checks have nothing to do with the halting problem; it's always possible to know whether a value will never be null in Java at compile time as long as you have all the source code and no dynamic libraries loaded at runtime)... unless you can actually show a proof of that, which would be really interesting to see.

The halting problem is often used as a way to stick one's head in the sand. The halting problem, IMO, is fundamentally uninteresting. A question that's almost as useful, and is answerable, is whether a program *might* not halt. Using your mapping to nullability, this corresponds to asking whether a pointer *might* be null. That's all these tools are trying to do, and the halting problem doesn't get in the way.

While the halting problem can be pretty important in some cases, it's not very relevant in most day to day code. If you're not sure if your code will ever terminate, that usually means that you're doing something wrong.

So sure, sometimes Coverity will fail because of the halting problem, and in those cases it can give an appropriate error message. Most of the time, though, it'll work just fine and be a very useful tool.

After coding enough Crystal, which instead of optionals has raw and anonymous sum types, I can say this: I don't see the point of a type system where nil is a special case. I don't want optionals; I want nil to be a totally separate type.

Remove part of your sentence, and I think you've answered your own question.

> How do you know it's "the problem"? [...] because this is a Java interop call.

If you're making platform calls, you know you have to deal with nulls (or at least, you should, in my opinion). So you declare returns nullable and avoid passing nulls unless that's documented to be okay.

The problem isn't with `foo`, it's with how Kotlin's Java interop treats `just`.

Because Java doesn't have a language-level concept of non-nullability, Kotlin's interpretation of the type signature for `just` must accept nulls (sure enough, non-nullability is enforced at run time inside RxJava). Kotlin trains you to expect nullability to be a compile-time constraint, and it has borderline frictionless interop with Java. Put the two together, and it's very easy to walk into a runtime-error trap.
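A pure-Kotlin sketch of that trap. `just` below is a stand-in for a Java API such as RxJava's `Observable.just`: through interop its parameter surfaces as a platform type, so the Kotlin compiler accepts a null, while the library enforces non-null at run time (the function bodies here are illustrative, not RxJava's actual code):

```kotlin
// Stand-in for a Java method whose parameter is a platform type to Kotlin:
// the signature accepts null, the body rejects it at run time.
fun <T : Any> just(item: T?): T =
    item ?: throw NullPointerException("item is null")

fun callJust(foo: String?): String = try {
    just(foo)
} catch (e: NullPointerException) {
    "runtime trap: ${e.message}"
}

fun main() {
    println(callJust("bar"))  // bar
    println(callJust(null))   // runtime trap: item is null
}
```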

I think the point is different. The author is saying Java interop is fraught with runtime nullability issues that can't be caught at compile time for Java libs without annotations. It has nothing to do with how it's set up in Kotlin. The author says that with nullable annotations on the Java side, it'd bark at your first example too, which is the real point.

The author introduces his example with "Let's take a look at a very short Kotlin + RxJava sample." But IMHO his sample describes a Java-only problem (and solution). He could have written a quite similar article without mentioning Kotlin at all.

Assume that there is a valid reason for the "foo" to be allowed as null, but that you wanted to guard against allowing calls to Observable.just to take in a null.

For example, "foo" is a variable at your boundary between the user and the main logic of the system, and the Observable is in the heart of your code and you expect that it should never be passed nulls.

Now, to your point, you could just make sure that you filter the "nullable" foo through a non-nullable "bar." Which is ultimately what you will do.

The article's point is that if the Observable part had been written in Kotlin, non-null would likely have been the default. Adding a marker to say "nullable" is how you do it in Kotlin. In Java, it is the opposite: you have to add a marker saying non-nullable.
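That default-flip, sketched in Kotlin: non-null is the default, and nullability is the marker you add (`?`), the mirror image of Java's `@NonNull`-style annotations. The function names below are hypothetical, echoing the boundary/heart-of-the-system framing above:

```kotlin
fun heartOfSystem(value: String): Int {   // non-null by default
    return value.length
}

fun boundary(userInput: String?): Int {   // nullable must be marked
    // The compiler forces the null to be handled before it crosses inward.
    val safe = userInput ?: return -1
    return heartOfSystem(safe)
}

fun main() {
    println(boundary("hello"))  // 5
    println(boundary(null))     // -1
}
```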

Null pointers can be encoded as an option type [0] (that's basically what languages like Swift or Rust do: they eschew nullable pointers and wrap non-nullable pointers in an option type instead), but raw nullable pointers fall short of that encoding because

1. you don't have the attendant type safety of knowing that some pointers can't be null

2. having the compiler require a nullity check before every pointer access would be horrendously unwieldy, since every pointer could be null

[0] which can then be compiled back to a regular nullable pointer at runtime
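Kotlin's nullable references illustrate both halves of [0]: `String?` is semantically an option over a non-null reference, and it compiles to a single, possibly-null JVM reference with no wrapper allocation, while keeping the type safety and forced checks that raw nullable pointers lack.

```kotlin
// `String?` behaves like Option<String>: the compiler forces handling of
// the "none" case, yet the runtime representation is just a nullable ref.
fun length(s: String?): Int = s?.length ?: -1

fun main() {
    println(length("abc"))  // 3
    println(length(null))   // -1
}
```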