Scala - a Roadmap

There is much controversy surrounding SIP 18. I believe at least part of it is because people don't have a clear idea where Scala is going. So people are making up crazy reasons, like that the primary purpose of SIP 18 is to pander to naysayers.

Let me first say something about that:

First, the criticism of Scala that's most vexing for me is when people say it's like C++, or that it's a huge language that includes everything including the kitchen sink. I'm hurt by the criticism because I believe it describes Scala as the exact opposite of what I planned it to be. I have always tried to make Scala a very powerful but at the same time beautifully simple language, by trying to find unifications of formerly disparate concepts. Class hierarchies = algebraic types, functions = objects, components = objects, and so on.

So I believe the criticisms are overall very unfair. But they do contain a grain of truth, and that makes them even more vexing. While Scala has a simple and consistent core, some of its more specialized features are not yet as unified with the rest as they could be. My ambition for the next 2-4 years is that we can find further simplifications and unifications and arrive at a stage where Scala is so obviously compact in its design that any accusations of it being a complex language would be met with incredulity. That will be the best counter-argument to the naysayers. But much more importantly, it will be a big help for the people writing advanced software systems in Scala. Their job will be easier because they will work with fewer but more powerful concepts.

If we manage to achieve these simplifications, that would be a good basis for a Scala 3. Right now, all of this is very tentative. Scala 3 does not have an arrival date, and it's not even certain that it will ever arrive. But to give you an idea of what I will be working on, here are some potential simplifications.

- The easiest win is probably XML literals. Seemed a great idea at the time; now it sticks out like a sore thumb. I believe with the new string interpolation scheme we will be able to put all of XML processing in the libraries, which should be a big win. It also means we could provide swappable alternatives to the current XML system such as Anti-XML or others.
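
To make that concrete, here is a rough sketch of how a library-level interpolator could look. The `xml` name and its string-returning behavior are purely illustrative (not an actual scala-xml API); a real library would parse the result into an XML tree and could be swapped out for Anti-XML or others.

```scala
// Hypothetical `xml` interpolator, defined entirely in library code.
// It just splices the interpolated values into the markup text here;
// a real implementation would build a proper XML tree instead.
object XmlInterp {
  implicit class XmlHelper(val sc: StringContext) extends AnyVal {
    def xml(args: Any*): String =
      sc.parts.zipAll(args.map(_.toString), "", "").map { case (p, a) => p + a }.mkString
  }
}

import XmlInterp._
val name = "world"
val node = xml"<greeting>$name</greeting>"
// node == "<greeting>world</greeting>"
```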

- The type system. Ideally, Scala's types will be built from just traits, mixin composition, refinements, and paths, and nothing else. That's the true core of Scala as is captured in our dependent object types formalism. We'll throw in classes for Java compatibility. We still have to make this into a practical programming language compatible with what Scala currently is. The potential breakthrough idea here is to unify type parameters and abstract type members. The idea would be to treat the following two types as equivalent

trait Seq[type Elem] and trait Seq { type Elem }

The definition

trait Seq[Elem]

could still be kept and be interpreted as a type that has an inaccessible member Elem, similar to the distinction between

class C(x: T) and class C(val x: T)

that we have now. The big simplifications are then:

(1) Any type can be written without its parameters.
(2) Any type that has abstract type members can be retroactively parameterized.
(3) Type parameters can be unified by name.
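
Points (1) and (2) can already be approximated in today's Scala with abstract type members and refinements; a small illustrative sketch (the trait and method names are made up):

```scala
// (1) a type written "without its parameters": Elem is simply left abstract
trait MySeq { type Elem; def elems: List[Elem] }

// (2) retroactive "parameterization" via a refinement that fixes the member
def sum(s: MySeq { type Elem = Int }): Int = s.elems.sum

val ints = new MySeq { type Elem = Int; def elems = List(1, 2, 3) }
// sum(ints) == 6
```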

To illustrate (3), right now to create a synchronized hashmap, one has to write:

new HashMap[String, List[Int]] with SynchronizedMap[String, List[Int]]

That's one of the thankfully few aspects where current Scala violates the DRY principle. In the future you'd be able to write

new HashMap[String, List[Int]] with SynchronizedMap

because SynchronizedMap could refer to the Key and Value type parameters in Map. Or you could write

new HashMap[Key = String, Value = List[Int]] with SynchronizedMap

to make it even clearer. The two formulations would be equivalent.

Now if we do that, then we have suddenly gained the essential functionality of higher-kinded types and existential types for free! A higher-kinded type is simply a type where some parameters are left uninstantiated. And an existential type is the same! Now clearly, something must get lost in a scheme that unifies higher-kinded and existential types by eliminating both. The main thing that does get lost is early checking of kind-correctness. Nobody will complain that you have left out type parameters of a type, because the result will be legal. At the latest, you will get an error when you try to instantiate a value of the problematic type. So type-checking will be delayed. Everything will still be done at compile-time. But some of the checks that used to raise errors at the declaration site will now raise errors at the use site. In my mind that could be a price worth paying for the great overall simplification and gain in expressive power. The other thing that gets lost are the more complicated forms of existential types that cannot be expressed as a (composition of) types with uninstantiated type members.
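
For the existential half: today's `Seq[_]` corresponds to a type whose member is simply never fixed. An illustrative sketch with a made-up trait:

```scala
// existential use today: the element type is unknown but irrelevant
def len1(xs: Seq[_]): Int = xs.length

// member-based equivalent: Elem stays abstract and is never mentioned
trait ESeq { type Elem; def elems: List[Elem] }
def len2(s: ESeq): Int = s.elems.length

val words = new ESeq { type Elem = String; def elems = List("a", "b") }
// len2(words) == 2
```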

One particularly amusing twist is that this could in one fell swoop eliminate what I consider the worst part of the Scala compiler. It turns out that the internal representation of higher-kinded types in the Scala compiler is the same as the internal representation of raw types in Java (there are good reasons for both representation choices). But raw types should map to existentials, not to higher-kinded types. We therefore need to map Java raw types to Scala existential types. The code that does this is probably the most fragile and intricate part of the Scala compiler. There's basically no good way to do it without either forgetting some transformations or accidentally tripping cyclic reference errors. But with the projected simplifications we would get the following:

- we could eliminate two classes of types in Scala, so that only a simple core remains,
- we could gain expressive power through unification of concepts,
- we could avoid unnecessary repetition of parameters in mixin compositions and extends clauses,
- we could strengthen the analogies between type parameters and value parameters.

Other more narrowly scoped ideas for the type system are to introduce least upper bounds and greatest lower bounds of types as type constructors. This would avoid the explosion of computed lub types that we sometimes see in codebases today. And there are some ideas to make type inference more powerful by making it constraint based.

One big unknown right now is how to ensure a high degree of backwards compatibility or, alternatively, provide migration strategies. It's clear that we have to do this if we want this to fly. It will require an implementation and lots of experimentation. Therefore, I don't expect any of these things to materialize before a timeframe of 2-4 years.

So, what does this have to do with SIP 18? Two things:

First, while we might be able to remove complexities in the definition of the Scala language, it's not so clear that we can remove complexities in the code that people write. The curse of a very powerful and regular language is that it provides no barriers against over-abstraction. And this is a big problem for people working in teams where not everyone is an expert Scala programmer. Hence the idea to put in an import concept that does not prevent anything but forces people to be explicit about some of the more powerful tools that they use. I am certain there is no way we can let macros and dynamic types into the language without such a provision.

Second, the discussion here shows that complex existentials might actually be something we want to remove from a Scala 3. And higher-kinded types might undergo some (hopefully smallish) changes to syntax and typing rules. So I think it is prudent to make people flag these two constructs now with explicit imports, because, unlike for the rest of the language, we do not want to project that these two concepts will be maintained as they are forever. If you are willing to keep your code up to date, no reason to shy away from them. But if you want a codebase that will run unchanged 5 years from now, maybe you should think before using complex existentials or higher-kinded types. Of course the docs for these two feature flags will contain a discussion of these aspects, so people can make an informed choice for themselves.

I know that despite these explanations SIP 18 will still be contentious. But let's keep the discussion of SIP 18 on scala-sips. Of course I'd be happy to see responses to all other parts of this mail in this thread.

...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you are writing a macro - want me to import language.macroDefs?" adding anything? Why is it being any more explicit about whether macros are being used in the program than the program, um, containing macros?

Having said all that; very interesting update on the direction of scala

I really like this proposed unification; the way you present it suggests that working with generic types will become much easier to think about and nicer to write down. Simplifying the compiler is more than just sugar on top: I take it as an omen that this idea is a very powerful one. And if you are right that most current code will remain valid, this means that the current system (while a bit convoluted) is actually not that far from being ideal ;-)

And one thing, probably a little off topic: in my experience most communities formed around something always have more people who just use/enjoy the thing and don't discuss it much in public. It works, it's fun to use, I'm teaching new people in my company to use it, and it's all ok! It's always easier to make it look like a language or technology is responsible for your failures, but we don't do that.

Scala has some quite usual problems for a growing language, but it is really very strong and beautiful, and the roadmap looks very much in Scala's spirit.

> ...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you are writing a macro - want me to import language.macroDefs?" adding anything? Why is it being any more explicit about whether macros are being used in the program than the program, um, containing macros?

Try turning the chessboard around: suppose you are a non-advanced user of Scala and you have heard scary stories about macros, and about Scala in general. You might want to start learning the language without having to care about advanced features like macros. In that perspective, reducing the minimal core of the language is interesting. Sure, if you already master current Scala, adding a new feature is exciting for you, and you will avoid most of the dangerous orthogonal feature alchemies, but Scala was probably meant to be safe (typesafe, and generally safe to use for projects). It would be a big loss for the language if it were to stay safe only for the already advanced users.

What would you call that philosophy? Incremental design?


> - The type system. Ideally, Scala's types will be built from just
> traits, mixin composition, refinements, and paths, and nothing else.
> That's the true core of Scala as is captured in our dependent object
> types formalism. We'll throw in classes for Java compatibility. We
> still have to make this into a practical programming language
> compatible with what Scala currently is. The potential breakthrough
> idea here is to unify type parameters and abstract type members. The
> idea would be to treat the following two types as equivalent
>
> trait Seq[type Elem] and trait Seq { type Elem }

I don't see how this works out from a theoretical standpoint (and thus, from a practical standpoint). Abstract type members are a generalization of existential types, as you point out. You're proposing to unify existential types with universal types. What you're literally saying is that universal quantification is just existential quantification in disguise. Unless I missed a huge chunk of predicate calculus, I don't think this is true.

Existential types may unify with let-bound polymorphism in the way you suggest, but not all of Scala's universal quantification is let-bound according to the mapping of let given in your example (a class/trait type declaration). Thinking along these lines, one practical impact of this theoretical discord comes to light immediately:

def identity[A](a: A): A = a

So…is this a universal type? There aren't any classes here onto which you could inject type members (maybe Function1?), so I don't see a way to carry your unification strategy through every case. In other words, attempting to implement this strategy would make the language less consistent, rather than more, since it would then have two distinct ways of encoding universal types, and that's assuming the encoding works at all at the class level!

There just seem to be a lot of holes here for which I can't see a resolution, generally stemming from the duality (not isomorphism) of existential and universal quantification.

On Tue, Mar 20, 2012 at 3:38 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
>> - The type system. Ideally, Scala's types will be built from just
>> traits, mixin composition, refinements, and paths, and nothing else.
>> That's the true core of Scala as is captured in our dependent object
>> types formalism. We'll throw in classes for Java compatibility. We
>> still have to make this into a practical programming language
>> compatible with what Scala currently is. The potential breakthrough
>> idea here is to unify type parameters and abstract type members. The
>> idea would be to treat the following two types as equivalent
>>
>> trait Seq[type Elem] and trait Seq { type Elem }
>
> I don't see how this works out from a theoretical standpoint (and thus,
> from a practical standpoint). Abstract type members are a generalization
> of existential types, as you point out.

They are considerably more powerful since they can be used as input types as well as output types. What you are arguing is that the basis of any good type system should be F_omega_sub, or something close. I want to explore something completely different. But let me finish the research before going into details.

> def identity[A](a: A): A = a
>
> So…is this a universal type? There aren't any classes here onto which you
> could inject type members (maybe Function1?), so I don't see a way to carry
> your unification strategy through every case.

Nothing new here. Polymorphic types for methods already exist and will be maintained. In a calculus we could model identity as a member of a parameterized class, but in a practical programming language things will stay as they are.

I think these changes are hard to sell to people and will put adoption at risk: "we will have a new major release in 2-4 years" when people are already scared of point releases? People won't use Scala _now_ if they get told that there will be a huge new version with probably breaking changes ahead.

A value proposition which would make many people more willing to move to Scala 3:

The changes you mentioned

PLUS:

Simplifications to reduce the complexity of signatures in the collection space.

Better/reified Generics. Seeing Oracle slides mentioning the possibility of reified Generics in a future version of the JVM/Java I think Scala can push this forward. The main thing in Generics I care about is not reflection stuff, but the whole overloading/overriding/subclassing problem. Other platforms don't use the erasure scheme either.

Having better default/named arguments so that overloading can be put to rest completely. (Makes reflection much simpler.)

No nullable reference types by default.

Further unification of AnyVal/AnyRef.

No any2StringAdd.

No unsafe implicit conversions for primitive types.

I think having a look at the stuff Kotlin (reified generics) or Ceylon (union types instead of nullable reference types) are trying to do makes sense.

I don't think the feature flags are the right way to pull off a migration to a future version. Imho it makes more sense to not annoy people about existing stuff but offer them a way to opt in to "future implementations" on a non-global basis. E.g. the thing Adrian mentioned about virtpatmat.

I still think import is the wrong way and I also dislike using "language" for it. We don't have "utilities" but "util", so I think "lang" is more consistent and looks more familiar to Java people.

Imho the crucial thing is to have an existing implementation before starting to make any warning noises.

In the end I think Scala 3 makes sense, but it should be done in a more continuous way, e.g. folding these changes into the next 2.x release when they become ready and declaring Scala 3 when all the changes are already in. E.g. "This is what we think is worth calling 3.0, because it is stable, mature, and all features are well-tested. There are no compatibility issues." I.e. don't repeat Python 2 -> Python 3. And don't keep people waiting forever like in Scala 2.7 -> Scala 2.8.

I think there is a scheme where I could agree to having some language pragma:
- Offer a future implementation with a pragma in version 2.X
- Make the pragma the default in 2.X+1 but allow people to revert to the old implementation
- Drop the pragma and the possibility to use the old implementation in 2.X+2

trait Seq { type Elem } is existential quantification over {X | X <: Any}: there exists a type X <: Any such that Elem = X and P(X), where P(X) are the requirements inferred through the trait for the Elem type. But for as long as we don't identify an X that satisfies the condition implied by the trait, it is a universal quantification: for all X such that X <: Any and P(X), Seq { type Elem = X } can exist. When instantiating Seq, we must provide an X satisfying P, therefore satisfying both the universal and existential quantifications as previously mentioned.

Then... you can define the existential quantification as equivalent to universal quantification when applied to a singleton. That is:

trait Seq { type Elem = A } hints either at (for all X in {A}, P(X)), the universal quantification, or at (there exists X such that P(X) and X = A), the existential version.

I don't really know what theoretical value this has, since I deliberately used different sets over which to work the quantifications, but a compiler would most probably be able to handle that, wouldn't it?

> They are considerably more powerful since they can be used as input
> types as well as output types. What you are arguing is that the basis
> of any good type system should be F_omega_sub, or something close. I
> want to explore something completely different. But let me finish the
> research before going into details.

Sounds interesting! I wish you luck, but I find it hard to imagine how you'll be able to escape the lambda cube.

Type members are more powerful than existentials (as I said, they are a generalization). That certainly makes them sufficiently powerful to simulate universally quantified types under certain circumstances (e.g. probably let bindings), but I'm not convinced it allows them to represent universals in all cases.

> Nothing new here. Polymorphic types for methods already exist and will
> be maintained. In a calculus we could model identity as a member of a
> parameterized class, but in a practical programming language things
> will stay as they are.

I think I'm going to withhold judgement until I see the results of your research. As I said, it sounds very interesting, but I can't quite see it from where I'm standing.

I just wanted to share another idea:
- As of today most coders are used to these advanced features, and I personally use some of them very often.

It could be possible to have the "-language:all" flag enabled by default
(so to speak) in version 2.X, and when switching to 3.X the flag would
no longer be added by default. This could make code compatible without
too much hassle for people using these features today in 2.X.

Of course I suppose it should also be possible for the compiler to add
the given import when necessary by analysing the source code and
checking whether these features are used; doing so would ease the
hypothetical migration and lower frustration among advanced users.

Anyway, I think it could be worth letting Martin finish his
experimentation, and making sure this path is solid and feasible, before
adding a verbose import system to enable these features.

<simon.ochsenreit...@googlemail.com> wrote:
> Hi Martin,
>
> Having some cleaned-up Scala 3 would be huge, BUT:
>
> I think these changes are hard to sell to people and will put adoption at
> risk: "we will have a new major release in 2-4 years" when people are
> already scared from point releases? People won't use Scala _now_ if they
> get told that there will be a huge new version with probably breaking
> changes ahead.
>
> A value proposition which would make many people more willing to move to
> Scala 3:
>

> possibility of reified Generics in a future version of the JVM/Java I think
> Scala can push this forward. The main thing in Generics I care about is not
> reflection stuff, but the whole overloading/overriding/subclassing problem.

Sorry, I know this is off topic, but I just have to jump in here.

Reified types are EVIL, and I dearly hope that they never become part
of Scala or the JVM. The reason that reified types are evil is that
type reification has only two uses: first, it would allow you to
reflectively inspect parameterized type values, which you shouldn't be
doing anyway; or, second, overloading on parameterized types, which is
unnecessary since overloading is a purely syntactic concern that can
be avoided by simply using different names. The only condition under
which reification would actually be helpful is at serialization
boundaries - but here, you *actually* gain nothing from reification,
because you're already forced to perform a cast, and if you've got
type information reified into the serialization format, you can just
go ahead and dispatch on that.

Please, please stop spreading the incorrect notion that reified types
are an appropriate solution to any important problems. Writing
programs in the presence of erasure *forces* you to avoid excessive
coupling to runtime type knowledge, which is required if you actually
want to write reusable code.


I was finally able to puzzle through what it is you lose if you treat type members as universals: higher-rank types. Let-bound polymorphism is satisfied, and this is trivially easy to show due to the fact that type members can be given definite instantiations. When instantiations are given, the existential type is given a definite binding and the behavior is equivalent to when a similar universal type is bound. This is why cases like Seq[A] work out just fine under this regime.

However, you have no way to encode the type forall a . a -> a. Naively, one might want to try something like this:

type Id = { type A; def apply(a: A): A }

Unfortunately, that falls over immediately:

def foo(id: Id) = id(42) // error!!

The reason this falls over is that A is not *any* type, it is *some* type. We can't just mash Int into apply and expect it to work. This is where the fundamental duality between existential and universal types rears its head. You could fix this by playing Oleg's double-negation trick, but that always ends up being extremely ugly.

Given that Scala theoretically shouldn't have higher-rank types at all, maybe this isn't a serious issue. Still, it bugs me.
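
For reference, the same failure can be reproduced with a plain trait instead of a structural type (which sidesteps reflective calls); a minimal sketch:

```scala
trait Id { type A; def apply(a: A): A }

// def broken(id: Id) = id(42)   // does not compile: 42 is not an id.A
// only once A is instantiated does the call typecheck:
def works(id: Id { type A = Int }): Int = id(42)

val intId = new Id { type A = Int; def apply(a: Int): Int = a }
// works(intId) == 42
```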

On Tue, Mar 20, 2012 at 4:02 PM, Simon Ochsenreither <simon.och...@googlemail.com> wrote:
> Hi Martin,
>
> Having some cleaned-up Scala 3 would be huge, BUT:
>
> I think these changes are hard to sell to people and will put adoption at
> risk: "we will have a new major release in 2-4 years" when people are
> already scared of point releases? People won't use Scala _now_ if they get
> told that there will be a huge new version with probably breaking changes
> ahead.

I am a bit more optimistic than you. Java has published a roadmap and some of the changes might well be breaking (I would not see how else to introduce reified types). Now we all know that roadmaps like that are tentative, and if the breakage is too serious one won't be able to do it. But I believe it's better to be clear about the directions of _work on the language_ without already being explicit about releases.

> A value proposition which would make many people more willing to move to
> Scala 3:
>
> The changes you mentioned
>
> PLUS:
>
> Simplifications to reduce the complexity of signatures in the collection
> space.

I do not see how you will be able to do this without greatly complicating the use of collections. If you want to convince me otherwise, write an alternative implementation and convince more than 10% of Scala programmers to use it. Then, and only then, will I give it a serious look.

> Better/reified Generics. Seeing Oracle slides mentioning the possibility of
> reified Generics in a future version of the JVM/Java I think Scala can push
> this forward. The main thing in Generics I care about is not reflection
> stuff, but the whole overloading/overriding/subclassing problem. Other
> platforms don't use the erasure scheme either.

Our answer to that is manifests / type tags. I am convinced we can make this work well so that no reified types are needed. Regarding other platforms: the only platform that uses reified types is .NET, and it's by no means accepted in their core developer team that it was a good idea. Haskell, SML, and OCaml use erased types just like Scala and Java. C++ templates do not count, IMO, because that's a compile-time expansion mechanism.
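
As a minimal illustration of the manifest/type-tag approach: runtime type information travels as an implicit value supplied at the call site, with no JVM-level reification. For example, with ClassTag (available since 2.10):

```scala
import scala.reflect.ClassTag

// without the ClassTag evidence, erasure would make Array creation
// impossible here; the implicit carries the element class at runtime
def pair[T: ClassTag](x: T): Array[T] = Array(x, x)

val a = pair("hi")   // the String class was available at runtime
val b = pair(3)      // likewise for Int
```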

> Having better default/named arguments so that overloading can be put to rest
> completely. (Makes reflection much simpler.)

I am very sympathetic to avoiding overloading but do not see how we can do that and maintain Java compatibility.

> No nullable reference types by default.

We'd need to add non-null types. Not convinced it is worth it because Option fulfills this role. I'm sitting on the fence on this one, but my gut feeling is it's better to improve Option.
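
For illustration, the Option discipline already confines nullability to the Java boundary, since `Option(...)` maps null to None:

```scala
// Option.apply turns a possibly-null value into a safe one
val fromJava: String = null
val safe: Option[String] = Option(fromJava)   // None
val present: Option[String] = Option("x")     // Some("x")

val display = safe.getOrElse("<missing>")
```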

> Further unification of AnyVal/AnyRef.

Will hopefully happen in 2.10. See value classes SIP.

> No any2StringAdd.

I believe once we have string interpolation in 2.10 (needs a vote, but I believe this one will be accepted), we can deprecate any2StringAdd afterwards. Maybe even deprecate in 2.10.1 if we want to go fast, otherwise in 2.11.
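
For illustration, the common case that relies on any2StringAdd today becomes an explicit interpolation:

```scala
val n = 3
// today, via any2StringAdd:  n + " apples"
// with 2.10 interpolation:
val msg = s"$n apples"
```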

> No unsafe implicit conversions for primitive types.

Well, it was a design decision of Scala to keep Java expressions as-is, and I believe it was a good one. Maybe at some point in the future we will want to revise that. But right now I prefer we keep it.

> I don't think the feature flags are the right way to pull off a migration to
> a future version. Imho it makes more sense to not annoy people about
> existing stuff but offer them a way to opt in to "future implementations" on
> a non-global basis. E.g. the thing Adrian mentioned about virtpatmat.

We can do it for the pattern matcher. There is simply no way to do it for the core type system without maintaining two different compilers at the same time.

So, your email put me on to the precise issue that arises when you try to do this, which I have now posted to the list. Basically, type members are existentials, it's just that the pack/unpack is hidden by the language (as it is with most languages that support existential quantification). It's pretty easy to see this existentiality though if you look:

type Id = { type A; def apply(a: A): A }

def foo(id: Id) = id(42) // error!

Let-bound polymorphism works just fine, since there's no difference between an instantiated universal and an instantiated existential. Higher-rank types (true universal polymorphism) do not work at all, and that's where the weakness of this approach shows up. It's possible that this may be resolved by leveraging the fact that module members are late-bound in the resolution (basically, the same trick we currently use to wrestle higher-rank types out of what is fundamentally let-bound polymorphism), but I'm not sure.

Higher-kinded types seem like the most dubious part of the proposal. I'm not even sure how it would all work out, and the theory here is generally untested waters. I'm still thinking about it though, and I look forward to seeing what Martin comes up with!

It's probably off topic for this thread, but I couldn't help but start thinking about the problem you and Martin were discussing. (FYI, I'm just an amateur who enjoys functional programming and has read TAPL/ATTAPL, so take this for what it's worth). I've enjoyed reading some of your blog posts (and your data structures talk), so I was wondering if you would give me your perspective on this interpretation:

I think if you think of Scala objects as first-class modules, then the ability to get universal quantification out of type members makes sense. Type members are far from regular existentials because they don't require pack/unpack (I think that's what TAPL called it), which can only be done inside the scope of the function, where results of the existential type can't be returned. So you could imagine a function that takes a module and returns a modified copy of that module, while treating the input module's type member in a generic (a.k.a. universally quantified) way. I have to admit this is complete hand-waving (and very much out of my depth), but this is my intuition of how you get back F-sub type behavior from first-class modules (I'm still not sure how to get the omega part).

So your identity example would be like:

type valueWithType = { type T; def value: T }

And the function would then just be valueWithType => valueWithType#T

I think what Martin is saying is that you can turn the argument list of a function into a module, and then the type parameters of the function become abstract type members of the module. Not sure how higher kinded types works into there. Any thoughts?


I think it still works because you have to translate the argument list to "apply" into a module as well... So:

type Id = { def apply[A](a:A): A }

becomes

type Id = { def apply(a: { type A; def value: A }): a.A }

And then

def foo(id: Id) = id(42)

works as long as you have a mechanism to automatically translate argument lists into modules, which was I think the gist of Martin's original idea.
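
Spelled out with a dependent method type (standard since 2.10), that wrapping could look like this; all names are illustrative:

```scala
// the "module" that stands in for apply's type parameter and argument
trait Arg { type A; def value: A }

// apply consumes the module and returns its member type
trait Id { def apply(a: Arg): a.A }
val id = new Id { def apply(a: Arg): a.A = a.value }

// at the call site, the argument list is packaged as a module whose
// member A is instantiated to Int, so the result type is Int
def foo(id: Id): Int =
  id(new Arg { type A = Int; def value = 42 })
```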

It's not really translating, but wrapping. You're taking advantage of the following tautology:

forall a . exists b . a ⟷ b

Another way to handle this encoding is to do something like the following:

type Id = { def apply[A](a: A): A }

// becomes

type Id = { def apply: {
  type A
  def apply(a: A): A } }

This would be more consistent with what we have in Scala today. Basically, Martin's proposal is to replace instantiated universal types (traditional let-bound polymorphism) with instantiated existential types. Scala's higher-rank types arise at the intersection between first-class modules and let-bound polymorphism (due to the fact that the let-binding is on the method, and therefore free within the module itself). This trick to achieve higher-rank types with instantiated universal types is just as applicable to instantiated existential types, as my above snippet shows. This holds because an instantiated universal is trivially equivalent to an instantiated existential.

Still makes me itch. :-) In any case, I'm still really looking forward to what Martin is cooking up in this area.

On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.
>
> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.

One is tempted to observe that this is also how people propose periodically to end real wars (the complete extermination of the Other) with generally unpleasant results. I'm not a big fan of tabs either, but it seems unwise to martyr the tabbies.

> If one looks at most files in java.lang or java.util packages they represent
> a jumble of space-based and tab-based indentation.
>
> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.

What a nightmare!! The result would be massive space filling (by IDEs or editors) to produce indentation!

> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.

To me whitespace is perfectly defined by combinations of space + tab + newline, as in regexps.

> Is there any chance that tabs can be prohibited from source (outside of
> comment blocks) in any future version of Scala?
>
> Cheers...

We'd need to add non-null types. Not convinced it is worth it because
Option fulfills this role. I'm sitting on the fence on this one, but
my gut feeling is it's better to improve Option.

Just to hijack this thread somewhat... Given 'extends AnyVal', is it any more feasible today to revisit the old alchemists' dream of transmuting Some(x) to x, and None to null? (i.e. an unboxed Option)

On Tue, Mar 20, 2012 at 10:49 AM, Alex Cruise <al...@cluonflux.com> wrote:
> Just to hijack this thread somewhat... Given 'extends AnyVal', is it any
> more feasible today to revisit the old alchemists' dream of transmuting
> Some(x) to x, and None to null? (i.e. an unboxed Option)

We'd need a new Option type, one which explicitly disallowed null. Anything you used it with would have to disallow null as well. Maybe if we had a fully working NotNull there could be an Option[T <: NotNull].

I've been pondering an idea of specializing Option[T] into T (with transparent treatment of None as null) but I got stuck completely with the case of Option[Option[T]] and distinguishing between Some(None) and None.

If somebody comes up with a sensible idea for how to deal with nested options then I might give it a whirl and hack some experimental compiler phase that would specialize Option by leveraging what's been implemented for value classes.

I'd be extremely curious to see how much performance gains we could get from such a specialization.
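The nested-Option collision can be sketched in a few lines. The `unbox` below is a hypothetical model of the proposed specialization (Some(x) → x, None → null, applied transparently through nesting), not an actual compiler phase:

```scala
// Hypothetical model of the specialization: Some(x) -> x, None -> null,
// with nesting erased transparently.
def unbox(o: Option[Any]): Any = o match {
  case Some(inner: Option[_]) => unbox(inner) // erase the nested layer too
  case Some(v)                => v
  case None                   => null
}

// The sticking point: None and Some(None) both collapse to null,
// so re-boxing cannot tell them apart.
assert(unbox(None) == null)
assert(unbox(Some(None)) == null)
assert(unbox(Some(Some(1))) == 1)
```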

Second, the discussion here shows that complex existentials might
actually be something we want to remove from a Scala 3. And
higher-kinded types might undergo some (hopefully smallish) changes to
syntax and typing rules. So I think it is prudent to make people flag
these two constructs now with explicit imports, because, unlike for
the rest of the language we do not want to project that these two
concepts will be maintained as they are forever. If you are willing to
keep your code up to date, no reason to shy away from them. But if you
want a codebase that will run unchanged 5 years from now, maybe you
should think before using complex existentials or higher kinded types.
Of course the docs for these two feature flags will contain a
discussion of these aspects, so people can make an informed choice for
themselves.

What about the types computed for mix-in composition? I thought DOT/Scala 3 would result in very different typing for mix-in composition. Isn't that the case?

Option[Option[...[T]]] would have to be stored underneath as AnyRef and not T, since it would need to be able to refer to T and NoneN, but that doesn't seem like a blocker. Maybe it slows things down with more polymorphism, but it still reduces allocation of Some instances.

Using spaces vs. tabs is not entirely a matter of taste or personal preference, there's actually a valid, IMHO, reason to avoid tabs: Tab rendering is not standardized across all OS's, IDE's etc.

Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a space and sometimes even a fractional number, this causes misalignment of indentation when there is a mix of spaces and tabs (and I've never seen a tab-only source file in my 16+ years of professional experience).

If you want to observe the effect of this, just click through into the source on Java libraries and you'll observe a complete jumble of indentation.

On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.
>
> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.

One is tempted to observe that this is also how people propose
periodically to end real wars (the complete extermination of the
Other) with generally unpleasant results. I'm not a big fan of tabs
either, but it seems unwise to martyr the tabbies.

This looks like the unfolding I've been assuming in my comments. I'll need to read further to see how you handle higher-rank universals, but it looks like kinds are just encoded directly. Did you prove the soundness of using this encoding for let-bound polymorphism, or is that assumed?

On Tue, Mar 20, 2012 at 09:03, Chris Marshall <oxbow...@gmail.com> wrote:
> I still don't get SIP-18. If I start typing...
>
> macro def

Technically,

def ident[TP](args: T): T = macro ...

I found it confusing at first, but the distinction is interesting and I'm very much in agreement with it. The definition is exactly the same as every other in Scala: it takes some parameters and produces a result, all according to whatever types you specify. A macro does not change the definition: if you are taking two strings and returning a boolean, you are taking two strings and returning a boolean, period. It is the *implementation* of said definition that is *produced* by a macro.

Off topic, I know, but I like nipping misconceptions in the bud (all sorts of sp?).

> ...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you
> are writing a macro - want me to import language.macroDefs?" adding
> anything? Why is it being any more explicit about whether macros are being
> used in the program than the program, um, containing macros?
>
> Having said all that; very interesting update on the direction of scala
>
> Chris
>
> On Tue, Mar 20, 2012 at 11:29 AM, martin odersky <martin....@epfl.ch> wrote:
>> I am certain there is no way we can let macros and dynamic types
>> into the language without such a provision.
>>
>> Cheers
>>
>> - Martin

first of all thanks to the whole team for all the hard work. Sometimes I read my stuff again and realize that I'm sounding completely negative, when I'm in fact excited and thankful for all the work you put into Scala. I appreciate it very much, even if I'm only mentioning the parts I don't like.

I am a bit more optimistic than you. Java has published a roadmap and some of the changes might well be breaking (I would not see how else to introduce reified types). Now we all know that roadmaps like that are tentative and if the breakage is too serious one won't be able to do it. But I believe it's better to be clear about the directions of _work on the language_ without already being explicit about releases.

Ok, that's great. Sometimes I think it should be possible to take more control of Scala's fate on the JVM, like Charles Nutter does for JRuby quite effectively (at least it looks that way). But maybe most of it happens behind the scenes and Charles Nutter's public comments just build up some expectations towards the JVM team.

I do not see how you will be able to do this without greatly complicating the use of collections. If you want to convince me otherwise, write an alternative implementation and convince more than 10% of Scala programmers to use it. Then, and only then, I will give it a serious look.

You're right, it makes no sense to have a discussion without a working proposal. Btw, is there any decision yet about the Traversable/Iterable merge?

I am very sympathetic to avoid overloading but do not see how we can do that and maintain Java compatibility.

import lang.overloading? But yes, people on the Ceylon list talk about how hard to implement/figure out that decision is (leaving out overloading).

> No nullable reference types by default.

We'd need to add non-null types. Not convinced it is worth it because Option fulfills this role. I'm sitting on the fence on this one, but my gut feeling is it's better to improve Option.

What about making something like “Foo with Null” work? Or union types, which Adriaan mentioned in some other scenario? This would be especially nice for enums/pattern matching/exhaustiveness checks, e.g. having something like type Foo = Bar|Baz|Bad where Bar/Baz/Bad are not necessarily in some subtyping relationship.

> Further unification of AnyVal/AnyRef.

Will hopefully happen in 2.10. See value classes SIP.

Yes, although on an unrelated note, I'm very concerned about the feature overlap between implicits and value classes. The whole topic of implicit conversions to value types will get very interesting.

> No any2StringAdd.

I believe once we have string interpolation in 2.10 (needs a vote, but I believe this one will be accepted), we can deprecate any2StringAdd afterwards. Maybe even deprecate in 2.10.1 if we want to go fast, otherwise 2.11.

Yes, looking forward to that, although my main concern about the current situation is the + and the clashes it produces. Using something different, maybe even . or .., might be enough.

> No unsafe implicit conversions for primitive types.

Well, it was a design decision of Scala to keep Java expressions as-is, and I believe it was a good one. Maybe at some point in the future we want to revise that. But right now I prefer we keep it.

Octal numbers and fp literals like “123.” are Java legacy, too, but are now going away.

We can do it for the pattern matcher. There is simply no way to do it for the core type system, without maintaining two different compilers at the same time.

My idea was basically to introduce the heavy stuff first, so that when we arrive at 3 there are no big compatibility issues to expect. But then of course if it won't work it doesn't sound too great :-/ Although maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either, right?

No, I think that would got dropped. We have to consider it again before freezing for 2.10.

Yes, but I guess they are used much more rarely. One thing I do not understand. Why outlaw conversions from Long to Float? I mean, we know Float is a lossy approximation no matter what you do, so why is bit loss in the conversion a problem?

Cheers

- Martin

>> We can do it for the pattern matcher. There is simply no way to do it
>> for the core type system, without maintaining two different compilers
>> at the same time.
>
> My idea was basically to introduce the heavy stuff first, so that when we
> arrive at 3 there are no big compatibility issues to expect.
> But then of course if it won't work it doesn't sound too great :-/ Although
> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
> right?
>
> Thanks and bye,
>
> Simon

On Tue, Mar 20, 2012 at 18:06, Alex Kravets <kra...@gmail.com> wrote:
> Using spaces vs. tabs is not entirely a matter of taste or personal
> preference, there's actually a valid, IMHO, reason to avoid tabs: Tab
> rendering is not standardized across all OS's, IDE's etc.

Well, and spaces have poor indentation on fonts of non-fixed size. But, by all means, do bring it up on scala-debate, and once consensus is formed, bring it back to scala-language. Until then, _please_ do not inject discussions about tabs vs spaces on threads about the evolution of Scala's type system. Let me end this with a quote by James Iry: "1940s - Various "computers" are "programmed" using direct wiring and switches. Engineers do this in order to avoid the tabs vs spaces debate."

> Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a
> space and sometimes even a fractional number, this causes misalignment
> of indentation when there is a mix of spaces and tabs (and I've never seen a
> tab-only source file in my 16+ years of professional experience).
>
> If you want to observe the effect of this, just click through into the
> source on Java libraries and you'll observe a complete jumble of
> indentation.
>
> Cheers...
>
> On Tue, Mar 20, 2012 at 10:41 AM, Paul Phillips <pa...@improving.org> wrote:
>> On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
>> > It's a very small thing, but simply restricting valid spacing to only
>> > the
>> > space character would, IMHO, be very beneficial.
>> >
>> > It would end all the space-vs-tabs-vs-mix wars and make the source
>> > indentation much more regular.
>>
>> One is tempted to observe that this is also how people propose
>> periodically to end real wars (the complete extermination of the
>> Other) with generally unpleasant results. I'm not a big fan of tabs
>> either, but it seems unwise to martyr the tabbies.
>
> --
> Alex Kravets def redPill = 'Scala
> [[ brutal honesty is the best policy ]]

On Tue, Mar 20, 2012 at 11:43:29PM +0100, martin odersky wrote:
> Yes, but I guess they are used much more rarely. One thing I do not
> understand. Why outlaw conversions from Long to Float? I mean, we know
> Float is a lossy approximation no matter what you do, so why is bit
> loss in the conversion a problem?

I can't think of the exact problem I ran into when working on Spire [1] but the unsafe conversions did give me problems, and I would also like to be rid of them.

The thing that is galling is that Int/Long are precise, and the user should need to be explicit about an action that will move to an approximate type (unless the operation could only be done with an approximate type). There are valid reasons to prefer pow(Double) to pow(Long) in some cases (it's a bit faster) but it's easy for a user to get this wrong, and often you really do want pow(Long), which Scala doesn't provide.

In general I would like it if Scala supported more arithmetic operations on all the numeric types (rather than relying on implicit conversions).
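An exact pow on Long, the operation missed above, is only a few lines via binary exponentiation. This is a sketch, not a standard-library method; the name `powL` is mine:

```scala
// Exact Long exponentiation by squaring (not provided by the standard library).
def powL(base: Long, exp: Long): Long = {
  require(exp >= 0, "negative exponents have no exact Long result")
  var b = base; var e = exp; var acc = 1L
  while (e > 0) {
    if ((e & 1L) == 1L) acc *= b // odd bit set: multiply into the result
    e >>= 1
    if (e > 0) b *= b            // square the base for the next bit
  }
  acc // note: wraps silently on overflow, like all Long arithmetic
}
```

Unlike `math.pow(2, 10).toLong`, this never routes through Double, so large exact results are not rounded along the way.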

It's hardly about precision, it's about losing type safety. That numbers all support roughly the same operations doesn't make type mistakes right, it just makes them more difficult to catch. At least, those are my own reasons.

> Cheers
>
> - Martin
>
>> We can do it for the pattern matcher. There is simply no way to do it
>> for the core type system, without maintaining two different compilers
>> at the same time.
>>
>> My idea was basically to introduce the heavy stuff first, so that when we
>> arrive at 3 there are no big compatibility issues to expect.
>> But then of course if it won't work it doesn't sound too great :-/ Although
>> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
>> right?
>>
>> Thanks and bye,
>>
>> Simon
>
> --
> Martin Odersky
> Prof., EPFL and Chairman, Typesafe
> PSED, 1015 Lausanne, Switzerland
> Tel. EPFL: +41 21 693 6863
> Tel. Typesafe: +41 21 691 4967

> You're right, it makes no sense to have a discussion without a working
> proposal. Btw, is there any decision yet about the Traversable/Iterable
> merge?

No, I think that would got dropped. We have to consider it again before freezing for 2.10.

Sorry, I'm stupid. What is being dropped? The decision, the merge, the differentiation?

> > No unsafe implicit conversions for primitive types.

> One thing I do not understand. Why outlaw conversions from Long to Float?
>
> I mean, we know Float is a lossy approximation no matter what you do, so why is bit loss in the conversion a problem?

Because it is often not visible. E.g. when integer types are used in arguments to a method accepting floating point values. Another probably more severe example is

scala> (123456789).round
res0: Int = 123456792

When I learned about implicits the big rule was that implicits shouldn't be used for unsafe operations, but only for stuff where it is sure that it won't go wrong for all inputs.
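The mechanism behind the surprising `round` result above can be shown explicitly: the Int is silently widened to Float, which has only a 24-bit mantissa. A small sketch:

```scala
// The implicit Int -> Float widening loses low-order bits:
val n: Int = 123456789
val f: Float = n // compiles via silent numeric widening

// 123456789 is not representable in Float; the nearest value is 123456792:
assert(f.round == 123456792)

// Double has enough mantissa bits to carry an Int exactly:
assert(n.toDouble.round == 123456789L)
```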

On Wed, Mar 21, 2012 at 12:05 AM, Simon Ochsenreither <simon.och...@googlemail.com> wrote:
> Hi,
>
>> > You're right, it makes no sense to have a discussion without a working
>> > proposal. Btw, is there any decision yet about the Traversable/Iterable
>> > merge?
>> No, I think that would got dropped. We have to consider it again
>> before freezing for 2.10.
>
> Sorry, I'm stupid. What is being dropped? The decision, the merge, the
> differentiation?

>>> > > No unsafe implicit conversions for primitive types.
>>
>> One thing I do not understand. Why outlaw conversions from Long to Float?
>>
>> I mean, we know Float is a lossy approximation no matter what you do, so
>> why is bit loss in the conversion a problem?
>
> Because it is often not visible. E.g. when integer types are used in
> arguments to a method accepting floating point values.
> Another probably more severe example is
> scala> (123456789).round
> res0: Int = 123456792
>
> When I learned about implicits the big rule was that implicits shouldn't be
> used for unsafe operations, but only for stuff where it is sure that it
> won't go wrong for all inputs.
>
> Thanks and bye,
>
> Simon

> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.
>
> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.

> Is there any chance that tabs can be prohibited from source (outside of
> comment blocks) in any future version of Scala?

I didn't expect that I'd have to start a post to a Scala mailing list like that again, but here it goes:

I sincerely hope this proposal is a joke. Formatting is certainly *not* a compiler issue. Furthermore, declaring your opinion as the end of such an issue is presumptuous to say the least.

There's just no way not to break one. If foreach is concrete, the first breaks. If it is abstract, the second breaks.

This is probably resolvable by introducing one or more new types, but again, it's unappealing to touch it again unless I intend to merge it the minute it works. So we have to agree on everything up front, in contrast to my usual "write all the code and only then think about what I'm writing" approach.

My ambition for the next 2-4 years is that we can find further simplifications and unifications and arrive at a stage where Scala is so obviously compact in its design that any accusations of it being a complex language would be met with incredulity. That will be the best counter-argument to the naysayers.

I don't personally believe it's a particularly complex language. There are a few surface-level things I find clumsy, like methods whose terminal symbol is a ':' being right-biased when used infix, or the '_*' type ascription to keep varargs as varargs, but all in all I find it much more uniform in its design discipline than most other languages I've worked with. I like Scala, I like the change it's helping bring to our industry, I like the fact that it evolves over time in reaction to how people use it, and I frankly believe you're overly concerned about these "naysayers."

Regardless of how internally consistent and small the core language becomes, there will always be the fearful few who perceive Scala and functional programming in general as overkill. No problem; those people will always have Java, which will never evolve at a rate faster than the rate they're willing to learn new things. Remember POPL '97 and OOPSLA '98? Now remember Java 1.5 (2004)? Yeah, that was a long time...

Anyway, I appreciate that you're hoping to drive adoption by reducing the perceived complexity you find so vexing, but two comments on that:

- First, "perceived" is the key word there. One thing I've learned in this business is that marketing dictates truth. I believe we have a wealth of good marketing available to us as a community that, if we choose to use it, will drown out the occasional whining with an overwhelming chorus of "look at this amazing thing we were able to build, how fast and scalable it is, how few lines of code, how well covered by tests," and so on...

First, while we might be able to remove complexities in the definition of the Scala language, it's not so clear that we can remove complexities in the code that people write. The curse of a very powerful and regular language is that it provides no barriers against over-abstraction.

At the same time, providing a powerful language seems to be the goal! I certainly recognize that nothing about SIP-18, or anything else you've proposed recently, would reduce Scala's expressive power. And quoting earlier in your original post here:

it will be a big help for the people writing advanced software systems in Scala. Their job will be easier because they will work with fewer but more powerful concepts.

So it seems you propose to offer a very powerful tool for modularizing software, while at the same time warning emphatically against its use! I've said it before, and I'll say it again: telling novices that certain aspects of Scala are too advanced or dangerous for them is self-fulfilling. Better simply to offer an awesome language and teach people how to use it well.

Neat! Are there any papers or preliminary things published about this idea?

Now if we do that then we have suddenly gained the essential functionality of higher-kinded types and existential types for free! A higher-kinded type is simply a type where some parameters are left uninstantiated.

Also note that the "type lambda trick" for partially applying type constructors to type parameters becomes unnecessary. Your proposed syntax for constructing types with named type parameters (analogous to the syntax for invoking methods with named value parameters) is nice.

Now clearly, something must get lost in a scheme that unifies higher-kinded and existential types by eliminating both. The main thing that does get lost is early checking of kind-correctness. Nobody will complain that you have left out type parameters of a type, because the result will be legal. At the latest, you will get an error when you try to instantiate a value of the problematic type.

Yes, and this seems to somewhat parallel the tradeoffs with declaration-site vs. use-site variance annotations. I'm specifically worried about what the compiler error messages might look like with this new scheme... For example, let's take everybody's second-favorite typeclass:

trait Functor[F[_]] {
  def map[A, B](f: A => B)(fa: F[A]): F[B]
}

In the new kind-free ("unkind" ???) world, that looks instead like "trait Functor[F]", right? So I can then declare something like:
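The snippet that followed here is missing from the archive; a guess at its shape, based on the "additional 'X' parameter" mentioned below, is an instance whose map smuggles in an extra type parameter. The names `ListFunctor` and `OddFunctor` are mine:

```scala
// Today, with kinds checked at the declaration site:
trait Functor[F[_]] {
  def map[A, B](f: A => B)(fa: F[A]): F[B]
}

object ListFunctor extends Functor[List] {
  def map[A, B](f: A => B)(fa: List[A]): List[B] = fa map f
}

// The worrying case in a kind-free "trait Functor[F]" world would be an
// instance whose map gains an extra type parameter, roughly:
//
//   object OddFunctor extends Functor[List] {
//     def map[A, B, X](f: A => B)(fa: List[A]): List[B] = fa map f
//   }
//
// Today's compiler rejects this at the declaration site; the question below
// is where the error would surface without kind checking.
```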

Would that even be considered a valid override? Or should it typecheck without the additional 'X' parameter and then fail elsewhere (e.g. in type inference at map()'s call sites)? In either case, how would the compiler tell me I screwed up? That's my only concern about this idea, which otherwise strikes me as very elegant.

I might not have stated it clearly enough. The motivation stated in the roadmap has nothing to do with _perceived_ complexity. If that was all, then probably better not to talk about it at all and do some marketing fluff that papers over it.

It's rather that, when it comes to complexity, I want to set the bar very high. I want to develop Scala into a language that's truly simple, not to placate or convince the naysayers but because I think it will improve the language.

On Tue, Mar 20, 2012 at 4:26 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
> However, you have no way to encode the type forall a . a -> a. Naively, one
> might want to try something like this:
>
> type Id = { type A; def apply(a: A): A }
>
> Unfortunately, that falls over immediately:
>
> def foo(id: Id) = id(42) // error!!
>
> The reason this falls over is A is not any type, it is some type.

I'm sure Adriaan will correct me if I've got this wrong, but I think the idea is to add a concept of type "un-members" which precisely capture the universally quantified aspect that you're missing.

Will this proposal have an impact on compilation time when options are
restricted? I think it would be really pleasant if Scala could in
principle be restricted to a language subset that compiles really fast,
for code emission at runtime.

On Wed, Mar 21, 2012 at 3:37 AM, Paul Phillips <pa...@improving.org> wrote:
> On Tue, Mar 20, 2012 at 4:08 PM, martin odersky <martin....@epfl.ch> wrote:
>> Fingers typed too fast to keep meaning. I meant, the issue got dropped.
>
> Not entirely. I did a lot of the Iterable/Traversable blend, but it
> was tedious and I could tell it was going to be squandered to drift if
> I didn't merge it immediately. Also, I ran into this.
>
> http://www.scala-lang.org/node/11957
>
> As best I recall, I saw no way to do it in a backward compatible way.
> As detailed at the above:
>
> In Traversable, foreach is abstract
> in Iterable, foreach is concrete, iterator is abstract
> Iterable extends Traversable
>
> So here are two lines which could exist in the wild now:
>
> new Traversable[Int] { def foreach[T](f: Int => Unit): Unit = ??? }
> new Iterable[Int] { def iterator = ??? ; override def foreach[T](f:
> Int => Unit): Unit = super.foreach(f) }
>
> There's just no way not to break one. If foreach is concrete, the
> first breaks. If it is abstract, the second breaks.

Oh yes, it's all coming back to me now. Thanks for paging it back in. I don't have a solution for it either, I'm afraid.

- Martin

> This is probably resolvable by introducing one or more new types, but
> again, it's unappealing to touch it again unless I intend to merge it
> the minute it works. So we have to agree on everything up front, in
> contrast to my usual "write all the code and only then think about
> what I'm writing" approach.

Anyway, I appreciate that you're hoping to drive adoption by reducing the perceived complexity you find so vexing, but two comments on that:

- First, "perceived" is the key word there. One thing I've learned in this business is that marketing dictates truth.

Hear, hear. My favorite example of this is Ada. Ada '95 was actually a pretty good language. (Disclaimer: I sat two offices away from one of the leads, and I wrote the first-ever IDE for the language. Still, I found it fairly elegant and powerful.) But nobody outside of government circles picked it up because it was "too complicated": it never overcame the reputation of the too-far-ahead-of-its-time Ada '83. So I was amused, a few years later, to realize that the language spec for C++ was *longer* than that of Ada, and for good reason: there were a lot more nooks and crannies to it. C++ was in many ways a good deal more complicated than Ada '95, but reputation won out.

The thing I love about Scala is that it is genuinely intuitive, in the sense that I often say, "it seems like this should work" and it actually *does*. I rarely find the compiler preventing me from doing things that seem logical. That's more unusual than people tend to credit, and a testimony in favor of the driving philosophy of being as consistent as is feasible. It means that, while the language *spec* is long and complex, you can pick up concepts and *use* them consistently, as opposed to having to learn all the inconsistencies of many languages.

(Indeed, this whole conversation reminds me of the people who insist on bespoke scripting languages because they are "less complicated", and quietly ignore the fact that those languages sometimes have 900-page reference manuals describing all of their little idiosyncrasies...)

So there's at least an argument that Long -> Double automatic conversion is sometimes the right thing to do, whereas with Float it's _never_ the right thing to do unless you didn't mean to have a Long in the first place.

Their job will be easier because they will work with fewer but more powerful concepts.

I believe that's exactly the right vision and you should pursue this consistently and relentlessly. The language ought to give you very few and powerful abstractions, and moreover those abstractions should be highly integrated and form a unified whole. E.g. higher-kinded types are not at all complex, but their implementation in Scala is a little bit of a second-class citizen. They're not fully integrated into the language and so they feel tacked on, which in turn feels complicated because there is a disconnect between the model of kinds in your head and the model of kinds in Scala. Another example is from the library, where there are interfaces that provide "map" and "flatMap" methods but sometimes you really want to reach for the highly general and powerful abstraction "Monad". Some people perceive such abstractions as complex, but they actually greatly simplify development and make our job easier because we have a concept under which we can unify a great number of different data types.

The potential breakthrough idea here is to unify type parameters and abstract type members. The idea would be to treat the following two types as equivalent

trait Seq[type Elem] and trait Seq { type Elem }
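The type-member half of this proposed equivalence can already be written in today's Scala, where "instantiating" the member is done by refinement. A minimal sketch (the names `MySeq` and `first` are mine, chosen to avoid clashing with the library's Seq):

```scala
// The "trait Seq { type Elem }" spelling, expressible today:
trait MySeq {
  type Elem
  def first: Elem
}

// Refinement plays the role of applying the type parameter, i.e. Seq[Int]:
val ints = new MySeq { type Elem = Int; def first = 1 }
val i: Int = ints.first // the member is statically known to be Int
```

The proposal would make the `Seq[type Elem]` parameterized spelling mere sugar for this form.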

This is very interesting. It does seem like it would be possible if somewhat strange. Are the abstract type members ordered? If you partially apply, how do you know which one you're applying? I am also afraid it would make it difficult to work with polymorphic functions of rank 2 or higher. I'm curious how the following would work. Right now in Scala we can model a universally quantified type as follows:

trait Forall[F[_]] { def apply[A]: F[A] }

Then a type like ∀x. F(x)

is encoded as Forall[({type λ[x] = F[x]})#λ]

An existentially quantified type is currently modeled this way:

trait Exists[F[_]] { type A; def apply: F[A] }

Then the type ∃x. F(x)

becomes Exists[({type λ[x] = F[x]})#λ]

How would this be modeled in the new scheme? Something like this?

trait Forall { type F; type A; def apply: F[A] }

and then Exists[F = F]?

It seems likely that F would need a kind annotation here in order for things to remain sane.
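For reference, the two current-Scala encodings quoted above compile and run as written; a small sketch exercising them (the values `nils` and `hidden` are mine):

```scala
// Today's encodings of quantified types as traits:
trait Forall[F[_]] { def apply[A]: F[A] }
trait Exists[F[_]] { type A; def apply: F[A] }

// forall x. List[x] -- by parametricity the only total inhabitant is Nil:
val nils = new Forall[List] { def apply[A]: List[A] = Nil }

// exists x. List[x] -- the witness type is chosen here but hidden from clients:
val hidden = new Exists[List] { type A = Int; def apply = List(1, 2, 3) }
```

The asymmetry is visible in the encodings themselves: the universal quantifies on the method (`apply[A]`), while the existential quantifies via a type member, which is exactly the distinction the unification proposal has to bridge.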

new HashMap[String, List[Int]] with SynchronizedMap

I can see how that would work if the type names agree, but what if they don't? Seems like you would need type-level operators to rename, project, basically all the usual tuple calculus suspects. Let me suggest this as an alternative:

new (HashMap with SynchronizedMap)[String, List[Int]]

The main thing that does get lost is early checking of kind-correctness. Nobody will complain that you have left out type parameters of a type, because the result will be legal. At the latest, you will get an error when you try to instantiate a value of the problematic type. So type-checking will be delayed. Everything will still be done at compile-time. But some of the checks that used to raise errors at the declaration site will now raise errors at the use site.

Delaying kind checking until the typer is very much like delaying type checking until runtime. It basically makes the type-level language untyped (or, to borrow from "dynamic" language parlance, it would be dynamically kinded). That is, you could construct all kinds of crazy type-level things that make no sense whatsoever, and you would never know they don't make sense until you try to instantiate a value of an unsound type. Basically every poorly kinded type would just be uninhabited, i.e. equivalent to Nothing.

I think that this might be a price too high to pay. I would rather see a step in the other direction, introducing an actual kind system complete with polymorphic kinds. This would greatly simplify library development.

eliminate what I consider the worst part of the Scala compiler. It turns out that the internal representation of higher-kinded types in the Scala compiler is the same as the internal representation of raw types in Java (there are good reasons for both representation choices).

Yeah, this is definitely a problem. But maybe the solution is not dynamically kinded types, but a proper polymorphic kind system. The proposed simplification is not necessarily incompatible with that.

Since we're discussing complexity, and a roadmap for Scala 3, there's
something else I'd like to throw into the mix - this seems like an
opportunity to get rid of a bit of badness that has bedeviled me ever
since my first week using Scala.

Can we please change the language such that PartialFunction[A, B] no
longer subclasses Function1[A, B], and perhaps provide higher-arity
PartialFunction instances? The inheritance hierarchy as it presently
stands is completely upside-down, with surprising results; for
example:
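A minimal sketch of the kind of surprise meant here (illustrative values only):

```scala
// A function defined just for even numbers.
val halve: PartialFunction[Int, Int] = { case n if n % 2 == 0 => n / 2 }

// Because PartialFunction[A, B] extends (A => B), this compiles silently...
val f: Int => Int = halve

// ...but the totality the type appears to promise is a runtime lie:
// f(3) throws scala.MatchError.
```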

It would be much better if PartialFunction[A, B] and Function1[A, B]
did not share a subtyping relationship at all, but in a pinch it would
be acceptable for FunctionN[...] to extend PartialFunctionN[...], with
an optional (because it is of course unsafe) implicit conversion that
could be imported to allow promotion of PartialFunctionN instances to
FunctionN.

This was considered a long time ago but rejected. The point is, a partial function is a subtype of Function1 because it has more capabilities: it also supports the isDefinedAt method. The confusion comes probably from the name. One thinks that a Function1 would then be a total function. But that, of course, is wrong. Function1 can be undefined for some arguments just as PartialFunction can. It's just that it won't let you ask about it.

On Mar 21, 3:56 pm, martin odersky <martin.oder...@epfl.ch> wrote:
> This was considered a long time ago but rejected.
> The point is, a partial function is a subtype of Function1 because it
> has more capabilities: It also supports the isDefinedAt method. The
> confusion comes probably from the name. One thinks that a Function1
> would then be a total function. But that, of course, is wrong.
> Function1 can be undefined for some arguments just as PartialFunction
> can. It's just that it won't let you ask about it.

The problem is not the naming. The problem is that the relationship
between the types implies that every PartialFunction is total. There
should actually be no subtyping relationship between them at all.
However, in the case where Function1 might extend PartialFunction, the
implementation of isDefinedAt is simply true.

The fact that Function1 may not be total seems irrelevant to me; you
can get nontermination anywhere. That's no reason for the types to
lie.
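The suggested inverted hierarchy could be sketched like this (MaybeFn and TotalFn are made-up names, not the real library types):

```scala
// Totality is the subtype with *more* guarantees, so the hierarchy cannot lie.
trait MaybeFn[-A, +B] {
  def isDefinedAt(a: A): Boolean
  def apply(a: A): B
}

trait TotalFn[-A, +B] extends MaybeFn[A, B] {
  final def isDefinedAt(a: A): Boolean = true // total by construction
}

val evenHalf: MaybeFn[Int, Int] = new MaybeFn[Int, Int] {
  def isDefinedAt(a: Int) = a % 2 == 0
  def apply(a: Int) = a / 2
}

val inc: TotalFn[Int, Int] = new TotalFn[Int, Int] {
  def apply(a: Int) = a + 1
}

// inc is usable wherever a MaybeFn is expected; evenHalf is not accepted
// where a TotalFn is required.
```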

On Wed, Mar 21, 2012 at 7:20 PM, Runar Bjarnason <runar...@gmail.com> wrote:
> On Tuesday, March 20, 2012 7:29:05 AM UTC-4, martin wrote:
>> Their job will be easier because they will work with fewer but
>> more powerful concepts.
>
> I believe that's exactly the right vision and you should pursue this
> consistently and relentlessly. The language ought to give you very few and
> powerful abstractions, and moreover those abstractions should be highly
> integrated and form a unified whole. E.g. higher-kinded types are not at
> all complex, but their implementation in Scala is a little bit of a
> second-class citizen. They're not fully integrated into the language and
> so they feel tacked on, which in turn feels complicated because there is a
> disconnect between the model of kinds in your head and the model of kinds
> in Scala. Another example is from the library, where there are interfaces
> that provide "map" and "flatMap" methods but sometimes you really want to
> reach for the highly general and powerful abstraction "Monad". Some people
> perceive such abstractions as complex, but they actually greatly simplify
> development and make our job easier because we have a concept under which
> we can unify a great number of different data types.
>
>> The potential breakthrough idea here is to unify type parameters and
>> abstract type members. The idea would be to treat the following two
>> types as equivalent:
>>
>> trait Seq[type Elem] and trait Seq { type Elem }
>
> This is very interesting. It does seem like it would be possible if
> somewhat strange. Are the abstract type members ordered?

We'd have to assume an ordering. Not sure yet exactly which one to choose. One possibility is that they would be ordered if defined with parameter notation, but not if defined as members.

Given the current state of research I don't have definite answers to these. But they are good use cases to keep in mind!

> I can see how that would work if the type names agree, but what if they
> don't? Seems like you would need type-level operators to rename, project,
> basically all the usual tuple calculus suspects. Let me suggest this as an
> alternative:
>
> new (HashMap with SynchronizedMap)[String, List[Int]]

That's interesting!

>> The main thing that does get lost is early checking of kind-correctness.
>> Nobody will complain that you have left out type parameters of a type,
>> because the result will be legal. At the latest, you will get an error
>> when you try to instantiate a value of the problematic type. So
>> type-checking will be delayed. Everything will still be done at
>> compile-time. But some of the checks that used to raise errors at the
>> declaration site will now raise errors at the use site.
>
> Delaying kind checking until the typer is very much like delaying type
> checking until runtime. It basically makes the type-level language untyped
> (or, to borrow from "dynamic" language parlance, it would be dynamically
> kinded). That is, you could construct all kinds of crazy type-level things
> that make no sense whatsoever, and you would never know they don't make
> sense until you try to instantiate a value of an unsound type. Basically
> every poorly kinded type would just be uninhabited, i.e. equivalent to
> Nothing.
>
> I think that this might be a price too high to pay. I would rather see a
> step in the other direction, introducing an actual kind system complete
> with polymorphic kinds. This would greatly simplify library development.

I agree it's a tradeoff. There are some thoughts from Adriaan's side to regain kind checking by distinguishing input and output member types. I see that as similar in spirit to the progression from Prolog to Mercury, say.

Many languages have been created after Scala, such as Kotlin (JetBrains), Xtend (Eclipse), Ceylon (Red Hat), etc. It shows that the programmer community and industry aspire to a simpler yet still powerful language.

> Many languages have been created after Scala, such as Kotlin (JetBrains),
> Xtend (Eclipse), Ceylon (Red Hat), etc. It shows that the programmer
> community and industry aspire to a simpler yet still powerful language.

Assumes facts not in evidence. It shows that humans in general and programmers in particular are imbued with infinite confidence that they can do it better than the other guy. (Which is great, because sometimes they're right.) And that starting something is easy, and that everything made by decent programmers starts out high on elegance and low on tradeoffs. But the woods are lovely, dark and deep, and they have miles to go before they sleep.

Not that I disagree that people want "simpler", or think they do. Of course simple plus powerful implies many degrees of freedom, another thing which people appear not to want (at least when it comes to their co-workers). Eventually the time comes to pick something and make it your own.

Qihui is absolutely right, IMHO, that innovations and refinements in programming languages over the last couple of decades show that we're just not satisfied with the ease and expressive power we're able to achieve with current languages. Regardless of whether they came before or after Scala, the adoption of features such as generics, type inference, etc. in languages like Java, Scala, C#, etc. can make it more natural to say what we mean.

However, glomming features onto a language can make it hard for newcomers to become effective quickly. Having thought about this for a few days, SIP-18 IMHO gives an orderly way of approaching features which may or may not have immediate applicability to the teams, the projects, and the state of the particular implementation.

> Many languages have been created after Scala, such as Kotlin (JetBrains),
> Xtend (Eclipse), Ceylon (Red Hat), etc. It shows that the programmer
> community and industry aspire to a simpler yet still powerful language.

I have yet to see the same power but simpler.

-- Viktor Klang

Akka Tech Lead

Typesafe - The software stack for applications that scale

Twitter: @viktorklang

> Qihui is absolutely right, IMHO, that innovations and refinements in
> programming languages over the last couple of decades show that we're just
> not satisfied with the ease and expressive power we're able to achieve
> with current languages. Regardless of whether they came before or after
> Scala, the adoption of features such as generics, type inference, etc. in
> languages like Java, Scala, C#, etc. can make it more natural to say what
> we mean.

That is all good, and expressive power was one of the essential things that
led to the decision to move our internal development to Scala. And of course
refinement of the language core is a good move, but what about control? From
a practical standpoint, knowing the price of using concrete features is
helpful, and sometimes required. Performance, instantiation, memory
footprint, control flow - for most use cases these are empty words, and
everything works well enough. Not, however, when you have to justify changes
to the code of some class with 5*10^11 instances over a live cluster. And
you know, we too want expressive power, we too want it to be natural to
code what we mean - with the additional feat of being able to know what
exactly we mean =)

I have no language-building expertise to insist on concrete things, but I
keep hoping to see Scala evolve with code transparency and better
avoidance of unnecessary pessimization in mind.

On 27 mrt, 18:05, Qihui Sun <qihui....@gmail.com> wrote:
> A simpler Scala would be welcome.
> Many languages have been created after Scala, such as Kotlin (JetBrains),
> Xtend (Eclipse), Ceylon (Red Hat), etc. It shows that the programmer
> community and industry aspire to a simpler yet still powerful language.

I agree, but usually these new languages have only one language feature
and/or paradigm in which they excel (i.e. the feature the language
designers were frustrated about), while the others are less developed or
not available, which is too bad. Also, nothing is known about their
performance (e.g. http://shootout.alioth.debian.org/ ).
Scala is well balanced between its paradigms (imperative, functional-style,
object-oriented, meta-programming, language-oriented/DSL, and sequential
vs. parallel), combined with strong static type checking, type-inferred,
succinct, DRY-as-possible source code, and high-performance bytecode. It is
also proven in the enterprise.
So as a general-purpose, multi-paradigm programming language targeting
different platforms, Scala is the best around.
I aspire to simplicity through automation, not by removing choices.

>
> >>> This was considered a long time ago but rejected.
> >>> The point is, a partial function is a subtype of Function1 because it
> >>> has more capabilities: It also supports the isDefinedAt method. The
> >>> confusion comes probably from the name. One thinks that a Function1
> >>> would then be a total function. But that, of course, is wrong.
> >>> Function1 can be undefined for some arguments just as PartialFunction
> >>> can. It's just that it won't let you ask about it.
>
> >> The problem is not the naming. The problem is that the relationship
> >> between the types implies that every PartialFunction is total. There
> >> should actually be no subtyping relationship between them at all.
> >> However, in the case where Function1 might extend PartialFunction, the
> >> implementation of isDefinedAt is simply true.
>
> >> The fact that Function1 may not be total seems irrelevant to me; you
> >> can get nontermination anywhere. That's no reason for the types to
> >> lie.
>
> >> Kris
>
> --
> Solomon

Just to hijack this thread somewhat... Given 'extends AnyVal', is it any more feasible today to revisit the old alchemists' dream of transmuting Some(x) to x, and None to null? (i.e. an unboxed Option)

Every time I see a performance issue due to Option boxing, I think to myself "Down with Option! Long live Option!". Option needs to die in the sense that it is an instantiated object on the JVM. It must live because null checks in user code are evil. Maybe, as long as Some(null) is disallowed (or hidden from the user, allowing only an Option(val: T) where a None is produced if val is null), Option could live in the compiler and not in the runtime. I am probably wrong.

Perhaps even interop with Java can be satisfied with such a thing, making nulls disappear, replaced with None. The nested cases may still require an instantiated object however, e.g. Some[Some[T]] and Some[None]. Again, I am probably wrong, having not dug very deep here.
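For what it's worth, a rough sketch of the idea under those assumptions (UnOpt is a made-up name, not a proposal for the real Option):

```scala
// Erases to a bare reference at runtime (extends AnyVal), with null as the
// hidden None sentinel. The known costs are exactly those mentioned above:
// Some(null) cannot be represented, and nesting (UnOpt[UnOpt[T]]) would
// still force real boxes.
final class UnOpt[+A >: Null](val value: A) extends AnyVal {
  def isEmpty: Boolean = value == null
  def getOrElse[B >: A](default: => B): B =
    if (value == null) default else value
}

object UnOpt {
  // null is silently mapped to "none", mirroring Option(x)'s behavior
  def apply[A >: Null](a: A): UnOpt[A] = new UnOpt(a)
  def none: UnOpt[Null] = new UnOpt(null)
}
```

Note that in generic contexts (e.g. storing UnOpt values in a List) the JVM would still box, which is the usual limitation of value classes.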

On 20/03/2012 22:06, Alex Kravets wrote:
> Using spaces vs. tabs is not entirely a matter of taste or personal preference, there's
> actually a valid, IMHO, reason to avoid tabs: Tab rendering is not standardized across all
> OS's, IDE's etc.

That's what is nice with tabs: just change a setting, and you have indentation fitting
your preferences, from 1 to 8 (or more!) units, without even changing the source code.

> Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a space and
> sometimes even a fractional number, this causes misalignment of indentation when there is
> a mix of spaces and tabs (and I've never seen a tab-only source file in my 16+ years of
> professional experience).

You won't see mixed spaces and tabs in my sources, they are pure tabs, and I am happy with
this... I am OK with space-only indentation, that's what we use at work (with three
spaces...), but I still find them less convenient.
Both ways are OK, as long as they are consistently used; the real evil is indeed in mixing
spaces and tabs.

> If you want to observe the effect of this, just click through into the source on Java
> libraries and you'll observe a complete jumble of indentation.

Sigh, yes, it is nightmarish, I admit it. But don't put the blame on tabs, put the blame
on the lack of tools to enforce whatever policy they could have chosen. Hey, you can open
a file with 3-space indentation, leave your editor at its default of 4 spaces (if that's
your default) and introduce inconsistent indentation in the file without noticing. In
general, not in the same function (that might be too visible) but perhaps in a new
function, a new class, etc.

To please everybody, you could instead make the compiler reject such mixes of spaces and
tabs...

Note: I am also an adherent of aligned braces
{
}
again, a question of taste. I was first shocked to see that the Go language actually
forces you to use the K&R style. But, well, somehow it is a good way to enforce a policy,
instead of the Java _conventions_...

But such work belongs more to a tool like Checkstyle, actually. Or perhaps a compiler
plug-in, for those not using an IDE. Or a VCS plug-in... Or an Ant/Maven/Gradle/SBT/<you
name your favorite build tool> plug-in.

It appears to me that all counterexamples to preservation (which prevent a simple proof that DOT is sound) derive from the generative essence of the problem: abstract (a.k.a. virtual) types are allowed to be carried around at runtime. It is not the choice of representation as parametric or abstract types which causes the problem (since, given both the ability to encode the self type, they are rationally equivalent); rather, it is that hidden abstract types can be refined at runtime. All the syntactic sugar that prefers one over the other is apparently irrelevant to the issue, which is allowing refinement to not be checked at compile time, e.g. `val x : Animal` instead of `val x : Animal { Food = Grass | Meat }`, equivalent to the parameterized representation `val x : Animal[Food]` instead of `val x : Animal[Grass | Meat]`.

Thus, as expected (see my quote below), the problem with DOT as currently formulated is premature optimization of abstraction at compile time. Rather than allowing inversion-of-control and injection of the abstraction at compile time, it optimizes by pushing abstraction to runtime.

The argument against this inversion-of-control was stated by Martin Odersky, "You could parameterize class Animal with the kind of food it eats. But in practice, when you do that with many different things, it leads to an explosion of parameters, and usually, what's more, in bounds of parameters. At the 1998 ECOOP, Kim Bruce, Phil Wadler, and I had a paper where we showed that as you increase the number of things you don't know, the typical program will grow quadratically.":
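The two styles contrasted in the quote can be sketched side by side (Animal, Food, and the members are illustrative names):

```scala
trait Food
class Grass extends Food
class Meat extends Food

// Abstract type member: the refinement can be left off at use sites
// (val x: Animal), hiding the abstraction until later.
trait Animal {
  type SuitableFood <: Food
  def eat(f: SuitableFood): String
}

// Type parameter: every use site must supply the argument, which is what
// leads to the explosion of parameters and bounds mentioned above.
trait AnimalP[F <: Food] {
  def eat(f: F): String
}

class Cow extends Animal {
  type SuitableFood = Grass
  def eat(f: Grass): String = "munch"
}
```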

My response to Martin is that this is why we must invert the control at every method, per the quotes below, so that the quadratic explosion is turned inside-out.

I therefore posit that DOT is broken and needs to be reformulated in a new holistic model, per my quotes below and the examples in the other thread.

P.S. This idea about inversion-of-control at the call site, with automated assistance from the compiler, is one I have had in my head for a couple of years now, since I was brainstorming one day with the author of Kotlin. He went with a very simple form of solution which didn't embody what I was driving at.

From the thread "Re: [scala-language] The cake’s problem, dotty design and the approach to modularity.":

On Sunday, July 26, 2015 at 12:28:53 AM UTC+8, Shelby wrote:

This is yet another example of premature optimization (declaring the data structure in the self type), and my idea for a solution is an inversion-of-control, where the mixin injects a method into the constructor instead of prematurely declaring itself as a constructor.

I am starting to get the strong intuition that this concept of inversion-of-control needs to be proliferated throughout Scala 3 if we want to make a huge paradigm shift win on modularity. I am studying now the DOT calculus in detail and I am hoping I can apply such concepts so that type preservation can be recovered.

On Saturday, July 25, 2015 at 1:13:13 PM UTC+8, Shelby wrote:

I believe perhaps the ideas I have presented for injection of interface (relying on DOT) are a complete solution (and more generalized) to the reasons given for needing to represent family polymorphism by tracking types in the instance (which appears to be a less general form of dependency injection):

If a set of types share a set of methods (perhaps implemented as a typeclass rather than via virtual inheritance, so the dictionary can be injected with an object), then the disjunction of those types is the conjunction (and the conjunction of those types is the disjunction) of the implementations of that interface. But note that A ∧ A = A ∨ A, so both disjunction and conjunction can be operated upon if they share an interface A.

That was the point of my prior post.

On Saturday, July 18, 2015 at 9:51:18 PM UTC+8, Shelby wrote:

I believe I show herein the fundamental importance of objects (as in "OOP"), that subclassing (but not subtyping) is fundamentally an anti-pattern, and that the new DOT calculus is essential.

For the goal of completely solving the Expression Problem, I believe the requirement for a "global vtable" which I pondered upthread, is implicitly fulfilled by the injection of inversion-of-control I had proposed.

Objects are passed around as the vtable, which I believe is a form of the extensible modularity

...

Perhaps the Dotty compiler could automatically generate the implicit object `Drawable[Line ∨ Box]`. Thus we retain subtyping (i.e. `Line` and `Box` are subtypes of `Line ∨ Box`) while eliminating subclassing (i.e. there is no nominal type which is the supertype of `Line ∨ Box` or at least `Any` should only occur with a cast since I've shown it discards extensible static typing).
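A rough sketch of the "objects as injected vtables" idea using an ordinary typeclass; since current Scala has no union types, a sealed trait stands in for `Line ∨ Box` here (all names illustrative):

```scala
trait Drawable[A] { def draw(a: A): String }

sealed trait Shape
case class Line(len: Int) extends Shape
case class Box(w: Int, h: Int) extends Shape

// The "vtables" are ordinary objects, injected implicitly:
implicit val drawLine: Drawable[Line] = new Drawable[Line] {
  def draw(l: Line): String = s"line(${l.len})"
}
implicit val drawBox: Drawable[Box] = new Drawable[Box] {
  def draw(b: Box): String = s"box(${b.w}x${b.h})"
}

// render needs no common superclass with a draw method; the dictionary
// travels alongside the value instead of living inside it.
def render[A](a: A)(implicit d: Drawable[A]): String = d.draw(a)
```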

...

Another benefit of deprecating subsumption via subclassing in favor of subtyped disjunction is that distinct invariantly parameterized types can be added to the same List: