What I really liked about this talk was how it focused on the delightfulness of code synthesis versus all of the other claims people make about static typing. It made it more approachable and meant Brady had to spend less time convincing us versus dazzling us.

I used to work in academia, and this is an argument that I had many times. “Teaching programming” is really about teaching symbolic logic and algorithmic thinking, and any number of languages can do that without the baggage and complexity of C++. I think, if I was in a similar position again, I’d probably argue for Scheme and use The Little Schemer as the class text.

This is called computational thinking. I’ve found the topic to be contentious in universities, where many people are exposed to programming for the first time. Idealists will want to focus on intangible, fundamental skills with languages that have a simple core, like scheme, while pragmatists will want to give students more marketable skills (e.g. python/java/matlab modeling). Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

Regardless, I think we can all agree C++ is indeed a terrible first language to learn.

Ironically, if you’d asked me ten years ago I would’ve said Python. I suppose I’ve become more idealist over time: I think those intangible, fundamental skills are the necessary ingredients for a successful programmer. I’ve worked with a lot of people who “knew Python” but couldn’t think their way through a problem at all; I’ve had to whiteboard for someone why their contradictory boolean condition would never work. Logic and algorithms matter a lot.
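The actual condition from that whiteboard session is long gone, but a made-up Python stand-in captures the flavor of what I mean:

```python
# A made-up stand-in (hypothetical) for the kind of contradictory condition I mean:
# no number can be both less than 3 and greater than 10, so the branch is dead code.
def check(x: int) -> bool:
    return x < 3 and x > 10

# check(n) is False for every n, so any branch guarded by it can never run
print(any(check(n) for n in range(-100, 100)))  # prints False
```

Spotting that this can never be true is exactly the kind of symbolic-logic skill that “knowing Python” doesn’t automatically give you.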

I think Python is a nice compromise. The syntax and semantics are simple enough that you can focus on the fundamentals, and at the same time it gives students a base to explore the more practical aspects if they want.

Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

Yeah, I feel like universities could do a better job at setting the stage for this stuff. They should explain why the “niche language” is being used, and help the students understand that this will give them a long term competitive advantage over people who have just been chasing the latest fads based on the whims of industry.

Then there is also the additional problem of industry pressuring universities into becoming job training institutions, rather than places for fostering far-looking, independent thinkers, with a deep understanding of theory and history. :/

I’ve been thinking about this a bit lately, because I’m teaching an intro programming languages course in Spring ‘19 (not intro to programming, but a 2nd year course that’s supposed to survey programming paradigms and fundamental concepts). I have some scope to revise the curriculum, and want to balance giving a survey of what I think of as fundamentals with picking specific languages to do assignments in that students will perceive as relevant, and ideally can even put on their resumes as something they have intro-level experience in.

I think it might be getting easier than it has been in a while to square this circle though. For some language families at least, you can find a flavor that has some kind of modern relevance that students & employers will respect. Clojure is more mainstream than any Lisp has been in decades, for example. I may personally prefer CL or Scheme, but most of what I’d teach in those I can teach in Clojure. Or another one: I took a course that used SML in the early 2000s, and liked it, but it was very much not an “industry” language at the time. Nowadays ReasonML is from Facebook, so is hard to dismiss as purely ivory tower, and OCaml on a resume is something that increasingly gets respect. Even for things that haven’t quite been picked up in industry, there are modernish communities around some, e.g. Factor is an up-to-date take on stack languages.

I think one way you can look at it is: understanding how to analyse the syntax and semantics of programming languages can help you a great deal when learning new languages, and even when learning new frameworks (Rails, RSpec, Ember, React, NumPy, regexes, query builders, etc. could all be seen as domain-specific languages embedded in a host language). Often they have weird behaviours, but it really helps to have a mental framework to quickly understand new language concepts.
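Regexes are maybe the clearest everyday case of this: a little language with its own syntax and semantics, smuggled into the host via strings (dates here are just an illustrative example):

```python
import re

# A regex is a tiny embedded language: the pattern below has its own grammar
# (character classes, quantifiers, capture groups) that the host language
# knows nothing about until the re module interprets it.
date = re.compile(r"(\d{4})-(\d{2})-(\d{2})")
m = date.match("2019-03-14")
year, month, day = m.groups()  # ('2019', '03', '14')
```

Once you’ve seen a few grammars, picking up the next one (regexes, query builders, template languages) is mostly pattern recognition.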

Note that I wouldn’t recommend this as a beginner programming language course. For beginners I’d probably go with TypeScript, because if all else fails they’ll have learned something that can work in many places, and it sets them on the path of using types early on. Among the teaching languages, Pyret looks good too, but you’d have to keep students from rejecting it out of hand. But as soon as possible I think it’s important to get them onto something like Coursera’s Programming Languages course (which goes from SML -> Racket -> Ruby, and shows them how to pick up new languages quickly).

I started college in 1998, and our intro CS class was in Scheme. At the time, I already had done BASIC, Pascal, and C++, and was (over)confident in all of them, and I hated doing Scheme. It was different, it was impractical, I saw no use in learning it. By my sophomore year I was telling everyone who would listen that we should just do intro in Perl, because you can do useful things in it!

Boy howdy, was I wrong, and not just about Perl. I didn’t appreciate it at the time, and I didn’t actually appreciate it until years later. It just sorta percolated up as, “Holy crap, this stuff is in my brain and it’s useful.”

I hear this reasoning about teaching tangible skills, but even two or three quarters of Python is not enough for a job, or at least it shouldn’t be. If it is, then employers are totally OK with extremely shallow knowledge.

Seems like it’s mostly just “NumPy happened”. And people started building things on top of NumPy, and then things on top of these things…

Also, machine learning doesn’t need types as much as something like compilers, web frameworks or GUI apps. The only type that matters for ML is matrices of floats, they don’t really have complex objects that need properties and relationships expressed.

Types can be used for more than just stating that layers are matrices. Have a look at the Grenade Haskell library, which lets you fully specify the shape of the network in types: you get compile-time guarantees that the layers fit together, so you don’t get to the end of a few days of training only to find your network never made sense.
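I can’t run Grenade here, but the failure mode it rules out is easy to reproduce at runtime with NumPy: a shape mismatch only surfaces when the multiplication actually executes, which in a training loop could be days in.

```python
import numpy as np

# Two "layers" whose shapes don't compose: (3, 4) @ (5, 2) has mismatched
# inner dimensions (4 vs 5).
a = np.zeros((3, 4))
b = np.zeros((5, 2))
try:
    a @ b
except ValueError as err:
    # With shapes in the types this is a compile error; here it only
    # surfaces when the code actually runs.
    print("shape mismatch caught at runtime:", err)
```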

Sadly Idris is not great wrt. unboxed data types. Lots of the dependent stuff it implements involves lots of boxing and pointer chasing… not the greatest for high performance computing. That’s not inherent to dependent types, but it’s something language designers need to tackle in the future if they want to meet the needs of high performance computing.

Ah well yes, I was thinking more about an API layer for stuff like Tensorflow or Torch, where the Idris type system validates a DAG of operations at compile time and then it’s all translated with the bindings.

The exascale-project languages like Chapel were my guess, since they (a) make parallelism way easier for many hardware targets and (b) were advertised to researchers in HPC labs. Didn’t happen. Still potential there, as multicore gets more heterogeneous.

I guess it’s inherent to modern hardware. The reason for this deep learning hype explosion is that processors (fully programmable GPUs, SIMD extensions in CPUs, now also more specialized hardware) have gotten very good at doing lots of matrix math in parallel, and someone rediscovered old neural network papers and realized that with these processors, we can make the networks bigger, feed them “big data”, and get pretty good classifiers.

On top of being cheaper. You can get what used to be an SGI Origin or Onyx2 worth of CPUs and RAM for new-car prices instead of mini-mansion prices. Moore’s law plus commodity clusters lowered the barrier to entry a lot.

It is inherent to problems that can be represented in linear algebra. But many problems have different representations, like decision trees for example. Regression and neural networks can mostly be written as matrix operations.
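To make the regression case concrete, ordinary least squares really is just one matrix computation (the data here are hypothetical, chosen to lie exactly on a line):

```python
import numpy as np

# Ordinary least-squares regression as a single matrix computation:
# solve X w ≈ y for the weight vector w.
X = np.array([[1.0, 0.0],   # a column of ones for the intercept, then the x values
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])  # points lying exactly on y = 1 + 2x
w, *_ = np.linalg.lstsq(X, y, rcond=None)
# w comes out as approximately [1.0, 2.0]: intercept 1, slope 2
```

A decision tree, by contrast, is a branching structure that doesn’t reduce to one such formula.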

I concede that matrices are the most fundamental and optimizable representation for ML. They are literal grids of values, after all; you can’t get much denser than that! However, is it still possible that they do not always lend themselves to higher-level modeling?

For instance, any useful general-purpose computation boils down to some equivalent of machine code, or a Turing machine for the theoretically-minded. Despite this, we purposefully code in languages that abstract away from this fundamental, optimizable representation. We sacrifice some efficiency* in order to more effectively perform higher-order reasoning. Could (or should) the same be done for ML?

(*Note: sometimes, by letting in abstractions, we actually find new optimizations we hadn’t thought of before, as they require a higher-level environment to conceive of and implement conveniently & reasonably. See parallelism-by-default and lazy streams, as in Haskell. Parsing is yet another example of something that used to be done on a low-level, but that is now done more efficiently & productively due to the advent of higher-level tools.)

ML is not limited to neural networks. Other ML models use different representations.

Matrices are an abstraction as well. Nothing says they have to be represented as dense arrays; in fact many libraries can use sparse arrays as needed. And performance comes not only from the denser representation but from other effects, like locality, and from less overhead compared to all the boxing/unboxing of higher-level type systems or the method dispatching of most OOP languages.

There is more abstraction at various levels. Some libraries allow the user to specify a neural network in terms of layers. Also matrices are algebraically manipulated as symbolic variables and that makes formulas look simpler.

I guess a few libraries support some kind of dataflow-ish programming, by connecting boxes in a graph and having variables propagate as in a circuit. That is very close to the algebraic representation, if you think of the formulas as abstract syntax trees.
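A toy version of that idea, with made-up class names: the formula is literally a little syntax tree whose nodes evaluate like wired-up boxes:

```python
# Toy dataflow graph: a formula represented as an abstract syntax tree,
# evaluated by propagating values through the nodes like a circuit.
class Var:
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, env):
        return self.left.eval(env) + self.right.eval(env)

class Mul:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, env):
        return self.left.eval(env) * self.right.eval(env)

# y = a * x + b, wired up as boxes in a graph
y = Add(Mul(Var("a"), Var("x")), Var("b"))
y.eval({"a": 2, "x": 10, "b": 3})  # → 23
```

The real frameworks add gradients, scheduling, and device placement on top, but the representation is essentially this.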

Maybe more abstraction could be useful in defining not only the models but all the data ingestion, training policies, and production/operations as well.

Your complaints about those mean old trolls are well founded enough, but I think they are not so much about “the Linux community” (whatever that might be) and more about Reddit. With that in mind, please take some time to acculturate. This place is very specifically not Reddit, nor HN.

That was my read as well, and also my own experience with reddit (and /r/programming even, which is supposed to be one of the better subreddits).

Not sure what to conclude, other than that once a community reaches a certain size, and the long tail that ends up being very loud and caustic grows enough to be problematic, you start to see these kinds of things.

It is, but there’s still a low background hum of snarky and mean comments, in spite of the overall excellent tone. Some very valued members of the community avoid Reddit in favor of other places, like the Discourse instance. I think it’s fair to say that /r/rust is the exception that proves the rule, and will be effectively shuttering /r/xi_editor (not that it was ever very active).

/r/ProgrammingLanguages/ is also a pretty good one. But it has to be set up like that from the start, with strong moderation. It always depresses me when anything gender equality related takes a quick dive to the bottom on /r/programming/… :(

It occurs outside Reddit [1]; Reddit is just one of the worst exponents of it. There is a subset of the Linux community who feel like they are on a crusade against Microsoft, Apple, proprietary software, or whatever. I think Linux is attractive to some of these groups because of some strong ideological tendencies in Linux (GNU, ‘the true UNIX philosophy’, etc.). There is nothing wrong with these ideologies: most people are empathetic towards others and understand that neither Unix nor running a fully free system is practical for everyone. But once you take away the empathy, you get pointless ideological and/or turf wars.

It is not productive to engage in these discussions. What is the point of a discussion where no one wants to be convinced or is interested in the other person’s perspective?

[1] I have been yelled at on some Linux forums because I also use a Mac, by people who never contributed a single line of code or documentation to a FLOSS project.

As far as I’m concerned, these people are not part of the Linux community and they’re not wanted. If you feel it is appropriate to belittle someone, or tell someone to “go kill themselves” for their choice of software, operating system, or their opinions, do us all a favour and f**k off.

When you quoted me there, you left out the point I was trying to make. To clarify, I do not believe there is such a thing as “the Linux community”, any more than there is a “community” of Toyota drivers or people with colorful tattoos. Sure, the Linux kernel developers are a community of sorts: they have to work together. Distro maintainers have communities; maybe even the user base could be included, for some smaller and quirkier distros. But mere consumer choice does not grant anyone community membership status, under any sane definition of the word.

Another thing: if you wander into any collection of people and start mocking their shared values, you can reasonably expect some hostility. Some will express it in a professional and mature fashion, others not so much. This age-old fact tells you nothing about Linux, Reddit, or even the Internet.

Yeah, there are many things in Ceylon I like more, semantics-wise, especially when it comes to the type system. I just wish I could use more of Kotlin’s syntax in Ceylon… sadly the C-style type syntax was the main thing I couldn’t stomach. It seems strange that a type-rich language like Ceylon went with it :/

Yeah, it was disheartening to see that they couldn’t realize that “Type ident” works very poorly with generics, despite it being readily apparent in their own language.

These problems are the reason why most modern languages go with “ident: Type” instead.

I guess the other changes felt to many people like just a useless hurdle to jump over (like “actual” and “formal”).

Plus the whole nonsense around wanting all modifiers to be annotations and introducing a mandatory “;” to work around the issues. Don’t get me wrong, I think it’s a good idea to get rid of the arbitrary distinction between syntax for modifiers and syntax for annotations, but trading in semicolon inference for it is a very bad deal.

That was my impression when all three languages were young years ago. I can’t speak at all for “modern” Rust, though I have used Go fairly extensively.

(Rust seems to have changed a lot in that time too, which struck me as…unfortunate? I was never a Rustician, but I gave up even cursorily following the language during the times of great change. Go, by comparison, changed little – though it’s a smaller language.)

Ceylon’s type system allows you to express things in a much more natural fashion for the problems that actually come up a lot, IMHO. The way union/intersection typing and flow typing is handled are quite elegant. Mypy, of all places, is starting to kinda match up (though the underpinnings are not as clear) and is really pleasant to use.
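For the curious, here is roughly what that flow-typing style looks like on the mypy side (the function and names are just for illustration): after an isinstance check, mypy narrows the union in each branch, much like Ceylon does.

```python
from typing import Union

def describe(value: Union[int, str]) -> str:
    # In the branch below, mypy narrows `value` from Union[int, str] to int,
    # so the arithmetic type-checks...
    if isinstance(value, int):
        return f"number: {value + 1}"
    # ...and here it is narrowed to str, so .upper() type-checks with no cast.
    return value.upper()

describe(41)    # → 'number: 42'
describe("hi")  # → 'HI'
```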

There’s also the nice part that everything in Ceylon is a class: there are no primitive types. Additionally, all of the fundamental classes are derivable within the type system, thanks to things like enumerated classes. That means that even classes like Integer are, in principle, derivable within the type system itself.

The emphasis on declarative data, and the syntax support for it, is likewise nice. I know that Rust’s macro system allows you to do even more, but it seems more fiddly to me (at least from my cursory glances). Go’s module-level init code lets you do basically anything, but it’s not as “declarative”.

Go ticks a whole lot of my boxes: I’ve always loved Rob Pike’s work on concurrency and the Squeak papers are go-tos for when I need something interesting to reread, and I’ve always loved Oberon…and Go is concurrent Oberon with some C thrown in. I’ve written some fairly hefty production Go code and contributed patches to Google’s gopacket library. It’s a great language but I still often end up writing in C for the low-level stuff and Python for the high-level stuff…maybe it’s just because those are the languages I’m most fluent in, though, I don’t know.

Rust seems very…large to me. I get why, and I understand and I’m not complaining, but it just feels “big”. Like C++. I will readily admit that I’ve not looked at Rust in years and never wrote any real code with it so I will happily change my mind. I’ve got a copy of The Rust Programming Language sitting in my Amazon cart and will give it a read when things die down at work.

Of course, the three languages target different things: Ceylon is an application language, Rust is a systems language, and Go is somewhere in between.

(Rust seems to have changed a lot in that time too, which struck me as…unfortunate? I was never a Rustician, but I gave up even cursorily following the language during the times of great change. Go, by comparison, changed little – though it’s a smaller language.)

Rust was always a very open project and was announced at the same time as Go, but in a very different state: as a research project. The massive changes stem from the fact that it wasn’t even remotely done back then.

I find that great: of the languages you mention, it’s the only one that was very open to contributions, and all the decisions and changes can be traced.

That did make it unusable for people wanting a stable language at the time, I agree with that.

I’m a bit uncomfortable with React-powered front-ends at the moment. I’ve looked, however, at Hugo and Eleventy. Hugo is crazy fast, but I’m Go-illiterate, so I have little chance of extending it if needed. Eleventy is, in its principles, very close to my heart, but I haven’t had the chance to try it out yet. I had something similar to it in the works, but I’m not sure when I’ll be able to pick it up again.

Docusaurus, Vuepress, etc. are server-side rendered, so you don’t pay the overhead in the browser. The advantage is that you get to hook into other front-end tooling like KaTeX, which is harder with a non-front-end language. This was one of the main issues I had when looking into Rust solutions (and if I’m honest, I’d rather be writing JS/TypeScript than suffer writing Go). Also, if you do eventually need dynamic content, having React on hand is a big plus. Curious to know why you are uncomfortable with it :)

I don’t know exactly, it’s kind of inexplicable because I otherwise love React :-) I guess it’s skepticism/reluctance about batteries-included generators which throw every feature at you at once; not that this applies to Docusaurus specifically, but I think my first interaction with Gatsby — with its GraphQL and what-not — kind of solidified this position.

I looked briefly at the source and saw that it compiles to Scheme; I couldn’t look more carefully because I’m on my phone. Does it compile to Scheme only? Is this a permanent target, or just to speed up development? I thought compiling to Scheme was very interesting. Very cool! Did Idris compile to Scheme too?

I’m guessing it’s probably to speed up dev. Idris was always easy to port to new platforms, and it seems the Chez Scheme backend was faster than the Idris 1.0 backend, so it makes sense to leverage it for the time being.

Little type inference (though it’s getting better with var). Nullable by default. No support for defining tagged union types with pattern matching and exhaustiveness checking. Verbose record type definitions. Clunky lambda types. I could go into more things, but fixing those would at least get you closer to Elm’s level.
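To make the tagged-union point concrete, here’s a sketch (the Shape type is hypothetical) of what you end up hand-rolling in languages without them. Crucially, nothing checks you handled every case, which is exactly what Elm’s exhaustiveness checking gives you for free:

```python
from dataclasses import dataclass
from typing import Union

# Hand-rolling what Elm writes as: type Shape = Circle Float | Rect Float Float
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Union[Circle, Rect]

def area(shape: Shape) -> float:
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Rect):
        return shape.width * shape.height
    # Exhaustiveness is on us: add a new variant and nothing here warns you.
    raise TypeError(f"unhandled shape: {shape!r}")

area(Rect(2.0, 3.0))  # → 6.0
```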

My hope for the future of Go is that it will continue to embrace simplicity in the face of cries for complexity.

I agree. I used to be a big proponent of adding generics to Go. But since Rust has taken off, I’d like it if Go and Rust explored two different approaches: Go with a drastic approach to simplicity plus GC, and Rust with parametric polymorphism plus memory safety without GC. So with that in mind, I wouldn’t mind if Go did not add generics.

I am using Rust as my primary language, but I would like there to be a fallback option in case Rust or Rust libraries become over-engineered. This is just personal opinion and I don’t want to be controversial, but I think Haskell went from a simple functional language to type wizardry. Scala followed a similar path, except that it started as a more complex language, aiming to be a functional language while simultaneously implementing many OO concepts. Type wizardry can be fun, but it typically results in libraries that are hard to use by diverse teams or sets of contributors. Rust libraries are generally OK in this respect, but there is definitely a lot of room for over-engineering with generics and traits.

The thing that I miss the most in Go, besides less repetitive error handling, is deterministic destruction, especially when binding to C. There is no guarantee when (or if) finalizers run, and asking that callers call or defer a Close() function is a bit annoying. Destructors and GC are not mutually exclusive. A minor shortcoming is the lack of more powerful sum types.

Outside the language, it would be nice to have a slower compiler that inlines and optimizes more aggressively. As far as I understand, gccgo’s performance is currently not much better than the native Go compiler’s.

But everyone probably has a different set of wishes, and if they were all added to Go, we’d end up with something that is not Go. So it’s probably best for Go’s designers to be conservative ;).

I’m getting more and more disenchanted with Rust’s memory safety claims. Seems like every week now we are hearing about another overflow in a Rust library due to unsafe. If you build a footgun, someone will use it.

That is a good sign. The bad things you hear in the news are those that are rare, and thus not dangerous. The really dangerous stuff is not reported in the news, because those bad things happen all the time.

For comparison: statistically, you will probably die from some form of cancer in a hospital. That does not get reported in the news. Being killed by a plane crashing into a skyscraper? That is newsworthy, but nothing to be afraid of.

When was the last time an overflow in a C program was on the lobsters frontpage?

Compared to the status quo (C, C++), Rust is in a far better position, and far easier to audit. It’s great that some of these other bugs are being sorted out, but it would be far worse in a language with no separation between safe and unsafe code. Yes, I would love to see a systems language that has been formally verified from the ground up, with dependent types so we can prove properties about some of the trickier low-level stuff, but this is hard to do and will take more years, and even then it would be prudent to install escape hatches for practicality’s sake.

i think even before that, ocaml has been making a steady if gradual comeback over the last few years. opam, for instance, has been a pretty big boost for it (never underestimate the value of a good package manager in growing an ecosystem!), jane street’s dune is really exciting since build systems have always been a bit of a weak spot, and more recently, bucklescript has been attracting a lot of attention among the webdev crowd even before reason came along (and now, of course, reason/bucklescript integration is a pretty big thing)

Sadly I’ve never been able to figure out Opam… it seems to mutate the global package environment every time you want to build something, just like Cabal does (although they are fixing that with the cabal new-* commands). This is a massive pain if you want to work on different projects at once, and makes it hard to remember what state your build is in. I wish they would learn a bit from Cargo and Yarn and make it more user-friendly. There’s so much good stuff in OCaml and Coq that I’d love to play around with, but it’s wrapped up in a painful user experience. :(

are you using opam to develop projects foo and bar simultaneously, and installing foo’s git repo via opam so that bar picks it up? that is indeed a pain; i asked about it on the ocaml mailing list once and was recommended to use jbuilder (now dune) instead, which works nicely.

“Now if you’ve done a cursory search for Baader-Meinhof, you might be a little confused, because the phenomenon isn’t named for the linguist that researched it, or anything sensible like that. Instead, it’s named for a militant West German terrorist group, active in the 1970s. The St. Paul Minnesota Pioneer Press online commenting board was the unlikely source of the name. In 1994, a commenter dubbed the frequency illusion “the Baader-Meinhof phenomenon” after randomly hearing two references to Baader-Meinhof within 24 hours. The phenomenon has nothing to do with the gang, in other words. But don’t be surprised if the name starts popping up everywhere you turn [sources: BBC, Pacific Standard].”

I also seem to see it more often recently. Either it’s Baader-Meinhof too (umm, or “reverse Baader-Meinhof”? it seems I’m getting pulled in thanks to recent exposure?), or, as I slightly suspect, ReasonML may have contributed to some increase in OCaml public awareness. But maybe also Elm, and maybe F# too?

Is it just me, or are other people also bothered by the overuse of emoticons and the low quality of this writing? I wish that style of writing would stay confined to SMS and not bleed into technical articles.

Maybe it’s just me, but when I see “Oh 💩, it compiles to JavaScript.” I just have to think of trying-to-be-cool parents, which I find tiring. And that’s setting aside that I don’t believe vulgar language should be used at all in written documents.

I am reacting to the overuse of emoji as a way to get personal. It’s good, but companies tend to use it to promote a product, so I have become allergic.
Made with :love: by $BigCorp.
Put a :tiger: in your :engine:
…

On the other hand, I see no reason against putting emoji on one’s own blog, readers are not pushed straight onto the posts after all…

It didn’t bother me, since it was at least a different style. I like seeing a mix of styles. If it annoyed you, I think you’ll like some of his comments on the HN thread, which get right to the point. Specifically, he has a list of what’s bad and what’s great.

Indeed, these are more substantial. I might be a little bit burned out by the “code ninjas” out there and the impression that software engineering is a dying art. Now you can do a two-month bootcamp on React and VS Code and get a job. Even Google stopped asking for CS degrees.

Careful. In my day job I work on implementing dependent type systems, with an eye for improving low level binary format parsing by leveraging formal verification. And yet I dropped out of CS and I use VS Code. Opening up other pathways to people getting into programming does not mean that we have to discount the importance of a high quality CS education. We would also be wise to not assume that a CS degree correlates with a good aptitude for programming.

Yes, you’re absolutely right, and I’d really like a wider range of people to get into software engineering, CS degree or not. However, from my anecdotal experience, I find there is a growing gap in knowledge and values, and I’m wondering why that seems to be.

In fact, I find that many university courses are actually doing more harm than good, peddling decades-old software engineering practices (like the gospel of Java, OOP, imperative programming, and UML) rather than teaching core principles of programming languages, mathematics, and algorithms that age more slowly and are critical to encouraging and inspiring the next generation of CS researchers. This is partly industry’s fault, and partly the fault of universities.

I see industrial programming as more of a vocational trade, and employers should shoulder more of the burden of teaching up-to-date best practices. Let the universities do what they do well: theory. Don’t expect CS graduates to be excellent programmers from day one, but do expect them to eventually become much more effective and nimble in the long run than an entry-level boot-camp employee (depending on that employee’s desire for self-education). By the same token, I think universities should not get caught up in chasing the treadmill of the latest technology, and should be up front with prospective students about that.

This is really spot-on. I studied in France, where the curriculum is much more theoretical (hence less dependent on the technology of the day), but living and working in North America, I see that a lot of CS graduates are trained in specific tools and technologies without a good understanding of the underlying principles. This makes new recruits ready-to-use technicians if you use the technology of the day, but if you use anything exotic, both parties face a lot of pain.

The best we can do is try to swallow our sadness and frustration, and do our best to inspire and excite the next generation of programmers. I find new programmers are often far more receptive to more interesting ideas, including formal verification, rich type systems, and functional programming. I always treat a new programmer as a great opportunity: we have the power to shape their future directions through the doorways we open to them and the opportunities we provide.

Note that Epigram has been dead for a while; Idris is its spiritual successor (I believe it actually evolved from an attempt to build an Epigram compiler). Idris is explicitly aiming to be a “real” programming language; Agda is very similar, but is more often used from the mathematical/logical side of Curry-Howard, rather than the programming side.

Neither Idris nor Agda has the 2D syntax of Epigram, but they both have powerful Emacs modes that can fill in pieces of code (Haskell’s “typed holes” are the same idea, but, as this paper demonstrates, Haskell’s types are less informative).

Indeed, I suppose it’s that Idris evolved from what was intended to be the back end of Epigram. It certainly owes a lot to Conor McBride and James McKinna’s work on Epigram. I don’t know if “real programming language” is exactly the right way to put it, though, so much as being the language I wanted to have to explore the software development potential of dependent types. Maybe “real” will come one day :).

Do you have a writeup about it? I’m wondering why you’re replacing Idris, which is somewhat established already. I mean, that is probably exactly why you’re replacing it, but I still wonder what concretely necessitated a whole new language instead of a 2.0.

It isn’t a whole new language, it’s a reimplementation in Idris with some changes that experience suggests will be a good idea. So it’s an evolution of Idris 1. I’ll call it Idris 2 at some point, if it’s successful. It’s promising so far - code type checks significantly faster than in Idris 1, and compiled code runs a bit faster too.

Also, I’ve tried to keep the core language (which is internally called ‘TTImp’ for ‘type theory with implicits’) and the surface language cleanly separated. This is because I occasionally have ideas for alternative surface languages (e.g. taking effects seriously, or typestate, or maybe even an imperative language using linear types internally) and it’ll be much easier to try this if I don’t have to reimplement a dependent type checker every time. I don’t know if I’ll ever get around to trying this sort of thing, but maybe someone else will…

I started this because the Idris implementation has a number of annoying problems (I’ll go into this some other time…) that can only be fixed with some pretty serious reengineering of the core. So I thought, rather than reengineer the core, it would be more fun to see (a) if it was good enough to implement itself, and (b) if dependent types would help in any way.

The answer to (a) turned out to be “not really, but at least we can make it good enough” and to (b) very much so, especially when it comes to name manipulation in the core language, which is tricky to get right but much much easier if you have a type system telling you what to do.

I don’t have any writeup on any of this yet. It’ll happen eventually. (It has to, really - firstly because nobody ever made anything worthwhile on their own so a writeup is important for getting people involved, and secondly because it’s kind of what my job is :))

Fixed! This is mine: https://github.com/pikelet-lang/pikelet - scratching my itch of Rust not being enough like Idris, and Idris not being designed with low-level systems programming in mind. Probably won’t amount to much (it’s rather ambitious), but it’s been fun playing around, learning how dependent type checkers work! I still need to learn more about what Epigram and Idris do, but it takes several passes of deepening to really get a handle on all the stuff they learned. I’m probably making a bunch of mistakes that I don’t know about yet!

Nice. I’m starting to realize how I wasn’t the only one to have thought “wouldn’t it be nice to have a purely functional systems language with cool types” :D

What I wanted to make was very similar to Idris, but I would’ve put way more focus on lower-level stuff. Honestly, my way of combining it was likely misguided as I was a total rookie back then (still am, but comparatively, I at least know how much I didn’t know…)

Thinking about how to do imports in Pikelet! I recently merged support for universe ‘shifting’ in Pikelet too, but I’m still not happy with my documentation on it. My technical writing still needs some practice before it meets my own expectations…

I would argue that message passing, with the objects deciding what to do about a message received at runtime captures the essence of dynamic binding.

However, I agree with your point. OO is somewhat amorphous at this point, and one may very well implement a language without late binding and call it OO.

Edit: To clarify what I meant: dynamic binding is where the execution path, rather than the lexical ordering, determines how a method or variable is resolved. When an object receives a message, the message resolution is determined by the execution path that involved that object. That is, one cannot predict in advance, using lexical knowledge alone, what method will be called.
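In Go terms (a hedged sketch; the types here are invented for illustration), interface method calls behave exactly this way: the method that runs is chosen from the value’s runtime type, not from anything visible lexically at the call site.

```go
package main

import "fmt"

// Greeter's methods are late-bound: which Greet implementation runs is
// decided by the concrete type stored in the interface value at runtime.
type Greeter interface {
	Greet() string
}

type English struct{}
type French struct{}

func (English) Greet() string { return "hello" }
func (French) Greet() string  { return "bonjour" }

func main() {
	// Both calls below are lexically identical; the execution path that
	// put a value into g determines which method is resolved.
	for _, g := range []Greeter{English{}, French{}} {
		fmt.Println(g.Greet())
	}
}
```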

With regards to blocking discussion, I think the logic is something like this:

The core developers have a roadmap for Elm development and they want to stick to it

They tried including more developers but haven’t found an effective way to deal with more contributors

Therefore, they have limited time

They can spend this time rehashing the same arguments and debating half-baked ideas, or they can spend this time following their roadmap, but not both.

I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

With regards to insufficient communication, I think the main thing to understand is that Elm is an experiment in doing things differently, and it’s causing a clash with conventional understanding. Elm is about getting off the upgrade treadmill. So, for example, when a new release like Elm 0.19 comes out, it happens without public alpha and beta phases, and it’s not actually the point where you go and immediately migrate your production code to it! It’s only the point to start experimenting with it, the point where library and tool authors can upgrade, and so on. (There was quite a bit of activity prior to release anyway, it just wasn’t advertised publicly.)

Finally, the most contentious example of a “feature” getting removed is the so-called native modules (which basically means the ability to have impure functions written in JS in your Elm code base). As far as I can tell (having followed Elm since 0.16), native modules were always an internal implementation detail and their use was never encouraged. Nevertheless, some people started using them as a shortcut anyway. However, they were a barrier to enabling function-level dead code elimination, which is the main feature of the 0.19 release, so the loophole was finally closed. Sure, it’s inconvenient for people who used them, but does anyone complain when, say, Apple removes an internal API?

Ultimately, Elm is just an open source project and the core maintainers don’t really owe anybody anything - no contracts are entered into and no funds are exchanged. They can do whatever they want.

Of course, there is a question of the long term effects this approach is going to have on the community. Will it alienate too many people and cause Elm to wither? Will Elm remain a niche language for a narrow class of applications? That remains to be seen.

but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time.

Over the years, I have come to believe this is a vital part of building a community. Using draconian tactics to stomp out annoying comments is using power unwisely and, worse yet, cripples your community in multiple ways.

The first thing to remember is that when a comment (entitled, uninformed or otherwise) comes up repeatedly – that is a failure of the community to provide a resource to answer/counter/assist with that comment. That resource can be a meme, a link, an image, a FAQ, a full on detailed spec document, whatever. This type of thing is part of how a community gets a personality. I think a lot of the reason there are a bunch of dead discourse servers for projects is too stringent policing. You should have a place for people to goof off and you have to let the community self police and become a real community. Not entirely, obviously, but on relevant topics.

This constant repetition of questions/comments is healthy and normal; it is the entrance of new people to the community. More importantly, it gives people who are just slightly deeper in the community someone to help, someone to police, someone to create resources for, even to a degree someone to mock (reminding them they aren’t THAT green anymore) – a way to be useful! This is a way to encourage growth: each “generation” of people helps the one that comes after them – and it is VITAL for building up a healthy community. In a healthy community the elders will only wade in occasionally and sporadically to set the tone and will focus on the more high-minded, reusable solutions that move the project forward. Leave the minor stuff to be done by the minor players; let them shine!

Beyond being vital to build the community – it is a signal of where newcomers are hurting. Now if documentation fixes the problem, or a meme… terrific! But if it doesn’t, and if it persists … that is a pain point to look at – that is a metric – that is worth knowing.

Yeah, each one of these people gives you a chance to improve how well you communicate, and to strengthen your message. But shutting down those voices runs the risk of surrounding yourself with ‘yes people’ who don’t challenge your preconceptions. Now, it’s entirely up to the Elm people to do this, but I think they are going to find it harder to be mainstream with this style of community.

Note that I’m perfectly fine with blocking and sidelining people who violate a CoC, or are posting hurtful, nonconstructive comments. You do have to tread a fine line in your moderation though. Being overly zealous in ‘controlling the message’ can backfire in unpredictable ways.

Anyway, I continue to follow Elm because I think the designer has some excellent ideas and approaches, even if I do disagree with some of the ways the community is managed.

even if I do disagree with some of the ways the community is managed.

I don’t think the two jobs (managing the community and managing the project) should necessarily be done by the same person. I actually think it probably shouldn’t. Each job is phenomenally challenging on its own – trying to do both is too much.

But, they do it on his behalf? This policy of locking and shutting down discussions comes from somewhere. That person directly or indirectly is the person who “manages” the community, the person who sets the policies/tone around such things.

I’ll add the perspective of someone who loved Elm and will never touch it again. We’re rewriting in PureScript right now :) I’m happy I learned Elm, it was a nice way of doing things while it lasted.

In Elm you may eventually hit a case where you can’t easily wrap your functionality in ports, the alternative to native modules. We did, many times. The response on the forum and other places is often to shut down your message, to give you a partial version of that functionality that isn’t quite what you need, to tell you to wait until that functionality is ready in Elm (a schedule that might be years!), or until recently to point you at native modules. This isn’t very nice. It’s actually very curious how nice the Elm community is unless you’re talking about this feature, in which case it feels pretty hostile. But that’s how open source rolls.

Look at the response to the message linked in the story: “We recently used a custom element to replace a native module dealing with inputs in production at NoRedInk. I can’t link to it because it’s in a proprietary code base but I’ll be writing and speaking about it over the next couple months.”

This is great! But I can’t wait months in the hope that someone will talk about a solution to a problem I have today. Never mind releasing one.

Many people did not see native modules as a shortcut or a secret internal API. They were an escape valve. You would hit something that was impossible without large efforts that would make you give up on Elm as not being viable. Then you would overcome the issues using native modules which many people in the community made clear was the only alternative. Now, after you invest effort you’re told that there’s actually no way to work around any of these issues without “doing them the right way” which turns out to be so complicated that companies keep them proprietary. :(

I feel like many people are negative about this change because it was part of how Elm was sold to people. “We’re not there yet, but here, if we’re falling short in any way you can rely on this thing. So keep using Elm.”

That being said, it feels like people are treating this like an apocalypse, probably because they got emotionally invested in something they like and they feel like it’s being changed in a way that excludes them.

You’re right though. Maybe in the long term this will help the language. Maybe it will not. Some people will enjoy the change because it does lead to a cleaner ecosystem and it will push people to develop libraries to round out missing functionality. In the short term, I have to get things done. The two perspectives often aren’t compatible.

I’m personally more worried about what will happen with the next major change where Elm decides to jettison part of its community. I don’t want to be around for that.

If people encouraged you to use native modules, then that was unfortunate.

I’m not sure I understand the issue with custom elements. Sure, they’re a bit complicated and half-baked, but it certainly doesn’t require a research lab to use them (in fact, I’ve just implemented one now).

I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.

I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.

I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

I’ll go one further and say I’m quite glad those discussions get locked. Once the core team has made a decision, there’s no point in having angry developers fill the communication channels the community uses with unproductive venting. I like the decisions the core team is making, and if those threads didn’t get locked, I’d feel semi-obligated to respond and say that I’m in favor of the decision, or I’d feel guilty not supporting the core devs because I have other obligations. I’m glad I don’t have to wade through that stuff. FWIW, it seems like the community is really good at saying “We’re not going to re-hash this decision a million times, but if you create a thread about a specific problem you’re trying to solve, we’ll help you find an approach that works” and they follow through on that.

I don’t have a lot of sympathy for folks who are unhappy with the removal of the ability to compile packages that include direct JS bindings to the Elm runtime. For as long as I’ve been using Elm, the messaging around that has consistently been that it’s not supported, it’s just an accidental side effect of how things are built, and you shouldn’t do it or you’re going to have a bad time. Now it’s broken and they’re having a bad time. This should not be a surprise. I also think it’s a good decision to actively prohibit it. If people started using that approach widely, it would cause a lot of headaches for the community and hamstring the core team’s ability to evolve the language.

Do you believe your perspective would change if you didn’t agree with the developers’ decisions? Obviously I have a different perspective, but I am curious whether you think you would still hold this view if you were on the other side.

Additionally, just because the core team has “made a decision” doesn’t mean it wasn’t a mistake, nor that it is permanent. Software projects make mistakes all the time, and sometimes the only way to really realize the mistake is to hear the howls of your users.

I’m pretty confident I wouldn’t change my position on this if I wasn’t in agreement with the core team’s choices. I might switch to PureScript or ReasonML, if I think the trade-offs are worth it, but I can’t see myself continuing to complain/vent after the decision has been made. I think appropriate user input is “I have this specific case, here’s what the code looks like, here’s the specific challenge with any suggested alternative.” If the core team decides to go another way after seeing their use cases, it’s clear we don’t have the same perspective on the trade-offs for those decisions. I can live with that. I don’t expect everybody to share my opinion on every single technical decision.

As an example, I use Clojure extensively at work, and I very much disagree with Rich Hickey’s opinions about type systems, but it’s pretty clear he’s thought through his position and random folks on the internet screaming differently isn’t going to change it, it’ll just make his job more difficult. I can’t imagine ever wanting to do that to someone.

sometimes the only way to really realize the mistake is to hear the howls of your users

It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl. They can usually speak pretty clearly about how decisions impact their work and are able to move on when it’s clear there’s a fundamental difference in goals.

I think what bothers me the most about the core team’s approach to features is not that they keep removing them, but that for some they do not provide a valid alternative.

They’ll take away the current implementation of native modules, but coming up with a replacement is too hard, so even though the core libraries can use native code, us peasants will have to do without.

They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard, so even though the base library has a few magic typeclasses for its convenience, us peasants will have to make do with mountains of almost duplicated code and maybe some code generation tool.

So where does that leave Elm right now? Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language, because of all the breaking changes between releases, all the things that haven’t been figured out yet, and the fact that the response to requests for ways to do things that are necessary in real code is either “you don’t need that”, which I can live with most of the time, or “deal with it for the moment”, which is unacceptable?

I think Elm should make it clearer that it’s still an unfinished project.

They’ll take away the current implementation of native modules, but coming up with a replacement is too hard

They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard

I don’t think this is a fair characterization of the core team’s reasons for not supporting those features. I’ve read/watched/listened to a lot of the posts/videos/podcasts where Evan and other folks discuss these issues, and I don’t think I’ve ever heard anyone say “We can’t do it because it’s too difficult.” There’s almost always a pretty clear position about the trade-offs and motivations behind those decisions. You might not agree with those motivations, or weigh the trade-offs the same way, but it’s disingenuous to characterize them as “it’s too hard”.

I exaggerate in my comment, but what I understood from the discussions around rank n polymorphism I’ve followed is basically that Evan doesn’t think any of the existing solutions fit Elm.

I understand that language design, especially involving more complex features like this, is a hard issue, and I’m sure Evan and the core team have thought long and hard about this and have good reasons for not having a good solution yet. But the problem remains that hard things are hard, and in the meantime the compiler can take an escape hatch while the users cannot.

Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language

I always struggle with this line of questioning because “incomplete and broken” describes pretty much all of the web platform in the sense that whenever you do non-trivial things, you’re going to run into framework limitations, bugs, browser incompatibilities and so on.

All you can do is evaluate particular technologies in the context of your specific projects. For certain classes of problems, Elm works well and is indeed better than other options. For others, you’ll have to implement workarounds with various degrees of effort. But again, I can say the same thing for any language and framework.

Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

Taking a tangent, the main problem is that Elm is being built on top of the horrifically complex and broken foundation that is the web platform. It’s mostly amazing to me that anything works at all.

Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

Having worked with ClojureScript on the front-end for the past 3 years, I strongly disagree with this statement. My team has built a number of large applications using Reagent and whenever new versions of ClojureScript or Reagent come out all we’ve had to do was bump up the versions. We haven’t had to rewrite any code to accommodate the language or Reagent updates. My experience is that it’s perfectly possible to build robust and stable tools on top of the web platform despite its shortcomings.

I have the opposite experience. Team at day job has some large CLJS projects (also 2-3 years old) on Reagent and Re-Frame. We’re stuck on older versions because we can’t update without breaking things, and by nature of the language it’s hard to change things with much confidence that we aren’t also inadvertently breaking things.

These projects are also far more needlessly complex than their Elm equivalents, and also take far longer to compile so development is a real chore.

Could you explain what specifically breaks things in your project, or what makes it more complex than the Elm equivalent? The Reagent API has had no regressions that I’m aware of, and re-frame had a single breaking change, where the original reg-sub was renamed to reg-sub-raw in v0.7 as I recall. I’m also baffled by your point regarding compiling. The way you develop ClojureScript is by having Figwheel or shadow-cljs running in the background and hotloading code as you change it. The changes are reflected instantly as you make them. Pretty much the only time you need to recompile the whole project is when you change dependencies. The projects we have at work are around 50K lines of ClojureScript on average, and we’ve not experienced the problems you’re describing.

I think the ease of upgrades is a different discussion. There is a tool called elm-upgrade which provides automated code modifications where possible. That’s pretty nice, I haven’t seen a lot of languages with similar assistance.

My point was, you cannot escape the problems of the web platform when building web applications. Does ClojureScript fully insulate you from the web platform while providing all of its functionality? Do you never run into cross-browser issues? Do you never have to interoperate with JavaScript libraries? Genuinely asking - I don’t know anything about ClojureScript.

My experience is that the vast majority of issues I had with the web platform went away when my team started using ClojureScript. We run into cross-browser issues now and then, but it’s not all that common, since React and Google Closure do a good job handling cross-browser compatibility. Typically, most of the issues that we run into are CSS-related.

We interoperate with JS libraries where it makes sense; however, the interop is generally kept at the edges and wrapped into libraries providing idiomatic, data-driven APIs. For example, we have a widgets library that provides all kinds of controls like date pickers, charts, etc. The API for that library looks similar to our internal widgets API.

Let me clarify my thinking a bit. For a certain class of problems, Elm is like that as well. But it certainly has limitations - not a huge number of libraries etc.

However, I think that pretty much everything web related is like that - limitations are everywhere, and they’re much tighter than I’d like. For example, every time I needed to add a date picker, it was complicated, no matter the language/framework. But perhaps your widgets library has finally solved it - that would be cool!

So I researched Elm and got a feel for its limitations, and then I could apply it (or not) appropriately.

I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.

I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.

Yeah, I agree that the main question is around the state of Elm. If the message is that Elm isn’t finished, and that you shouldn’t invest in it unless you’re prepared to spend time keeping up, that’s perfectly fine. However, if people are being sold on a production-ready language that just works, there appears to be a bit of a disconnect.

It’s obviously important to get things right up front, and if something turns out not to work well it’s better to change it before people get attached to it. On the other hand, if you’re a user of a platform then stability is really important. You’re trying to deliver a solution to your customers, and any breaking changes can become a serious cost to your business.

I also think it is important to be pragmatic when it comes to API design. The language should guide you to do things the intended way, but it also needs to accommodate you when you have to do something different. Interop is incredibly important for a young language that’s leveraging a large existing ecosystem, and removing the ability for people to use native modules in their own projects without an alternative is a bit bewildering to me.

To me the problem is that Elm is not conceptually complete. I listed those issues specifically because they’re both things that the compiler and the core libraries can do internally, but the users of the language cannot.

But at least Elm is no worse than anything else.

No, Elm is a language, and not being able to do things in a language with so few metaprogramming capabilities is a pretty big deal compared to a missing feature in a library or a framework, which can easily be added in your own code or worked around.

But how is this different from any other ecosystem? The compiler always has more freedom internally. There are always internal functions that platform APIs can use but your library cannot. Following your logic, we should condemn the Apple core APIs and Windows APIs too.

No, what I meant is that the core libraries use their “blessed” status to solve those problems only for themselves, thus recognizing that those problems effectively exist, but the users aren’t given any way to deal with them.

Ports are very limiting and require much more work to set up than a normal library, and I haven’t used custom elements so I can’t speak for those.

There’s also no workaround for the lack of ad-hoc polymorphism. One of the complaints I hear the most about Elm is that writing json encoders and decoders is tedious and that they quickly become monstrously big and hard to maintain; often the json deserialization modules end up being the biggest modules in an Elm project.

This is clearly a feature the language needs (and already uses with some compiler magic, in the form of comparable, appendable, and so on).

I’ve found bidirectional type checking is indeed a very handy technique for making expressive type systems quickly. I haven’t yet mastered how to marry it with constraint based inference (for implicit arguments) but it proved to be very useful when starting out on building Pikelet.
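The check/infer split that makes bidirectional checking handy can be sketched as a tiny checker for the simply typed lambda calculus. This is a hedged illustration in Go, not Pikelet’s implementation; all type and function names are invented, and implicit arguments/constraint solving are deliberately out of scope.

```go
package main

import "fmt"

// Minimal bidirectional checker: infer synthesizes types ("up"),
// check consumes an expected type ("down").

type Type interface{ isType() }

type TInt struct{}
type TArr struct{ From, To Type }

func (TInt) isType() {}
func (TArr) isType() {}

type Term interface{ isTerm() }

type Lit struct{ N int }
type Var struct{ Name string }
type Lam struct{ Param string; Body Term }
type App struct{ Fn, Arg Term }
type Ann struct{ Tm Term; Ty Type } // explicit annotation, e.g. (e : T)

func (Lit) isTerm() {}
func (Var) isTerm() {}
func (Lam) isTerm() {}
func (App) isTerm() {}
func (Ann) isTerm() {}

type Env map[string]Type

func equal(a, b Type) bool {
	switch a := a.(type) {
	case TInt:
		_, ok := b.(TInt)
		return ok
	case TArr:
		b2, ok := b.(TArr)
		return ok && equal(a.From, b2.From) && equal(a.To, b2.To)
	}
	return false
}

// infer synthesizes a type from a term.
func infer(env Env, t Term) (Type, error) {
	switch t := t.(type) {
	case Lit:
		return TInt{}, nil
	case Var:
		if ty, ok := env[t.Name]; ok {
			return ty, nil
		}
		return nil, fmt.Errorf("unbound variable %s", t.Name)
	case Ann:
		if err := check(env, t.Tm, t.Ty); err != nil {
			return nil, err
		}
		return t.Ty, nil
	case App:
		fnTy, err := infer(env, t.Fn)
		if err != nil {
			return nil, err
		}
		arr, ok := fnTy.(TArr)
		if !ok {
			return nil, fmt.Errorf("applying a non-function")
		}
		if err := check(env, t.Arg, arr.From); err != nil {
			return nil, err
		}
		return arr.To, nil
	}
	return nil, fmt.Errorf("cannot infer; add an annotation")
}

// check pushes an expected type into a term; this is why bare
// lambdas never need parameter annotations in check position.
func check(env Env, t Term, ty Type) error {
	if lam, ok := t.(Lam); ok {
		arr, ok := ty.(TArr)
		if !ok {
			return fmt.Errorf("lambda checked against non-arrow type")
		}
		inner := Env{lam.Param: arr.From}
		for k, v := range env {
			if k != lam.Param {
				inner[k] = v
			}
		}
		return check(inner, lam.Body, arr.To)
	}
	got, err := infer(env, t)
	if err != nil {
		return err
	}
	if !equal(got, ty) {
		return fmt.Errorf("type mismatch")
	}
	return nil
}

func main() {
	// (λx. x : Int → Int) 1 infers to Int.
	id := Ann{Lam{"x", Var{"x"}}, TArr{TInt{}, TInt{}}}
	ty, err := infer(Env{}, App{id, Lit{1}})
	fmt.Println(ty, err)
}
```

Note how annotations (`Ann`) are the only place the two directions meet, which is what keeps the algorithm simple before implicit arguments enter the picture.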

After writing Go for 5 years, I’d recommend Rust for C developers. It’s more complicated than Go for sure, but also has more to offer. The lack of garbage collection and support of generics are definitely a plus compared to Go.

Go is a better language for junior devs, but I wouldn’t call C programmers junior. They should be able to digest Rust’s complexity.

Rust can definitely get overly complex if the developers show no restraint (i.e. type golf), but the control afforded by manual memory management makes up for it, IMHO. Unless it’s a one-run project, performance will eventually matter, and fixing bad allocation practices after the fact is a lot harder than doing it right from the beginning.

Couldn’t they just start with a C-like subset of Rust, adding the extra features they like to their arsenal from there? It’s what I was going to recommend to those trying it for safety-critical use, since they likely know C.

I think it’s rather difficult to write Rust in a C-like manner. This contrasts with Go, where you can basically write C code, move the type declarations around, and end up with somewhat unidiomatic but working Go.

I think C++ as a better C works because you still have libc besides the STL, etc. The Rust standard library uses generics, traits, etc. quite heavily and type parameters and lifetime parameters tend to percolate to downstream users.

Though I think a lot of the value in Rust is in concepts that may initially add some complexity, such as the borrow checker rules.

The problem with C++ is its complexity at the language level. I have little hope of teams of people porting to it the various tools for static analysis, verification, and refactoring that C and Java already have. The same goes for certifying compilers. C itself is a rough language, but smaller. The massive bandwagon behind it caused lots of tooling to be built, esp. FOSS. So, I now push for low-level stuff to be either safer C or something that ties into C’s ecosystem.

You could argue the same for C++ (start with C and add extra features). Complexity comes with the whole ecosystem from platform support (OS, arch), compiler complexity (and hence subtle difference in feature implementations) to the language itself (C++ templates, rust macros). It’s challenging to limit oneself to a very specific subset on a single person project, it’s exponentially harder for larger teams to agree on a subset and adhere to it. I guess I just want a safer C not a new C++ replacement which seems to be the target for newer languages (like D & Rust).

It’s challenging to limit oneself to a very specific subset on a single person project, it’s exponentially harder for larger teams to agree on a subset and adhere to it.

I see your overall point. It could be tricky. It would probably stay niche. I will note that, in the C and Java worlds, there’s tools that check source code for compliance with coding standards. That could work for a Rust subset as well.

“I guess I just want a safer C not a new C++ replacement which seems to be the target for newer languages (like D & Rust).”

I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

I was thinking something like it with Rust’s affine types and/or reference counting for when borrow checking sucks too much and the performance is acceptable. Also, unsafe stuff if necessary, with the module prefixed to mark it, as Wirth would do. Some kind of module system or linking types to avoid linker errors, too. Seamless use of existing C libraries. Then, an interpreter or REPL for the productivity boost. Extracts to C to use its optimizing and certifying compilers. I’m unsure of what I’d default to on error handling and concurrency. A first round at error handling might be error codes, since I saw a design for statically checking their correct usage.

I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

I looked at it in the past and it felt like a language built on top of C similar to what a checker tool with annotations would do. It felt geared too much towards research versus use and the site itself states:

Cyclone is no longer supported; the core research project has finished and the developers have moved on to other things. (Several of Cyclone’s ideas have made their way into Rust.) Cyclone’s code can be made to work with some effort, but it will not build out of the box on modern (64 bit) platforms.

However, if I had to change Cyclone, I would at least drop exceptions from it.

I am keeping an eye on Zig, and that’s the closest to how I imagine a potentially successful C replacement - assuming it builds up enough community drive and gets some people developing interesting software with it.

That’s something Go nailed down really well. The whole standard library (especially the crypto and http libs) being implemented from scratch in Go, instead of as bindings, was a strong value signal.

I cared about those things, as a junior. I am not sure why juniors wouldn’t care, although I suppose it depends on what kind of software they’re interested in writing. It’s hard to get away with not caring, for a lot of things. Regarding education, I am self-taught, FWIW.

Map, reduce and filter are easily implemented in Go. Managing memory manually, while keeping the GC running, is fully possible. Turning off the GC is also possible. Soft realtime is achievable, depending on your definition of soft realtime.

Implementing one Map function per type is often good enough. There is some duplication of code, but the required functionality is present. There are many theoretical needs that don’t always show up in practice.

When people say “type safe map/filter/reduce/fold” or “map, reduce, filter, and generics” they are generally referring to the ability to define those functions in a way that is polymorphic, type safe, transparently handled by the compiler, and adds no runtime overhead compared to their monomorphic analogs.

Whether you believe such facilities are useful or not is a completely different and orthogonal question. But no, they are certainly not achievable in Go and this is not a controversial claim. It is by design.

The implementation of generics in C++ also works by generating the code per required type.

But they are not really comparable. In C++, when a library defines a generic type or function, it will work with any conforming data type. Since the Go compiler does not know about generics, with go generate one can only generate ‘monomorphized’ types for a set of predefined data types defined in an upstream package. If you want different monomorphized types, you have to import the generic definitions and run go generate for your specific types.
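The workflow being described looks roughly like this; the generator name `mapgen` and its flags are hypothetical (real projects used third-party tools such as genny), but the `//go:generate` directive syntax is standard:

```go
// map_template.go - a "template" only by convention: the generator
// tool (hypothetical here) substitutes T with concrete types and
// writes out one monomorphized file per type.

//go:generate mapgen -in=map_template.go -types=int,string

package mapper

// T is a placeholder the generator replaces; it is an ordinary
// type alias as far as the Go compiler is concerned.
type T = interface{}

// Map is copied once per requested type, yielding e.g. MapInt
// and MapString in the generated files.
func Map(xs []T, f func(T) T) []T {
	out := make([]T, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}
```

Running `go generate ./...` scans for these directives and invokes the listed commands; the compiler itself never sees anything generic, which is why downstream users must re-run generation for their own types.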

unless you consider code generation

By that definition, any language is a generic language, there’s always Bourne shell/make/sed for code generation ;).

I see your point, but “go generate” is provided by the standard Go toolchain, by default. I guess it doesn’t qualify as transparent, since you have to type “go generate” or place that command in a build file of some sort?

My larger point here really isn’t a technicality. My point is that communication is hard and not everyone spells out every point in precise detail, but it’s usually possible to infer the meaning from context.

I think the even larger point is that for a wide range of applications, “proper” and “transparent” generics might not even be needed in the first place. It would help, yes, but the Go community currently thrives without it, with no lack of results to show for it.