A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.

You state that if DA is applied “one layer above”, it is equivalent to DI. I think this rests on the false assumption that the locality aspect (“high level” vs. “low level”) is worth considering. Locality is not that relevant here, since it should not matter where the interdependent modules are. The only thing that matters is that the dependency is not concrete but abstract, i.e. typically an interface in most languages.

Thus, any definition of “dependency abstraction” is more or less dependency inversion. The basis is that the consumer has no direct dependency on the concrete implementation: something that receives a MysqlUserRepository only depends on a UserRepository, and the concrete class is referenced only where the dependency is provided, in the dependency injection phase, i.e. something like new UserController(new MysqlUserRepository(...)).
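To make that wiring concrete, here is a minimal sketch in Java, reusing the names from the example above (UserRepository, MysqlUserRepository, UserController); the findName method and its return value are invented for illustration:

```java
// Sketch of dependency injection via an abstraction; findName is invented for illustration.
interface UserRepository {
    String findName(int id);
}

// Concrete implementation; the only place SQL-specific code would live.
class MysqlUserRepository implements UserRepository {
    public String findName(int id) {
        return "user-" + id; // stand-in for a real SQL query
    }
}

// The controller depends only on the abstraction, never on the MySQL class.
class UserController {
    private final UserRepository repo;

    UserController(UserRepository repo) {
        this.repo = repo;
    }

    String show(int id) {
        return repo.findName(id);
    }
}

public class Demo {
    public static void main(String[] args) {
        // The concrete class is named only here, at the injection site.
        UserController controller = new UserController(new MysqlUserRepository());
        System.out.println(controller.show(7)); // prints "user-7"
    }
}
```

Swapping MysqlUserRepository for an in-memory fake requires touching only the injection site, which is the decoupling the comment describes.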

I also think the wording of DA is rather ambiguous: what counts as external? Is the distinction even meaningful, say internal if it lives in the same binary, external if it is in another process, system or network? I think you could say instead:

instead of depending on something concrete, depend on something abstract.

If both are on the same layer, then “inverting” the dependency is not helpful, because it does not decouple the two components any more than before. You additionally need the adapter on the layer above, and that is not mentioned in the context of dependency inversion. Also, DI implies an “implements” relationship from Y to Iy which does not exist in DA. There, YAdapter implements Iy instead.

The adapter pattern alone is not enough to decouple two components.

Thus, dependency abstraction is neither dependency inversion nor an adapter. It somewhat combines them and modifies DI a little.

Where I can agree with you is that DA and DI described in one sentence are practically identical. I still argue that they differ in significant details.

I’m using “above” in the sense of a layered architecture. For example, the user interface layer is above the business logic layer. The data model layer is above the SQL database layer. The application is above the operating system.

The adapter pattern is an obvious solution to the problem. It does require an existing interface, though, and you need to “apply half a dependency inversion” to create it.

Maybe we could phrase it like this: Let the DI principle be “depend on abstractions, not on concretions”. Then there is the DI pattern, which turns “X uses Y” into “X uses Iy, which Y implements”. In contrast, the DA pattern turns “X uses Y” into “X uses Iy, and YAdapter inherits from Y and implements Iy”. Now the DI and DA patterns are clearly different. Both realize the DI principle.
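A hedged sketch of the two patterns in Java, using the placeholder names from the text (X, Y, Iy, YAdapter); YDirect is an invented name standing in for the DI variant where Y itself implements Iy:

```java
// Placeholder names from the text: X is the consumer, Iy the abstraction.
interface Iy {
    String work();
}

// DI pattern: Y itself implements Iy (named YDirect here so both variants can coexist).
class YDirect implements Iy {
    public String work() {
        return "done by Y";
    }
}

// DA pattern: Y stays untouched and knows nothing about Iy...
class Y {
    String doWork() {
        return "done by Y";
    }
}

// ...and YAdapter inherits from Y and implements Iy, bridging the two.
class YAdapter extends Y implements Iy {
    public String work() {
        return doWork();
    }
}

// X only ever sees the abstraction, in both patterns.
class X {
    private final Iy dep;

    X(Iy dep) {
        this.dep = dep;
    }

    String run() {
        return dep.work();
    }
}

public class Patterns {
    public static void main(String[] args) {
        System.out.println(new X(new YDirect()).run());  // DI wiring
        System.out.println(new X(new YAdapter()).run()); // DA wiring
    }
}
```

Note that in the DA wiring Y never implements Iy; only the adapter does, which is exactly the missing “implements from Y to Iy” relationship described earlier.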

It’s always possible to overengineer a simple CRUD app with a two-tier React/Elm/Vue/Angular architecture. By the time you’ve written your state management and routing logic, you would already have had a functioning Rails/Django app that solved the problem you were trying to solve in the first place.

SPA architecture is incredibly hard to justify. If your goal is speed and UI responsiveness, there are other ways.

Before the 2000s, software development was mostly done with a Waterfall approach.

This viewpoint, which has become commonplace and fashionable today, is largely inaccurate. Before 2000, most complex problems in business, commerce and industry were tackled using incremental and iterative approaches.

A single linear analysis and design, then development, then testing methodology had fallen from favor for general software development long before the “agile” movement.

It saddens me somewhat that history has been re-written for many young developers to be that “enlightened” agile replaced “evil” waterfall.

You’re absolutely correct. I really dislike the way “waterfall” is used left and right these days to describe how things were, because it’s inaccurate: the term waterfall was coined as a description of something that should be avoided. It was an observation of a failing development model, never did its author recommend it as a practice.

I suspect authors use the “Waterfall” canard as both a rhetorical tool to bolster their claims and just to be lazy. Getting the nuance of the evolution of software development right is tough, and it’s especially tough to distill it down to a paragraph with reasonable accuracy.

I’d be a lot happier if it was just left out of these kinds of pieces, even when they say useful things (like I think this article does). You are absolutely correct: this characterization of software practices is more or less historical negationism.

QB and VB6 here. Coming from them, it was confusing for me to try to learn C/C++. However, what they (plus Common LISP) did teach me is that crash- and hack-prone software that compiles slowly, breaking mental flow, was just bad design in C/C++. That kept me looking at alternatives, like Delphi, that preserved lightning-fast, low-hassle development with great GUI support. The 4GLs, more BASIC-like, also taught me that many problems should be solvable with compact, readable code that auto-generates boilerplate.

These things I learned are still true, with three languages… Go, Nim, and Julia… preserving some of these traits while appealing to mainstream audiences. Go already has massive adoption, too. So, the kind of thinking behind languages like BASIC and Pascal just failed for a while before some people, including a C inventor, helped redo and promote it. :)

Delphi was amazing. I think it took Microsoft many years for Visual Studio to catch up; only around 2003, with C#, could you approach the speed that Delphi boasted in 1996.

Funnily enough, most modern front-end web development still struggles to reach Delphi levels of accessibility and power. Though one could say the context, technology and environments were vastly different, Windows in the late 90s was just as ubiquitous as the web is today, and we’re still building the same shitty applications with buttons, images, event handlers and forms.

edit: worth noting that C# came from the designer of Delphi, Anders Hejlsberg

In the Programming 101 course that I give, I start with Racket (at that level there’s no essential difference from Scheme) and then switch to Python. This gives the student two approaches to compare: functional and imperative.

I can share a similar story, having been conscripted to the Finnish Air Force due to compulsory military service. Before the start of service, six months or so prior, there’s always an interview where they ask “so what would you like to do, son?”, and after about two picoseconds of thinking I said, “A pilot!”. They, a group of middle-aged military men, nodded and encouraged me to apply.

Many physical and mental exams ensued. I passed every one with flying colours. Unfortunately, back then (this was in ’07), the pilot training program required you to have perfect eyesight, which I didn’t have. I had near-perfect eyesight, but this wasn’t enough. Surgical correction was out of the question, as at the ripe age of 19 I didn’t have the means for it, but it wouldn’t have helped anyway: the FAF rejected, and still rejects, anyone with surgically corrected vision.

So I was rejected, and got to serve the usual kind of military service within the FAF. I wasn’t that bothered by the rejection, as my eyesight wasn’t exactly something I could have influenced, compared to, say, physical fitness.

Surprisingly, in 2013 the FAF relaxed their requirements on visual acuity and began accepting candidates with glasses or otherwise sub-optimal vision. I checked the chart, and my eyesight is now well above the minimum. Had I been born six years later and undergone the same qualification process in 2013, chances are I would have been accepted. Instead of a programmer, I would eventually have become a professional military pilot, as many conscript pilots continue into the professional Air Force Academy!

To add to the point in the post: even if the job description lists impossible barriers, such as “perfect eyesight” or “perfect hearing”, the requirements might still evolve over time. With the altered requirements I would have been accepted. And no matter the description, like the author says, one should still give it a try, even if the requirements seem impossible to meet: either they aren’t, or they might not be in the future. It never hurts to check in again after a rejection!

Interestingly, languages with affine types (like Rust and Pony, among many others) can solve this natively: the semantics of “once the request is dispatched, it cannot be changed” can be guaranteed by the type system.

I am interested in personal opinions about CHICKEN vs. Racket. I want to get into one of them but I am not sure which. I am looking at them from the point of view of someone who likes developing websites and apps. Can anyone share their experiences with me?

Racket is a kitchen-sink/batteries-included kind of Scheme that compiles to bytecode that runs in a virtual machine. It’s got the largest Scheme community and ecosystem by far. It seems to excel in GUI in particular. It also has its own varieties like Typed Racket and Lazy Racket, which are quite neat. (You could argue that Racket is a separate dialect of Scheme at this point, as it doesn’t exactly follow the RnRS.)

CHICKEN is a much more minimal Scheme dialect that compiles to C. It’s fast and portable, and the compiled applications are very easy to deploy elsewhere, given you bundle libchicken.so with the executable (or statically link it). It has a very clean C FFI. It implements most of R5RS with growing R7RS support.

Honestly, if you like developing web apps, I’d personally recommend Racket since it has a sizable and mature codebase for web dev, mostly using a sublanguage called Insta.

Life is many things, but it is sometimes about tolerating things. Random stupidity. Rules that make no sense, that just seem to be, and that you can’t do anything about.

To me, university was like a microcosm of the arbitrary and insane, a boot camp for the real world that, just like university, makes no sense at times.

Of course, it was a vault of knowledge like no other. The depth offered by its courses took me beyond imagination. While computer science is not rocket surgery, I still cherish to this day the scholarly methods I learnt. When I lacked motivation, the school gave me a deadline. When I was out of my depth, it gave me help.

I know I could have taught myself most of it, but it would have been under my direction, and I know for certain that university educators have a better sense of direction than I. I would have, most likely, studied myself into a corner.

I see the author begrudge university for the same abject senselessness in its rules and values. To me, that apparent senselessness, alongside the possibility to learn so many things, is priceless.

Though they might not have realized it just yet, I think they too have learnt this lesson.

When I was young, I dreamed of building beautiful cathedrals of software. But if I pay too much attention to tech, it can feel like everyone obsesses over building the crappiest backyard sheds to power barely-thought-out predatory business models. And I’m supposed to be excited about the narrow possibility of accruing disproportionate financial gains.

I view the actual craft of programming as almost orthogonal to tech itself. Tech headlines are so preoccupied with what other people are doing: who is buying whom, how many Github stars does this have, what OSS product should we be obsessed with from $MEGACORP, how much do you really love JavaScript, etc. I don’t really care about that stuff, that’s celebrity gossip at best. As a result, I don’t pay much attention to tech. The orange website is permanently blocked in my hosts file, I’d block it at the router level if I could.

Since I have a family to support, I’ll continue to do great work (and get paid decently!) for something I generally like. However, I’ve had to accept the fact that so many devs and non-devs want to make a commodity of something that I value more as a craft, and just sort of let the idea that it could be an industry driven by craft more than commodity go. FWIW, the more we try to commoditize development, the worse everything seems to get; e.g. having a near-fully declarative UI has not fixed the difficulty of creating reliable UIs. Thus, there is still a high skill floor and ceiling to programming, and I’ll likely always be able to find work.

My own future projects will probably be more art than products intended for end users, because devs only seem to adopt whatever is pushed to them by those with massive marketing budgets.

It’s not surprising since software is a commodity these days. I suppose it will become similar to automobile mechanics: it requires training and apprenticeship, but is not extremely difficult (compared to say, college level STEM), and is a necessary profession, as long as there are cars.

The corollary is that while it may no longer be that unique to be a software engineer, if you work in a prestigious position you could be developing something really interesting that could one day be used by millions of other engineers.

When I first read the title, I thought it was going to be more of a beef than the chronicle it turned out to be. In any case, it actually surprises me that after ten years of modal editing he says that:

There’s a steep learning curve in Vim and seeing all those modern IDEs become better at understanding the user’s intent, editing text became way easier and faster in general.

I did not find vim to have that steep a learning curve: it can be painful at first, but you are probably fine by the second week already, and productive after a single month. And even if an IDE is easier at first, I have never seen anyone work faster in PyCharm than someone in vim, for example.

Being productive after a single month of using Vim? That is, or might be, true. But how productive? After 3 years of using Vim (in my experience), I think I’m nowhere near as productive as I would be after perhaps 7 more years of using it. It’s not that Vim has a steep learning curve, but rather that it offers so much that even with 10 years of usage you do not fully understand its power. And that is what the author is talking about.

Absolutely; after all, practice makes perfect, especially in something like vim where muscle memory is key. What I meant when I said you can be productive in a month is that you can actually use it in your workflow: in my experience, after a month of using Emacs you are probably still overwhelmed and cannot fully integrate it into your workflow (imho it has a much steeper learning curve).

I find this… weird. Docker is a packaging mechanism, not an orchestration system. You need an actual orchestration system on top of it to work reliably. The author, coming from the JVM world, knows that you can’t just scp product-catalog.jar production3:/apps/pc/ and then expect stability from java -jar /apps/pc/product-catalog.jar… application servers that supervise and orchestrate such systems have existed for decades.

Or did I misunderstand the article? Is he arguing that Docker is a bad packaging mechanism? I thought he was arguing that docker run --restart=blahblah my-application -p 123:123 ... is not a reliable way to run applications in production. If that is what he is saying, I agree with him.

But I thought it’s fairly obvious that docker run isn’t, and hasn’t ever been, the only thing you need to run applications in production. On its own it’s nowhere near stable enough to be practical or reliable. Maybe Docker (the company) likes to pretend it is, but the way I see it, you always have to bolt something like k8s/marathon/nomad on top of it.

I have tried several approaches to modelling errors with effect types in Scala, and they all stink a little bit.

The first one is the one the bifunctor IO is a counterpoint to: using plain IO[A], which has a MonadError[IO, Throwable] instance. To represent my error states I use sum types that, at the top, extend java.lang.Exception. This is practical, because if I’m writing a web server, I can check whether it was a known exception (like BusinessLogicRejection) or some other error. Most if not all of these ...Rejection types are recoverable and non-fatal, and produce a 4xx HTTP code. Any other Exception is most likely a 5xx error.

This approach stinks because I have to have an “open” model of errors: by extending Exception I have to deal with all Exceptions and distinguish the ones in my error hierarchy from other exceptions. On the other hand, this model is really, really easy to use, since I can either IO.raiseError(IdempotencyBusinessBlahBlahRejection) or call some arcane JVM crap and that error gets handled seamlessly. But, at heart, it’s dynamically typed. Having dynamically typed errors gives me sudden flashbacks, the stuff of nightmares where everything is on fire and you’re so, so alone, so I would probably sleep better if my errors were “closed”, in the sense that everything I know is either recoverable, represented by my custom error ADT, or fatal, like ...Error in JVM lingo.

The second approach is to have IO[Either[E, A]], where E is the recoverable error (I often use the word “rejection” for these), but this requires monad transformers, which are cumbersome and carry an inherent performance hit, because monad transformers don’t work nicely on the JVM. So while this is completely typed, it’s annoying to use, and slow. It stinks too!

So the bifunctor IO essentially solves this problem by merging these two. On one hand, I am forced to have a “closed” error model, but I don’t have to use monad transformers. Woot!
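The “closed error channel” idea isn’t Scala-specific. As a rough illustration, here is a hypothetical Either-style result in plain Java whose error side is a fixed Rejection hierarchy rather than an open-ended Throwable (all names are invented for this sketch; it has none of IO’s laziness or effect tracking):

```java
// Hypothetical closed error hierarchy: every recoverable failure is one of these.
abstract class Rejection {
    abstract String message();
}

class NotFound extends Rejection {
    String message() { return "not found"; }
}

class Invalid extends Rejection {
    String message() { return "invalid input"; }
}

// A tiny Either-style result: the error channel is Rejection, not any Throwable.
class Result<A> {
    final Rejection error;
    final A value;

    private Result(Rejection error, A value) {
        this.error = error;
        this.value = value;
    }

    static <A> Result<A> ok(A value) { return new Result<>(null, value); }
    static <A> Result<A> reject(Rejection error) { return new Result<>(error, null); }
    boolean isOk() { return error == null; }
}

public class Errors {
    // Expected failures land in the closed channel; truly fatal errors
    // (OutOfMemoryError etc.) simply aren't representable here.
    static Result<Integer> parseAge(String s) {
        try {
            return Result.ok(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Result.reject(new Invalid());
        }
    }

    public static void main(String[] args) {
        System.out.println(parseAge("42").isOk());   // true
        System.out.println(parseAge("nope").isOk()); // false
    }
}
```

The bifunctor IO[E, A] gives you this closed channel plus effect sequencing in one type, which is why it avoids the Either-in-IO transformer stack.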

Time will tell if cats adopts this approach or whether they continue with the current Throwable approach. Seems like the cats-effect community is divided on this, exhibit A and exhibit B. It is certain that Scalaz 8 will have bifunctor IO, but that library isn’t released yet. Until then, this will be very interesting to watch!

Thanks @jdegoes for all your work in improving the functional programming experience in Scalaz!

LastPass; I’ve used it since forever. It works well enough for a free service. I use it with MFA and change my master password every year, and have had no security troubles ever. It’s easy to use and it integrates seamlessly with all browsers.

I happen to have a bunch of bluetooth jam box little speakers I picked up for super cheap, as well as various exercise gear that all claims to be bluetooth compatible. I have the dream of being able to get everything talking together. :(

Wireless headphones rule. I can never go back. I frequently stand up and walk around while working, and keeping my headphones on throughout has been heavenly.

For anyone looking to get into wireless headphones, I highly recommend the Sony MDR-1000X. Top notch sound quality, noise cancelling, 20 hour battery life, compact carrying case, optional 3.5mm input for non-Bluetooth devices, and you can buy manufacturer refurbished on eBay for $200. That’s what I did, my set came indistinguishable from new. Same experience from several of my coworkers who tried mine and bought their own.

That’s a great price for quality headphones. I bought my Audio-Technica ATH-M50 for $150, and for $50 more my 1000X beats the M50 in comfort and sound quality (with noise cancelling). The noise cancelling alone is worth $50, even if you never use them wirelessly. Truly phenomenal product.

I’m not a fan of wireless anything, tbh (except wifi). I’ve always found that the convenience isn’t worth it. For most peripherals (e.g. mouse, keyboard, headphones), I only ever use them within 3 ft of my desk. Going wireless adds nothing but occasional interference, and the batteries always seem to fail at the worst times.

With wired headphones you can swap your amp whenever you need to, and you use a standard connector with extremely wide support (unless you’re using a newer Apple device). I try to avoid Bluetooth in general because of its history of security problems.

They’re something I always bring up when OpenBSD fans make disingenuous remarks about the relevance of wireless technology in general. I get it, OpenBSD devs weren’t satisfied with their implementation of Bluetooth, so they axed it out of security and sanitary concerns. I just find the attitude of “nobody needs Bluetooth” rather annoying. It is actually preventing me from seriously considering OpenBSD as a desktop OS. Why? Because wireless headphones are goddamn amazing.

Perhaps you could use a headphone jack to Bluetooth transmitter device? They look like they’re around £15 and seem to have good reviews.

Personally I listen to music ‘on’ my computer by keeping my AirPods connected to my iPhone and using Spotify on the laptop, remotely controlling Spotify on the phone. This works really well, rather surprisingly.

Antoine, please excuse my trolling. I’m sincerely sorry. Wireless headphones are amazingly convenient, that’s true. OpenBSD doesn’t support Bluetooth, that’s also true. We may not like the combination of those facts, of course.

I really like all the core features of OpenBSD: it’s simple, well documented, consistent, reliable, has sane defaults, etc. Obviously an OS can’t do everything and stay as simple as it is. We all know that the resources of the project are extremely limited.

What can we do about it? Contribute patches, sponsor the project, help with testing, etc. That’s the way it works for OpenBSD. A pretty fair and straightforward way, I’d say.

Not a language, but a language feature: in Elixir, there’s a capture operator & that wraps an expression in an anonymous function and can also be used to refer to the nth argument of that function. For example, &(&1 + 1) is shorthand for fn x -> x + 1 end, and &String.upcase/1 captures a named function so it can be passed around as a value.

I get the author’s point about the Z component being broken. If the library behaves incorrectly and the dependent program relies on the incorrect behavior for its functionality, then once the incorrect behavior is fixed in the library, the program will stop working. But the library will now be working correctly!

I don’t think semver can solve this issue, but it can mitigate it: thorough testing and quality analysis before a 1.0.0 release is made are necessary, as is careful review of anything that comes afterward.

If strictly adhering to SemVer, wouldn’t the correct approach be to change the default behaviour, while still providing a fallback for the old incorrect behaviour? You could then provide a deprecation notice and actually remove the old incorrect behaviour with the next major version.
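As a sketch of that migration path (all names and the rounding bug are hypothetical): the fix becomes the default, while the old, incorrect behaviour stays available behind a deprecated fallback until the next major version removes it:

```java
// Hypothetical library whose old version truncated instead of rounding.
public class Rounder {
    /** Old, incorrect behaviour, kept only as an opt-in fallback. */
    @Deprecated
    public static long roundLegacy(double x) {
        return (long) x; // truncates: the original bug
    }

    /** Fixed behaviour, now the default. */
    public static long round(double x) {
        return Math.round(x);
    }

    /** Transitional entry point: callers can pin the legacy behaviour
        until the next major version drops it. */
    public static long round(double x, boolean legacy) {
        return legacy ? roundLegacy(x) : round(x);
    }

    public static void main(String[] args) {
        System.out.println(round(2.6));       // 3, the fix
        System.out.println(round(2.6, true)); // 2, the deprecated fallback
    }
}
```

Under strict SemVer the fallback's removal is the breaking change, so only that removal forces a major bump.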

I think the problem is that libraries rarely do this (especially for “trivial” fixes) because it’s a PITA. But that’s not really SemVer’s fault.

But that doesn’t solve the problem: dependents upgrade to Z+1 and their stuff breaks, which is expressly what should not happen under semver. Semver in this case tells you to bump the major version. I don’t mind; it works and it satisfies the semver specification. I don’t have a problem with stupidly high major versions, since the numbers are meaningless anyway; only the differences are meaningful. Fundamentally, going from 98 to 101 is the same as going from major version 3 to 6.

Yeah, I think we’re on the same page. Either you figure out a way to fix the bug in a manner that’s backwards compatible, or you bump the major version. In practice people rarely do this for Z level fixes, but that’s more of a problem with how people interpret SemVer than with the philosophy itself.

I don’t think the analogy holds either. An unattended garden will most likely die or become something different.

But software doesn’t rot; it can run forever. Last week I was contacted by my former employer about a small server I wrote in Perl that hooked into the employer’s AD. They told me they had shut it down, since it hadn’t been used for five years and the machine it ran on was being retired in favour of a VPS.

So I logged in on the machine, and sure enough, the last time anyone had modified the init script that kept it running was in March 2007, a few months before I left that employer. So it had been running for 11 years. It could have run for another 13, 26 or even 100 years, had it initially been put on some virtual machine.

If you want to check out a practical gradually-typed language, I’ve been using Typed Racket.

It’s very convenient to use untyped code early on, when the design of the program is unclear (or when porting code from a different language), and to switch individual modules to typed code later to reduce bugs.

Based on reading https://docs.perl6.org/type/Cool, kinda? Although it also looks to me as if this is at once broader than what Perl 5 does (e.g. 123.substr(1, 2), or how Array is also a Cool type) and also a bit more formal, typing-wise, since each of those invocations makes clear that it needs a Cool in its Numeric or String form, for example.

Yes, Typed Racket is gradual typing, but, for example, the current version of Typed Clojure is not. To simplify a little, the premise is that gradually typed code must support being used from dynamically typed code.