This is a frustrating and overly exasperated post which reaches conclusions that have always been part of the Go canon. APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose -- yes! Channels are useful in some circumstances, but if you just want to synchronize access to shared memory (like the author's example) then you should just use a mutex -- yes! These are well-understood truths.

Neither the referenced performance characteristics via Tyler Treat, nor the FUD surrounding channel-based program architecture, invalidate channels generally. One does have to think carefully about ownership hierarchies: only one goroutine gets to close the channel. And if it's in a hot loop, a channel will always perform worse than a mutex: channels use mutexes internally. But plenty of problems are solved very elegantly with channel-based CSP-style message passing.

It's unfortunate that articles like this are written and gain traction. The explicit instruction to [new] Go programmers is that they should avoid channels, even that they are badly implemented, and both of those things are false.

> APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose

Wait, why would you say that?

In general, if "orchestrating concurrency" involves guarding access to shared mutable state, then you can't orchestrate it at the caller's site. It would be a massive encapsulation leak, because synchronization is not composable and requires special knowledge -- you don't necessarily know how to synchronize unless you have knowledge of the internals. Furthermore, because it happens after the fact, your only choice is to handle it by means of mutexes, which have really terrible performance characteristics. Even if you could do the ordering by other means, you end up paying the price of LOCK XCHG or whatever mutexes translate to, not to mention that you'll have problems if you want (soft) real-time behavior, because now you can end up with both deadlocks and livelocks.

And this brings us to another problem. If you end up doing such synchronization in Go, then Go's M:N multi-threading ends up doing more harm than good, because if you need such synchronization, you also need to fine tune your thread-pools and at this point 1:1 would be better. On top of 1:1 platforms you can build M:N solutions, but it doesn't work well in the other direction.

> Novices to the language have a tendency to overuse channels

Novices to software development have a tendency to overuse mutexes as well.

"In general, if "orchestrating concurrency" involves guarding access to shared mutable state, then you can't orchestrate it at the callers site."

What this generally means in Go is that it is an antipattern for your library to provide something like "a method that makes an HTTP request in a goroutine". In Go, you should simply provide code that "makes an HTTP request", and it's up to the user to decide whether they want to run that in a goroutine.

The rest of what you're talking about is a completely different issue.

Channels are smelly in an API. IIRC, in the entire standard library there are fewer than 10 functions/methods that return a channel. But the use case does occasionally arise.

Reading this... only makes me gladder that I'm pursuing work in the Erlang/Elixir space, where messaging "just works", concurrency "just works", and immutability "just works" (and new processes take a microsecond to spin up), and tearing anything down is basically a non-issue, since failure is embraced and logged at every turn and cascading teardowns simply happen automatically depending on how the processes are linked --

and all this turns out to be a really amazing system of paradigms when designing apps to work in the real world.

Dialyzer will fail the type check until you remove [-1, 0, 0.1] from the list. Not with a particularly helpful error, but it does fail it nonetheless.

The code itself is a valid program that runs, but it produces incorrect output, because 0 rem 15 =:= 0, so you get <<"FizzBuzz">> where you'd expect to get a 0 in the list. By running Dialyzer in my build chain I can catch that my implementation doesn't match my constraints at compile-time. In a way that I otherwise would have only found at runtime.

Though while creating this little pointless example one thing I'm not super clear on is why Dialyzer fails to notice that my return type from

fizzbuzzer(Number) ->
    Number.

if I change it to

fizzbuzzer(Number) ->
    -Number.

will return a neg_integer() and fail to satisfy the return spec, despite the fact that I've told it the input must be a pos_integer(). Unless I enable the -Wspecdiffs flag, in which case it notices the problem.

Restricting input based on type hierarchies can reduce a certain class of bugs, yes, but careful use of guards as well as typespecs and unit test coverage (which you should have, anyway) can accomplish much of what type restrictions can

> you end up paying the price of LOCK XCHG or whatever mutexes translate to

But channels use locks internally. The choice of channels vs. mutexes is one of design, not implementation. Also, mutexes are blocking; LOCK XCHG isn't. Sure, mutexes also use LOCK XCHG (but so do channels, and nearly all concurrent data structures), but they also block (as do channels).

> your only choice of handling it is by means of mutexes, which has really terrible performance characteristics

That's just not true. There is a way to translate any locking algorithm to a non-blocking one (in fact, a wait-free one, which is the "strongest" non-blocking guarantee), yet only a handful of wait-free algorithms are used in practice. Why? Because it's hard to make them more efficient than locks in the general case.

> not to mention that you'll have problems if you want (soft) real-time behavior, because now you can end up with both dead-locks and live-locks.

Again, channels are blocking data structures.

> If you end up doing such synchronization in Go, then Go's M:N multi-threading ends up doing more harm than good, because if you need such synchronization, you also need to fine tune your thread-pools and at this point 1:1 would be better.

The question of which concurrency mechanism should be used is a difficult one (and in general, more than one is necessary; even Erlang has shared, synchronized mutable state with its ETS tables), but you are very misinformed about how concurrency constructs are built and about their performance behavior.

This post was written primarily as a response to http://www.informit.com/articles/article.aspx?p=2359758, which, when it came out last June, frustrated me to no end. It then sat in my drafts folder for months until I found myself patiently attempting to bring another experienced programmer, new to Go, up to speed on best practices.

If it truly is the accepted best practice for novices to avoid channels, then that PR campaign has been tried and found lacking. EDIT: whoops, read parent wrong.

In addition, the author points out existing Go libraries that people use that use channels when they shouldn't, so apparently the Go language community needs more people pointing out that this is a bad idea.

(I decided to make a new comment rather than edit my existing comment)

Read Tyler's original article for a less FUDdy take on it. Channels are always slower than mutexes, which is obvious when you understand their implementation. They are definitely not badly implemented as a general rule.

The API definitely is badly implemented and makes them hard to use. That's the point of the post. There are design decisions around channels (sends panicking, close panicking, nil channels blocking) that make it hard to understand, follow, and compose concurrent solutions.

I'm sorry, but I don't agree with any of your assertions. The constraints on channels are there not as an accident of a bad implementation, but as deliberate decisions to enforce a certain set of design contracts. Panics on invalid channel operations enforce those contracts. That nil channels block is actually an incredibly handy feature: see e.g. https://github.com/streadway/handy/blob/b8cb168/breaker/brea...

Without exception, hitting one of these corner cases exposes an error in design, from Go's perspective on CSP. You can disagree with that perspective on a subjective basis ("hard to understand") -- but you can't lift that opinion to objective fact, and you certainly can't claim these artifacts of design as evidence of incompetence or neglect.

It's great that panics happen when you violate those contracts. That is a deliberate design decision and I agree with it. However, the contracts that they enforce cause real problems, as evidenced by the article. Small additions might make those contracts more general and make channels more applicable. In my opinion, you should be able to attempt to send on a channel that could be closed, in the same way that you are allowed to check whether an interface contains a specific concrete type without panicking. In my experience, this would allow for a number of useful patterns that are very hard to express right now.

Nil channels blocking is definitely a deliberate design decision and has valid use cases. I use them frequently when I have a channel-based design. It also isn't what most people first expect, since everything else that is nil behaves the opposite way: it panics. The article, which I assume you read, makes only this point.

I never attempted to lift statements that are obviously opinion based (anything that has a judgement of something good or bad) as objective fact.

While I agree all of this is subjective, I would argue that something being composed of "deliberate decisions to enforce a certain set of design contracts" doesn't mean those decisions nor the design contracts are good. Nor does it automagically make a good implementation.

In addition, making bad design decisions that you think are good is actually one of the best types of evidence for incompetence (though not neglect, in this case).

I don't personally have enough data to have a strong opinion on where Go channels falls here, but I don't think any of your arguments here have any bearing on the idea that Go's channel implementation is bad.

But using something in the way it was not intended to be used, and then complaining that it works badly, is evidence of incompetence on the part of the user, not the designer.

> I don't personally have enough data to have a strong opinion on where Go channels falls here, but I don't think any of your arguments here have any bearing on the idea that Go's channel implementation is bad.

If sagichmal is correct, zeeboo is trying to use channels in a way that they were explicitly not designed to be used. That makes zeeboo's criticism very likely to be invalid. (It is the one who uses them as they were designed to be used who knows what the actual problems with the design are.)

My criticism is that the design limits the places where they are valid. I'm not trying to use a hammer where a screwdriver is required, I'm saying that if the hammer was designed differently, we'd be able to use it in more situations appropriately.

It's as if someone created a gun that fired backwards and I said "hey, it might be better if the gun fired forwards. we'd be able to use it in more situations." and people responded with "you shouldn't use a gun that fires backwards if you want to fire forwards." I totally agree, but it's missing the point.

Hammers make poor screw drivers. Mutexes, atomic integer operations, and channels (buffered and unbuffered) all have their place. You will think any of these is "badly implemented" if you choose the wrong tool for the job.

There's more to it than I can really describe here, but in effect it allows you to treat a thread as an object with methods; calling a method on the object sends a message to the thread. The thread's main code can, at any point, block and wait for a message, or combination of messages.

The handling code looks like this:

    ...some code...
    accept DoSomething(value: in out integer) do
        ...some code here...
    end
    ...some more code...

That causes the thread to block and wait for the message. When the message is sent, the caller blocks, the receiver runs the handler, then the caller resumes.

The beauty here is that inside the message handler, you know that the caller is blocked... which means it's safe to pass parameters by pointer[*]. Everywhere the parameter's in scope, the parameter is safe to use. The type system won't let the thread store the pointer anywhere without copying the contents first, so you get zero-copy messaging *and* it's failsafe.

You can also do really cool stuff with timeouts, guards, automatic thread termination, etc. Here's a simple multithreaded queue (syntax and logic not guaranteed, it's been a while):

Multithreaded! Blocks the client automatically if they pop while the queue's empty or push while it's full! Automatically terminates the thread when the last connection goes away and the thread leaves scope! Thread safe! Readable!

I'd love to be able to do this in a more mainstream language.

[*] This is a simplification. Ada's pointers are not like other language's pointers.

The older I get, the clearer it is that Ada was the answer to the last 2 decades worth of problems (fast, able to go low-level when you need to, very type safe, easy concurrency primitives) and we all just ignored it because it wasn't fashionable.

Agreed on this. Ada has many solutions that I wish I had access to in C and C++.

In regards to efficiency, Ada as a language can be optimized to a greater extent than C/C++. It avoids the aliasing problem altogether: "aliased" is a keyword in Ada that must be used explicitly, so by default the compiler prevents aliasing! Everything else in the language is very unambiguous, a lot of checks are done at compile time, and, if needed for performance, runtime checks can be turned off on a selective basis.

Combined with the optional but enabled-by-default since-you're-going-to-write-them-anyway bounds checking on parameters, and a type/subtype system that lets me ACTUALLY DEFINE the ranges of every parameter going in and out of my function calls -- well, whenever I look at a bug fix, I do a mental check of "would this even be possible to do wrong in Ada?" and for about 30% of bugs, I'd say no.

Ada's main disadvantage from an embedded point of view is the hoops it makes people go through to do bit manipulation. It is understandable why, bit manipulation breaks the entire type system in every possible way, but a lot of embedded development requires it. At some point it'd be nice if the language had a keyword that just said "this variable can be abused, let it live dangerously."

It also has proper tagged types and built in syntax for circular arrays. Two bits of code I am sick and tired of writing again and again in C, and then having to explain to people what a tagged type is.

I would love to have a modernised Ada. With case sensitivity. And garbage collection (a lot of the language semantics are obviously intended to be based around having a garbage collector. I'm very surprised that it never seemed to get one). And a less wacky OO system (invisible syntax, ugh).

But those are quibbles, and at its heart it's still an excellent, excellent language. And there are Ada compilers in Debian, it's still being maintained, it compiles really quickly into excellent code, it interoperates beautifully with C...

Didn't Ada 2005 fix the OO system to give it the CLASS syntax everyone is used to?

Ada's usual syntax and declaring class inheritance are isomorphic with each other -- the transformations a compiler does are the same -- but non-JS programmers are used to class-inheritance syntax.

I've always wondered if JS programmers would actually pick up on Ada's object system faster, just because they wouldn't mind the lack of an explicit inherits quite so much.

As for GC, I thought it was optional in Ada, just never implemented. For most of Ada's target audience though, heap allocators are already verboten, so GC isn't needed. :)

I'd really like some of Rust's ownership semantics along with Ada's already well developed feature set. Pointer ownership is still a gnarly problem, I don't recall what, if anything, Ada does to help out with it.

Okay, so I can't duplicate the exact OO syntax issues I was having before. But, from memory, I was finding that by putting the wrong kind of statement between the type definition and the method declaration, I could thoroughly upset the compiler --- there was invisible syntax connecting the two together, and if I put the wrong thing in between, then things stopped working.

But as I can't duplicate it it's entirely possible I was just hallucinating.

In general I find the OO syntax desperately confusing. It feels like it's been shoehorned in on top of the existing record and procedure syntax, and it's never clear exactly what anything does. e.g. you need to suffix the object type with 'class in methods in order to make them non dispatching, but you need to suffix the object type with 'class in variable types if you want them to dynamically dispatch? That's not a happy choice.

(Case in point: I've just spent 20 minutes trying to refresh my memory by making this code snippet work. And failing. What am I doing wrong? http://ideone.com/6iPdYF)

Incidentally, re getadanow.com: that's really nice! And it's not pointing at the Adacore compilers, either; beware of these, as their standard library is GPL, not LGPL, which means you can't distribute binaries built with them. (The standard GNAT version is fine.)

> But as I can't duplicate it it's entirely possible I was just hallucinating.

There's a thing where if you declare a type A, and then a derived type B, methods on A have to be declared before type B gets declared, because B's declaration "freezes" A. I think it's mostly a single-pass optimization that might have made sense 20 years ago but is meaningless in an era of gigabytes of RAM.

> (Case in point: I've just spent 20 minutes trying to refresh my memory by making this code snippet work. And failing. What am I doing wrong? http://ideone.com/6iPdYF)

The specific error message is: you declared a package, which is basically a header in C parlance. You declare signatures in it, not method bodies. Method bodies go in package bodies. You were conflating the package and the package body.

The .Foo selector isn't found; changing it to Foo(object) reports that apparently Foo isn't a dispatching method on MyClass1... which makes no sense, because this is the same code as you had. My suspicion is that there's something magic about declaring classes in packages?

> which makes no sense, because this is the same code as you had. My suspicion is that there's something magic about declaring classes in packages?

Yeah.

Dispatching methods on a type consist of the type's "primitive operations". The Ada 95 Rationale spells it out: "Just as in Ada 83, derived types inherit the operations which "belong" to the parent type - these are called primitive operations in Ada 95. User-written subprograms are classed as primitive operations if they are declared in the same package specification as the type and have the type as parameter or result."

It seems like a wart that you're not in an "anonymous" package in situations like your example, but I also guess it probably doesn't come up much in "real" programs.

This gave me message queues that would pause if there was a pop on an empty queue (for long-polling), supported removing everything, and, if a new 'client' connected while another was waiting for an item, sent an error message to the original client. I'm sure there's a neater way of doing it, but this sat and ran for quite a while for me and didn't take long to write :)

Generally, the loops are achieved by making an infinitely recursive function call, and you can therefore switch between major behaviours by having multiple functions.

For a quick syntax note: sending a message is "address ! message", and I think the "accept" in your code is equivalent to a 'receive' in mine.

You won't have the same type safety, but the general pattern of just blocking and waiting safely is there. It's a fun language, and people seem to be pretty happy with Elixir these days too (built on top).

It's a little bit different. In Ada it's a real rendezvous. Either the "client" or the "server" task is running. In Erlang the mailbox is asynchronous, which means the server can't make any assumptions about what state the client is in while it works on processing the message and sending it back and the client can't assume that the server is directly working on the message after it put it in his mailbox.

The author points out that channel teardown is hard. He's right. Figuring out how to shut down your Go program cleanly can be difficult, especially since calling "close" on a closed channel causes a panic. You have to send an EOF on each channel so the receiver knows to stop. When you have a pair of channels going in opposite directions between two goroutines, and either end can potentially initiate shutdown, it gets messy.

At least in the original implementation, "select" for more than one option was really slow and complex. The single-input case was handled efficiently with generated code, but for N > 1, a very general library mechanism with several heap allocations for each message was used. This means having both a wait for data and a timeout in a select puts you through the slow path. Not good. Someone did an analysis of existing programs and found that N=1 was most common, N=2 was reasonably common, and N>2 was rare. N=2 needs special case support.

QNX interprocess messaging has a similar architecture. But they don't have the panic on close problem, and timeout is handled efficiently. So you can generally shut things down by closing something. As each process is notified about the close, it closes any channels with which it is involved, even if some other process has already closed them. The closes thus cascade and everything shuts down cleanly. Processes that time out at a message receive check to see if the rest of the system is still running, and shut down if it's not.

Go's "share by communicating" would be more useful if Go had Rust's borrow checker, so you could share data without races. Yes, Go has a run-time race detector, but that's only useful if races are common enough that they occur during testing.

"When you have a pair of channels going in opposite directions between two goroutines, and either end can potentially initiate shutdown, it gets messy."

It does get messy to do it correctly, but I've found in the end it comes out less messy to have a channel communicating back to the sender that can be closed if you want the recipient to be able to close channel. I haven't needed it very often, but it happens. It still ends up simpler than hacking around the problem by trying to "close" the channel from the wrong end and the resulting panic handling.

For concreteness, at least from what I've experienced, the "messiness" is that if you close one of these channels, you may have to "drain" the other channel lest you let the other side block. If the other side is only using the channel in a "select" block with other options you may not need to but if it ever does a "bare" send you need to wait for the other end to send its close. This can be particularly complicated if for some reason the "draining" process has to do something other than drop the messages on the floor.

The panic when calling close on a closed channel is a bit annoying. Recently I've been using x/net/context to signal goroutines instead of closing a channel. The cancel function it returns can safely be called multiple times.

This was a well-written and entertaining post. It represents the kind of self-reflection every programming community should encourage. Too often are devs zealously supportive of their language of choice without considering thoughtful critiques that could make their chosen language even better, and/or present an alternate way of looking at things that makes one better at programming in general.

I'm not sure making an absolute statement ("do not...") followed by "... actually do, sometimes" is helpful. How is this different to any other language that gives you a toolbox of synchronisation primitives?

It's not quite the same thing but recent JVMs can translate synchronised blocks into Intel TSX transactions, which means multiple threads can run inside the lock at once, with rollback and retry if interference is detected at the hardware (cache line) level. So yeah .... almost. But it's fancy and cutting edge stuff.

It's conceivable, if you made mutexes compiler/language intrinsic, but as long as you're calling pthread_mutex_lock, the compiler has to assume that that pthread library, which is linked dynamically, is interchangeable, and can do anything it likes to memory. That includes mutating x

But I think InterlockedIncrement is just 'lock xaddl x 1', so using InterlockedIncrement would be to do it manually.

I'm asking if any compiler can take a statement which uses a high level, general purpose lock and increments a variable inside it using conventional language expressions, and convert it to use 'lock xaddl x 1' (perhaps via InterlockedIncrement or whatever other intrinsics you have) instead.

I only know Java well, not .NET, but I'm pretty sure no Java compiler does it.

Does Go not have finalizers? These are mostly solved problems since the Smalltalk era. Haven't learned Go yet and from what I read I'd be better off with Rust or something that would stretch my brain more like Haskell. When I read about it I get the sense that we are reinventing stuff from the 90s. But hey, it's hip.

I didn't get the point of the example with Game and Player. The code behaves exactly how it's told to. If you need some logic to handle the condition where all players have disconnected -- you should implement it, no matter how. Maybe you want to wait for some time for new players and tear down only after this timeout. Or, maybe, you want to reuse this game object, moving it to some kind of pool (like sync.Pool). Or, perhaps, you really want to wait forever for returning players. It's not a 'mutex vs channels' example in any way.

Next, channels are slow, really? A send-receive operation on an unbuffered channel typically takes around 300ns. Nanoseconds. 300 nanoseconds in exchange for a nice and safe way to express concurrent things -- I wouldn't even call it a tradeoff. It's not slow at all in the vast majority of cases.
Of course, if you write software that does care about nanoseconds and channels become your bottleneck -- congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

But, please, don't mislead people by telling them that channels are slow. They could be slow for your exact case, but that's not the same thing.

I don't really get the tone and arguments of the article. Some of the points are totally valid, but they easily fall into the 'hey folks, be careful about this small thing you may misunderstand at the beginning' category. Pity.

> Of course, if you write software that does care about nanoseconds and channels become your bottleneck -- congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

Why not profile to identify which channels are a bottleneck and just replace them with a Disruptor?

> Of course, if you write software that does care about nanoseconds and channels become your bottleneck -- congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

That's ridiculous. I could switch my entire language... or I could just use a lock?

First off, looking at Tyler's post, he measured unbuffered channels taking 2200ns vs 400ns for the lock solution -- roughly a 5x speedup. That is a large gain, especially when dealing with a program that may have high lock contention. Switching from Go to C++ or Rust may not even gain you that much in terms of throughput -- they are all compiled languages, and moving will mainly just alleviate magic STW pauses; acquiring a lock likely won't be any faster.

Second, on the point of Game and Player, the logic to handle conditions where players disconnect is still simpler to implement with locks -- it's two lines, and there is no need to bring in sync.Pool or introduce arbitrary timeouts.

Channels are slower than locks. In more complex applications, channels are easier to reason about than locks, but those tends to be in cases where you care more about message passing than state synchronization.

Funny timing for me -- last Friday I rewrote some code from channels to traditional sync primitives (to the code's improvement), and I was musing in my head that while everyone always says "don't communicate by sharing, share by communicating, yada yada," it doesn't seem to work out that way in practice.

I think the article is well-written, and clearly comes from a place of deep experience and understanding. Good stuff.

I had a similar experience in the opposite direction. Two weeks ago I moved some code from a mutex-based design (including a map of string to mutex, which itself needed to be behind a mutex) to channels, and I love it, though the result seemed about 10% slower.

I guess the message is: everything has its place; don't make a straitjacket for yourself.

I see channels as an architectural option when it comes to structuring the communication between components of my software. Mutexes are another option, more effective in situations where multiple threads may access the interface of a single structure. E.g. I use channels to distribute os.Signals throughout my software and a mutex for making a "context" structure thread-safe. Right tool for the right job.

Even when that's the case, it's rare that fixed-size buffered or unbuffered channels are really the best option for communication between different components of your software. A simple mutex-guarded queue is easier to begin with and easier to evolve when requirements change. You can prioritize queued work trivially and transparently; you can add batch processing, monitoring, and resolve other production issues without any undue refactoring: it can all be encapsulated behind your mutex-guarded queue.

It's really quite a pity that Go's channel syntax treats channels as unique snowflakes, rather just being sugar for calls into an interface that would allow the underlying channel implementation to differ based on software needs.

That's an excellent example (in a long list) of things that would be possible with generics, or even parameterized packages. They could have provided an interface Channel[T] with syntax sugar if desirable. But as it is, everything in Go that can handle multiple types has snowflake status.

Offtopic: that animated image is literally nauseating. Consider removing it, or making it animate just once and then halt. It was meant to be "fun" or whatever but, seriously, I wasn't able to read the text when it looped over and over in the corner of the eye.

I think I've just come to accept that synchronization is the pain point in any language. It's callbacks, promises, and the single event loop in Node.js. It's channels in Go.

No one can come up with a single abstraction for synchronization without it failing in some regard. I code in Go quite a bit and I just try to avoid synchronization like the plague. Are there gripes I have with the language? Sure: CS theory states that a thread-safe hash table can perform just about as well as a non-thread-safe one, so why don't we have one in Go? However...

Coming up with a valid case where a language's synchronization primitive fails and then flaming it as an anti-pattern (for the clicks and the attention, I presume) is trolling and stupid.

Because concurrency is hard. You can't reason about concurrent programs the way you can about sequential ones, and no abstraction is going to completely fix that.

After having worked with it a fair bit, however, I'm beginning to really like Promises + async/await (as in ES7, Python 3.4, and C#). It manages to keep most of the concurrency explicit while still letting you use language mechanisms like semicolons, local variables, and try/catch for sequencing. If you make sure your promises are pure, you can also avoid the race conditions & composability problems of shared state + mutexes. (Although that requirement is easier said than done...it'll be interesting to see what Rust's single-writer multiple-reader ownership system brings to the mix.)

I've been bitten by the fact that Erlang lacks a channel-like primitive. You've got half-a-dozen "pool" abstractions on github because it's actually sorta hard to run a pool on pure asynchronous messages when there is absolutely no way to send a message out to "somebody", the way Go channels can have multiple listeners. I know that would only work on a local node, but there are already a couple of functions that have penetrated that abstraction anyhow.

You also have to deal with mailboxes filling up, still have problems with single processes becoming bottlenecks, and the whole system is pervasively dynamically typed which is fine until it isn't.

It is pretty good, but it's not the best possible. (Neither is Go. I still like Erlang's default of async messages better in a lot of ways. I wish there was a way to get synchronous messages to multiple possible listeners somehow in Erlang, but I still think async is the better default.)

> You've got half-a-dozen "pool" abstractions on github because it's actually sorta hard to run a pool on pure asynchronous messages when there is absolutely no way to send a message out to "somebody"

You can store receivers in an ets table and implement any type of selection algorithm you want, or have some process which selects workers. There is no default method, because one default method is not good for everyone, and people will complain that it's not good for them. Implementing pools is easy in erlang; I've done tailored implementations for several projects.

> You also have to deal with mailboxes filling up

Yeah, unless you implement a back-pressure mechanism like waiting for confirmation of receipt. In ALL systems you have to deal with filling queues.

> I wish there was a way to get synchronous messages to multiple possible listeners somehow in Erlang

You can implement receiver which waits for messages and exits when all are received or after timeout, it's trivial in erlang but I haven't needed it yet. Here is a simple example:

"You can store receivers in ets table and implement any type of selection algorithm you want or have some process which selects workers."

Your process that selects workers has no mechanism for telling which are already busy.

It is easy to implement a pool in Erlang where you may accidentally select a busy worker when there's a free one available. Unfortunately, due to the nature of the network and the way computations work at scale, that's actually worse than it sounds; if one of the pool members gets tied up, legitimately or otherwise, in a long request, it will keep getting requests that it ignores until done, unnecessarily upping the latency of those other requests, possibly past the tolerance of the rest of the system.

"You can implement receiver which waits for messages and exits when all are received or after timeout, it's trivial in erlang but I haven't needed it yet."

That's the opposite of the direction I was talking about. You can't turn that around trivially. You can fling N messages out to N listeners, you can fling a message out to what always boils down to a random selection of N listeners (any attempt to be more clever requires coordination which requires creating a one-process bottleneck), but there is no way to say "Here's a message, let the first one of these N processes that gets to it take it".

You wouldn't have so many pool implementations if they weren't trying to get around this problem. It would actually be relatively easy to solve in the runtime but you can't bodge it in at the Erlang level; you simply lack the necessary primitives.
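For contrast, the Go behavior being referred to here ("let the first one of these N processes that gets to it take it") falls out of a single shared channel: every idle worker blocks on the same receive, and exactly one of them gets each message. A hedged sketch:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	var wg sync.WaitGroup

	// Three workers all receive from the same channel; whichever is
	// free takes the next job, with no coordinator process involved.
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := range jobs {
				fmt.Printf("worker %d took job %d\n", id, j)
			}
		}(w)
	}

	for j := 0; j < 5; j++ {
		jobs <- j
	}
	close(jobs) // workers exit when the channel is drained
	wg.Wait()
}
```

Which worker takes which job is nondeterministic; the guarantee is only that each job is taken exactly once, by a worker that was ready to receive.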

Then it's even easier: the pool selector just hands out free workers and deletes them from the queue. When a worker is free, it just sends a message "I'm free" and gets added back to the "free" pool. Yes, it will be "one master process is a choke point", but that's only a problem when your tasks are so short that sending messages is slower than doing the work. But then sending messages is probably the wrong way to do those tasks. There are so many pool implementations because there are many possible solutions, depending on what exact problem you have.

"Yes, it will be "one master process is a choke point" but it's only a problem when your tasks are so short that sending messages is slower than doing the work."

You're simply reiterating my point now, while still sounding like you think you're disagreeing. Yes, if you drop some of the requirements, the problem gets a lot easier. Unfortunately these are not such bizarre requirements, and Erlang tends to be positioned in exactly the spaces where they are most likely to come up.

"But then probably sending messages is the wrong way to do those tasks."

That translates to "Erlang is the wrong solution if that's your problem". Since my entire point all along here has been that Erlang is not the magic silver bullet, that's not a big problem for me.

Message sending has backpressure built in: as a mailbox's size increases, it gets more and more expensive (in reductions, the currency Erlang uses for scheduling processes) for a process to send a message to it.

I'm not saying Erlang isn't great, but if you need to pass a large data structure around between Erlang processes, then copying messages starts to cost a lot and you need to share memory. You can do it in Erlang, but I'd hardly call it great, and you're avoiding the sync primitive that Erlang offers.

What if you want multiple readers at once, and a writer thrown in once in a while?

Edit:

Okay, my point was that the sync primitives of most languages alone can't save you and you're using RWLock in your example, so clearly ownership by itself doesn't solve everything, right? That's the point I'm trying to make.

Edit2:

Hmm, I'll have to check that out. I don't know that I would call Rust's ownership model super easy to reason about, but it is nice that the compiler prevents you from doing so much stupid $#^&.

> Okay, my point was that the sync primitives of most languages alone can't save you and you're using RWLock in your example, so clearly ownership by itself doesn't solve everything, right?

The thing is that Rust ensures that you take the locks properly. It's a compile-time error to forget to take the lock or to forget to release the lock†. You can't access the guarded data without doing that.

† For lock release, it's technically possible to hold onto a lock forever by intentionally creating cycles and leaking, but you really have to go out of your way to do so and it never happens in practice.

By the way, on a related note, data races themselves are easier to reproduce than the visible negative consequences of those races on the execution of that program. That's the basis of tools like the "Helgrind" tool in Valgrind. That is to say, we can determine that some data is being accessed without a consistently held lock even when that access is working fine by dumb luck. We don't need an accident to prove that racing was going on, in other words. :)

> I think I've just come to accept that sychronization is the pain point in any language.

No, it's not. Everything is easier with event loops, because everything is always synchronized. And since it is, there is no need for concurrent hash tables, locks, channels, you name it. There are also no more shutdown and cancellation problems; you get them for free, and more easily than anything. The only thing left is a __consistent__ API with callbacks. But as long as you go with higher-order functions you are not going to have any problems.

What if you need to do a compute-intensive task on a large data structure? You know you might need to take advantage of more than one core, and sharing memory between the threads will be difficult. Assuming you're talking about nodeJS: it serializes and deserializes objects in and out of C++ land in order to do compute-intensive tasks. Hardly a catch-all!

Are event loops good at some things? Of course! Are they good at everything? Are you high?

Either your event handlers are going to be called in a nondeterministic order, or they won't.

If they are going to be called in a nondeterministic order, you still have access control issues and can get yourself into all sorts of concurrency-style problems.

If they aren't going to be called in a nondeterministic order, perhaps because you just have a single cascade of events (open socket, write this, get that, close socket), then in a language like Go you just write the "synchronous"-looking code, and you don't have to write the code as if it's evented. You have only marginally more sharing problems than the event loop.

Raw usage of event loops is a false path. They solve very few problems and introduce far more.

> Either your event handlers are going to be called in a nondeterministic order, or they won't.

The order is not going to be completely deterministic, but your whole program operates on explicitly deterministic units of computation that never implicitly execute in parallel (event handlers). This eliminates all of those issues with concurrent memory access.

Writing "synchronous" looking code cannot be a substitute, since it makes these units of computation implicit. After which it's no longer possible to distinguish which function call is going to yield, therefore dealing with concurrent memory access is going to be needed, just like in any multithreaded program.

So, no, event loops are superior to multithreaded model in almost every way.

While polling for I/O may be common, the next most common problem in computing is solving computationally complex tasks. Why is Intel making all these cores? I guess no one actually needs them; they just think they do.

The article presents very similar arguments to those that I read in a book from 1982 or so. It discussed channels in Ada and pointed out that, without super smart compilers that would turn channels into mutex operations, the code using channels would be slower and more complex due to the need to create extra threads.

Based on that, I can predict that in 2050 I will also read an article discussing channels in yet another language and advocating using mutexes instead...

I am not a Go veteran, but I can see where this article is not helpful. Yes, channels are not a solve-everything. That is why the Go standard library also contains mutexes etc. The game-serving example could have been fixed by adding a channel to signal that the game is finished. The game runner function should listen on the "scores" and the "done" channel with a select. Or not use a channel at all. Channels are great when you just want a completely safe method of communicating between goroutines, as long as the communication reasonably falls into the "streaming" behavior of the channel model.
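A hedged sketch of that fix, with names loosely modeled on the article's game example (the `game` struct and fields here are hypothetical):

```go
package main

import "fmt"

type game struct {
	scores chan int
	done   chan struct{}
	total  int
}

// run drains scores until done is closed, then returns, avoiding
// the goroutine leak the article describes.
func (g *game) run() {
	for {
		select {
		case s := <-g.scores:
			g.total += s
		case <-g.done:
			return
		}
	}
}

func main() {
	g := &game{scores: make(chan int), done: make(chan struct{})}
	finished := make(chan struct{})
	go func() { g.run(); close(finished) }()

	g.scores <- 3
	g.scores <- 4
	close(g.done) // signal the game is finished
	<-finished    // wait for run() to exit before reading g.total
	fmt.Println(g.total) // 7
}
```

Because sends on the unbuffered `scores` channel complete only when `run` receives them, both scores are counted before `done` is closed, so the final total is deterministic.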

I'm so tired of reading negative comments about entirely subjective (others might appreciate the gifs) and totally skippable elements of an otherwise good post.

But saying "I hate articles riddled with gifs" is far from Marshall McLuhan and Edward Tufte.

Especially since it's not some shallow Buzzfeed post, but a detailed technical explanation of a programming-related issue that the author took time and effort to write -- which makes complaining about its presentation petty.

The author obviously wanted to lighten it up and add some fun elements. And he provided his opinion and expertise for free. These kind of comments can mainly serve to discourage him from writing more, not get him to "improve" his communication.

The gifs were actually causing Firefox to periodically freeze for me. For some reason it worked in reader mode, even though the gifs were still shown. This makes no sense to me, but in the end whatever was going on with the gifs initially caused the article to not only be unreadable but to negatively affect my entire browser. As such, I think it's reasonable to point this out in this case.

In non-critical things (not important to execution speed), is it still acceptable to use Go channels? I'm always wary of using a mutex, because then I have to spend a much larger amount of time checking to see if it will deadlock.

It's not actually a leak. It's a program explicitly saying "run this goroutine and don't care about it anymore". If the program logic wants this, it's ok. If the author wants it to finish on some condition but didn't write the condition code (like in this article), it's a leak, but it's purely the author's mistake.

Synchronizing access to a memory address isn't really the use-case for channels. I think that's fairly well understood by the Go programmers I work with. This example demonstrates why, but it prefaces the discussion by implying this is the standard practice, which I think is misleading.

I perhaps communicated poorly, but the point of that section was to try and explain that the CSP model (only using channels) was untenable in Go (even though it doesn't necessarily have to be in general), and that you'd almost certainly end up not just using channels in a real program, which it seems you agree with.

> Can someone explain the initial goroutine leak that is being addressed?

The "for score := range g.scores {" loop runs forever, since nothing ever closes the g.scores channel. I.e., the "range" only terminates when the channel is explicitly closed. Even if there are no current senders on the channel, and even if nobody else holds a reference to the channel (and thus nobody else could potentially create a new sender and start sending on that channel), Go doesn't realize it (garbage collection doesn't help here). The "range" waits forever.

Thus, all goroutines that run this code (via calls to NewGame(), via "go g.run()") will run forever, and leak, as long as something else in the program is running. When the rest of the program is done, Go will correctly detect that all remaining goroutines are blocked, treat it as a deadlock, and terminate with "fatal error: all goroutines are asleep - deadlock!".

You start a game, and that starts a goroutine that goes round in a loop getting scores from a channel. You have players which also have references to the channel and who put scores onto it.

When all the players have left the only thing that has access to the channel is the game's goroutine. It's not consuming CPU itself because it's simply waiting for something to be put on its channel, but it does still have its stack and other resources, and it now has no way to exit.

You can get this sort of resource leak in lots of ways in concurrent systems, and they all essentially boil down to the same thing: a thread or goroutine, or whatever, is waiting on a resource that nothing else has a reference to anymore, and there is no other way to end it.

I enjoyed the article and nodded along as I read it. But after, I felt like it was overstating its case a little. It puts up a toy implementation that kinda works, and then explains that to make it act correctly in the real world you have to add uglier code. I can't really see blaming the language constructs for that... show me a language where that doesn't happen!

I do appreciate that the article tries to deflate some of the hype about channels that you see when first investigating Go (I know I bought into it at first). After a little experience, I settled into a pattern of using channels for large-scale and orderly pipelines, and more typical mutexes and conditions for everything else. They have strengths and weaknesses, like all tools.
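The "orderly pipeline" pattern mentioned above is where channels shine, because each stage unambiguously owns (and closes) its output channel. A hedged sketch of the classic shape:

```go
package main

import "fmt"

// gen emits the ints and closes its output when done. The sender owns
// the close, which is exactly the clear ownership hierarchy that makes
// pipelines a good fit for channels.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads from one stage and feeds the next, again closing its
// own output channel when the input is drained.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	for v := range square(gen(1, 2, 3)) {
		fmt.Println(v)
	}
	// prints 1, 4, 9
}
```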

There is probably a large number of developers who think "OMG my Go code doesn't have any channels and goroutines. Am I doing this right?" If you try to force a solution that isn't quite right for the given problem, well, have fun. The case presented by the author I would naturally program with mutexes, as I think using channels/goroutines is overkill for this task.

Eh, I don't mind click bait titles as long as the article delivers, and the title isn't too egregious in its manipulation. In this case, I think it's pretty well understood by most that the title is poking fun, since taking it truthfully is fairly ridiculous.