So when every yahoo on your segment fires up BitTorrent, your VoIP stops working? No thank you.

Of course not. When every yahoo on my segment fires up BitTorrent, everyone's available bandwidth gets limited to the total available bandwidth divided by the number of people. As long as I am using less bandwidth than that number, my traffic outprioritizes any and all data from users that exceed it. No content-based prioritization required.

As a service, you may prioritize MY VOIP traffic over MY torrents. But under no circumstances can there be tradeoffs between MY torrents and YOUR VOIP traffic -- that tradeoff must be based on your and my traffic in general, without caring about the type.

Doing something like prioritizing VOIP packets over FTP, for instance, is perfectly acceptable,

Is it? I'm not sure I agree.

If my connection is saturated while I am using both VOIP and FTP, it is entirely acceptable to me that my ISP prioritizes my VOIP traffic over my FTP traffic.

If my ISP's total uplink connection is saturated (whether or not this should happen is another discussion), it is entirely acceptable to me that the ISP throttles the users that are currently using the highest amounts of bandwidth. Ideally, it throttles every user using more than X bandwidth down to X, where X is the highest value it can sustain, and does nothing to the users using less than X. This is done without looking at the type of traffic of the different users, only the total bandwidth use. Of course, within the scope of a given user's such-throttled bandwidth, that user's VOIP traffic may be prioritized over that user's FTP traffic, per the above clause.

But it is not acceptable to me if your VOIP traffic is prioritized over my FTP traffic independent of our total bandwidth usage. If I am trying to use 100 Mbit/s of FTP and you are trying to use 100 Mbit/s of VOIP and the ISP can only sustain 120 Mbit/s total, then it can throttle us both down to 60 Mbit/s, but it must not throttle me down to 20 Mbit/s instead because VOIP outprioritizes FTP. And when I am trying to use 50 Mbit/s with my FTP and you are trying to use 100 Mbit/s with your VOIP, then you go down to 70 Mbit/s, while my bandwidth stays intact.
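The throttling rule described here is essentially max-min fairness ("water-filling"). A minimal sketch in Python of how an ISP could compute it -- the function name and list representation are mine for illustration, not any real traffic shaper's API:

```python
def fair_allocation(demands, capacity):
    """Max-min fair allocation: find the largest cap X such that granting
    min(demand, X) to every user fits within capacity, without ever
    inspecting the type of anyone's traffic."""
    if sum(demands) <= capacity:
        return list(demands)  # no congestion, nobody gets throttled
    remaining = capacity
    alloc = [0.0] * len(demands)
    left = len(demands)
    # Satisfy the smallest demands first; whatever they leave unused
    # raises the equal share available to the bigger users.
    for i in sorted(range(len(demands)), key=lambda i: demands[i]):
        share = remaining / left          # equal split of what is left
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
        left -= 1
    return alloc
```

With the numbers from the example: `fair_allocation([100, 100], 120)` gives both users 60, while `fair_allocation([50, 100], 120)` leaves the 50 Mbit/s user intact and throttles the other to 70.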

If your move would repeat the previous board position, you must play somewhere else.

Then I highly doubt they calculated all legal positions in the game. They probably calculated all legal positions of the board, but that's a different thing.

The number of possible positions of the board is upper-bounded by 3^(19^2), since there are 19^2 points that can each hold a black stone, a white stone, or no stone at all. The exact number of legal board positions is probably what this research computed.

But a position of the game includes not just the current board position, but also the set of all previously seen board positions; after all, the same board position can admit different future games, and be won by different players, depending on which positions came before. Thus the number of possible positions of the game is vastly larger than the number of possible positions of the board, upper-bounded by roughly 2^(3^(19^2)).
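To get a feel for these magnitudes, the bounds can be sized up with Python's arbitrary-precision integers (a quick sketch; these are the loose upper bounds from the argument above, not exact legal-position counts):

```python
import math

points = 19 ** 2                  # 361 intersections on the board
board_positions = 3 ** points     # each point: black, white, or empty

# 3^361 has 173 decimal digits -- big, but trivially computable.
digits_board = len(str(board_positions))

# 2^(3^361) is far too large to ever materialize; even its *digit count*,
# board_positions * log10(2), is itself a number with about 172 digits.
digits_game = board_positions * math.log10(2)
print(digits_board, digits_game)
```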

I don't think anyone will dispute the claim that (for instance) platform independence is an important sysvinit feature that systemd has sacrificed, and that (say) being a single point of failure for dozens of mostly-independent subsystems is a significant architectural downside of systemd. I make no claims as to the relative importance most people attach to those downsides versus the real upsides of systemd, but downsides they be.

Most people will agree that systemd adds a number of important features to GNU/Linux that the old alternatives didn't offer.

This is very true. Most people will also agree that it accomplishes this at the cost of significant downsides inherent in systemd's design, and of sacrificing important features that the old alternatives do offer. The controversy is about whether the upsides are worth the downsides.

Well, division by zero should never happen, but you want it to be handled gracefully in case it does.

You are aware that segfaults are there specifically as a graceful handling of error conditions, right? We could just have every invalid memory access return 17 if we preferred. You seem to be underestimating just how non-graceful not aborting would be. The alternative to a segfault is a program that could go and do absolutely anything, unpredictably.

Nobody wants the autopilot in charge of a barge train to segfault.

I would much prefer that over the autopilot deciding that its current speed is [broken computation... division by zero... "zero"] and the desired speed is 50 km/h, so hit the accelerator until the division-by-zero situation resolves itself.
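The two behaviors can be sketched in a few lines (all names and numbers here are hypothetical, not from any real autopilot): failing fast on the bad division, versus "gracefully" masking it and feeding the control loop garbage.

```python
def throttle_fail_fast(distance_m, interval_s, target_kmh=50.0, gain=0.5):
    """If interval_s is zero (say, a broken clock), the division raises
    ZeroDivisionError and the whole computation aborts -- the moral
    equivalent of a segfault: loud, predictable, and recoverable."""
    current_kmh = 3.6 * distance_m / interval_s
    return gain * (target_kmh - current_kmh)

def throttle_masked(distance_m, interval_s, target_kmh=50.0, gain=0.5):
    """The same controller with the error 'handled gracefully' by
    substituting zero: a broken clock now reads as a standstill, so the
    controller calls for acceleration no matter how fast the barge moves."""
    current_kmh = 3.6 * distance_m / interval_s if interval_s else 0.0
    return gain * (target_kmh - current_kmh)
```

With a dead clock, `throttle_fail_fast` raises immediately, while `throttle_masked` cheerfully reports a large positive throttle command.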

While C++ happens to be useful for cross platform mobile development, that's not because C++ itself is better at cross platform development.

Yes, it is. Well-written C++ code will run on any platform, whereas even the best Java code only runs on the Java platform. This makes C++ much more suitable for cross-platform development than Java.

Is this sophistry? I don't think so. Java is not a cross-platform system; Java *is* a platform. And I think that, whatever the initial intentions may have been, time has shown that languages that compile to any platform, while less convenient than languages that bring their own platform, are the more flexible and practical of the two designs for cross-platform development.

In my mind, this comes down to whether we want a better functioning OS or an OS that adheres to the mindset that I think attracted many of us to Linux in the first place.

In my mind, it comes down to streamlining the common use cases for a given system, while throwing under the bus everyone who wants to do something with their system that Lennart didn't think of or doesn't care to support.

What we really need is some kind of standardized identity management system -- like you know how you can sign onto various sites using either your Facebook or Google+ sign-on? Like that, but standardized. We need a true single-sign-on solution that is easy to manage, hard to screw up and lose your identity permanently, and usable everywhere.

Is there any particular reason why we shouldn't just use public key authentication as the standard authentication method absolutely everywhere, optionally delegated to some remote single-sign-on service of your choice which is not in any way visible to the service you're authenticating against? This seems like the obviously correct solution to me, but for some reason I never see it mentioned in threads about replacing passwords as an authentication scheme.
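The core of such a scheme is a simple challenge-response round trip. A toy sketch of the idea (textbook RSA with tiny illustrative primes -- deliberately insecure; a real deployment would use Ed25519 or RSA from a vetted crypto library):

```python
import hashlib
import secrets

# Toy textbook-RSA keypair: tiny and insecure, for illustration only.
P, Q = 61, 53
N = P * Q                 # public modulus (part of the public key)
E = 17                    # public exponent
D = 2753                  # private exponent: E*D = 1 (mod (P-1)*(Q-1))

def digest(data: bytes) -> int:
    """Hash the challenge into the RSA message space."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(challenge: bytes) -> int:
    """Client side: only the holder of the private exponent D can do this."""
    return pow(digest(challenge), D, N)

def verify(challenge: bytes, signature: int) -> bool:
    """Service side: needs only the public key (N, E) -- no shared secret,
    nothing worth stealing from the service's database."""
    return pow(signature, E, N) == digest(challenge)

# One login round trip: the service issues a fresh random challenge,
# the client signs it, the service checks the signature.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```

The service never learns the private key, and a database breach leaks nothing reusable -- exactly the property the passwords-everywhere status quo lacks.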

If an activity is safe for a hobbyist to perform, why is it suddenly dangerous and in need of regulation when a professional does it?

Because "commercial" is really code for "on a large scale", and "hobbyist" is code for "on a small scale". What's safe on a small scale need not be safe on a large scale.

Of course, "commercial" is only a poor approximation of "on a large scale", but it's measurable and hard to game and does a pretty good job as an approximation in practice, so that's what the law will say.