This tangent is interesting given that I found an empirical study last night about whether it’s better to teach with text or video. I haven’t done more than look at the abstract, though, so I’ll submit it this week once I see what’s in the content. I always prefer text, too, since I can read more good ideas faster, with the bad ones wasting less time.

Have you looked at SD Erlang? http://www.dcs.gla.ac.uk/research/sd-erlang/
It avoids having a large, fully connected network by adding the concept of an “s_group”, essentially a cluster where only one node may message other s_groups.

if your goal is simply to accomplish some task, rather than spending too much time fretting about finding the perfect tools to accomplish the task with.

To a maximizer, it’s not about finding the perfect tool to perform a task. It’s about finding the Right Way™ to do it. Rob Pike truly nails it when he says:

C++ programmers don’t come to Go because they have fought hard to gain exquisite control of their programming domain, and don’t want to surrender any of it. To them, software isn’t just about getting the job done, it’s about doing it a certain way.

STL, at least for me, represents the only way programming is possible. It is, indeed, quite different from C++ programming as it was presented and still is presented in most textbooks. But, you see, I was not trying to program in C++, I was trying to find the right way to deal with software.

Back to your post:

In some instances, actually reducing people’s choices makes them happier. Python’s “one good way to do it” makes everyone happy, because both maximizers and satisficers know that they’re doing things right, and don’t have room to worry.

To a maximizer, this is only as good as the extent to which they agree with the “one good way to do it” that Python suggests. Otherwise, they will have a lot to worry about.

I think the survey as constructed overlooks the demographic of people like me who knew several languages, used ruby, and then went back to mostly using other languages that we already knew before learning ruby.

I was a haskeller who learned ruby mostly because of metasploit, and realized it was a fine language for quick scripts, and I still pick it up now and again, but I’ve gone back to mostly using Haskell because I liked it much better.

I tried to balance many things while aiming to still keep it short & sweet. Before I “set the survey free” I was adding a sentence about also checking the boxes if you did something before and then went back to it/renewed interest in it. I decided it might clutter it too much and lots of people don’t really read the text.

So yeah, definitely - maybe/hopefully I’ll find another, better way next time.

This is me also, sort of. I never started a real project in Ruby, but have contributed to Ruby projects. The reason I never did much else with it is that it isn’t a viable option for the things I enjoy doing.

Actually…. there has been some progress. On Android, it lets me flip between English and Italian depending on what I’ve been typing, and it just figures it out, rather than having to reach for a setting. Like someone else mentions, the voice directions are atrocious if you happen to be in the wrong spot: my wife and I always got a kick out of hearing how it’d mangle names in Italy when reading them in English.

This is a good and important move for him. People are more important than things. Treating people with dignity when they make mistakes is how we encourage open source development. I think, too, that behind most angry outbursts is a gap in understanding that isn’t being adequately communicated.

Email is a pretty bad communication channel in general, even if you don’t factor in stuff like language and cultural differences. Who knows what the effects of communicating almost all the time via email does to the human psyche…

Well, treating people with dignity should be possible even through email. The important part is that he apologized, explained why he was upset, and said what he thought the community could do better to prevent this kind of issue in the future. It’s a billion times more effective than exploding. If you are about to write an explosive email to a colleague, maybe hit “save as draft” and come back a few hours later.

My rule: read it out loud as if I were saying it to someone’s face. Does it work? If it doesn’t, then it’s probably not ok. This still allows for strong dissent, but keeps it polite (for most normal people at least).

If anything, we’d be better off if we found that Intel’s ME was total garbage. It lets an alternative supplier differentiate on more secure software to get some sales. Then, Intel will either try to get people to ignore them with their other advantages, improve the security of their software, or buy the competitor to get their solution. As it stands, since the license allows it, Intel just freeloaded off a bunch of work that taxpayers in Europe paid for, plus some free labor by Tanenbaum et al, to solve their problem. The ME stack is still garbage per recent threads.

Alternatively, they could’ve just paid an RTOS vendor for a stack. The going rate for those targeting robustness with networking and filesystems was $50,000 OEM last I checked. After they acquired Wind River, they had access to a highly reliable OS that’s been used in all kinds of things, plus a separation kernel (VxWorks MILS) with carefully crafted networking and NSA pentesting. So, they had both paid and free alternatives that are better than Minix 3, if they didn’t prefer freeloading off others’ work to save fifty grand or so on a project that nets them billions. I’m starting to lean back toward GPLing or AGPLing everything, with dual licensing to reduce this: they can pay to remove the copyleft.

Not really. I should’ve said freeloading like parasites. I wonder, though, what motivates people to freely work for companies under a license that ensures mainly the companies benefit, versus one where they contribute something back. I originally liked the BSD licenses as a way to increase the amount of high-quality code companies might be using, to make stuff better in general. I’m not so sure we should do that now, seeing how (a) it creates bad incentives for the companies to constantly freeload versus GPL/AGPL projects and (b) they keep modifying that stuff into insecure or seemingly-malicious software, like Intel did.

The folks aren’t doing anything great by giving them the code. They’re just helping monopolists and oligopolists further entrench a status quo that damages users, developers, and hobbyists, while minimizing their operational costs for the benefit of owners or shareholders. Those companies also use their fortunes to pay lobbyists to reduce our rights in areas such as copyright and patent law. That phrasing depicts what actually goes on, versus the public good people sold me on long ago with BSD/MIT licenses. I wonder how many BSD/MIT contributors who wanted corporate uptake would stick with it if they saw that as the ultimate goal of their contributions, and were told the companies often change the code to defeat its flexibility, reliability, or security benefits.

I’m sure plenty would stay in the game, but I am curious how many would switch licenses, and which license they’d prefer switching to for balancing widespread uptake with maximizing contributions.

People use BSD-alikes because their goal isn’t to coerce people into opening their sources; their goal is to make using their software as easy as possible. They’re not working for rewards from future would-be customers; they’re working because they feel some software that doesn’t exist should.

Sure, and a subset of those people are interested in keeping their work from people who don’t “deserve it”, but not everybody is - and those who aren’t, usually choose a non-viral license because they want more people using their stuff.

It’s definitely garbage. I’m setting up something broader than just Intel where I want them to show what their proprietary stuff is worth, users to find out, and a better alternative to potentially show up. Those can be vetted proprietary (eg shared-source) or FOSS.

I could be really wrong, but I think AMD is missing a golden opportunity to differentiate on the security or trustworthiness of CPUs, like Blackberry and then Apple tried to do in smartphones. Two product lines, one without a management engine and one with an enterprise-controllable version, might push those losses back a little bit, especially from foreign sales. They could let third parties in different jurisdictions inspect the management code or its loader, since high-performance, legacy-compatible x86 is a patent minefield for competitors anyway. My hypothetical alternatives would have to make some kind of sacrifice in performance, cost, or both. AMD could charge right in.

I could be really wrong but I think AMD is missing a golden opportunity to differentiate on security or trustworthiness of CPU

I doubt AMD has a choice in the matter. It really doesn’t make sense for Intel to have it in all their CPUs; in the consumer CPUs where no user will ever use the management engine, it’s just a bunch of extra hardware on the die, wasting space and increasing complexity and cost. The only reason I can think of would be that someone forced their hand, and I can imagine the NSA wouldn’t hate having a backdoor into every single Intel (or AMD) CPU in the world with ring -3 access.

There are several possible benefits to having that enterprise technology in their chips:

The functionality for providing security enhancements is the same in each. Enterprise and repair shops also wanted management benefits.

The DRM capabilities the entertainment industry wanted and might have paid for.

The backdoors the NSA might have demanded or paid for.

The common technique for saving on mask costs (millions) by merging I.P. from several use cases into fewer mask layers.

Ok. The original release on Intel’s side was vPro, which had all kinds of benefits for enterprises, especially security. The Trusted Computing Group, of which Intel was part, also wanted to use that stuff for DRM for movies and MP3s. So they probably had financial incentives, which might likewise be used to make them go more private again. The NSA is an unknown here: they might have promised Intel something in money or defense contracts. I know the MEs weren’t mandatory, because not all U.S. chip vendors were building management engines into their CPUs. They could possibly put their foot down, saying they’d take money to 0-day the firmware instead; that would let us put in better firmware while the NSA still hits most targets.

The last thing on my list is an industry practice for getting development costs down. The best example was hard disks that showed different amounts of storage but had the same platters with the same amount of space. The platters and the components for writing them had a fixed cost, so the vendors used firmware deception to tier the pricing. Another example, from a friend in hardware, was his discovering a cellular radio in an ASIC for an embedded peripheral that wasn’t supposed to connect to anything. He said it wasn’t malicious: the company just reused a mobile SoC they sell for a different purpose, with different packaging, to squeeze more ROI out of an existing chip. Aside from these oddities, the main form of reuse is just carrying pre-proven blocks of hardware in a certain process node over to new projects. Once they wired the first CPU instance to an ME, it was possibly cheaper to just reuse that on each iteration of that instance, especially given MEs were originally small (ARC cores).

So, there’s the overall analysis of what parties and concerns are involved. The amounts they’re currently losing are much bigger than anything Hollywood or NSA paid them. Highest payout I saw for NSA was around $100 million per telecom for access to their national networks. That was something they could use constantly whereas this they’d have to use sparingly. Couldn’t be much more. The trick is, like with Raptor Workstation, how many people would actually pay for a computer without the backdoor, how much extra, and what total revenues to project for AMD? I’m less confident in demand side than I am in supply side.

Then they would have just used a different OS. macOS has slowly been ripping all the GPLv3 code out of their OS. That’s why they ship an ancient, GPLv2-licensed version of bash and manually backport all the security fixes.

He spends a third of the letter talking about the fact that someone benefitted from his hard work and he didn’t get any acknowledgement of it. Then he goes and says something like: “I don’t mind, of course, and was not expecting any kind of payment since that is not required.” The whole thing feels and reads regretful to me. I don’t know AST, so I don’t really know his personality or anything, but if I spent a third of a letter talking like that, I know it’d be because I felt I missed a big opportunity and was trying to convince myself that it was fine.

If there’s anything AST might regret, it’s that MINIX wasn’t released under a permissive license earlier, and that Linux and the *BSDs got themselves firmly established.

Him regretting not getting anything back out of it after fighting with the publisher to get the code released under a permissive license? Seriously? ;^)

The way I read the letter is him setting the scene before mentioning that letting him know would have been a polite thing to have done - mentioning that without said background information would have looked a bit weird.

Haskell’s QuickCheck does that. The Erlang variants use combinators that you can write and compose, and let you guide the distribution of inputs you want, rather than just taking ‘types’ in there.

You can then, for example, decide that rather than sending any string, you’re going to generate strings that contain 20% emoji, 5% ASCII, 10% sequences that include combining characters, with the rest drawn from linebreaks, escape sequences, and quotes.
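In Haskell QuickCheck terms (the Erlang tools have analogous weighted combinators), that kind of distribution can be sketched with frequency. The generator names, weights, and character sets here are illustrative, not from any real test suite:

```haskell
import Test.QuickCheck

-- Hypothetical weighted character generator: each pair is (weight, generator),
-- so roughly 20% emoji, 5% plain ASCII, 10% combining characters, and the
-- remaining 65% linebreaks, escapes, and quotes.
nastyChar :: Gen Char
nastyChar = frequency
  [ (20, elements "\128512\129315\128557")  -- emoji
  , (5,  choose ('a', 'z'))                 -- plain ASCII
  , (10, elements "\769\776\803")           -- combining characters
  , (65, elements "\n\r\t\\\"'")            -- linebreaks, escapes, quotes
  ]

-- Strings built from the weighted character distribution
nastyString :: Gen String
nastyString = listOf nastyChar

main :: IO ()
main = sample nastyString  -- print a few generated examples
```

You’d then feed nastyString into properties with forAll, instead of relying on the default string generator.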

This turns out to give you an approach that, while definitely reminiscent of fuzzing, sits a bit closer to regular tests in how you approach system design (you can even TDD with properties), whereas I’m more familiar with traditional fuzzers being used as a means of validation.

QuickCheck can generate input based on the types: it has a typeclass called Arbitrary, which provides an arbitrary function that we can think of as a “generic” random generator for those types which implement it (this typeclass is also where shrink is defined).

We can also write completely standalone generators when we want something more specific, like evenGen :: Gen Int which only generates even Ints, and we can use these in properties using the forAll function, e.g. forAll evenGen myProperty.
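A minimal sketch of the two styles, assuming QuickCheck is installed; evenGen is as described above, and the properties themselves are made up for illustration:

```haskell
import Test.QuickCheck

-- Style 1: the input comes from Int's Arbitrary instance
prop_viaArbitrary :: Int -> Bool
prop_viaArbitrary n = n + 0 == n

-- Style 2: a standalone generator that only produces even Ints
evenGen :: Gen Int
evenGen = (* 2) <$> arbitrary

-- forAll plugs the specific generator into the property
prop_viaForAll :: Property
prop_viaForAll = forAll evenGen $ \n -> even (n * 3)

main :: IO ()
main = do
  quickCheck prop_viaArbitrary
  quickCheck prop_viaForAll
```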

There are two other things to consider as well:

Properties can have preconditions, which are implemented using rejection sampling. For example, myProperty n = isEven n ==> foo will only evaluate foo if isEven n is True. If we generate an n which isn’t even, the test is skipped, and if too many tests get skipped, QuickCheck tells us. We could achieve a similar thing with boolean logic, e.g. myProperty n = not (isEven n) || foo, but in that case we’re replacing skips with passes, which might give us false confidence in the results (e.g. we might get 100% of tests passing without ever generating an input that satisfies the precondition).
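A sketch of that difference, with hypothetical properties (the “foo” here is just another evenness check standing in for the real assertion):

```haskell
import Test.QuickCheck

isEven :: Int -> Bool
isEven n = n `mod` 2 == 0

-- Precondition with (==>): odd inputs are *skipped* via rejection sampling;
-- QuickCheck gives up and warns if it has to reject too many inputs.
prop_precondition :: Int -> Property
prop_precondition n = isEven n ==> isEven (n + 2)

-- Boolean-logic version: odd inputs become vacuous *passes* instead of
-- skips, so "100 tests passed" may say little about the interesting case.
prop_disjunction :: Int -> Bool
prop_disjunction n = not (isEven n) || isEven (n + 2)

main :: IO ()
main = do
  quickCheck prop_precondition
  quickCheck prop_disjunction
```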

We can use newtype to give a different name and Arbitrary instance to existing types. QuickCheck comes with a NonEmpty alias for lists, NonZero aliases for numbers, etc. The important difference between using a newtype and using a normal function (like evenGen) is that the newtype can also preserve the invariant when shrinking: e.g. shrinking an even number shouldn’t give us odd numbers.
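An illustrative newtype in the spirit of QuickCheck’s NonZero/NonEmpty wrappers; the EvenInt name and its instance are made up for this sketch:

```haskell
import Test.QuickCheck

-- A wrapper whose Arbitrary instance only produces even Ints
newtype EvenInt = EvenInt Int
  deriving Show

instance Arbitrary EvenInt where
  arbitrary = EvenInt . (* 2) <$> arbitrary
  -- Shrinking preserves the invariant: odd candidates are filtered out,
  -- which a bare generator like evenGen can't express.
  shrink (EvenInt n) = [EvenInt n' | n' <- shrink n, even n']

prop_stillEven :: EvenInt -> Bool
prop_stillEven (EvenInt n) = even n

main :: IO ()
main = quickCheck prop_stillEven
```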

The Personal MBA. It’s a very accessible guide to the things you should know about to run a business, at a fairly superficial level, but deliberately so. It’s more a curriculum from which a reader can choose interesting subjects to research in more depth than an instruction manual. I find it very interesting, and easy to pick a few topics to skim over breakfast.

Designing Software Architectures. I’m struggling with motivation on this one because it’s very deep and very detailed; being an SEI book, it goes in for defining everything you could need to know rather than giving just the key points.

Invoking Darkness, the third of the Babylon 5 techno-mage books. I’ve found all of these very fast-paced and have got through the trilogy in a couple of weeks.

I also want to find books on the following, does anyone have any recommendations?

a history of the Unix wars (the ‘workstation’ period involving Sun, HP, Apollo, DEC, IBM, NeXT and SGI primarily, but really everything starting from AT&T up to Linux and OS X would be interesting)

a business case study on Apple’s turnaround 1997-2001. I’ve read plenty of 1990s case studies explaining why they’ll fail, and 2010s interpretations of why they’re dominant, and Gil Amelio’s “On the Firing Line” which explains his view of how he stemmed the bleeding, but would like to fill in the gaps: particularly the changes from Dec 1997 to the iPod.

a technical book on Mach (it doesn’t need to still be in print; I’ll try to track it down): I’ve read the source code for xnu, GNU Mach and mkLinux, Tevanian’s papers, and the Mac OS X Internals book, but could still do with more

I liked that one. Many business books have an interesting concept that could be described pretty well in about 10 pages, but you can’t sell 10-page books. That book has a lot of those concepts distilled.

You may not “need” to treat your servers as cattle today, but you certainly need to be able to bring one up next week when one of your three pets crashes. How are you going to accomplish that? If your answer is “I don’t know,” then you need to figure things out. If you’re going to figure things out, you may as well codify it so that it can happen at the push of a button. If you’re going to make it happen at the push of a button, you may as well use that to deploy every time. If you don’t, your automation will rot and you’ll be back at step one.

It doesn’t have to be a gold-standard blue/green deployment with full automation of server management, networking, and so on, but you should at minimum have Ansible playbooks or an equivalent that will set up your software on a fresh machine. Doing this is not much harder than merely taking notes of the changes you need to make to a machine to get something running.

Yeah. I mean, OK cool, I have no love for Node either, but I’m also of the opinion that “safe” and “dynamically typed” is an oxymoron, so I guess that would exclude Elixir for me too.

And isn’t the point of Node not that it’s rad on the backend, but that it’s JavaScript for those who live in frontend land? Whenever I look at something like GopherJS just so I don’t have to write JavaScript, that’s me doing the same thing in the other direction. I can’t cast the first stone here.

Part of the point of Node is that all the libraries are aware of the cooperative threading model, and the default APIs are non-blocking (with callbacks). Thus you can get the performance advantages of event-based asynchronous I/O without thinking about it too hard. (Which you can then throw away without thinking too hard either, by doing CPU-intensive tasks on your single event loop thread. No free lunch here.)

The point is that if you want to show an ecosystem proper, do it on its own right. Elixir is great, BEAM is too, but most of the parts they are good at are shown much better in isolation, without slinging mud at Node.

The post is neither fish nor fowl: it’s not a good criticism of Node nor a particularly good endorsement of Elixir, because it is based on the poor criticism of Node.

By all means, programming languages must be criticised, but hyperbole is rarely the way to do it. Not because it’s out of line, but because it doesn’t stick particularly well. It’s fun to read, but not lasting.

I use Go exclusively on the job and I’m happy. I would like a Supervisor model thingy but Go doesn’t really have one (I think I read a post about why it was hard but I forget) and Google does something similar with the entire process instead (Borg will just restart a dead task), so there’s no real push for it to change.

People have a tendency to submit code using tricks that would normally be avoided or frowned upon. For example, check out how many of the Haskell samples use unsafe modules to sidestep laziness and immutability.

Also, I’ve tried async Python 3 and it’s honestly somewhat clunky. The awesome thing about ES6 async/await is that it just uses promises, which makes it trivial to integrate with code that doesn’t use async/await syntax.

I think the main reason is that node.js is just a more highly ‘engageable’ keyword. So if you have a title that says “my opinion about Elixir” there would be many fewer clicks than if you have your title as “NODE.JS IS BAD also elixir”.

I have a Dell XPS 13 developer edition and I’ve been very happy with it. It ‘just works’, and is a nice machine. He’s right about the ‘nose cam’, but I don’t do a lot of video calling. Just back away from the computer a bit more, or buy an external one if you do.